Basic literacy and numeracy are foundational skills for every child's learning. They provide the base on which other skills needed in primary school and beyond are built. Foundational literacy skills include reading and writing, while foundational numeracy skills include fluency in the four basic arithmetic operations.
Students who successfully acquire Foundational Literacy and Numeracy (FLN) skills can comprehend age-appropriate texts, communicate their ideas effectively, understand basic number concepts and begin to think critically and solve problems. Students who miss out on FLN skills fall further behind with every passing year.
FLN skills include letter recognition, letter-sound association, oral reading fluency, number recognition, fluent knowledge of addition and subtraction facts, and so on. Any teacher can impart these foundational skills effectively to every child, provided they are aware of the skills and their importance. Equally important, teachers should know where their students stand and the age-appropriate benchmark for each skill.
Many studies (NAS, the EI Study on Foundational Literacy and Numeracy, ASER 2020) have pointed out the low levels of learning in India. A major cause of this low learning is the gap in the attainment of foundational skills. We all know that every child reading fluently is critical, but how are teachers expected to teach or measure reading fluency? What benchmarks do students need to attain? What tools can teachers use to measure student levels in these skills? None of these exist in a systematic way.
While it is important to prioritise FLN skills at the policy level, it is equally important to systematically collect data on actual benchmarks and levels of achievement. Teachers should be able to measure and report what percentage of students are achieving foundational skills. The data should include diagnostic information, since the focus is on the action needed to remediate the gaps.
EI has created the SoL App to:
- assess foundational literacy and numeracy skills using a digital platform, thereby enabling reliable data collection at a large scale in an efficient manner
- provide continuous data and benchmarks on foundational learning over a period of time, thereby allowing researchers to understand relationships between different foundational skills and how they develop
- create a rich body of pedagogical content knowledge around the specific errors students make, which can provide teachers, educators, Edtech product developers, and curriculum developers pedagogical information needed to build effective learning solutions and resources
The app will also help to achieve:
- standardisation of test administration (for example through uniformity in providing instructions on different subtasks, which otherwise is difficult to achieve with multiple evaluators)
- reduced dependence on high quality trained manpower for reliable data collection
- collection of individual student responses (and time) on each question, including audio, video and click/tap inputs on different questions
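As a concrete illustration of the kind of per-question data collection described above, here is a minimal sketch of a response record. All field names and types are hypothetical, not the app's actual schema:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ResponseRecord:
    """One student response to one question, as an app like this might log it."""
    student_id: str
    question_id: str
    subtask: str               # e.g. "oral_reading_fluency", "addition_facts"
    response: str              # the exact answer given, not just right/wrong
    correct: bool
    time_taken_ms: int         # response latency, useful for fluency measures
    audio_path: Optional[str] = None                # recording for reading tasks
    tap_events: list = field(default_factory=list)  # raw tap/click inputs

# A sample record for a timed addition-facts question
r = ResponseRecord("S001", "Q17", "addition_facts",
                   response="12", correct=True, time_taken_ms=2300)
```

Keeping the raw response and timing (rather than only a right/wrong mark) is what later enables the error analyses described below.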
The SoL App is an Android app compatible with tablets and smartphones. It covers a range of literacy and numeracy skills as shown below and allows questions to be customised for testing a group of students.
The app supports multiple vernacular languages, including almost all Indian languages. Students are given adequate support in the form of instruction voice-overs, video tutorials and practice tests before the actual assessment, to ensure that the assessment data captured is reliable. The app also supports capturing information related to students' socio-economic background.
Figure 1 illustrates some screens from the app, which aim to capture specific foundational literacy and numeracy skills of students.
How is it different from what already exists?
While a few standardised assessments such as the Early Grade Reading Assessment (EGRA) and the Early Grade Mathematics Assessment (EGMA) are available in certain languages, their administration remains a manual-intensive process in which a trained evaluator conducts a 1-1 assessment with the student and records the student's responses. Digital tools like Tangerine and SurveyCTO's EGRA field plug-in are also designed primarily to help the evaluator capture student responses digitally, and work more as data entry software than as an assessment platform.
The main disadvantage of using a digital tool as a 'data entry' platform that simply marks the student's response right or wrong is that pedagogical nuances do not get captured, and the end result is only marginally better than a manual 1-1 assessment. Some examples of such pedagogically important nuances are:
- Capturing the exact response given by students, which may help in understanding underlying patterns around common student errors and common erroneous strategies.
Example of a common error: In multi-digit addition, adding from the leftmost column to the rightmost column. Students making this error solve the problem starting from the left column as shown in Figure 2.
Figure 2: Student error in multi-digit addition
Almost 20% of class 2 students (out of around 560 students tested in an EI study to measure foundational skills) from government schools in two Indian states were found to be making this error. This kind of specific error can be identified only when the exact digit-wise answer is recorded.
- Capturing not just the final answer but the exact process used to arrive at it, which may help in understanding the exact step at which the student started making the error.
Example: In the previous example of addition of 2-digit numbers, knowing the sequence of steps can help in understanding if the student is solving left to right or right to left.
- Capturing large scale audio files of students reading can be a powerful way to understand the kind of challenges they face in reading, like say decoding of text, blending of letters, understanding of punctuation etc.
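To make the value of digit-wise capture concrete, here is a small sketch of how recorded answers could be matched against one plausible model of the left-to-right error: the student writes each column's full sum moving left to right, without carrying. The function names and the specific error model are illustrative, not the app's actual logic.

```python
def left_to_right_prediction(a: int, b: int) -> str:
    """Predicted written answer if a student adds each column independently,
    left to right, writing the full column sum without carrying.
    e.g. 47 + 38 -> '7' (from 4+3) then '15' (from 7+8) -> '715'."""
    da, db = str(a), str(b)
    width = max(len(da), len(db))
    da, db = da.zfill(width), db.zfill(width)   # pad to equal width
    return "".join(str(int(x) + int(y)) for x, y in zip(da, db))

def flags_left_to_right_error(a: int, b: int, student_answer: str) -> bool:
    """True if the recorded answer matches the buggy strategy but not the
    correct sum -- only possible because the exact answer was captured."""
    predicted = left_to_right_prediction(a, b)
    return student_answer == predicted and student_answer != str(a + b)

# 47 + 38: correct answer is 85; the modelled error yields '715'
```

A detector like this can only work on exact recorded responses; a dataset that stores just right/wrong throws this signal away.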
It may take a set of highly skilled, carefully trained evaluators to capture such nuances in a standardised test setting. Our experience of conducting large scale assessments around foundational skills indicates that it is not easy to i) find such skilled evaluators and ii) achieve standardisation at scale.
This is an area where technology can play an important role. Here is an example of how we use such an approach in one of our modules on the multiplication process. As indicated in Figure 3, students attempt multiplication problems on a mobile/tablet app, and the feedback for teachers indicates not just their steps but also the errors they have made. The data is further analysed to identify and classify the errors students make. Figure 4 shows an example: seven common multiplication errors are identified and codified based on specific student telemetry data.
Figure 3: Multiplication module
Figure 4: Common multiplication errors
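The classification step can be sketched as matching a student's answer against predictions generated by known buggy strategies. The two strategies below are illustrative examples only, not the seven codified errors of Figure 4:

```python
def adds_instead(a, b):
    """Buggy strategy: student adds the two numbers instead of multiplying."""
    return a + b

def drops_carries(a, b):
    """Buggy strategy: multiply a multi-digit number by a single digit,
    writing only the units digit of each column product (carries dropped)."""
    return int("".join(str((int(d) * b) % 10) for d in str(a)))

# Hypothetical catalogue mapping error names to predictive strategies
BUGGY_STRATEGIES = {
    "adds_instead_of_multiplying": adds_instead,
    "drops_carries": drops_carries,
}

def classify_error(a: int, b: int, student_answer: int):
    """Return the names of buggy strategies whose prediction matches
    the student's (incorrect) answer."""
    if student_answer == a * b:
        return []
    return [name for name, strategy in BUGGY_STRATEGIES.items()
            if strategy(a, b) == student_answer]

# e.g. for 27 x 4 (correct: 108), an answer of 88 matches "drops_carries"
```

Because each strategy makes a testable prediction, large-scale telemetry lets the catalogue of errors be validated and refined against real student data.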
Experiences from piloting the SoL App
To obtain feedback on the app’s user-friendliness and collect data, EI has been conducting a pilot with the app in 5 districts across India. The preliminary observations suggest a number of advantages of using the app:
- The geographical reach of the app is comparable to a paper-based assessment: Thanks to offline support, the app does not need internet connectivity and syncs data whenever connectivity is available. The logistics of printing and couriering papers are also largely avoided.
- Adaptivity in the test design: Evaluators using a paper test are trained to skip questions based on student performance, but the level of adaptivity an app-based test can offer is significantly higher (UNESCO 2017). This allows tests to provide better diagnostics on students' areas of difficulty while being shorter than paper tests.
- Many critical tasks that had to be performed manually are automatically handled by the app: This includes questions requiring timing (including all questions testing fluency) and the process of marking students’ answers. In addition, if a question is found problematic or out of context, it can be easily replaced without requiring reprinting.
- Students enjoy the app experience: While some of this may be a novelty effect, the app can be continuously upgraded and enhanced from an interface standpoint.
- Facilitators find the test conduction process easy and convenient: Facilitators involved with the pilots have shared that the app has reduced their manual effort, specifically:
- Less manual data entry work, resulting in far fewer errors
- Less time per child to conduct the test than a paper-based test
- Standardization: Paper-based FLN tests require facilitators to capture responses on a sheet of paper. Despite multiple trainings, it is difficult to achieve in such an exercise the level of standardization that the app achieves.
- Facilitator prompting: Human biases in large scale assessments can result in prompting, over-scoring, incorrect response capture, etc. many of which are overcome with the app.
- Facilitator training: During the pilots, the app-based assessment was found to reduce the training time by a factor of about 2.
- Bringing parents into the assessment process: During the pilots, we are finding parents (especially mothers) taking a keen interest in the process. There is a willingness to participate in understanding their child's learning and ways to improve it. The SoL App is indeed a handy tool for parents to conduct assessments themselves at home.
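The adaptivity noted above can be illustrated with a minimal discontinue rule of the kind evaluators otherwise apply manually, for example stopping a subtask after a run of consecutive errors. The threshold and function names here are hypothetical, not the app's actual rule:

```python
def administer_subtask(questions, ask, max_consecutive_errors=3):
    """Present questions until they run out or the stop rule triggers.
    `ask` is a callback returning True if the student answered correctly."""
    results, streak = [], 0
    for q in questions:
        correct = ask(q)
        results.append((q, correct))
        streak = 0 if correct else streak + 1
        if streak >= max_consecutive_errors:
            break   # discontinue: remaining items assumed too difficult
    return results

# Simulated run: the student misses q2-q4, so q5 is never presented
answers = {"q1": True, "q2": False, "q3": False, "q4": False, "q5": True}
presented = administer_subtask(list(answers), answers.get)
```

Encoding the rule in software is what makes it uniform across thousands of administrations, rather than dependent on each facilitator remembering to apply it.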
The pilots have also surfaced some challenges:
- Digital divide: Students from low-resource families with limited or no access to a digital device take time to adapt to the app.
- Gender disparity: During the pilots, EI has observed that girls have more limited access to a smartphone at home than boys and take more time to navigate the app. Many of the boys we assessed played games on a smartphone and performed better on timed questions. These observations need further probing.
- Data privacy: The app requires a child to record their voice as responses. Consent for such recordings must be obtained from the guardian or the child, and the data must be stored securely.
We foresee this dynamic and rich measurement of learning creating datasets available to researchers and practitioners alike, and enabling a body of common, accumulated knowledge that furthers both the Learning Sciences and Learning Engineering. We believe this can help solve the problem of foundational learning by providing rich data on the mistakes students make and the misconceptions they hold, and by enabling structured practice, appropriate to each student's current knowledge, to remediate learning gaps.