Apart from inflicting a devastating human tragedy, Covid has transformed the fundamental nature of our interactions with each other. Nowhere is this more apparent than in the lives of young learners, and nowhere is the impact greater than among the most vulnerable learners across the world. In-person time between students and teachers, or between students and digital learning resources, has been drastically reduced, and for millions of learners across India, it has been slashed to zero [i] [ii].

Even prior to 2020, findings from ASER surveys spanning fifteen years showed that half of Std V students struggle to read basic text fluently or perform simple arithmetic operations [iii] [iv]. As these findings show, the need for resources and effective learning was always present, but it has now sharpened to a painfully acute level, and we must take practical yet powerful steps to meet it.

What can teachers and developers of educational content do? The onus is on us to ensure that the time learners spend on a task or assessment yields high value for them. We know how important and effective it is to plan the syllabus and the flow of content from a Backward Design [v] perspective, but can we ensure that the same approach trickles down to each item and task that the learner engages with? One way of doing this is to critically examine the actual outcome of each task or assessment before it reaches learners. This is especially true for assessments: good assessments provide vital information about the status of learning, and also help uncover learning gaps and common misconceptions – useful for both the educator and the learner.

Here are a few instances to illustrate how some items/tasks could be made more effective:

1. An assessment item that addresses a learning objective, but does not do a sophisticated job of it – the question may test only a very rudimentary understanding of the concept, and not provide enough information to the educator.

Fig. 1: Both questions given here test an understanding of the fundamentals of fractions.
Question (i) on the left shows equal parts of the whole, whereas Question (ii) on the right requires learners to deduce the number of equal parts, based on the fact that the figure is a square. Performance data for both questions is based on grade 6 student responses.

In Fig. 1, Question (i) is important to check a basic understanding of fractions and is probably appropriate in the initial stages of the concept (at lower grades). To learn whether learners have a more nuanced understanding of fractions, it is important to check their understanding of fractions with unfamiliar configurations in a familiar shape. This is also reflected in the performance of students in grade 6 – less than 30% of the students chose the correct answer in Question (ii). On further examining the item data for Question (ii), we see that more than 35% of learners chose option B, which indicates that students are simply counting the number of parts the square has evidently been divided into, without considering the ‘equal parts’ aspect of the concept of fractions. This leads them to think: “The square has been divided into 12 parts, and 8 parts have been shaded, so 8/12 must be the answer.” The question provides a great opportunity for teachers to identify how students are thinking about fractions and spend valuable time discussing a fundamental understanding of the concept.
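The “count the parts” misconception can be made concrete with a short sketch. The part areas used here are hypothetical – they illustrate the general idea, not the actual figure in Question (ii): forming a fraction by counting parts is only valid when the parts are equal; otherwise the shaded fraction must be computed from the areas themselves.

```python
from fractions import Fraction

def shaded_fraction(part_areas, shaded_indices):
    """Fraction of the whole that is shaded, computed by area,
    not by counting parts."""
    total = sum(part_areas)
    shaded = sum(part_areas[i] for i in shaded_indices)
    return Fraction(shaded, total)

# Hypothetical figure (NOT the actual Question (ii)): a square of
# area 16 divided into 12 visibly separate but UNEQUAL parts --
# eight small parts of area 1 and four larger parts of area 2.
areas = [1] * 8 + [2] * 4
shaded = list(range(8))            # the eight small parts are shaded

naive = Fraction(8, 12)            # "8 of 12 parts" -> 2/3 (misconception)
by_area = shaded_fraction(areas, shaded)   # 8/16 -> 1/2 (correct)
```

The two answers diverge precisely because the parts are unequal – which is exactly the distinction the item is designed to surface.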

2. A learning task created for a specific skill on a digital interface, which fails due to a poor user experience – as shown by anecdotal pilot evidence.

Shown here is a task that was used in Mindspark English (a personalized learning software from Ei) where students needed to drag letters and create a correct word from jumbled letters.

While the task seems straightforward and intuitive, it works best on a touchscreen. An older version of this task and interface proved tiresome and tedious when learners used a mouse or a trackpad on a non-touchscreen device. This feedback was taken to the technical team, who improved the task to make it easier for students to work towards the specific skill it targets – improving their spelling given a specific set of letters and an image cue.

Such testing and piloting in the field brings immense value to the development of educational content on a digital interface – chiefly because digital learning resources are meant to assist the teacher and empower students by providing avenues for them to take control of their own learning. A seemingly simple failure at the UX level can jeopardize the entire learning experience for a learner, and is easily avoided by pilot tests and in-house user testing.

3. An assessment item that ends up requiring a different skill to answer the question, moving away from the original learning objective.

The following questions were designed to test the learning objective of the effect of the tilt of the Earth’s axis – one that students are expected to be familiar with at grade 7.

Fig. 2: Both questions test the effect of the tilt of the Earth’s axis, but Question (i) is heavy on text and actually gives away the answer, whereas Question (ii) tests whether the learner can identify the correct cause-effect relationship based on their prior knowledge.

Question (i) in the example above turns into a language comprehension question and does not test the original learning objective. Often, the creator of a task has an innate bias towards the item, due to which certain flaws may be overlooked. This can be overcome with the help of a strong review process. In this case, a reviewer, applying Occam’s razor, has distilled the question back to its original objective. All that empty space in Question (ii) is the learner’s mind-space freed from an unnecessary exercise.

It is helpful to have a checklist while developing and reviewing a task/item to ensure that ‘the What, the Why and the How’ of a task or item get equal attention:

  1. What learning objective/skill do I want to address and why is it important?
  2. Does the task address the learning objective/skill?
  3. Is there an opportunity to address a misconception related to this learning objective?
  4. Is the task clear and can the learner respond to the task without impediments?
  5. Does the response to the task require or get diluted by another skill?

While points 1, 2 and 3 are crucial to the creation of the task, points 4 and 5 are best answered with the help of inputs from a reviewer or an actual user. A pilot test also often gives us quick inputs – much like a survey – and valuable information as we design and develop our tasks and assessments. As part of the development process itself, include the step of experiencing the task or item much as a student would – this small step has the power to weed out multiple errors or slips that may creep in during the creation of content.

There is an immense need and opportunity for all of us to come together and work towards delivering effective content to learners who need it more than ever. If we develop tasks and items while keeping some of these essential pointers in mind, hopefully, we will be able to address these needs successfully.



[i] http://img.asercentre.org/docs/ASER%202021/ASER%202020%20wave%201%20-%20v2/nationalfindings.pdf

[ii] https://www.orfonline.org/research/regression-in-learning/

[iii] http://img.asercentre.org/docs/ASER%202021/ASER%202020%20wave%201%20-%20v2/commentary_rukminibanerji.pdf

[iv] https://www.india-seminar.com/2018/706/706_sridhar_rajagopalan.htm (https://www.india-seminar.com/2018/706.htm)

[v] https://www.edglossary.org/backward-design/


Meghna Kumar

Heads Test Development at EI and enjoys finding out what's happening in classrooms. She is interested in how children think and the ways in which we can use information from assessment in the classroom. She also loves Bangalore's trees!