PARTNER STORY

What Counts as Evidence in EdTech?

There is no single evidence standard that fits every type of EdTech solution. The type of evidence we consider relevant depends largely on the nature of the solution and the type of learning outcomes it aims to influence.


Learning Cabinet Team

05.03.2026

Over the past weeks, we’ve received a large number of applications to the Learning Cabinet, and the call is still open for EdTech providers globally.

One topic comes up repeatedly in conversations with applicants, especially those whose applications have been declined: what do we actually mean by “evidence”?

It’s a fair question, and the answer is not always straightforward. 


 
Matching Evidence to the Type of EdTech Solution 

Some EdTech tools have clearly defined, curriculum-aligned learning goals and include structured learning content. For these types of solutions – for example in mathematics, literacy, or language learning – it is often feasible to measure direct learning outcomes through pre- and post-tests or other quantitative methods. 

However, many EdTech solutions do not operate in this way. Some tools focus on supporting teachers and improving classroom pedagogy, rather than delivering specific content directly to learners. These tools may help teachers plan lessons, structure classroom activities, collaborate with colleagues, or streamline assessment practices. 

For such solutions, it is often much more difficult to isolate and measure their direct effect on students’ final learning outcomes. Teaching and learning are influenced by many factors, and the tool may play an indirect but still meaningful role in improving the learning environment. 

In these cases, it can be more appropriate to gather evidence on other levels of impact instead. This might include measurable improvements in teachers’ workload, efficiency in assessment, adoption of new pedagogical practices, or increased collaboration among educators. Qualitative feedback from teachers and classroom observations can also provide valuable insights into how the tool supports teaching practice. 

In other words, the type of evidence should match the type of impact the solution is designed to create.  

 

Why We Ask for Evidence 

When we match EdTech solutions with governments and education systems, we need to be confident that the solution actually creates impact, and that the impact has a reasonable chance of being replicated in different countries and contexts. 

Evidence is what gives us that confidence: it documents a tool’s scope, design, and outcomes over time. And when we present solutions to countries, it is also the key element that helps them trust the tool. 

Popularity or fast growth in the number of users can certainly be a positive signal. It may indicate that a solution is easy to adopt and responds to a real need, but user growth alone does not demonstrate meaningful learning improvements.  

In large education systems, governments need to understand not only whether a product is popular, but also what outcomes it produces in classrooms and whether those outcomes can be replicated at scale. 

 

What kinds of evidence can exist? 

Ideally, we look for a developing evidence portfolio. 

At the foundation is a logic model or theory of change. In simple terms, this means the design of the solution aligns with existing research and established understanding of how teaching and learning are supported. The product concept should clearly connect to known pedagogical principles, effective teaching practices, or system-level processes that contribute to improved learning outcomes. 

The next level includes evidence gathered from real users of the solution. This could include structured user feedback, independent third-party evaluations, qualitative insights, case studies, or quantitative data. Depending on the type of solution, different forms of empirical evidence can be valuable. What matters most is transparency: how the data was collected, how many users were involved, what methods were used, and how the findings were analysed. 

In some cases, we see larger-scale empirical or longitudinal studies or even Randomised Controlled Trials (RCTs). These types of studies require significant time and resources and are therefore more commonly conducted by established organisations or around widely used learning tools, particularly in areas such as literacy, mathematics, or language learning. 

At the same time, educational outcomes are always influenced by many factors: teachers, classroom context, curriculum, and others. Technology alone rarely determines the results. 

This is why it is important to understand the role a digital solution plays in the learning environment and in the learner’s experience. As the field of EdTech evaluation continues to evolve, we are learning from research, practitioners, governments, and solution providers. These insights are helping shape the upcoming EdTech for Good Framework V2.0, which aims to further strengthen how evidence of learning impact is assessed. 

 

What does this mean for EdTech providers? 

Evidence at every scale counts. What we want to see is a credible and transparent evidence story. 

• Does the solution clearly explain why it should work? 
• Is it aligned with existing research? 
• Has impact been observed in real-world use? 
• Is the data collection transparent and methodologically sound? 

If you do not yet have strong empirical evidence, that is not the end of the road. It simply means the next step is to start building your evidence portfolio:

  • Clarify your theory of change. 
  • Align your solution with existing research. 
  • Collect structured data from real users. 
  • Document findings transparently. 
  • Strengthen your evidence gradually over time. 

 

Evidence building is a journey, and we are happy to see solutions at different stages of that journey. The Learning Cabinet application call is still open, so if you are developing an EdTech solution with real potential for impact, there is still time to apply.