
For more than a decade, ‘evidence’ was a vague buzzword in EdTech (educational technology) circles. The mixed bag of EdTech evidence claims included product endorsements and vendor-collected testimonials, independent reviews by teachers and parents, and academic impact evaluations of effectiveness and efficacy. Fast forward to 2023, and a reinvigorated focus on efficacy is touted as the breakout point for the future of EdTech.

But for many EdTech companies, scientific evidence is becoming an albatross around their necks. What counts as ‘evidence’ for an educational app or platform? An effect size of 0.92 on students’ learning? Positive feedback from thousands of children?

Competing definitions of evidence

Without a doubt, evidence is in itself a positive thing: it is better to have some evidence than none. But the question of how to determine whether an EdTech product is excellent, good or inadequate fuels a divisive myth: namely, that there is only one way of demonstrating evidence of learning and educational impact.

On one side of the debate, ‘what works’ is defined in terms of efficacy, and evidence is ranked hierarchically, with randomised controlled trials (RCTs) at the top of the pyramid. On the other side is the view that an RCT-based definition of evidence propels a research monoculture that is ‘detrimental to the rigour and vigour of educational research’ (Biesta et al., 2022, p. 2).

Two prominent educational figures are often cited as representatives of the two viewpoints: Robert Slavin and Gert Biesta. For Slavin, RCTs are the gold standard for evidence: to establish whether something works, experimental evidence of the highest standard – similar to that used in medicine – must be applied to education. For Biesta, on the other hand, measuring learning through standardised tests – as practised in RCTs – runs counter to the nature of learning: education is a value-driven system in which learning happens through the exchange of meaning.

Both approaches have found support among educational research groups and policymakers, and, consequently, have been incorporated into national policies. Most prominently, the RCT-based definition of evidence has been part of the US Department of Education ESSA Standards (https://ies.ed.gov/ncee/wwc/essa).

Post-pandemic educational reforms have since given EdTech researchers an opportunity to reflect on the pros and cons of RCTs.

The benefits and limitations of RCTs

A well-conducted RCT abolishes selection bias in intervention comparisons and is therefore the best approach for demonstrating the efficacy of an intervention (Slavin, 2020). With ESSA’s clearly defined parameters, educational programmes can identify the steps towards a strong, moderate or promising level of evidence.

However, given that adequately powered RCTs require significant financial and human resources, it is often the RCT rather than the intervention that fails (Dekker & Meeter, 2022). Moreover, an RCT cannot establish how an intervention works, or could work. Furthermore, in a typical RCT, teachers are expected to follow agreed procedures with minimal deviation. Biesta and colleagues argue that this runs against the reality of education: any evaluation approach should give full space to teachers’ agency and their ability to adjust tools and approaches to their classrooms.


As we rethink education’s role at the cusp of an Artificial Intelligence revolution, let us adopt a compromise response to evidence-based educational technology. The notion of an ‘EdTech Evidence Portfolio’ – one that embraces the efficacy imperative while recognising its limitations (Kucirkova, 2022) – is a solution-oriented response to the educational evidence debates. The EdTech Evidence Portfolio idea builds on the recognition that different types of evidence answer different questions, and that EdTech is a unique vehicle for positively impacting children’s learning.

An EdTech evidence portfolio includes experimental as well as observational evidence, with impact demonstrated through measurable standardised tests as well as reflection on knowledge that cannot be measured. Crucially, an evidence portfolio includes the authentic voices of teachers and students, and showcases the diversity of educational research. The diversity perspective embedded in an evidence portfolio ensures that EdTech is driven by science, not hype.


References

Biesta, G., Wainwright, E., & Aldridge, D. (2022). A case for diversity in educational research and educational practice. British Educational Research Journal, 48(1), 1–4. https://doi.org/10.1002/berj.3777

Dekker, I., & Meeter, M. (2022). Evidence-based education: Objections and future directions. Frontiers in Education, 7, 941410. https://doi.org/10.3389/feduc.2022.941410

Kucirkova, N. (2022). Understanding evidence: A brief guide for EdTech producers. Report for University of Stavanger. https://doi.org/10.13140/RG.2.2.30096.07687/1

Slavin, R. E. (2020). How evidence-based reform will transform research and practice in education. Educational Psychologist, 55(1), 21–31. https://doi.org/10.1080/00461520.2019.1611432

More content by Natalia Kucirkova