Schools are hotbeds of innovation. In my role supporting schools to develop more evidence-informed practice, I always admire teachers’ creativity and dedication. However, I also see colleagues trying to do too many things, including things likely to have limited impact based on the best available evidence.
A clear message from the Education Endowment Foundation’s popular resource on putting evidence to work is that schools should do fewer things better (EEF, 2019). This includes stopping things that are less effective in order to release the capacity to do even better things. In my experience, these messages are beginning to take hold; they also feature prominently in the new national professional qualifications.
At a system level, I think we should do more to stop ineffective initiatives. The Department for Education (DfE) is increasingly good at scaling up initiatives with promise, such as the Nuffield early language intervention (NELI), which, according to multiple rigorous evaluations, has improved children’s communication and language (Dimova et al., 2020).
What about ineffective programmes?
A recent evaluation of Achievement for All’s flagship programme – used by around 10 per cent of schools in England – provides a fascinating case study (Humphrey et al., 2020). The evaluation was concerning: it found that children in the control schools did considerably better than their peers in schools using the intervention. The study received the EEF’s highest security rating of five padlocks based on the randomised design, large scale, low dropout and low risk of wider threats to validity. This rating sits on top of the EEF’s exacting standards, which involve independent evaluation and pre-specifying the analysis to reduce ‘researcher degrees of freedom’ (EEF, 2017; Gehlbach & Robinson, 2018).
In short, we can be very confident in the headline: children in the Achievement for All schools made two months’ less progress in reading, on average, compared to children in schools that did not receive the programme.
What happened after the evaluation?
The EEF (2020) published helpful guidance for schools currently using the programme, and Achievement for All published a blog (Blandford, n.d.) essentially rejecting the negative evaluation – yet many schools continue to use the programme.
The contrast is stark: when programmes are evaluated with promising results, they are expanded; when evaluations are less positive, there are limited consequences.
What if we actively stopped ineffective programmes?
If we assume that the findings from the evaluations of programmes such as Achievement for All generalise to the wider population of schools already using the programme – a reasonable assumption – then stopping it would be an excellent investment.
A bold option is to simply pay organisations to stop offering ineffective programmes – think ‘golden goodbyes’. The government, or a brave charity, could purchase the intellectual property, thank the staff for their service, provide generous redundancy payments, and concede that the organisation’s mission is best achieved by stopping a harmful intervention.
If that feels too strong, what about simply alerting the schools still using the programme and supporting them to review whether it is working as intended in their own setting? Remember, for Achievement for All, this is around 1 in 10 of England’s schools. New adopters of ineffective programmes could be discouraged by maintaining a list of ‘not very promising projects’ to mirror the EEF’s ‘promising projects’ tool, though we may need a better name.
These ideas scratch the surface of what is possible, but I think there is a strong case for using both positive and negative findings to shape education policy and practice.
Finally, there is an ethical dimension: is it right to do so little when we have compelling evidence that certain programmes are ineffective?
Blandford, S. (n.d.). Education Endowment Foundation Achievement for All: Years 4 and 5 Trial Programme (2016–2018). Retrieved from https://afaeducation.org/news/education-endowment-foundation-achievement-for-all/
Dimova, S., Ilie, S., Brown, E. R., Broeks, M., Culora, A., & Sutherland, A. (2020). The Nuffield early language intervention. London: Education Endowment Foundation. Retrieved from https://educationendowmentfoundation.org.uk/public/files/Nuffield_Early_Language_Intervention.pdf
Education Endowment Foundation [EEF]. (2017). EEF standards for independent evaluation panel members. London: Education Endowment Foundation. Retrieved from https://educationendowmentfoundation.org.uk/public/files/Evaluation/Setting_up_an_Evaluation/Evaluation_panel_standards.pdf
Education Endowment Foundation [EEF]. (2019). Putting evidence to work: A school’s guide to implementation. London: Education Endowment Foundation. Retrieved from https://educationendowmentfoundation.org.uk/tools/guidance-reports/a-schools-guide-to-implementation/
Education Endowment Foundation [EEF]. (2020). The EEF’s evaluation of Achievement for All: Answers to key questions for teachers and school leaders. Retrieved from https://educationendowmentfoundation.org.uk/news/achievement-for-all-answers-to-key-questions-for-schools/
Gehlbach, H., & Robinson, C. D. (2018). Mitigating illusory results through preregistration in education. Journal of Research on Educational Effectiveness, 11(2), 296–315. https://doi.org/10.1080/19345747.2017.1387950
Humphrey, N., Squires, G., Choudry, S., Byrne, E., Demkowicz, O., Troncoso, P., & Wo, L. (2020). Achievement for All: Evaluation report. London: Education Endowment Foundation. Retrieved from https://educationendowmentfoundation.org.uk/public/files/Projects/Evaluation_Reports/Achievement_for_All_(final).pdf