Blog post

Systematic reviews: Do we need more rigor and transparency?

Thomas Nordström, Senior lecturer at Linnaeus University

Have you ever wondered about the actual reliability of systematic reviews of educational interventions? This issue is significant, as systematic reviews are critical in informing educational research, which guides both classroom practices and education policies.

In response to this, our meta-review (Nordström et al., 2023) explored the realm of educational systematic reviews, seeking those with a ‘low risk of bias’. Reviews assessed as ‘low risk of bias’ are deemed more credible than reviews assessed as ‘high risk of bias’, which often lack methodological rigour and whose evidence of how effective, or ineffective, an intervention is can be questioned. There are also increasing demands, not least from the EU, to make science more open, such as requiring researchers to share their methods and materials. This has many advantages, for example that findings from high-quality reviews can be more easily reused and/or critically examined.

From our search, we initially examined 258 systematic reviews on educational effectiveness published in recent years. In this stage we discovered that many authors had not specified the most basic information we needed to be able to include them in our review, such as which population they focused on (17 per cent) or what type of outcome measures they wanted to target (17 per cent) – for example, that the intervention should target a specific skill, such as reading comprehension. The same was also observed for what type of studies authors wanted to include (23 per cent) – for example, whether to include experimental or observational studies – along with unspecified details of what to compare the intervention effect with (45 per cent). As clearly outlined in handbooks of systematic review methods, this blending of all types of studies, designs and outcome measures makes a systematic review less useful for readers who want to inform themselves about the effectiveness of a particular intervention to be used in practice.

Out of the 258 reviews, 88 were deemed eligible for our study. These reviews focused on any type of intervention for K-12 students, for example in areas such as reading, writing and mathematics. Surprisingly, when assessed for risk of bias using the ROBIS tool with its accompanying guidance, only 10 of these studies met the ‘low risk of bias’ criteria, amounting to a mere 11 per cent. The majority fell into the ‘high’ or ‘unclear risk of bias’ categories, which raises concerns for those who rely on these reviews for informed decision-making. ROBIS involves items such as whether a review was pre-registered, whether its study inclusion and exclusion criteria were unambiguous, whether the search strategy was sufficient, whether the quality of included primary studies was controlled for, and whether the synthesis acknowledged publication bias.

‘Out of the 88 reviews deemed eligible for our study, only 10 passed the “low risk of bias” criteria … the majority fell into the “high” or “unclear risk of bias” categories, which raises concerns for those who rely on these reviews for informed decision-making.’

What is particularly concerning is that among the 10 studies with low risk of bias, only a few shared their data, and none provided detailed information about where exactly they found the primary data. This lack of transparency is akin to saying ‘Trust us, it works’ without providing evidence, which is far from ideal. As the European Commission (EC, 2019), EU member states, UNESCO and other stakeholders (such as funders) call for more transparency in educational research, it is paramount for researchers and journals to adhere to the so-called FAIR principles – that research data be Findable, Accessible, Interoperable and Reusable.

The key takeaway from our study is that we believe a fundamental change is necessary in the way educational systematic reviews are conducted. There is a great need for more transparency, improved data sharing, and adherence to the clear-cut guidelines already provided by pioneer organisations in systematic reviews, such as the Campbell Collaboration network. To increase credibility, such methodological discipline is essential.

In summary, this meta-review should serve as a wake-up call to researchers and journals to improve their practices. With the world moving towards more open science and a demand for solid evidence in education, there is a pressing need for change. As our study shows, many systematic reviews are not living up to already established methodological standards; we believe better organised and more robust methods will foster greater trust and provide better guidance on what truly works in the classroom.


References

European Commission [EC]. (2019). Guidelines on FAIR data management in Horizon 2020. https://ec.europa.eu/research/participants/docs/h2020-funding-guide/cross-cutting-issues/open-access-data-management/open-access_en.htm

Nordström, T., Kalmendal, A., & Batinovic, L. (2023). Risk of bias and open science practices in systematic reviews of educational effectiveness: A meta‐review. Review of Education, 11(3), e3443. https://doi.org/10.1002/rev3.3443