
Challenging the evidence on the new baseline test in reception

Gemma Moss, Harvey Goldstein, Pam Sammons, Gwen Sinnott and Gordon Stobart


In this blog, the authors of the BERA report A baseline without basis: The validity and utility of the proposed reception baseline assessment in England preview the findings of their research. The full report has now been published.


On 11 April, school standards minister Nick Gibb announced that the National Foundation for Educational Research would be developing ‘baseline’ assessments for reception-age children. Gibb said:

 

‘This quick, simple assessment will help us to capture the progress that children make throughout primary school and provides a fairer measure for school accountability.’

What Gibb does not mention is the overwhelming evidence that assessments at this age, especially if they are quick and simple, are unreliable. Results depend heavily on how old the child is, and such assessments cannot accurately measure the attainment of children who speak English as a second language. This makes them particularly unsuitable for use as instruments of accountability, by which Gibb means rankings or league tables of schools.

Teachers are, of course, continually assessing their children in a ‘formative’ sense in order to improve their learning. But Gibb has explicitly ruled out using reception baseline assessment data for this purpose. The assessment is to be used purely as an estimate of pupils’ starting points, against which key stage 2 test scores will be ‘adjusted’ so that schools can then be compared in terms of their pupils’ progress.


Gibb’s announcement says nothing about how the scheme is to be evaluated. In fact we do know, from analysis of data from the government’s own national pupil database, that at secondary school level an equivalent scheme – which uses the interval from year-6 key stage 2 tests to year-11 exam results – raises serious problems (Leckie and Goldstein, 2017). Attempts to compare secondary schools using ‘value added’ progress measures are unreliable, and are of very little use for parental choice of schools – not least because they are out of date by many years. They cannot be used to make scientifically defensible distinctions between schools.

This is a fundamental measurement problem, and it will be even more acute in the case of the proposed baseline assessments, given both the longer time lag between the reception baseline and key stage 2 outcomes and the much smaller number of children in each primary school compared with each secondary school. A proper evaluation of baseline assessment could only be made after the first cohort of children has passed through primary school – that is, not until 2027 – but the evidence already strongly suggests that the new proposals will turn out to be of little use.
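To make the statistical point concrete, here is a minimal sketch in Python, using invented numbers rather than real pupil data, of how a ‘value added’ progress measure of this general kind is typically constructed. It is not the DfE’s or NFER’s actual method; it simply illustrates why the average progress score for a small primary-school cohort comes with a wide margin of uncertainty.

# A minimal sketch (not the DfE's actual method) of a 'value added' progress
# measure, using made-up numbers, to show why small cohorts give uncertain results.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a baseline score at reception and a KS2 score for each pupil.
n_pupils = 1000
baseline = rng.normal(100, 15, n_pupils)
ks2 = 0.6 * baseline + rng.normal(40, 12, n_pupils)   # assumed relationship plus noise
school = rng.integers(0, 40, n_pupils)                # 40 hypothetical schools

# Step 1: predict KS2 scores from baseline scores with a simple linear regression.
slope, intercept = np.polyfit(baseline, ks2, 1)
predicted = intercept + slope * baseline

# Step 2: a school's 'value added' is its pupils' average residual
# (actual KS2 score minus the score predicted from the baseline).
residuals = ks2 - predicted
for s in [0, 1]:
    r = residuals[school == s]
    n = len(r)
    value_added = r.mean()
    # Rough 95% interval: about +/- 2 * SD / sqrt(n), so the smaller the cohort,
    # the wider the interval around the school's score.
    half_width = 2 * r.std(ddof=1) / np.sqrt(n)
    print(f"school {s}: n={n}, value added = {value_added:+.2f} +/- {half_width:.2f}")

With year groups of around 30 pupils, intervals of this kind are typically wide enough that most schools’ scores overlap, which is the sense in which such measures cannot support scientifically defensible distinctions between schools.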

The present proposals are not fit for purpose. We urge the government to think again. Otherwise, the policy will turn out to be a pointless exercise and a wasteful use of public funds.


Professor Gemma Moss, UCL Institute of Education and past president, British Educational Research Association
Professor Harvey Goldstein, University of Bristol
Professor Pam Sammons, University of Oxford and senior research fellow, Jesus College, Oxford
Gwen Sinnott, education consultant and past president of the London Education Research Network
Emeritus Professor Gordon Stobart, UCL Institute of Education


The authors are members of BERA’s expert panel, convened to consider the viability of baseline assessment for school accountability.

The panel will publish a ‘BERA Baseline Briefing’ statement in June 2018.


Reference

Leckie G and Goldstein H (2017) ‘The evolution of school league tables in England 1992–2016: “Contextual value-added”, “expected progress” and “progress 8”’, British Educational Research Journal 43(2): 193–212
