Metrics of school and teacher performance are increasingly used by policymakers in the UK education systems to determine, for example, performance-related pay and positions in school league tables. A commonly used form of metric is the ‘value-added’ measure of educational progress, which estimates a school’s or teacher’s contribution – the value that they add – to their students’ education. These metrics are estimated by comparing students’ academic performance under the school or teacher to their performance in previous years. By using each student as their own unit of comparison, value-added measures attempt to control for all differences between students that are stable over time, and therefore aim to provide unbiased measures of school and teacher performance. However, the ability of value-added measures to provide accurate and unbiased measures of performance has been questioned (Perry 2016; Taylor and Nguyen 2006), and it is not clear how well they control for time-stable factors such as students’ prior ability.
If a particular value-added measure successfully controls for all pre-existing differences between students, then it should not be associated with students’ genetics (Branigan et al 2013). In research presented in full in a new article in the British Educational Research Journal (Morris et al 2018), we tested for associations between genetics and value-added measures using data from a large UK birth cohort, the Avon Longitudinal Study of Parents and Children (ALSPAC). We combined genome-wide genetic data with educational attainment data from samples of children at ages 11, 14 and 16. The attainment data were obtained through linkage to the UK National Pupil Database, the most complete and accurate record of individual educational attainment available in the UK. We investigated three value-added measures.
- Raw value-added: the difference between a student’s attainment at two exam time points.
- Contextual value-added: the difference between a student’s attainment at two exam time points while holding constant a range of background factors.
- Teacher-assessed value-added: the difference between a teacher’s personal assessment of student ability at two time points.
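To make the simplest of these concrete, a raw value-added score can be thought of as the difference between a student’s standardised attainment at two exam time points. The sketch below is purely illustrative – the point scores, cohort size and standardisation choice are assumptions for this toy example, not the paper’s actual data or pipeline.

```python
import numpy as np

# Hypothetical point scores for a tiny cohort at two exam time points
# (e.g. age 11 and age 14). All numbers are made up for illustration.
age11_scores = np.array([28.0, 31.5, 26.0, 33.0])
age14_scores = np.array([33.0, 34.0, 31.0, 40.0])

def standardise(x):
    """Z-score within the cohort so the two time points are comparable."""
    return (x - x.mean()) / x.std()

# Raw value-added: later standardised attainment minus earlier
raw_value_added = standardise(age14_scores) - standardise(age11_scores)
```

Students with positive scores made more progress than the cohort average; by construction, the cohort-mean raw value-added is zero.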
We examined the genetic similarity between all pairs of individuals in the data and compared this to their similarity as indicated by the different value-added measures. If value-added measures are unbiased and immune to genetic variation, students with similar value-added scores should be no more genetically similar than those with very different value-added scores.
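The intuition behind this pairwise comparison can be sketched with simulated data. The toy example below uses a Haseman-Elston-style regression of pairwise phenotypic similarity on pairwise genetic relatedness – an assumed simplification for illustration, not the exact genome-wide method used in the paper. Under the null of no genetic contribution, the regression slope should be close to zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 500  # individuals, genetic variants (toy sizes)

# Simulated standardised genotypes for unrelated individuals (assumption)
genotypes = rng.standard_normal((n, m))
grm = genotypes @ genotypes.T / m  # genetic relationship matrix

# Null case: value-added scores with no genetic contribution at all
value_added = rng.standard_normal(n)

# For every distinct pair, compare genetic relatedness with
# phenotypic similarity (cross-product of the pair's scores)
iu = np.triu_indices(n, k=1)
pair_relatedness = grm[iu]
pair_similarity = np.outer(value_added, value_added)[iu]

# Slope of similarity on relatedness: near zero when genetics play no role
slope = np.polyfit(pair_relatedness, pair_similarity, 1)[0]
```

A positive slope would indicate that genetically similar pairs also have similar value-added scores – exactly the signal that an unbiased measure should not show.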
Our analyses provided little evidence of genetic contributions to raw value-added measures built only from key stage point score data, as displayed by the blue bars in figure 1. Of course, we cannot definitively rule out genetic contributions as there was some uncertainty in the estimates.
We did, however, find evidence for genetic contributions to contextual value-added measures at two of the three timepoints: between ages 11 and 14, and ages 11 and 16, as shown by the green bars in figure 1. This appears somewhat counterintuitive, because the contextual value-added measures additionally controlled for between-individual differences such as gender. Further analyses based upon simulated data suggested that this could have been due to measurement error at the age 11 test, which would inflate the estimated genetic contributions.
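The measurement-error mechanism can be illustrated with a toy simulation (our actual simulations were more involved; the reliability values and variances below are assumptions for this sketch). If the age-11 test captures the genetic component of ability only imperfectly, differencing the two scores no longer cancels it, so the value-added measure retains a genetic signal.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
g = rng.standard_normal(n)  # genetic component of ability (simulated)

# Age-11 score measures the genetic component imperfectly
# (reliability < 1 is the hypothetical measurement error)
reliability = 0.6
t1_noisy = reliability * g + rng.standard_normal(n) * np.sqrt(1 - reliability**2)

# Age-14 score captures the genetic component fully, plus exam noise
t2 = g + rng.standard_normal(n) * 0.5

# Value-added from the error-prone baseline: genetics leak through
r_noisy = np.corrcoef(g, t2 - t1_noisy)[0, 1]

# Contrast: a baseline that fully captures g cancels it on differencing
t1_clean = g + rng.standard_normal(n) * 0.5
r_clean = np.corrcoef(g, t2 - t1_clean)[0, 1]
```

With an imperfect baseline the correlation between genetics and value-added is clearly nonzero, while with a clean baseline it hovers around zero – the pattern our simulations suggested could explain the counterintuitive contextual value-added results.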
Finally, our analyses provided evidence for genetic contributions to teacher-assessed value-added measures, as displayed by the orange bar in figure 2. Genetic contributions towards teacher-assessed value-added were estimated to be higher than for both raw and contextual value-added, but lower than those previously reported for teacher-assessed value-added measures (Haworth et al 2011). These genetic differences could be expressed in a huge variety of ways – in capacity to concentrate, for example.
Our results demonstrate that some value-added measures may not be robust to genetic differences between students, particularly when calculated from teacher-reported ability. Value-added measures should therefore be used with caution in educational research and policy, as they have the potential to provide unfair assessment and accountability measures of teachers or schools, and they may be biased indicators of school and teacher performance.
*This blog post is based on the article ‘Testing the validity of value-added measures of educational progress with genetic data’ by Tim T Morris , Neil M Davies, Danny Dorling, Rebecca C Richmond and George Davey Smith, published in the British Educational Research Journal. The article is now free-to-view to non-subscribers for a limited period.
Branigan A R, McCallum K J and Freese J (2013) ‘Variation in the heritability of educational attainment: An international meta-analysis’, Social Forces 92: 109–140. https://academic.oup.com/sf/article/92/1/109/2235872
Haworth C M A, Asbury K, Dale P S and Plomin R (2011) ‘Added value measures in education show genetic as well as environmental influence’, PLoS One 6(2). https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0016006
Morris T T, Davies N M, Dorling D, Richmond R C and Davey Smith G (2018) ‘Testing the validity of value-added measures of educational progress with genetic data’, British Educational Research Journal. https://onlinelibrary.wiley.com/doi/abs/10.1002/berj.3466
Perry T (2016) ‘English Value-Added Measures: Examining the Limitations of School Performance Measurement’, British Educational Research Journal 42(6): 1056–1080. https://onlinelibrary.wiley.com/doi/abs/10.1002/berj.3247
Taylor J and Nguyen A N (2006) ‘An Analysis of the Value-added by Secondary Schools in England: Is the Value-added Indicator of Any Value?’, Oxford Bulletin of Economics and Statistics 68: 203–224. https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1468-0084.2006.00159.x