This research (for the British Academy/Leverhulme) set out to gather evidence on the effects of the National Student Survey (NSS) ten years after its introduction. The effects of the survey go well beyond the obvious one of giving students an opportunity to evaluate their programme so that universities can “improve the student learning experience” (ipsos-mori.com).
The data suggest that the NSS is encouraging a more instrumental attitude to education amongst students. The questionnaire itself, institutional responses to concerns, and shifts in the curricula of higher education are all contributing to this move. This economistic register has been reinforced by the introduction of the £9,000 fee, with students increasingly considering whether they are getting ‘value for money’. In such a context, higher education may increasingly be regarded as a transaction in which students pay for something that academics ‘deliver’.
Academics reported that some senior managers who oversee the survey take a punitive attitude towards academics. This is evident in the ways in which the results are distributed, the public nature of the comparisons that are made, the requirements to respond to issues raised and the combative tone of much of the discussion around the survey results. Academics are required to respond – and quickly – to concerns that are raised, even though it is clear that these concerns may not represent a significant problem. The research suggests that survey results embody a series of mediations and approximations, producing a distinctly ‘muddy’ picture of how improvements might be made. Low scores may originate with a very small number of students, and sometimes the survey is used to express disgruntlement about something quite outside its remit. Where academics explored a problematic score and tried to address it, it was noticeable that some students who had been ‘satisfied’ then became ‘dissatisfied’ (and vice versa); the NSS is, as many people described it, a “blunt instrument”, and it cannot differentiate between a real problem and a superficial one. In addition, it highlights only what has ‘gone wrong’ (or appears to have ‘gone wrong’); at the same time, if nothing has ‘gone wrong’ it does not necessarily mean everything has ‘gone right’. On some occasions, an apparent problem was identified as being closer to a university agenda than to a student agenda.
Elements of this research suggested a diminishment of professional autonomy amongst academics. Driven by managers and league tables, questions of ‘improvement’ to programmes were initiated from outside those programmes. The notion of ‘continuous improvement’, now commonplace in academic departments, disregards the complexity of educational issues. The responses that are required to ‘problematic’ scores are also impoverished in educational terms, given the sometimes dubious and uncertain validity of the data and the speed with which people are expected to respond. This focus on immediate ‘solutions’ to apparent problems could be seen as a further diminishment of professional values and practices, inasmuch as it contradicts (and perhaps militates against) the considered development of programmes by those leading them.
Academics reported that ‘poor’ NSS scores come back onto the table “again and again and again”, funnelling the impact of those scores onto the people concerned (Hey, 2011) and increasing their visibility. As Shore (2008) puts it: “league tables simultaneously produce winners and losers, and the ‘policy of naming and shaming failing institutions has become an annual ritual in humiliation”. In these, and other ways, neoliberalism comes to ‘inhabit’ us – it is ‘out there’ and ‘in here’, in Peck and Tickell’s (2002) terms. In contrast to students’ attitudes to the survey, academics reported a keen awareness of, and preoccupation with, the survey and its effects. This echoes other work in this area: “In the immediate aftermath of the publication of results one manager saw his role as nothing to do with ‘the actual results’ which ‘comes later’ but rather in dealing with the ‘terrible weight’ and emotion that comes with receiving the NSS results” (Sabri, 2013: 5).
For a copy of the full report please email Jo Frankham (email@example.com)
Hey, V. (2011) Affective asymmetries: academics, austerity and the mis/recognition of emotion, Contemporary Social Science: Journal of the Academy of Social Sciences, 6 (2): 207-222.
Peck, J. and Tickell, A. (2002) Neoliberalizing Space, Antipode, 34 (3): 380-404.
Sabri, D. (2013) Student Evaluations of Teaching as ‘Fact-Totems’: The Case of the UK National Student Survey, Sociological Research Online, 18 (4).
Shore, C. (2008) Audit culture and illiberal governance: Universities and the politics of accountability, Anthropological Theory, 8 (3): 278-298.