
Much hype around artificial intelligence in education (AIE) is crystallised in the language. This was demonstrated emphatically in the ‘discombobulated’ tones of a working paper from Hamilton, Wiliam and Hattie that makes speculative assumptions about the imminence of artificial general intelligence the basis of some misplaced recommendations. One such recommendation is for global regulation of artificial intelligence (AI): wishful thinking, perhaps, but characteristic of how we currently respond to AIE being framed as so disruptive. This blog post aims to push back on the use of hype in research around AIE, while acknowledging its contribution to renewed critical discourse around education’s general condition and future purpose.

Hype cascades across media and enters the domain of thinking around AI as a phenomenon that will inevitably disrupt, forcing us to change our ways of working, learning and even being. Perpetuating hype through discourse around AI enables dominant knowledge to be created (Nemorin et al., 2023), particularly that pushed by corporations with AI to sell. Hype can be distorting and detrimental. While we speculate on the disappearance of jobs in the future, we can be distracted from more concrete, present issues, such as the bias and discrimination built into algorithms, which can perpetuate inequity.

Predictions about job losses or workers being replaced by robotics help to push a degree of possibility into the Overton window of just what AI is capable of. Thus, even when the hype is unfounded, the public consciousness is readied to accept ever wilder speculation. Avis (2021) observes the tendency in AI discourse to make advancements seem inevitable, positing that they are in fact imaginary or ideological constructions. Assertions that AI can help to personalise learning through data analytics, for instance, are first without strong empirical evidence (Bartoletti, 2022) and second based on a reductive understanding of what learning comprises. Reinforcing these assertions conveniently serves corporations seeking brand awareness in AI development and the AIE marketplace.


The reductive definition of learning as singular units that can be transferred, measured and more easily automated was in the minds of the AI pioneers who gathered at the Dartmouth Workshop in 1956. This event (sponsored by the Rockefeller Foundation) has shaped much of the trajectory of AIE up to now, and it was here that it was proposed that the development of AI ‘is to proceed on the basis that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it’. According to Birhane (2022), such overpromises underestimate the complexity of our intelligence, and they ‘fabricate … narrow facets of human behaviour’, as Selwyn (2022, p. 623) puts it.

Some technologies are accompanied by hype because they can seem transformative at first. The impact on education is then forecast, often on the terms of the hype itself. AI is presently little more than a set of fairly clunky predictive language tools, but the hype should prepare us for arguments from sales folk around the automation of teaching and learning – arguments that rely on meagre definitions and hard selling.

None of this is to say we should not have concerns around AIE – we absolutely must be vigilant (Scott, 2023), as teachers need to confront the uncertainty and complexity of the world today and tomorrow. We might, for instance, resist any hype that personifies chatbots and large language models, however mesmerising it may seem.

Criticality must be grounded in a view of the political and economic contours within which education is increasingly entrenched. At the same time, we can use this hype to address fundamental questions about the purpose of education, as observed by Heimans et al. (2023). As ever, research and teacher education can be sites to contest this, offering richer descriptions of learning as a participatory experience in which knowledge is dynamic and socially constructed, rather than simulated.


References

Avis, J. (2021). Vocational education in the fourth industrial revolution: Education and employment in a post-work age. Springer Nature.

Bartoletti, I. (2022). AI in education: An opportunity riddled with challenges. In W. Holmes & K. Porayska-Pomsta (Eds.), The ethics of artificial intelligence in education (pp. 74–90). Routledge.

Birhane, A. (2022). Automating ambiguity: Challenges and pitfalls of artificial intelligence [Doctoral thesis, University College Dublin]. https://arxiv.org/pdf/2206.04179.pdf

Heimans, S., Biesta, G., Takayama, K., & Kettle, M. (2023). ChatGPT, subjectification, and the purposes and politics of teacher education and its scholarship. Asia-Pacific Journal of Teacher Education, 51(2), 105–112.

Nemorin, S., Vlachidis, A., Ayerakwa, H. M., & Andriotis, P. (2023). AI hyped? A horizon scan of discourse on artificial intelligence in education (AIED) and development. Learning, Media and Technology, 48(1), 38–51.

Scott, H. (2023). ‘Reject all’: Data, drift and digital vigilance. In S. Hayes, M. Jopling, S. Connor & M. Johnson (Eds.), Human data interaction, disadvantage and skills in the community: Enabling cross-sector environments for postdigital inclusion (pp. 285–298). Springer. https://doi.org/10.1007/978-3-031-31875-7_15

Selwyn, N. (2022). The future of AI and education: Some cautionary notes. European Journal of Education, 57(4), 620–631.