Academics are increasingly warned that cheating assisted by artificial intelligence (AI) could pose new challenges to the world of learning and teaching (Cotton et al., 2023). To understand the future of AI in education, we suggest academics remember yesterday's pedagogical threat: essay mills.
As two academics, we explore the connections between these services and argue that if essay mills have anything to teach us, it is that responding to new technologies with regulation is not the answer.
In the UK there was a strong push to regulate essay mills – and it didn’t work. Essay mills were made illegal by the Skills and Post-16 Education Act in April 2022, but contract cheating persists due to a loophole or legal grey zone.
While it is illegal to advertise and sell these services, it is not illegal to buy them. Further, it is only illegal to provide these services in the UK, and since many of the biggest contract cheating websites are American, the ban cannot adequately regulate contract cheating (Lancaster & Clarke, 2014).
Many universities worldwide use software called Turnitin to detect plagiarism. What most university staff do not know is that academic institutions are not the only ones accessing this software.
To our surprise, any private company can purchase access to Turnitin and what a company does with this software is up to them. According to an interview we held with a ‘Thought Leadership Specialist’ at Turnitin, ‘the anti-plagiarism software is not restricted to academic institutions, any private company can acquire it’.
Many essay mills use this very software to facilitate cheating: alongside every purchased essay, the contract cheating business supplies a Turnitin report confirming that the product will pass a plagiarism check.
‘The idea of buying and selling academic work predates the web. What does seem new, however, is the increased commercialisation and internationalisation of contract cheating (despite measures to stop it).’
The AI frontier
OpenAI's free online ChatGPT service, which generates text in response to prompts from the user, has made headlines since its public release in November 2022. The text-generating AI produces fluently written passages in seconds, drawing on patterns learned from the vast body of text on which it was trained.
A recent Guardian article suggests ‘academics have generated responses to exam queries that they say would result in full marks if submitted by an undergraduate’. We put this to the test by running one of our essay questions through ChatGPT. In a second-year education studies course, students are asked to write 2,000 words on the following:
Freire states ‘Pedagogy is not ideologically neutral’ (1970). Start by discussing what Freire means by this statement and why it is important to critical pedagogy. Then, explore how either A) The Black Curriculum or B) Canadian Reconciliation Curriculum relates to Freire’s argument.
What ChatGPT generates is relevant and well expressed, as this excerpt shows:
…[t]he way we educate and the subjects we choose to focus on are not neutral, but instead reflect the values and ideologies of the people in power. This is significant in the context of critical pedagogy because it highlights the importance of recognizing and challenging dominant power structures and ideologies in the education system.
At the start of 2023, new classifier tools are being released that have been trained to distinguish text written by a person from text generated by a variety of AI systems. It seems likely that software like Turnitin, or new software used by universities, will employ these classifiers to prevent the misuse of AI. It is equally likely that AI systems will adapt to evade them, resulting in an endless regulatory game of cat and mouse (Xiao et al., 2022).
Reassessing our educational environments?
There is a temptation to approach essay mills and AI as regulation problems: that we need more security to protect academic integrity and harsher punitive measures to prevent plagiarism (Anders, 2023). Both the Skills and Post-16 Education Act and AI classifier tools, we suggest, seem to resist looking inwards at our own institutional practices and pedagogies. Treating AI as a threat also forecloses the opportunities it may afford (Fyfe, 2022).
As we see with essay mills, by reducing academic integrity to a security issue we unwittingly sustain a market for private companies, some churning out plagiarism bait and others seeking to police them. Rather than outsource this issue to private security companies we wonder how we might respond to this as a structural and educative issue.
We contend that contract and AI-assisted cheating is indeed a symptom of a deeper problem: a market-driven university model that views students as consumers and staff as service providers in a depersonalised learning environment, resulting in overwork, anxiety and disconnection. Addressing these issues will be the true challenge.
Anders, B. A. (2023). Is using ChatGPT cheating, plagiarism, both, neither, or forward thinking? Patterns, 4(3). https://doi.org/10.1016/j.patter.2023.100694
Cotton, D. R., Cotton, P. A., & Shipway, J. R. (2023). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International. https://doi.org/10.1080/14703297.2023.2190148
Fyfe, P. (2022). How to cheat on your final paper: Assigning AI for student writing. AI & Society. https://doi.org/10.1007/s00146-022-01397-z
Lancaster, T., & Clarke, R. (2014, July). An observational analysis of the range and extent of contract cheating from online courses found on agency websites. In 2014 Eighth International Conference on Complex, Intelligent and Software Intensive Systems (pp. 56–63). IEEE.
Xiao, Y., Chatterjee, S., & Gehringer, E. (2022, November). A new era of plagiarism: The danger of cheating using AI. In 2022 20th International Conference on Information Technology Based Higher Education and Training (ITHET) (pp. 1–6). IEEE.