Blog post

Let’s not blame students for the shortcomings of assessment strategies of universities that turn a blind eye to artificial intelligence: A pre-crisis warning

Fawad Khaleel, Head of Global Online at Edinburgh Napier University, and Patrick Harte, Head of Postgraduate Programmes at Edinburgh Napier University

This blog post highlights the evolution of artificial intelligence (AI) and argues that the higher education (HE) sector needs to adjust its assessment strategies and academic integrity policies to reflect these changes. Current assessment strategies and academic integrity policies are increasingly impacting on student experiences, with the number of cases rising sharply in Scottish universities, as detailed in Table 1.

Table 1: Academic integrity cases processed between 2020 and 2022

| University | Cases for Academic Year 2020–21 | Cases for Academic Year 2021–22 |
| --- | --- | --- |
| University of Stirling | | |
| University of Glasgow | | |
| Heriot-Watt University | | |
| Glasgow Caledonian University | | |
| University of Aberdeen | | |
| University of Strathclyde | | |
| Abertay University | | 184* (2022/23) |

* Data for 2021/22 was redacted by Abertay University in its FOI response; the figure shown is for 2022/23.

Our research, based on a Freedom of Information inquiry across 16 Scottish universities, suggests that the investigation of academic dishonesty cases (ultimately through oral examination) costs an institution 2,697 hours per 1,000 cases processed (disaggregated as 933 hours of academic time and 1,764 hours of administrative time) (see Khaleel et al., 2024). The cost of this staff time alone should cause alarm, but so far it remains a hidden cost of academic dishonesty.

There is a significant body of academic discourse on the technological developments in generative AI (see Bin-Nashwan et al., 2023), some focusing on AI’s adaptability to learning, teaching and assessment (see Baidoo-Anu & Owusu Ansah, 2023), with others concentrating on threats from AI to academic integrity (see Sullivan et al., 2023). Yet while the impact of student use of AI is extensively debated, assessment strategies themselves are not changing. In the seven Scottish and 24 English universities we reviewed (2021–24), institutional assessment strategies were dominated by a logic based on word count.

Word count serves as the academic proxy for students’ depth of critical thinking and as the instrument which distinguishes the level of study, the credit value of the unit and the weighting of assessment (Cheetham et al., 2023). This logic must be questioned, as subject experts within Business Schools, for instance, do not understand the potential of AI, its future trajectory, or its accessibility beyond the now-generic ChatGPT. Subject experts are experts in their respective disciplines, not AI nor its exponential rate of development. This deficit in technological understanding results in untenable optimism, indefensible pessimism, or a completely rational confusion within HE communities.

‘Subject experts are experts in their respective disciplines, not AI nor its exponential rate of development. This deficit in technological understanding results in untenable optimism, indefensible pessimism, or a completely rational confusion within HE communities.’

Many studies choose to blame students (see Parnther, 2022), particularly international students (see Hayes & Introna, 2010), for attempting to game the system when they are simply using the most contemporary resources available to them – the ‘new Google’. However, it is reasonable to suggest that increases in breaches of academic integrity are not a matter of student misconduct but an issue founded on dated assessment design, obsolete assessment strategies (see Shepard, 2000) and a quality logic using the doctrine of precedent to regulate learning, teaching and assessment (LTA) practice (Taras, 2010). Archaic perspectives on academic integrity compound the issue.

Many higher education institutions (HEIs) have academic integrity processes based on Turnitin similarity reporting and plagiarism policing (see Belli et al., 2020). These do not consider the different capacities in which students engage with AI. Acceptable examples of this engagement could include the acknowledged use of AI to generate contextual research materials when drafting a report, to structure or plan an essay, or to generate material in unmodified form.

We recommend that UK HEIs find an effective forum in which to collaborate and share good practice. At institutional level, HEIs need to co-construct clear and coherent policy and guidelines on acceptable uses of AI with students and academics. Students need clearly defined boundaries within which AI can be used productively and authentically – for instance, traffic light systems with an amber for ‘maybe’ simply cause ambiguity and confusion. To this end, we suggest use of a coversheet which includes a reflective self-reporting section to present and report the extent to which AI is utilised – as currently operated by Newcastle University, Northampton University and the University of Birmingham, based on the templates designed by UCL.

This self-reporting initiates reflection and reflexivity (Feucht et al., 2017) and enables deep understanding of learning processes and experiences vital for students’ personal and professional development. The self-reporting requirement may also improve active participation and engagement with the ethical use of AI. In addition, the data collected through self-reporting may reveal trends and patterns that allow for more tailored and effective interventions.


References

Baidoo-Anu, D., & Owusu Ansah, L. (2023). Education in the era of generative artificial intelligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning. Journal of AI, 7(1), 52–62.

Belli, S., Raventós, C. L., & Guarda, T. (2020). Plagiarism detection in the classroom: Honesty and trust through the Urkund and Turnitin software. In Á. Rocha, C. Ferrás, C. Montenegro Marin, & V. Medina García, (Eds.), Information technology and systems. ICITS 2020. Advances in intelligent systems and computing. Springer.

Bin-Nashwan, S. A., Sadallah, M., & Bouterra, M. (2023). Use of ChatGPT in academia: Academic integrity hangs in the balance. Technology in Society, 75, 102370.

Cheetham, J., Bunyan, N., & Samaca Uscategui, S. (2023). Calculating student assessment workloads and equivalences. Centre for Innovation in Education.

Feucht, F., Lunn Brownlee, J., & Schraw, G. (2017). Moving beyond reflection: Reflexivity and epistemic cognition in teaching and teacher education. Educational Psychologist, 52(4), 234–241.

Hayes, N., & Introna, L. D. (2010). Cultural values, plagiarism, and fairness: When plagiarism gets in the way of learning. Ethics & Behavior, 15(3), 213–231.

Khaleel, F., Harte, P., & Borthwick Saddler, S. (2024, March 1). The financial impact of AI on institutions through breaches of academic integrity. Higher Education Policy Institute blog.

Parnther, C. (2022). International students and academic misconduct: Considering culture, community, and context. Journal of College and Character, 23(1).

Shepard, L. A. (2000). The role of assessment in a learning culture. Educational Researcher, 29(7), 4–14.

Sullivan, M., Kelly, A., & Mclaughlin, P. (2023). ChatGPT in higher education: Considerations for academic integrity and student learning. Journal of Applied Learning & Teaching, 6(1).

Taras, M. (2010). Assessment for learning: Understanding theory to improve practice. Journal of Further and Higher Education, 31(4), 363–371.