Part of series: Artificial intelligence in educational research and practice

Artificial intelligence: What questions should we be asking?

Darragh Woods, MSc Learning and Teaching student at the University of Oxford

The Russell Group (2023) recently published five guiding principles advocating the use of artificial intelligence (AI) in higher education. The document uses the ‘ethical use of generative AI’ as a vehicle to promote AI in teaching, learning and assessment. But what is an ‘ethical’ use of AI in higher – and, by extension, secondary – education?

I have just submitted the first assignment for my MSc programme: an investigation of the role of technology in year 12 Geography teaching. I intend to publish those findings once the dissertation has been through the university’s assessment process, but AI has been in the news since submission. This blog post is therefore an opportunity for me to speak, as a student, to other researchers. I would argue that the position of educational institutions on AI use in teaching and assessment needs to be clarified in far more detail.

AI is already being used in education. Consider a hypothetical example: as a student, I could ask an AI to help me understand an idea. By the end of the conversation, I might comprehend the subject better. This seems acceptable.

However, if the AI writes a section of coursework, its use becomes problematic. One may now begin to ask: at what point is the work no longer the student’s?

On two separate occasions, I asked ChatGPT to write an introductory paragraph for an essay on London. The first line of each response is below:

‘London, the bustling capital of England, stands as a vibrant metropolis that seamlessly blends rich history with modern dynamism…’

‘London, the vibrant capital of the United Kingdom, is a city that effortlessly blends history, culture, and innovation…’

In effect, the same AI software gave two different answers. This leads me to ask: how can we tell whether AI wrote all (or part) of an assignment? Russell Group universities (2023, p. 3) pledge to ‘ensure academic rigour and integrity is upheld’. And so they should. But how?

Plagiarism detection software claims it can detect AI involvement; even so, Turnitin™ has produced an Ethical AI Checklist advising students to retain ‘artifacts’ as evidence of human authorship. Whereas proving plagiarism involves finding the source, this may not be possible with AI-generated text. That leads to my next question: how can machine intervention be (dis)proven?

In answer to the question in this blog post’s title: there are many. We need to decide where AI – or rather, where particular applications of AI – sit on the spectrum from ‘advocated’ to ‘outlawed’. To preserve ethical teaching and assessment in every setting, the academic community ought to pose the questions that need answering now.

AI use in assessment is being considered in the literature, but the discussion needs to include all stakeholders. Digital tools have presented education with both opportunities and challenges before, and emerging technologies such as AI require further research (Salas-Pilco et al., 2022). Indeed, some universities have begun to consult students on their initial views (Guy, 2023; Attewell, 2023), but this discussion needs to extend further, from higher education into secondary education.

AI is here to stay. Richardson (2023) rightly outlined how AI can improve the reliability of written assessments – a notion we should endorse. However, there is a thundering silence around which questions about AI need answering. Here I have posed only a handful:

  • What is the ‘ethical’ use of AI in education?
  • How much AI involvement does it take for work to no longer be the student’s?
  • How will academic honesty be preserved?
  • How can machine intervention be (dis)proven?

There are many more. We must focus on asking questions about AI and assessment that pragmatically matter to students, researchers and professionals – even if the answers are unknown.


References

Alhamad, K. (2023, May 12). Technology to support reading engagement: Should schools include augmented reality (AR) books on their shelves? BERA Blog. https://www.bera.ac.uk/blog/technology-to-support-reading-engagement-should-schools-include-augmented-reality-ar-books-on-their-shelves

Attewell, S. (2023, June 14). Exploring the role of generative AI in assessments: A student perspective. National Centre for AI Blog. https://nationalcentreforai.jiscinvolve.org/wp/2023/06/14/exploring-the-role-of-generative-ai-in-assessments-a-student-perspective/

Guy, M. (2023, April 21). Generative AI: Lifeline for students or threat to traditional assessments. National Centre for AI Blog. https://nationalcentreforai.jiscinvolve.org/wp/2023/04/21/generative-ai-lifeline-for-students-or-threat-to-traditional-assessment/

Richardson, M. (2023, May 22). The future of artificially intelligent assessment. BERA Blog. https://www.bera.ac.uk/blog/the-future-of-the-artificially-intelligent-examination

Russell Group. (2023). Russell Group principles on the use of generative AI tools in education. https://russellgroup.ac.uk/media/6137/rg_ai_principles-final.pdf

Salas-Pilco, S. Z., Yang, Y., & van Aalst, J. (2022). Emerging technologies for diverse and inclusive education from a sociocultural perspective. British Journal of Educational Technology, 53(6), 1483–1485. https://doi.org/10.1111/bjet.13279