Blog post · Part of series: Artificial intelligence in educational research and practice

The broken pillar: AI for feedback generation and the erosion of students’ trust

Mariia Tishenina, PhD student at Edge Hill University

Learning unfolds as a dialogue between a student and a more knowledgeable other – in higher education, a lecturer – a dialogue marked by mutual respect, responsibility and trust as the key pillars of the process. For many higher education courses, the concluding remark – the pinnacle – takes the form of summative assignment feedback. In line with a humanistic approach, this feedback ought to be personalised, foster a sense of students’ personal identity, and provide feedforward towards realistic future goals (Hamachek, 1977). Yet it must also be genuine and address the student as a whole person, rather than focusing solely on the quality of the work they have submitted. Otherwise, the congenial learning experience is disrupted, potentially with a lasting negative effect on students’ educational motivation. The use of AI for feedback generation thus appears to pose a far more serious threat to education than plagiarism concerns: although the feedback produced is personalised, it is impersonal by nature, because there is no person behind it.

The discourse around students’ use of generative AI has come full circle, from initial concerns about plagiarism to the acknowledgment that students need to be taught how to integrate AI ethically into their workflow (see for example Lukeš et al., 2023). When GenAI is applied to feedback generation, however, most articles and blog posts focus primarily on one side of the coin – the benefits it offers (AlBadarin et al., 2023; Li et al., 2023). Whether a product or a source of such representation, lecturers tend to have a significantly more positive outlook on the use of AI for feedback generation than students do (Barrett & Pack, 2023). Just as for students, generative AI eases lecturers’ participation in the educational process and is casually positioned as a time-saver, one that supposedly also benefits students through a shorter feedback turnaround. So why is it wrong for students to use AI tools when submitting for marking what is supposed to be the outcome of their own intellectual engagement with the task, yet a welcome practice for lecturers to do the same when engaging with what students submit?


The learning and teaching process is based on mutual trust, with learners and teachers investing their time and effort to engage fully and authentically with each other over a given topic. There is an unspoken social contract that they will do so. If we expect students to engage genuinely with the tasks, we, as educators, owe them our own genuine and thorough engagement with the work they produce. Students invest considerable resources in their submissions, and if the marker does not pay them due attention, this can be perceived as negligence. It is therefore only a matter of time before students start to doubt the authenticity of the feedback they receive, much as lecturers do now when reading students’ submissions. That authenticity is questioned with an increased level of scrutiny and a pinch of suspicion as students pick up on paragraph and sentence structure, overly flowery language, overuse of linkers, and so on. The more experienced students become with AI text generation, the more trust issues they may have when they receive feedback that follows a structure and wording similar to that of ChatGPT, Bing or Bard.

Most importantly, the lecturer might not even be using AI, but the seeds of doubt can nevertheless erode trust – one of the main pillars that hold students accountable for their authentic participation in the learning process. After all, the best feedback strategies (uptake, sandwich feedback, and so on) are precisely what AI excels at.

Below are three principal questions that professional educational communities need to address now:

  1. How do we maintain professional integrity in the eyes of students in the age of generative AI?
  2. How do we ensure that the feedback we provide is not only personalised but also personal, and that it is always perceived as such?
  3. Will upholding this humanistic approach lessen or increase lecturers’ workload?

These considerations are urgent and of the utmost importance, for once trust is broken, it is not easy to mend.


References

AlBadarin, Y., Tukiainen, M., Saqr, M., & Pope, N. (2023). A systematic literature review of empirical research on ChatGPT in education. Social Science Research Network. https://doi.org/10.2139/ssrn.4562771

Barrett, A., & Pack, A. (2023). Not quite eye to A.I.: Student and teacher perspectives on the use of generative artificial intelligence in the writing process. International Journal of Educational Technology in Higher Education, 20(59). https://doi.org/10.1186/s41239-023-00427-0

Hamachek, D. E. (1977). Humanistic psychology: Theoretical and philosophical framework and implications for teaching. In D. J. Treffinger, J. Davis, & R. E. Ripple (Eds.), Handbook on teaching educational psychology. Academic Press.

Li, L., Ma, Z., Fan, L., Lee, S., Yu, H., & Hemphill, L. (2023). ChatGPT in education: A discourse analysis of worries and concerns on social media. Education and Information Technologies. https://doi.org/10.1007/s10639-023-12256-9

Lukeš, D., Laurent, X., Pritchard, J., Sharpe, R., & Walker, C. (2023). Beyond ChatGPT. A report on the state of generative AI in academic practice for autumn 2023. Centre for Teaching and Learning, University of Oxford. https://www.ctl.ox.ac.uk/beyond-chatgpt