
The launch of ChatGPT in November 2022 marked a potential turning point in higher education. As a prime example of generative AI, capable of creating content and solving problems in ways previously exclusive to humans, ChatGPT promised to fundamentally change how knowledge is acquired, processed and shared. However, the extent and depth of its impact in academia are debatable. While some see generative AI as a revolutionary force for positive change (Leo, 2023), others have questioned whether the positive impacts outweigh the drawbacks (Leaver & Srdarov, 2023).

Reflecting on the changes that have occurred (or not) over the past year raises important questions about the future, and the role that such tools will play in the learning and teaching journeys of university staff and students.

The Skynet effect

While there are instances of generative AI being effectively implemented in UK higher education settings, such as Advance HE’s Authentic Assessment in the era of AI project, these remain exceptions rather than the rule. The dominant narrative in many universities is not about innovative integration but about how to manage and control this new technology. Instead of exploring productive uses of generative AI to enhance teaching and learning, the community has mostly been caught up in mitigating perceived risks and uncertainties (Sætra, 2023).

For many staff and students, the introduction of generative AI in higher education has generated a wave of confusion and doubt, overshadowing any potential benefits. This phenomenon is partly due to what I like to call ‘The Skynet effect’ – an exaggerated, often fear-driven response to new AI technologies, reminiscent of the fictional, malevolent AI from the Terminator film series. This effect is characterised by a focus on worst-case scenarios and a tendency to view AI as a potential threat, rather than a tool for progress.

This Skynet effect has fostered a cautious and often sceptical attitude towards generative AI in higher education, a narrative that has been significantly influenced by media portrayals (Roe & Perkins, 2023). The key challenge ahead is to transition from these exaggerated apprehensions and instead focus on how these technologies can be responsibly and effectively integrated into existing higher educational frameworks.
Addressing ethical and practical challenges

As with any major technological advancement, the integration of ChatGPT and other generative AI tools in higher education comes with its set of ethical and practical challenges. A significant concern is the widening digital divide (Illingworth, 2023a). As these technologies become more prevalent in academia, there is a risk of exacerbating existing inequalities (Freeman, 2024). Students with limited access to advanced technology could find themselves at a disadvantage, potentially widening the gap in educational opportunities and outcomes.

Another major issue is the inherent biases present in AI models like ChatGPT (Illingworth, 2023b). These biases can inadvertently influence the curriculum, hindering efforts to diversify and provide an inclusive educational experience. For instance, if these AI tools are not carefully monitored and calibrated, they may reinforce existing stereotypes (Alenichev et al., 2023) or overlook important cultural and contextual nuances in educational content.

Future potential

The ability of generative AI to offer personalised learning experiences holds the promise of a more inclusive and adaptable educational environment. For example, sophisticated translation technologies (Wang, 2023) can adapt course content in real time, enabling students from diverse linguistic backgrounds to engage with the material in their native or preferred language. Similarly, generative AI can significantly enhance personalised learning by offering real-time feedback and promoting student engagement (Bahroun et al., 2023). However, realising this potential requires a balanced and informed approach that navigates the challenges while embracing the opportunities presented by these technologies.

To harness the transformative power of generative AI in higher education, focus should be on:

  1. Enhancing digital accessibility. Improving digital infrastructure is essential for equal access to generative AI technologies for all students and staff. This includes hardware, software, reliable internet and digital literacy programmes.
  2. Monitoring and correcting biases. Establishing processes to identify and mitigate biases in generative AI models is crucial to maintain curriculum integrity and inclusivity.
  3. Promoting ethical usage and literacy. Educating students and staff about the capabilities and limitations of generative AI is important. This includes ethical considerations and responsible use in academic work and broader societal implications.

By addressing these areas, the higher education sector can leverage the benefits of generative AI, ensuring it serves as a catalyst for inclusive, innovative and ethical learning and teaching practices. The journey may be complex, but a strategic and thoughtful approach can shift the narrative from cautionary to constructive, enabling responsible and innovative enhancement of higher education through generative AI.


References

Alenichev, A., Kingori, P., & Grietens, K. P. (2023). Reflections before the storm: the AI reproduction of biased imagery in global health visuals. Lancet Global Health, 11(10). https://doi.org/10.1016/S2214-109X(23)00329-7

Bahroun, Z., Anane, C., Ahmed, V., & Zacca, A. (2023). Transforming education: A comprehensive review of generative artificial intelligence in educational settings through bibliometric and content analysis. Sustainability, 15(17), 12983. https://doi.org/10.3390/su151712983

Freeman, J. (2024). Provide or punish? Students’ views on generative AI in higher education [Policy brief]. Higher Education Policy Institute. https://www.hepi.ac.uk/2024/02/01/provide-or-punish-students-views-on-generative-ai-in-higher-education/ 

Illingworth, S. (2023a, May 18). If AI is to become a key tool in education, access has to be equal. The Conversation. https://theconversation.com/if-ai-is-to-become-a-key-tool-in-education-access-has-to-be-equal-204156 

Illingworth, S. (2023b, February 20). How AI could undermine diversity in the curriculum. Wonkhe Blog. https://wonkhe.com/blogs/how-ai-could-undermine-diversity-in-the-curriculum/

Leaver, T., & Srdarov, S. (2023). ChatGPT isn’t magic: The hype and hypocrisy of generative artificial intelligence (AI) rhetoric. M/C Journal, 26(5). https://doi.org/10.5204/mcj.3004  

Leo, U. (2023, July 21). Generative AI should mark the end of a failed war on student academic misconduct. LSE Blog. https://blogs.lse.ac.uk/impactofsocialsciences/2023/07/21/generative-ai-should-mark-the-end-of-a-failed-war-on-student-academic-misconduct/  

Roe, J., & Perkins, M. (2023). ‘What they’re not telling you about ChatGPT’: Exploring the discourse of AI in UK news media headlines. Humanities and Social Sciences Communications, 10, 753. https://doi.org/10.1057/s41599-023-02282-w

Sætra, H. S. (2023). Generative AI: Here to stay, but for good? Technology in Society, 75, 102372. https://doi.org/10.1016/j.techsoc.2023.102372

Wang, Y. (2023). Artificial intelligence technologies in college English translation teaching. Journal of Psycholinguistic Research, 52, 1525–1544. https://doi.org/10.1007/s10936-023-09960-5