Blog post
Part of series: Artificial intelligence in educational research and practice

Rethinking artificial intelligence through a relational lens

Margaret Bearman, Research Professor at Deakin University

Artificial intelligence (AI) prompts an existential crisis for universities (Bearman et al., 2022), stoking concerns that we may lose the human touch or that teachers and institutions may be replaced by AI. Many educational commentators, by contrast, suggest we should embrace these new technologies, insisting that AI will be another fabulous tool for educators and students. While human versus machine is an old debate – one that could usefully be reset (Bayne, 2015) – generative AI technologies such as ChatGPT have prompted a recent resurgence of these binary perspectives.

The arrival of ChatGPT and similar large language models feels like a game-changer in how society thinks about AI. These generative AIs have an extraordinary ability to statistically synthesise large amounts of text and present the results in a coherent and often dialogic way. For many educators and students, ChatGPT makes tangible the opportunities and challenges presented by AI.

So how can educators, institutions and students move beyond fear and hype? Amid the concerns and excitement, it is important to look past the binaries. In a recent article written with Rola Ajjawi, we note that AI is often compared to a ‘black box’ because its outputs are unpredictable, even to its developers (Bearman & Ajjawi, 2023). Thus there are frequent calls for ‘explainable’ AI, or for improving what students (and we ourselves) know about a particular technology. But we think that is not the whole story. We write: ‘AI resembles many other aspects of our complex, socially mediated world in that it can never be fully explainable or transparent.’ We therefore argue that we need pedagogic strategies that can help our students learn to work with AI.

We conceptualise an ‘AI interaction’ as a useful starting point (Bearman & Ajjawi, 2023). An AI interaction occurs when a person works, in a particular time and place, with a technology whose outputs cannot be traced. This definition allows a shift away from considering AI as a neutral tool or a deterministic force, towards a contextualised relationship. This kind of thinking moves the emphasis from ‘what AI can do for us’ and ‘what AI is doing to us’ to ‘what we are doing together’. It suggests, for example, that ChatGPT is always situated in the circumstances of its use: whether with an expert, a young child or a university student.

Working with AI can be thought of as a dynamic, in-the-moment experience, rather than a singular, static position. Thus, our students can learn to assess the trustworthiness of AI interactions rather than take a fixed global view of AI. Helping students understand what ‘good’ looks like or developing their ‘evaluative judgement’ (Tai et al., 2018) becomes an increasingly important pedagogical approach for working with AI.

Our proposal also exposes trust as a key emotional dimension of working with AI. We write: ‘Both trust and distrust are powerful affective prompts … but nor are they sufficient in themselves.’ We suggest a person should pay attention to what they are feeling – examining their own doubts and certainties when working with AI (Bearman & Ajjawi, 2023) – to help guide their judgements about seeking evidence to confirm AI outputs. We contend that emotions are often overlooked with respect to technology, yet they play a significant role in how technologies are incorporated into our day-to-day lives.

These insights help frame how universities – and other educational institutions – can respond to AI. The notion of an interaction emphasises context: it allows us to note that, in one moment, a person and AI working together might lead to generative learning but that, at another time and place, an AI interaction might be more instrumental or even harmful. We should therefore employ pedagogic strategies that help students distinguish between the two.

This blog post is based on the article ‘Learning to work with the black box: Pedagogy for a world with artificial intelligence’ by Margaret Bearman and Rola Ajjawi published in the British Journal of Educational Technology.


References

Bayne, S. (2015). Teacherbot: Interventions in automated teaching. Teaching in Higher Education, 20(4), 455–467. https://doi.org/10.1080/13562517.2015.1020783

Bearman, M., & Ajjawi, R. (2023). Learning to work with the black box: Pedagogy for a world with artificial intelligence. British Journal of Educational Technology, 54(5), 1160–1173. https://doi.org/10.1111/bjet.13337

Bearman, M., Ryan, J., & Ajjawi, R. (2022). Discourses of artificial intelligence in higher education: A critical literature review. Higher Education, 86, 369–385. https://doi.org/10.1007/s10734-022-00937-2

Tai, J., Ajjawi, R., Boud, D., Dawson, P., & Panadero, E. (2018). Developing evaluative judgement: Enabling students to make decisions about the quality of work. Higher Education, 76, 467–481. https://doi.org/10.1007/s10734-017-0220-3