The question, ‘What is an appropriate role for Artificial Intelligence (AI)?’, is the subject of much discussion and interest. There is little doubt about AI’s great potential and disruptive nature. However, in many professional areas there are still debates about when and where AI technologies are appropriate for use, or indeed whether they are appropriate at all (Floridi et al., 2018). Education is one of the areas in which AI technologies and their impact are not yet fully understood. In our recent paper published in the British Journal of Educational Technology (Cukurova, Kent, & Luckin, 2019), we argue that a more appropriate role for AI in education may be to provide opportunities for human intelligence augmentation, with AI supporting us in decision-making processes rather than replacing us through automation.
There is an emerging need for investigations into the potential use and impact of AI technologies in education. We need to explore the ability of AI systems to cope with the complex social contexts of education, and to serve learners equally and equitably as appropriate. We also need to study the immediate and long-term unintended consequences of these systems. Whether we will ever attain the level of AI maturity that would enable fully automated systems to become part of everyday educational practice is an interesting research question in and of itself. Perhaps that is why the initial vision of AI in education research was to create systems as perceptive as human teachers (Self, 1998), and why the majority of research focussed on designing autonomous tutoring systems.
However, our study takes a different approach, stemming from an investigation into the specific areas in which machines excel and hold unfair advantages over human cognitive capacities. We propose leaving the pedagogic decision-making in the trusted hands of human teachers, while augmenting them with ‘artificial observers’: automated mechanisms for rich data collection and processing. In our paper we present a case study in the context of debate tutoring, in which we use prediction models to detect candidates’ emotions from audio data, and classification models to increase the transparency of expert tutors’ intuitive decision-making processes for advanced reflection and feedback opportunities. We create visualisations of significant aspects of debate tutoring, such as the emotional arousal of the candidates, in order to provide opportunities for better feedback and reflection. AI in our system was used to predict emotions and traits from the audio data, but this output was then fed into a transparent classification process for human interpretation.
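The two-stage design described above, an automated observer whose output feeds a transparent classification step for human interpretation, can be sketched in a few lines. This is a minimal, hypothetical illustration rather than the study’s implementation: the windowed RMS-energy proxy for arousal, the threshold values, and the labels are all stand-ins for the trained prediction and classification models used in the paper.

```python
import math

# Stage 1, the 'artificial observer': derive a simple arousal proxy
# from raw audio samples. Here we use windowed RMS energy as an
# illustrative stand-in for a trained emotion-prediction model.
def arousal_proxy(samples, window=4):
    """Return one RMS energy value per fixed-size window of samples."""
    scores = []
    for start in range(0, len(samples) - window + 1, window):
        chunk = samples[start:start + window]
        rms = math.sqrt(sum(x * x for x in chunk) / window)
        scores.append(rms)
    return scores

# Stage 2, transparent classification: map the observer's output onto
# human-readable labels via explicit thresholds, so a tutor can see
# exactly why a given segment was flagged for reflection. The
# thresholds and label names are purely illustrative.
def label_segments(scores, low=0.2, high=0.6):
    """Translate numeric arousal scores into interpretable labels."""
    labels = []
    for score in scores:
        if score < low:
            labels.append("calm")
        elif score < high:
            labels.append("engaged")
        else:
            labels.append("aroused")
    return labels
```

The point of the second stage is that the decision rule stays legible to the human tutor: the machine contributes rich, continuous observation, while the interpretation handed to the tutor is expressed in terms they can inspect and contest.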
Such non-autonomous human-AI hybrid systems might be valuable for educational research, where the ultimate purpose is to improve education rather than to advance the state of the art in AI. Although in AI research the purpose is often to externalise human cognition and create machines that can mimic or replicate human behaviours, AI can also be used to internalise or extend human cognition (Vold & Hernandez-Orallo, 2019). In education, AI systems should be considered on a continuum with regard to the extent to which they are decoupled from teachers and learners, rather than solely as a means of fully automating their behaviours (Cukurova, 2019). In our paper, through the analysis of audio and psychometric data collected in situ, we provided opportunities for expert tutors and candidates to better reflect on the significant aspects of debate tutoring. Our aim was to exemplify a potential approach to utilising AI in the service of human decision-making, rather than using AI to fully automate the decision-making process itself. We think that identifying the synergies by which machines are best suited to complement human cognition is a significant research area that can further reinforce this ‘subservient’ role of AI in education.
This blog post is based on the article ‘Artificial intelligence and multimodal data in the service of human decision‐making: A case study in debate tutoring’ by Mutlu Cukurova, Carmel Kent and Rosemary Luckin.
It is published in the British Journal of Educational Technology, and is free-to-view for a limited period, courtesy of the journal’s publisher, Wiley.
Cukurova, M., Kent, C., & Luckin, R. (2019). Artificial Intelligence and Multimodal Data in the Service of Human Decision-making: A Case Study in Debate Tutoring. British Journal of Educational Technology. Advance online publication. https://onlinelibrary.wiley.com/doi/10.1111/bjet.12829
Cukurova, M. (2019). Learning Analytics as AI Extenders in Education: Multimodal Machine Learning versus Multimodal Learning Analytics. Proceedings of the Artificial Intelligence and Adaptive Education Conference, 1–3, Beijing, China.
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena, E. (2018). AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds and Machines, 28(4), 689–707. https://link.springer.com/article/10.1007/s11023-018-9482-5
Self, J. A. (1998). The defining characteristics of intelligent tutoring systems research: ITSs care, precisely. International Journal of Artificial Intelligence in Education, 10, 350–364.
Vold, K., & Hernandez-Orallo, J. (2019). AI Extenders: The Ethical and Societal Implications of Humans Cognitively Extended by AI. Proceedings of AAAI / ACM Conference on Artificial Intelligence, Ethics, and Society. Retrieved from https://doi.org/10.17863/CAM.36128