
Blog post

What can multimodal data tell us about learning? Opportunities for positioning the learner as the real protagonist in learner-centred design

Michail Giannakos, Professor at Norwegian University of Science and Technology, and Kshitij Sharma, Senior Researcher at Norwegian University of Science and Technology

Contemporary educational technology research utilises data such as keystrokes, log files and clicks to interpret complex learning phenomena. In recent years, the potential of multimodal data to help us understand the world around us, and to interpret the complex learning processes within it, has led learning technology research to develop models that can process information from multiple modalities. Although collecting, interpreting and visualising multimodal data (MMD) has been extremely challenging for researchers, recent advances in data science and artificial intelligence (AI) have boosted the growth of non-invasive, high-frequency MMD collection (Blikstein & Worsley, 2016; Drachsler & Schneider, 2018). Learners generate traces during their interaction with technologies; such interaction is often complex but offers opportunities for collecting rich MMD (Giannakos, Sharma, Pappas, Kostakos, & Velloso, 2019). Insights generated during learner-computer interaction have the potential to help us identify learners’ real needs. Once we know about such needs, we can utilise this knowledge to implement engaging and motivating technologies and pedagogies and improve the learning experience.

‘Once we know about learners’ real needs, we can utilise this knowledge to implement engaging and motivating technologies and pedagogies and improve the learning experience.’

In our recent paper published in the British Journal of Educational Technology, we conducted a systematic literature review of empirical evidence to present an overview of which MMD have been used to inform learning, how they have been used, and in what contexts (Sharma & Giannakos, 2020). The results of the review depict the capabilities of MMD for learning and the ongoing advances and implications that emerge from employing MMD to capture and improve learning. In particular, MMD allow us to gain rich insights into the following teaching and learning processes:

  1. Behavioural trajectories/process: explaining the different behavioural paths of the students while they are learning or solving a problem.
  2. Student feedback: providing feedback that helps students avoid common errors (before they make a mistake) or explains where they went wrong (once they have made one).
  3. Learning outcome: understanding and distinguishing the different levels of learning outcomes.
  4. Learning-task performance: predicting and explaining learning-task performance.
  5. Teacher support: assisting teachers in understanding the students’ behaviour and performance.
  6. Engagement: explaining when and how the students engage with the learning settings/materials/instructions.

Most research on MMD for learning has been conducted in educational domains and contexts that were convenient for the researchers (for instance, using university students). The relative ease of collecting certain data (log, audio, video and facial data, compared with eye-tracking and electroencephalography [EEG]) probably explains the higher number of studies utilising those modalities. Nevertheless, even with such easy-to-collect MMD, we see that the added value is significant (compared with traditional survey-based or purely clickstream data collections), with great potential to contribute to theories about the analysis of human behaviour in learning contexts and to help us achieve our aspirational learning and teaching goals (Cukurova, Giannakos, & Martinez-Maldonado, 2020). For example, an ‘easy-to-obtain’ combination of facial data (webcam), physiological data (smart-watch) and system logs could provide in-depth information about cognitive, affective and behavioural aspects of learners.
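To make the idea of such an ‘easy-to-obtain’ combination concrete, here is a minimal sketch (not the pipeline used in the reviewed studies) of how webcam-based facial affect estimates, smart-watch heart rate and system logs might be aligned on a common timeline for joint analysis. The file names, column names and the 10-second window size are illustrative assumptions.

```python
# Hypothetical example: aligning three "easy-to-obtain" streams -- facial affect
# (webcam), heart rate (smart-watch) and system logs -- into shared time windows.
# All file and column names are assumptions, not taken from the reviewed studies.
import pandas as pd

facial = pd.read_csv("facial_affect.csv")   # assumed columns: timestamp, valence, arousal
physio = pd.read_csv("smartwatch.csv")      # assumed columns: timestamp, heart_rate
logs = pd.read_csv("system_logs.csv")       # assumed columns: timestamp, event

for df in (facial, physio, logs):
    df["timestamp"] = pd.to_datetime(df["timestamp"], unit="s")  # Unix seconds assumed
    df.sort_values("timestamp", inplace=True)

# Resample every stream into common 10-second windows so that the cognitive,
# affective and behavioural signals can be inspected side by side.
window = "10s"
features = pd.concat(
    [
        facial.set_index("timestamp")[["valence", "arousal"]].resample(window).mean(),
        physio.set_index("timestamp")[["heart_rate"]].resample(window).mean(),
        logs.set_index("timestamp")["event"].resample(window).count().rename("n_events"),
    ],
    axis=1,
)
print(features.head())
```

Each row of the resulting table summarises one window of learner activity across modalities; such windows could then be related to behavioural trajectories, engagement or task performance.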

Despite the potential of MMD to position the learner as the real protagonist in pedagogies and technologies that are aware of, and even account for, their needs, MMD research in learning and teaching is context-dependent, requiring customised and sometimes cumbersome methods that cannot easily be reused or standardised. Working towards the modularisation and standardisation of MMD for learning (such as identifying context-independent data features) is therefore a promising avenue. To obtain a holistic picture of learners’ performance, outcomes and behaviour, measurements of attention (eye-tracking), cognition (EEG) and affect (face) need to be combined. Future work should focus not only on designing feedback tools based on MMD, but also on testing those tools’ effectiveness and efficiency.
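As a follow-on illustration of combining attention, cognition and affect measurements, the sketch below feeds one hypothetical feature per modality into a simple classifier of learning outcome. The features, the synthetic data and the model choice are assumptions made for illustration; they are not a standardised feature set from the review.

```python
# Hypothetical example: predicting a binary learning outcome from one feature per
# modality -- attention (eye-tracking), cognition (EEG) and affect (face).
# The data below are synthetic; real studies would extract features from recordings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_learners = 60

X = np.column_stack([
    rng.normal(300, 50, n_learners),    # mean fixation duration in ms (attention proxy)
    rng.normal(0.4, 0.1, n_learners),   # EEG theta/alpha ratio (cognitive load proxy)
    rng.normal(0.0, 1.0, n_learners),   # mean facial valence estimate (affect proxy)
])
y = rng.integers(0, 2, n_learners)      # high vs. low learning outcome (synthetic labels)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", round(scores.mean(), 2))
```

With synthetic labels the accuracy will hover around chance; the point is the shape of the pipeline, in which context-independent features from several modalities are combined before modelling.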

This blog is based on the article ‘Multimodal data capabilities for learning: What can multimodal data tell us about learning?’ by Kshitij Sharma and Michail Giannakos, published in a new special section of the British Journal of Educational Technology on ‘The promise and challenges of multimodal learning analytics’.


References

Blikstein, P., & Worsley, M. (2016). Multimodal learning analytics and education data mining: Using computational technologies to measure complex learning tasks. Journal of Learning Analytics, 3(2), 220–238.

Cukurova, M., Giannakos, M., & Martinez-Maldonado, R. (2020). The promise and challenges of multimodal learning analytics. British Journal of Educational Technology. https://doi.org/10.1111/bjet.13015

Drachsler, H., & Schneider, J. (2018). JCAL special issue on multimodal learning analytics. Journal of Computer Assisted Learning, 34(4), 335–337.

Giannakos, M. N., Sharma, K., Pappas, I. O., Kostakos, V., & Velloso, E. (2019). Multimodal data as a means to understand the learning experience. International Journal of Information Management, 48, 108–119.

Sharma, K., & Giannakos, M. (2020). Multimodal data capabilities for learning: What can multimodal data tell us about learning? British Journal of Educational Technology. https://doi.org/10.1111/bjet.12993
