
Recognising that learning is a social activity, intelligent tutoring systems enable learners to engage in (virtual) face-to-face social interactions with autonomous animated software characters, known as pedagogical agents (PAs). Learners’ socio-emotional responses to PAs have been shown to deliver learning benefits, including increased engagement, motivation and learning achievement (see Johnson & Lester, 2018; Kim & Baylor, 2016).

PAs can provide individually tailored and timely support to each child in the classroom, and potentially offer alternative lesson plans for students with special needs, while letting the teacher track individual progress and identify who needs personal attention. Imagine students debriefing with their personal PA after school; such reflection can have significant benefits for learning retention (Nicholas, Van Bergen, & Richards, 2015).

PAs are particularly suited for roles that involve social simulations and dilemmas such as coping with bullying and learning empathy for refugee classmates. To achieve this, the characters have their own lives, including gender, personality, cultural background and (false) memories.

PAs never feel exhausted and are never short-tempered. They don’t ‘hate’ (or ‘love’) marking assessments. And you can ask them the same question as many times as you like. It’s not that they don’t remember: the PA could tell you exactly how many times and when you asked the question, with precision down to the second. When you finally get the answer right, they can reward you by playing your favourite tune, or letting the teacher know. As it develops intimate knowledge of the learner, a PA can build a long-term relationship with them. But is it necessary, or even ethical, for the PA to remember every question you asked or every answer you got wrong? When should the PA share this information, and with whom: the teacher, peers, parents, partner or employer?

‘Is it necessary, or even ethical, for the pedagogical agent (PA) to remember every question you asked or every answer you got wrong? When should the PA share this information, and with whom?’

Hudlicka (2016) identifies several ethical issues specific to PAs that go beyond the general concerns of data privacy, including the right to keep your emotions to yourself, the manipulation of others’ emotions, and virtual relationships. Human interactions can be so messy and, particularly at school, not always positive. But how would you feel if your students, or your child, preferred their relationship with their PA and avoided interaction with you, their classmates and other humans? Concerning social relationships between PAs and the learner, Walker and Ogan (2016, p. 726) ask, ‘Is it acceptable if technology lies to students? If it is purposefully manipulative? Is it the designer’s responsibility to avoid encouraging students to get too involved with the technology?’ As we add more social capabilities to our educational technologies, we add more potential social and ethical issues. Can we design technology that takes into account the values of the learner and their environment, towards building socially responsible and ethical artificial intelligence (AI) systems, and PAs in particular?

In our recent paper published in the British Journal of Educational Technology, we look at the roles and future of PAs, and consider ethical PAs that are sensitive to moral principles and human values (Richards & Dignum, 2019). PAs, being AI systems, are characterised by their autonomy, interactivity and adaptability, which enable them to respond appropriately to their environment. In our article we apply the ‘ART’ principles: accountability, responsibility and transparency. A ‘design for values’ approach to AI ensures that these principles are analysed and reported at all stages of system development.


This blog is based on the article ‘Supporting and challenging learners through pedagogical agents: Addressing ethical issues through designing for values’ by Deborah Richards and Virginia Dignum, published in the British Journal of Educational Technology.

It is currently free-to-view online for a limited time only, courtesy of our publisher, Wiley.


References

Hudlicka, E. (2016). Virtual affective agents and therapeutic games. In D. D. Luxton (Ed.), Artificial intelligence in behavioral and mental health care (pp. 81–115). Elsevier.

Johnson, W. L., & Lester, J. C. (2018). Pedagogical agents: Back to the future. AI Magazine, 39(2).

Kim, Y., & Baylor, A. L. (2016). Research-based design of pedagogical agent roles: A review, progress, and recommendations. International Journal of Artificial Intelligence in Education, 26(1), 160–169.

Nicholas, M., Van Bergen, P., & Richards, D. (2015). Enhancing learning in a virtual world using highly elaborative reminiscing as a reflective tool. Learning and Instruction, 36, 66–75.

Richards, D., & Dignum, V. (2019). Supporting and challenging learners through pedagogical agents: Addressing ethical issues through designing for values. British Journal of Educational Technology. https://doi.org/10.1111/bjet.12863

Walker, E., & Ogan, A. (2016). We’re in this together: Intentional design of social relationships with AIED systems. International Journal of Artificial Intelligence in Education, 26(2), 713–729.