Whether they’re driven by commercial interests or not, most developers and companies care about positive impact. Of course, impact helps in selling products, but it’s also a key motivation in why people develop and refine technologies: they care about supporting learning.
But how do they evaluate that impact, given that many developers don’t have access to research or to ethical review? Central to this concern is the desire to avoid both unbridled misuse of technologies and overcautious underuse of them as a means of risk management (Floridi et al., 2018). Hence the question: how do educational technology developers ensure that their products have a positive impact, and minimise the risk of harm?
‘We need to consider how we cultivate people who can practically reason, and we need examples of the kinds of practical choices people face in order to support this.’
We think there are a number of unanswered questions here that need to be reflected upon by the community of stakeholders who design, build and use technologies in education. In our recent paper (Kitto & Knight, 2019), published in the British Journal of Educational Technology, we pointed to some of these challenges, noting that there are a number of existing approaches to navigating them, including rules and regulations, and general aims like beneficence. While these approaches to ethics are important, we argue that they don’t reflect some of the complex tensions that people experience in their practical work, and that we see in our classrooms all the time. So, alongside considering rules, guidelines and the outcomes of our actions, we suggest that we need to consider how we cultivate people who can practically reason (an approach grounded in virtue ethics), and that we need examples of the practical tensions people face, and the kinds of choices we might make, in order to support this.
Take, for example, the case of an applicant to a university, who did not get very good grades while at school. Evidence suggests that students who enrol with these grades are less likely to be successful than students with higher grades. In this context, should we:
- turn them away
- ask them to come back with better grades
- let them in
- inform the student of the risks and let them in
- improve our teaching
- improve our student support
- do something else?
Suppose we judge a student to be ‘at risk’ of failure. In designing predictive algorithms for admissions and student support, should we act now or later? In such cases, multiple ethical principles appear to be equally required and yet in tension – for example, equitable support and widening participation. This is complicated by epistemic uncertainty around likely outcomes, both under typical conditions and in the context of an intervention.
In other contexts, these tensions play out around the ways we collect and treat data. Video data of classroom interactions provides a rich resource for understanding and supporting learning. However, emerging technologies make audio data more likely to be re-identifiable, and video data can incidentally reveal sensitive (or special category) personal information (for example, through the presence of cultural and religious clothing). How, then, do we collect and treat this data ethically without taking an overcautious approach to conducting low-risk classroom research?
While we can create rules and guidelines for these situations, these principles provide heuristics to help us reason, rather than being an end-goal in their own right.
Indeed, the Australian Association for Research in Education’s Code of Ethics (AARE, 1993) notes this need for moral reasoning in the preamble to its guidelines.
In our article (Kitto & Knight, 2019) we are therefore interested in the reasoning people engage in around these dilemmas. As a first step in investigating this, along with international collaborators at NYU and elsewhere, we have begun running workshops with different groups, aiming to understand the dilemmas or tensions that they face in their design, development and use of EdTech.
We hope to use artefacts from our workshops to contribute to a database of ethical cases (as we argue for in Kitto & Knight, 2019), which will give insight into the dilemmas faced as well as support stakeholders to navigate them.
By doing this, we can take a systematic and structured approach to supplementing existing case examples, such as BERA’s research ethics case study, ‘Anticipating the application & unintended consequences of practitioner research’ (Pennacchia, 2019), and those identified through research, including on technology-enhanced learning (see for example Chang & Gray, 2013), alongside research on the dilemmas that teachers face in their everyday practice (see Cabaroglu & Tillema, 2011; Lyons, 1990).
This blog is based on the article ‘Practical ethics for building learning analytics’ by Kirsty Kitto and Simon Knight, published in the British Journal of Educational Technology.
Australian Association for Research in Education [AARE] (1993). AARE code of ethics. Deakin ACT. Retrieved from https://www.aare.edu.au/assets/documents/policies/AARE-Code-of-Ethics.pdf
Cabaroglu, N., & Tillema, H. H. (2011). Teacher educator dilemmas: A concept to study pedagogy. Teachers and Teaching, 17(5), 559–573.
Chang, R. L., & Gray, K. (2013). Ethics of research into learning and teaching with Web 2.0: reflections on eight case studies. Journal of Computing in Higher Education, 25(3), 147–165.
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., & Schafer, B. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707.
Kitto, K., & Knight, S. (2019). Practical ethics for building learning analytics. British Journal of Educational Technology. https://doi.org/10.1111/bjet.12868
Lyons, N. (1990). Dilemmas of knowing: Ethical and epistemological dimensions of teachers’ work and development. Harvard Educational Review, 60(2), 159–181.
Pennacchia, J. (Ed.) (2019). BERA research ethics case studies: 3. Anticipating the application & unintended consequences of practitioner research. London: British Educational Research Association. Retrieved from https://www.bera.ac.uk/publication/anticipating-the-application-unintended-consequences-of-practitioner-research