
Feedback is good, but scaling it…

Abelardo Pardo

One of the aspects of being an instructor that I personally find most frustrating is how difficult it is to improve a learning experience. The strong interdependency of so many factors makes finding the right combination of design decisions that translates into quantifiable improvements a truly wicked problem. A slightly simplified and less pessimistic approach is to look for the aspect with the highest potential positive effect. The search is still frustratingly difficult, but at least the focus is on a single aspect.

In such a gloomy context it feels truly refreshing to find robust experimental evidence identifying feedback as one of the aspects with the highest potential for positive effects on learning (Hattie and Timperley 2007). Numerous studies conceptualise feedback processes, identify their elements, provide concrete guidelines for deployment, frame the process as a dialogue and establish elegant connections with other areas such as self-regulation.

The importance of feedback in a learning experience has even led to it being considered a necessary condition for learning, as argued in the blog post ‘No feedback, no learning’ (Kirschner and Neelen 2018). But the claim quickly drew a reply from an educational expert: ‘One of the problems is the inability to “scale” feedback – teachers simply can’t do it on a personal level, so tech needed to support’. And this is the next frontier of the problem. We know how effective feedback can be, but we simply cannot scale the process to large student cohorts.

One element of the feedback process that highlights the difficulty of scaling is the knowledge required about the students and their surrounding learning context. Intuitively, the more instructors know, the better positioned they are to create an effective feedback process. But handling such knowledge becomes prohibitively complex with large student cohorts. A typical scenario involves several hundred learners participating in an experience that lasts several weeks and requires them to perform tasks in an online environment, participate in a discussion forum, create and submit various artefacts, and engage with formative assessment resources. If the scenario takes place in a blended space, these interactions are complemented by additional face-to-face interactions.

The current use of technology to mediate these experiences offers unprecedented opportunities to obtain highly detailed accounts of how students interact within the learning experience. Large datasets are becoming increasingly available to instructors. Those inclined to techno-solutionism are already envisioning automatic feedback processes that rely on some form of artificial intelligence (AI) as the silver bullet for the scalability problem of human-only feedback. Granted, AI is making truly remarkable progress in synthesising the common traits of large numbers of patterns and using them for induction (interpreting medical images, for example). But feedback is a significantly more complex process that requires a combination of domain expertise, pedagogical expertise, empathy and a nuanced understanding of each learner’s personal situation. It won’t be fully automated anytime soon.

‘Hybrid processes have recently been conceptualised in which human expertise is combined with technology to deploy personalised feedback at scale.’

Recently, the community of researchers studying this type of solution has suggested using technology that augments rather than replaces human intelligence (Baker 2016). Perhaps instructor expertise can be enhanced by technology in the same way that exoskeletons, rigid frames guided by factory workers, allow those workers to manipulate extremely heavy objects. This premise has underpinned the recent conceptualisation of hybrid processes in which human expertise is combined with technology to deploy personalised feedback at scale (Pardo 2018) and the design of technological ‘exoskeletons’ to articulate these processes (Pardo et al 2017). Preliminary studies show encouraging results, particularly in terms of student satisfaction. The not-so-good news is that although technology can help with the scaling, it still does not address other issues such as identifying the type of message, the tone, the level of scaffolding, and so on.
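To make this division of labour concrete, here is a minimal sketch in Python of how such an exoskeleton could operate. All names, data fields and messages are hypothetical illustrations rather than any existing system’s design: the instructor authors the conditions and the wording, and the technology applies them to every student record.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class StudentRecord:
    # Hypothetical indicators extracted from the online environment
    name: str
    email: str
    videos_watched: int   # e.g. out of 10 for the week
    quiz_score: float     # formative quiz result, 0-100
    forum_posts: int

@dataclass
class FeedbackRule:
    # An instructor-authored condition paired with a message template
    condition: Callable[[StudentRecord], bool]
    template: str

# The instructor encodes pedagogical judgement and tone once, as rules...
RULES: List[FeedbackRule] = [
    FeedbackRule(lambda s: s.quiz_score < 50,
                 "Hi {name}, your quiz result suggests this week's concepts "
                 "have not clicked yet. Revisit the worked examples before "
                 "the next session."),
    FeedbackRule(lambda s: s.videos_watched < 3,
                 "Hi {name}, you have watched only a few of this week's "
                 "videos, and they prepare you for the formative quiz."),
    FeedbackRule(lambda s: s.quiz_score >= 85 and s.forum_posts > 2,
                 "Great work, {name}. Consider helping peers with the "
                 "trickier questions in the forum this week."),
]

# ...and the technology applies them to every record, however many there are.
def personalised_messages(cohort: List[StudentRecord]) -> List[Tuple[str, str]]:
    messages = []
    for student in cohort:
        for rule in RULES:
            if rule.condition(student):
                messages.append((student.email,
                                 rule.template.format(name=student.name)))
    return messages
```

The instructor still decides what to observe, what to say and in which tone; the script merely applies those decisions uniformly to a cohort of any size. That is the division of labour the hybrid approach proposes, and also why the open issues listed above remain squarely with the human.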

So, feedback is good, but scaling it… may be feasible if we design the right technological exoskeletons.


References

Baker R S (2016) ‘Stupid Tutoring Systems, Intelligent Humans’, International Journal of Artificial Intelligence in Education 26(2): 600–614

Evans C (2013) ‘Making Sense of Assessment Feedback in Higher Education’, Review of Educational Research 83(1): 70–120

Hattie J and Timperley H (2007) ‘The Power of Feedback’, Review of Educational Research 77(1): 81–112

Kirschner P A and Neelen M (2018) ‘No Feedback, No Learning’, blog, 3-Star Learning Experiences, 5 June 2018. https://3starlearningexperiences.wordpress.com/2018/06/05/no-feedback-no-learning/

Pardo A, Jovanović J, Dawson S, Gašević D and Mirriahi N (2017) ‘Using Learning Analytics to Scale the Provision of Personalised Feedback’, British Journal of Educational Technology

Pardo A (2018) ‘A Feedback Model for Data-Rich Learning Experiences’, Assessment & Evaluation in Higher Education 43(3): 428–438