Blog post

How technology shapes assessment design: Findings from a study of university teachers

Sue Bennett

As technology has become increasingly integrated into all aspects of university teaching, the tools and strategies available to support assessment have increased and evolved. Despite these developments, take-up has not been consistent within universities or across the sector. Data we collected as part of a larger study of assessment design practice provide some insights as to why this is the case (Bennett et al., 2017).

‘Take-up has not been consistent within universities or across the sector’

Our participants were 33 academics from four Australian universities who were involved in assessment design across a range of discipline areas (including education, journalism, health sciences, engineering, history, politics, languages, sociology, biology, physics and chemistry). Most were mid-career academics with significant teaching experience who were familiar with a range of assessment strategies and had taught units ranging from small to very large enrolments.

In semi-structured interviews we asked these academics to describe a recent instance of assessment design and then reflect on their broader practice. We did not focus specifically on technology-supported assessment, nor were we interested in innovation per se. Building on Selwyn (2010), we argue that understanding the “state of the actual” provides critical insights into technology integration that explorations of the “state-of-the-art” cannot. The everyday accounts captured in our study provided us with an opportunity to explore the impact of technology on routine assessment design. Our analysis identified four main themes.

  1. The “economics” of assessment

Participants spoke of how limited time and funding resulted in pressure to adopt forms of technology-supported assessment that were perceived to be more cost-efficient. Examples included online multiple-choice quizzes that provided automated feedback to students, student-recorded video to assess practical competencies rather than direct observation by staff, and the provision of group feedback online. Technology also offered administrative benefits, with central storage of submissions that could be easily referred to and retrieved. These labour-saving advantages, though, were often offset by unanticipated challenges that created additional work, including technical difficulties when working with submitted files and assessment designs that created marking inefficiencies. There were worries too about whether shifts to student self-assessment were really good practice, although these were somewhat balanced by hopes that students would become more independent and would welcome more immediate feedback.

  2. Contemporary and innovative

There was a clear sense from participants that integrating technology made assessment more ‘modern’ and ‘interesting’, and that the availability of new tools stimulated thinking about different approaches. A conundrum was faced by many, though, as they attempted to navigate mixed messages within institutions that pushed for greater efficiency while simultaneously advocating more innovative (and often more time-consuming) approaches. A lack of time to take a considered approach to designing was also raised by some, who felt this risked a disconnect between pedagogy and technology.

  3. Shaping and shaped by student behaviour

As with all assessment, the participants designed their technology-supported tasks with a view to encouraging student behaviours that would lead to high-quality learning. It was generally felt that technology could foster independent learning. Examples included online quizzes that encouraged students to read and prepare for class and to assess their own understanding. These considerations provoked considerable discussion about the trade-offs between rewarding online participation appropriately and mitigating collusion and plagiarism, and about the need to ensure that students possessed or could develop the technical skills needed to be successful.

  4. Support and compromise

Participants strongly identified the need for greater support to design and implement technology-supported assessment. This ranged from concerns about a lack of infrastructure and limitations in the ways technology was implemented, through to the need for better technical and educational design advice. Many described having to compromise on their aspirations and take a long-term iterative view of what they could achieve.

These everyday experiences of designing and implementing technology-supported assessment give some insights into the complexity involved. As a whole, they reveal efforts to save costs, be more innovative and promote effective student learning through the integration of technology. At the same time, inexperience and limited support often led participants to simplify or abandon aspects of their preferred assessment designs in favour of what they felt was possible. The various balancing acts required help us to understand why some university teachers take up technology-supported assessment in limited ways and reveal some of the barriers to more widespread adoption.


References

Bennett, S., Dawson, P., Bearman, M., Molloy, E., & Boud, D. (2017). How technology shapes assessment design: Findings from a study of university teachers. British Journal of Educational Technology, 48, 672–682. doi:10.1111/bjet.12439

Selwyn, N. (2010). Looking beyond learning: Notes towards the critical study of educational technology. Journal of Computer Assisted Learning, 26, 65–73. doi:10.1111/j.1365-2729.2009.00338.x