
Using models to understand how and why a natural phenomenon emerges is a fundamental practice in the sciences. Analysing the underlying components and interactions that generate a phenomenon is called ‘mechanistic reasoning’ (Machamer, Darden, & Craver, 2000). When teachers attend to mechanistic reasoning during modelling activities, students make sense of how and why simulation outcomes occur. In classrooms, computer modelling environments, such as agent-based modelling (ABM) environments, are powerful tools that can develop learners’ mechanistic reasoning when learners build computer models and run simulations to examine closely how a complex phenomenon occurs.

Many classrooms today incorporate computer simulations to help students identify observable aspects of a phenomenon, study data generated from running simulations and visualise processes too dangerous, costly or difficult to reproduce in the real world. However, prepackaged science simulations alone do not promote mechanistic reasoning. Presented as animations, these simulations hide the mechanisms and rules governing the model’s behaviours. In contrast, web-based ABM environments like StarLogo Nova (SLN), developed by the Scheller Teacher Education Program at the Massachusetts Institute of Technology, leverage the affordances of combining computer modelling with simulation. SLN engages learners in creating their own models through an accessible block-based programming language and a visually appealing virtual world that simulates phenomena. Designed within the constructivist tradition of using computer models as ‘objects-to-think-with’ (Papert, 1980), SLN allows learners to program rules of behaviour and interaction for the individual entities in a phenomenon and execute these rules to simulate how the model behaves over time. Unlike standalone simulations, SLN opens up the ‘black box’ for teachers and their students to inspect why and how simulation outcomes emerge. Teachers can use SLN computer models to strengthen students’ mechanistic reasoning skills. Our newly published paper in the British Journal of Educational Technology (Hsiao, Lee, & Klopfer, 2019) explores how teachers use SLN to advance their own and their students’ mechanistic reasoning.
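To make the idea concrete, here is a minimal sketch of what an agent-based model looks like ‘under the hood’. It is written in Python rather than SLN’s block-based language, and the scenario and names (Agent, step, run, the infection-spread rules) are hypothetical illustrations rather than material from the SLN curriculum. Each agent follows a few simple local rules, and repeatedly executing those rules produces the aggregate pattern a simulation displays.

```python
import random

class Agent:
    def __init__(self, infected=False):
        # Each agent has a position in a 100 x 100 world and an infection state.
        self.x = random.uniform(0, 100)
        self.y = random.uniform(0, 100)
        self.infected = infected

    def step(self, agents):
        # Rule 1: wander a small random distance each tick.
        self.x += random.uniform(-1, 1)
        self.y += random.uniform(-1, 1)
        # Rule 2: if any nearby agent is infected, become infected too.
        if not self.infected:
            for other in agents:
                if other.infected and abs(other.x - self.x) < 2 and abs(other.y - self.y) < 2:
                    self.infected = True
                    break

def run(num_agents=200, ticks=100):
    # One agent starts infected; everyone else starts healthy.
    agents = [Agent(infected=(i == 0)) for i in range(num_agents)]
    for t in range(ticks):
        for agent in agents:
            agent.step(agents)
        if t % 20 == 0:
            # Outcome data: how many agents are infected at this tick.
            print(f"tick {t}: {sum(a.infected for a in agents)} infected")

if __name__ == "__main__":
    run()
```

Running this model produces outcome data (an infection count that climbs over time), but the mechanism behind that curve only becomes visible when you read the rules inside step. Making those rules inspectable, through blocks rather than text, is the kind of access SLN provides.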

We present three case studies of teachers who implemented Project GUTS’ CS in Science, a curriculum that integrates computer science into school-day science classes through SLN modelling. The teachers participated in Teachers with GUTS (TwiG), a yearlong professional development programme in the United States in which they first experienced SLN as adult learners and then attended follow-up workshops during the academic year to learn to teach curricular modules in the earth, life and physical science domains. As part of the programme, teachers were periodically assessed on their SLN modelling skills and observed during their classroom implementations of different modules. We studied midyear datasets capturing teachers’ patterns of use of SLN tool features, together with field notes from their implementations, and applied a mechanistic reasoning framework (Russ, Scherr, Hammer, & Mikeska, 2008) to understand whether and how teachers promoted mechanistic reasoning during classroom modelling activities.

Our analysis shows that decoding models is an important practice when engaging in computer modelling and simulation. Decoding entails looking ‘under the hood’ of a model to view the code that is executed as the model runs the simulation. When faced with a model they had never seen before in the SLN environment, teachers who both examined the simulation and decoded it were more likely to provide mechanistic descriptions of how the model worked than those who only investigated the simulation. In their implementations, teachers who used decoding to demonstrate, modify or create SLN models with their students tended to focus on mechanistic reasoning in their instruction.

‘Our case studies exemplify the importance of decoding computer models to promote mechanistic reasoning.’

In conclusion, our case studies exemplify the importance of decoding computer models to promote mechanistic reasoning. We therefore recommend that, when computer simulations are used in the classroom, instruction incorporate decoding to engage students in uncovering why and how a phenomenon occurs. Likewise, professional development programmes that use computer modelling for science learning should help educators learn how to decode models and understand why decoding contributes to reasoning about mechanisms, a core scientific practice.

More broadly, this work is significant because it expands on and clarifies how one aspect of computational thinking, decoding/analysis, can lead to deeper scientific understanding. While some scholars promote simulations in STEM education as visualisations of phenomena that produce outcome data, our approach has the potential to deepen learners’ understanding of models themselves, that is, of the mechanisms that generate those data. An important unanswered question remains: to what extent can decoding and analysing models for mechanisms be taught without teaching programming?


This blog is based on the article ‘Making sense of models: How teachers use agent‐based modeling to advance mechanistic reasoning’ by Ling Hsiao, Irene Lee and Eric Klopfer, published in the British Journal of Educational Technology.

It is currently free to view online for a limited time only, courtesy of our publisher, Wiley.


References

Hsiao, L., Lee, I., & Klopfer, E. (2019). Making sense of models: How teachers use agent‐based modeling to advance mechanistic reasoning. British Journal of Educational Technology, 50(5), 2203–2216. https://doi.org/10.1111/bjet.12844.

Machamer, P., Darden, L., & Craver, C. F. (2000). Thinking about mechanisms. Philosophy of Science, 67(1), 1–25. https://doi.org/10.1086/392759.

Papert, S. (1980). Mindstorms: Children, computers, and powerful ideas. New York: Basic Books.

Russ, R. S., Scherr, R. E., Hammer, D., & Mikeska, J. (2008). Recognizing mechanistic reasoning in student scientific inquiry: A framework for discourse analysis developed from philosophy of science. Science Education, 92(3), 499–525. https://doi.org/10.1002/sce.20264.