Competitions and competitive learning are a staple of entrepreneurship and enterprise education. They are recommended for 11–18-year-olds in England (Hanson, Cox & Hooley, 2017) and have been identified as the most popular way of integrating entrepreneurship into school classrooms across Europe (Komarkova, Gagliardi, Conrads & Collado, 2015). The problem is that, between the enthusiastic policy prescriptions and the drama and excitement of competitive activities, limited attention is given to potential downsides.
One of the issues impeding the development of better knowledge about how and why competitions work is, we argue in our new article in the journal Industry and Higher Education, an over-reliance on certain methods of evaluation (Brentnall, Diego Rodríguez & Culkin, 2018). Like education more generally, entrepreneurship education has been committed to emulating ‘gold standard’ scientific evaluation approaches such as randomised controlled trials, systematic review and meta-analysis (Rideout & Gray, 2013). Such methods are borrowed from the field of evidence-based medicine and recommended as the best route to learning ‘what works’ (Goldacre, 2013). Meanwhile, methods in health-based research are moving on, in particular with regard to the development and use of realist evaluation (see Greenhalgh et al., 2015), which aims to extend the research question beyond ‘what works?’ towards ‘what works, for whom, in what circumstances and why?’.
Realist evaluation has theory as its start and end points. Part of its philosophy is that social programmes are theory incarnate, and therefore the theoretical basis of programmes should be scrutinised, refined, challenged and refuted (Pawson, 2006, 2013). We adopted this approach to take a deeper look at the theory underpinning competitions in entrepreneurship education (EE), conducting a 10-year policy review and ‘mining’ the outcomes expected from, and attributed to, such interventions. We found that the most common outcomes assumed from competitions and competitive learning are that they develop students’ skills, that they motivate and reward them, and that students learn from and are inspired by their peers.
However, assuming that competitions will ‘work’ in this way for all participants is flawed. Utilising principles of realist evaluation, we intentionally searched for theory from psychology and education that challenged the benefits assumed in policy. Our cross-examination identified that EE competitions can be demotivating, diminish competency, dent self-confidence and be demoralising for some participants.
By using this approach, we have been able to critically analyse the theoretical basis of entrepreneurship education competitions, as well as demonstrate the wider relevance of realist evaluation in (re)appraising social programmes.
Of particular interest, given the focus on social mobility that often accompanies careers and enterprise activities, our analysis indicated that competitions can lead to unforeseen outcomes, especially for those in ‘at risk’ groups (such as students from lower socioeconomic backgrounds). Essentially, EE competitions may enable confident, socially and culturally advantaged young people to gain additional social and educational capital that will benefit them further later on and thus, in effect, create greater disadvantage for their less well-equipped peers.
Such insights are crucial in providing a deeper and fuller account of reality so that practitioners and policymakers have better explanations on which to base their policy decisions and practice. Realist evaluation has enabled us to look beyond the intuitive appeal of competitions and competitive pedagogies to scrutinise the theory that underpins such interventions. This approach has revealed that while entrepreneurship education competitions are presented as fun and effective interventions for all, the declared benefits and positive outcomes are by no means guaranteed.
This blog post is based on the article, ‘The contribution of realist evaluation to critical analysis of the effectiveness of entrepreneurship education competitions’ by Catherine Brentnall, Ivan Diego Rodríguez and Nigel Culkin, recently published in the journal Industry and Higher Education. It is free to view for a limited period, courtesy of the journal’s publisher, Sage Education.
Brentnall, C., Diego Rodríguez, I., & Culkin, N. (2018). The contribution of realist evaluation to critical analysis of the effectiveness of entrepreneurship education competitions. Industry and Higher Education, 32(6), 405–417. https://doi.org/10.1177/0950422218807499
Goldacre, B. (2013). Building evidence into education. Department for Education.
Greenhalgh, T., Wong, G., Jagosh, J., Greenhalgh, J., Manzano, A., Westhorp, G. & Pawson, R. (2015). Protocol—the RAMESES II study: developing guidance and reporting standards for realist evaluation. BMJ Open, 5, e008567. http://dx.doi.org/10.1136/bmjopen-2015-008567
Hanson, J., Cox, A. & Hooley, T. (2017). Business games and enterprise competitions. What works? London: The Careers & Enterprise Company.
Komarkova, I., Gagliardi, D., Conrads, J. & Collado, A. (2015). Entrepreneurship competence: An overview of existing concepts, policies and initiatives: Final Report (JRC96531). Luxembourg: Publications Office of the European Union. https://publications.europa.eu/en/publication-detail/-/publication/6e016026-77e8-11e5-86db-01aa75ed71a1/language-en
Pawson, R. (2006). Evidence-based policy: A realist perspective. SAGE.
Pawson, R. (2013). The science of evaluation: A realist manifesto. SAGE.
Rideout, E. C. & Gray, D. O. (2013). Does entrepreneurship education really work? A review and methodological critique of the empirical literature on the effects of university‐based entrepreneurship education. Journal of Small Business Management, 51(3), 329–351.