Blog post

What I meant and the respondent assumed: Improving survey questions with cognitive interviews

Natia Sopromadze, Postdoctoral Research Assistant at the University of Wolverhampton

When we gather survey data about human perceptions and attitudes, do we elicit answers to what we really ask? How far can self-administered questionnaires (Schwarz, 1999), with no interviewer present, capture respondents’ thoughts and feelings? As online surveys increasingly cross geographical boundaries, how can we ensure a shared understanding of survey questions across cultures and languages? Cognitive interviewing (CI), an unconventional survey pretesting method, promises to make the indirect communication between the researcher and the respondent more meaningful (Willis, 2005).

CI was developed in the 1980s as a result of an interdisciplinary collaboration between cognitive psychologists and survey methodologists. The method assumes that observing people’s thought processes as they answer survey questions can reveal problematic aspects of the questionnaire design and point to possible solutions (Miller, Chepp, Willson, & Padilla, 2014). Although CI has been successfully applied in different disciplines to enhance survey data quality, its application in education research remains limited (Desimone & Le Floch, 2004).

‘Using cognitive interviewing as a survey evaluation tool offers the potential to better understand the question–response process and collect more accurate data.’

In my PhD study, I used CI to evaluate, adapt and improve a bilingual English/Georgian questionnaire. The survey was designed to explore the emotional side of higher education leadership from a cross-cultural perspective. I conducted eight cognitive interviews with English and Georgian academics, using a combination of think-aloud and probing techniques (DeMaio & Landreth, 2004). I first tested the draft questionnaire in English before translating it into Georgian. Based on the findings from the English CI round, the questionnaire was revised and then back-translated (Behr, 2017). Since back-translation on its own may not guarantee the equivalence of the source and translated texts, I carried out a second CI round in Georgian. Analysis of the interview data from both testing rounds revealed that several survey items carried different meanings for different individuals. Exploring this range of interpretations uncovered less obvious response difficulties and led to improvements in question clarity, translation quality and cross-cultural comparability.

Considering the rapid growth of comparative studies in education, it is vital to establish linguistic and cultural equivalence of research instruments across diverse populations (Harkness et al., 2010). CI as a survey evaluation tool offers the potential to better understand the question–response process and collect more accurate data (Beatty & Willis, 2007). It can help to overcome the limitations of back-translation and add methodological rigour to questionnaire adaptation. While other pretesting techniques can provide valuable input into optimising a questionnaire, they offer insufficient insight into the respondent’s cognitive processing of individual survey items (Blake, 2015). Understanding the sources of response problems requires direct information on how questions are experienced and interpreted. As large-scale pretests of translated instruments are not always feasible in the social sciences, my study illustrates a pragmatic but rigorous approach to survey development and adaptation, one that should be within the time and resources of most social scientists.

For more details please see Sopromadze and Moorosi (2017).

References

Beatty, P. C., & Willis, G. B. (2007). Research synthesis: The practice of cognitive interviewing. Public Opinion Quarterly, 71(2), 287–311. https://doi.org/10.1093/poq/nfm006.

Behr, D. (2017). Assessing the use of back translation: The shortcomings of back translation as a quality testing method. International Journal of Social Research Methodology, 20(6), 573–584. https://doi.org/10.1080/13645579.2016.1252188.

Blake, M. (2015). Other pretesting methods. In D. Collins (Ed.), Cognitive interviewing practice (pp. 28–56). London: SAGE.

DeMaio, T. J., & Landreth, A. (2004). Do different cognitive interview techniques produce different results? In S. Presser, J. M. Rothgeb, M. P. Couper, J. T. Lessler, E. Martin, J. Martin, & E. Singer (Eds.), Methods for testing and evaluating survey questionnaires (pp. 89–108). Hoboken: John Wiley & Sons.

Desimone, L. M., & Le Floch, K. C. (2004). Are we asking the right questions? Using cognitive interviews to improve surveys in education research. Educational Evaluation and Policy Analysis, 26(1), 1–22. https://doi.org/10.3102/01623737026001001.

Harkness, J. A., Braun, M., Edwards, B., Johnson, T. P., Lyberg, L. E., Mohler, P. P., Pennell, B. E., & Smith, T. W. (Eds.). (2010). Survey methods in multinational, multiregional, and multicultural contexts. Hoboken: John Wiley & Sons.

Miller, K., Chepp, V., Willson, S., & Padilla, J. L. (Eds.). (2014). Cognitive interviewing methodology. London: SAGE.

Schwarz, N. (1999). Self-reports: How the questions shape the answers. American Psychologist, 54(2), 93–105. https://doi.org/10.1037/0003-066X.54.2.93.

Sopromadze, N., & Moorosi, P. (2017). Do we see through their eyes? Testing a bilingual questionnaire in education research using cognitive interviews. International Journal of Research & Method in Education, 40(5), 524–540. https://doi.org/10.1080/1743727X.2016.1181163.

Willis, G. B. (2005). Cognitive interviewing: A tool for improving questionnaire design. Thousand Oaks: SAGE.