
In his seminal 1950 paper, ‘Computing Machinery and Intelligence’, Alan Turing posed the question: ‘Can machines think?’. The question ignited debate among philosophers and computer scientists alike: philosophers pondered the implications for humanity if machines could think and possess human-like intelligence, while computer scientists plunged into the challenge of creating artificial human-like intelligence. Central to both discourses was the notion of creating a new form of humanity, often referred to as Homo Sapiens 2.0 (Dietrich, 2002), with a driving force to replace human labour (Barrat, 2023).

The philosophical discourse surrounding artificial intelligence (AI) can be traced back to 1990, when Pollock conceptualised a ‘person’ who possesses mental faculties and asked: ‘Can a machine be considered a person?’. A person is defined as someone who has rationality and consciousness (Pollock, 1990). By this definition, machines can be considered persons if they exhibit rational behaviour (Kao & Venkatachalam, 2021). Arguably, constructing a person entails creating an accurate computer model of human rationality, which is the fundamental premise of strong AI. That said, AI tools can become person-like without exactly replicating human rationality: they often lack consciousness but may surpass human rationality (Pollock, 1990).

ChatGPT is a chatbot that employs natural language processing to generate human-like responses to user input. It aligns with Pollock’s (1990) idea that AI can create a person-like system without exactly replicating human rationality. ChatGPT performs tasks that require human intelligence, including teaching, learning, assessment and research.

This blog post extends the review findings of Ansari and colleagues (2023), focusing on the conceptual scope of ChatGPT. The review followed PRISMA guidelines to map and synthesise the global literature on ChatGPT’s use in higher education. It analysed the 69 included studies on two levels: the scope of the evidence, and a meta-synthesis of how teachers, students and researchers use ChatGPT in higher education. This post focuses on the review’s findings about the conceptual scope of ChatGPT in the literature, as a contribution to the philosophical debate surrounding AI tools. The 69 studies primarily treat ChatGPT as a technical tool for tasks requiring human intelligence, discussing its advantages and disadvantages. However, none of them explicitly outlines the underlying assumptions driving ChatGPT’s creation, leaving a gap in understanding its rationale and intended applications.

Arguably, there are three possible philosophical assumptions: first, that ChatGPT is a market-driven product created to generate profit (Zarifhonarvar, 2023); second, that it controls and directs the thought processes of its users (Fan et al., 2020); and third, that it is intended to replace human resources (Temsah et al., 2023). While these assumptions are not explicitly addressed in the literature, there appears to be a leaning towards the third, with frequent highlights of ChatGPT’s potential to revolutionise the way people think, interact and work. Moreover, the implicit threat of a shrinking labour force is an explicit concern within academia, which reinforces the view that ChatGPT can replace human intelligence (Bozkurt et al., 2023).

‘In the face of unpredictable futures, it is imperative to harness the potential of AI for good while safeguarding the unique value of human intelligence.’

This notion raises significant concerns that require proactive solutions. Open discussion and transparent planning are crucial to mitigate potential harms and to ensure the responsible development of these technologies. It also warrants exploring ChatGPT’s potential to complement and augment human capabilities. Academia can lead the way by adapting its practices to coexist effectively with AI-powered tools: integrating AI into learning processes to foster critical thinking skills and to explore new avenues for human–AI collaboration. Therefore, in the face of unpredictable futures, it is imperative to harness the potential of AI for good while safeguarding the unique value of human intelligence.


References

Ansari, A. N., Ahmad, S., & Bhutta, S. M. (2023). Mapping the global evidence around the use of ChatGPT in higher education: A systematic scoping review. Education and Information Technologies, 1–41. https://link.springer.com/article/10.1007/s10639-023-12223-4

Barrat, J. (2023). Our final invention: Artificial intelligence and the end of the human era. Hachette UK.

Bozkurt, A., Xiao, J., Lambert, S., Pazurek, A., Crompton, H., Koseoglu, S., … & Jandrić, P. (2023). Speculative futures on ChatGPT and generative artificial intelligence (AI): A collective reflection from the educational landscape. Asian Journal of Distance Education, 18(1), 53–130. https://doi.org/10.5281/zenodo.7636568

Dietrich, E. (2002). Philosophy of artificial intelligence. In The Encyclopedia of Cognitive Science (pp. 203–208). Wiley. https://doi.org/10.1002/0470018860.s00155

Fan, J., Fang, L., Wu, J., Guo, Y., & Dai, Q. (2020). From brain science to artificial intelligence. Engineering, 6(3), 248–252. https://doi.org/10.1016/j.eng.2019.11.012

Kao, Y. F., & Venkatachalam, R. (2021). Human and machine learning. Computational Economics, 57(3), 889–909. https://doi.org/10.1007/s10614-018-9803-z 

Temsah, O., Khan, S. A., Chaiah, Y., … & El-Eyadhy, A. (2023). Overview of early ChatGPT’s presence in medical literature: Insights from a hybrid literature review by ChatGPT and human experts. Cureus, 15(4). https://doi.org/10.7759/cureus.37281

Zarifhonarvar, A. (2023). Economics of ChatGPT: A labor market view on the occupational impact of artificial intelligence. Journal of Electronic Business & Digital Economics. Advance online publication. https://doi.org/10.1108/JEBDE-10-2023-0021