Moral Intuitions and Moral Judgments in Humans and Large Language Models (LLMs): An Experimental Comparison

Monday, 7 July 2025: 13:30
Location: ASJE028 (Annex of the Faculty of Legal, Economic, and Social Sciences)
Oral Presentation
Valeria KOROTKOVA, Higher School of Economics, Russian Federation
Inna DEVIATKO, Higher School of Economics, Russian Federation
In this article, we consider the problem of the moral intelligence of Large Language Models (LLMs) and their ability to produce morally relevant texts and to solve common high-conflict moral dilemmas, in direct juxtaposition with the decisions preferred by people from different populations. To this end, we conducted two experiments using the same set of stimulus materials, analyzing, first, the ability of ChatGPT (as a paradigmatic generative language model) to reproduce people's most popular moral intuitions when solving moral dilemmas and, second, its ability to generate full-fledged moral reasoning corresponding to the deliberation-based justifications of moral choices that humans provided in an online survey. The results broadly confirmed our assumption that, at the current level of development, although an LLM can correctly guess the most popular public positions on moral issues, it is unable to fully reflect in the texts it produces the entire human decision-making process with all its contradictions. These findings open up important prospects and opportunities for future sociological analysis of human perception of artificially generated morally relevant texts and of AI-assisted decisions within the moral domain.