Harnessing (Generative) AI for the Study of Decision Making and Social Processes

Friday, 11 July 2025: 11:00-12:45
Location: FSE024 (Faculty of Education Sciences (FSE))
RC45 Rational Choice (host committee)

Language: English

In the past few years, major advances have been made in Generative Artificial Intelligence (AI): artificial intelligence capable of generating text, images, videos, or other types of data. Recent work in computer and social science suggests that generative AI can facilitate hypothesis testing by modelling the (rational) decision making of humans and persuasive agents, and by supporting various types of text and image analysis. At the same time, generative AI exhibits various biases, suffers from a lack of transparency, and challenges conventional notions of scientific reproducibility. This session invites contributions that explore the opportunities and risks of using generative AI as a tool for the study of social mechanisms and processes. This includes, for instance, applications that examine the strategic thinking and social understanding of generative AI models, use AI to simulate decision making in simulation studies or social experiments, or compare human and AI responses in survey research. Studies that use generative AI as a tool for analysing behavioural, textual, or visual media data in the study of social processes are also welcome, as are studies that evaluate how AI can push the boundaries of hypothesis generation for the study of decision making and social processes.
Session Organizer:
Ana MACANOVIC, European University Institute, Italy
Oral Presentations
The Emergence of Economic Rationality of GPT
Yiting CHEN, Lingnan University, China; Tracy Xiao LIU, Tsinghua University, China; You SHAN, China; Songfa ZHONG, Hong Kong University of Science and Technology, China
Large Language Models Show Human-like Content Biases in Transmission Chain Experiments
Alberto ACERBI, Università di Trento, Italy; Joe STUBBERSFIELD, University of Winchester, United Kingdom
Field Experiments of Social Influence and Contagion with AI-Assisted Bots
Hiroki ODA, London School of Economics and Political Science, United Kingdom; Milena TSVETKOVA, London School of Economics and Political Science, United Kingdom; Taha YASSERI, Trinity College Dublin, Ireland; Kinga MAKOVI, NYU Abu Dhabi, United Arab Emirates
De-Biasing Large Language Models: Fine-Tuning with In-Group Texts for Enhanced Sociocultural Representations
Sukru ATSIZELTI, Turkey; Ali HÜRRIYETOĞLU, Koç University, Turkey; Erdem YORUK, Koç University, Turkey; Fırat DURUŞAN, Koç University, Turkey; Fuat KINA, Turkey; Melih Can YARDI, Koç University, Turkey; Şule TAN, Bogazici University, Turkey