Field Experiments of Social Influence and Contagion with AI-Assisted Bots

Friday, 11 July 2025: 11:30
Location: FSE024 (Faculty of Education Sciences (FSE))
Oral Presentation
Hiroki ODA, London School of Economics and Political Science, United Kingdom
Milena TSVETKOVA, London School of Economics and Political Science, United Kingdom
Taha YASSERI, Trinity College Dublin, Ireland
Kinga MAKOVI, NYU Abu Dhabi, United Arab Emirates
This study investigates how artificial intelligence influences human behavior in online communities through social influence and contagion. We conduct field experiments across three platforms (Reddit, Wikipedia, and DonorsChoose) to causally test whether AI agents are less influential than humans when they lack a convincing rationale, but more influential when they display expertise and due diligence. The interventions involve comments, symbolic awards, and monetary donations, delivered by either human or bot accounts employing random, emotion-driven, or success-optimizing strategies. We study the direct effects on recipients, the indirect effects on observers, and the resulting collective-level outcomes.

On each platform, we create human accounts and clearly labeled bot accounts, seeding them with activity histories. Bot accounts use scripts that prompt large language models (LLMs) to select content based on criteria such as contribution time, quality, and initial success, enhancing the truthfulness of interventions. LLMs generate standardized two-sentence messages: a generic opening and a context-specific rationale aligned with the assigned strategy. Human accounts replicate these processes manually. Contributions are randomly assigned to six treatment conditions, crossing account type (bot, human) with strategy (random, emotion-driven, success-optimizing), plus a control group with no intervention.
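The factorial design above can be sketched as follows. This is an illustrative sketch, not the study's actual implementation: the condition labels follow the abstract, while the function name, seed, and use of Python's `random` module are assumptions.

```python
import random

# 2 account types x 3 strategies, plus a no-intervention control arm
# (labels follow the abstract; implementation details are illustrative).
ACCOUNTS = ["bot", "human"]
STRATEGIES = ["random", "emotion-driven", "success-optimizing"]
CONDITIONS = [(a, s) for a in ACCOUNTS for s in STRATEGIES] + [("control", None)]

def assign_conditions(contribution_ids, seed=42):
    """Randomly assign each contribution to one of the seven arms."""
    rng = random.Random(seed)
    return {cid: rng.choice(CONDITIONS) for cid in contribution_ids}

assignments = assign_conditions(range(10))
```

Seeding the random generator makes the assignment reproducible, which matters for auditing a field experiment after the fact.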

We match the samples on factors such as funding goals and initial success, and we intervene early, before significant community engagement, to enhance the comparability and potential impact of the treatments. The analyses measure recipients' behavior (activity levels, donations, language use, and sentiment) and the overall success of posts, articles, or projects (e.g., upvotes, edits, and amount donated). We employ non-parametric mean-comparison tests to estimate differences between conditions, intervention types, and online communities.
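One way to carry out such a non-parametric mean comparison between two arms is a permutation test on the difference in group means; the sketch below is a minimal illustration, and the outcome numbers are invented, not study data.

```python
import random

def permutation_test(treated, control, n_perm=10_000, seed=0):
    """Two-sided permutation p-value for the difference in group means."""
    rng = random.Random(seed)
    n = len(treated)
    observed = abs(sum(treated) / n - sum(control) / len(control))
    pooled = list(treated) + list(control)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # relabel groups under the null of no effect
        diff = abs(sum(pooled[:n]) / n - sum(pooled[n:]) / (len(pooled) - n))
        if diff >= observed:
            hits += 1
    return hits / n_perm

# Invented outcomes, e.g., donations received per project in two arms.
p = permutation_test([12, 30, 7, 22, 18, 40], [10, 8, 15, 6, 12, 9])
```

Permutation tests make no distributional assumptions about the outcomes, which suits skewed quantities such as donation amounts or upvote counts.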

By integrating generative AI models to create realistic interventions and selection processes, this research contributes a novel method for online experimentation involving AI agents. The findings enhance our understanding of AI's role in influencing human decision-making and social processes, highlighting how LLMs can simulate human-like rationale in interactions.