Field Experiments of Social Influence and Contagion with AI-Assisted Bots
On each platform, we create human-operated accounts and clearly labeled bot accounts, seeding both with activity histories. Bot accounts run scripts that prompt large language models (LLMs) to select content based on criteria such as contribution time, quality, and initial success, keeping the interventions grounded in true attributes of each contribution. The LLMs then generate standardized two-sentence messages: a generic opening and a context-specific rationale aligned with the assigned strategy. Human accounts replicate these processes manually. Contributions are randomly assigned to one of six treatment conditions ([bot, human] account × [random, emotional, rational] strategy) or to a control group with no intervention.
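To make the pipeline concrete, the sketch below shows how a bot script could assign conditions and prompt an LLM for the standardized two-sentence message. It is a minimal sketch under stated assumptions, not the study's actual code: the prompt wording, the `openai` client, and the model name are all illustrative choices, since the section does not specify them.

```python
import random
from itertools import product

from openai import OpenAI

STRATEGIES = ["random", "emotional", "rational"]
ACCOUNT_TYPES = ["bot", "human"]

# Six treatment cells plus a no-intervention control.
CONDITIONS = [f"{a} x {s}" for a, s in product(ACCOUNT_TYPES, STRATEGIES)] + ["control"]

client = OpenAI()  # assumes OPENAI_API_KEY is set; the study's actual model/provider is not specified


def assign_condition() -> str:
    """Randomly assign a contribution to one of the seven conditions."""
    return random.choice(CONDITIONS)


def build_message(context: str, strategy: str, model: str = "gpt-4o-mini") -> str:
    """Generate the standardized two-sentence message: a generic opening
    followed by a context-specific rationale aligned with the strategy."""
    prompt = (
        "Write exactly two sentences. Sentence 1: a short, generic opening. "
        f"Sentence 2: a {strategy} rationale for supporting this contribution, "
        "grounded only in the facts below so the message stays truthful.\n\n"
        f"Contribution context:\n{context}"
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()
```

Constraining the rationale to facts in the supplied context is one way to operationalize the truthfulness requirement; a human account would follow the same two-sentence template by hand.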
We match the samples on factors such as funding goals and initial success, and we intervene early, before significant community engagement, to improve the comparability of conditions and the potential impact of the treatments. The analyses measure recipients' behavior (activity levels, donations, language use, and sentiment) and the overall success of posts, articles, or projects (e.g., upvotes, edits, and amount donated), and they employ non-parametric mean-comparison tests to estimate differences between conditions, intervention types, and online communities.
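A minimal sketch of the outcome comparison, assuming outcomes are stored as per-condition lists and using the Mann-Whitney U test as a representative non-parametric test (the section does not name the specific test); the numbers are illustrative placeholders, not study data:

```python
from itertools import combinations

from scipy.stats import mannwhitneyu

# Illustrative placeholder outcomes per condition (e.g., upvote counts);
# not data from the study.
outcomes = {
    "bot x emotional": [12, 5, 30, 8, 17],
    "bot x rational": [7, 14, 3, 22, 9],
    "control": [4, 9, 6, 11, 5],
}

# Pairwise non-parametric comparisons between conditions.
for a, b in combinations(outcomes, 2):
    stat, p = mannwhitneyu(outcomes[a], outcomes[b], alternative="two-sided")
    print(f"{a} vs {b}: U={stat:.1f}, p={p:.3f}")
```

The same loop would apply to any of the outcome measures listed above, run separately within each community or intervention type.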
By integrating generative AI models to create realistic interventions and selection processes, this research contributes a novel method for online experimentation involving AI agents. The findings advance our understanding of AI's role in influencing human decision-making and social processes, highlighting how LLMs can simulate human-like rationales in interactions.