Coordinated Inauthentic Behavior and Visual Framing in Social Media: Insights from the Vera AI Alerts System

Tuesday, 8 July 2025: 09:25
Location: FSE004 (Faculty of Education Sciences (FSE))
Oral Presentation
Anwesha CHAKRABORTY, Università di Urbino Carlo Bo, Italy
Fabio GIGLIETTO, Università di Urbino Carlo Bo, Italy
Giada MARINO, Università di Urbino Carlo Bo, Italy
The rise of inauthentic, coordinated online behavior, in which networks of accounts share links and posts to push a specific agenda, has garnered significant academic attention in recent years. Information operations, particularly during crises, breaking news, and elections, often exploit coordinated networks of social media actors to spread deceptive content (Giglietto et al., 2020; Giglietto et al., 2023). Research on coordinated behavior detection now also needs to attend to visual content, particularly short-form videos, which can be rapidly generated and manipulated with generative AI technologies. Visual formats pose distinct difficulties for detecting coordination, as methods for comparing and computing image similarity lag behind text-based analysis.
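For context, near-duplicate detection for images typically relies on perceptual hashing rather than exact matching. The sketch below, which assumes the Pillow library and placeholder file names, illustrates a basic average-hash comparison; it is purely illustrative and not part of the Vera AI Alerts pipeline described here.

```python
from PIL import Image  # Pillow

# Illustrative average-hash ("aHash") comparison for near-duplicate images.
# File names and the distance threshold below are assumptions, not values
# from the paper.

def average_hash(path, hash_size=8):
    """Reduce an image to a small grayscale grid and threshold on its mean."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming_distance(h1, h2):
    """Count differing bits; small distances suggest near-duplicate images."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

# Example usage (placeholder file names):
# d = hamming_distance(average_hash("frame_a.jpg"), average_hash("frame_b.jpg"))
# if d <= 5: treat the two frames as likely near-duplicates
```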

This paper employs the Vera AI Alerts system, developed as part of a large European project and inspired by previous work by Giglietto and colleagues (2020; 2023) on detecting accounts engaged in nefarious coordinated link sharing. The primary data were gathered through the CrowdTangle API between October 2023 and August 2024; over this period the system detected 7,068 coordinated posts, 10,681 coordinated links, and 2,126 newly identified accounts. The research highlights three specific examples of coordinated networks: exploited large groups (sexual content), casino engagement bait (financial content), and Putin fan groups (political content). Focusing on the last example, the paper applies an extended visual framing model inspired by Chakraborty and Mattoni (2023), who studied political posts on Facebook by grassroots collective actors, to analyze the visual framing of posts generated by coordinated accounts. By adding new frames to this model, the study offers insights into the visual manipulation techniques employed in the dissemination of politically charged, inauthentic content on social media. These findings contribute to a deeper understanding of the intersections between (in)authenticity, coordinated behavior, and visual framing in the evolving landscape of online information warfare.
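To make the underlying idea of coordinated link sharing concrete, the following minimal sketch flags pairs of accounts that repeatedly share the same URL within a short time window, in the spirit of Giglietto et al. (2020). It is not the Vera AI Alerts implementation; the data fields, window length, and repetition threshold are assumptions for illustration.

```python
from collections import defaultdict
from datetime import datetime, timedelta
from itertools import combinations

# Assumed thresholds (the actual system's parameters are not specified here).
COORDINATION_WINDOW = timedelta(seconds=60)
MIN_CO_SHARES = 2

def find_coordinated_pairs(posts):
    """Count how often two distinct accounts share the same URL within the window.

    `posts` is assumed to be a list of dicts with 'account', 'url', and 'time'.
    """
    by_url = defaultdict(list)
    for p in posts:
        by_url[p["url"]].append(p)

    pair_counts = defaultdict(int)
    for shares in by_url.values():
        shares.sort(key=lambda p: p["time"])
        for a, b in combinations(shares, 2):
            if a["account"] != b["account"] and b["time"] - a["time"] <= COORDINATION_WINDOW:
                pair = tuple(sorted((a["account"], b["account"])))
                pair_counts[pair] += 1

    # Keep only account pairs that co-share rapidly and repeatedly.
    return {pair: n for pair, n in pair_counts.items() if n >= MIN_CO_SHARES}

# Example with hypothetical accounts and URLs:
posts = [
    {"account": "page_A", "url": "https://example.org/1", "time": datetime(2024, 1, 1, 10, 0, 5)},
    {"account": "page_B", "url": "https://example.org/1", "time": datetime(2024, 1, 1, 10, 0, 40)},
    {"account": "page_A", "url": "https://example.org/2", "time": datetime(2024, 1, 2, 9, 0, 0)},
    {"account": "page_B", "url": "https://example.org/2", "time": datetime(2024, 1, 2, 9, 0, 30)},
]
print(find_coordinated_pairs(posts))  # {('page_A', 'page_B'): 2}
```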