Generative AI and Challenges of Social Perception in Healthcare

Friday, 11 July 2025: 11:00
Location: FSE035 (Faculty of Education Sciences (FSE))
Oral Presentation
Ignat BOGDAN, Research Institute for Healthcare Organization and Medical Management, Moscow, Russian Federation
Maksim GORNOSTALEV, Research Institute for Healthcare Organization and Medical Management, Russian Federation
Nikita BURDUKOVSKII, Research Institute for Healthcare Organization and Medical Management, Russian Federation
Mariia MIAKISHEVA, Research Institute for Healthcare Organization and Medical Management, Russian Federation
Irina IGLITSYNA, Research Institute for Healthcare Organization and Medical Management, Russian Federation
The research draws on empirical data from a CATI survey conducted in 2023–2024 (N = 1,600) and on experiments with textual and visual generative neural networks.

Findings indicate that generative AI is used primarily by younger people, who tend to understand the technology better, while older generations remain cautious. One in three respondents reported experience with generative AI, a figure that has remained stable over the past year, signaling a plateau after initial growth. Notably, only 9% have used generative AI for health-related inquiries, and only a third believe it could replace a doctor. A significant barrier to wider adoption is the predominantly negative image of AI, which is more common among older Muscovites; respondents express concerns about medical errors and treatment quality.

Not only do social perceptions influence AI usage, but there is also a reverse influence. The study shows that neural networks reinforce stereotypes about medical professions, both visual (appearance in generated images) and narrative (generated texts). For instance, nurses are often portrayed as young, white women in uniform, while doctors appear as middle-aged men. Functional stereotypes limit the perceived roles of nurses, who are often seen merely as “doctor’s assistants.” A vicious circle emerges: neural networks are trained on stereotypical data, generate stereotypical output, and are then retrained on ever more stereotypical data. To disrupt this cycle, it is important to analyze the output of neural networks through a socio-humanistic lens, aiming for an evidence-based perspective free from specific ideological biases. This approach, which we term the “prompt experiment,” has been partially adopted by major AI companies, though concerns about their objectivity persist. Our research also shows that many medical stereotypes are deeply ingrained and require specialized skills to identify.

Engaging with social perception is vital because the literature links it to current challenges within the healthcare system.