Trust, Risk, and Uncertainty Surrounding the Use of Artificial Intelligence in Healthcare: A Scoping Review of Public Perceptions

Monday, 7 July 2025: 00:00
Location: SJES019 (Faculty of Legal, Economic, and Social Sciences (JES))
Oral Presentation
Samantha MEYER, University of Waterloo, Canada
Helena GH NASCIMENTO, University of Waterloo, Canada
Geetika ARIKATI, University of Waterloo, Canada
Sultan ABDULKARIM, University of Waterloo, Canada
Artificial intelligence has immense potential to improve the health of societies. However, the pace at which AI is developing generates uncertainty about the utility of these technologies for social benefit. Accordingly, public trust in AI for use in healthcare is imbued with considerations of perceived and real risk. In recent years, scholars have conceptualized trust in AI in healthcare, noting dimensions of competence, honesty, and benevolence. The underlying assumption that has yet to be unpacked, however, is what, or whom, we are asking people to trust, and how this relates to risk and uncertainty. For example, is the focus on algorithms, training data, the humans involved in obtaining training data, the tech companies developing and regulating the technologies, or some combination? Do risk perceptions relate to public discourse and rhetoric? Does trust vary across social identities? We aim to systematically document the empirical literature investigating public trust in AI within the context of healthcare. Following Arksey and O'Malley's framework for scoping reviews, we searched four databases to identify peer-reviewed and grey literature published between 2022 (the launch of ChatGPT) and October 2024. Articles will be extracted in November 2024, and data will be charted with respect to study design and location, definition and conceptualization of trust, the focus of trust (in whom or what), study population characteristics, and the determinants accounted for as influencing public trust, including risk and uncertainty. To realize the potential benefits of AI in healthcare, health officials need to ensure that AI is trustworthy and then secure public trust. Building trust will require transparency regarding the real risks of AI use in healthcare and how these risks will be managed given the uncertainty created by AI's rapid evolution.
The present work responds to identified gaps in our understanding of public perceptions of AI, an understanding required before these technologies can be implemented for health system improvement.