Conditions of Trust: Physicians’ Views on Artificial Intelligence in Medical Practice in Italy
In the literature, doctors' trust in AI is regularly proposed as a way of approaching the risks associated with medical artificial intelligence (Starke et al. 2022). Doctors may indeed feel threatened by AI, which could render their work obsolete or expose them to legal action for diagnostic and therapeutic errors, whether those errors stem from following an algorithm's predictions or, in a hypothetical future of widespread AI adoption, from failing to use them.
In this paper, we identify and explore the barriers to trust that clinicians in Italy face when incorporating AI into their everyday medical practice. Empirical findings in this area will help broaden the discussion of trust in healthcare AI (HCAI), since building trust requires attention not only to AI applications and their end users, but also to the wider organizational, cultural, governmental, and environmental factors that shape trust formation.
References:
Starke, G., van den Brule, R., Elger, B.S., Haselager, P., 2022. Intentional machines: A defence of trust in medical artificial intelligence. Bioethics 36, 154–161. https://doi.org/10.1111/bioe.12891