Artificial Intelligence (AI) and Institutional Trust: Promise or Peril?
This study focuses on a specific societal consequence of AI adoption in the public sector: institutional trust. We explore the key mechanisms driving not only people's trust in algorithms but also their overall trust in public welfare institutions that use AI to profile and categorize citizens.
Existing theories and empirical research on algorithmic perceptions overlook two elements that we argue are essential for understanding the consequences of AI: awareness of AI use and prior experiences with institutions. Both factors, we contend, are key to understanding how AI will shape institutional trust.
This work uses a vignette experiment combined with survey measures to test how these two factors shape reactions to AI adoption and institutional trust in Italy. In the experiment, respondents first report their trust without knowing the welfare institution's decision-making process. They then learn whether the decision was made by a human, a hybrid system, or an AI. Our core argument is that, contrary to what existing theories assume, the effect of this revelation depends on people's prior experiences with and trust in institutions.
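To make the hypothesized interaction concrete, the sketch below shows one way such a design could be analyzed: an OLS model regressing the pre-to-post change in trust on the revealed decision-maker, prior institutional trust, and their interaction. The variable names and simulated data are hypothetical illustrations, not the study's actual instrument or estimation strategy.

```python
# Illustrative analysis sketch (hypothetical variable names, simulated data).
# The hypothesized treatment-by-prior-trust interaction can be estimated with
# an OLS model of the change in trust before vs. after the revelation.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 900

# Placeholder data standing in for the vignette experiment:
# 'condition'   = revealed decision-maker (human / hybrid / AI)
# 'prior_trust' = pre-revelation institutional trust (0-10 scale)
df = pd.DataFrame({
    "condition": rng.choice(["human", "hybrid", "ai"], size=n),
    "prior_trust": rng.integers(0, 11, size=n).astype(float),
})
# Simulated post-revelation trust with an illustrative penalty for the AI condition
df["post_trust"] = (
    df["prior_trust"]
    + rng.normal(0, 1, size=n)
    - 1.0 * (df["condition"] == "ai")
)
df["trust_change"] = df["post_trust"] - df["prior_trust"]

# Does the effect of revealing an AI (vs. human) decision-maker
# depend on respondents' prior institutional trust?
model = smf.ols(
    "trust_change ~ C(condition, Treatment('human')) * prior_trust",
    data=df,
).fit()
print(model.summary())
```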
The results will contribute to the broader discourse on AI adoption in the public sector as a possible game-changer for citizens' institutional trust, providing theoretical and empirical insights into what the future may hold for Italy and beyond as public-sector AI adoption proceeds.