How Institutions Shape Social Trust in AI

Friday, 11 July 2025: 00:45
Location: FSE036 (Faculty of Education Sciences (FSE))
Oral Presentation
Sulfikar AMIR, Nanyang Technological University, Singapore
What drives people to trust AI? How does the institutional environment shape social trust in AI? What are the key factors in the acceptance of AI across societies? This paper addresses these questions to explain the role of institutions in enabling AI-based technologies to become socially accepted. In this study, social trust in AI is situated in the sociopolitical environment, where institutions each contribute different factors to social trust in AI. It is posited that the level of social trust in AI is correlated with the level of trust in institutions. We argue that trust in AI is shaped by institutions that exist outside the cognitive realm. This means the process by which trust in AI is built is strongly influenced by how institutions create a favourable environment for AI to be socially accepted and adopted. By institutions we refer to three entities that play central roles in the production and utilization of AI and together constitute the AI ecosystem: the government, tech companies, and the scientific community. The stronger the trust in these institutions, the deeper the social trust in the use of AI. To test this hypothesis, we conducted a cross-country survey involving a total of 4,000 respondents in Singapore, Taiwan, Japan, and South Korea. These East Asian countries were selected for their relative similarities in political economy and technological capabilities; in addition, all four have shown comparable interest in the development of AI. To capture a nuanced picture of social trust across different domains of AI, the survey was designed to measure trust in three AI-based technologies: autonomous vehicles, automated healthcare, and personalized fintech. The results provide convincing evidence that institutions matter a great deal in shaping social trust in AI.