Who Prefers AI to Humans? Cultural Capital, Political Capital, and AI Preference in China

Wednesday, 9 July 2025: 12:00
Location: FSE025 (Faculty of Education Sciences (FSE))
Oral Presentation
Zheng FU, Columbia University, USA
Chuncheng LIU, Microsoft Research, USA
This study investigates the social determinants of preferences for Artificial Intelligence (AI) evaluation versus human judgment among urban residents in China, focusing in particular on respondents' cultural capital and political capital. In contrast to existing literature, which primarily examines AI features and their impact on user preferences, this research centers on respondent attributes and the broader social context in which AI is perceived.

The study is based on a 2021 survey of 1,042 urban residents in China that gathered data on preferences for AI versus human evaluation in the context of personal credit evaluation. Respondents were asked to state their preferences and to compare AI and human judgments in terms of fairness, credibility, comprehensiveness, and accuracy.

Crucially, we identify two mechanisms involving respondents' cultural and political capital that explain their preference for AI evaluation over human evaluation:

First, we disentangle the effect of knowledge and education on AI preference into two aspects: 1) knowledge of current affairs, and 2) the credentialing effect of education. We find that while those more knowledgeable about current affairs tend to prefer AI over human evaluation, those with a college education (compared to high school graduates) tend to prefer human evaluation. We attribute this to the interactional nature of cultural capital recognition: because human evaluators are more likely to acknowledge the advantages of cultural capital, those with higher cultural capital prefer to be evaluated by humans.

Second, we find that individuals with political capital in China (e.g., Communist Party members or those with relatives in government institutions) tend to prefer being evaluated by humans.

These findings from China reveal a preference for AI evaluation over human judgment among individuals with lower socio-political status, offering a nuanced perspective that challenges the prevailing literature's cautionary stance toward AI.