Erosion of Minority Rights in the 21st Century: Navigating Global Challenges, Postcolonialism, and the Role of AI

Wednesday, 9 July 2025: 00:00
Location: FSE015 (Faculty of Education Sciences (FSE))
Oral Presentation
Roberta MEDDA-WINDISCHER, Eurac Research - Institute for Minority Rights, Italy
Katharina CREPAZ, Eurac Research - Center for Autonomy Experience, Italy
Federico SIMEONI, Free University of Bozen-Bolzano / Eurac Research, Italy
Since the beginning of the 21st century, the previously favorable stance towards minority rights, regarded as fundamental components of democratic societies, has begun to show signs of fatigue, if not outright resistance, leading to the erosion of established minority rights standards. As a result, minority rights have increasingly lost prominence on political agendas, overshadowed by other global challenges, including the disruptions caused by climate change, rising global economic inequalities, health crises, increased international mobility, international conflicts, and technological advancements, particularly in media and artificial intelligence. This shift also intersects with the ongoing global debate on postcolonialism, which calls for a re-examination of power structures and a reconsideration of whose voices and rights are prioritized.

Aiming to revitalize the field of minority rights research and reframe the minority rights paradigm amid global challenges, including postcolonialism, this presentation explores the increasing significance of AI systems across various domains of human life, particularly in relation to minorities and the accommodation of their needs and claims. Using Kimberlé Crenshaw’s (1989) concept of intersectionality as the main framework, we argue that discriminatory AI is a human-made problem and can therefore only be tackled through a human-centered approach. This approach includes examining protected attributes and their (in)stability, vulnerability, and essentialist versus non-essentialist attributions of group identity, as well as treating human-made inequalities and power imbalances as the source of biased AI systems. AI models are biased and discriminatory because our societal structures are; solutions that address only technological challenges fall short of tackling the underlying inequalities. We analyze the EU AI Act and the European Centre for Algorithmic Transparency as possible strategies for mitigating discriminatory effects through AI governance, and conclude that creating fair AI will not be possible without addressing the societal roots of its discriminatory behavior.