Making the Invisible Visible: Addressing Gender and Ethnic Vulnerabilities in Algorithmic Systems

Wednesday, 9 July 2025
Location: Poster Area (Faculty of Education Sciences (FSE))
Poster
Paola PANARESE, Sapienza University of Rome, Italy
Vittoria AZZARITA, Sapienza University of Rome, Italy
Marta GRASSO, Sapienza University of Rome, Italy
Over the past decade, there has been a significant expansion in the application domains of algorithmic and artificial intelligence (AI) systems, which have profoundly influenced the representation and communication of social vulnerabilities (Noble, 2018). Research highlights that these systems can perpetuate forms of discrimination, particularly along ethnic and gender lines (boyd et al., 2014). Indeed, algorithmic systems often reflect a reductive view of human complexity, systematically excluding historically marginalized identities and characteristics, which consequently remain invisible (Gross, 2023).
These discriminatory dynamics are rooted in several intertwined factors, including the biases of system developers, the partiality of the data used for training, and the actions of users who contribute to the data inputs. As a result, algorithmic systems often fail to adequately account for the diversity of individual experiences, narrowing the spaces of self-determination and self-expression (Waelen, Wieczorek, 2022).
Building on this foundation, this poster focuses on a segment of a broader research project that explores the gender and ethnic inclusivity of algorithmic systems. The poster compares key findings from a systematic review of the literature on algorithmic systems and gender/ethnic biases with insights from developers, collected through 20 in-depth interviews. The goal is to offer recommendations for creating more equitable and inclusive algorithms.
While the reviewed studies acknowledge the social implications of biases in algorithmic systems, they fail to provide socio-technical frameworks that offer a shared definition of bias or address algorithmic discrimination systemically. Furthermore, developers show a limited understanding of the structural inequalities embedded in these systems (Cratsley, Fast, 2024) and an enduring belief in algorithmic neutrality (Natale, Ballatore, 2017), although this belief varies with individual cultural and gender backgrounds. Therefore, the findings emphasize the need to establish guidelines that address the lack of representation of human diversity by making the invisible visible (Shrestha, Das, 2022).