Large Language Models and Social Reproduction
The study of LLMs offers a promising new domain for understanding the relationship between social structures and semantics (Luhmann, 1980). As sociologists and sociocyberneticians seeking to comprehend the organizing processes of social order, including the feedback loops that reinforce existing representations, we must ask: How do we build applications on top of LLMs? Are biases in discourse truly a bug, or are they a feature? To what extent do they reflect our current societies' normative reasoning and expectations, and can they instead reflect what we are not, that is, what we desire to become?
References
Baguma, R., et al. (2024). Examining Potential Harms of Large Language Models (LLMs) in Africa. In: Tchakounte, F., Atemkeng, M., & Rajagopalan, R. P. (Eds.), Safe, Secure, Ethical, Responsible Technologies and Emerging Applications. SAFER-TEA 2023. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol. 566. Springer.
Luhmann, N. (1980). Gesellschaftsstruktur und Semantik: Studien zur Wissenssoziologie der modernen Gesellschaft. Suhrkamp.
Luhmann, N. (2009). ¿Cómo es posible el orden social? Herder.
Yogarajan, V., et al. (2023). Tackling Bias in Pre-trained Language Models: Current Trends and Under-represented Societies (arXiv:2312.01509). arXiv.