Do LLMs Transform Alienation?: Human Consequences of Non-Human Language Developments
However, the impact of AI technology on self-identification is not limited to the "visible" dangers of pseudo-human representations. The primary purpose of this report is to critically examine the pitfalls of "invisible" technologies that threaten self-identification. This underlying threat is set against the backdrop of the rise of machine learning, which brought about the third digital revolution. Large language models (LLMs), built on machine learning, achieve an understanding of the language system itself from the probability of word occurrence. In the process of this understanding, therefore, the semantic horizon of symbol grounding is excluded.
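As a minimal formal sketch of this mechanism (the standard autoregressive formulation, stated here as an assumption rather than drawn from the source), an LLM assigns a probability to a word sequence $w_1, \dots, w_T$ by factorizing it into successive next-word predictions:

$$P(w_1, \dots, w_T) = \prod_{t=1}^{T} P(w_t \mid w_1, \dots, w_{t-1})$$

Each conditional probability is estimated solely from distributional patterns in the training corpus; nothing in the formula links a word to an extralinguistic referent, which is precisely where symbol grounding drops out.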
The fact that LLMs can fully acquire and understand language without any reference to objects existing in the world has the potential to overturn the conventional assumptions of linguistics and the philosophy of language about the interrelation between language and the world (Suzuki 2024: 140-7). In response to this, I will first briefly explain the mechanism of LLMs and then summarize how LLMs open up possibilities for reconsidering the premises surrounding language acquisition, usage, and "signification." Subsequently, I will discuss how ethical encounters between self and other, and the self-identification achieved therein, relate to these tectonic questionings in linguistics, and whether there is a possibility of reinterpreting self-alienation. In doing so, I would like to open up a theoretical horizon that introduces fundamental questions about meaning and language, derived through the mediation of the technological development of LLMs, into the realm of alienation theory.