Observing Algorithmic Decision-Making: From the Perspective of Sociocybernetics
First, we ask what the interpretive framework for AI is (how AI is interpreted by people and how social systems observe AI: second-order observation), independently of what AI itself is (first-order observation).
Next, we focus on “final judgment by humans” as the breaking point in the acceptance of AI decision-making. We then sort out what is backgrounded in the interpretive schemes concerning AI: who (or what) makes decisions, what counts as decision-making, and how the distinction between AI decision-making and “human” decision-making is drawn.
As algorithmic decision-making spreads, the social interpretive frameworks of “human,” “self,” “subject,” and “decision-making” are expected to change (an evolution of meaning), which will require sociology to abandon the representation of the “human” as its theoretical starting point.
In conclusion, we argue that wherever a feedback loop can be found between the subject making decisions and the object being decided upon, the key to understanding this issue is how the algorithm is positioned within that loop.