347.2
Artificial Intelligence and Machine Learning: Algorithmic Biopolitics in Policing
There is currently no means to audit the inner workings of the machine-learning algorithms that produce these results. This has generated substantial concern in computer science that such systems do not rest on defensible science, and has spawned research and conferences under the banner of Fairness, Accountability and Transparency in Machine Learning (FATML), which seek to develop methods for detecting and removing bias from the operation of such systems.
At present there are believed to be two principal sources of bias. First, the data used to "train" algorithms and machine-learning systems come from archives known to be problematic: "uncleaned" data from police and national-security databases. Second, algorithms are constructed by (primarily white male) programmers who draw not only on statistics and logic but also, probably unintentionally, on biased "commonsense" assumptions about relationships among factors, including race, ethnicity, and sex, that are not necessarily correlated with criminal behavior.
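To make the first mechanism concrete, here is a minimal sketch, entirely synthetic and hypothetical rather than drawn from any case discussed in the session: a classifier is trained on labels that reflect uneven enforcement rather than uneven behavior, and it reproduces the disparity through a correlated proxy feature even though the protected attribute is never supplied to it.

```python
# Minimal sketch with synthetic data (all names and rates hypothetical):
# a model trained on a biased archive reproduces that bias through a
# correlated proxy, even when the protected attribute is withheld.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (e.g., group membership), never shown to the model.
group = rng.integers(0, 2, n)

# Proxy feature correlated with group (e.g., neighborhood of residence).
neighborhood = np.where(rng.random(n) < 0.8, group, 1 - group)

# "Ground truth" behavior is identical across groups...
offense = rng.random(n) < 0.1

# ...but the archived labels reflect uneven enforcement: group 1 is
# recorded at twice the rate of group 0 for the same behavior.
recorded = offense & (rng.random(n) < np.where(group == 1, 0.9, 0.45))

# Train on the proxy only; the protected attribute is excluded.
X = neighborhood.reshape(-1, 1).astype(float)
model = LogisticRegression().fit(X, recorded)

# Predicted risk diverges by group even though true behavior does not.
risk = model.predict_proba(X)[:, 1]
for g in (0, 1):
    print(f"group {g}: mean predicted risk = {risk[group == g].mean():.3f}")
```

The point of the sketch is only that "uncleaned" archival labels encode enforcement practices, so excluding a protected attribute from the feature set does not, by itself, remove the bias.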
This session will report cases of algorithmic bias and their biopolitical implications, especially as they may extend to the War on Terror, along with current efforts among FATML researchers to address these issues and produce fair, accountable, and transparent systems.