347.2
Artificial Intelligence and Machine Learning: Algorithmic Biopolitics in Policing

Friday, 20 July 2018: 08:50
Location: 707 (MTCC SOUTH BUILDING)
Oral Presentation
Donald WINIECKI, Boise State University, USA
The recent introduction of machine-learning technologies has produced (among other things) "recommender systems" with applications ranging from Netflix ratings to policing, law enforcement and criminal justice. In the latter domains, such systems are used to predict "likely offenders" and "likely reoffenders" (thus recommending police activity), and by judges in assigning sentences. A widespread and uncritical assumption that computer-based recommender systems are objective and immune to bias has driven their rapid uptake. However, analyses of these "algorithmic policing" and "algorithmic justice" systems show unsupportable bias against ethnic and racial minorities. The result is a biopolitics embedded in technoscience that, on the surface, ratifies long-standing biases associating biological factors with criminal behavior.

There is currently no means to audit the workings of the machine-learning algorithms that produce these results. This has generated substantive concern in computer science that such systems do not reflect defensible science, and has spawned research and conferences under the name Fairness, Accountability and Transparency in Machine Learning (FATML), which seek to produce means of detecting and removing bias in the operation of such systems.

At present, two principal sources of bias are suspected. First, the data used to "train" algorithms and machine-learning software systems come from archives known to be problematic: "uncleaned" data from police and national-security databases. Second, algorithms are constructed by (primarily white male) programmers using not just statistics and logic but also, probably unintentionally, biased "commonsense" beliefs about relationships among factors including race, ethnicity, and sex that are not necessarily correlated with criminal behavior.

This session will report on cases of algorithmic bias and their biopolitical implications—especially as they may extend to the War on Terror—and on current efforts among FATML researchers to address the identified issues in the production of fair, accountable and transparent systems.