I have written several blog posts touching on various aspects of #AlgorithmicBias. This post summarizes my position, with links to further blog posts and other supporting material.
1. Algorithmic bias exists. As Cathy O'Neil says, algorithms are opinions embedded in code.
2. Algorithmic bias can have negative consequences for individuals and society, and is therefore an ethical issue. Algorithms may be performative (producing what they claim to predict) and final (producing conclusions that are incapable of revision). Furthermore, because of its potential scaling effects, a biased algorithm may have much broader consequences than a biased human being. Hence the title of O'Neil's book, Weapons of Math Destruction.
3. Algorithms often have an aura of infallibility. People may be misled by an illusion of objectivity and omniscience.
4. There are different kinds of bias. People may disagree about the relative importance of these. This is one of the reasons why the notion of media balance and impartiality is problematic.
5. Attempts to eliminate one kind of bias may reinforce another kind of bias. It may not be possible to eliminate all kinds of bias simultaneously.
6. Attempts to address algorithmic bias may distract from other ethical issues. Furthermore, researchers may be pushed or pulled into solving algorithmic bias as a fascinating technical problem. Julia Powles calls this a seductive diversion.
Note: Points 4 and 5 are not exclusively about algorithmic bias, but apply also to other kinds of systemic bias.
Further discussion and links can be found in the following posts.
Reinforcing Stereotypes (May 2006). Early evidence of search engines embedding human bias.
Weapons of Math Destruction (October 2016). Links to book reviews, talks and other material relating to Cathy O'Neil's book.
Transparency of Algorithms (October 2016). Policy-makers are often willing to act as if algorithms are somehow above human error and human criticism. But when people are sent to prison based on an algorithm, or denied a job or health insurance, it seems reasonable to allow them to know what criteria these algorithmic decisions were based on. Reasonable but not necessarily easy.
The Road Less Travelled - Whom does the algorithm serve? (June 2019). In general, an algorithm juggles the interests of many different stakeholders, and we may assume that it is designed to optimize the commercial returns to the algorithm-makers.
The Game of Wits Between Technologists and Ethics Professors (June 2019). Technology companies fund ethics researchers to work on obscure philosophical problems and technical fixes.
Algorithms and Auditability (July 2019). Looking at proposed remedies to algorithmic bias.
Algorithms and Governmentality (July 2019). Looking at the use of algorithms to support bureaucratic biopower (Foucault). If the prime job of the bureaucrat is to compile lists that could be shuffled and compared (Latour), then this function is increasingly being taken over by the technologies of data and intelligence - notably algorithms and so-called big data.
Limitations of Machine Learning (July 2020). Problems resulting from using biased datasets to train machine learning algorithms.
Bias or Balance (March 2021). Discusses how different stakeholders within Facebook prioritize different kinds of bias, comparing this with approaches to impartiality and balance in other media organizations including the BBC.
Does the algorithm have the last word? (February 2022). Algorithms may be performative (producing what they claim to predict) and final (producing conclusions that are incapable of revision). See also On the Performativity of Data (August 2021) and Can Predictions Create their Own Reality (August 2021).
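The bias-in, bias-out problem discussed in Limitations of Machine Learning can be made concrete with a toy sketch. Nothing here comes from the posts above: the "hiring" scenario, the group labels and the numbers are all invented for illustration, and the "model" is deliberately trivial, but the same mechanism operates in real machine learning systems trained on historical decisions.

```python
from collections import Counter

# Hypothetical historical hiring records. The outcomes encode past human
# bias, not applicant ability: group "A" was mostly hired, group "B" mostly
# rejected. All figures are invented for this illustration.
training_data = (
    [("A", "hired")] * 80 + [("A", "rejected")] * 20 +
    [("B", "hired")] * 20 + [("B", "rejected")] * 80
)

def train(data):
    """A trivial 'model': predict the most common past outcome per group."""
    outcomes = {}
    for group, label in data:
        outcomes.setdefault(group, Counter())[label] += 1
    return {g: c.most_common(1)[0][0] for g, c in outcomes.items()}

model = train(training_data)
print(model)  # {'A': 'hired', 'B': 'rejected'}
```

The model faithfully reproduces the bias in its training data, and will keep applying it to every future applicant: this is the performative and final character of algorithmic decisions described in point 2 above.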
Lots of references and links in the above posts. Here are some of the main sources.
Cathy O'Neil, How algorithms rule our working lives (Guardian, 1 Sept 2016)
Cathy O'Neil, Weapons of Math Destruction (Crown Books, 2016)
Julia Powles, The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence (7 December 2018)
Wikipedia: Algorithmic bias