
Saturday, March 13, 2021

Algorithmic Bias

I have written several blogposts touching on various aspects of #AlgorithmicBias. This post summarizes my position, with links to further blogposts and other supporting material. 

 

1. Algorithmic bias exists. As Cathy O'Neil says, algorithms are opinions embedded in code.

 

2. Algorithmic bias can have negative consequences for individuals and society, and is therefore an ethical issue. Algorithms may be performative (producing what they claim to predict) and final (producing conclusions that are incapable of revision). Furthermore, because of its potential scaling effects, a biased algorithm may have much broader consequences than a biased human being. Hence the title of O'Neil's book, Weapons of Math Destruction.


3. Algorithms often have an aura of infallibility. People may be misled by an illusion of objectivity and omniscience.

 

4. There are different kinds of bias. People may disagree about the relative importance of these. This is one of the reasons why the notion of media balance and impartiality is problematic.


5. Attempts to eliminate one kind of bias may reinforce another kind of bias. It may not be possible to eliminate all kinds of bias simultaneously; the sketch after this list illustrates why.

 

6. Attempts to address algorithmic bias may distract from other ethical issues. Furthermore, researchers may be pushed or pulled into solving algorithmic bias as a fascinating technical problem. Julia Powles calls this a seductive diversion.

 

Note: Points 4 and 5 are not exclusively about algorithmic bias, but apply also to other kinds of systemic bias.
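To make point 5 concrete, here is a small numerical sketch. It is mine, not taken from any of the linked posts, and the two groups, base rates and flag rates are invented for illustration. It shows that when two groups produce misinformation at different rates, a moderation model cannot both treat actual misinformation from each group equally and flag both groups at the same overall rate: satisfying one criterion necessarily violates the other.

```python
# A minimal sketch (hypothetical numbers) of why two intuitive fairness
# criteria cannot both hold once base rates differ between groups.

def error_rates(flag_rate, base_rate):
    """Idealized classifier: flags go to genuine misinformation first.
    Returns (share of misinformation missed, share of good posts wrongly flagged)."""
    true_pos = min(flag_rate, base_rate)
    false_pos = flag_rate - true_pos
    miss_rate = (base_rate - true_pos) / base_rate
    false_flag_rate = false_pos / (1 - base_rate)
    return miss_rate, false_flag_rate

base_a, base_b = 0.20, 0.05   # group A posts misinformation four times as often as group B

# Criterion 1: flag in proportion to actual misinformation in each group
for name, base in [("A", base_a), ("B", base_b)]:
    print(name, error_rates(flag_rate=base, base_rate=base))
# -> no misses and no false flags in either group, but group A is flagged
#    four times as often as group B

# Criterion 2: equal flag rates across groups ("equal impact")
shared_rate = (base_a + base_b) / 2   # 0.125 for both groups
for name, base in [("A", base_a), ("B", base_b)]:
    print(name, error_rates(flag_rate=shared_rate, base_rate=base))
# -> group A: 37.5% of its misinformation now escapes; group B: about 7.9% of its
#    legitimate posts are wrongly flagged. Equalising impact has traded one
#    kind of bias for another.
```

This is essentially the trade-off studied in the fairness literature: when base rates differ between groups, criteria such as calibration and equal error rates cannot all be satisfied at once, so choosing which bias to eliminate is itself a value judgement.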


Further discussion and links in the following posts. 

Reinforcing Stereotypes (May 2007). Early evidence of search engines embedding human bias.

Weapons of Math Destruction (October 2016). Links to book reviews, talks and other material relating to Cathy O'Neil's book.

Transparency of Algorithms (October 2016). Policy-makers are often willing to act as if algorithms are somehow above human error and human criticism. But when people are sent to prison based on an algorithm, or denied a job or health insurance, it seems reasonable to allow them to know what criteria these algorithmic decisions were based on. Reasonable but not necessarily easy. 

The Road Less Travelled - Whom does the algorithm serve? (June 2019). In general, an algorithm is juggling the interests of many different stakeholders, and we may assume that this is designed to optimize the commercial returns to the algorithm-makers.

The Game of Wits Between Technologists and Ethics Professors (June 2019). Technology companies fund ethics researchers to work on obscure philosophical problems and technical fixes.

Algorithms and Auditability (July 2019). Looking at proposed remedies to algorithmic bias.

Algorithms and Governmentality (July 2019). Looking at the use of algorithms to support bureaucratic biopower (Foucault). If the prime job of the bureaucrat is to compile lists that could be shuffled and compared (Latour), then this function is increasingly being taken over by the technologies of data and intelligence - notably algorithms and so-called big data.

Limitations of Machine Learning (July 2020). Problems resulting from using biased datasets to train machine learning algorithms.

Bias or Balance (March 2021). Discusses how different stakeholders within Facebook prioritize different kinds of bias, comparing this with approaches to impartiality and balance in other media organizations including the BBC.

Does the algorithm have the last word? (February 2022). Algorithms may be performative (producing what they claim to predict) and final (producing conclusions that are incapable of revision).  See also On the Performativity of Data (August 2021) and Can Predictions Create their Own Reality (August 2021).

 

 

Lots of references and links in the above posts. Here are some of the main sources.

Cathy O'Neil, How algorithms rule our working lives (Guardian, 1 Sept 2016)

Cathy O'Neil, Weapons of Math Destruction (Crown Books, 2016)

Julia Powles, The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence (7 December 2018)

Wikipedia: Algorithmic bias  

Bias or Balance?

@_KarenHao has written a detailed exposé of Facebook's approach to ethics. In addition to some useful material about political polarization, which I have discussed in previous posts, the article contains some insight into the notion of bias preferred by Mark Zuckerberg and Joel Kaplan (VP Global Public Policy). 

The article describes the work of several ethics teams within Facebook, including SAIL (Society and AI Lab) and Responsible AI. There were various challenges that these teams identified as important, including polarization and misinformation. However, because of Kaplan’s and Zuckerberg’s worries about alienating conservatives, they were directed to focus on algorithmic bias.

Narrowing SAIL’s focus to algorithmic fairness would sideline all Facebook’s other long-standing algorithmic problems. Its content-recommendation models would continue pushing posts, news, and groups to users in an effort to maximize engagement, rewarding extremist content and contributing to increasingly fractured political discourse.

The Responsible AI team produced a tool called Fairness Flow, intended to measure the accuracy of machine-learning models for different user groups. The research team took the view that

when deciding whether a misinformation model is fair with respect to political ideology, ... fairness does not mean the model should affect conservative and liberal users equally. If conservatives are posting a greater fraction of misinformation, as judged by public consensus, then the model should flag a greater fraction of conservative content. If liberals are posting more misinformation, it should flag their content more often too.

But according to Hao, Kaplan's team took the opposite view:

they took fairness to mean that these models should not affect conservatives more than liberals. When a model did so, they would stop its deployment and demand a change. Once, they blocked a medical-misinformation detector that had noticeably reduced the reach of anti-vaccine campaigns, the former researcher told me. They told the researchers that the model could not be deployed until the team fixed this discrepancy. But that effectively made the model meaningless.
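Hao's article does not describe how Fairness Flow works internally, so the following is only an illustrative sketch of the kind of per-group measurement at stake; the group labels, counts and figures are made up. It shows how the same model can look fair under the research team's reading (actual misinformation is treated similarly in each group) and unfair under Kaplan's team's reading (the groups are not flagged at the same rate).

```python
# Hypothetical sketch of a per-group audit; not Facebook's actual tool or data.
from collections import defaultdict

def per_group_metrics(records):
    """records: iterable of (group, is_misinfo, was_flagged)."""
    stats = defaultdict(lambda: {"n": 0, "flagged": 0, "misinfo": 0, "caught": 0})
    for group, is_misinfo, was_flagged in records:
        s = stats[group]
        s["n"] += 1
        s["flagged"] += was_flagged
        s["misinfo"] += is_misinfo
        s["caught"] += is_misinfo and was_flagged
    return {
        group: {
            "flag_rate": s["flagged"] / s["n"],   # Kaplan's team's reading: equalise this
            "recall": s["caught"] / s["misinfo"] if s["misinfo"] else None,  # research team's reading: equalise this
        }
        for group, s in stats.items()
    }

# Toy data: group A posts more misinformation than group B
sample = (
    [("A", True, True)] * 18 + [("A", True, False)] * 2 + [("A", False, False)] * 80
    + [("B", True, True)] * 4 + [("B", True, False)] * 1 + [("B", False, False)] * 95
)

print(per_group_metrics(sample))
# Group A: flag_rate 0.18, recall 0.90; Group B: flag_rate 0.04, recall 0.80.
# Actual misinformation is caught at broadly similar rates in both groups, yet
# group A is flagged far more often simply because it posts more misinformation.
# Which of these numbers counts as "fair" is exactly the disagreement described above.
```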

On this evidence, Facebook seems to be following pretty much the same narrow approach to balance and impartiality that responsible news organizations now claim to be trying to move away from. Perhaps the most egregious recent example of this approach was the coverage of climate change. For many years, the BBC felt it necessary to invite a climate change denier into any discussion of climate change; in 2018, it acknowledged that this was a mistake.

Politicians often complain to news organizations that their party is being treated unfairly. The traditional belief is that if you are getting similar numbers of complaints from both sides, you are probably getting things about right. However, this assumes that politics is symmetrical, with exactly two sides to any given argument. Professor Angela Phillips, one of the founders of the Media Reform Coalition, quotes research from Loughborough University showing that the BBC’s obsession with balance took Labour off air ahead of Brexit, because of the belief that a fair balance between Remain and Leave could be largely achieved by close coverage of the conflicts within the Conservative party.

One politician who has regularly complained about a lack of coverage on the BBC is Nigel Farage. Writing from a Scottish Nationalist perspective, the Jouker argues that the BBC responds to such complaints by giving him the oxygen of publicity he craves, while denying equivalent or fair coverage to the SNP. And as Simon Read notes,

It was the mainstream media that gave Mr Farage all the publicity he has wanted over the past couple of decades, including a record number of appearances on the BBC’s Question Time and his own show on radio station LBC. ... Without a doubt, he is the establishment – apart from his failure to become an MP despite 25 years of trying – and to paint himself as otherwise is rather disingenuous.

Stuart Cosgrove argues:

Due impartiality is one of the load-bearing props of the BBC’s producer guidelines. Not only is it a concept that is easily unpicked, I would argue that it has run its course as a guiding principle and is now singularly unsuited to a society where the media is fragmented, where views do not sit comfortably on the see-saw of balance and when the digital world has disrupted television’s authority.


Quite so.




Damian Carrington, BBC admits we get climate change coverage wrong too often (The Guardian, 7 September 2018)

Centre for Research in Communication and Culture, Media Coverage of the EU Referendum 5 (Loughborough University, 27 June 2016)

Stuart Cosgrove, Emily Maitlis row exposes BBC's outdated obsession with due impartiality (The National, 31 May 2020)

Karen Hao, How Facebook got addicted to spreading misinformation (MIT Technology Review, 11 March 2021)

Angela Phillips, How the BBC’s obsession with balance took Labour off air ahead of Brexit (The Conversation, 14 July 2016)

Simon Read, Beware Farage's advice (FT Advisor, 21 October 2020)

The Jouker, BBC has explaining to do over record Farage Question Time appearance (The National, 10 May 2019)

See also tweet by @leobarasi via @tonyjoyce

Related posts: Polarization (November 2018), Polarizing Filters (March 2021), Algorithmic Bias (March 2021)

 

Wednesday, May 09, 2007

Reinforcing Stereotypes

Google reflects the cumulative errors and distortions of the Internet. Several people (I got it from Radovan - see also Blogoscoped) have recently noticed that if you type "she invented" into Google, you are presented with a suggested modification: Did you mean "he invented"? So I tried some variations.
  • "he chatted" - Did you mean "he created"?
  • "she boring" - Did you mean "he boring"?
  • "she time-wasting" - Did you mean "he time-wasting"?

I tried some other search engines; none of them has this helpful feature. I'd better get back to work ... 

Update

A related story about a boy whose mother wanted to call him Kohler Wilson, so that he would have a good Google rating. (I got it from Barry, see also Marc Perton.) Did you mean Reptiles of Central America, by Gunther Kohler and Larry David Wilson?


Comment March 2022

Early indications of search engine bias, some years before Safiya Noble and others started to research the hidden gender and racial bias in these algorithms.

Whenever she speaks anywhere, I try to find the recording on YouTube. But after I’ve watched it, the YouTube algorithm invariably decides what I should watch next is some white dude’s TED talk. I try not to overuse the word “ironic”, but I think it is fair to use it here.

Safiya Noble, Algorithms of Oppression (New York University Press, 2018)

Monday, March 19, 2007

Overcoming Bias

Thanks to Alex, I have discovered a thought-provoking blog called Overcoming Bias, hosted by the Future of Humanity Institute at Oxford University.

There is some excellent material on the blog, highly recommended to any POSIWID reader, but I am puzzled by the official agenda.
"Our minds are full of natural tendencies to bias our beliefs via overconfidence, wishful thinking, and so on. Worse, our minds seem to have a natural tendency to convince us we that are aware of and have adequately corrected for such biases, when we have done no such thing. In this forum we discuss whether and how we might avoid this fate, by spending a bit less effort on each specific topic, and a bit more effort on the general topic of how to be less biased. We discuss common patterns of bias and self-deception, statistical and other formal analysis tools, computational and data-gathering aids, and social institutions which may discourage bias and encourage its correction."

In other words, bias is a Bad Thing, and needs to be Overcome. Attention will be focused not on the causes of bias but on remedies for bias. The agenda is largely written in the first person plural - we start by dealing with our own bias, in accordance with the principle taught by Jesus (Matthew 7:5).

I am minded to regard the emphasis on finding (technical) solutions rather than understanding (systemic) problems as itself a form of intellectual bias. We might observe that the Institute was in part funded by James Martin, a noted technology optimist.

I am also minded to regard the agenda of the blog as promoting what Lacan called the Discourse of the University - disinterested, abstract and apparently disembodied knowledge. (Lacan's critique of this discourse indicates hidden subjectivity in the apparently objective.)

However when I browse the actual content of the blog, I find that the contributors have a tendency to talk about the causes of bias rather than the remedies, and to talk about bias in other people rather than themselves. There's an interesting metaproblem there.

The purpose of the Overcoming Bias blog (or for that matter the POSIWID blog) is not what it says, but what it does.