
Friday, August 20, 2021

Metrication and Demetrication

Yesterday evening I travelled across London for the opening of Ben Grosser's latest exhibition at the Arebyte Gallery, entitled Software for Less. 

Grosser's agenda is to disrupt the surveillance economy - enabling, encouraging and empowering users of social media to disengage from the glue traps laid for them by big data tech. The title of the exhibition answers Mark Zuckerberg's compulsive repetition of the word "more": Grosser has compiled a 47-minute montage of video clips of these repetitions ("Order of Magnitude"), prominently displayed at the entrance. Meanwhile Rachel O'Dwyer describes the paradox of Facebook: "an economy based on exponential growth ... an economy based on less".

In his book Crossing the Postmodern Divide (1992) Albert Borgmann extends the concept of hyperactivity to society as a whole, and defines it as "a state of mobilization where the richness and variety of social and cultural pursuits, and the natural pace of daily life, have been suspended to serve a higher, urgent cause" (p. 14). Psychiatrist Anna Lembke links this state with an excess of dopamine, and describes the smartphone as "the equivalent of the hypodermic needle for a wired generation".

In my post on YouTube Growth Hacking (November 2018), I mentioned Sophie Bishop's work on the anxiety, panic and self-optimization promoted by social media, and the precarity of those whose identity and self-worth depend on the number of likes and follows from other users, as measured by the platform algorithms.

On display at the Software for Less exhibition is a series of disengagement tools, including a demetrication filter (to hide those anxiety-provoking numbers counting followers and likes) and a random emotion generator (mixing up reactions of anger, sadness and joy to confuse the big tech algorithms). There are also platforms designed for constraint rather than overabundance, limiting the total number of posts in order to force the user to consider whether each post is really necessary.
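Grosser's own Demetricator is distributed as a browser extension; I haven't inspected its source, so what follows is just a minimal sketch of the general idea - a content script that blanks out whichever page elements carry the counts - with entirely hypothetical CSS selectors standing in for whatever a real platform actually uses.

```typescript
// Minimal sketch of a demetrication filter as a browser content script.
// The selectors below are hypothetical placeholders, not ones any real
// platform uses, and this is not Grosser's actual Demetricator code.

const METRIC_SELECTORS = [
  '[data-testid="like-count"]',      // hypothetical like counter
  '[data-testid="follower-count"]',  // hypothetical follower counter
];

function hideMetrics(root: ParentNode): void {
  for (const selector of METRIC_SELECTORS) {
    root.querySelectorAll<HTMLElement>(selector).forEach(el => {
      el.textContent = '';                    // blank out the number
      el.setAttribute('aria-hidden', 'true'); // keep it from screen readers too
    });
  }
}

// Social media feeds load new posts continuously, so re-apply the filter
// whenever the page injects new content.
new MutationObserver(() => hideMetrics(document))
  .observe(document.body, { childList: true, subtree: true });

hideMetrics(document);
```

The sketch also makes the limits of the intervention clear: demetrication operates at the presentation layer, so the platform still counts everything; the user just stops seeing the numbers.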

Perhaps for some users, these tools will provide a valuable remedy for addiction, hyperactivity and other mental and social issues. But perhaps for many other users, the point is not actually to use these tools, but simply to become more aware of the design choices that the big platforms have made, and of users' ability to resist them.

 

In other news ...

August 2021. The Chinese authorities have just announced a demetrication programme, which they say is necessary to tackle online bullying and protect children. Online lists ranking celebrities by popularity are banned, and cultural products (songs, films, TV shows, etc.) should be ranked primarily by quality rather than by the number of likes and comments. I mentioned Stan Culture (fan quan) in my post on A Cybernetics View of Data-Driven (August 2020).




Tim Adams, How artist Ben Grosser is cutting Mark Zuckerberg down to size (Guardian/Observer, 15 August 2021)

Helen Davidson, China bans celebrity rankings in bid to rectify chaos in the fan community (The Guardian, 27 August 2021)

Rebecca Edwards, Leave Me Alone (Arebyte Gallery, 2021)

Ben Grosser, Order of Magnitude (2019), Software for Less (28 July 2021)

Anna Lembke, Digital Addictions Are Drowning Us in Dopamine (WSJ, 13 August 2021). See also Jamie Waters, Constant craving: how digital media turned us all into dopamine addicts (Guardian/Observer, 22 August 2021)

Vincent Ni, China bans reality talent shows to curb behaviours of idol fandoms (Guardian, 2 September 2021)

Rachel O'Dwyer, More or Less (Arebyte Gallery, 2021)

Related posts: Tablets and Hyperactivity (February 2013), YouTube Growth Hacking (November 2018), A Cybernetics View of Data-Driven (August 2020), The Social Dilemma (December 2020)

Saturday, March 13, 2021

Bias or Balance?

@_KarenHao has written a detailed exposé of Facebook's approach to ethics. In addition to some useful material about political polarization, which I have discussed in previous posts, the article contains some insight into the notion of bias preferred by Mark Zuckerberg and Joel Kaplan (VP Global Public Policy). 

The article describes the work of several ethics teams within Facebook, including SAIL (Society and AI Lab) and Responsible AI. These teams identified various challenges as important, including polarization and misinformation. However, because of Kaplan's and Zuckerberg's worries about alienating conservatives, the teams were directed to focus instead on algorithmic bias.

Narrowing SAIL’s focus to algorithmic fairness would sideline all Facebook’s other long-standing algorithmic problems. Its content-recommendation models would continue pushing posts, news, and groups to users in an effort to maximize engagement, rewarding extremist content and contributing to increasingly fractured political discourse.

The Responsible AI team produced a tool called Fairness Flow, intended to measure the accuracy of machine-learning models for different user groups. The research team took the view that

when deciding whether a misinformation model is fair with respect to political ideology, ... fairness does not mean the model should affect conservative and liberal users equally. If conservatives are posting a greater fraction of misinformation, as judged by public consensus, then the model should flag a greater fraction of conservative content. If liberals are posting more misinformation, it should flag their content more often too.

But according to Hao, Kaplan's team took the opposite view:

they took fairness to mean that these models should not affect conservatives more than liberals. When a model did so, they would stop its deployment and demand a change. Once, they blocked a medical-misinformation detector that had noticeably reduced the reach of anti-vaccine campaigns, the former researcher told me. They told the researchers that the model could not be deployed until the team fixed this discrepancy. But that effectively made the model meaningless.
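To see how far apart these two readings of fairness sit, here is a rough sketch - not Fairness Flow itself, whose internals are not public, and with an arbitrary 2% tolerance chosen purely for illustration - contrasting a model whose flag rates track each group's actual misinformation rate with one that is required to flag every group at the same rate.

```typescript
// Illustrative contrast between two readings of "fairness"; this is not
// Facebook's Fairness Flow, and the 2% tolerance is an arbitrary choice.

interface GroupStats {
  group: string;         // e.g. "conservative" or "liberal"
  posts: number;         // posts sampled from this group
  misinfoPosts: number;  // posts judged misinformation by public consensus
  flaggedPosts: number;  // posts the model actually flagged
}

// Research-team reading: the flag rate for each group should track that
// group's underlying misinformation rate (calibration to base rates).
function tracksBaseRates(stats: GroupStats[]): boolean {
  return stats.every(s =>
    Math.abs(s.flaggedPosts / s.posts - s.misinfoPosts / s.posts) < 0.02
  );
}

// Reading attributed to Kaplan's team: no group should be flagged at a
// higher rate than any other, regardless of base rates.
function equalImpact(stats: GroupStats[]): boolean {
  const rates = stats.map(s => s.flaggedPosts / s.posts);
  return Math.max(...rates) - Math.min(...rates) < 0.02;
}
```

On this sketch, a model calibrated to base rates will fail the equal-impact test whenever one group simply posts more misinformation than the other - which is exactly why, on Hao's account, insisting on equal impact rendered the medical-misinformation model meaningless.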

On this evidence, Facebook seems to be following pretty much the same narrow approach to balance and impartiality that responsible news organizations now claim to be trying to move away from. Perhaps the most egregious example of this approach in recent times was the coverage of climate change. For many years, the BBC felt it necessary to include a climate change denier in any discussion of climate change. In 2018, the BBC acknowledged that this was a mistake.

Politicians often complain to news organizations that their party is being treated unfairly. The traditional belief is that if you are getting similar numbers of complaints from both sides, you are probably getting things about right. However, this assumes that politics is symmetrical, with exactly two sides to any given argument. Professor Angela Phillips, one of the founders of the Media Reform Coalition, quotes research from Loughborough University showing that the BBC’s obsession with balance took Labour off air ahead of Brexit, because of the belief that a fair balance between Remain and Leave could be largely achieved by close coverage of the conflicts within the Conservative party.

One politician who has regularly complained about a lack of coverage on the BBC is Nigel Farage. Writing from a Scottish Nationalist perspective, the Jouker argues that the BBC responds to such complaints by giving him the oxygen of publicity he craves, while denying equivalent or fair coverage to the SNP. And as Simon Read notes,

It was the mainstream media that gave Mr Farage all the publicity he has wanted over the past couple of decades, including a record number of appearances on the BBC’s Question Time and his own show on radio station LBC. ... Without a doubt, he is the establishment – apart from his failure to become an MP despite 25 years of trying – and to paint himself as otherwise is rather disingenuous.

Stuart Cosgrove argues:

Due impartiality is one of the load-bearing props of the BBC’s producer guidelines. Not only is it a concept that is easily unpicked, I would argue that it has run its course as a guiding principle and is now singularly unsuited to a society where the media is fragmented, where views do not sit comfortably on the see-saw of balance and when the digital world has disrupted television’s authority.


Quite so.




Damian Carrington, BBC admits we get climate change coverage wrong too often (The Guardian, 7 September 2018)

Centre for Research in Communication and Culture, Media Coverage of the EU Referendum 5 (Loughborough University, 27 June 2016)

Stuart Cosgrove, Emily Maitlis row exposes BBC's outdated obsession with due impartiality (The National, 31 May 2020)

Karen Hao, How Facebook got addicted to spreading misinformation (MIT Technology Review, 11 March 2021)

Angela Phillips, How the BBC’s obsession with balance took Labour off air ahead of Brexit (The Conversation, 14 July 2016)

Simon Read, Beware Farage's advice (FT Advisor, 21 October 2020)

The Jouker, BBC has explaining to do over record Farage Question Time appearance (The National, 10 May 2019)

See also tweet by @leobarasi via @tonyjoyce

Related posts: Polarization (November 2018), Polarizing Filters (March 2021), Algorithmic Bias (March 2021)

 

Monday, March 08, 2021

Polarizing Filters

In photography, a polarizing filter can manage your reflections and darken your skies: this may be a good metaphor for what happens in media and communications. While filtering and polarization were (and remain) well-known phenomena within traditional media, there are enhanced mechanisms on the Internet, whose effects are still not fully understood.

The term filter bubble was introduced by Eli Pariser around 2010, drawing on earlier work by Nicholas Negroponte and Cass Sunstein, to refer to a phenomenon previously known as echo chamber or information cocoon. If you get all your news from a single partisan source - for example, a single newspaper or TV channel - you could easily get a one-sided view of what is going on in the world. In the USA, the CNN audience doesn't overlap very much with the Fox News audience. Barack Obama is one of many who have expressed concerns about the consequences of this for American democracy.

The concept is easy enough to challenge if you take it too literally.

The images of chambers and bubbles conjures up hermetically sealed spaces where only politically like-minded participants connect and only ideologically orthodox information circulates, but this seems highly improbable. ... We cluster but we do not segregate. Bruns pp 95-96

Or if you imagine that filter bubbles are a technologically determined problem associated exclusively with the Internet. As Ignas Kalpokas notes in his review of Bruns, 

It is easy to slide into a form of fundamentalism, particularly when researching something as pervasive as social media, by simply assuming that the architecture and policies of the dominant platforms determine everything. ... Once we attribute causation to technology, we can comfortably and conveniently avoid responsibility for any societal ills and the ensuing necessity to put some effort towards ameliorating them. Kalpokas

However, the problem identified by Sunstein twenty years ago was not just about filters and fragmentation, but also about group polarization - the fact that there are internal and external forces that push individuals and groups to adopt ever more extreme positions. This is akin to Bateson's notion of schismogenesis. Axel Bruns stresses the importance of this.

The problem, in short, is polarisation, not fragmentation, and such polarisation is not the result of our use of online and social media platforms. Bruns p 105

I agree with his first point, but I want to qualify his second point. While the internet is certainly not the only cause of political polarization, its influence cannot be completely discounted. The techno-sociologist Zeynep Tufekci has identified a specific mechanism on the Internet that appears to have a polarizing effect similar to that predicted by Sunstein - the recommendation algorithms on social media designed to keep users engaged for as long as possible, showing them progressively more outrageous and extreme content if that's what it takes. She argues that this is built into the business model of the tech giants.
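The actual recommendation systems are proprietary, so the following is only a toy sketch of the incentive Tufekci describes, with every field and threshold invented for illustration: a ranker that optimises purely for predicted engagement will push provocative content whenever provocative content reliably earns longer attention, even though nothing in the code asks for extremity.

```typescript
// Toy sketch of an engagement-maximising ranker; not any platform's real
// algorithm, and every field here is invented for illustration.

interface Candidate {
  id: string;
  predictedWatchSeconds: number; // model's estimate of attention earned
  outrageScore: number;          // 0..1 proxy for how provocative the item is
}

// Rank purely by predicted engagement. If outrage and watch time are
// correlated in the training data, outrage rises to the top as a side
// effect, even though the code never mentions ideology or extremity.
function rankByEngagement(candidates: Candidate[]): Candidate[] {
  return [...candidates].sort(
    (a, b) => b.predictedWatchSeconds - a.predictedWatchSeconds
  );
}
```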

This is consistent with something Des Freedman noted in 2012,

The digital sphere is not a parallel economy but one that accentuates the tensions between the creativity and collaboration of a generative system and the hierarchies and polarisation prioritised by a system that rests, above all else, on the pursuit of profit. Curran Fenton Freedman p 92 my emphasis

In the same volume, Natalie Fenton noted Sunstein's argument that group polarisation is likely to become more extreme with time (p 168).

Many of the comments under Dr Tufekci's CBC interview are from what one might call nudge denialists - people who point out how easy it is to switch off the auto-play function on YouTube, and who claim never to be influenced by recommendations from the tech giants. Yeah, right. But that's not the point. Nobody said you can nudge all of the people all of the time. But the tech giants are certainly capable of nudging some of the people some of the time, at massive scale.

As a former computer programmer herself, Tufekci is able to explain the extraordinary power of the recommendation algorithms deployed by the tech giants. The tech giants choose to develop and use these algorithms in the pursuit of profit; and legislators and regulators around the world may or may not choose to permit or control this. So this deployment is not a matter of technological determinism but is subject to social and political choice. I wonder who is best placed to nudge governments to do the right thing?

 

Update

A few days after I posted the above, further details emerged about Facebook's approach to polarization, including a detailed exposé by @_KarenHao (click on her name for Twitter discussion), which in turn appears to have prompted an internal Facebook meeting on Polarization and our Products, reported by Ryan Mac and Craig Silverman.


Another Update

John Naughton's latest article has alerted me to a quantitative study of internet usage and polarization, published in 2017, which appears to show that the recent growth in political polarization has been greatest in those demographic groups with comparatively less internet use.

While this evidence provides a warning not to overstate the Internet Causes Polarization thesis, it doesn't fully refute it either. A plausible explanation of these findings is that those who are less familiar with the workings of the Internet may be more vulnerable to its effects. I look forward to seeing further empirical studies.




Levi Boxell, Matthew Gentzkow and Jesse M. Shapiro, Greater Internet use is not associated with faster growth in political polarization among US demographic groups (PNAS, 114/40, 3 October 2017) HT John Naughton, Is online advertising about to crash, just like the property market did in 2008? (The Guardian, 27 March 2021)

Axel Bruns, Are Filter Bubbles Real? (Polity Press, 2019)

James Curran, Natalie Fenton and Des Freedman, Misunderstanding the Internet (1st edition, Routledge 2012). Note that some of this material didn't make it into the 2016 second edition.

Karen Hao, How Facebook got addicted to spreading misinformation (MIT Technology Review, 11 March 2021)

Ignas Kalpokas, Book Review: Are Filter Bubbles Real? by Axel Bruns (LSE Blogs, 17 January 2020)

Ryan Mac, Facebook: Polarization and Our Products (Twitter, 11 March 2021)

Ryan Mac and Craig Silverman, Facebook is good for America actually, says Facebook executive (Buzzfeed News, 12 March 2021)

Thomas Nagel, Information Cocoons (London Review of Books, 5 July 2001)

Eli Pariser, Did Facebook's big new study kill my filter bubble thesis? (Medium/Backchannel, 7 May 2015)

Cass Sunstein, The Law of Group Polarization (John M. Olin Program in Law and Economics Working Paper No. 91, 1999)

Zeynep Tufekci, YouTube, the Great Radicalizer (New York Times, 10 March 2018). Is YouTube Radicalizing You? (CBC via YouTube, 22 April 2018)

 

Wikipedia: Echo Chamber (Media), Filter Bubble, Schismogenesis

Related posts: Social Networks and Received Opinion (July 2010), The Pursuit of Truth (December 2016), Polarization (November 2018), Technological Determinism (December 2020), Optimizing for Outrage (March 2021), Bias or Balance (March 2021)

Saturday, December 31, 2016

The Pursuit of Truth

As @PennyRed said last month, after the election of Donald Trump: "It turns out that you cannot stop fascism by turning off Facebook and doing some deep breathing."

The other day, I was arguing with a woman who told me about some recent atrocities in a politically torn part of the world. She was clearly upset by these atrocities, which she framed in a particular way, and used to support some fairly extreme political conclusions. I disagreed strongly with her conclusions, and I was not minded to take the reports of the atrocities at face value.

When I looked on the internet later, I found a Facebook page that carried the same reports, in similar language. Presumably that was the woman's source. I also found a Wikipedia page on the conflict, which framed things in more neutral terms, based on a number of apparently independent sources. Although there were some unpleasant incidents reported by the mainstream news media, these were neither as drastic nor as one-sided as the Facebook material suggested. So while I don't have sufficient evidence to disprove the atrocities completely, I cannot see enough evidence to take them as seriously as she does.

Many Facebook pages use dramatic images to increase circulation. There have been images of billboards supposedly encouraging criminal behaviour. Snopes shows that a fake billboard, supposedly displayed in Finland to encourage rape by migrants, was actually based on a genuine billboard displayed in Liberia to offer support to rape victims. Georgina Guedes finds another version of the same billboard in South Africa, this time supposedly promoting violence against white farmers.

And both sides are now using the fake billboard tactic. Today someone tweeted a picture of a billboard advertising some Trump property development, which was supposedly displayed in an Indian slum, with people sleeping on the street below. A few hours later, the same person deleted the tweet and apologized for the fake.

Many people find it harder to apply the same critical eye to material that they are instinctively sympathetic to. But as I said in my earlier piece on The Purpose of Truth (November 2016), the more I want to believe something (because it fits my preconceptions), the more I should doubt it.




BBC Guidelines - things to ask yourself before you share a claim
  • Have I heard of the publisher before?
  • Is this the source I think it is, or does it sound a bit like them?
  • Can I point to where this happened on a map?
  • Has this been reported anywhere else?
  • Is there more than one piece of evidence for this claim?
  • Could this be something else?
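These questions could even be wired into a simple pre-share gate; the sketch below is my own toy illustration, not a BBC tool.

```typescript
// Toy encoding of the BBC checklist as a pre-share gate; the BBC publishes
// questions, not code, so this is purely my own illustration.

const CHECKLIST = [
  'Have I heard of the publisher before?',
  'Is this the source I think it is, or does it sound a bit like them?',
  'Can I point to where this happened on a map?',
  'Has this been reported anywhere else?',
  'Is there more than one piece of evidence for this claim?',
] as const;

// Share only if every question above gets a considered "yes" and the final
// question ("Could this be something else?") gets a considered "no".
function okToShare(answers: boolean[], couldBeSomethingElse: boolean): boolean {
  return answers.length === CHECKLIST.length
    && answers.every(Boolean)
    && !couldBeSomethingElse;
}
```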



How to spot a fake US election claim (BBC News, 2 November 2016)

Fake news in 2016: What it is, what it wasn't, how to help (BBC News, 30 December 2016)

How to verify photos and videos on social media networks (The Observers, France 24, 10 November 2015)

Dan Evon, You Can't Do That in Finland (Snopes, 11 January 2016)

Georgina Guedes, The ANC is not encouraging black people to kill whites (eNews Channel Africa, 10 March 2016)

Laurie Penny, Against Bargaining: On not taking leave of your senses (The Baffler, 18 November 2016)

Wikipedia: BBC News, eNews Channel Africa, France 24, Snopes, The Baffler

Related post: The Purpose of Truth (November 2016)