Tuesday, October 12, 2021

Amplification and Attenuation

@STS_News (Lee Vinsel) poses a couple of important questions on Twitter:

  • Have any of y'all seen any specific proposals for how to regulate Big Tech companies around mis/dis/harmful information?
  • Are there any so-far best philosophical/ethical treatments for why platforms *should* shutdown groups like Qanon?

I think it makes sense to start with the WHY question before tackling the HOW. In my post on The Ethics of Diversion - Tobacco Example (September 2019), I listed four ethical principles that might be relevant to that case, and discussed the need to balance these appropriately.

  • Minimum interference principle
  • Utilitarianism
  • Cautionary principle
  • Conflict of interest

If the primary justification for shutting down certain groups is to reduce the circulation of ideas that society has agreed to classify as undesirable, for whatever reason, then the minimum interference principle urges us to consider alternative (less extreme) measures that might achieve almost as much suppression with less interference. For example, instead of banning these groups from a given platform, it might be almost as effective to allow these groups to post whatever they like, provided that these posts are not picked up and recommended to others by the platform algorithm. Indeed, it might be safer to keep these groups on a well-controlled platform, rather than push them to migrate onto a less well-controlled platform.
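To make the contrast concrete, here is a minimal sketch of what such a graduated policy might look like in code. It is purely illustrative: the policy names, group labels and functions are hypothetical, not a description of any real platform's moderation API.

```python
from enum import Enum

class Intervention(Enum):
    """Possible responses, ordered from least to most interference (hypothetical)."""
    NONE = 0          # circulate and recommend as normal
    NO_AMPLIFY = 1    # posts stay visible to the group's own followers,
                      # but are excluded from algorithmic recommendation
    LABEL = 2         # visible, but shown with a warning or context label
    BAN = 3           # group removed from the platform entirely

def recommendation_candidates(posts, policy):
    """The 'allow but don't amplify' option: only posts from groups without a
    NO_AMPLIFY or BAN intervention are eligible for recommendation to others.
    Banned groups would have no posts on the platform at all."""
    allowed = {Intervention.NONE, Intervention.LABEL}
    return [p for p in posts if policy.get(p["group"], Intervention.NONE) in allowed]

# Hypothetical usage
policy = {"flagged_group_42": Intervention.NO_AMPLIFY}
posts = [
    {"id": 1, "group": "flagged_group_42", "text": "..."},
    {"id": 2, "group": "gardening_club", "text": "..."},
]
print(recommendation_candidates(posts, policy))  # only post 2 is recommendable
```

The point of the sketch is simply that NO_AMPLIFY sits between doing nothing and banning outright, which is exactly the space the minimum interference principle asks us to explore.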

In other words, in order to justify total shutdown, you have to demonstrate that this will produce significantly better outcomes than lighter regulation would.

However, in the case of the social media platforms, this is complicated by conflict of interest. The platform algorithms are designed to promote the commercial interests of the platform, and this means maximizing the engagement (or even addiction) of the so-called users. So society may not feel it can trust the platforms to manage the recommendation algorithms in a way that achieves the outcomes that society wants (and avoids the outcomes that society wants to prevent). And the algorithms themselves are generally not amenable to independent inspection, being incredibly complex and wrapped in commercial secrecy.

Meanwhile, not everyone is comfortable with the idea that ethical decisions should depend solely on the consequences of this or that action. If there is something clearly objectionable in the content that a given group wishes to post, or if we can demonstrate some evil intention behind it, surely we should be allowed to object to the content itself, not merely suppress its consequences? It may also be difficult to determine these consequences with any degree of accuracy, either in advance or retrospectively. So in addition to the consequentialist arguments we have already looked at, moral philosophers have identified a different class of argument, which they call deontological.

This includes arguments based on the intrinsic value of truth, and the idea that platforms should suppress material simply because it contains certain kinds of falsehood. However, I think this idea is insufficient as a basis for regulation, as I argue in my post on Ethical Communication in a Digital Age.


There's obviously a lot more I could say about the WHY, but let me say a few words about the HOW. One way of understanding regulation and control comes from cybernetics. As we see from the work of Stafford Beer, for example, the behaviour of complex systems can be controlled by a combination of amplification and attenuation. In the world of social media, the critical question is which messages are selected for amplification.
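As a toy illustration of that selection, consider the following sketch, in which a scoring function decides which messages are amplified and everything else is attenuated by default. The engagement-based score is an assumption for illustration only, not any actual platform's algorithm.

```python
import heapq

def amplify(messages, score, k=10):
    """Toy model of selective amplification: a scoring function ranks all
    candidate messages, the top k are amplified (recommended widely), and
    the rest are attenuated simply by not being selected. Which `score`
    function is plugged in here is the critical question: an
    engagement-maximising score serves the platform's commercial interests,
    while a different score might serve other goals."""
    amplified = heapq.nlargest(k, messages, key=score)
    attenuated = [m for m in messages if m not in amplified]
    return amplified, attenuated

# A hypothetical engagement-based score, purely for illustration
engagement_score = lambda m: m.get("likes", 0) + 2 * m.get("shares", 0)
```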

In her article on Amplified Propaganda, Renée DiResta suggests that traditional top-down propaganda (which Chomsky called the Manufacture of Consent) has been replaced by the bottom-up creation and selective amplification of narratives that shape reality. She calls this Ampliganda.

In many cases, these narratives appear to emerge spontaneously from the crowd - this is sometimes called going viral. However, there are increasingly sophisticated ways of engineering and orchestrating these narratives, not just with bots but by convincing real people to participate in their propagation.

Meanwhile, if some things are being amplified, other things are being suppressed or ignored. In most cases, this is not the result of anyone actively deciding to suppress them, but simply a consequence of the fact that there is a finite quantity of attention. If a book or film or other cultural product receives poor reviews, or perhaps nobody even bothers to review it at all, then it is unlikely to become a best-seller. Similarly, the majority of the content on the internet will be read by at most a handful of people. It doesn't make sense to accuse the platform of exercising censorship, or to complain of being cancelled, simply because something fails to achieve a mass audience.
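The point can be illustrated with a simple rich-get-richer simulation (a Pólya urn), sketched below. The numbers are arbitrary; the point is only that when attention is finite and attracts further attention, most items end up with a tiny audience without anyone deciding to suppress them.

```python
import random
from collections import Counter

def attention_simulation(items=1000, views=100_000, seed=1):
    """Toy illustration of finite attention as a rich-get-richer process:
    each view goes to an item chosen in proportion to the attention it
    already has. No item is actively suppressed, yet most items end up
    with almost no audience."""
    random.seed(seed)
    urn = list(range(items))            # every item starts with one unit of attention
    for _ in range(views):
        urn.append(random.choice(urn))  # attention attracts attention
    counts = Counter(urn)
    top_1pct = [c for _, c in counts.most_common(max(1, items // 100))]
    return sum(top_1pct) / len(urn)     # share of attention held by the top 1% of items

print(attention_simulation())  # typically a large fraction of all attention
```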

Furthermore, there are some structural reasons why certain types of content fail to attract an audience. In politics, there are limits to the kinds of ideas that people are willing to take seriously - this is known as the Overton window. (However, this can change over time, for reasons that I can't go into here.)

But while the majority of people are comfortably inside the Overton window, there may be a significant minority who are attracted to groups positioned well outside the Overton window. So instead of asking how we should regulate the use of Big Tech by these groups (or some might say the cynical exploitation of these groups by Big Tech), maybe there are some larger questions about the regulation and/or liberation of discourse inside and outside the Overton window.



Renée DiResta, It’s Not Misinformation. It’s Amplified Propaganda (Atlantic, 9 October 2021)

Stanford Encyclopedia of Philosophy: Consequentialism, Deontological Ethics

Wikipedia: Overton Window

Related posts: Ethical Communication in a Digital Age (November 2018), Culture War - What Is It Good For? (July 2021)
