
Monday, July 21, 2025

The Urge to Persecute

People have always sought to project evil onto their neighbours, and that desire now extends to random strangers on the Internet. Malcolm Gaskill shows how the science of witch-hunting took a leap forward in the Enlightenment period, thanks to the meticulous assembly and analysis of data to confirm or confound hypotheses, and describes how one seventeenth century German woman was found innocent of witchcraft only after the intervention of her son, who was able to use these same tools in her defence. Of course it helped that her son happened to be one of the greatest intellectuals of the period, Johannes Kepler.

Empiricism made witchcraft possible as an actionable crime before it made it an impossible one. Kepler saved his mother through formidable concentration, sticking to a firm line of reasoning and dissecting his opponents’ arguments, point by point.

In this week's news, two tech executives were spotted cuddling one another at a Coldplay concert, drawing attention to themselves by ducking in a guilty fashion when they realized they were being shown on the big screen. Internet sleuths were able to discover their identities, public shaming ensued, and jobs and marriages were lost - an example of what Cathy O'Neil calls Networked Shame. In his commentary on the incident, Brandon Vigliarolo noted our willingness to persecute someone for a perceived wrong despite not knowing the full story.

Vigliarolo then went on to remind us of the eagerness with which other tech executives are pushing mass surveillance, which will apparently keep everyone on their best behaviour through constant real-time machine-learning-powered monitoring.

Because we can trust machine learning to know the full story before jumping to conclusions, can't we?


See also: Witnessing Machines Built in Secret (November 2017), Metrication and Demetrication (August 2021), The Purpose of Shame (April 2022)


Malcolm Gaskill, Money, Sex, Lies, Magic (London Review of Books, 38/13, 30 June 2016) 

Malcolm Gaskill, Social media witch-hunts are no different to the old kind – just bigger (Guardian, 13 October 2016)

Cathy O'Neil, The Shame Machine (New York: Crown, 2022) 

Jon Ronson, So You've Been Publicly Shamed (Picador 2015)

Geoff Shullenberger, The Scapegoating Machine (The New Inquiry, 30 November 2016)  

Brandon Vigliarolo, Ellison declares Oracle all-in on AI mass surveillance, says it'll keep everyone in line (The Register, 16 September 2024)

Brandon Vigliarolo, Coldplay kiss-cam flap proves we’re already our own surveillance state (The Register, 18 Jul 2025)

Sunday, August 04, 2024

The Purpose of Surveillance

While surveillance has been a recurring topic on this blog, the technological environment has developed significantly over the past twenty years.

Once upon a time, the only form of real-time surveillance was the so-called closed circuit system (CCTV), giving a dedicated watcher a view of what was going on at that moment. These systems now generally include a recording function, often operate retrospectively, and feed into an open-ended ecosystem of discipline-and-punish. As I noted in May 2008, the purpose of CCTV had extended from monitoring to include deterrence and penalty, and in the process it had ceased to be closed circuit in the original sense.

Fiction has provided some alternative models of surveillance and control. As well as Fritz Lang's 1960 film The Thousand Eyes of Dr Mabuse, there are the Palantíri in Tolkien's Lord of the Rings, which are indestructible stones or crystal balls enabling events to be seen from afar.

The data company Palantir, whose founders included Alex Karp and Peter Thiel, was originally established to provide big data analytics to the intelligence community. Geoff Shullenberger suggests that Palantir might be understood as an application of the ideas of Leo Strauss (who inspired Thiel): an enterprise that acknowledges the deep, dangerous undercurrent of human violence and harnesses the reams of data generated by the internet to monitor and control it. Meanwhile Moira Weigel notes the contribution of Adorno (who inspired Karp): Adorno’s jargon anticipates the software tools Palantir would develop. By tracing the rhetorical patterns that constitute jargon in literary language, Karp argues that he can reveal otherwise hidden identities and affinities—and the drive to commit violence that lies latent in them.

Geoff Shullenberger, The Intellectual Origins of Surveillance Tech (Outsider Theory, 17 July 2020)

Moira Weigel, Palantir goes to the Frankfurt School (Boundary2, 10 July 2020)

Related posts: Surveillance and its Effects (May 2005), What's in a Name - CCTV (May 2008), As Shepherds Watched (April 2024)

Surveillance@DemandingChange, Surveillance@POSIWID

Sunday, October 31, 2021

Contagion

The mathematician and broadcaster Hannah Fry did some post-doctoral research modelling social disturbance and crime in terms of contagion. In 2014, she gave a presentation at a conference in Berlin about her work with the police attempting to model the patterns of the 2011 London riots. As she said later, it didn't occur to her that a Berlin audience might have a different perspective on police power than a British audience would, and she got (in her words) absolutely torn apart. 2018 video 11:20

Several members of the audience expressed concerns about handing over too much control to the police, not only in giving them the power to suppress different forms of disturbance, but also in biasing the data on which the mathematical models were based. One person asked whether the data could really represent who the rioters were, referring to sociological research showing that police arrests are anything but neutral ... underprivileged groups of society tend to be arrested more. 2014 video 55:40
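The audience's point about biased data can be made concrete with a toy calculation (the numbers here are entirely hypothetical, not drawn from Fry's model or the riot data): if two groups contain equal numbers of rioters but one group is policed more heavily, the arrest records alone will make that group look far more riot-prone.

```python
# Hypothetical numbers: two groups riot at the same rate,
# but group B faces a higher probability of arrest.
rioters = {"A": 500, "B": 500}        # true rioter counts
arrest_prob = {"A": 0.1, "B": 0.3}    # policing intensity

# Expected arrests per group, and each group's share of the arrest data.
expected_arrests = {g: rioters[g] * arrest_prob[g] for g in rioters}
total = sum(expected_arrests.values())

for g in rioters:
    true_share = rioters[g] / sum(rioters.values())
    arrest_share = expected_arrests[g] / total
    print(f"group {g}: {true_share:.0%} of rioters, {arrest_share:.0%} of arrests")
```

With these made-up figures, group B is 50% of the rioters but 75% of the arrests, so any model trained on arrests alone inherits the policing bias rather than measuring who actually rioted.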

Another person noted how the riots depended not only on the behaviour of the rioters and the police, but also on the behaviour of the bystanders, which varied in different parts of London. If the Turkish community in London dealt robustly with the situation without relying on the police, this might be linked to the relationship between police and public in Turkey. In her response, Fry also noted the influence of the British media on the behaviour of bystanders in such situations.

While noting concerns about privacy, and agreeing that handing over too much control to technology is a really scary thing, Fry attempted to balance this against the claim that there is something positive to be gained by looking at the macro level behaviour of people in the way that we can design our society. 2014 video 52:50

In her more recent talks, Professor Fry has been more careful to put mathematical modelling into an ethical frame, as well as encouraging people to question the authority of the algorithm.

When it comes to algorithms, you can't just build them, put them on a shelf, and decide whether they're good or bad in isolation. You have to think about how they are actually going to be used by people. 2018 video 11:40

Once you dress something up as an algorithm or as a bit of artificial intelligence it can take on this air of authority that makes it really hard to argue with. 2018 video 26:10

Peter Polack provides a more fundamental challenge to the something positive claim. He traces the genealogy of this idea from Auguste Comte's social physics to latter-day neorationalism, referencing Michel Foucault's notion of biopower and biopolitics.

Meanwhile, if social disorder appears to follow the same mathematical patterns as contagious disease, and the police are being invited to treat crime as a disease, perhaps it is not surprising when disease (or even the possibility of being infectious) starts to be treated as a crime.
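To see what the contagion analogy amounts to, here is a minimal SIR-style simulation. This is an illustrative sketch only, not Fry's published model, and all parameter values are made up: susceptible people "catch" the behaviour through contact with those actively rioting, and eventually desist (recover).

```python
# Discrete-time SIR-style model: s susceptible, i "infected" (actively
# rioting), r recovered (desisted). Parameters are illustrative only.
def simulate(beta=0.3, gamma=0.1, s=0.99, i=0.01, steps=100):
    r = 0.0
    peak = i
    for _ in range(steps):
        new_infections = beta * s * i   # mass-action contact term
        recoveries = gamma * i
        s -= new_infections
        i += new_infections - recoveries
        r += recoveries
        peak = max(peak, i)
    return s, i, r, peak

s, i, r, peak = simulate()
print(f"final susceptible fraction: {s:.2f}, peak rioting fraction: {peak:.2f}")
```

The point of the analogy is the shape of the curve: a small initial disturbance grows, peaks, and burns out, just like an epidemic, which is precisely why police are tempted to "treat" disorder with epidemiological tools.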

The protective measures during the COVID pandemic include lock-down and self-isolation. So-called social distancing really means physical distancing, with as much social interaction as your technology (from phones to Internet) can provide. This is a lot easier for people with reasonably large houses, good internet connections, and devices for each member of the family, as well as the kinds of jobs that are relatively easy to do from home. For people in cramped housing, and for people who actually need to turn up at work if they want to get paid, self-isolation is a luxury they may not be able to afford. Therefore being tested for COVID may also be a luxury they can't afford.

Hannah Fry's mathematical model of the London riots identified that many of those arrested were from disadvantaged areas, although as we've seen this finding can be interpreted in more than one way. A model of disease might also show increased infection in disadvantaged areas. Maps of disadvantage and disease show strong persistence over time, as I discuss in my post on Location, Location, Location, quoting a study by Dr Douglas Noble and his colleagues.

But the COVID testing data are not going to show this pattern if people from disadvantaged areas are reluctant to come forward for testing. So much for biopower then.



Hannah Fry, I predict a riot (re:publica 2014, May 2014) recording via YouTube

Hannah Fry, Contagion: The BBC Four Pandemic (BBC March 2018) recording not currently available

Hannah Fry, Should Computers Run the World (Royal Institution, November 2018) recording via YouTube

Douglas Noble et al, Feasibility study of geospatial mapping of chronic disease risk to inform public health commissioning. BMJ Open 2012;2:e000711 doi:10.1136/bmjopen-2011-000711

Peter Polack, False Positivism (Real Life Mag, 18 October 2021) HT @jjn1

Stanford Encyclopedia of Philosophy: Auguste Comte, Michel Foucault

Related posts: Location, Location, Location (February 2012), Algorithms and Governmentality (July 2019), Algorithmic Bias (March 2021)

Friday, August 20, 2021

Metrication and Demetrication

Yesterday evening I travelled across London for the opening of Ben Grosser's latest exhibition at the Arebyte Gallery, entitled Software for Less. 

Grosser's agenda is to disrupt the surveillance economy - enabling, encouraging and empowering users of social media to disengage from the glue traps laid for them by big data tech. The title of the exhibition is an answer to Mark Zuckerberg's compulsive repetition of the word "more", of which Grosser has compiled a 47 minute montage of video clips ("Order of Magnitude") prominently displayed at the entrance. Meanwhile Rachel O'Dwyer describes the paradox of Facebook: "an economy based on exponential growth ... an economy based on less".

In his book Crossing the Postmodern Divide (1992) Albert Borgmann extends the concept of hyperactivity to society as a whole, and defines it as "a state of mobilization where the richness and variety of social and cultural pursuits, and the natural pace of daily life, have been suspended to serve a higher, urgent cause" (p. 14). Psychiatrist Anna Lembke links this state with an excess of dopamine, and describes the smartphone as "the equivalent of the hypodermic needle for a wired generation".

In my post on YouTube Growth Hacking (November 2018), I mentioned Sophie Bishop's work on the anxiety, panic and self-optimization promoted by social media, and the precarity of those whose identity and self-worth depends on the number of likes and follows from other users, as measured by the platform algorithms.

On display at the Software for Less exhibition are a series of disengagement tools, including a demetrication filter (to hide those anxiety-provoking numbers counting followers and likes) and a random emotion generator (mixing up reactions of anger, sadness and joy to confuse the big tech algorithms). There are also platforms that are designed for constraint rather than overabundance, limiting the total number of posts to force the user to think whether each post is really necessary.

Perhaps for some users, these tools will provide a valuable remedy for addiction, hyperactivity and other mental and social issues. But perhaps for many other users, the point is not to actually use these tools, but simply to become more aware of the design choices that the big platforms have made, and the ability of users to resist.

 

In other news ...

August 2021. The Chinese authorities have just announced a demetrication programme, which they say is necessary to tackle online bullying and protect children. Online lists ranking celebrities by popularity are banned, and cultural products (songs, films, TV shows, etc.) should be primarily ranked by quality rather than the number of likes and comments. I mentioned Stan Culture (fan quan) in my post on A Cybernetics View of Data-Driven (August 2020).




Tim Adams, How artist Ben Grosser is cutting Mark Zuckerberg down to size (Guardian/Observer, 15 August 2021)

Helen Davidson, China bans celebrity rankings in bid to rectify chaos in the fan community (The Guardian, 27 August 2021)

Rebecca Edwards, Leave Me Alone (Arebyte Gallery, 2021)

Ben Grosser, Order of Magnitude (2019), Software for Less (28 July 2021)

Anna Lembke, Digital Addictions Are Drowning Us in Dopamine (WSJ, 13 August 2021). See also Jamie Waters, Constant craving: how digital media turned us all into dopamine addicts (Guardian/Observer 22 August 2021)

Vincent Ni, China bans reality talent shows to curb behaviours of idol fandoms (Guardian, 2 September 2021)

Rachel O'Dwyer, More or Less (Arebyte Gallery, 2021)

Related posts: Tablets and Hyperactivity (February 2013), YouTube Growth Hacking (November 2018), A Cybernetics View of Data-Driven (August 2020), The Social Dilemma (December 2020)

Saturday, February 09, 2019

Insurance and the Veil of Ignorance

Put simply, the purpose of insurance is to shift risk from the individual to the collective. When an individual cannot afford to bear a given risk, the individual purchases some risk cover from an organization - typically an insurance company or mutual - which spreads the risk over many individuals and is supposedly better able to bear these risks.
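The arithmetic behind risk spreading can be sketched as follows (all figures are illustrative, not drawn from any real policy): each individual faces the same expected loss whether or not they are insured, but pooling makes the per-member cost predictable, because the standard deviation of the average loss falls in proportion to the square root of the pool size.

```python
# Illustrative: each individual faces a 1% chance of a 100,000 loss.
p, loss = 0.01, 100_000
expected_individual_loss = p * loss             # 1,000 on average ...
individual_std = ((p * (1 - p)) ** 0.5) * loss  # ... but hugely variable

for n in (1, 100, 10_000):
    # Std deviation of the per-member share of pooled losses falls as 1/sqrt(n)
    per_member_std = individual_std / (n ** 0.5)
    print(f"pool of {n:>6}: expected cost {expected_individual_loss:,.0f}, "
          f"per-member std dev {per_member_std:,.0f}")
```

An uninsured individual faces a swing of roughly ten times their expected loss; a member of a ten-thousand-strong pool faces a swing of around one hundred, which is the whole point of shifting risk to the collective.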

Individuals are sometimes obliged to purchase insurance - for example, car insurance before driving on the public roads, or house insurance before getting a mortgage. In some countries, there may be legal requirements to have some form of health insurance.

Insurance companies typically charge different premiums to different individuals depending on the perceived risk and the available statistics. For example, if young inexperienced drivers and very elderly drivers have more accidents, it would seem fair for these drivers to pay a higher premium.

Insurance companies therefore try to obtain as much information about the individual as possible, in order to calculate the correct premium, or even to decide whether to offer cover at all. But this is problematic for two reasons.

The first problem is about fairness, as these calculations may embed various forms of deliberate or inadvertent discrimination. As Joi Ito explains,
The original idea of risk spreading and the principle of solidarity was based on the notion that sharing risk bound people together, encouraging a spirit of mutual aid and interdependence. By the final decades of the 20th century, however, this vision had given way to the so-called actuarial fairness promoted by insurance companies to justify discrimination.
The second problem is about knowledge and what Foucault calls biopower. Just suppose your insurance company is monitoring your driving habits through sensors in the vehicle or cameras in the street, knows how much red meat you are eating, knows your drinking habits through the motion and location sensors on your phone, is inferring your psychological state from your Facebook profile, and has complete access to your fitness tracker and your DNA. If the insurance company now has so much data about you that it can accurately predict car accidents, ill-health and death, the amount of risk actually taken by the insurance company is minimized, and the risk is thrown back onto the individual who is perceived (fairly or unfairly) as a high-risk.
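The second problem can also be made concrete with a toy portfolio (hypothetical numbers throughout): with no individual data, everyone pays the average expected loss and the lucky subsidise the unlucky; with perfect surveillance-driven prediction, each premium converges on that individual's own expected loss, and nothing is shared at all.

```python
# Hypothetical portfolio: half the customers are genuinely low risk.
customers = [{"risk": 0.01}] * 50 + [{"risk": 0.20}] * 50
loss = 10_000

# No surveillance data: one pooled premium, based on the average risk.
avg_risk = sum(c["risk"] for c in customers) / len(customers)
pooled_premium = avg_risk * loss

# Perfect surveillance data: each premium equals that person's expected loss.
personal = [c["risk"] * loss for c in customers]

print(f"pooled premium for everyone: {pooled_premium:,.0f}")
print(f"personalised premiums: {min(personal):,.0f} to {max(personal):,.0f}")
```

In the pooled case everyone pays the same premium; in the fully personalised case the high-risk customers pay twenty times what the low-risk customers pay, which is exactly the collapse of solidarity described in the next paragraph.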

In her latest book, Shoshana Zuboff describes how insurance companies are using the latest technologies, including the Internet of Things, not only to monitor drivers but also to control them.
Telematics are not intended merely to know but also to do (economics of action). They are hammers; they are muscular; they enforce. Behavioral underwriting promises to reduce risk through machine processes designed to modify behavior in the direction of maximum profitability. Behavioral surplus is used to trigger punishments, such as real-time rate hikes, financial penalties, curfews, and engine lockdowns, or rewards, such as rate discounts, coupons, and gold stars to redeem for future benefits. The consultancy firm AT Kearney anticipates 'IoT enriched relationships' to connect 'more holistically' with customers 'to influence their behaviors'. (p215)

So much for risk sharing then. Surely this undermines the whole point of insurance?



Sami Coll, Consumption as Biopower: Governing Bodies with Loyalty Cards, (Journal of Consumer Culture 13(3) 2013) pp 210-220

Caley Horan, Actuarial age: insurance and the emergence of neoliberalism in the postwar United States (PhD Thesis 2011)

Joi Ito, Supposedly ‘Fair’ Algorithms Can Perpetuate Discrimination (Wired Magazine, 5 February 2019) HT @WolfieChristl @zeynep

AT Kearney, The Internet of Things: Opportunity for Insurers (2014)

Cathy O'Neil, How algorithms rule our working lives (The Guardian, 1 September 2016)

Jathan Sadowski, Alarmed by Admiral's data grab? Wait until insurers can see the contents of your fridge (The Guardian, 2 November 2016)

Carissa Véliz, If AI Is Predicting Your Future, Are You Still Free? (Wired, 27 December 2021)

Shoshana Zuboff, The Age of Surveillance Capitalism (Profile Books 2019) esp pages 212-218


Stanford Encyclopedia of Philosophy: Foucault

Related posts: The Transparency of Algorithms (October 2016), Pay as you Share (November 2016), Shoshana Zuboff on Surveillance Capitalism (Book Review, February 2019)

 

Update: I have just come across a journal special issue on the Personalization of Insurance (Big Data and Society, November 2020). I note that the editorial starts with the same Zuboff quote that I used here. Also adding link to a recent article by Professor Véliz.

Sunday, November 12, 2017

Witnessing Machines Built in Secret

#amtsb @proto_type's current performance work, which I caught at the South Bank Centre in London this weekend, is called A Machine They're Secretly Building. The title comes from a warning by Edward Snowden, as reported by Glenn Greenwald.

"I can't in good conscience allow the US government to destroy privacy, internet freedom and basic liberties for people around the world with this massive surveillance machine they're secretly building."

As I filed out of the performance, I bought a copy of the script, paying with cash rather than credit card (as if that's going to stop THEM knowing I was there). In her introduction, Aylwyn Walsh mentions Henry Giroux and the idea of disimagination. Henry Giroux credits this idea to Georges Didi-Huberman who, starting from four photographs taken by Jews at Auschwitz-Birkenau, had offered an extended and profound meditation on the status of the image as a means of historical analysis. Giroux's version of the politics of disimagination refers to images (and also institutions, discourses, and other modes of representation) "that undermine the capacity of individuals to bear witness to a different and critical sense of remembering, agency, ethics and collective resistance".

According to Giroux, therefore, the disimagination machine "functions primarily to undermine the ability of individuals to think critically, imagine the unimaginable, and engage in thoughtful and critical dialogue: put simply, to become critically informed citizens of the world". Thankfully, Walsh tells us, "this ... is what theatre and performance is so perfectly equipped to challenge".

So the Proto-type show aims to bear witness about what is going on. As the audience files into the performance space, we see two women dressed in black, with pink balaclavas. And a large screen facing the audience. One of the women is facing a camera: her face (or what we can see of it) is shown on the screen. As the show progresses, the screen (which has equal billing with the human characters in the script) also displays text and documentary fragments, apparently offering "facts" to illustrate or substantiate the shifting subjective voices of the human characters - sometimes resigned acceptance, sometimes angry protest - exploring the conflict between the security narrative (normal, law-abiding citizens versus terrorists, "keeping you safe") and the privacy narrative (state surveillance versus private individuals with rich inner lives). At the climax of the show, the screen shows the audience, with random members marked with green and red rectangles as if indicating targets of suspicion, perhaps based on behaviour or backstory. (From a technology point of view this looked pretty unsophisticated, but from a dramatic point of view it was sufficient to provoke audience discomfort.)

But if THEY are secretly building a machine, who exactly is THEY?

For Edward Snowden and Proto-type, THEY means governments - mostly the British and American governments, although Pussy Riot is referenced both in the script and in the pink balaclavas. But of course the power behind the machine could also be Google or Facebook, whose machines might possibly (but how would I know?) be much more powerful than those of mere governments.

 And if the machine was so secret, how could such a machine affect "the ability of individuals to think critically, imagine the unimaginable, and engage in thoughtful and critical dialogue"? Surely a much more dangerous machine would be one that seduced people into suspending their critical imagination, a machine that presented us with apparently objective facts, a machine that persuaded us to think with the majority - or at least what it told us was the majority view. (Surely that couldn't happen here?)


In his essay on the relationship between coercion and consent, Wolfgang Streeck refers to
"a huge machinery of coercion, easily the largest and most expensive in history, maintained in readiness for the state of emergency that may one day have to be called"
and chimes with Proto-type in suggesting that cover for the growth of this machinery is provided by the "war on terror", 
 "waged to enable the masses to continue living their pressured lives of competitive production and consumption".

In his 2011 documentary, All Watched Over By Machines of Loving Grace (#AWOBMOLG), Adam Curtis presented a powerful dialectic about technological capitalism. Although there were some logical flaws in his argument, as I pointed out at the time, I think Curtis was correct in identifying some of the key trends, as well as pointing at the multiple centres of power - for example, Madison Avenue, Silicon Valley, Wall Street and Washington. The multiple centres of power (media, technology, corporate, state) were also explored (with rather more academic rigour) at the Power Switch conference in Cambridge in March 2017.

A Machine They're Secretly Building is darker than Curtis (if that were possible) and more narrowly focused. But although one may be justifiably alarmed by state surveillance, the disimagination effect is arguably wreaked more by corporate surveillance, hashtag #YouAreTheProduct. So I'm looking forward to their next show, which I understand will be on economics.






Georges Didi-Huberman, Images in Spite of All: Four Photographs from Auschwitz (Trans. Shane B. Lillis. Chicago: University of Chicago Press, 2008) review by Paul B Jaskot in Journal of Jewish Identities Issue 3, Number 2, July 2010 pp. 93-95

Henry A. Giroux, The Politics of Disimagination and the Pathologies of Power (Truth Out, 27 February 2013)

Glenn Greenwald et al, Edward Snowden: the whistleblower behind the NSA surveillance revelations (Guardian, 11 June 2013)

Laura James, Power Switch - Conference Report (31 March 2017) - liveblog of CRASSH PowerSwitch Conference

Wolfgang Streeck, You need a gun (London Review of Books, 14 December 2017) (subscribers only)

Richard Veryard, All Chewed Over By Machines (26 May 2011) - review of Adam Curtis.
See also Pax Technica (24 November 2017), IOT is coming to town (3 December 2017), Shoshana Zuboff on Surveillance Capitalism (February 2019)

Aylwyn Walsh, Staging the Radical Potential of the Imagination: A Critical Introduction to A Machine they’re Secretly Building (via Academia.edu, undated)

Andrew Westerside and Proto-type Theatre, A Machine they’re Secretly Building (Oberon Modern Plays, 2017)


updated 18 December 2017

Tuesday, June 26, 2007

Something to hide?

There is a perceived conflict between two things. On the one hand, security and law enforcement. On the other hand, privacy and confidentiality.

The advocates of security and law enforcement often dismiss concerns about privacy and confidentiality with the "something to hide" argument. In other words, if you've got nothing to hide, then you've got nothing to worry about. Among other things, this argument is used to defend the proliferation of surveillance and CCTV.

This leads to the grossly unfair assumption that people who are highly protective of their privacy and confidentiality are probably up to no good. This is linked to the popular (but sometimes misleading) POSIWID-related assertion: No Smoke Without Fire.

There are lots of complex and politically charged aspects to the investigation of dealings between British Aerospace (BAe) and members of the Saudi Royal family [BBC News, June 26th 2007]. One of the possible effects of the investigation is that it may disrupt BAe's attempted take-over of the US firm Armor Holdings, which US competitors have viewed with some disfavour.

The Saudi government has always insisted that these dealings should remain confidential. And if the dealings between BAe and the Saudi Royal family are legal, then they have just as much right to confidentiality as any other corporate entity. Let us hope that the advocates of confidentiality in this case (possibly including Tony Blair) are as vocal in their support of the principles of confidentiality and privacy in other cases.

Friday, January 12, 2007

Enchanted Coins

Inspired by Hermione Granger (who distributes magically enchanted coins to the members of a secret society in Harry Potter and the Order of the Phoenix), someone has been passing technologically enhanced Canadian coins to United States defence contractors. [ABC Money, Yahoo]. Or perhaps not [Globe and Mail].

The coins (dubbed "spy coins" by some journalists) contain tiny transmitters, and we may guess they are intended for something to do with espionage or surveillance.

Bruce Schneier thinks the story sounds implausible. "There are far easier ways to track someone than to give him something he's going to give away the next time he buys a cup of coffee. Like, maybe, by his cell phone."

But Bruce's criticism makes three questionable assumptions.

Firstly, it assumes we know exactly what the other side wants to track. [Note 1] Maybe they want to find out whether the coins are spent on coffee or cocaine. Maybe they want to track the circulation of hot money. [Note 2]

Secondly, it assumes that the coins were deliberately planted on the defence contractors. Maybe the real espionage targets had already spent the coins in the coffee shop, and the defence contractors merely chanced to receive the coins in their change. [Note 3]

And thirdly it assumes that there is a relatively small number of these coins. But if there were millions of these spy coins in circulation, it wouldn't matter if some of them were spent in coffee shops.

Meanwhile, what is the purpose of publishing this story in this form? One effect is that patriotic US citizens will get the message that Canadian coins (like Canadian drugs) are to be distrusted. Obviously the bad guys wouldn't dare to doctor US coins would they?

Note 1. Bruce is assuming the purpose and then evaluating whether a given mechanism will satisfy this purpose. The POSIWID alternative is to infer the possible purpose from the likely effects of a given mechanism. In other words, reasoning in the opposite direction.

Note 2: A Nonny makes a similar point. "RFID tags in coins is a stupid way to spy. It is, however, an excellent way to track currency (especially through vending machines and the like). Everything that makes it a weak spy tool makes it a good tool for a mint that is trying to assess coinage usage patterns." clvrmnky disagrees. "I seriously doubt Canada would spend the time and money coming up with tech to track currency usage. They already know how currency is used." This disagreement yields another example of the different kinds of reasoning discussed in Note 1.

Note 3: Perhaps defence contractors have exceptional capability for detecting unusual coinage. Or perhaps the coins were detected when they entered a secure facility. Defence contractors therefore serve as markers for the population as a whole.

Wednesday, September 21, 2005

Surveillance 2

On his way home from Salford University, Robin Wilton has the good fortune to pass Strangeways Prison, and is prompted to blog about the panopticon.

POSIWID teaches us to look at the effects of a system rather than its avowed purpose. I have just re-read a post by Scribe (The two faces of CCTV) [URL updated] in which he discusses and dismisses several avowed (and contradictory) purposes of CCTV. Among other things, surveillance is supposed to teach good behaviour to those being watched. In reality, surveillance often merely teaches more devious or secretive behaviour.

Most of the discussion of panopticon revolves around the people under surveillance. But we should also consider the corrupting effect of panopticon on those doing the watching. See my previous posts Guarding the Guardians and Surveillance and its Effects.

In March/April 2005, there was a lengthy debate between Stefan Brands (from Credentica and McGill) and SuperPat (Pat Patterson of Sun Microsystems) as to whether the Liberty Alliance counted as panoptical. There is a useful index to the debate by Kim Cameron (Microsoft). But this debate revolved largely around the technical features of the Liberty Alliance architecture, and on the hierarchical/network trust relationships. I don't deny that these details are important and interesting, but I don't think they have anything to do with the panopticon.

The panopticon was designed to produce certain effects - certain changes in behaviour in the actors. In my view, this is what is most important in deciding whether to regard something as a metaphorical implementation of the panopticon. I haven't seen any contribution to the Liberty/panopticon debate that identified any such effects.

Wednesday, May 18, 2005

Surveillance and its Effects

Surveillance is a process of keeping people (such as customers and employees, as well as members of the public) under close supervision. What are the effects of surveillance? Here are two answers from an interesting blog (now called Into The Machine) whose main purpose seems to be to critique the authoritarian policies of the UK Home Secretary (past and present).
  • All CCTV monitoring does is lock down the public face of our nation, allowing us in our public capacity to simply sweep aside all the factors that lead to the crime and attitude we're experiencing every day. (The Two Faces of CCTV)
  • Surveillance will always produce nothing but underground revelry and a false sense of security. (The Ubiquity of Unnatural Surveillance)
[update: blog title and URLs changed, content looks the same]

It is clearly important to understand the effects on those being observed. But it is also interesting to note the effects on those doing (or relying upon) the observing.

Jeremy Bentham’s panopticon was originally a prison so designed that the warder could watch all the prisoners at the same time. By extension, this term is used to describe any technical or institutional arrangement to watch/monitor large numbers of people. It forms part of Foucault's analysis of discipline, and provides a useful metaphor for various modern technologies:
  • CCTV
  • workforce monitoring
  • database systems such as customer relationship management (CRM)
  • Google
The panopticon provides surveillance, and may result in a loss of privacy for the people being watched / monitored, but may also make people feel they are being looked after (better quality of service, safer). If you know you’re being watched, this may trigger various feelings – both positive and negative.

Besides the impact on the people being watched, the panopticon often has an adverse effect on the watcher. The panopticon gives the illusion of transparency and completeness – so the watcher comes to believe three fallacies:

  • that everything visible is undistorted truth
  • that everything visible is important
  • that everything important is visible

This is one of the reasons why surveillance mechanisms often become dysfunctional even for those doing the surveillance. For example, instead of customer relationship management (CRM) promoting better relationships with the customer, it becomes a bureaucratic obsession with the content of the customer database.

See also Surveillance 2 (September 2005)