Showing posts with label risk.

Wednesday, June 26, 2019

False Sense of Security

According to the dictionary, a false sense of security is a feeling of being safer than one really is. Apparently that's a bad thing.

Peter Sandman is a strong believer in what he calls precaution advocacy - to arouse some healthy outrage and use it to mobilize people to take precautions or demand precautions. He has helped environmental groups arouse public concern about the need for recycling, the dangers of factory emissions, etc. In such contexts, his concern is that people are disregarding or underestimating some category of risk, and he is urging the introduction of appropriate precautions - whether individual or collective.

There are countless risk and security experts who take a similar position - for example, advocating greater diligence in corporate security, especially cybersecurity.

However, as Dr Sandman acknowledges, the notion of a false sense of security is often used rhetorically, suggesting that a given regulation or other precaution is not only unnecessary but even counter-productive, making people careless or complacent. This argument is sometimes based on the notion of risk homeostasis or risk compensation - that people adjust their behaviour to maintain a comfortable level of risk. The classic example is people with seatbelts and airbags driving faster and more recklessly.

Dr Sandman notes that the rhetoric can sometimes be deployed by both sides of an argument - for example "gun controls create a false sense of security" versus "guns create a false sense of security". What this suggests is that the rhetoric is often about other people - the implication is that We have a true sense of security, but They would be misled.

The notion of a false sense of security also arises in connection with security theatre - a performance that may have little real impact on security, but is intended to reassure people that Something Is Being Done. When Bruce Schneier introduced this term in his 2003 book, he regarded security theatre as fraudulent, and believed it was always a Bad Thing. However, he later came to acknowledge that security theatre, while still deceptive and potentially problematic, could sometimes be valuable. His example is security bracelets on newborn babies, which don't do much to protect against the actual but extremely small risk of abduction, but do a great deal to calm anxious parents. If Dr Sandman's precaution advocacy is targeted at situations of High Hazard, Low Outrage (in other words, people not worrying enough), then Security Theatre could be legitimately targeted at situations of Low Hazard, High Outrage (people worrying too much).

So perhaps sometimes giving people a false sense of security is ethically justified?



Peter Glaskowsky, Bruce Schneier's New View on Security Theater (CNET, 9 April 2008)

Peter Sandman, False Sense of Security (25 May 2018), Precaution Advocacy (undated)

Bruce Schneier, Beyond Fear (2003), In Praise of Security Theatre (Wired, 25 January 2007)

Gerald Wilde, Risk homeostasis theory: an overview (Injury Prevention Vol 4 No 2, 1998)

Wikipedia: Risk Compensation, Security Theatre

Related posts: Surveillance and its Effects (May 2005), Technical Security and Context (September 2005), Hard Cases Make Bad Law (September 2009), The Illusion of Architecture (September 2012), Anxiety as a Cost (January 2013), Listening for Trouble (June 2019), Lie Detectors at Airports (April 2022)


Updated 28 June 2019. Thanks to Peter Sandman for comments.

Saturday, February 09, 2019

Insurance and the Veil of Ignorance

Put simply, the purpose of insurance is to shift risk from the individual to the collective. When an individual cannot afford to bear a given risk, the individual purchases some risk cover from an organization - typically an insurance company or mutual - which spreads the risk over many individuals and is supposedly better able to bear these risks.
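
The arithmetic behind this can be sketched in a few lines. The figures below (a 2% annual chance of a £10,000 loss, a pool of 10,000 drivers) are invented for illustration, not real actuarial data; the point is simply that an uninsured individual faces an all-or-nothing outcome, while each member of a large pool faces a share that stays close to the expected loss.

```python
import random
import statistics

random.seed(42)

# Hypothetical figures: each driver faces a 2% annual chance of a £10,000 loss.
P_LOSS, LOSS = 0.02, 10_000
N_DRIVERS = 10_000  # size of the insurance pool

def annual_loss():
    return LOSS if random.random() < P_LOSS else 0

# Uninsured: one individual observed over many years - all-or-nothing outcomes.
solo = [annual_loss() for _ in range(1_000)]

# Pooled: the insurer spreads each year's total losses evenly across the pool,
# so each member's share stays close to the expected loss of £200.
pooled = [sum(annual_loss() for _ in range(N_DRIVERS)) / N_DRIVERS
          for _ in range(50)]

print(statistics.pstdev(solo))    # large spread: outcomes are £0 or £10,000
print(statistics.pstdev(pooled))  # small spread: shares cluster near £200
```

The spread of the pooled share shrinks roughly with the square root of the pool size, which is why the collective can bear risks that the individual cannot.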

Individuals are sometimes obliged to purchase insurance - for example, car insurance before driving on the public roads, or house insurance before getting a mortgage. In some countries, there may be legal requirements to have some form of health insurance.

Insurance companies typically charge different premiums to different individuals depending on the perceived risk and the available statistics. For example, if young inexperienced drivers and very elderly drivers have more accidents, it would seem fair for these drivers to pay a higher premium.
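
A minimal sketch of such a risk-based premium, assuming invented claim statistics per age band: the premium is the expected annual claim cost, times a loading for the insurer's expenses and margin.

```python
# Hypothetical claim statistics per age band: (probability of a claim,
# average claim size). Illustrative figures only, not real actuarial data.
CLAIM_STATS = {
    "18-24": (0.15, 4_000),  # young, inexperienced drivers
    "25-69": (0.05, 3_000),
    "70+":   (0.10, 3_500),  # very elderly drivers
}

LOADING = 1.25  # markup covering the insurer's expenses and profit margin

def premium(age_band: str) -> float:
    """Risk-based premium: expected annual claim cost times the loading."""
    p_claim, avg_claim = CLAIM_STATS[age_band]
    return p_claim * avg_claim * LOADING

for band in CLAIM_STATS:
    print(band, premium(band))  # the riskiest band pays the most
```

On these made-up numbers, the youngest band pays £750 a year against £187.50 for the middle band - which is exactly the "fairness" that the two problems below call into question.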

Insurance companies therefore try to obtain as much information about the individual as possible, in order to calculate the correct premium, or even to decide whether to offer cover at all. But this is problematic for two reasons.

The first problem is about fairness, as these calculations may embed various forms of deliberate or inadvertent discrimination. As Joi Ito explains,
The original idea of risk spreading and the principle of solidarity was based on the notion that sharing risk bound people together, encouraging a spirit of mutual aid and interdependence. By the final decades of the 20th century, however, this vision had given way to the so-called actuarial fairness promoted by insurance companies to justify discrimination.
The second problem is about knowledge and what Foucault calls biopower. Just suppose your insurance company is monitoring your driving habits through sensors in the vehicle or cameras in the street, knows how much red meat you are eating, knows your drinking habits through the motion and location sensors on your phone, is inferring your psychological state from your Facebook profile, and has complete access to your fitness tracker and your DNA. If the insurance company now has so much data about you that it can accurately predict car accidents, ill-health and death, the amount of risk actually taken by the insurance company is minimized, and the risk is thrown back onto the individual who is perceived (fairly or unfairly) as high-risk.

In her latest book, Shoshana Zuboff describes how insurance companies are using the latest technologies, including the Internet of Things, not only to monitor drivers but also to control them.
Telematics are not intended merely to know but also to do (economics of action). They are hammers; they are muscular; they enforce. Behavioral underwriting promises to reduce risk through machine processes designed to modify behavior in the direction of maximum profitability. Behavioral surplus is used to trigger punishments, such as real-time rate hikes, financial penalties, curfews, and engine lockdowns, or rewards, such as rate discounts, coupons, and gold stars to redeem for future benefits. The consultancy firm AT Kearney anticipates 'IoT enriched relationships' to connect 'more holistically' with customers 'to influence their behaviors'. (p215)

So much for risk sharing then. Surely this undermines the whole point of insurance?



Sami Coll, Consumption as Biopower: Governing Bodies with Loyalty Cards (Journal of Consumer Culture 13(3) 2013) pp 210-220

Caley Horan, Actuarial age: insurance and the emergence of neoliberalism in the postwar United States (PhD Thesis 2011)

Joi Ito, Supposedly ‘Fair’ Algorithms Can Perpetuate Discrimination (Wired Magazine, 5 February 2019) HT @WolfieChristl @zeynep

AT Kearney, The Internet of Things: Opportunity for Insurers (2014)

Cathy O'Neil, How algorithms rule our working lives (The Guardian, 1 September 2016)

Jathan Sadowski, Alarmed by Admiral's data grab? Wait until insurers can see the contents of your fridge (The Guardian, 2 November 2016)

Carissa Véliz, If AI Is Predicting Your Future, Are You Still Free? (Wired, 27 December 2021)

Shoshana Zuboff, The Age of Surveillance Capitalism (Profile Books 2019) esp pages 212-218


Stanford Encyclopedia of Philosophy: Foucault

Related posts: The Transparency of Algorithms (October 2016), Pay as you Share (November 2016), Shoshana Zuboff on Surveillance Capitalism (Book Review, February 2019)

 

Update: I have just come across a journal special issue on the Personalization of Insurance (Big Data and Society, November 2020). I note that the editorial starts with the same Zuboff quote that I used here. Also adding link to a recent article by Professor Véliz.

Friday, May 23, 2014

Smart Guns

Just suppose that smart guns were safer than ordinary guns.

Suppose, for example, that someone grabbed your gun and tried to point it at you. If it were a smart gun, you'd be perfectly safe, because a fool-proof mechanism prevents its use by an unauthorized person.

As with any technological advance, some people are sceptical. How much do you trust new technology? Suppose the gun doesn't work when you need it. An electromagnetic pulse (triggered by terrorists or natural solar activity) might take out all weapons in the area. Or maybe the bad guys (or the FBI) can hack into this mechanism and disable your gun before they attack you.

Meanwhile, like many technological advances, there are political implications. In the USA, the key question is whether such a mechanism might help reduce gun violence. Some gun control activists think such a mechanism would be pretty irrelevant.

But that doesn't stop the gun rights activists freaking out at the prospect of any damn technology on their precious weaponry. A shop owner in the US claims to have received death threats from pro-gun lobbyists for offering to sell the weapons. Meanwhile, as Joseph Steinberg suggests, an obsession with smart guns may inhibit other technological innovations that could make guns and gun-owning safer.

The belief here is that once these smart guns are available, by a process of technological determinism, they will become irresistible to legislators. Before long, they fear, you won't be able to buy regular guns.

Obviously that's a cause worth killing for.




David Kopel, Brady Center lawsuit to use “smart” gun mandate to trigger handgun ban in New Jersey (Washington Post 22 May 2014)

Karen McVeigh, Gun control groups accuse New Jersey of ignoring 'smart gun' law (Guardian 21 May 2014)

Michael S. Rosenwald, Maryland dealer, under pressure from gun-rights activists, drops plan to sell smart gun (Washington Post, 1 May 2014)

Joseph Steinberg, Why You Should Be Concerned About The New 'Smart Guns' (Whether You Love Or Hate Guns) (Forbes 4 May 2014)

Nicholas Tufnell, Smart guns: How smart are they? (BBC News, 23 May 2014)

Eugene Volokh, Smart guns, electromagnetic pulse, and planning for unknown-probability dangers (Washington Post 23 May 2014)


See also Batman/Catwoman: Trail of the Gun (hat tip @ChBrain).

Related post Technological Determinism (December 2020)

Friday, August 27, 2010

Does Basel want bigger banks?

@Peston via @diane1859 "Basel tilts playing field further towards big banks. It wasn't supposed to be this way."

Here's the dilemma. The official purpose of banking regulation is to protect the whole system from risk. But the actual effect of controlling each bank separately (whether in terms of capital ratios or any other measure) is to encourage the concentration of risk, which makes the whole system more risky.

Counter-productive regulation is an extremely common phenomenon. Stafford Beer's POSIWID principle tells us that the de facto purpose of a complex system is often at odds with the official purpose.

True systems thinking on the part of legislators and regulators might lead us to more effective and appropriate ways of regulating the whole system than simply imposing controls on the individual players within the system. But what's the chance of that happening?


Jane Merriman, Big banks winners from new contingent capital move (Reuters, 27 August 2010)


Related posts Does Britain need smaller banks? (April 2009), Bearing Limit and Financial Regulation (September 2010)

Thursday, November 08, 2007

Childhood Diseases

Should children be protected against minor diseases? Or are childhood diseases a normal (and perhaps even necessary) part of growing up? 

Some doctors are now recommending routine vaccination against chicken pox (varicella) - there is a suggestion that it might be combined with the (already controversial) MMR vaccine to produce MMRV [BBC News November 8th 2007]. 

Chicken pox killed six children in the UK and Irish Republic last year, and there were 112 cases involving severe complications. So there is certainly a risk. But is this risk high enough to warrant action? Each mass vaccination campaign has

  • financial costs - could the same resources deployed elsewhere have delivered greater medical benefits to a greater number of people?
  • medical side-effects - possible negative reactions to the vaccination itself in some children, reduced protection against related diseases such as shingles
  • social costs - fear of side-effects (whether founded or unfounded) reducing the take-up of all vaccines, not just this one

But I have a more general concern. If this proposal makes sense, then it would make sense for every other infectious disease that kills a small number of people every year. Medical researchers think they understand the effect of a single vaccine on the human immune system, or even a compound vaccine such as MMR. But how would it be if a child never got ill, because every possible disease was preempted by vaccination? Would the immune system develop normally, or would it be weak from lack of exercise? Would new diseases emerge to fill the gap? Will medical research tell us the answers to these questions before it is too late? 

Childhood disease involves some suffering, and a tiny risk of complications and even death, and most parents accept that. If I wanted to protect my children totally from any suffering or risk, then they wouldn't learn to cross the road or ride a bicycle or climb trees; they wouldn't be allowed to use the kettle or the toaster, or bathe in more than 3cm of water; and they certainly wouldn't have any contact with the opposite sex until they were at least 25 years old. This is of course ridiculous - I would be condemning them to a life-without-life. 

While my heart goes out to those parents who have lost their children to childhood disease, I don't think the answer is to eliminate childhood disease altogether. It is a normal part of growing up: it develops the immune system, and equally importantly it develops confidence in the immune system. A child can feel poorly one week, with spots all over her face, and then be back at school the following week: this experience engenders a deep belief in your ability to recover, a belief that however bad you feel right now, you should feel better tomorrow. 

What doesn't kill you makes you stronger.

Update

The Chief Executive of RoSPA (Royal Society for the Prevention of Accidents) made a similar point in his 2007 Annual Report.
"Rather than adopt the extremist protectionism of ‘cotton wool kids’ our argument is that a skinned knee or a twisted ankle in a challenging and exciting play environment is not just acceptable, it is a positive necessity in order to educate our children and to prepare them for a complex, dangerous world, in which healthy, robust activity is more a national need than ever before."

Of course this is not an argument dismissing safety precautions altogether, and my blogpost should not be read as an argument against all vaccinations - merely an argument against the extreme idea that we need to vaccinate against every possible condition.

See also

Gever Tulley: 5 dangerous things you should let your kids do (Ted Talks, March 2007)

Tuesday, January 20, 2004

Complexity: Devious Management and Investment Risk

cross-posted from System Viability and Corporate Governance



A common feature of much recent malfeasance is the construction of highly complex corporate structures, apparently beyond the legitimate demands of Requisite Variety.

Complexity – whether requisite or otherwise – adds to investment risk. Dodgy management practices may cause a continuous erosion of corporate value, or they may trigger a sudden collapse of value (Enron, Parmalat). Some companies have a sufficiently robust business model that they remain viable even with a certain level of malfeasance. Other companies turn out to be merely pseudo-viable – only remaining solvent thanks to dodgy accounting. Some investors may be willing to tolerate erosion, but do not wish to be confronted with sudden collapse.

Complexity is (or should be) a warning sign. The purpose of complexity is what it does. If it doesn’t serve a legitimate purpose, then it is surely reasonable to assume it is there to serve some other agenda.

Using the theory of complexity, we should be able to construct geological maps of the corporate world, showing (probabilistically) where it might be worth drilling for the next accounting black hole.

For example, it now seems that Parmalat was non-viable, only sustained in pseudo-viability by paper cashflows from a non-existent bank account. But such transactions can only be concealed by having lots of apparently genuine intra-company transactions. There is therefore a control mechanism that forces complexity onto the company at the operational level, and a higher level mechanism that manages the smoke and mirrors. It might not be easy to detect the fraud by looking at the operational company alone; it may be the existence of the control mechanisms that gives the game away.

Conversely, if a management is obliged to construct and present evidence of its bona fides, this evidence needs to include a properly grounded account of the control mechanisms, including a justification of the degrees of complexity and intracompany coupling.