Saturday, February 09, 2019

Insurance and the Veil of Ignorance

Put simply, the purpose of insurance is to shift risk from the individual to the collective. When individuals cannot afford to bear a given risk themselves, they purchase cover from an organization - typically an insurance company or mutual - which spreads the risk across many policyholders and is supposedly better placed to bear it.
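To make the pooling idea concrete, here is a minimal sketch (assumed probabilities and loss amounts, not any insurer's actual model) of why a large pool can absorb a loss that no single member could:

```python
# A toy simulation of risk pooling: each individual faces a small chance of a
# loss they cannot absorb alone, but the average loss across a large pool is
# highly predictable (law of large numbers).
import random

random.seed(42)

POOL_SIZE = 100_000
LOSS_PROBABILITY = 0.01   # assumed 1% chance of a claim per year
LOSS_AMOUNT = 20_000      # assumed size of the loss

losses = [LOSS_AMOUNT if random.random() < LOSS_PROBABILITY else 0
          for _ in range(POOL_SIZE)]

expected_loss_per_person = LOSS_PROBABILITY * LOSS_AMOUNT   # 200
average_loss_in_pool = sum(losses) / POOL_SIZE

print(f"Worst case for one individual: {LOSS_AMOUNT}")
print(f"Expected loss per person:      {expected_loss_per_person:.2f}")
print(f"Actual average loss in pool:   {average_loss_in_pool:.2f}")
# The pool's average loss sits close to the expected value, so a premium of
# roughly 200 (plus expenses and margin) covers everyone - no single member
# has to hold 20,000 in reserve.
```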

Individuals are sometimes obliged to purchase insurance - for example, car insurance before driving on the public roads, or house insurance before getting a mortgage. In some countries, there may be legal requirements to have some form of health insurance.

Insurance companies typically charge different premiums to different individuals depending on the perceived risk and the available statistics. For example, if young inexperienced drivers and very elderly drivers have more accidents, it would seem fair for these drivers to pay a higher premium.

Insurance companies therefore try to obtain as much information about the individual as possible, in order to calculate the correct premium, or even to decide whether to offer cover at all. But this is problematic for two reasons.

The first problem is about fairness, as these calculations may embed various forms of deliberate or inadvertent discrimination. As Joi Ito explains,
The original idea of risk spreading and the principle of solidarity was based on the notion that sharing risk bound people together, encouraging a spirit of mutual aid and interdependence. By the final decades of the 20th century, however, this vision had given way to the so-called actuarial fairness promoted by insurance companies to justify discrimination.
The second problem is about knowledge and what Foucault calls biopower. Just suppose your insurance company is monitoring your driving habits through sensors in the vehicle or cameras in the street, knows how much red meat you are eating, knows your drinking habits through the motion and location sensors on your phone, is inferring your psychological state from your Facebook profile, and has complete access to your fitness tracker and your DNA. If the insurance company now has so much data about you that it can accurately predict car accidents, ill-health and death, the amount of risk actually taken by the insurance company is minimized, and the risk is thrown back onto the individual who is perceived (fairly or unfairly) as high-risk. A simple illustration of this shift follows.
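Here is a toy illustration (assumed figures, not an actuarial model) of how better prediction shifts risk back onto the individual: with no individual data everyone pays the pooled average, while with perfect prediction each premium converges to that person's own expected loss and the cross-subsidy between members disappears.

```python
# Two drivers with different (assumed) accident probabilities for the same loss.
LOSS_AMOUNT = 20_000
risk = {"low_risk_driver": 0.005, "high_risk_driver": 0.05}

# Community rating: no individual data, so everyone pays the pooled average.
pooled_premium = sum(p * LOSS_AMOUNT for p in risk.values()) / len(risk)
print(f"Community-rated premium for everyone: {pooled_premium:.2f}")

# Fully personalized rating: each premium equals that person's expected loss.
for person, p in risk.items():
    individual_premium = p * LOSS_AMOUNT
    cross_subsidy = pooled_premium - individual_premium
    print(f"{person}: personalized premium {individual_premium:.2f} "
          f"(cross-subsidy under pooling was {cross_subsidy:+.2f})")
# As prediction accuracy approaches certainty, the 'insurance' element - the
# transfer between lucky and unlucky members - tends towards zero.
```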

In her latest book, Shoshana Zuboff describes how insurance companies are using the latest technologies, including the Internet of Things, not only to monitor drivers but also to control them.
Telematics are not intended merely to know but also to do (economics of action). They are hammers; they are muscular; they enforce. Behavioral underwriting promises to reduce risk through machine processes designed to modify behavior in the direction of maximum profitability. Behavioral surplus is used to trigger punishments, such as real-time rate hikes, financial penalties, curfews, and engine lockdowns, or rewards, such as rate discounts, coupons, and gold stars to redeem for future benefits. The consultancy firm AT Kearney anticipates 'IoT enriched relationships' to connect 'more holistically' with customers 'to influence their behaviors'. (p215)

So much for risk sharing then. Surely this undermines the whole point of insurance?



Sami Coll, Consumption as Biopower: Governing Bodies with Loyalty Cards (Journal of Consumer Culture 13(3), 2013) pp 210-220

Caley Horan, Actuarial age: insurance and the emergence of neoliberalism in the postwar United States (PhD Thesis 2011)

Joi Ito, Supposedly ‘Fair’ Algorithms Can Perpetuate Discrimination (Wired Magazine, 5 February 2019) HT @WolfieChristl @zeynep

AT Kearney, The Internet of Things: Opportunity for Insurers (2014)

Cathy O'Neil, How algorithms rule our working lives (The Guardian, 1 September 2016)

Jathan Sadowski, Alarmed by Admiral's data grab? Wait until insurers can see the contents of your fridge (The Guardian, 2 November 2016)

Carissa Véliz, If AI Is Predicting Your Future, Are You Still Free? (Wired, 27 December 2021)

Shoshana Zuboff, The Age of Surveillance Capitalism (Profile Books 2019) esp pages 212-218


Stanford Encyclopedia of Philosophy: Foucault

Related posts: The Transparency of Algorithms (October 2016), Pay as you Share (November 2016), Shoshana Zuboff on Surveillance Capitalism (Book Review, February 2019)


Update: I have just come across a journal special issue on the Personalization of Insurance (Big Data and Society, November 2020). I note that the editorial starts with the same Zuboff quote that I used here. Also adding link to a recent article by Professor Véliz.
