Fairness and Risk: An Ethical Argument for a Group Fairness Definition Insurers Can Use

Philosophy and Technology 36 (3):1-31 (2023)

Abstract

Algorithmic predictions offer insurance companies a promising way to develop personalized risk models for determining premiums. In this context, issues of fairness, discrimination, and social injustice can arise: algorithms that estimate risk from personal data may be biased against specific social groups, systematically disadvantaging them. Personalized premiums may thus lead to discrimination and social injustice. It is well known from many application fields that such biases occur frequently and naturally when prediction models are applied to people unless special efforts are made to avoid them; insurance is no exception. In this paper, we provide a thorough analysis of algorithmic fairness in the case of insurance premiums. We ask what “fairness” might mean in this context and how the fairness of a premium system can be measured. To this end, we apply the established fairness frameworks of the fair machine learning literature to the case of insurance premiums and show which of the existing fairness criteria can be used to assess the fairness of premiums. We argue that two of the often-discussed group fairness criteria, independence (also called statistical parity or demographic parity) and separation (also known as equalized odds), are not normatively appropriate for insurance premiums. Instead, we propose the sufficiency criterion (also known as well-calibration) as a morally defensible alternative that allows testing for systematic biases in premiums towards certain groups based on the risk they bring to the pool. In addition, we clarify the connection between group fairness and different degrees of personalization. Our findings enable insurers to assess the fairness properties of their risk models, helping them avoid the reputational damage that potentially unfair and discriminatory premium systems can cause.
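The sufficiency (well-calibration) criterion the abstract endorses has a standard empirical reading: among policyholders assigned similar predicted risk scores, the observed outcome rate (e.g., claim frequency) should be approximately equal across social groups. The following is a minimal illustrative sketch of such a check, not the authors' own method; the function name, quantile binning scheme, and max-gap summary statistic are assumptions made here for illustration.

```python
import numpy as np

def sufficiency_gap(scores, outcomes, groups, n_bins=10):
    """Illustrative sufficiency (calibration) check: bin predicted risk
    scores into quantile bins and, within each bin, compare observed
    outcome rates across groups. Returns the largest between-group gap
    in observed outcome rates over all bins (0.0 = perfectly sufficient
    on this data)."""
    # Quantile-based bin edges over the predicted risk scores.
    bins = np.quantile(scores, np.linspace(0, 1, n_bins + 1))
    # Assign each individual to a bin (interior edges only).
    bin_idx = np.clip(np.digitize(scores, bins[1:-1]), 0, n_bins - 1)
    max_gap = 0.0
    for b in range(n_bins):
        in_bin = bin_idx == b
        rates = []
        for g in np.unique(groups):
            mask = in_bin & (groups == g)
            if mask.sum() > 0:
                # Observed outcome rate for group g within this risk bin.
                rates.append(outcomes[mask].mean())
        if len(rates) >= 2:
            max_gap = max(max_gap, max(rates) - min(rates))
    return max_gap
```

On synthetic data where both groups with high predicted risk actually claim at the same rate, the gap is 0; if one group's observed rate diverges within a score bin, the gap grows, flagging a potential sufficiency violation worth investigating. Checks of independence (equal score distributions across groups) or separation (equal error rates) would be built analogously but, per the paper's argument, are less appropriate for premiums.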


Author

Joachim Baumann, University of Zürich
