Evaluating causes of algorithmic bias in juvenile criminal recidivism

Artificial Intelligence and Law 29 (2):111-147 (2020)

Abstract

In this paper we investigate risk prediction of criminal re-offense among juvenile defendants using general-purpose machine learning algorithms. We show that in our dataset, containing hundreds of cases, ML models achieve better predictive power than a structured professional risk assessment tool, the Structured Assessment of Violence Risk in Youth (SAVRY), at the expense of not satisfying relevant group fairness metrics that SAVRY does satisfy. We explore in more detail two possible causes of this algorithmic bias that are related to biases in the data with respect to two protected groups, foreigners and women. In particular, we look at differences in the prevalence of re-offense between protected groups and at the influence of protected-group or correlated features on the prediction. Our experiments show that both can lead to disparity between groups on the considered group fairness metrics. We observe that methods to mitigate the influence of either cause do not guarantee fair outcomes. An analysis of feature importance using LIME, a machine learning interpretability method, shows that some mitigation methods can shift the set of features that ML techniques rely on away from demographics and criminal history, which are highly correlated with sensitive features.
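The paper's evaluation code is not reproduced here. As a minimal, illustrative sketch of the kind of group fairness metrics the abstract refers to, the following computes two common quantities for a binary re-offense predictor: the demographic parity gap (difference in positive-prediction rates between two groups) and the false-positive-rate gap (one component of equalized odds). All function names and data below are hypothetical, not taken from the paper:

```python
def rate(flags):
    """Fraction of truthy values in a non-empty list."""
    return sum(flags) / len(flags)

def group_fairness_gaps(y_true, y_pred, group):
    """Compute two group fairness gaps between group 0 and group 1.

    demographic_parity_gap: |P(pred=1 | g=0) - P(pred=1 | g=1)|
    fpr_gap:                |P(pred=1 | true=0, g=0) - P(pred=1 | true=0, g=1)|
    """
    pred_pos, fpr = [], []
    for g in (0, 1):
        # All predictions for members of group g.
        preds = [p for p, gr in zip(y_pred, group) if gr == g]
        # Predictions for group-g members who did NOT re-offend.
        negs = [p for p, t, gr in zip(y_pred, y_true, group)
                if gr == g and t == 0]
        pred_pos.append(rate(preds))
        fpr.append(rate(negs))
    return {
        "demographic_parity_gap": abs(pred_pos[0] - pred_pos[1]),
        "fpr_gap": abs(fpr[0] - fpr[1]),
    }

# Toy example: 8 defendants; `group` encodes a protected attribute.
y_true = [1, 0, 1, 0, 0, 1, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(group_fairness_gaps(y_true, y_pred, group))
```

A predictor "satisfies" a group fairness metric, in the sense used above, when the corresponding gap is (approximately) zero; the paper's point is that the ML models show larger gaps than SAVRY on such metrics.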


Similar books and articles

Democratizing Algorithmic Fairness. Pak-Hang Wong - 2020 - Philosophy and Technology 33 (2):225-244.
Machine Decisions and Human Consequences. Teresa Scantamburlo, Andrew Charlesworth & Nello Cristianini - 2019 - In Karen Yeung & Martin Lodge (eds.), Algorithmic Regulation. Oxford: Oxford University Press.
Bias in algorithmic filtering and personalization. Engin Bozdag - 2013 - Ethics and Information Technology 15 (3):209-227.
Detecting racial bias in algorithms and machine learning. Nicol Turner Lee - 2018 - Journal of Information, Communication and Ethics in Society 16 (3):252-260.
Algorithmic Fairness from a Non-ideal Perspective. Sina Fazelpour & Zachary C. Lipton - 2020 - Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society.
Restorative Conferencing in Thailand: A Resounding Success with Juvenile Crime. Abbey J. Porter - 2009 - Journal for Peace and Justice Studies 18 (1/2):108-112.
Psychological characteristics of juvenile offenders with constant integration problems. Slávka Démuthová - 2012 - Journal for Perspectives of Economic Political and Social Integration 18 (1-2):177-192.