Legal requirements on explainability in machine learning

Artificial Intelligence and Law 29 (2):149-169 (2020)

Abstract

Deep learning and other black-box models are becoming increasingly popular. Despite their high predictive performance, they may not be ethically or legally acceptable because of their lack of explainability. This paper surveys the growing number of legal requirements on machine learning interpretability and explainability in the context of private and public decision making. It then discusses how those legal requirements can be implemented in machine learning models and concludes with a call for more interdisciplinary research on explainability.

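The page itself carries no code, but as a purely illustrative sketch of what "implementing explainability" can mean in practice, the example below trains an interpretable linear classifier and reports each feature's additive contribution to an individual automated decision. The loan-style features, data, and scikit-learn setup are assumptions made for illustration only and are not taken from the paper.

```python
# Minimal sketch (not from the paper): explainability via an interpretable
# linear model whose per-feature contributions can be reported alongside
# each automated decision. Feature names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income_10k", "debt_ratio", "years_employed"]  # hypothetical
X = np.array([[3.5, 0.40, 2.0],
              [8.2, 0.15, 9.0],
              [5.1, 0.55, 4.0],
              [6.4, 0.20, 7.0]])
y = np.array([0, 1, 0, 1])  # hypothetical loan decisions

model = LogisticRegression().fit(X, y)

def explain(x):
    """Per-feature additive contributions to the decision score (log-odds)."""
    contributions = dict(zip(feature_names, model.coef_[0] * x))
    return contributions, float(model.intercept_[0])

decision = int(model.predict(X[:1])[0])
contribs, intercept = explain(X[0])
print("decision:", decision)
for name, value in sorted(contribs.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"  {name}: {value:+.3f}")
print(f"  intercept: {intercept:+.3f}")
```

A linear model is only one possible design choice; what the sketch illustrates is the trade-off of giving up some predictive performance in exchange for a decision logic that can be disclosed to the person affected, which is the tension between black-box performance and explainability highlighted in the abstract.
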
Similar books and articles

Understanding from Machine Learning Models. Emily Sullivan - 2022 - British Journal for the Philosophy of Science 73 (1):109-133.
Human Semi-Supervised Learning. Bryan R. Gibson, Timothy T. Rogers & Xiaojin Zhu - 2013 - Topics in Cognitive Science 5 (1):132-172.
Model theory and machine learning. Hunter Chase & James Freitag - 2019 - Bulletin of Symbolic Logic 25 (3):319-332.

Analytics

Added to PP: 2020-07-31
Downloads: 115 (#151,882)
Downloads (last 6 months): 64 (#67,914)
