Explainable machine learning practices: opening another black box for reliable medical AI

AI and Ethics: 1-14 (2022)

Abstract

In the past few years, machine learning (ML) tools have been implemented with success in the medical context. However, several practitioners have raised concerns about the lack of transparency—at the algorithmic level—of many of these tools; and solutions from the field of explainable AI (XAI) have been seen as a way to open the ‘black box’ and make the tools more trustworthy. Recently, Alex London has argued that in the medical context we do not need machine learning tools to be interpretable at the algorithmic level to make them trustworthy, as long as they meet some strict empirical desiderata. In this paper, we analyse and develop London’s position. In particular, we make two claims. First, we claim that London’s solution to the problem of trust can potentially address another problem: how to evaluate the reliability of ML tools in medicine for regulatory purposes. Second, we claim that to deal with this problem, we need to develop London’s views by shifting the focus from the opacity of algorithmic details to the opacity of the way in which ML tools are trained and built. We argue that to regulate AI tools and evaluate their reliability, agencies need an explanation of how ML tools have been built, which requires documenting and justifying the technical choices that practitioners have made in designing such tools. This is because different algorithmic designs may lead to different outcomes and to the realization of different purposes. However, given that the technical choices underlying algorithmic design are shaped by value-laden considerations, opening the black box of the design process also means making transparent, and motivating, the (technical and ethical) values and preferences behind such choices. Using tools from philosophy of technology and philosophy of science, we elaborate a framework showing what an explanation of the training process of ML tools in medicine should look like.

Links

PhilArchive


Similar books and articles

What is Interpretability? Adrian Erasmus, Tyler D. P. Brunet & Eyal Fisher - 2021 - Philosophy and Technology 34:833–862.
Should we be afraid of medical AI? Ezio Di Nucci - 2019 - Journal of Medical Ethics 45 (8):556-558.
Consequences of unexplainable machine learning for the notions of a trusted doctor and patient autonomy. Michal Klincewicz & Lily Frank - 2020 - Proceedings of the 2nd EXplainable AI in Law Workshop (XAILA 2019), co-located with the 32nd International Conference on Legal Knowledge and Information Systems (JURIX 2019).

Analytics

Added to PP
2022-02-16

Downloads
112 (#158,394)

6 months
22 (#122,772)



Citations of this work

AI as an Epistemic Technology. Ramón Alvarado - 2023 - Science and Engineering Ethics 29 (5):1-30.
Explainability, Public Reason, and Medical Artificial Intelligence. Michael Da Silva - 2023 - Ethical Theory and Moral Practice 26 (5):743-762.
Allure of Simplicity. Thomas Grote - 2023 - Philosophy of Medicine 4 (1).

