Jacob Browning
New York University
Mark Theunissen
The New School
There is a current debate about whether, and in what sense, machine learning systems used in the medical context need to be explainable. Those arguing in favor contend these systems require post hoc explanations for each individual decision to increase trust and ensure accurate diagnoses. Those arguing against suggest that the high accuracy and reliability of the systems are sufficient for providing epistemically justified beliefs without the need for explaining each individual decision. But, as we show, both solutions have limitations, and it is unclear whether either addresses the epistemic worries of the medical professionals using these systems. We argue these systems do require an explanation, but an institutional explanation. These types of explanations provide the reasons why the medical professional should rely on the system in practice; that is, they focus on trying to address the epistemic concerns of those using the system in specific contexts and on specific occasions. But ensuring that these institutional explanations are fit for purpose means ensuring that the institutions designing and deploying these systems are transparent about the assumptions baked into the system. This requires coordination with experts and end-users concerning how the system will function in the field, the metrics used to evaluate its accuracy, and the procedures for auditing it to prevent biases and failures from going unaddressed. We contend this broader explanation is necessary for either post hoc explanations or accuracy scores to be epistemically meaningful to the medical professional, making it possible for them to rely on these systems as effective and useful tools in their practices.
DOI 10.1007/s10676-022-09649-8



Similar books and articles

Explanations in AI as Claims of Tacit Knowledge. Nardi Lam - 2022 - Minds and Machines 32 (1):135-158.
Interpretability and Unification. Adrian Erasmus & Tyler D. P. Brunet - 2022 - Philosophy and Technology 35 (2):1-6.
Dreaming in the Multilevel Framework. Katja Valli - 2011 - Consciousness and Cognition 20 (4):1084-1090.
Abductive Equivalence in First-Order Logic. Katsumi Inoue & Chiaki Sakama - 2006 - Logic Journal of the IGPL 14 (2):333-346.
Deontic Power and Institutional Contexts. Isabela Fairclough - 2019 - Journal of Argumentation in Context 8 (1):136-171.
What is Interpretability? Adrian Erasmus, Tyler D. P. Brunet & Eyal Fisher - 2021 - Philosophy and Technology 34:833-862.


