Stacy S. Chen
University of Toronto, St. George Campus
Objective: To examine the role of explainability in machine learning for healthcare (MLHC), and its necessity and significance with respect to effective and ethical MLHC application.

Study Design and Setting: This commentary engages with the growing and dynamic corpus of literature on the use of MLHC and artificial intelligence (AI) in medicine, which provides the context for a focused narrative review of arguments presented in favour of and in opposition to explainability in MLHC.

Results: We find that concerns regarding explainability are not limited to MLHC, but rather extend to numerous well-validated treatment interventions as well as to human clinical judgment itself. We examine the role of evidence-based medicine in evaluating unexplainable treatments and technologies, and highlight the analogy between the concept of explainability in MLHC and the related concept of mechanistic reasoning in evidence-based medicine.

Conclusion: Ultimately, we conclude that the value of explainability in MLHC is not intrinsic, but is instead instrumental to achieving greater imperatives such as performance and trust. We caution against the uncompromising pursuit of explainability, and advocate instead for the development of robust empirical methods to successfully evaluate increasingly inexplicable algorithmic systems.
Keywords: machine learning; explainability; evidence-based medicine; mechanistic reasoning; algorithms; artificial intelligence
DOI 10.1016/j.jclinepi.2021.11.001


Similar books and articles

Consequences of Unexplainable Machine Learning for the Notions of a Trusted Doctor and Patient Autonomy. Michal Klincewicz & Lily Frank - 2020 - Proceedings of the 2nd EXplainable AI in Law Workshop (XAILA 2019), co-located with the 32nd International Conference on Legal Knowledge and Information Systems (JURIX 2019).

Three Problems with Big Data and Artificial Intelligence in Medicine. Benjamin Chin-Yee & Ross Upshur - 2019 - Perspectives in Biology and Medicine 62 (2):237-256.

Should We Be Afraid of Medical AI? Ezio Di Nucci - 2019 - Journal of Medical Ethics 45 (8):556-558.