Accuracy and Interpretability: Struggling with the Epistemic Foundations of Machine Learning-Generated Medical Information and Their Practical Implications for the Doctor-Patient Relationship

Philosophy and Technology 35 (1):1-20 (2022)

Abstract

The initial successes of recent years in harnessing machine learning (ML) technologies to improve medical practice and benefit patients have attracted attention across a wide range of healthcare fields. In particular, such improvements are expected to come from providing automated decision recommendations to the treating clinician. Some of the hopes placed in such ML-based systems for healthcare, however, seem unwarranted, at least partly because of their inherent lack of transparency, even though their results appear convincing in terms of accuracy and reliability. Skepticism arises when the physician, as the agent responsible for the implementation of diagnosis, therapy, and care, is unable to access how findings and recommendations are generated. There is widespread agreement that, in general, complete traceability is preferable to opaque recommendations; however, opinions differ on how to deal with ML-based systems whose functioning remains opaque to some degree, even as so-called explicable or interpretable systems attract increasing interest. This essay examines the epistemic foundations of ML-generated information specifically and of medical knowledge generally in order to advocate differentiating clinical decision-making situations according to the depth of insight they require into the process of information generation. Empirically accurate or reliable outcomes are sufficient for some decision situations in healthcare, whereas other clinical decisions require extensive insight into how ML-generated outcomes are produced because of their inherently normative implications.

Links

PhilArchive




Similar books and articles

What is Interpretability? Adrian Erasmus, Tyler D. P. Brunet & Eyal Fisher - 2021 - Philosophy and Technology 34:833–862.
Consequences of unexplainable machine learning for the notions of a trusted doctor and patient autonomy. Michal Klincewicz & Lily Frank - 2020 - Proceedings of the 2nd EXplainable AI in Law Workshop (XAILA 2019), co-located with the 32nd International Conference on Legal Knowledge and Information Systems (JURIX 2019).
From privacy to anti-discrimination in times of machine learning. Thilo Hagendorff - 2019 - Ethics and Information Technology 21 (4):331-343.
Machine Learning and Job Posting Classification: A Comparative Study. Ibrahim M. Nasser & Amjad H. Alzaanin - 2020 - International Journal of Engineering and Information Systems (IJEAIS) 4 (9):06-14.

Analytics

Added to PP: 2022-01-29
Downloads: 27 (#576,320)
Downloads, last 6 months: 9 (#295,075)
