Understanding, Idealization, and Explainable AI

Episteme 19 (4):534-560 (2022)

Abstract

Many AI systems that make important decisions are black boxes: how they function is opaque even to their developers. This is due to their high complexity and to the fact that they are trained rather than programmed. Efforts to alleviate the opacity of black box systems are typically discussed in terms of transparency, interpretability, and explainability. However, there is little agreement about what these key concepts mean, which makes it difficult to adjudicate the success or promise of opacity alleviation methods. I argue for a unified account of these key concepts that treats the concept of understanding as fundamental. This allows resources from the philosophy of science and the epistemology of understanding to help guide opacity alleviation efforts. A first significant benefit of this understanding account is that it defuses one of the primary, in-principle objections to post hoc explainable AI (XAI) methods. This “rationalization objection” holds that XAI methods provide mere rationalizations rather than genuine explanations, because they use a separate “explanation” system to approximate the original black box system. These explanation systems function in a completely different way from the original system, yet XAI methods draw inferences about the original system from the behavior of the explanation system. I argue that, if we conceive of XAI methods as idealized scientific models, this rationalization worry is dissolved. Idealized scientific models misrepresent their target phenomena, yet they are capable of providing significant and genuine understanding of their targets.
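
To make the mechanism the abstract describes concrete, the following is a minimal sketch of a post hoc surrogate explanation, assuming scikit-learn. The paper itself contains no code; the models, parameters, and dataset below are illustrative assumptions, not the author's method. A simple, inspectable model is fit to imitate the black box's input/output behavior, and inferences about the black box are drawn from the surrogate's structure.

    # Minimal sketch (not from the paper): a post hoc "explanation" system
    # approximating a black-box model, in the style the abstract describes.
    # Models, parameters, and data are illustrative assumptions.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text
    from sklearn.metrics import accuracy_score

    X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

    # The "black box": complex, trained rather than hand-programmed.
    black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

    # The separate "explanation" system: a shallow, interpretable surrogate
    # trained to mimic the black box's outputs, not the true labels.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X, black_box.predict(X))

    # Fidelity: how closely the surrogate tracks the black box. The surrogate
    # computes in a completely different way, yet we draw inferences about
    # the black box from the surrogate's simpler, inspectable structure.
    fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
    print(f"Surrogate fidelity to black box: {fidelity:.2f}")
    print(export_text(surrogate))

The shallow tree computes nothing like the forest it mimics; that mismatch is exactly what the rationalization objection targets, and what the paper's idealized-model framing is meant to defuse.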

Links

PhilArchive

Similar books and articles

SIDEs: Separating Idealization from Deceptive ‘Explanations’ in xAI. Emily Sullivan - forthcoming - Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency.
Is Explainable AI Responsible AI? Isaac Taylor - forthcoming - AI and Society.
Interpretability and Unification. Adrian Erasmus & Tyler D. P. Brunet - 2022 - Philosophy and Technology 35 (2):1-6.

Analytics

Added to PP: 2022-11-04
Downloads: 332 (#64,170)
Last 6 months: 102 (#48,872)

Author's Profile

Will Fleisher
Georgetown University
