Why Attention is Not Explanation: Surgical Intervention and Causal Reasoning about Neural Models

Proceedings of the 12th Conference on Language Resources and Evaluation (2020)

Abstract

As the demand for explainable deep learning grows in the evaluation of language technologies, so does the value of a principled grounding for those explanations. Here we study the state of the art in explanation for neural models for natural-language processing (NLP) tasks from the viewpoint of philosophy of science. We focus on recent evaluation work that finds brittleness in explanations obtained through attention mechanisms. We harness philosophical accounts of explanation to suggest broader conclusions from these studies. From this analysis, we assert the impossibility of causal explanations from attention layers over text data. We then introduce NLP researchers to contemporary philosophy of science theories that allow robust yet non-causal reasoning in explanation, giving computer scientists a vocabulary for future research.

Links

PhilArchive


Similar books and articles

Abstract versus Causal Explanations? Alexander Reutlinger & Holly Andersen - 2016 - International Studies in the Philosophy of Science 30 (2):129-146.
Eight Other Questions about Explanation. Angela Potochnik - 2018 - In Alexander Reutlinger & Juha Saatsi (eds.), Explanation Beyond Causation: Philosophical Perspectives on Non-Causal Explanations. Oxford, United Kingdom: Oxford University Press.
Explanations and Candidate Explanations in Physics. Martin King - 2020 - European Journal for Philosophy of Science 10 (1):1-17.

Analytics

Added to PP
2020-06-23


Author's Profile

Julia Bursten
University of Kentucky
