Explanation–Question–Response dialogue: An argumentative tool for explainable AI

Argument and Computation:1-23 (forthcoming)

Abstract

Advancements and deployments of AI-based systems, especially Deep Learning-driven generative language models, have accomplished impressive results over the past few years. Nevertheless, these remarkable achievements are intertwined with a related fear that such technologies might lead to a general relinquishing of control over our lives to AIs. This concern, which also motivates the increasing interest in the eXplainable Artificial Intelligence (XAI) research field, is mostly caused by the opacity of the output of deep learning systems and of the way it is generated, which remains largely obscure to laypeople. A dialectical interaction with such systems may enhance users' understanding and build more robust trust towards AI. Commonly employed as formalisms for modelling inter-agent communication, dialogue games prove to be useful tools when dealing with users' explanation needs. The literature already offers some dialectical protocols that expressly handle explanations and their delivery. This paper fully formalises the novel Explanation–Question–Response (EQR) dialogue and its properties. Its main purpose is to provide satisfactory information (i.e., information justified according to argumentative semantics) whilst ensuring a protocol that is simpler, for both humans and artificial agents, than other existing approaches.
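The abstract characterises EQR only at a high level: a dialogue whose moves (explanations, questions, responses) must deliver information that is justified under argumentative semantics. As a rough, hypothetical sketch of that idea, and not the paper's actual formalisation, the Python snippet below implements a toy three-move exchange on top of Dung's grounded semantics; the names (grounded_extension, Move, EQRDialogue) and the specific legality rules are illustrative assumptions.

```python
# Toy sketch of an EQR-style exchange, assuming a Dung-style abstract
# argumentation framework and grounded semantics as the justification notion.
# These names and rules are illustrative, not the paper's protocol.
from dataclasses import dataclass
from typing import Set, Tuple

Argument = str
Attack = Tuple[Argument, Argument]  # (attacker, target)


def grounded_extension(args: Set[Argument], attacks: Set[Attack]) -> Set[Argument]:
    """Least fixed point of the characteristic function (grounded semantics)."""
    extension: Set[Argument] = set()
    while True:
        # An argument is acceptable if every one of its attackers is
        # itself attacked by the current extension.
        acceptable = {
            a for a in args
            if all(any((d, b) in attacks for d in extension)
                   for (b, t) in attacks if t == a)
        }
        if acceptable == extension:
            return extension
        extension = acceptable


@dataclass
class Move:
    speaker: str      # "system" (explainer) or "user" (explainee)
    kind: str         # "explanation" | "question" | "response"
    content: Argument


class EQRDialogue:
    """Toy protocol: the system opens with an explanation, the user may
    question it, and the system may only assert arguments that are justified
    under grounded semantics -- a simplification of the EQR idea."""

    def __init__(self, args: Set[Argument], attacks: Set[Attack]):
        self.justified = grounded_extension(args, attacks)
        self.history: list[Move] = []

    def explain(self, claim: Argument) -> Move:
        assert claim in self.justified, "system may only assert justified arguments"
        move = Move("system", "explanation", claim)
        self.history.append(move)
        return move

    def question(self, about: Argument) -> Move:
        move = Move("user", "question", about)
        self.history.append(move)
        return move

    def respond(self, support: Argument) -> Move:
        assert self.history and self.history[-1].kind == "question", \
            "a response must answer the user's last question"
        assert support in self.justified, "responses must also be justified"
        move = Move("system", "response", support)
        self.history.append(move)
        return move


if __name__ == "__main__":
    # Tiny worked example: a model output "o" is doubted by "d",
    # which is in turn defeated by "c", so "o" and "c" are justified.
    dlg = EQRDialogue({"o", "d", "c"}, {("d", "o"), ("c", "d")})
    dlg.explain("o")     # system presents the justified explanation
    dlg.question("o")    # user challenges it
    dlg.respond("c")     # system replies with a justified defender
    print([(m.speaker, m.kind, m.content) for m in dlg.history])
```

The example plays out one minimal exchange: the system explains a justified claim, the user questions it, and the system responds with a justified defender. A full EQR protocol would of course specify richer locutions, commitment rules and termination conditions than this sketch.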

Links

PhilArchive





Similar books and articles

The virtues of interpretable medical artificial intelligence. Joshua Hatherley, Robert Sparrow & Mark Howard - forthcoming - Cambridge Quarterly of Healthcare Ethics: 1-10.
Is Explainable AI Responsible AI? Isaac Taylor - forthcoming - AI and Society.
A new dialectical theory of explanation. Douglas Walton - 2004 - Philosophical Explorations 7 (1): 71-89.
Cultural Bias in Explainable AI Research. Uwe Peters & Mary Carman - forthcoming - Journal of Artificial Intelligence Research.

Analytics

Added to PP: 2024-03-24

Downloads: 15 (#975,816)
Downloads (6 months): 15 (#185,003)


Citations of this work

No citations found.

