Justificatory explanations in machine learning: for increased transparency through documenting how key concepts drive and underpin design and engineering decisions

AI and Society 39 (1):279-293 (2024)

Abstract

Given the pervasiveness of AI systems and their potential negative effects on people’s lives (especially among already marginalised groups), it becomes imperative to comprehend what goes on when an AI system generates a result and on what grounds that result is reached. There are sustained technical efforts to make systems more “explainable” by reducing their opaqueness and increasing their interpretability and explainability. In this paper, we explore an alternative, non-technical approach towards explainability that complements existing ones. Leaving aside technical, statistical, or data-related issues, we focus on the conceptual underpinnings of the design decisions made by developers and other stakeholders during the lifecycle of a machine learning project. For instance, the design and development of an app that tracks snoring to detect possible health risks presupposes some picture or other of “health”, a key notion that conceptually underpins the project. We take it as a premise that these key concepts are necessarily present during design and development, albeit perhaps tacitly. We argue that by providing “justificatory explanations” about how the team understands the relevant key concepts behind its design decisions, interested parties could gain valuable insights and make better sense of the workings and outcomes of systems. Using the concept of “health”, we illustrate how a particular understanding of it might influence decisions during the design and development stages of a machine learning project, and how making this explicit by incorporating it into ex-post explanations might increase their explanatory and justificatory power. We posit that a greater conceptual awareness of the key concepts that underpin design and development decisions may benefit any attempt to develop explainability methods. We recommend that “justificatory explanations” be provided as technical documentation. These are declarative statements that, at their simplest, contain: (1) a high-level account of the team’s understanding of the key concepts relevant to the project’s main domain, (2) an account of how these understandings drive decision-making during the life-cycle stages, and (3) the reasons (which could be implicit in the account) that the person or persons giving the explanation consider to have plausible justificatory power for the decisions made during the project.
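As a rough illustration of the three components (1)–(3) described above, the following sketch shows one possible way a “justificatory explanation” could be recorded as structured technical documentation. The JustificatoryExplanation class, its field names, and the snoring-app values are hypothetical assumptions introduced for illustration; they are not taken from the paper.

# A minimal sketch (not from the paper) of recording a justificatory
# explanation as technical documentation for a machine learning project.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class JustificatoryExplanation:
    # (1) High-level account of the team's understanding of the key concept(s)
    key_concepts: Dict[str, str]
    # (2) How those understandings drove decisions across life-cycle stages
    decisions: List[str] = field(default_factory=list)
    # (3) Reasons the explainer takes to justify those decisions
    reasons: List[str] = field(default_factory=list)


# Illustrative (hypothetical) instance for the snoring-tracking app example
example = JustificatoryExplanation(
    key_concepts={
        "health": "construed narrowly as the absence of sleep-disordered "
                  "breathing, rather than overall well-being"
    },
    decisions=[
        "Data collection: record only night-time audio, not daytime fatigue reports",
        "Labelling: annotate snoring episodes by apnoea risk, not by sleep quality",
    ],
    reasons=[
        "A clinical, risk-detection reading of 'health' prioritises early "
        "identification of apnoea over broader lifestyle feedback",
    ],
)

Such a record could accompany a project’s other documentation so that interested parties can see which understanding of the key concept shaped each design decision and why the team took that understanding to justify it.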

Links

PhilArchive

Similar books and articles

Explaining Explanations in AI. Brent Mittelstadt - forthcoming - FAT* 2019 Proceedings 1.
Big data and algorithmic decision-making. Paul B. de Laat - 2017 - ACM SIGCAS Computers and Society 47 (3):39-53.
The Use and Misuse of Counterfactuals in Ethical Machine Learning. Atoosa Kasirzadeh & Andrew Smart - 2021 - In ACM Conference on Fairness, Accountability, and Transparency (FAccT 21).
Transparency in Complex Computational Systems. Kathleen A. Creel - 2020 - Philosophy of Science 87 (4):568-589.
Concept Representation Analysis in the Context of Human-Machine Interactions. Farshad Badie - 2016 - In 14th International Conference on e-Society. pp. 55-61.
