Assuring AI safety: fallible knowledge and the Gricean maxims

AI and Ethics 1–14 (2024)

Abstract

In this paper we argue that safety claims, when justified by a safety case, are descriptive, fallible knowledge claims. Even if the aim of a safety case were to justify infallible knowledge about the safety of a system, such infallible safety knowledge is impossible to attain for AI-enabled systems: by their nature, AI-enabled systems preclude the possibility of obtaining infallible knowledge concerning their safety or lack thereof. We suggest that knowledge of an AI-enabled system's safety can be communicated by structuring the exchange according to Paul Grice's Cooperative Principle, which is achieved via adherence to the Gricean maxims of communication. Furthermore, these same maxims can be used to evaluate the calibre of the exchange, the aim being to ensure that the communication of knowledge about an AI-enabled system's safety is of the highest calibre; in short, that the communication is relevant, of sufficient quantity and quality, and perspicuous. The high-calibre communication of safety claims to an epistemically diverse group of stakeholders is vitally important given the increasingly participatory nature of AI-enabled system design, development and assessment.

Links

PhilArchive


Author's Profile

Marten H. L. Kaas
Charité Universitätsmedizin Berlin
