Toward safe AI

AI and Society 38 (2):685-696 (2023)

Abstract

Because some AI algorithms with high predictive power can impact human integrity, safety has become a crucial challenge in adopting and deploying AI. Although it is impossible to prevent an algorithm from ever failing at complex tasks, it is crucial to ensure that it fails safely, especially in critical systems. Moreover, given AI's unbridled development, it is imperative to minimize the methodological gaps in the engineering of these systems. This paper uses the well-known Box-Jenkins method for statistical modeling as a framework to identify engineering pitfalls in the adjustment and validation of AI models. Step by step, we point out state-of-the-art strategies and good practices for tackling these engineering drawbacks. In the final step, we integrate an internal and external validation scheme that can support an iterative evaluation of the normative, perceived, substantive, social, and environmental safety of AI systems.


Similar books and articles

Call for papers. [author unknown] - 2018 - AI and Society 33 (3):453-455.
Call for papers. [author unknown] - 2018 - AI and Society 33 (3):457-458.
Privacy preserving or trapping? Xiao-yu Sun & Bin Ye - forthcoming - AI and Society:1-11.
AI and social theory. Jakob Mökander & Ralph Schroeder - 2022 - AI and Society 37 (4):1337-1351.

