Generative AI entails a credit–blame asymmetry

Nature Machine Intelligence 5 (5):472-475 (2023)

Abstract

Generative AI programs can produce high-quality written and visual content that may be used for good or ill. We argue that a credit–blame asymmetry arises for assigning responsibility for these outputs and discuss urgent ethical and policy implications focused on large-scale language models.

Links

PhilArchive

Similar books and articles

ChatGPT. Andrej Poleev - 2023 - Enzymes 21.
Holding Large Language Models to Account. Ryan Miller - 2023 - In Berndt Müller (ed.), Proceedings of the AISB Convention. Society for the Study of Artificial Intelligence and the Simulation of Behaviour. pp. 7-14.
ChatGPT: Temptations of Progress. Rushabh H. Doshi, Simar S. Bajaj & Harlan M. Krumholz - 2023 - American Journal of Bioethics 23 (4):6-8.
Attributability, Accountability, and Implicit Bias. Robin Zheng - 2016 - In Michael Brownstein & Jennifer Saul (eds.), Implicit Bias and Philosophy, Volume 2: Moral Responsibility, Structural Injustice, and Ethics. Oxford, GB: Oxford University Press UK. pp. 62-89.


Author Profiles

Brian D. Earp
University of Oxford
Sven Nyholm
Ludwig Maximilians Universität, München
and 5 more
