Generative AI models should include detection mechanisms as a condition for public release

Ethics and Information Technology 25 (4):1-7 (2023)

Abstract

The new wave of ‘foundation models’, general-purpose generative AI models for producing text (e.g., ChatGPT) or images (e.g., MidJourney), represents a dramatic advance in the state of the art for AI. But their use also introduces a range of new risks, which has prompted an ongoing conversation about possible regulatory mechanisms. Here we propose a specific principle that should be incorporated into legislation: any organization developing a foundation model intended for public use must demonstrate a reliable detection mechanism for the content the model generates, as a condition of its public release. The detection mechanism should be made publicly available through a tool that allows users to query, for an arbitrary item of content, whether the item was generated (wholly or partly) by the model. In this paper, we argue that this requirement is technically feasible and would play an important role in reducing certain risks from new AI models in many domains. We also outline several options for the tool’s design and summarize points where further input from policymakers and researchers would be required.
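To make the proposed tool concrete, below is a minimal sketch, in Python, of the kind of public query interface the abstract describes: given an arbitrary item of content, it returns a verdict on whether the content appears to have been generated by the model. The detection logic (a toy statistical watermark check) and every name in the sketch, such as query_detector, DetectionResult, and the green-list rule, are illustrative assumptions, not the paper's specification; the paper leaves the choice of detection mechanism open.

```python
# Hypothetical sketch of a public detection-query tool (illustrative only).
# Assumes a watermarking scheme in which the generating model preferentially
# samples "green-list" tokens determined by a secret key; none of this is
# specified by the paper itself.
import hashlib
from dataclasses import dataclass


@dataclass
class DetectionResult:
    generated: bool        # best-guess verdict
    confidence: float      # 0.0-1.0, strength of the statistical evidence
    green_fraction: float  # fraction of token pairs carrying the watermark signal


def _in_green_list(prev_token: str, token: str, secret_key: str = "model-key") -> bool:
    """Toy watermark rule: a keyed hash of (previous token, token) decides
    membership in a pseudo-random 'green list' covering roughly half of pairs."""
    digest = hashlib.sha256(f"{secret_key}:{prev_token}:{token}".encode()).digest()
    return digest[0] % 2 == 0


def query_detector(text: str, threshold: float = 0.62) -> DetectionResult:
    """Report whether `text` looks watermarked. A model that actually embeds
    the watermark would sample mostly green-list tokens, so a green fraction
    well above 0.5 is evidence of machine generation."""
    tokens = text.split()
    if len(tokens) < 2:
        return DetectionResult(generated=False, confidence=0.0, green_fraction=0.0)
    hits = sum(_in_green_list(a, b) for a, b in zip(tokens, tokens[1:]))
    fraction = hits / (len(tokens) - 1)
    return DetectionResult(
        generated=fraction >= threshold,
        confidence=min(1.0, abs(fraction - 0.5) * 2),
        green_fraction=fraction,
    )


if __name__ == "__main__":
    print(query_detector("An arbitrary item of content to check against the model."))
```

In practice such a tool would also need to report error rates and handle content that is only partly model-generated; those details fall under the design options the paper leaves to further discussion with policymakers and researchers.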

Links

PhilArchive

Similar books and articles

Instructions for Authors. [author unknown] - 2003 - Ethics and Information Technology 5 (4):239-242.
Instructions for Authors. [author unknown] - 1999 - Ethics and Information Technology 1 (1):87-90.
Instructions for Authors. [author unknown] - 2000 - Ethics and Information Technology 2 (4):257-260.
Instructions for Authors. [author unknown] - 2001 - Ethics and Information Technology 3 (4):303-306.
Instructions for Authors. [author unknown] - 2002 - Ethics and Information Technology 4 (1):93-96.
Instructions for Authors. [author unknown] - 2001 - Ethics and Information Technology 3 (2):151-154.
Editorial. [author unknown] - 2005 - Ethics and Information Technology 7 (2):49-49.
The ethics of hacking. Ross W. Bellaby. Cécile Fabre - 2023 - Ethics and Information Technology 25 (3):1-4.
Governing (ir)responsibilities for future military AI systems. Liselotte Polderman - 2023 - Ethics and Information Technology 25 (1):1-4.
The Ethics of AI in Human Resources. Evgeni Aizenberg & Matthew J. Dennis - 2022 - Ethics and Information Technology 24 (3):1-3.
Correction to: The Ethics of AI in Human Resources. Evgeni Aizenberg & Matthew J. Dennis - 2023 - Ethics and Information Technology 25 (1):1-1.

Analytics

Added to PP
2023-10-29

Downloads
77 (#215,760)

6 months
68 (#69,952)

Citations of this work

No citations found.

References found in this work

No references found.