Deepfakes and Political Misinformation in U.S. Elections

Techné: Research in Philosophy and Technology 27 (3): 363-386 (2023)

Abstract

Audio and video footage produced with the help of AI can show politicians doing discreditable things that they have not actually done. This is deepfaked material. Deepfakes are sometimes claimed to have special powers to harm the people depicted and their audiences, powers that more traditional forms of faked imagery and sound footage lack. According to some philosophers, deepfakes are particularly "believable," and widely available technology will soon make deepfakes proliferate. I first give reasons why deepfake technology is not particularly well suited to producing "believable" political misinformation, in a sense to be defined. Next, I challenge claims from Don Fallis and Regina Rini about the consequences of the wide availability of deepfakes. My argument is not that deepfakes are harmless, but that their power to do major harm is highly conditional in liberal party-political environments that contain sophisticated mass media.

Links

PhilArchive

Author's Profile

Tom Sorell
University of Warwick

Citations of this work

No citations found.


References found in this work

No references found.
