The Epistemic Threat of Deepfakes

Philosophy and Technology 34 (4):623-643 (2020)

Abstract

Deepfakes are realistic videos created using new machine learning techniques rather than traditional photographic means. They tend to depict people saying and doing things that they did not actually say or do. In the news media and the blogosphere, the worry has been raised that, as a result of deepfakes, we are heading toward an “infopocalypse” where we cannot tell what is real from what is not. Several philosophers have now issued similar warnings. In this paper, I offer an analysis of why deepfakes are such a serious threat to knowledge. Utilizing the account of information carrying recently developed by Brian Skyrms, I argue that deepfakes reduce the amount of information that videos carry to viewers. I conclude by drawing some implications of this analysis for addressing the epistemic threat of deepfakes.
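On Skyrms' account, the information a signal carries about a state of affairs can be measured as the log of the ratio of the posterior probability of that state (given the signal) to its prior probability. The toy calculation below is not from the paper: the prior, the chance that a genuine video surfaces, and the deepfake prevalence figures are illustrative assumptions, and the function name is hypothetical. It is only a minimal Bayesian sketch of how the information a video carries shrinks as convincing fakes become more common.

import math

def information_carried(prior_event, p_video_given_event, p_video_given_no_event):
    """Skyrms-style informational content (in bits) that a video carries
    about the event it appears to depict:
        log2( P(event | video) / P(event) )
    computed via Bayes' theorem from the given conditional probabilities."""
    p_video = (p_video_given_event * prior_event
               + p_video_given_no_event * (1 - prior_event))
    posterior = p_video_given_event * prior_event / p_video
    return math.log2(posterior / prior_event)

# Illustrative numbers (not from the paper): the event has a 1% prior,
# and a genuine video of it surfaces 90% of the time it occurs.
prior = 0.01
p_genuine = 0.9

# As deepfakes become easier to produce, the probability of a convincing
# video existing even though the event never happened rises.
for p_fake in (0.001, 0.01, 0.1, 0.5):
    bits = information_carried(prior, p_genuine, p_fake)
    print(f"P(video | no event) = {p_fake:<5}  info carried = {bits:.2f} bits")

With these numbers, the video carries about 6.5 bits about the event when fakes are rare and under 1 bit when a fake video is as likely as a genuine one, which is one way to make concrete the claim that deepfakes reduce the amount of information videos carry.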

Links

PhilArchive

Analytics

Added to PP: 2020-08-06
Downloads: 183 (#109,160)
Last 6 months: 47 (#103,628)


Author's Profile

Don Fallis
Northeastern University