AI or Your Lying Eyes: Some Shortcomings of Artificially Intelligent Deepfake Detectors

Philosophy and Technology 37 (7):1-19 (2024)

Abstract

Deepfakes pose a multi-faceted threat to the acquisition of knowledge. It is widely hoped that technological solutions—in the form of artificially intelligent systems for detecting deepfakes—will help to address this threat. I argue that the prospects for purely technological solutions to the problem of deepfakes are dim. Especially given the evolving nature of the threat, technological solutions cannot be expected to prevent deception at the hands of deepfakes, or to preserve the authority of video footage. Moreover, the success of such technologies depends on institutional trust that is in short supply. Finally, outsourcing the discrimination between the real and the fake to automated, largely opaque systems runs the risk of undermining epistemic autonomy.

Author's Profile

Keith Raymond Harris
University of Vienna
