Failure of chatbot Tay was evil, ugliness and uselessness in its nature or do we judge it through cognitive shortcuts and biases?

AI and Society 36 (1):361-367 (2021)

Abstract

This study deals with the failure of one of the most advanced chatbots, Tay, created by Microsoft. Many users, commentators, and experts strongly anthropomorphised this chatbot in their assessments of the case, a view so widespread that we can identify it as a typical cognitive distortion or bias. This study presents a summary of the facts of the Tay case together with corroborating perspectives from eminent experts: Tay did not mean anything by its morally objectionable statements because, in principle, it was not able to think; the controversial content spread by this AI was interpreted incorrectly, not as a mere compilation of meaning but as its disclosure; and even though chatbots are not members of the symbolic order of the spatiotemporal relations of the human world, we treat them in many respects as if they were.

Links

PhilArchive



Similar books and articles

Kant on the radical evil of human nature.Paul Formosa - 2007 - Philosophical Forum 38 (3):221–245.
Cognitive shortcuts in causal inference.Philip M. Fernbach & Bob Rehder - 2013 - Argument and Computation 4 (1):64-88.
Cognitive phenomenology: real life.Galen Strawson - 2011 - In Tim Bayne & Michelle Montague (eds.), Cognitive phenomenology. Oxford University Press. pp. 285-325.
The nature of evil.Daryl Koehn - 2005 - New York: Palgrave-Macmillan.
An Atheistic Argument from Ugliness.Scott F. Aikin & Nicholaos Jones - 2015 - European Journal for Philosophy of Religion 7 (1):209-217.

Analytics

Added to PP
2020-09-02
