Abstract
This article is a critical genealogy of Tay, an artificial-intelligence chatbot that Microsoft released on Twitter in 2016, which was quickly hijacked by internet trolls to reproduce racist, misogynist, and antisemitic language. Tay’s repetition and production of hate speech call for an approach that draws on both media and cultural theory—the Frankfurt School’s dialectical analyses of language and ideology, in particular. Revisiting the Frankfurt School in the age of algorithmic reason shows that, contrary to views foundational to computing, a neural-network chatbot like Tay does not sidestep meaning but rather carries and alters it, with unforeseen social and political consequences. A return to the work of Max Horkheimer and Theodor W. Adorno thus locates ideology in the digital world at the nexus of language’s ability to mean, language and meaning’s susceptibility to computation, and the design of a machine to compute both. Coming to critical terms with the antisemitism produced in Tay’s human-computer synthesis requires, as this article contends, addressing the uncanny embodiment and reflection of thought that is digital computation.