Singularitarianism and schizophrenia

AI and Society 32 (4):573-590 (2017)

Abstract

Given the contemporary ambivalent standpoints toward the future of artificial intelligence, recently denoted as the phenomenon of Singularitarianism, Gregory Bateson's core theories of the ecology of mind, schismogenesis, and the double bind are revisited here, taken out of their respective sociological, anthropological, and psychotherapeutic contexts and recontextualized within the field of roboethics, with a twofold aim: to propose a rigid ethical standpoint toward both artificial and non-artificial agents, and to offer an explanatory analysis of the reasons behind such polarized and contradictory views regarding the future of robots. First, the paper applies the Batesonian ecology of mind to construct a unified roboethical framework that endorses a flat ontology embracing multiple forms of agency, borrowing elements from Floridi's information ethics, classical virtue ethics, Félix Guattari's ecosophy, Braidotti's posthumanism, and the Japanese animist doctrine of Rinri. The proposed framework is intended as a pragmatic solution to the endless dispute over the nature of consciousness and the natural/artificial dichotomy, and as a further argument against treating future artificial agency as a potential existential threat. Second, schismogenic analysis is employed to describe the emergence of hostile human–robot cultural contact, tracing its origins from the early scientific discourse on man–machine symbiosis up to contemporary countermeasures against superintelligent agents. Third, Bateson's double bind theory is used as a methodological tool for analyzing humanity's collective agency, leading to the hypothesis of a collective schizophrenic symptomatology caused by the constancy and intensity of the conflicting messages emitted by proponents and opponents of artificial intelligence. The treatment for the double bind is the mirroring "therapeutic double bind," and the article concludes by proposing the conceptual pragmatic imperative that such a condition requires: humanity's conscious habituation to danger and familiarization with its possible future extinction, as the result of a progressive blurring of natural and artificial agency, succeeded by an entirely non-organic form of intelligent agency.

Links

PhilArchive



Similar books and articles

Robots: ethical by design. Gordana Dodig Crnkovic & Baran Çürüklü - 2012 - Ethics and Information Technology 14 (1):61-71.
Ethics and consciousness in artificial agents. Steve Torrance - 2008 - AI and Society 22 (4):495-521.
Risks of artificial general intelligence. Vincent C. Müller (ed.) - 2014 - Taylor & Francis (JETAI).
AI: Its Nature and Future. Margaret A. Boden - 2016 - Oxford University Press UK.


Author's Profile

Vassilis Galanos
University of Edinburgh

References found in this work

Computing machinery and intelligence. Alan M. Turing - 1950 - Mind 59 (October):433-60.
The posthuman. Rosi Braidotti - 2013 - Malden, MA, USA: Polity Press.
The Question concerning Technology and Other Essays. Martin Heidegger & William Lovitt - 1981 - International Journal for Philosophy of Religion 12 (3):186-188.
The fourth revolution. Luciano Floridi - 2012 - The Philosophers' Magazine 57 (57):96-101.
