Artificial superintelligence and its limits: why AlphaZero cannot become a general agent

AI and Society (forthcoming)

Abstract

A machine surpassing human intelligence across a wide range of skills has been proposed as a possible source of existential catastrophe. Among those concerned about existential risk from artificial intelligence, it is common to assume that such an AI would not only be very intelligent but also a general agent. This article explores the characteristics of machine agency and what it would mean for a machine to become a general agent. In particular, it articulates some important differences between belief and desire in the context of machine agency. One such difference is that while an agent can acquire new beliefs by itself through learning, desires must either be derived from preexisting desires or be instilled by an external influence, such as a human programmer or natural selection. We argue that to become a general agent, a machine needs productive desires, that is, desires that can direct behavior across multiple contexts. However, productive desires cannot be derived from non-productive desires. Thus, even though general agency in an AI could in principle be created by human agents, it cannot be spontaneously produced by a non-general AI agent through an endogenous process. In conclusion, we argue that a common AI scenario, in which general agency suddenly emerges in a non-general AI such as DeepMind's superintelligent board-game AI AlphaZero, is not plausible.

Links

PhilArchive




Similar books and articles

Superintelligence as superethical. Steve Petersen - 2017 - In Patrick Lin, Keith Abney & Ryan Jenkins (eds.), Robot Ethics 2.0: New Challenges in Philosophy, Law, and Society. New York: Oxford University Press. pp. 322-337.
Chess, Artificial Intelligence, and Epistemic Opacity. Paul Grünke - 2019 - Információs Társadalom 19 (4):7-17.
Editorial: Risks of general artificial intelligence. Vincent C. Müller - 2014 - Journal of Experimental and Theoretical Artificial Intelligence 26 (3):297-301.
Risks of artificial general intelligence. Vincent C. Müller (ed.) - 2014 - Taylor & Francis (JETAI).
How long before superintelligence? Nick Bostrom - 1998 - International Journal of Futures Studies 2.
Artificial moral agents are infeasible with foreseeable technologies. Patrick Chisan Hew - 2014 - Ethics and Information Technology 16 (3):197-206.

Author's Profile

Karim Jebari
Institute for Futures Studies
