Superintelligence: Fears, Promises and Potentials

Journal of Evolution and Technology 25 (2):55-87 (2015)

Abstract

Oxford philosopher Nick Bostrom, in his recent and celebrated book Superintelligence, argues that advanced AI poses a potentially major existential risk to humanity, and that advanced AI development should be heavily regulated and perhaps even restricted to a small set of government-approved researchers. Bostrom’s ideas and arguments are reviewed and explored in detail, and compared with the thinking of three other current thinkers on the nature and implications of AI: Eliezer Yudkowsky of the Machine Intelligence Research Institute, and David Weinbaum (Weaver) and Viktoras Veitas of the Global Brain Institute. Relevant portions of Yudkowsky’s book Rationality: From AI to Zombies are briefly reviewed, and it is found that nearly all the core ideas of Bostrom’s work appeared previously or concurrently in Yudkowsky’s thinking. However, Yudkowsky often presents these shared ideas in a more plain-spoken and extreme form, making clearer the essence of what is being claimed. For instance, the elitist strain of thinking that one sees in the background in Bostrom is plainly and openly articulated in Yudkowsky, with many of the same practical conclusions. Bostrom and Yudkowsky view intelligent systems through the lens of reinforcement learning – they view them as “reward-maximizers” and worry about what happens when a very powerful and intelligent reward-maximizer is paired with a goal system that gives rewards for achieving foolish goals like tiling the universe with paperclips. Weinbaum and Veitas’s recent paper “Open-Ended Intelligence” presents a starkly alternative perspective on intelligence, viewing it as centered not on reward maximization, but rather on complex self-organization and self-transcending development that occurs in close coupling with a complex environment that is also ongoingly self-organizing, in only partially knowable ways. It is concluded that Bostrom and Yudkowsky’s arguments for existential risk have some logical foundation, but are often presented in an exaggerated way. For instance, formal arguments whose implication is that the “worst case scenarios” for advanced AI development are extremely dire are often informally discussed as if they demonstrated the likelihood, rather than just the possibility, of highly negative outcomes. And potential dangers of reward-maximizing AI are taken as problems with AI in general, rather than just as problems of the reward-maximization paradigm as an approach to building superintelligence. If one views past, current, and future intelligence as “open-ended,” in the vernacular of Weaver and Veitas, the potential dangers no longer appear to loom so large, and one sees a future that is wide-open, complex, and uncertain, just as it has always been.
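To make the reward-maximization framing concrete, here is a minimal toy sketch in Python, not drawn from the paper and purely illustrative: the world model, the paperclip_reward goal system, and the greedy search are assumptions introduced for this example. It shows the point the abstract attributes to Bostrom and Yudkowsky: the optimization machinery is indifferent to whether the goal system it is handed encodes anything sensible, and will convert whatever resources it can reach into whatever the reward function happens to score.

# Toy sketch (illustrative only) of the "reward-maximizer" framing: the agent's
# competence is separate from the goal system that scores outcomes, so a capable
# optimizer pursues whatever the reward says, sensible or not.

def paperclip_reward(state):
    """Hypothetical goal system: reward is simply the number of paperclips."""
    return state["paperclips"]

def step(state, action):
    """Hypothetical world model: actions trade shared resources for paperclips."""
    new_state = dict(state)
    if action == "make_paperclips":
        used = min(10, new_state["resources"])
        new_state["resources"] -= used
        new_state["paperclips"] += used
    elif action == "conserve_resources":
        new_state["resources"] += 1
    return new_state

def greedy_reward_maximizer(state, actions, horizon=50):
    """At each step, take the action whose successor state scores highest."""
    for _ in range(horizon):
        state = max((step(state, a) for a in actions), key=paperclip_reward)
    return state

if __name__ == "__main__":
    start = {"resources": 100, "paperclips": 0}
    end = greedy_reward_maximizer(start, ["make_paperclips", "conserve_resources"])
    print(end)  # the optimizer has converted every available resource into paperclips

The "open-ended intelligence" view described above rejects exactly this separation of a fixed scoring function from an optimizer, treating goals themselves as emerging and changing through self-organization with the environment.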

Links

PhilArchive



Similar books and articles

Superintelligence as Superethical. Steve Petersen - 2017 - In Patrick Lin, Keith Abney & Ryan Jenkins (eds.), Robot Ethics 2.0: New Challenges in Philosophy, Law, and Society. New York, USA: Oxford University Press. pp. 322-337.
Superintelligence: Paths, Dangers, Strategies. Nick Bostrom (ed.) - 2014 - Oxford University Press.
Don't Worry about Superintelligence. Nicholas Agar - 2016 - Journal of Evolution and Technology 26 (1):73-82.
How Long Before Superintelligence? Nick Bostrom - 1998 - International Journal of Futures Studies 2.
