The Singularity: A Philosophical Analysis

Journal of Consciousness Studies 17 (9-10):7-65 (2010)

Abstract

What happens when machines become more intelligent than humans? One view is that this event will be followed by an explosion to ever-greater levels of intelligence, as each generation of machines creates more intelligent machines in turn. This intelligence explosion is now often known as the “singularity”.

The basic argument here was set out by the statistician I.J. Good in his 1965 article “Speculations Concerning the First Ultraintelligent Machine”:

    Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion”, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

The key idea is that a machine that is more intelligent than humans will be better than humans at designing machines. So it will be capable of designing a machine more intelligent than the most intelligent machine that humans can design. So, if it is itself designed by humans, it will be capable of designing a machine more intelligent than itself. By similar reasoning, this next machine will also be capable of designing a machine more intelligent than itself. If every machine in turn does what it is capable of, we should expect a sequence of ever more intelligent machines.

This intelligence explosion is sometimes combined with another idea, which we might call the “speed explosion”. The argument for a speed explosion starts from the familiar observation that computer processing speed doubles at regular intervals. Suppose that speed doubles every two years and will do so indefinitely. Now suppose that we have human-level artificial intelligence designing new processors. Then faster processing will lead to faster designers and an ever-faster design cycle, leading to a limit point soon afterwards. The argument for a speed explosion was set out by the artificial intelligence researcher Ray Solomonoff in his 1985 article “The Time Scale of Artificial Intelligence”. Eliezer Yudkowsky gives a succinct version of the argument in his 1996 article “Staring into the Singularity”: “Computing speed doubles every two subjective years of work…”

Analytics

Added to PP
2010-04-08

Downloads
2,700 (#2,740)

6 months
248 (#8,868)


Author's Profile

David Chalmers
New York University
