Self-improving AI: an Analysis [Book Review]

Minds and Machines 17 (3):249-259 (2007)

Abstract

Self-improvement was one of the aspects of AI proposed for study at the 1956 Dartmouth conference. Turing proposed a “child machine” which could be taught in the human manner to attain adult human-level intelligence. In later years, the contention that an AI system could be built to learn and improve itself indefinitely has acquired the label of the bootstrap fallacy. Attempts in AI to implement such a system have met with consistent failure for half a century. Technological optimists, however, have maintained that such a system is possible and that, if implemented, it would produce a feedback loop leading to a rapid exponential increase in intelligence. We examine the arguments for both positions and draw some conclusions.
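
The optimists' claim can be made concrete with a simple illustrative model (a sketch only; the model and the symbols I and k are not from the paper): if a system's rate of self-improvement is assumed proportional to its current capability I, then

\frac{dI}{dt} = kI \quad\Rightarrow\quad I(t) = I(0)\,e^{kt},

which is exponential growth for any sustained k > 0. On this reading, the bootstrap-fallacy position amounts to doubting that such a positive, sustained growth constant is achievable in practice.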

Analytics

Added to PP
2009-01-28
