Should Humanity Build a Global AI Nanny to Delay the Singularity Until It's Better Understood?

Journal of Consciousness Studies 19 (1-2):96 (2012)

Abstract

Chalmers suggests that, if a Singularity fails to occur in the next few centuries, the most likely reason will be 'motivational defeaters', i.e. that at some point humanity or human-level AI may abandon the effort to create dramatically superhuman artificial general intelligence. Here I explore one plausible way in which that might happen: the deliberate human creation of an 'AI Nanny' with mildly superhuman intelligence and surveillance powers, designed either to forestall the Singularity permanently or to delay it until humanity more fully understands how to execute a Singularity in a positive way. It is suggested that, as technology progresses, humanity may find the creation of an AI Nanny desirable as a means of protecting against the destructive potential of various advanced technologies such as AI, nanotechnology, and synthetic biology.


