Thinking Inside the Box: Controlling and Using an Oracle AI

Stuart Armstrong, Anders Sandberg & Nick Bostrom - 2012 - Minds and Machines 22 (4):299-324.

Abstract

There is no strong reason to believe that human-level intelligence represents an upper limit on the capacity of artificial intelligence, should it be realized. This poses serious safety issues, since a superintelligent system would have great power to direct the future according to its possibly flawed motivation system. Solving this issue in general has proven considerably harder than expected. This paper looks at one particular approach, Oracle AI: an AI that does not act in the world except by answering questions. Even this narrow approach presents considerable challenges. We analyse and critique various methods of controlling such an AI. In general, an Oracle AI might be safer than an unrestricted AI, but it still remains potentially dangerous.
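To make the "answers questions only" constraint concrete, the sketch below shows one way such an interface could be shaped in Python: the boxed model is reachable only through a single answer method, and every reply passes an output filter before it is released. The class name, method name, and length-based filter are illustrative assumptions for this page, not the control methods the paper itself analyses.

```python
# Illustrative sketch of an "Oracle AI" containment interface: the AI has no
# actuators and is reachable only through a question-answering channel.
# OracleAI, answer(), and the length-based output filter are hypothetical
# choices for illustration, not the paper's proposal.
from typing import Callable


class OracleAI:
    """Toy containment wrapper: the boxed model is reachable only via answer()."""

    def __init__(self, model: Callable[[str], str], max_answer_chars: int = 500) -> None:
        self._model = model           # the boxed system; never exposed directly
        self._max = max_answer_chars  # crude output restriction (an assumption)

    def answer(self, question: str) -> str:
        """The only channel through which information leaves the box."""
        raw = self._model(question)
        # Output filtering: withhold any reply that exceeds the permitted length,
        # rather than truncating it, since a truncated reply could still carry a
        # deliberately chosen prefix.
        if len(raw) > self._max:
            return "ANSWER WITHHELD: exceeds permitted output length."
        return raw


if __name__ == "__main__":
    # Stand-in "model" so the sketch runs end to end.
    oracle = OracleAI(lambda q: "Short illustrative reply to: " + q)
    print(oracle.answer("Does human-level intelligence bound artificial intelligence?"))
```

Withholding rather than truncating over-long answers is one of the judgment calls such a design forces; the paper's concern is how far restrictions of this kind can actually be trusted.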

Links

PhilArchive




Similar books and articles

Trapped Inside the Box? Five Questions for Ben Fine. Michael A. Lebowitz - 2010 - Historical Materialism 18 (1):131-149.
The Meta-Newcomb Problem. Nick Bostrom - 2001 - Analysis 61 (4):309-310.
A General Black Box Theory. Mario Bunge - 1963 - Philosophy of Science 30 (4):346-358.

Analytics

Added to PP
2012-06-06

Downloads
320 (#60,193)

6 months
22 (#114,172)


