Friendly Superintelligent AI: All You Need Is Love

In Vincent C. Müller (ed.), Philosophy and Theory of Artificial Intelligence 2017. Berlin: Springer (2017)

Abstract

There is a non-trivial chance that sometime in the future, someone will build an artificial general intelligence that will surpass human-level cognitive proficiency and go on to become “superintelligent”, vastly outperforming humans. The advent of superintelligent AI has great potential, for good or ill. It is therefore imperative that we find a way to ensure—long before one arrives—that any superintelligence we build will consistently act in ways congenial to our interests. This is a very difficult challenge in part because most of the final goals we could give an AI admit of so-called “perverse instantiations”. I propose a novel solution to this puzzle: instruct the AI to love humanity. The proposal is compared with Yudkowsky’s Coherent Extrapolated Volition, and Bostrom’s Moral Modeling proposals.




Author's Profile

Michael Prinzing
Baylor University
