Superintelligence as a Cause or Cure for Risks of Astronomical Suffering

Informatica: An International Journal of Computing and Informatics 41 (4):389-400 (2017)

Abstract

Discussions about the possible consequences of creating superintelligence have included the possibility of existential risk, often understood mainly as the risk of human extinction. We argue that suffering risks (s-risks), where an adverse outcome would bring about severe suffering on an astronomical scale, are comparable to extinction risks in both severity and probability. Preventing them is in the common interest of many different value systems. Furthermore, we argue that just as superintelligent AI can both contribute to existential risk and help prevent it, superintelligent AI can both be a suffering risk and help avoid one. Some types of work aimed at making superintelligent AI safe will also help prevent suffering risks, and there may additionally be a class of AI safeguards that helps specifically against s-risks.



Author's Profile

Kaj Sotala
Foundational Research Institute
