The Unlikeliest of Duos; Why Super Intelligent AI Will Cooperate with Humans

Abstract

The focus of this article is the "good-will theory", which explains the effect humans can have on the safety of AI and why it is in the best interest of a superintelligent AI to work alongside humans rather than overpower them. Future papers dealing with the good-will theory will address different talking points concerning possible or actual objections to the theory.

Links

PhilArchive

Similar books and articles

Superintelligence as superethical. Steve Petersen - 2017 - In Patrick Lin, Keith Abney & Ryan Jenkins (eds.), Robot Ethics 2.0: New Challenges in Philosophy, Law, and Society. New York, USA: Oxford University Press. pp. 322-337.
Other Minds, Other Intelligences: The Problem of Attributing Agency to Machines. Sven Nyholm - 2019 - Cambridge Quarterly of Healthcare Ethics 28 (4):592-598.
AI armageddon and the three laws of robotics. Lee McCauley - 2007 - Ethics and Information Technology 9 (2):153-164.
Petronius 35.4. Ramon Baltar Veloso - 1976 - Classical Quarterly 26 (2):319-319.
Two arguments against human-friendly AI. Ken Daley - 2021 - AI and Ethics 1 (1):435-444.
Robot dreams. Isaac Asimov - 2009 - In Susan Schneider (ed.), Science Fiction and Philosophy: From Time Travel to Superintelligence. Wiley-Blackwell. pp. 117.

Analytics

Added to PP: 2023-06-13

Downloads: 101 (#171,230)

Downloads, last 6 months: 62 (#76,154)


Author's Profile

Griffin Pithie
Rhodes College

Citations of this work

No citations found.

References found in this work

No references found.
