The morality of autonomous robots

Journal of Military Ethics 12 (2):129 - 141 (2013)

Abstract

While there are many issues to be raised in using lethal autonomous robotic weapons (beyond those of remotely operated drones), we argue that the most important question is: should the decision to take a human life be relinquished to a machine? This question is often overlooked in favor of technical questions of sensor capability, operational questions of chain of command, or legal questions of sovereign borders. We further argue that the answer must be 'no' and offer several reasons for banning autonomous robots. (1) Such a robot treats a human as an object, instead of as a person with inherent dignity. (2) A machine can only mimic moral actions; it cannot be moral. (3) A machine run by a program has no human emotions, no feelings about the seriousness of killing a human. (4) Using such a robot would be a violation of military honor. We therefore conclude that the use of an autonomous robot in lethal operations should be banned.

Links

PhilArchive




Analytics

Added to PP
2013-10-19

