Should autonomous robots be pacifists?

Ethics and Information Technology 15 (2):109-123 (2013)

Abstract

Currently, the central questions in the philosophical debate surrounding the ethics of automated warfare are (1) whether the development and use of autonomous lethal robotic systems for military purposes is consistent with (existing) international laws of war and received just war theory, and (2) whether the creation and use of such machines improves the moral caliber of modern warfare. However, both of these approaches have significant problems, and thus we need to start exploring alternative approaches. In this paper, I ask whether autonomous robots ought to be programmed to be pacifists. The answer arrived at is "Yes": if we decide to create autonomous robots, they ought to be pacifists. This is to say that robots ought not to be programmed to willingly and intentionally kill human beings or, by extension, to participate in or promote warfare, as something that predictably involves the killing of humans. Insofar as we are the ones who will be determining the content of the robot's value system, we ought to program robots to be pacifists rather than 'warists'. This is (in part) because we ought to be pacifists, and creating and programming machines to be "autonomous lethal robotic systems" directly violates this normative demand on us. There are no mitigating reasons to program lethal autonomous machines to contribute to or participate in warfare. Even if the use of autonomous lethal robotic systems could be consistent with received just war theory and the international laws of war, and even if their involvement could make warfare less inhumane in certain ways, these reasons do not compensate for the ubiquitous harms characteristic of modern warfare. In this paper, I provide four main reasons why autonomous robots ought to be pacifists, most of which do not depend on the truth of pacifism. The strong claim being argued for here is that automated warfare ought not to be pursued.
The weaker claim being argued for here is that automated warfare ought not to be pursued unless it is the most pacifist option available at the time, other alternatives have been reasonably explored, and we are simultaneously promoting a (long-term) pacifist agenda in (many) other ways. Thus, the more ambitious goal of this paper is to convince readers that automated warfare is something that we ought not to promote or pursue, while the more modest (and, I suspect, more palatable) goal is to spark sustained critical discussion about the assumptions underlying the drive towards automated warfare, and to generate legitimate consideration of its pacifist alternatives, in theory, policy, and practice.

Links

PhilArchive
Similar books and articles

The case against robotic warfare: A response to Arkin. Ryan Tonkens - 2012 - Journal of Military Ethics 11 (2): 149-168.
Framing robot arms control. Wendell Wallach & Colin Allen - 2013 - Ethics and Information Technology 15 (2): 125-135.
Autonomous Weapons and Distributed Responsibility. Marcus Schulzke - 2013 - Philosophy and Technology 26 (2): 203-219.
On the moral responsibility of military robots. Thomas Hellström - 2013 - Ethics and Information Technology 15 (2): 99-107.
Bridging the Responsibility Gap in Automated Warfare. Marc Champagne & Ryan Tonkens - 2015 - Philosophy and Technology 28 (1): 125-137.
Out of character: on the creation of virtuous machines. [REVIEW] Ryan Tonkens - 2012 - Ethics and Information Technology 14 (2): 137-149.
Embodied Cognition for Autonomous Interactive Robots. Guy Hoffman - 2012 - Topics in Cognitive Science 4 (4): 759-772.
Robots, Trust and War. Thomas W. Simpson - 2011 - Philosophy and Technology 24 (3): 325-337.
Responsibility Practices and Unmanned Military Technologies. Merel Noorman - 2014 - Science and Engineering Ethics 20 (3): 809-826.

Analytics

Added to PP
2013-11-21


Author's Profile

Ryan Tonkens
Dalhousie University

Citations of this work

Artificial moral agents are infeasible with foreseeable technologies. Patrick Chisan Hew - 2014 - Ethics and Information Technology 16 (3): 197-206.
The case against robotic warfare: A response to Arkin. Ryan Tonkens - 2012 - Journal of Military Ethics 11 (2): 149-168.

