Autonomous Weapons and the Nature of Law and Morality: How Rule-of-Law-Values Require Automation of the Rule of Law

Temple International and Comparative Law Journal 30 (1):99-117 (2016)

Abstract

While Autonomous Weapons Systems have obvious military advantages, there are prima facie moral objections to using them. By way of general reply to these objections, I point out similarities between the structure of law and morality on the one hand and of automata on the other. I argue that these similarities, plus the fact that automata can be designed to lack the biases and other failings of humans, require us to automate the formulation, administration, and enforcement of law as much as possible, including those elements of law and morality that are operated by combatants in war. I suggest that, ethically speaking, deploying a legally competent robot in some legally regulated realm is not much different from deploying a more or less well-armed, vulnerable, obedient, or morally discerning soldier or general into battle, a police officer onto patrol, or a lawyer or judge into a trial. All of these feature automaticity in the sense of deputation to an agent we do not then directly control. Such relations are well understood and well regulated in morality and law, so there is little that is philosophically challenging in having robots be some of these agents, except for the implications that the limits of robot technology at a given time have for responsible deputation. I then consider this proposal in light of the differences between two conceptions of law, distinguished by whether they see law as unambiguous rules whose application is inherently uncontroversial, and I consider the prospects for robotizing law on each conception. I do the same for the prospects of robotizing moral theorizing and moral decision-making. Finally, I identify certain elements of law and morality, noted by the philosopher Immanuel Kant, in which robots can participate only if they are able to set ends and emotionally invest in their attainment. One conclusion is that while affectless autonomous devices might be fit to rule us, they would not be fit to vote with us. For voting is a process for summing felt preferences, and affectless devices would have none to weigh into the sum. Since they don't care which outcomes obtain, they don't get to vote on which ones to bring about.

Links

PhilArchive


Similar books and articles

Autonomous Weapons and Distributed Responsibility.Marcus Schulzke - 2013 - Philosophy and Technology 26 (2):203-219.
Framing robot arms control.Wendell Wallach & Colin Allen - 2013 - Ethics and Information Technology 15 (2):125-135.
The morality of autonomous robots.Aaron M. Johnson & Sidney Axinn - 2013 - Journal of Military Ethics 12 (2):129-141.
Saying 'No!' to Lethal Autonomous Targeting.Noel Sharkey - 2010 - Journal of Military Ethics 9 (4):369-383.
The Strategic Robot Problem: Lethal Autonomous Weapons in War.Heather M. Roff - 2014 - Journal of Military Ethics 13 (3):211-227.
The Golden Rule Principle in African Ethics and Kant’s Categorical Imperative.Godwin Azenabor - 2008 - Proceedings of the XXII World Congress of Philosophy 10:17-23.
The Rule of Law as the Rule of Reasons.Mathilde Cohen - 2010 - Archiv für Rechts- und Sozialphilosophie 96 (1):1-16.

Analytics

Added to PP
2017-01-17

Downloads
706 (#21,885)

6 months
92 (#43,247)


Author's Profile

Duncan MacIntosh
Dalhousie University

References found in this work

No references found.
