Abstract
There is increasing speculation within military and policy circles that the future of armed conflict is likely to include extensive deployment of robots designed to identify targets and destroy them without the direct oversight of a human operator. My aim in this paper is twofold. First, I will argue that the ethical case for allowing autonomous targeting, at least in specific restricted domains, is stronger than critics have acknowledged. Second, I will attempt to uncover, explicate, and defend the intuition that even in this context there would be something ethically problematic about such targeting. I argue that an account of the non-consequentialist foundations of the principle of distinction suggests that the use of autonomous weapon systems (AWS) is unethical by virtue of failing to show appropriate respect for the humanity of our enemies. However, the success of the strongest form of this argument depends upon understanding the robot itself as doing the killing. To the extent that we believe that, on the contrary, AWS are only the means whereby those who order them into action kill, the idea that the use of AWS fails to respect the humanity of our enemy will turn upon an essentially conventional account of what respect requires. Thus, while the theoretical foundations of the idea that AWS are weapons that are “evil in themselves” are weaker than critics have sometimes maintained, they are nonetheless sufficient to ground a demand for the prohibition of the development and deployment of such weapons.