Abstract
International Journal of Machine Consciousness, Vol. 06, No. 02, pp. 141-161, December 2014.

This paper follows directly from an earlier paper in which we discussed the requirements for an artifact to be a moral agent and concluded that the artifactual question is ultimately a red herring. As before, we take moral agency to be that condition in which an agent can appropriately be held responsible for her actions and their consequences. We set a number of stringent conditions on moral agency. A moral agent must be embedded in a cultural and specifically moral context and embodied in a suitable physical form. It must be, in some substantive sense, alive. It must exhibit self-conscious awareness. It must exhibit sophisticated conceptual abilities, going well beyond those that most conceptual agents likely possess: not least, it must possess a well-developed moral space of reasons. Finally, it must be able to communicate its moral agency through some system of signs: a "private" moral world is not enough. After reviewing these conditions and pouring cold water on recent claims to have achieved "minimal" machine consciousness, we turn our attention to a number of existing and, in some cases, commonplace artifacts that lack moral agency yet nevertheless require one to take a moral stance toward them, as if they were moral agents. Finally, we address another class of agents that raises a related set of issues: autonomous military robots.