Abstract
Deep learning AI systems have demonstrated a broad capacity to take over human activities such as driving, medical diagnosis, and elderly care, often displaying behaviour with unpredictable consequences, including negative ones. This has raised the question of whether highly autonomous AI may qualify as a morally responsible agent. In this article, drawing on Aristotelian ethics and contemporary philosophical research, we develop a set of four conditions that an entity needs to meet in order to be ascribed moral responsibility. We encode these conditions into a flowchart that we call the Moral Responsibility Test. This test can be used as a tool both to evaluate whether an entity is a morally responsible agent and to inform human moral decision-making about the variables that influence the context of action. We apply the test to the case of Artificial Moral Advisors (AMAs) and conclude that this form of AI cannot qualify as a morally responsible agent. We further discuss the implications for the use of AMAs as a form of moral enhancement and show that using AMAs to offload human responsibility is inadequate. We argue instead that AMAs could morally enhance their users if they are interpreted as enablers of moral knowledge about the contextual variables surrounding human moral decision-making, with the implication that such use might actually enlarge human moral responsibility.