Abstract
This article advances a concise argument for computational rationality as a basis for artificial moral agency. Some ethicists have long argued that rational agents can become artificial moral agents. However, most of their views come from purely philosophical perspectives, which makes it difficult to transfer their arguments into a scientific and analytical frame of reference. The result has been a fragmented approach to the conceptualisation and design of artificial moral agents. In this article, I argue for computational rationality as an integrative element that effectively combines the philosophical and computational aspects of artificial moral agency. This leads to a philosophically coherent and scientifically consistent model for building artificial moral agents. Besides offering a possible answer to the question of how to build artificial moral agents, the model also invites sound debate from multiple disciplines, which should help advance the field of machine ethics.