Abstract
The development of increasingly intelligent and autonomous technologies will eventually lead to these systems facing morally problematic situations. This prospect has given rise to artificial morality, an emerging field in artificial intelligence that explores whether and how artificial systems can be endowed with moral capacities, a development that will have a deep impact on our lives. Yet the methodological foundations of artificial morality remain sketchy and are often far removed from possible applications. One important area of application for artificial systems with moral capacities is geriatric care. The goal of this article is to provide methodological foundations for artificial morality, i.e., for implementing moral capacities in artificial systems in general, and to discuss them with respect to an assistive system in geriatric care that is capable of moral learning.