Abstract
The question of whether AI systems have agency is gaining increasing importance in discussions of responsibility for AI behavior. This paper argues that an approach to artificial agency must be teleological, and in particular must consider the role of human goals, if it is to adequately address the issue of responsibility. I defend the view that while AI systems can be regarded as autonomous in the sense of identifying or pursuing goals, they rely on human goals and other values incorporated into their design and are, as such, dependent on human agents. As a consequence, AI systems cannot be held morally responsible, and responsibility attributions should take into account the normative and social aspects involved in the design and deployment of the systems in question. My argument is in line with approaches critical of attributing moral agency to artificial agents, but it draws on the philosophy of action, highlighting further philosophical underpinnings of current debates on artificial agency.