  • On the computational complexity of ethics: moral tractability for minds and machines. Jakob Stenseke - 2024 - Artificial Intelligence Review 57 (105):90.
    Why should moral philosophers, moral psychologists, and machine ethicists care about computational complexity? Debates on whether artificial intelligence (AI) can or should be used to solve problems in ethical domains have mainly been driven by what AI can or cannot do in terms of human capacities. In this paper, we tackle the problem from the other end by exploring what kind of moral machines are possible based on what computational systems can or cannot do. To do so, we analyze normative (...)
  • AI-Testimony, Conversational AIs and Our Anthropocentric Theory of Testimony. Ori Freiman - forthcoming - Social Epistemology.
    The ability to interact in a natural language profoundly changes devices’ interfaces and potential applications of speaking technologies. Concurrently, this phenomenon challenges our mainstream theories of knowledge, such as how to analyze linguistic outputs of devices under existing anthropocentric theoretical assumptions. In section 1, I present the topic of machines that speak, connecting between Descartes and Generative AI. In section 2, I argue that accepted testimonial theories of knowledge and justification commonly reject the possibility that a speaking technological artifact can (...)
  • Nonhuman Moral Agency: A Practice-Focused Exploration of Moral Agency in Nonhuman Animals and Artificial Intelligence. Dorna Behdadi - 2023 - Dissertation, University of Gothenburg.
    Can nonhuman animals and artificial intelligence (AI) entities be attributed moral agency? The general assumption in the philosophical literature is that moral agency applies exclusively to humans since they alone possess free will or capacities required for deliberate reflection. Consequently, only humans have been taken to be eligible for ascriptions of moral responsibility in terms of, for instance, blame or praise, moral criticism, or attributions of vice and virtue. Animals and machines may cause harm, but they cannot be appropriately ascribed (...)
  • Modernity and Contemporaneity. Evangelos D. Protopapadakis & Georgios Arabatzis (eds.) - 2022 - The NKUA Applied Philosophy Research Lab Press.
    Modernity and Contemporaneity is the third volume in the Hellenic-Serbian Philosophical Dialogue Series, a project initiated as an emphatic token of the will and commitment to establish permanent and fruitful collaboration between two strongly bonded Departments of Philosophy: that of the National and Kapodistrian University of Athens and that of the University of Novi Sad, respectively. This collaboration was founded from the very beginning upon friendship, mutual respect, and strong engagement, as well as upon our firm resolution to (...)
  • Moral zombies: why algorithms are not moral agents. Carissa Véliz - 2021 - AI and Society 36 (2):487-497.
    In philosophy of mind, zombies are imaginary creatures that are exact physical duplicates of conscious subjects but for whom there is no first-personal experience. Zombies are meant to show that physicalism—the theory that the universe is made up entirely out of physical components—is false. In this paper, I apply the zombie thought experiment to the realm of morality to assess whether moral agency is something independent from sentience. Algorithms, I argue, are a kind of functional moral zombie, such that thinking (...)
  • When Doctors and AI Interact: on Human Responsibility for Artificial Risks. Mario Verdicchio & Andrea Perin - 2022 - Philosophy and Technology 35 (1):1-28.
    A discussion concerning whether to conceive Artificial Intelligence systems as responsible moral entities, also known as “artificial moral agents”, has been going on for some time. In this regard, we argue that the notion of “moral agency” is to be attributed only to humans based on their autonomy and sentience, which AI systems lack. We analyze human responsibility in the presence of AI systems in terms of meaningful control and due diligence and argue against fully automated systems in medicine. With (...)
  • Moral Judgments in the Age of Artificial Intelligence. Yulia W. Sullivan & Samuel Fosso Wamba - 2022 - Journal of Business Ethics 178 (4):917-943.
    The current research aims to answer the following question: “who will be held responsible for harm involving an artificial intelligence system?” Drawing upon the literature on moral judgments, we assert that when people perceive an AI system’s action as causing harm to others, they will assign blame to different entity groups involved in an AI’s life cycle, including the company, the developer team, and even the AI system itself, especially when such harm is perceived to be intentional. Drawing upon the (...)
  • Interdisciplinary Confusion and Resolution in the Context of Moral Machines. Jakob Stenseke - 2022 - Science and Engineering Ethics 28 (3):1-17.
    Recent advancements in artificial intelligence have fueled widespread academic discourse on the ethics of AI within and across a diverse set of disciplines. One notable subfield of AI ethics is machine ethics, which seeks to implement ethical considerations into AI systems. However, since different research efforts within machine ethics have discipline-specific concepts, practices, and goals, the resulting body of work is pestered with conflict and confusion as opposed to fruitful synergies. The aim of this paper is to explore ways to (...)
  • Artificial virtuous agents: from theory to machine implementation. Jakob Stenseke - 2023 - AI and Society 38 (4):1301-1320.
    Virtue ethics has many times been suggested as a promising recipe for the construction of artificial moral agents due to its emphasis on moral character and learning. However, given the complex nature of the theory, hardly any work has de facto attempted to implement the core tenets of virtue ethics in moral machines. The main goal of this paper is to demonstrate how virtue ethics can be taken all the way from theory to machine implementation. To achieve this goal, we (...)
  • Artificial virtuous agents in a multi-agent tragedy of the commons. Jakob Stenseke - 2022 - AI and Society:1-18.
    Although virtue ethics has repeatedly been proposed as a suitable framework for the development of artificial moral agents, it has been proven difficult to approach from a computational perspective. In this work, we present the first technical implementation of artificial virtuous agents in moral simulations. First, we review previous conceptual and technical work in artificial virtue ethics and describe a functionalistic path to AVAs based on dispositional virtues, bottom-up learning, and top-down eudaimonic reward. We then provide the details of a (...)
  • A willingness to be vulnerable: norm psychology and human–robot relationships. Stephen A. Setman - 2021 - Ethics and Information Technology 23 (4):815-824.
    Should we welcome social robots into interpersonal relationships? In this paper I show that an adequate answer to this question must take three factors into consideration: (1) the psychological vulnerability that characterizes ordinary interpersonal relationships, (2) the normative significance that humans attach to other people’s attitudes in such relationships, and (3) the tendency of humans to anthropomorphize and “mentalize” artificial agents, often beyond their actual capacities. I argue that we should welcome social robots into interpersonal relationships only if they are (...)
  • Human Goals Are Constitutive of Agency in Artificial Intelligence. Elena Popa - 2021 - Philosophy and Technology 34 (4):1731-1750.
    The question whether AI systems have agency is gaining increasing importance in discussions of responsibility for AI behavior. This paper argues that an approach to artificial agency needs to be teleological, and consider the role of human goals in particular if it is to adequately address the issue of responsibility. I will defend the view that while AI systems can be viewed as autonomous in the sense of identifying or pursuing goals, they rely on human goals and other values incorporated (...)
  • A Comparative Defense of Self-initiated Prospective Moral Answerability for Autonomous Robot Harm. Marc Champagne & Ryan Tonkens - 2023 - Science and Engineering Ethics 29 (4):1-26.
    As artificial intelligence becomes more sophisticated and robots approach autonomous decision-making, debates about how to assign moral responsibility have gained importance, urgency, and sophistication. Answering Stenseke’s (2022a) call for scaffolds that can help us classify views and commitments, we think the current debate space can be represented hierarchically, as answers to key questions. We use the resulting taxonomy of five stances to differentiate—and defend—what is known as the “blank check” proposal. According to this proposal, a person activating a robot could (...)
  • Can Autonomous Agents Without Phenomenal Consciousness Be Morally Responsible? László Bernáth - 2021 - Philosophy and Technology 34 (4):1363-1382.
    It is an increasingly popular view among philosophers that moral responsibility can, in principle, be attributed to unconscious autonomous agents. This trend is already remarkable in itself, but it is even more interesting that most proponents of this view provide more or less the same argument to support their position. I argue that as it stands, the Extension Argument, as I call it, is not sufficient to establish the thesis that unconscious autonomous agents can be morally responsible. I attempt to (...)
  • Can AlphaGo be apt subjects for Praise/Blame for "Move 37"? Mubarak Hussain - 2023 - AIES '23: AAAI/ACM Conference on AI, Ethics, and Society, Montréal, QC, Canada, August.
    This paper examines whether machines (algorithms/programs/ AI systems) are apt subjects for praise or blame for some actions or performances. I consider "Move 37" of AlphaGo as a case study. DeepMind’s AlphaGo is an AI algorithm developed to play the game of Go. The AlphaGo utilizes Deep Neural Networks. As AlphaGo is trained through reinforcement learning, the AI algorithm can improve itself over a period of time. Such AI models can go beyond the intended task and perform novel and unpredictable (...)
  • The Ethics of Artificial Intelligence and Robotization in Tourism and Hospitality – A Conceptual Framework and Research Agenda. Stanislav Ivanov & Steven Umbrello - 2021 - Journal of Smart Tourism 1 (2):9-18.
    The impacts that AI and robotics systems can and will have on our everyday lives are already making themselves manifest. However, there is a lack of research on the ethical impacts and means for amelioration regarding AI and robotics within tourism and hospitality. Given the importance of designing technologies that cross national boundaries, and given that the tourism and hospitality industry is fundamentally predicated on multicultural interactions, this is an area of research and application that requires particular attention. Specifically, tourism (...)
  • AI and Ethics: Reality or Oxymoron? Jean Kühn Keyser - manuscript
    A philosophical-linguistic exploration into the existence or not of AI ethics. Using Adorno's negative dialectics, the author considers contemporary approaches to AI and Ethics, especially with regard to policy and law considerations, looking at whether these approaches in fact speak to our historical conception of AI and what the actual emergence of the latter could imply for future ethical concerns.
  • Digital Me Ontology and Ethics. Ljupco Kocarev & Jasna Koteska - manuscript
    This paper addresses the ontology and ethics of an AI agent called digital me. We define digital me as an autonomous, decision-making, and learning agent that represents an individual and has a practically immortal life of its own. It is assumed that digital me is equipped with the big-five personality model, ensuring that it provides a model of some aspects of a strong AI: consciousness, free will, and intentionality. As computer-based personality (...)