  • Technology as Driver for Morally Motivated Conceptual Engineering. Herman Veluwenkamp, Marianna Capasso, Jonne Maas & Lavinia Marin - 2022 - Philosophy and Technology 35 (3):1-25.
    New technologies are the source of uncertainties about the applicability of moral and morally connotated concepts. These uncertainties sometimes call for conceptual engineering, but it is not often recognized when this is the case. We take this to be a missed opportunity, as a recognition that different researchers are working on the same kind of project can help solve methodological questions that one is likely to encounter. In this paper, we present three case studies where philosophers of technology implicitly engage (...)
  • Introduction to the Topical Collection on AI and Responsibility. Niël Conradie, Hendrik Kempt & Peter Königs - 2022 - Philosophy and Technology 35 (4):1-6.
  • Statistically Responsible Artificial Intelligences. Nicholas Smith & Darby Vickers - 2021 - Ethics and Information Technology 23 (3):483-493.
    As artificial intelligence becomes ubiquitous, it will be increasingly involved in novel, morally significant situations. Thus, understanding what it means for a machine to be morally responsible is important for machine ethics. Any method for ascribing moral responsibility to AI must be intelligible and intuitive to the humans who interact with it. We argue that the appropriate approach is to determine how AIs might fare on a standard account of human moral responsibility: a Strawsonian account. We make no claim that (...)
  • Group Agency and Artificial Intelligence. Christian List - 2021 - Philosophy and Technology (4):1-30.
    The aim of this exploratory paper is to review an under-appreciated parallel between group agency and artificial intelligence. As both phenomena involve non-human goal-directed agents that can make a difference to the social world, they raise some similar moral and regulatory challenges, which require us to rethink some of our anthropocentric moral assumptions. Are humans always responsible for those entities’ actions, or could the entities bear responsibility themselves? Could the entities engage in normative reasoning? Could they even have rights and (...)
  • Artificial Intelligence and Responsibility Gaps: What is the Problem? Peter Königs - 2022 - Ethics and Information Technology 24 (3).
    Recent decades have witnessed tremendous progress in artificial intelligence and in the development of autonomous systems that rely on artificial intelligence. Critics, however, have pointed to the difficulty of allocating responsibility for the actions of an autonomous system, especially when the autonomous system causes harm or damage. The highly autonomous behavior of such systems, for which neither the programmer, the manufacturer, nor the operator seems to be responsible, has been suspected to generate responsibility gaps. This has been the cause of (...)
  • Can We Bridge AI’s Responsibility Gap at Will? Maximilian Kiener - 2022 - Ethical Theory and Moral Practice 25 (4):575-593.
    Artificial intelligence increasingly executes tasks that previously only humans could do, such as driving a car, fighting in war, or performing a medical operation. However, as the very best AI systems tend to be the least controllable and the least transparent, some scholars have argued that humans can no longer be morally responsible for some AI-caused outcomes, which would then result in a responsibility gap. In this paper, I assume, for the sake of argument, that at least some of (...)
  • Instrumental Robots. Sebastian Köhler - 2020 - Science and Engineering Ethics 26 (6):3121-3141.
    Advances in artificial intelligence research allow us to build fairly sophisticated agents: robots and computer programs capable of acting and deciding on their own. These systems raise questions about who is responsible when something goes wrong—when such systems harm or kill humans. In a recent paper, Sven Nyholm has suggested that, because current AI will likely possess what we might call “supervised agency”, the theory of responsibility for individual agency is the wrong place to look for an answer to the (...)
  • Responsible AI Through Conceptual Engineering. Johannes Himmelreich & Sebastian Köhler - 2022 - Philosophy and Technology 35 (3):1-30.
    The advent of intelligent artificial systems has sparked a dispute about the question of who is responsible when such a system causes a harmful outcome. This paper champions the idea that this dispute should be approached as a conceptual engineering problem. Towards this claim, the paper first argues that the dispute about the responsibility gap problem is in part a conceptual dispute about the content of responsibility and related concepts. The paper then argues that the way forward is to evaluate (...)
  • Autonomous Artificial Intelligence and Liability: A Comment on List. Michael Da Silva - 2022 - Philosophy and Technology 35 (2):1-6.
    Christian List argues that responsibility gaps created by viewing artificial intelligence as intentional agents are problematic enough that regulators should only permit the use of autonomous AI in high-stakes settings where AI is designed to be moral or a liability transfer agreement will fill any gaps. This work challenges List’s proposed condition. A requirement for “moral” AI is too onerous given technical challenges and other ways to check AI quality. Moreover, transfer agreements only plausibly fill responsibility gaps by applying independently (...)
  • Autonomous Weapon Systems and Jus ad Bellum. Alexander Blanchard & Mariarosaria Taddeo - forthcoming - AI and Society:1-7.
    In this article, we focus on the scholarly and policy debate on autonomous weapon systems and particularly on the objections to the use of these weapons which rest on jus ad bellum principles of proportionality and last resort. Both objections rest on the idea that AWS may increase the incidence of war by reducing the costs for going to war or by providing a propagandistic value. We argue that whilst these objections offer pressing concerns in their own right, they suffer (...)