References
  • On the indignity of killer robots. Garry Young - 2021 - Ethics and Information Technology 23 (3):473-482.
    Recent discussion on the ethics of killer robots has focused on the supposed lack of respect their deployment would show to combatants targeted, thereby causing their undignified deaths. I present two rebuttals of this argument. The weak rebuttal maintains that while deploying killer robots is an affront to the dignity of combatants, their use should nevertheless be thought of as a pro tanto wrong, making deployment permissible if the affront is outweighed by some right-making feature. This rebuttal is, however, vulnerable (...)
  • Collective Responsibility and Artificial Intelligence. Isaac Taylor - 2024 - Philosophy and Technology 37 (1):1-18.
    The use of artificial intelligence (AI) to make high-stakes decisions is sometimes thought to create a troubling responsibility gap – that is, a situation where nobody can be held morally responsible for the outcomes that are brought about. However, philosophers and practitioners have recently claimed that, even though no individual can be held morally responsible, groups of individuals might be. Consequently, they think, we have less to fear from the use of AI than might appear to be the case. This (...)
  • Autonomous weapons systems and the moral equality of combatants. Michael Skerker, Duncan Purves & Ryan Jenkins - 2020 - Ethics and Information Technology 22 (3):197-209.
    To many, the idea of autonomous weapons systems (AWS) killing human beings is grotesque. Yet critics have had difficulty explaining why it should make a significant moral difference if a human combatant is killed by an AWS as opposed to being killed by a human combatant. The purpose of this paper is to explore the roots of various deontological concerns with AWS and to consider whether these concerns are distinct from any concerns that also apply to long-distance, human-guided weaponry. We (...)
  • Artificial intelligence and responsibility. Lode Lauwaert - 2021 - AI and Society 36 (3):1001-1009.
    In the debate on whether to ban LAWS, moral arguments are mainly used. One of these arguments, proposed by Sparrow, is that the use of LAWS goes hand in hand with the responsibility gap. Together with the premise that the ability to hold someone responsible is a necessary condition for the admissibility of an act, Sparrow believes that this leads to the conclusion that LAWS should be prohibited. In this article, it will be shown that Sparrow’s argumentation for both premises (...)
  • Artificial intelligence and responsibility gaps: what is the problem? Peter Königs - 2022 - Ethics and Information Technology 24 (3):1-11.
    Recent decades have witnessed tremendous progress in artificial intelligence and in the development of autonomous systems that rely on artificial intelligence. Critics, however, have pointed to the difficulty of allocating responsibility for the actions of an autonomous system, especially when the autonomous system causes harm or damage. The highly autonomous behavior of such systems, for which neither the programmer, the manufacturer, nor the operator seems to be responsible, has been suspected to generate responsibility gaps. This has been the cause of (...)
  • Responsibility Internalism and Responsibility for AI. Huzeyfe Demirtas - 2023 - Dissertation, Syracuse University.
    I argue for responsibility internalism. That is, moral responsibility (i.e., accountability, or being apt for praise or blame) depends only on factors internal to agents. Employing this view, I also argue that no one is responsible for what AI does, but that this isn't morally problematic in a way that counts against developing or using AI. Responsibility is grounded in three potential conditions: the control (or freedom) condition, the epistemic (or awareness) condition, and the causal responsibility condition (or consequences). I argue (...)