  • Hiring, Algorithms, and Choice: Why Interviews Still Matter. Vikram R. Bhargava & Pooria Assadi - 2024 - Business Ethics Quarterly 34 (2):201-230.
    Why do organizations conduct job interviews? The traditional view of interviewing holds that interviews are conducted, despite their steep costs, to predict a candidate’s future performance and fit. This view faces a twofold threat: the behavioral and algorithmic threats. Specifically, an overwhelming body of behavioral research suggests that we are bad at predicting performance and fit; furthermore, algorithms are already better than us at making these predictions in various domains. If the traditional view captures the whole story, then interviews seem (...)
  • Introduction to the Topical Collection on AI and Responsibility. Niël Conradie, Hendrik Kempt & Peter Königs - 2022 - Philosophy and Technology 35 (4):1-6.
  • Machine learning in healthcare and the methodological priority of epistemology over ethics. Thomas Grote - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    This paper develops an account of how the implementation of ML models into healthcare settings requires revising the methodological apparatus of philosophical bioethics. On this account, ML models are cognitive interventions that provide decision-support to physicians and patients. Due to reliability issues, opaque reasoning processes, and information asymmetries, ML models pose inferential problems for them. These inferential problems lay the grounds for many ethical problems that currently claim centre-stage in the bioethical debate. Accordingly, this paper argues that the best way (...)
  • Technology as Driver for Morally Motivated Conceptual Engineering. Herman Veluwenkamp, Marianna Capasso, Jonne Maas & Lavinia Marin - 2022 - Philosophy and Technology 35 (3):1-25.
    New technologies are the source of uncertainties about the applicability of moral and morally connotated concepts. These uncertainties sometimes call for conceptual engineering, but it is not often recognized when this is the case. We take this to be a missed opportunity, as a recognition that different researchers are working on the same kind of project can help solve methodological questions that one is likely to encounter. In this paper, we present three case studies where philosophers of technology implicitly engage (...)
  • Reasons for Meaningful Human Control. Herman Veluwenkamp - 2022 - Ethics and Information Technology 24 (4):1-9.
    “Meaningful human control” is a term invented in the political and legal debate on autonomous weapons systems, but it is nowadays also used in many other contexts. It is supposed to specify conditions under which an artificial system is under the right kind of control to avoid responsibility gaps: that is, situations in which no moral agent is responsible. Santoni de Sio and Van den Hoven have recently suggested a framework that can be used by system designers to operationalize this (...)
  • Design for values and conceptual engineering. Herman Veluwenkamp & Jeroen van den Hoven - 2023 - Ethics and Information Technology 25 (1):1-12.
    Politicians and engineers are increasingly realizing that values are important in the development of technological artefacts. What is often overlooked is that different conceptualizations of these abstract values lead to different design requirements. For example, designing social media platforms for deliberative democracy sets us up for technical work on completely different types of architectures and mechanisms than designing for so-called liquid or direct forms of democracy. Thinking about democracy is not enough; we need to design for the proper conceptualization of these (...)
  • Statistically responsible artificial intelligences. Nicholas Smith & Darby Vickers - 2021 - Ethics and Information Technology 23 (3):483-493.
    As artificial intelligence becomes ubiquitous, it will be increasingly involved in novel, morally significant situations. Thus, understanding what it means for a machine to be morally responsible is important for machine ethics. Any method for ascribing moral responsibility to AI must be intelligible and intuitive to the humans who interact with it. We argue that the appropriate approach is to determine how AIs might fare on a standard account of human moral responsibility: a Strawsonian account. We make no claim that (...)
  • Correction to: The Responsibility Gap and LAWS: a Critical Mapping of the Debate. Ann-Katrien Oimann - 2023 - Philosophy and Technology 36 (1):1-2.
    AI has numerous applications in various fields, including the military domain. The increase in the degree of autonomy in some decision-making systems leads to discussions on the possible future use of lethal autonomous weapons systems (LAWS). A central issue in these discussions is the assignment of moral responsibility for some AI-based outcomes. Several authors claim that the high autonomous capability of such systems leads to a so-called “responsibility gap.” In recent years, there has been a surge in philosophical literature (...)
  • The Responsibility Gap and LAWS: a Critical Mapping of the Debate. Ann-Katrien Oimann - 2023 - Philosophy and Technology 36 (1):1-22.
    AI has numerous applications in various fields, including the military domain. The increase in the degree of autonomy in some decision-making systems leads to discussions on the possible future use of lethal autonomous weapons systems (LAWS). A central issue in these discussions is the assignment of moral responsibility for some AI-based outcomes. Several authors claim that the high autonomous capability of such systems leads to a so-called “responsibility gap.” In recent years, there has been a surge in philosophical literature (...)
  • Why Command Responsibility May (Not) Be a Solution to Address Responsibility Gaps in LAWS. Ann-Katrien Oimann - forthcoming - Criminal Law and Philosophy:1-27.
    The possible future use of lethal autonomous weapons systems (LAWS) and the challenges associated with assigning moral responsibility lead to several debates. Some authors argue that the highly autonomous capability of such systems may lead to a so-called responsibility gap in situations where LAWS cause serious violations of international humanitarian law. One proposed solution is the doctrine of command responsibility. Despite the doctrine’s original development to govern human interactions on the battlefield, it is worth considering whether the doctrine of command (...)
  • The value of responsibility gaps in algorithmic decision-making. Lauritz Munch, Jakob Mainz & Jens Christian Bjerring - 2023 - Ethics and Information Technology 25 (1):1-11.
    Many seem to think that AI-induced responsibility gaps are morally bad and therefore ought to be avoided. We argue, by contrast, that there is at least a pro tanto reason to welcome responsibility gaps. The central reason is that it can be bad for people to be responsible for wrongdoing. This, we argue, gives us one reason to prefer automated decision-making over human decision-making, especially in contexts where the risks of wrongdoing are high. While we are not the first to (...)
  • Group Agency and Artificial Intelligence. Christian List - 2021 - Philosophy and Technology (4):1-30.
    The aim of this exploratory paper is to review an under-appreciated parallel between group agency and artificial intelligence. As both phenomena involve non-human goal-directed agents that can make a difference to the social world, they raise some similar moral and regulatory challenges, which require us to rethink some of our anthropocentric moral assumptions. Are humans always responsible for those entities’ actions, or could the entities bear responsibility themselves? Could the entities engage in normative reasoning? Could they even have rights and (...)
  • Of trolleys and self-driving cars: What machine ethicists can and cannot learn from trolleyology. Peter Königs - 2023 - Utilitas 35 (1):70-87.
    Crashes involving self-driving cars at least superficially resemble trolley dilemmas. This article discusses what lessons machine ethicists working on the ethics of self-driving cars can learn from trolleyology. The article proceeds by providing an account of the trolley problem as a paradox and by distinguishing two types of solutions to the trolley problem. According to an optimistic solution, our case intuitions about trolley dilemmas are responding to morally relevant differences. The pessimistic solution denies that this is the case. An optimistic (...)
  • Artificial intelligence and responsibility gaps: what is the problem? Peter Königs - 2022 - Ethics and Information Technology 24 (3):1-11.
    Recent decades have witnessed tremendous progress in artificial intelligence and in the development of autonomous systems that rely on artificial intelligence. Critics, however, have pointed to the difficulty of allocating responsibility for the actions of an autonomous system, especially when the autonomous system causes harm or damage. The highly autonomous behavior of such systems, for which neither the programmer, the manufacturer, nor the operator seems to be responsible, has been suspected to generate responsibility gaps. This has been the cause of (...)
  • Can we Bridge AI’s responsibility gap at Will? Maximilian Kiener - 2022 - Ethical Theory and Moral Practice 25 (4):575-593.
    Artificial intelligence increasingly executes tasks that previously only humans could do, such as drive a car, fight in war, or perform a medical operation. However, as the very best AI systems tend to be the least controllable and the least transparent, some scholars have argued that humans can no longer be morally responsible for some of the AI-caused outcomes, which would then result in a responsibility gap. In this paper, I assume, for the sake of argument, that at least some of (...)
  • Instrumental Robots. Sebastian Köhler - 2020 - Science and Engineering Ethics 26 (6):3121-3141.
    Advances in artificial intelligence research allow us to build fairly sophisticated agents: robots and computer programs capable of acting and deciding on their own. These systems raise questions about who is responsible when something goes wrong—when such systems harm or kill humans. In a recent paper, Sven Nyholm has suggested that, because current AI will likely possess what we might call “supervised agency”, the theory of responsibility for individual agency is the wrong place to look for an answer to the (...)
  • Responsible AI Through Conceptual Engineering. Johannes Himmelreich & Sebastian Köhler - 2022 - Philosophy and Technology 35 (3):1-30.
    The advent of intelligent artificial systems has sparked a dispute about the question of who is responsible when such a system causes a harmful outcome. This paper champions the idea that this dispute should be approached as a conceptual engineering problem. Towards this claim, the paper first argues that the dispute about the responsibility gap problem is in part a conceptual dispute about the content of responsibility and related concepts. The paper then argues that the way forward is to evaluate (...)
  • Autonomous Artificial Intelligence and Liability: a Comment on List. Michael Da Silva - 2022 - Philosophy and Technology 35 (2):1-6.
    Christian List argues that responsibility gaps created by viewing artificial intelligence as intentional agents are problematic enough that regulators should only permit the use of autonomous AI in high-stakes settings where AI is designed to be moral or a liability transfer agreement will fill any gaps. This work challenges List’s proposed condition. A requirement for “moral” AI is too onerous given technical challenges and other ways to check AI quality. Moreover, transfer agreements only plausibly fill responsibility gaps by applying independently (...)
  • Autonomous Military Systems: collective responsibility and distributed burdens. Niël Henk Conradie - 2023 - Ethics and Information Technology 25 (1):1-14.
    The introduction of Autonomous Military Systems (AMS) onto contemporary battlefields raises concerns that they will bring with them the possibility of a techno-responsibility gap, leaving insecurity about how to attribute responsibility in scenarios involving these systems. In this work I approach this problem in the domain of applied ethics with foundational conceptual work on autonomy and responsibility. I argue that concerns over the use of AMS can be assuaged by recognising the richly interrelated context in which these systems will most (...)
  • A Comparative Defense of Self-initiated Prospective Moral Answerability for Autonomous Robot Harm. Marc Champagne & Ryan Tonkens - 2023 - Science and Engineering Ethics 29 (4):1-26.
    As artificial intelligence becomes more sophisticated and robots approach autonomous decision-making, debates about how to assign moral responsibility have gained importance, urgency, and sophistication. Answering Stenseke’s (2022a) call for scaffolds that can help us classify views and commitments, we think the current debate space can be represented hierarchically, as answers to key questions. We use the resulting taxonomy of five stances to differentiate—and defend—what is known as the “blank check” proposal. According to this proposal, a person activating a robot could (...)
  • Autonomous weapon systems and jus ad bellum. Alexander Blanchard & Mariarosaria Taddeo - forthcoming - AI and Society:1-7.
    In this article, we focus on the scholarly and policy debate on autonomous weapon systems, and particularly on objections to the use of these weapons that rest on the jus ad bellum principles of proportionality and last resort. Both objections rest on the idea that AWS may increase the incidence of war by reducing the costs of going to war or by providing propagandistic value. We argue that whilst these objections raise pressing concerns in their own right, they suffer (...)
  • Autonomous Force Beyond Armed Conflict. Alexander Blanchard - 2023 - Minds and Machines 33 (1):251-260.
    Proposals by the San Francisco Police Department (SFPD) to use bomb disposal robots for deadly force against humans have met with widespread condemnation. Media coverage of the furore has tended, incorrectly, to conflate these robots with autonomous weapon systems (AWS), the AI-based weapons used in armed conflict. These two types of systems should be treated as distinct since they have different sets of social, ethical, and legal implications. However, the conflation does raise a pressing question: what _if_ the SFPD had (...)
  • Responsibility Internalism and Responsibility for AI. Huzeyfe Demirtas - 2023 - Dissertation, Syracuse University
    I argue for responsibility internalism. That is, moral responsibility (i.e., accountability, or being apt for praise or blame) depends only on factors internal to agents. Employing this view, I also argue that no one is responsible for what AI does, but that this isn’t morally problematic in a way that counts against developing or using AI. Responsibility is grounded in three potential conditions: the control (or freedom) condition, the epistemic (or awareness) condition, and the causal responsibility condition (or consequences). I argue (...)