References

  • On the computational complexity of ethics: moral tractability for minds and machines.Jakob Stenseke - 2024 - Artificial Intelligence Review 57 (105):90.
    Why should moral philosophers, moral psychologists, and machine ethicists care about computational complexity? Debates on whether artificial intelligence (AI) can or should be used to solve problems in ethical domains have mainly been driven by what AI can or cannot do in terms of human capacities. In this paper, we tackle the problem from the other end by exploring what kind of moral machines are possible based on what computational systems can or cannot do. To do so, we analyze normative (...)
  • Strictly Human: Limitations of Autonomous Systems.Sadjad Soltanzadeh - 2022 - Minds and Machines 32 (2):269-288.
    Can autonomous systems replace humans in the performance of their activities? How does the answer to this question inform the design of autonomous systems? The study of technical systems and their features should be preceded by the study of the activities in which they play roles. Each activity can be described by its overall goals, governing norms and the intermediate steps which are taken to achieve the goals and to follow the norms. This paper uses the activity realist approach to (...)
  • A qualified defense of top-down approaches in machine ethics.Tyler Cook - forthcoming - AI and Society:1-15.
    This paper concerns top-down approaches in machine ethics. It is divided into three main parts. First, I briefly describe top-down design approaches, and in doing so I make clear what those approaches are committed to and what they involve when it comes to training an AI to behave ethically. In the second part, I formulate two underappreciated motivations for endorsing them, one relating to predictability of machine behavior and the other relating to scrutability of machine decision-making. Finally, I present three (...)
  • Nonhuman Moral Agency: A Practice-Focused Exploration of Moral Agency in Nonhuman Animals and Artificial Intelligence.Dorna Behdadi - 2023 - Dissertation, University of Gothenburg
    Can nonhuman animals and artificial intelligence (AI) entities be attributed moral agency? The general assumption in the philosophical literature is that moral agency applies exclusively to humans since they alone possess free will or capacities required for deliberate reflection. Consequently, only humans have been taken to be eligible for ascriptions of moral responsibility in terms of, for instance, blame or praise, moral criticism, or attributions of vice and virtue. Animals and machines may cause harm, but they cannot be appropriately ascribed (...)
  • Getting Machines to Do Your Dirty Work.Tomi Francis & Todd Karhu - forthcoming - Philosophical Studies:1-15.
    Autonomous systems are machines that can alter their behavior without direct human oversight or control. How ought we to program them to behave? A plausible starting point is given by the Reduction to Acts Thesis, according to which we ought to program autonomous systems to do whatever a human agent ought to do in the same circumstances. Although the Reduction to Acts Thesis is initially appealing, we argue that it is false: it is sometimes permissible to program a machine to (...)
  • No Right To Mercy - Making Sense of Arguments From Dignity in the Lethal Autonomous Weapons Debate.Maciej Zając - 2020 - Etyka 59 (1):134-155.
    Arguments from human dignity feature prominently in the Lethal Autonomous Weapons moral feasibility debate, even though there exists considerable controversy over their role and soundness, and the notion of dignity remains under-defined. Drawing on the work of Dieter Birnbacher, I fix the sub-discourse as referring to the essential value of human persons in general, and to postulated moral rights of combatants not covered within the existing paradigm of International Humanitarian Law in particular. I then review and critique dignity-based arguments (...)
  • On the indignity of killer robots.Garry Young - 2021 - Ethics and Information Technology 23 (3):473-482.
    Recent discussion on the ethics of killer robots has focused on the supposed lack of respect their deployment would show to combatants targeted, thereby causing their undignified deaths. I present two rebuttals of this argument. The weak rebuttal maintains that while deploying killer robots is an affront to the dignity of combatants, their use should nevertheless be thought of as a pro tanto wrong, making deployment permissible if the affront is outweighed by some right-making feature. This rebuttal is, however, vulnerable (...)
  • Autonomous Weapon Systems: A Clarification.Nathan Gabriel Wood - 2023 - Journal of Military Ethics 22 (1):18-32.
    Due to advances in military technology, there has been an outpouring of research on what are known as autonomous weapon systems (AWS). However, it is common in this literature for arguments to be made without first making clear exactly what definitions one is employing, with the detrimental effect that authors may speak past one another or even miss the targets of their arguments. In this article I examine the U.S. Department of Defense and International Committee of the Red Cross definitions (...)
  • Technological Answerability and the Severance Problem: Staying Connected by Demanding Answers.Daniel W. Tigard - 2021 - Science and Engineering Ethics 27 (5):1-20.
    Artificial intelligence and robotic technologies have become nearly ubiquitous. In some ways, the developments have likely helped us, but in other ways sophisticated technologies set back our interests. Among the latter sort is what has been dubbed the ‘severance problem’—the idea that technologies sever our connection to the world, a connection which is necessary for us to flourish and live meaningful lives. I grant that the severance problem is a threat we should mitigate and I ask: how can we stave (...)
  • There Is No Techno-Responsibility Gap.Daniel W. Tigard - 2020 - Philosophy and Technology 34 (3):589-607.
    In a landmark essay, Andreas Matthias claimed that current developments in autonomous, artificially intelligent systems are creating a so-called responsibility gap, which is allegedly ever-widening and stands to undermine both the moral and legal frameworks of our society. But how severe is the threat posed by emerging technologies? In fact, a great number of authors have indicated that the fear is thoroughly instilled. The most pessimistic are calling for a drastic scaling-back or complete moratorium on AI systems, while the optimists (...)
  • Interdisciplinary Confusion and Resolution in the Context of Moral Machines.Jakob Stenseke - 2022 - Science and Engineering Ethics 28 (3):1-17.
    Recent advancements in artificial intelligence have fueled widespread academic discourse on the ethics of AI within and across a diverse set of disciplines. One notable subfield of AI ethics is machine ethics, which seeks to implement ethical considerations into AI systems. However, since different research efforts within machine ethics have discipline-specific concepts, practices, and goals, the resulting body of work is pestered with conflict and confusion as opposed to fruitful synergies. The aim of this paper is to explore ways to (...)
  • Artificial virtuous agents: from theory to machine implementation.Jakob Stenseke - 2023 - AI and Society 38 (4):1301-1320.
    Virtue ethics has many times been suggested as a promising recipe for the construction of artificial moral agents due to its emphasis on moral character and learning. However, given the complex nature of the theory, hardly any work has de facto attempted to implement the core tenets of virtue ethics in moral machines. The main goal of this paper is to demonstrate how virtue ethics can be taken all the way from theory to machine implementation. To achieve this goal, we (...)
  • Customizable Ethics Settings for Building Resilience and Narrowing the Responsibility Gap: Case Studies in the Socio-Ethical Engineering of Autonomous Systems.Sadjad Soltanzadeh, Jai Galliott & Natalia Jevglevskaja - 2020 - Science and Engineering Ethics 26 (5):2693-2708.
    Ethics settings allow for morally significant decisions made by humans to be programmed into autonomous machines, such as autonomous vehicles or autonomous weapons. Customizable ethics settings are a type of ethics setting in which the users of autonomous machines make such decisions. Here two arguments are provided in defence of customizable ethics settings. Firstly, by approaching ethics settings in the context of failure management, it is argued that customizable ethics settings are instrumentally and inherently valuable for building resilience into the (...)
  • Autonomous weapons systems and the moral equality of combatants.Michael Skerker, Duncan Purves & Ryan Jenkins - 2020 - Ethics and Information Technology 22 (3):197-209.
    To many, the idea of autonomous weapons systems (AWS) killing human beings is grotesque. Yet critics have had difficulty explaining why it should make a significant moral difference if a human combatant is killed by an AWS as opposed to being killed by a human combatant. The purpose of this paper is to explore the roots of various deontological concerns with AWS and to consider whether these concerns are distinct from any concerns that also apply to long-distance, human-guided weaponry. We (...)
  • AI and the expert; a blueprint for the ethical use of opaque AI.Amber Ross - forthcoming - AI and Society:1-12.
    The increasing demand for transparency in AI has recently come under scrutiny. The question is often posted in terms of “epistemic double standards”, and whether the standards for transparency in AI ought to be higher than, or equivalent to, our standards for ordinary human reasoners. I agree that the push for increased transparency in AI deserves closer examination, and that comparing these standards to our standards of transparency for other opaque systems is an appropriate starting point. I suggest that a (...)
  • Do Automated Vehicles Face Moral Dilemmas? A Plea for a Political Approach.Javier Rodríguez-Alcázar, Lilian Bermejo-Luque & Alberto Molina-Pérez - 2021 - Philosophy and Technology 34:811-832.
    How should automated vehicles react in emergency circumstances? Most research projects and scientific literature deal with this question from a moral perspective. In particular, it is customary to treat emergencies involving AVs as instances of moral dilemmas and to use the trolley problem as a framework to address such alleged dilemmas. Some critics have pointed out some shortcomings of this strategy and have urged to focus on mundane traffic situations instead of trolley cases involving AVs. Besides, these authors rightly point (...)
  • No Such Thing as Killer Robots.Michael Robillard - 2017 - Journal of Applied Philosophy 35 (4):705-717.
    There have been two recent strands of argument arguing for the pro tanto impermissibility of fully autonomous weapon systems. On Sparrow's view, AWS are impermissible because they generate a morally problematic ‘responsibility gap’. According to Purves et al., AWS are impermissible because moral reasoning is not codifiable and because AWS are incapable of acting for the ‘right’ reasons. I contend that these arguments are flawed and that AWS are not morally problematic in principle. Specifically, I contend that these arguments presuppose (...)
  • The Moral Case for the Development and Use of Autonomous Weapon Systems.Erich Riesen - 2022 - Journal of Military Ethics 21 (2):132-150.
    Autonomous Weapon Systems (AWS) are artificial intelligence systems that can make and act on decisions concerning the termination of enemy soldiers and installations without direct intervention from a human being. In this article, I provide the positive moral case for the development and use of supervised and fully autonomous weapons that can reliably adhere to the laws of war. Two strong, prima facie obligations make up the positive case. First, we have a strong moral reason to deploy AWS (in an (...)
  • The Ethics of Accident-Algorithms for Self-Driving Cars: an Applied Trolley Problem?Sven Nyholm & Jilles Smids - 2016 - Ethical Theory and Moral Practice 19 (5):1275-1289.
    Self-driving cars hold out the promise of being safer than manually driven cars. Yet they cannot be 100% safe. Collisions are sometimes unavoidable. So self-driving cars need to be programmed for how they should respond to scenarios where collisions are highly likely or unavoidable. The accident-scenarios self-driving cars might face have recently been likened to the key examples and dilemmas associated with the trolley problem. In this article, we critically examine this tempting analogy. We identify three important ways (...)
  • The ethics of crashes with self‐driving cars: A roadmap, I.Sven Nyholm - 2018 - Philosophy Compass 13 (7):e12507.
    Self‐driving cars hold out the promise of being much safer than regular cars. Yet they cannot be 100% safe. Accordingly, they need to be programmed for how to deal with crash scenarios. Should cars be programmed to always prioritize their owners, to minimize harm, or to respond to crashes on the basis of some other type of principle? The article first discusses whether everyone should have the same “ethics settings.” Next, the oft‐made analogy with the trolley problem is examined. Then (...)
  • The ethics of crashes with self‐driving cars: A roadmap, II.Sven Nyholm - 2018 - Philosophy Compass 13 (7):e12506.
    Self‐driving cars hold out the promise of being much safer than regular cars. Yet they cannot be 100% safe. Accordingly, we need to think about who should be held responsible when self‐driving cars crash and people are injured or killed. We also need to examine what new ethical obligations might be created for car users by the safety potential of self‐driving cars. The article first considers what lessons might be learned from the growing legal literature on responsibility for crashes with (...)
  • Attributing Agency to Automated Systems: Reflections on Human–Robot Collaborations and Responsibility-Loci.Sven Nyholm - 2018 - Science and Engineering Ethics 24 (4):1201-1219.
    Many ethicists writing about automated systems attribute agency to these systems. Not only that; they seemingly attribute an autonomous or independent form of agency to these machines. This leads some ethicists to worry about responsibility-gaps and retribution-gaps in cases where automated systems harm or kill human beings. In this paper, I consider what sorts of agency it makes sense to attribute to most current forms of automated systems, in particular automated cars and military robots. I argue that whereas it indeed (...)
  • Who Should Decide How Machines Make Morally Laden Decisions?Dominic Martin - 2017 - Science and Engineering Ethics 23 (4):951-967.
    Who should decide how a machine will decide what to do when it is driving a car, performing a medical procedure, or, more generally, when it is facing any kind of morally laden decision? More and more, machines are making complex decisions with a considerable level of autonomy. We should be much more preoccupied by this problem than we currently are. After a series of preliminary remarks, this paper will go over four possible answers to the question raised above. First, (...)
  • Artificial intelligence and responsibility gaps: what is the problem?Peter Königs - 2022 - Ethics and Information Technology 24 (3):1-11.
    Recent decades have witnessed tremendous progress in artificial intelligence and in the development of autonomous systems that rely on artificial intelligence. Critics, however, have pointed to the difficulty of allocating responsibility for the actions of an autonomous system, especially when the autonomous system causes harm or damage. The highly autonomous behavior of such systems, for which neither the programmer, the manufacturer, nor the operator seems to be responsible, has been suspected to generate responsibility gaps. This has been the cause of (...)
  • Instrumental Robots.Sebastian Köhler - 2020 - Science and Engineering Ethics 26 (6):3121-3141.
    Advances in artificial intelligence research allow us to build fairly sophisticated agents: robots and computer programs capable of acting and deciding on their own. These systems raise questions about who is responsible when something goes wrong—when such systems harm or kill humans. In a recent paper, Sven Nyholm has suggested that, because current AI will likely possess what we might call “supervised agency”, the theory of responsibility for individual agency is the wrong place to look for an answer to the (...)
  • Robots and Respect: A Response to Robert Sparrow.Ryan Jenkins & Duncan Purves - 2016 - Ethics and International Affairs 30 (3):391-400.
    Robert Sparrow argues that several initially plausible arguments in favor of the deployment of autonomous weapons systems (AWS) in warfare fail, and that their deployment faces a serious moral objection: deploying AWS fails to express the respect for the casualties of war that morality requires. We critically discuss Sparrow’s argument from respect and respond on behalf of some objections he considers. Sparrow’s argument against AWS relies on the claim that they are distinct from accepted weapons of war in that they (...)
  • Responsibility for Killer Robots.Johannes Himmelreich - 2019 - Ethical Theory and Moral Practice 22 (3):731-747.
    Future weapons will make life-or-death decisions without a human in the loop. When such weapons inflict unwarranted harm, no one appears to be responsible. There seems to be a responsibility gap. I first reconstruct the argument for such responsibility gaps to then argue that this argument is not sound. The argument assumes that commanders have no control over whether autonomous weapons inflict harm. I argue against this assumption. Although this investigation concerns a specific case of autonomous weapons systems, I take (...)
  • Trust and resilient autonomous driving systems.Adam Henschke - 2020 - Ethics and Information Technology 22 (1):81-92.
    Autonomous vehicles, and the larger socio-technical systems that they are a part of, are likely to have a deep and lasting impact on our societies. Trust is a key value that will play a role in the development of autonomous driving systems. This paper suggests that trust of autonomous driving systems will impact the ways that these systems are taken up, the norms and laws that guide them and the design of the systems themselves. Further to this, in order to (...)
  • What we owe to decision-subjects: beyond transparency and explanation in automated decision-making.David Gray Grant, Jeff Behrends & John Basl - 2023 - Philosophical Studies:1-31.
    The ongoing explosion of interest in artificial intelligence is fueled in part by recently developed techniques in machine learning. Those techniques allow automated systems to process huge amounts of data, utilizing mathematical methods that depart from traditional statistical approaches, and resulting in impressive advancements in our ability to make predictions and uncover correlations across a host of interesting domains. But as is now widely discussed, the way that those systems arrive at their outputs is often opaque, even to the experts (...)
  • The Problem with Killer Robots.Nathan Gabriel Wood - 2020 - Journal of Military Ethics 19 (3):220-240.
    Warfare is becoming increasingly automated, from automatic missile defense systems to micro-UAVs (WASPs) that can maneuver through urban environments with ease, and each advance brings with it ethical questions in need of resolving. Proponents of lethal autonomous weapons systems (LAWS) provide varied arguments in their favor; robots are capable of better identifying combatants and civilians, thus reducing "collateral damage"; robots need not protect themselves and so can incur more risks to protect innocents or gather more information before using deadly force; (...)
  • Robot sex and consent: Is consent to sex between a robot and a human conceivable, possible, and desirable?Lily Frank & Sven Nyholm - 2017 - Artificial Intelligence and Law 25 (3):305-323.
    The development of highly humanoid sex robots is on the technological horizon. If sex robots are integrated into the legal community as “electronic persons”, the issue of sexual consent arises, which is essential for legally and morally permissible sexual relations between human persons. This paper explores whether it is conceivable, possible, and desirable that humanoid robots should be designed such that they are capable of consenting to sex. We consider reasons for giving both “no” and “yes” answers to these three (...)
  • Operationalizing the Ethics of Soldier Enhancement.Jovana Davidovic & Forrest S. Crowell - 2022 - Journal of Military Ethics 20 (3-4):180-199.
    This article is a result of a unique project that brought together academics and military practitioners with a mind to addressing difficult moral questions in a way that is philosophically careful,...
  • Robots, Law and the Retribution Gap.John Danaher - 2016 - Ethics and Information Technology 18 (4):299–309.
    We are living through an era of increased robotisation. Some authors have already begun to explore the impact of this robotisation on legal rules and practice. In doing so, many highlight potential liability gaps that might arise through robot misbehaviour. Although these gaps are interesting and socially significant, they do not exhaust the possible gaps that might be created by increased robotisation. In this article, I make the case for one of those alternative gaps: the retribution gap. This gap arises (...)
  • What’s Wrong with Designing People to Serve?Bartek Chomanski - 2019 - Ethical Theory and Moral Practice 22 (4):993-1015.
    In this paper I argue, contrary to recent literature, that it is unethical to create artificial agents possessing human-level intelligence that are programmed to be human beings’ obedient servants. In developing the argument, I concede that there are possible scenarios in which building such artificial servants is, on net, beneficial. I also concede that, on some conceptions of autonomy, it is possible to build human-level AI servants that will enjoy full-blown autonomy. Nonetheless, the main thrust of my argument is that, (...)
  • Can Autonomous Agents Without Phenomenal Consciousness Be Morally Responsible?László Bernáth - 2021 - Philosophy and Technology 34 (4):1363-1382.
    It is an increasingly popular view among philosophers that moral responsibility can, in principle, be attributed to unconscious autonomous agents. This trend is already remarkable in itself, but it is even more interesting that most proponents of this view provide more or less the same argument to support their position. I argue that as it stands, the Extension Argument, as I call it, is not sufficient to establish the thesis that unconscious autonomous agents can be morally responsible. I attempt to (...)
  • A Normative Approach to Artificial Moral Agency.Dorna Behdadi & Christian Munthe - 2020 - Minds and Machines 30 (2):195-218.
    This paper proposes a methodological redirection of the philosophical debate on artificial moral agency in view of increasingly pressing practical needs due to technological development. This “normative approach” suggests abandoning theoretical discussions about what conditions may hold for moral agency and to what extent these may be met by artificial entities such as AI systems and robots. Instead, the debate should focus on how and to what extent such entities should be included in human practices normally assuming moral agency and (...)
  • The Implementation of Ethical Decision Procedures in Autonomous Systems : the Case of the Autonomous Vehicle.Katherine Evans - 2021 - Dissertation, Sorbonne Université
    The ethics of emerging forms of artificial intelligence has become a prolific subject in both academic and public spheres. A great deal of these concerns flow from the need to ensure that these technologies do not cause harm—physical, emotional or otherwise—to the human agents with which they will interact. In the literature, this challenge has been met with the creation of artificial moral agents: embodied or virtual forms of artificial intelligence whose decision procedures are constrained by explicit normative principles, requiring (...)
  • Should we campaign against sex robots?John Danaher, Brian D. Earp & Anders Sandberg - 2017 - In John Danaher & Neil McArthur (eds.), Robot Sex: Social and Ethical Implications. Cambridge, MA: MIT Press.
    In September 2015 a well-publicised Campaign Against Sex Robots (CASR) was launched. Modelled on the longer-standing Campaign to Stop Killer Robots, the CASR opposes the development of sex robots on the grounds that the technology is being developed with a particular model of female-male relations (the prostitute-john model) in mind, and that this will prove harmful in various ways. In this chapter, we consider carefully the merits of campaigning against such a technology. We make three main arguments. First, we argue (...)