Citations of:

Killer robots

Journal of Applied Philosophy 24 (1):62–77 (2007)

  • Gamification, Side Effects, and Praise and Blame for Outcomes.Sven Nyholm - 2024 - Minds and Machines 34 (1):1-21.
    “Gamification” refers to adding game-like elements to non-game activities so as to encourage participation. Gamification is used in various contexts: apps on phones motivating people to exercise, employers trying to encourage their employees to work harder, social media companies trying to stimulate user engagement, and so on and so forth. Here, I focus on gamification with this property: the game-designer (a company or other organization) creates a “game” in order to encourage the players (the users) to bring about certain outcomes (...)
  • Is Explainable AI Responsible AI?Isaac Taylor - forthcoming - AI and Society.
    When artificial intelligence (AI) is used to make high-stakes decisions, some worry that this will create a morally troubling responsibility gap—that is, a situation in which nobody is morally responsible for the actions and outcomes that result. Since the responsibility gap might be thought to result from individuals lacking knowledge of the future behavior of AI systems, it can be and has been suggested that deploying explainable artificial intelligence (XAI) techniques will help us to avoid it. These techniques provide humans (...)
  • Safety Engineering for Artificial General Intelligence.Roman Yampolskiy & Joshua Fox - 2012 - Topoi 32 (2):217-226.
    Machine ethics and robot rights are quickly becoming hot topics in artificial intelligence and robotics communities. We will argue that attempts to attribute moral agency and assign rights to all intelligent machines are misguided, whether applied to infrahuman or superhuman AIs, as are proposals to limit the negative effects of AIs by constraining their behavior. As an alternative, we propose a new science of safety engineering for intelligent artificial agents based on maximizing for what humans value. In particular, we challenge (...)
  • The Retribution-Gap and Responsibility-Loci Related to Robots and Automated Technologies: A Reply to Nyholm.Roos de Jong - 2020 - Science and Engineering Ethics 26 (2):727-735.
    Automated technologies and robots make decisions that cannot always be fully controlled or predicted. In addition to that, they cannot respond to punishment and blame in the ways humans do. Therefore, when automated cars harm or kill people, for example, this gives rise to concerns about responsibility-gaps and retribution-gaps. According to Sven Nyholm, however, automated cars do not pose a challenge on human responsibility, as long as humans can control them and update them. He argues that the agency exercised in (...)
  • Operationalizing the Ethics of Soldier Enhancement.Jovana Davidovic & Forrest S. Crowell - 2022 - Journal of Military Ethics 20 (3-4):180-199.
    This article is a result of a unique project that brought together academics and military practitioners with a mind to addressing difficult moral questions in a way that is philosophically careful,...
  • Tragic Choices and the Virtue of Techno-Responsibility Gaps.John Danaher - 2022 - Philosophy and Technology 35 (2):1-26.
    There is a concern that the widespread deployment of autonomous machines will open up a number of ‘responsibility gaps’ throughout society. Various articulations of such techno-responsibility gaps have been proposed over the years, along with several potential solutions. Most of these solutions focus on ‘plugging’ or ‘dissolving’ the gaps. This paper offers an alternative perspective. It argues that techno-responsibility gaps are, sometimes, to be welcomed and that one of the advantages of autonomous machines is that they enable us to embrace (...)
  • Techno-optimism: an Analysis, an Evaluation and a Modest Defence.John Danaher - 2022 - Philosophy and Technology 35 (2):1-29.
    What is techno-optimism and how can it be defended? Although techno-optimist views are widely espoused and critiqued, there have been few attempts to systematically analyse what it means to be a techno-optimist and how one might defend this view. This paper attempts to address this oversight by providing a comprehensive analysis and evaluation of techno-optimism. It is argued that techno-optimism is a pluralistic stance that comes in weak and strong forms. These vary along a number of key dimensions but each (...)
  • Surprising judgments about robot drivers: Experiments on rising expectations and blaming humans.Peter Danielson - 2015 - Etikk I Praksis - Nordic Journal of Applied Ethics 1 (1):73-86.
    N-Reasons is an experimental Internet survey platform designed to enhance public participation in applied ethics and policy. N-Reasons encourages individuals to generate reasons to support their judgments, and groups to converge on a common set of reasons pro and con various issues. In the Robot Ethics Survey some of the reasons contributed surprising judgments about autonomous machines. Presented with a version of the trolley problem with an autonomous train as the agent, participants gave unexpected answers, revealing high expectations for the (...)
  • Robots, Law and the Retribution Gap.John Danaher - 2016 - Ethics and Information Technology 18 (4):299–309.
    We are living through an era of increased robotisation. Some authors have already begun to explore the impact of this robotisation on legal rules and practice. In doing so, many highlight potential liability gaps that might arise through robot misbehaviour. Although these gaps are interesting and socially significant, they do not exhaust the possible gaps that might be created by increased robotisation. In this article, I make the case for one of those alternative gaps: the retribution gap. This gap arises (...)
  • Nehal Bhuta, Susanne Beck, Robin Geiß, Hin-Yan Liu and Claus Kreß (eds.), Autonomous Weapons Systems: Law, Ethics, Policy. Cambridge: Cambridge University Press, 2016. Paperback. ISBN 978-1-316-60765-7. €30, 422 pp.John Danaher - 2017 - Ethical Theory and Moral Practice 20 (4):931-933.
  • Introduction to the Topical Collection on AI and Responsibility.Niël Conradie, Hendrik Kempt & Peter Königs - 2022 - Philosophy and Technology 35 (4):1-6.
  • Autonomous Military Systems: collective responsibility and distributed burdens.Niël Henk Conradie - 2023 - Ethics and Information Technology 25 (1):1-14.
    The introduction of Autonomous Military Systems (AMS) onto contemporary battlefields raises concerns that they will bring with them the possibility of a techno-responsibility gap, leaving insecurity about how to attribute responsibility in scenarios involving these systems. In this work I approach this problem in the domain of applied ethics with foundational conceptual work on autonomy and responsibility. I argue that concerns over the use of AMS can be assuaged by recognising the richly interrelated context in which these systems will most (...)
  • Group blameworthiness and group rights.Stephanie Collins - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    The following pair of claims is standardly endorsed by philosophers working on group agency: (1) groups are capable of irreducible moral agency and, therefore, can be blameworthy; (2) groups are not capable of irreducible moral patiency, and, therefore, lack moral rights. This paper argues that the best case for (1) brings (2) into question. Section 2 paints the standard picture, on which groups’ blameworthiness derives from their functionalist or interpretivist moral agency, while their lack of moral rights derives from their (...)
  • From Killer Machines to Doctrines and Swarms, or Why Ethics of Military Robotics Is not (Necessarily) About Robots.Mark Coeckelbergh - 2011 - Philosophy and Technology 24 (3):269-278.
    Ethical reflections on military robotics can be enriched by a better understanding of the nature and role of these technologies and by putting robotics into context in various ways. Discussing a range of ethical questions, this paper challenges the prevalent assumptions that military robotics is about military technology as a mere means to an end, about single killer machines, and about “military” developments. It recommends that ethics of robotics attend to how military technology changes our aims, concern itself not only (...)
  • Drones, information technology, and distance: mapping the moral epistemology of remote fighting. [REVIEW]Mark Coeckelbergh - 2013 - Ethics and Information Technology 15 (2):87-98.
    Ethical reflection on drone fighting suggests that this practice does not only create physical distance, but also moral distance: far removed from one’s opponent, it becomes easier to kill. This paper discusses this thesis, frames it as a moral-epistemological problem, and explores the role of information technology in bridging and creating distance. Inspired by a broad range of conceptual and empirical resources including ethics of robotics, psychology, phenomenology, and media reports, it is first argued that drone fighting, like other long-range (...)
  • Artificial Intelligence, Responsibility Attribution, and a Relational Justification of Explainability.Mark Coeckelbergh - 2020 - Science and Engineering Ethics 26 (4):2051-2068.
    This paper discusses the problem of responsibility attribution raised by the use of artificial intelligence technologies. It is assumed that only humans can be responsible agents; yet this alone already raises many issues, which are discussed starting from two Aristotelian conditions for responsibility. Next to the well-known problem of many hands, the issue of “many things” is identified and the temporal dimension is emphasized when it comes to the control condition. Special attention is given to the epistemic condition, which draws (...)
  • Machine agency and representation.Beba Cibralic & James Mattingly - 2024 - AI and Society 39 (1):345-352.
    Theories of action tend to require agents to have mental representations. A common trope in discussions of artificial intelligence (AI) is that they do not, and so cannot be agents. Properly understood there may be something to the requirement, but the trope is badly misguided. Here we provide an account of representation for AI that is sufficient to underwrite attributions to these systems of ownership, action, and responsibility. Existing accounts of mental representation tend to be too demanding and unparsimonious. We (...)
  • Liability for Robots: Sidestepping the Gaps.Bartek Chomanski - 2021 - Philosophy and Technology 34 (4):1013-1032.
    In this paper, I outline a proposal for assigning liability for autonomous machines modeled on the doctrine of respondeat superior. I argue that the machines’ users’ or designers’ liability should be determined by the manner in which the machines are created, which, in turn, should be responsive to considerations of the machines’ welfare interests. This approach has the twin virtues of promoting socially beneficial design of machines, and of taking their potential moral patiency seriously. I then argue for abandoning the (...)
  • A Moral Bind? — Autonomous Weapons, Moral Responsibility, and Institutional Reality.Bartlomiej Chomanski - 2023 - Philosophy and Technology 36 (2):1-14.
    In “Accepting Moral Responsibility for the Actions of Autonomous Weapons Systems—a Moral Gambit” (2022), Mariarosaria Taddeo and Alexander Blanchard answer one of the most vexing issues in current ethics of technology: how to close the so-called “responsibility gap”? Their solution is to require that autonomous weapons systems (AWSs) may only be used if there is some human being who accepts the ex ante responsibility for those actions of the AWS that could not have been predicted or intended (in such cases, (...)
  • The Mandatory Ontology of Robot Responsibility.Marc Champagne - 2021 - Cambridge Quarterly of Healthcare Ethics 30 (3):448–454.
    Do we suddenly become justified in treating robots like humans by positing new notions like “artificial moral agency” and “artificial moral responsibility”? I answer no. Or, to be more precise, I argue that such notions may become philosophically acceptable only after crucial metaphysical issues have been addressed. My main claim, in sum, is that “artificial moral responsibility” betokens moral responsibility to the same degree that a “fake orgasm” betokens an orgasm.
  • Bridging the Responsibility Gap in Automated Warfare.Marc Champagne & Ryan Tonkens - 2015 - Philosophy and Technology 28 (1):125-137.
    Sparrow argues that military robots capable of making their own decisions would be independent enough to allow us denial for their actions, yet too unlike us to be the targets of meaningful blame or praise—thereby fostering what Matthias has dubbed “the responsibility gap.” We agree with Sparrow that someone must be held responsible for all actions taken in a military conflict. That said, we think Sparrow overlooks the possibility of what we term “blank check” responsibility: A person of sufficiently high (...)
  • A Comparative Defense of Self-initiated Prospective Moral Answerability for Autonomous Robot harm.Marc Champagne & Ryan Tonkens - 2023 - Science and Engineering Ethics 29 (4):1-26.
    As artificial intelligence becomes more sophisticated and robots approach autonomous decision-making, debates about how to assign moral responsibility have gained importance, urgency, and sophistication. Answering Stenseke’s (2022a) call for scaffolds that can help us classify views and commitments, we think the current debate space can be represented hierarchically, as answers to key questions. We use the resulting taxonomy of five stances to differentiate—and defend—what is known as the “blank check” proposal. According to this proposal, a person activating a robot could (...)
  • Machine and human agents in moral dilemmas: automation–autonomic and EEG effect.Federico Cassioli, Laura Angioletti & Michela Balconi - forthcoming - AI and Society:1-13.
    Automation is inherently tied to ethical challenges because of its potential involvement in morally loaded decisions. In the present research, participants (n = 34) took part in a moral multi-trial dilemma-based task where the agent (human vs. machine) and the behavior (action vs. inaction) factors were randomized. Self-report measures, in terms of morality, consciousness, responsibility, intentionality, and emotional impact evaluation were gathered, together with electroencephalography (delta, theta, beta, upper and lower alpha, and gamma powers) and peripheral autonomic (electrodermal activity, heart (...)
  • De Bello Robotico. An Ethical Assessment of Military Robotics.Riccardo Campa - 2019 - Studia Humana 8 (1):19-48.
    This article provides a detailed description of robotic weapons and unmanned systems currently used by the U.S. Military and its allies, and an ethical assessment of their actual or potential use on the battlefield. Firstly, through a review of scientific literature, reports, and newspaper articles, a catalogue of ethical problems related to military robotics is compiled. Secondly, possible solutions for these problems are offered, by relying also on analytic tools provided by the new field of roboethics. Finally, the article explores (...)
  • Mind the gaps: Assuring the safety of autonomous systems from an engineering, ethical, and legal perspective.Simon Burton, Ibrahim Habli, Tom Lawton, John McDermid, Phillip Morgan & Zoe Porter - 2020 - Artificial Intelligence 279 (C):103201.
  • On the legal responsibility of autonomous machines.Bartosz Brożek & Marek Jakubiec - 2017 - Artificial Intelligence and Law 25 (3):293-304.
    The paper concerns the problem of the legal responsibility of autonomous machines. In our opinion it boils down to the question of whether such machines can be seen as real agents through the prism of folk-psychology. We argue that autonomous machines cannot be granted the status of legal agents. Although this is quite possible from purely technical point of view, since the law is a conventional tool of regulating social interactions and as such can accommodate various legislative constructs, including legal (...)
  • Jus in bello Necessity, The Requirement of Minimal Force, and Autonomous Weapons Systems.Alexander Blanchard & Mariarosaria Taddeo - 2022 - Journal of Military Ethics 21 (3):286-303.
    In this article we focus on the jus in bello principle of necessity for guiding the use of autonomous weapons systems (AWS). We begin our analysis with an account of the principle of necessity as entailing the requirement of minimal force found in Just War Theory, before highlighting the absence of this principle in existing work on AWS. Overlooking this principle means discounting the obligations that combatants have towards one another in times of war. We argue that the requirement of (...)
  • Autonomous weapon systems and jus ad bellum.Alexander Blanchard & Mariarosaria Taddeo - forthcoming - AI and Society:1-7.
    In this article, we focus on the scholarly and policy debate on autonomous weapon systems and particularly on the objections to the use of these weapons which rest on jus ad bellum principles of proportionality and last resort. Both objections rest on the idea that AWS may increase the incidence of war by reducing the costs for going to war or by providing a propagandistic value. We argue that whilst these objections offer pressing concerns in their own right, they suffer (...)
  • Hiring, Algorithms, and Choice: Why Interviews Still Matter.Vikram R. Bhargava & Pooria Assadi - 2024 - Business Ethics Quarterly 34 (2):201-230.
    Why do organizations conduct job interviews? The traditional view of interviewing holds that interviews are conducted, despite their steep costs, to predict a candidate’s future performance and fit. This view faces a twofold threat: the behavioral and algorithmic threats. Specifically, an overwhelming body of behavioral research suggests that we are bad at predicting performance and fit; furthermore, algorithms are already better than us at making these predictions in various domains. If the traditional view captures the whole story, then interviews seem (...)
  • Can Autonomous Agents Without Phenomenal Consciousness Be Morally Responsible?László Bernáth - 2021 - Philosophy and Technology 34 (4):1363-1382.
    It is an increasingly popular view among philosophers that moral responsibility can, in principle, be attributed to unconscious autonomous agents. This trend is already remarkable in itself, but it is even more interesting that most proponents of this view provide more or less the same argument to support their position. I argue that as it stands, the Extension Argument, as I call it, is not sufficient to establish the thesis that unconscious autonomous agents can be morally responsible. I attempt to (...)
  • Automated decision-making and the problem of evil.Andrea Berber - forthcoming - AI and Society:1-10.
    The intention of this paper is to point to the dilemma humanity may face in light of AI advancements. The dilemma is whether to create a world with less evil or maintain the human status of moral agents. This dilemma may arise as a consequence of using automated decision-making systems for high-stakes decisions. The use of automated decision-making bears the risk of eliminating human moral agency and autonomy and reducing humans to mere moral patients. On the other hand, it also (...)
  • The Government of Evil Machines: an Application of Romano Guardini’s Thought on Technology.Enrico Beltramini - 2021 - Scientia et Fides 9 (1):257-281.
    In this article I propose a theological reflection on the philosophical assumptions behind the idea that intelligent machine can be governed through ethical protocols, which may apply either to the people who develop the machines or to the machines themselves, or both. This idea is particularly relevant in the case of machines’ extreme wrongdoing, a wrongdoing that becomes an existential risk for humankind. I call this extreme wrong-doing, ‘evil.’ Thus, this article is a theological account on the philosophical assumptions behind (...)
  • Can AI Weapons Make Ethical Decisions?Ross W. Bellaby - 2021 - Criminal Justice Ethics 40 (2):86-107.
    The ability of machines to make truly independent and autonomous decisions is a goal of many, not least of military leaders who wish to take the human out of the loop as much as possible, claiming that autonomous military weaponry—most notably drones—can make decisions more quickly and with greater accuracy. However, there is no clear understanding of how autonomous weapons should be conceptualized and of the implications that their “autonomous” nature has on them as ethical agents. It will be argued (...)
  • A Normative Approach to Artificial Moral Agency.Dorna Behdadi & Christian Munthe - 2020 - Minds and Machines 30 (2):195-218.
    This paper proposes a methodological redirection of the philosophical debate on artificial moral agency in view of increasingly pressing practical needs due to technological development. This “normative approach” suggests abandoning theoretical discussions about what conditions may hold for moral agency and to what extent these may be met by artificial entities such as AI systems and robots. Instead, the debate should focus on how and to what extent such entities should be included in human practices normally assuming moral agency and (...)
  • From Responsibility to Reason-Giving Explainable Artificial Intelligence.Kevin Baum, Susanne Mantel, Timo Speith & Eva Schmidt - 2022 - Philosophy and Technology 35 (1):1-30.
    We argue that explainable artificial intelligence (XAI), specifically reason-giving XAI, often constitutes the most suitable way of ensuring that someone can properly be held responsible for decisions that are based on the outputs of artificial intelligent (AI) systems. We first show that, to close moral responsibility gaps (Matthias 2004), often a human in the loop is needed who is directly responsible for particular AI-supported decisions. Second, we appeal to the epistemic condition on moral responsibility to argue that, in order to (...)
  • Strictly Human: Limitations of Autonomous Systems.Sadjad Soltanzadeh - 2022 - Minds and Machines 32 (2):269-288.
    Can autonomous systems replace humans in the performance of their activities? How does the answer to this question inform the design of autonomous systems? The study of technical systems and their features should be preceded by the study of the activities in which they play roles. Each activity can be described by its overall goals, governing norms and the intermediate steps which are taken to achieve the goals and to follow the norms. This paper uses the activity realist approach to (...)
  • Armed military robots: editorial.Jürgen Altmann, Peter Asaro, Noel Sharkey & Robert Sparrow - 2013 - Ethics and Information Technology 15 (2):73-76.
    Arming uninhabited vehicles is an increasing trend. Widespread deployment can bring dangers for arms-control agreements and international humanitarian law. Armed UVs can destabilise the situation between potential opponents. Smaller systems can be used for terrorism. Using a systematic definition existing international regulation of armed UVs in the fields of arms control, export control and transparency measures is reviewed; these partly include armed UVs, but leave large gaps. For preventive arms control a general prohibition of armed UVs would be best. If (...)
  • Arms control for armed uninhabited vehicles: an ethical issue.Jürgen Altmann - 2013 - Ethics and Information Technology 15 (2):137-152.
    Arming uninhabited vehicles (UVs) is an increasing trend. Widespread deployment can bring dangers for arms-control agreements and international humanitarian law (IHL). Armed UVs can destabilise the situation between potential opponents. Smaller systems can be used for terrorism. Using a systematic definition existing international regulation of armed UVs in the fields of arms control, export control and transparency measures is reviewed; these partly include armed UVs, but leave large gaps. For preventive arms control a general prohibition of armed UVs would be (...)
  • What Are Applied Ethics?Fritz Allhoff - 2011 - Science and Engineering Ethics 17 (1):1-19.
    This paper explores the relationships that various applied ethics bear to each other, both in particular disciplines and more generally. The introductory section lays out the challenge of coming up with such an account and, drawing a parallel with the philosophy of science, offers that applied ethics may either be unified or disunified. The second section develops one simple account through which applied ethics are unified, vis-à-vis ethical theory. However, this is not taken to be a satisfying answer, for reasons (...)
  • Moralsk ansvar for handlinger til autonome våpensystemer [Moral Responsibility for the Actions of Autonomous Weapons Systems].Kjetil Holtmon Akø - 2023 - Norsk Filosofisk Tidsskrift 58 (2-3):118-128.
  • Computers Are Syntax All the Way Down: Reply to Bozşahin.William J. Rapaport - 2019 - Minds and Machines 29 (2):227-237.
    A response to a recent critique by Cem Bozşahin of the theory of syntactic semantics as it applies to Helen Keller, and some applications of the theory to the philosophy of computer science.
  • Robots of Just War: A Legal Perspective.Ugo Pagallo - 2011 - Philosophy and Technology 24 (3):307-323.
    In order to present a hopefully comprehensive framework of what is the stake of the growing use of robot soldiers, the paper focuses on: the different impact of robots on legal systems, e.g., contractual obligations and tort liability; how robots affect crucial notions as causality, predictability and human culpability in criminal law and, finally, specific hypotheses of robots employed in “just wars.” By using the traditional distinction between causes that make wars just and conduct admissible on the battlefield, the aim (...)
  • Robots as Weapons in Just Wars.Marcus Schulzke - 2011 - Philosophy and Technology 24 (3):293-306.
    This essay analyzes the use of military robots in terms of the jus in bello concepts of discrimination and proportionality. It argues that while robots may make mistakes, they do not suffer from most of the impairments that interfere with human judgment on the battlefield. Although robots are imperfect weapons, they can exercise as much restraint as human soldiers, if not more. Robots can be used in a way that is consistent with just war theory when they are programmed to (...)
  • Risk and Responsibility in Context.Adriana Placani & Stearns Broadhead (eds.) - 2023 - New York: Routledge.
    This volume bridges contemporary philosophical conceptions of risk and responsibility and offers an extensive examination of the topic. It shows that risk and responsibility combine in ways that give rise to new philosophical questions and problems. Philosophical interest in the relationship between risk and responsibility continues to rise, due in no small part due to environmental crises, emerging technologies, legal developments, and new medical advances. Despite such interest, scholars are just now working out how to conceive of the links between (...)
  • Realising Meaningful Human Control Over Automated Driving Systems: A Multidisciplinary Approach.Filippo Santoni de Sio, Giulio Mecacci, Simeon Calvert, Daniel Heikoop, Marjan Hagenzieker & Bart van Arem - 2023 - Minds and Machines 33 (4):587-611.
    The paper presents a framework to realise “meaningful human control” over Automated Driving Systems. The framework is based on an original synthesis of the results of the multidisciplinary research project “Meaningful Human Control over Automated Driving Systems” led by a team of engineers, philosophers, and psychologists at Delft University of Technology from 2017 to 2021. Meaningful human control aims at protecting safety and reducing responsibility gaps. The framework is based on the core assumption that human persons and institutions, not (...)
  • Philosophy of AI: A structured overview.Vincent C. Müller - 2024 - In Nathalie A. Smuha (ed.), Cambridge handbook on the law, ethics and policy of Artificial Intelligence. Cambridge University Press. pp. 1-25.
    This paper presents the main topics, arguments, and positions in the philosophy of AI at present (excluding ethics). Apart from the basic concepts of intelligence and computation, the main topics of artificial cognition are perception, action, meaning, rational choice, free will, consciousness, and normativity. Through a better understanding of these topics, the philosophy of AI contributes to our understanding of the nature, prospects, and value of AI. Furthermore, these topics can be understood more deeply through the discussion of AI; so (...)
  • On the Matter of Robot Minds.Brian P. McLaughlin & David Rose - forthcoming - Oxford Studies in Experimental Philosophy.
    The view that phenomenally conscious robots are on the horizon often rests on a certain philosophical view about consciousness, one we call “nomological behaviorism.” The view entails that, as a matter of nomological necessity, if a robot had exactly the same patterns of dispositions to peripheral behavior as a phenomenally conscious being, then the robot would be phenomenally conscious; indeed it would have all and only the states of phenomenal consciousness that the phenomenally conscious being in question has. We experimentally (...)
  • Artificial agents and the expanding ethical circle.Steve Torrance - 2013 - AI and Society 28 (4):399-414.
    I discuss the realizability and the ethical ramifications of Machine Ethics, from a number of different perspectives: I label these the anthropocentric, infocentric, biocentric and ecocentric perspectives. Each of these approaches takes a characteristic view of the position of humanity relative to other aspects of the designed and the natural worlds—or relative to the possibilities of ‘extra-human’ extensions to the ethical community. In the course of the discussion, a number of key issues emerge concerning the relation between technology and ethics, (...)
  • The Morality of Artificial Friends in Ishiguro’s Klara and the Sun.Jakob Stenseke - 2022 - Journal of Science Fiction and Philosophy 5.
    Can artificial entities be worthy of moral considerations? Can they be artificial moral agents (AMAs), capable of telling the difference between good and evil? In this essay, I explore both questions—i.e., whether and to what extent artificial entities can have a moral status (“the machine question”) and moral agency (“the AMA question”)—in light of Kazuo Ishiguro’s 2021 novel Klara and the Sun. I do so by juxtaposing two prominent approaches to machine morality that are central to the novel: the (1) (...)