Results for 'Artificial Moral Patiency'

999 found
  1. The artificial view: toward a non-anthropocentric account of moral patiency. Fabio Tollon - 2020 - Ethics and Information Technology 23 (2):147-155.
    In this paper I provide an exposition and critique of the Organic View of Ethical Status, as outlined by Torrance (2008). A key presupposition of this view is that only moral patients can be moral agents. It is claimed that because artificial agents lack sentience, they cannot be proper subjects of moral concern (i.e. moral patients). This account of moral standing in principle excludes machines from participating in our moral universe. I will argue (...)
    6 citations
  2. Artificial Moral Patients: Mentality, Intentionality, and Systematicity. Howard Nye & Tugba Yoldas - 2021 - International Review of Information Ethics 29:1-10.
    In this paper, we defend three claims about what it will take for an AI system to be a basic moral patient to whom we can owe duties of non-maleficence not to harm her and duties of beneficence to benefit her: (1) Moral patients are mental patients; (2) Mental patients are true intentional systems; and (3) True intentional systems are systematically flexible. We suggest that we should be particularly alert to the possibility of such systematically flexible true intentional (...)
    1 citation
  3. The rise of the robots and the crisis of moral patiency. John Danaher - 2019 - AI and Society 34 (1):129-136.
    This paper adds another argument to the rising tide of panic about robots and AI. The argument is intended to have broad civilization-level significance, but to involve less fanciful speculation about the likely future intelligence of machines than is common among many AI-doomsayers. The argument claims that the rise of the robots will create a crisis of moral patiency. That is to say, it will reduce the ability and willingness of humans to act in the world as responsible (...)
    30 citations
  4. Moral Encounters of the Artificial Kind: Towards a non-anthropocentric account of machine moral agency. Fabio Tollon - 2019 - Dissertation, Stellenbosch University
    The aim of this thesis is to advance a philosophically justifiable account of Artificial Moral Agency (AMA). Concerns about the moral status of Artificial Intelligence (AI) traditionally turn on questions of whether these systems are deserving of moral concern (i.e. if they are moral patients) or whether they can be sources of moral action (i.e. if they are moral agents). On the Organic View of Ethical Status, being a moral patient is (...)
    1 citation
  5. The Democratic Inclusion of Artificial Intelligence? Exploring the Patiency, Agency and Relational Conditions for Demos Membership. Ludvig Beckman & Jonas Hultin Rosenberg - 2022 - Philosophy and Technology 35 (2):1-24.
    Should artificial intelligences ever be included as co-authors of democratic decisions? According to the conventional view in democratic theory, the answer depends on the relationship between the political unit and the entity that is either affected or subjected to its decisions. The relational conditions for inclusion as stipulated by the all-affected and all-subjected principles determine the spatial extension of democratic inclusion. Thus, AI qualifies for democratic inclusion if and only if AI is either affected or subjected to decisions by (...)
  6. Artificial Consciousness and Artificial Ethics: Between Realism and Social Relationism. Steve Torrance - 2014 - Philosophy and Technology 27 (1):9-29.
    I compare a ‘realist’ with a ‘social–relational’ perspective on our judgments of the moral status of artificial agents (AAs). I develop a realist position according to which the moral status of a being—particularly in relation to moral patiency attribution—is closely bound up with that being’s ability to experience states of conscious satisfaction or suffering (CSS). For a realist, both moral status and experiential capacity are objective properties of agents. A social relationist denies the existence (...)
    15 citations
  7. Machines and the Moral Community. Erica L. Neely - 2013 - Philosophy and Technology 27 (1):97-111.
    A key distinction in ethics is between members and nonmembers of the moral community. Over time, our notion of this community has expanded as we have moved from a rationality criterion to a sentience criterion for membership. I argue that a sentience criterion is insufficient to accommodate all members of the moral community; the true underlying criterion can be understood in terms of whether a being has interests. This may be extended to conscious, self-aware machines, as well as (...)
    31 citations
  8. Will intelligent machines become moral patients? Parisa Moosavi - forthcoming - Philosophy and Phenomenological Research.
    This paper addresses a question about the moral status of Artificial Intelligence (AI): will AIs ever become moral patients? I argue that, while it is in principle possible for an intelligent machine to be a moral patient, there is no good reason to believe this will in fact happen. I start from the plausible assumption that traditional artifacts do not meet a minimal necessary condition of moral patiency: having a good of one's own. I (...)
    2 citations
  9. Interpreting ordinary uses of psychological and moral terms in the AI domain. Hyungrae Noh - 2023 - Synthese 201 (6):1-33.
    Intuitively, proper referential extensions of psychological and moral terms exclude artifacts. Yet ordinary speakers commonly treat AI robots as moral patients and use psychological terms to explain their behavior. This paper examines whether this referential shift from the human domain to the AI domain entails semantic changes: do ordinary speakers literally consider AI robots to be psychological or moral beings? Three non-literalist accounts for semantic changes concerning psychological and moral terms used in the AI domain will (...)
    1 citation
  10. How Could We Know When a Robot was a Moral Patient? Henry Shevlin - 2021 - Cambridge Quarterly of Healthcare Ethics 30 (3):459-471.
    There is growing interest in machine ethics in the question of whether and under what circumstances an artificial intelligence would deserve moral consideration. This paper explores a particular type of moral status that the author terms psychological moral patiency, focusing on the epistemological question of what sort of evidence might lead us to reasonably conclude that a given artificial system qualified as having this status. The paper surveys five possible criteria that might be applied: (...)
     
    8 citations
  11. The Moral Status of AI Entities. Joan Llorca Albareda, Paloma García & Francisco Lara - 2023 - In Francisco Lara & Jan Deckers (eds.), Ethics of Artificial Intelligence. Springer Nature Switzerland. pp. 59-83.
    The emergence of AI is posing serious challenges to standard conceptions of moral status. New non-biological entities are able to act and make decisions rationally. The question arises, in this regard, as to whether AI systems possess or can possess the necessary properties to be morally considerable. In this chapter, we have undertaken a systematic analysis of the various debates that are taking place about the moral status of AI. First, we have discussed the possibility that AI systems, (...)
    1 citation
  12. Introduction to the Special Issue on Machine Morality: The Machine as Moral Agent and Patient. David J. Gunkel & Joanna Bryson - 2014 - Philosophy and Technology 27 (1):5-8.
    One of the enduring concerns of moral philosophy is deciding who or what is deserving of ethical consideration. This special issue of Philosophy and Technology investigates whether and to what extent machines, of various designs and configurations, can or should be considered moral subjects, defined here as either a moral agent, a moral patient, or both. The articles that comprise the issue were competitively selected from papers initially prepared for and presented at a symposium on this (...)
    15 citations
  13. Eight Kinds of Critters: A Moral Taxonomy for the Twenty-Second Century. Michael Bess - 2018 - Journal of Medicine and Philosophy 43 (5):585-612.
    Over the coming century, the accelerating advance of bioenhancement technologies, robotics, and artificial intelligence (AI) may significantly broaden the qualitative range of sentient and intelligent beings. This article proposes a taxonomy of such beings, ranging from modified animals to bioenhanced humans to advanced forms of robots and AI. It divides these diverse beings into three moral and legal categories—animals, persons, and presumed persons—describing the moral attributes and legal rights of each category. In so doing, the article sets (...)
    2 citations
  14. Liability for Robots: Sidestepping the Gaps. Bartek Chomanski - 2021 - Philosophy and Technology 34 (4):1013-1032.
    In this paper, I outline a proposal for assigning liability for autonomous machines modeled on the doctrine of respondeat superior. I argue that the machines’ users’ or designers’ liability should be determined by the manner in which the machines are created, which, in turn, should be responsive to considerations of the machines’ welfare interests. This approach has the twin virtues of promoting socially beneficial design of machines, and of taking their potential moral patiency seriously. I then argue for (...)
    4 citations
  15. Critically engaging the ethics of AI for a global audience. Samuel T. Segun - 2021 - Ethics and Information Technology 23 (2):99-105.
    This article introduces readers to the special issue on Selected Issues in the Ethics of Artificial Intelligence. In this paper, I make a case for a wider outlook on the ethics of AI. So far, much of the engagement with the subject has come from Euro-American scholars with obvious influences from Western epistemic traditions. I demonstrate that socio-cultural features influence our conceptions of ethics and in this case the ethics of AI. The goal of this special issue is to (...)
    5 citations
  16. Artificial Morality: Virtuous Robots for Virtual Games. Peter Danielson - 1992 - London: Routledge.
    This book explores the role of artificial intelligence in the development of a claim that morality is person-made and rational. Professor Danielson builds moral robots that do better than amoral competitors in a tournament of games like the Prisoner's Dilemma and Chicken. The book thus engages in current controversies over the adequacy of the received theory of rational choice. It sides with Gauthier and McClennan, who extend the devices of rational choice to include moral constraint. Artificial (...)
    31 citations
  17. Artificial Moral Responsibility: How We Can and Cannot Hold Machines Responsible. Daniel W. Tigard - 2021 - Cambridge Quarterly of Healthcare Ethics 30 (3):435-447.
    Our ability to locate moral responsibility is often thought to be a necessary condition for conducting morally permissible medical practice, engaging in a just war, and other high-stakes endeavors. Yet, with increasing reliance upon artificially intelligent systems, we may be facing a widening responsibility gap, which, some argue, cannot be bridged by traditional concepts of responsibility. How then, if at all, can we make use of crucial emerging technologies? According to Colin Allen and Wendell Wallach, the advent of so-called ‘artificial moral agents’ (AMAs) is inevitable. Still, this notion may seem to push back the problem, leaving those who have an interest in developing autonomous technology with a dilemma. We may need to scale back our efforts at deploying AMAs (or at least maintain human oversight); otherwise, we must rapidly and drastically update our moral and legal norms in a way that ensures responsibility for potentially avoidable harms. This paper invokes contemporary accounts of responsibility in order to show how artificially intelligent systems might be held responsible. Although many theorists are concerned enough to develop artificial conceptions of agency or to exploit our present inability to regulate valuable innovations, the proposal here highlights the importance of, and outlines a plausible foundation for, a workable notion of artificial moral responsibility.
    13 citations
  18. Making moral machines: why we need artificial moral agents. Paul Formosa & Malcolm Ryan - forthcoming - AI and Society.
    As robots and Artificial Intelligences become more enmeshed in rich social contexts, it seems inevitable that we will have to make them into moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should seek to create such Artificial Moral Agents (AMAs). Recently, several papers have argued that we have strong reasons not to develop AMAs. In response, (...)
    11 citations
  19. Do Androids Dread an Electric Sting? Izak Tait & Neşet Tan - 2023 - Qeios 1:1-18.
    Conscious sentient AI seems to be all but a certainty in our future, whether in fifty years’ time or only five years. When that time comes, we will be faced with entities with the potential to experience more pain and suffering than any other living entity on Earth. In this paper, we look at this potential for suffering and the reasons why we would need to create a framework for protecting artificial entities. We look to current animal welfare laws (...)
  20. The Artificial Moral Advisor. The “Ideal Observer” Meets Artificial Intelligence. Alberto Giubilini & Julian Savulescu - 2018 - Philosophy and Technology 31 (2):169-188.
    We describe a form of moral artificial intelligence that could be used to improve human moral decision-making. We call it the “artificial moral advisor”. The AMA would implement a quasi-relativistic version of the “ideal observer” famously described by Roderick Firth. We describe similarities and differences between the AMA and Firth’s ideal observer. Like Firth’s ideal observer, the AMA is disinterested, dispassionate, and consistent in its judgments. Unlike Firth’s observer, the AMA is non-absolutist, because it would (...)
    36 citations
  21. Artificial moral agents are infeasible with foreseeable technologies. Patrick Chisan Hew - 2014 - Ethics and Information Technology 16 (3):197-206.
    For an artificial agent to be morally praiseworthy, its rules for behaviour and the mechanisms for supplying those rules must not be supplied entirely by external humans. Such systems are a substantial departure from current technologies and theory, and are a low prospect. With foreseeable technologies, an artificial agent will carry zero responsibility for its behavior and humans will retain full responsibility.
    15 citations
  22. Virtuous vs. utilitarian artificial moral agents. William A. Bauer - 2020 - AI and Society (1):263-271.
    Given that artificial moral agents—such as autonomous vehicles, lethal autonomous weapons, and automated financial trading systems—are now part of the socio-ethical equation, we should morally evaluate their behavior. How should artificial moral agents make decisions? Is one moral theory better suited than others for machine ethics? After briefly overviewing the dominant ethical approaches for building morality into machines, this paper discusses a recent proposal, put forward by Don Howard and Ioan Muntean (2016, 2017), for an (...)
    14 citations
  23. The ethics of war. Patience Coster - 2013 - New York: Rosen Central.
    What is war? -- The ethical arguments -- The history of war ethics -- Can war be justified? -- Lawful authority -- Humanitarian intervention -- With good intention? -- A last resort? -- A good chance of success -- Waging war -- Pre-emptive strikes -- Proportionality -- Weapons -- War and religion -- Holy wars -- Pacifism -- Non-violence -- Aftermath -- War crimes.
  24. Artificial moral experts: asking for ethical advice to artificial intelligent assistants. Blanca Rodríguez-López & Jon Rueda - 2023 - AI and Ethics.
    In most domains of human life, we are willing to accept that there are experts with greater knowledge and competencies that distinguish them from non-experts or laypeople. Despite this fact, the very recognition of expertise curiously becomes more controversial in the case of “moral experts”. Do moral experts exist? And, if they indeed do, are there ethical reasons for us to follow their advice? Likewise, can emerging technological developments broaden our very concept of moral expertise? In this (...)
  25. Artificial Moral Agents Within an Ethos of AI4SG. Bongani Andy Mabaso - 2020 - Philosophy and Technology 34 (1):7-21.
    As artificial intelligence (AI) continues to proliferate into every area of modern life, there is no doubt that society has to think deeply about the potential impact, whether negative or positive, that it will have. Whilst scholars recognise that AI can usher in a new era of personal, social and economic prosperity, they also warn of the potential for it to be misused towards the detriment of society. Deliberate strategies are therefore required to ensure that AI can be safely (...)
    6 citations
  26. Artificial Moral Cognition: Moral Functionalism and Autonomous Moral Agency. Muntean Ioan & Don Howard - 2017 - In Thomas M. Powers (ed.), Philosophy and Computing: Essays in epistemology, philosophy of mind, logic, and ethics. Cham: Springer.
    This paper proposes a model of the Artificial Autonomous Moral Agent (AAMA), discusses a standard of moral cognition for AAMA, and compares it with other models of artificial normative agency. It is argued here that artificial morality is possible within the framework of a “moral dispositional functionalism.” This AAMA is able to “read” the behavior of human actors, available as collected data, and to categorize their moral behavior based on moral patterns herein. (...)
     
    1 citation
  27. Norms and Causation in Artificial Morality. Laura Fearnley - forthcoming - Joint Proceedings of ACM IUI:1-4.
    There has been an increasing interest in how to build Artificial Moral Agents (AMAs) that make moral decisions on the basis of causation rather than mere correlation. One promising avenue for achieving this is to use a causal modelling approach. This paper explores an open and important problem with such an approach; namely, the problem of what makes a causal model an appropriate model. I explore why we need to establish criteria for what makes a model appropriate, (...)
  28. Artificial Moral Agents: Moral Mentors or Sensible Tools? Fabio Fossa - 2018 - Ethics and Information Technology (2):1-12.
    The aim of this paper is to offer an analysis of the notion of artificial moral agent (AMA) and of its impact on human beings’ self-understanding as moral agents. Firstly, I introduce the topic by presenting what I call the Continuity Approach. Its main claim holds that AMAs and human moral agents exhibit no significant qualitative difference and, therefore, should be considered homogeneous entities. Secondly, I focus on the consequences this approach leads to. In order to (...)
    11 citations
  29. Artificial moral agents: an intercultural perspective. Michael Nagenborg - 2007 - International Review of Information Ethics 7 (9):129-133.
    In this paper I will argue that artificial moral agents are a fitting subject of intercultural information ethics because of the impact they may have on the relationship between information rich and information poor countries. I will give a limiting definition of AMAs first, and discuss two different types of AMAs with different implications from an intercultural perspective. While AMAs following preset rules might raise concerns about digital imperialism, AMAs being able to adjust to their user's behavior will (...)
    5 citations
  30. A Normative Approach to Artificial Moral Agency. Dorna Behdadi & Christian Munthe - 2020 - Minds and Machines 30 (2):195-218.
    This paper proposes a methodological redirection of the philosophical debate on artificial moral agency in view of increasingly pressing practical needs due to technological development. This “normative approach” suggests abandoning theoretical discussions about what conditions may hold for moral agency and to what extent these may be met by artificial entities such as AI systems and robots. Instead, the debate should focus on how and to what extent such entities should be included in human practices normally (...)
    19 citations
  31. Artificial morality: Top-down, bottom-up, and hybrid approaches. [REVIEW] Colin Allen, Iva Smit & Wendell Wallach - 2005 - Ethics and Information Technology 7 (3):149-155.
    A principal goal of the discipline of artificial morality is to design artificial agents to act as if they are moral agents. Intermediate goals of artificial morality are directed at building into AI systems sensitivity to the values, ethics, and legality of activities. The development of an effective foundation for the field of artificial morality involves exploring the technological and philosophical issues involved in making computers into explicit moral reasoners. The goal of this paper (...)
    64 citations
  32. Artificial Moral Agents: A Survey of the Current Status. [REVIEW] José-Antonio Cervantes, Sonia López, Luis-Felipe Rodríguez, Salvador Cervantes, Francisco Cervantes & Félix Ramos - 2020 - Science and Engineering Ethics 26 (2):501-532.
    One of the objectives in the field of artificial intelligence for some decades has been the development of artificial agents capable of coexisting in harmony with people and other systems. The computing research community has made efforts to design artificial agents capable of doing tasks the way people do, tasks requiring cognitive mechanisms such as planning, decision-making, and learning. The application domains of such software agents are evident nowadays. Humans are experiencing the inclusion of artificial agents (...)
    27 citations
  33. Artificial morality and artificial law. Lothar Philipps - 1993 - Artificial Intelligence and Law 2 (1):51-63.
    The article investigates the interplay of moral rules in computer simulation. The investigation is based on two situations which are well-known to game theory: the prisoner's dilemma and the game of Chicken. The prisoner's dilemma can be taken to represent contractual situations, the game of Chicken represents a competitive situation on the one hand and the provision for a common good on the other. Unlike the rules usually used in game theory, each player knows the other's strategy. In that (...)
    1 citation
  34. Artificial moral and legal personhood. John-Stewart Gordon - forthcoming - AI and Society:1-15.
    This paper considers the hotly debated issue of whether one should grant moral and legal personhood to intelligent robots once they have achieved a certain standard of sophistication based on such criteria as rationality, autonomy, and social relations. The starting point for the analysis is the European Parliament’s resolution on Civil Law Rules on Robotics and its recommendation that robots be granted legal status and electronic personhood. The resolution is discussed against the background of the so-called Robotics Open Letter, (...)
    17 citations
  35. Artificial virtue: the machine question and perceptions of moral character in artificial moral agents. Patrick Gamez, Daniel B. Shank, Carson Arnold & Mallory North - 2020 - AI and Society 35 (4):795-809.
    Virtue ethics seems to be a promising moral theory for understanding and interpreting the development and behavior of artificial moral agents. Virtuous artificial agents would blur traditional distinctions between different sorts of moral machines and could make a claim to membership in the moral community. Accordingly, we investigate the “machine question” by studying whether virtue or vice can be attributed to artificial intelligence; that is, are people willing to judge machines as possessing moral character? An experiment describes situations where either human or AI agents engage in virtuous or vicious behavior and experiment participants then judge their level of virtue or vice. The scenarios represent different virtue ethics domains of truth, justice, fear, wealth, and honor. Quantitative and qualitative analyses show that moral attributions are weakened for AIs compared to humans, and the reasoning and explanations for the attributions are varied and more complex. On “relational” views of membership in the moral community, virtuous machines would indeed be included, even if those attributions are weakened. Hence, while our moral relationships with artificial agents may be of the same types, they may yet remain substantively different than our relationships to human beings.
    18 citations
  36. Un-making artificial moral agents. Deborah G. Johnson & Keith W. Miller - 2008 - Ethics and Information Technology 10 (2-3):123-133.
    Floridi and Sanders' seminal work, “On the morality of artificial agents”, has catalyzed attention around the moral status of computer systems that perform tasks for humans, effectively acting as “artificial agents.” Floridi and Sanders argue that the class of entities considered moral agents can be expanded to include computers if we adopt the appropriate level of abstraction. In this paper we argue that the move to distinguish levels of abstraction is far from decisive on this issue. (...)
    37 citations
  37. ETHICA EX MACHINA. Exploring artificial moral agency or the possibility of computable ethics. Rodrigo Sanz - 2020 - Zeitschrift Für Ethik Und Moralphilosophie 3 (2):223-239.
    Since the automation revolution of our technological era, diverse machines or robots have gradually begun to reconfigure our lives. With this expansion, it seems that those machines are now faced with a new challenge: more autonomous decision-making involving life or death consequences. This paper explores the philosophical possibility of artificial moral agency through the following question: could a machine obtain the cognitive capacities needed to be a moral agent? In this regard, I propose to expose, under a (...)
  38. Philosophical Signposts for Artificial Moral Agent Frameworks. Robert James M. Boyles - 2017 - Suri 6 (2):92–109.
    This article focuses on a particular issue under machine ethics—that is, the nature of Artificial Moral Agents. Machine ethics is a branch of artificial intelligence that looks into the moral status of artificial agents. Artificial moral agents, on the other hand, are artificial autonomous agents that possess moral value, as well as certain rights and responsibilities. This paper demonstrates that attempts to fully develop a theory that could possibly account for the (...)
    1 citation
  39. Critiquing the Reasons for Making Artificial Moral Agents. Aimee van Wynsberghe & Scott Robbins - 2018 - Science and Engineering Ethics:1-17.
    Many industry leaders and academics from the field of machine ethics would have us believe that the inevitability of robots coming to have a larger role in our lives demands that robots be endowed with moral reasoning capabilities. Robots endowed in this way may be referred to as artificial moral agents. Reasons often given for developing AMAs are: the prevention of harm, the necessity for public trust, the prevention of immoral use, such machines are better moral (...)
    45 citations
  40. Artificial Morality: Virtuous Robots for Virtual Games. Peter Danielson - 1992 - Routledge.
    This book explores the role of artificial intelligence in the development of a claim that morality is person-made and rational. Professor Danielson builds moral robots that do better than amoral competitors in a tournament of games like the Prisoner's Dilemma and Chicken. The book thus engages in current controversies over the adequacy of the received theory of rational choice. It sides with Gauthier and McClennan, who extend the devices of rational choice to include moral constraint. Artificial Morality (...)
    23 citations
  41. Varieties of Artificial Moral Agency and the New Control Problem. Marcus Arvan - 2022 - Humana.Mente - Journal of Philosophical Studies 15 (42):225-256.
    This paper presents a new trilemma with respect to resolving the control and alignment problems in machine ethics. Section 1 outlines three possible types of artificial moral agents (AMAs): (1) 'Inhuman AMAs' programmed to learn or execute moral rules or principles without understanding them in anything like the way that we do; (2) 'Better-Human AMAs' programmed to learn, execute, and understand moral rules or principles somewhat like we do, but correcting for various sources of human moral error; and (3) 'Human-Like AMAs' programmed to understand and apply moral values in broadly the same way that we do, with a human-like moral psychology. Sections 2–4 then argue that each type of AMA generates unique control and alignment problems that have not been fully appreciated. Section 2 argues that Inhuman AMAs are likely to behave in inhumane ways that pose serious existential risks. Section 3 then contends that Better-Human AMAs run a serious risk of magnifying some sources of human moral error by reducing or eliminating others. Section 4 then argues that Human-Like AMAs would not only likely reproduce human moral failures, but also plausibly be highly intelligent, conscious beings with interests and wills of their own who should therefore be entitled to similar moral rights and freedoms as us. This generates what I call the New Control Problem: ensuring that humans and Human-Like AMAs exert a morally appropriate amount of control over each other. Finally, Section 5 argues that resolving the New Control Problem would, at a minimum, plausibly require ensuring what Hume and Rawls term ‘circumstances of justice’ between humans and Human-Like AMAs. But, I argue, there are grounds for thinking this will be profoundly difficult to achieve. I thus conclude on a skeptical note. Different approaches to developing ‘safe, ethical AI’ generate subtly different control and alignment problems that we do not currently know how to adequately resolve, and which may or may not be ultimately surmountable.
  42. Can’t Bottom-up Artificial Moral Agents Make Moral Judgements? Robert James M. Boyles - 2024 - Filosofija. Sociologija 35 (1).
    This article examines if bottom-up artificial moral agents are capable of making genuine moral judgements, specifically in light of David Hume’s is-ought problem. The latter underscores the notion that evaluative assertions could never be derived from purely factual propositions. Bottom-up technologies, on the other hand, are those designed via evolutionary, developmental, or learning techniques. In this paper, the nature of these systems is looked into with the aim of preliminarily assessing if there are good reasons to suspect (...)
  43. Evolutionary and religious perspectives on morality. Artificial Intelligence - forthcoming - Zygon.
  44. Karol Wojtyla on Artificial Moral Agency and Moral Accountability. Richard A. Spinello - 2011 - The National Catholic Bioethics Quarterly 11 (3):469-491.
    As the notion of artificial moral agency gains popularity among ethicists, it threatens the unique status of the human person as a responsible moral agent. The philosophy of ontocentrism, popularized by Luciano Floridi, argues that biocentrism is too restrictive and must yield to a new philosophical vision that endows all beings with some intrinsic value. Floridi’s macroethics also regards more sophisticated digital entities such as robots as accountable moral agents. To refute these principles, this paper turns (...)
  45. On the Ethical Limitation of AMA (Artificial Moral Agent). 이향연 - 2021 - Journal of the Daedong Philosophical Association 95:103-118.
    This study examines whether the ethical approaches to AI that have recently drawn attention are sound. Such an examination requires addressing both the fundamental question of whether AI can be ethical at all and, at the same time, the methodological question of how an AMA (artificial moral agent) could be technically implemented. This in turn includes the question of whether an AMA can be recognized as an autonomous entity, as well as the ethical approaches to AMAs that continue to be discussed. I review the various discussions related to these issues and analyze the characteristics and limitations of each. All of these examinations point to the fundamental limits of the AMA (...)
  46. Blame It on the AI? On the Moral Responsibility of Artificial Moral Advisors. Mihaela Constantinescu, Constantin Vică, Radu Uszkai & Cristina Voinea - 2022 - Philosophy and Technology 35 (2):1-26.
    Deep learning AI systems have proven a wide capacity to take over human-related activities such as car driving, medical diagnosing, or elderly care, often displaying behaviour with unpredictable consequences, including negative ones. This has raised the question whether highly autonomous AI may qualify as morally responsible agents. In this article, we develop a set of four conditions that an entity needs to meet in order to be ascribed moral responsibility, by drawing on Aristotelian ethics and contemporary philosophical research. We (...)
    5 citations
  47. Prolegomena to any future artificial moral agent. Colin Allen & Gary Varner - 2000 - Journal of Experimental and Theoretical Artificial Intelligence 12 (3):251-261.
    As artificial intelligence moves ever closer to the goal of producing fully autonomous agents, the question of how to design and implement an artificial moral agent (AMA) becomes increasingly pressing. Robots possessing autonomous capacities to do things that are useful to humans will also have the capacity to do things that are harmful to humans and other sentient beings. Theoretical challenges to developing artificial moral agents result both from controversies among ethicists about moral theory (...)
    77 citations
  48. Artificial moral agents: creative, autonomous, social. An approach based on evolutionary computation. Ioan Muntean & Don Howard - 2014 - In Johanna Seibt, Raul Hakli & Marco Norskov (eds.), Sociable Robots and the Future of Social Relations: Proceedings of Robo-Philosophy. IOS Press.
  49. Artificial systems with moral capacities? A research design and its implementation in a geriatric care system. Catrin Misselhorn - 2020 - Artificial Intelligence 278 (C):103179.
    The development of increasingly intelligent and autonomous technologies will eventually lead to these systems having to face morally problematic situations. This gave rise to the development of artificial morality, an emerging field in artificial intelligence which explores whether and how artificial systems can be furnished with moral capacities. This will have a deep impact on our lives. Yet, the methodological foundations of artificial morality are still sketchy and often far off from possible applications. One important (...)
    6 citations
  50. A Theological Account of Artificial Moral Agency. Ximian Xu - 2023 - Studies in Christian Ethics 36 (3):642-659.
    This article seeks to explore the idea of artificial moral agency from a theological perspective. By drawing on the Reformed theology of archetype-ectype, it will demonstrate that computational artefacts are the ectype of human moral agents and, consequently, have a partial moral agency. In this light, human moral agents mediate and extend their moral values through computational artefacts, which are ontologically connected with humans and only related to limited particular moral issues. This (...) leitmotif opens up a way to deploy carebots into Christian pastoral care while maintaining the human agent's uniqueness and responsibility in pastoral caregiving practices.
1–50 of 999