Results for 'artificial moral agent'

986 found
  1. Can’t Bottom-up Artificial Moral Agents Make Moral Judgements? Robert James M. Boyles - 2024 - Filosofija. Sociologija 35 (1).
    This article examines if bottom-up artificial moral agents are capable of making genuine moral judgements, specifically in light of David Hume’s is-ought problem. The latter underscores the notion that evaluative assertions could never be derived from purely factual propositions. Bottom-up technologies, on the other hand, are those designed via evolutionary, developmental, or learning techniques. In this paper, the nature of these systems is looked into with the aim of preliminarily assessing if there are good reasons to suspect (...)
  2. Artificial moral agents are infeasible with foreseeable technologies. Patrick Chisan Hew - 2014 - Ethics and Information Technology 16 (3):197-206.
    For an artificial agent to be morally praiseworthy, its rules for behaviour and the mechanisms for supplying those rules must not be supplied entirely by external humans. Such systems are a substantial departure from current technologies and theory, and are a low prospect. With foreseeable technologies, an artificial agent will carry zero responsibility for its behavior and humans will retain full responsibility.
    15 citations
  3. Artificial Moral Agents: Moral Mentors or Sensible Tools? Fabio Fossa - 2018 - Ethics and Information Technology (2):1-12.
    The aim of this paper is to offer an analysis of the notion of artificial moral agent (AMA) and of its impact on human beings’ self-understanding as moral agents. Firstly, I introduce the topic by presenting what I call the Continuity Approach. Its main claim holds that AMAs and human moral agents exhibit no significant qualitative difference and, therefore, should be considered homogeneous entities. Secondly, I focus on the consequences this approach leads to. In order (...)
    9 citations
  4. Artificial Moral Agents Within an Ethos of AI4SG. Bongani Andy Mabaso - 2020 - Philosophy and Technology 34 (1):7-21.
    As artificial intelligence (AI) continues to proliferate into every area of modern life, there is no doubt that society has to think deeply about the potential impact, whether negative or positive, that it will have. Whilst scholars recognise that AI can usher in a new era of personal, social and economic prosperity, they also warn of the potential for it to be misused towards the detriment of society. Deliberate strategies are therefore required to ensure that AI can be safely (...)
    5 citations
  5. Virtuous vs. utilitarian artificial moral agents. William A. Bauer - 2020 - AI and Society (1):263-271.
    Given that artificial moral agents—such as autonomous vehicles, lethal autonomous weapons, and automated financial trading systems—are now part of the socio-ethical equation, we should morally evaluate their behavior. How should artificial moral agents make decisions? Is one moral theory better suited than others for machine ethics? After briefly overviewing the dominant ethical approaches for building morality into machines, this paper discusses a recent proposal, put forward by Don Howard and Ioan Muntean (2016, 2017), for an (...)
    13 citations
  6. Artificial Moral Agents: A Survey of the Current Status. [REVIEW] José-Antonio Cervantes, Sonia López, Luis-Felipe Rodríguez, Salvador Cervantes, Francisco Cervantes & Félix Ramos - 2020 - Science and Engineering Ethics 26 (2):501-532.
    One of the objectives in the field of artificial intelligence for some decades has been the development of artificial agents capable of coexisting in harmony with people and other systems. The computing research community has made efforts to design artificial agents capable of doing tasks the way people do, tasks requiring cognitive mechanisms such as planning, decision-making, and learning. The application domains of such software agents are evident nowadays. Humans are experiencing the inclusion of artificial agents (...)
    25 citations
  7. Artificial moral agents: an intercultural perspective. Michael Nagenborg - 2007 - International Review of Information Ethics 7 (9):129-133.
    In this paper I will argue that artificial moral agents are a fitting subject of intercultural information ethics because of the impact they may have on the relationship between information rich and information poor countries. I will give a limiting definition of AMAs first, and discuss two different types of AMAs with different implications from an intercultural perspective. While AMAs following preset rules might raise concerns about digital imperialism, AMAs being able to adjust to their user’s behavior will (...)
    5 citations
  8. Un-making artificial moral agents. Deborah G. Johnson & Keith W. Miller - 2008 - Ethics and Information Technology 10 (2-3):123-133.
    Floridi and Sanders’ seminal work, “On the morality of artificial agents”, has catalyzed attention around the moral status of computer systems that perform tasks for humans, effectively acting as “artificial agents.” Floridi and Sanders argue that the class of entities considered moral agents can be expanded to include computers if we adopt the appropriate level of abstraction. In this paper we argue that the move to distinguish levels of abstraction is far from decisive on this issue. (...)
    36 citations
  9. Prolegomena to any future artificial moral agent. Colin Allen & Gary Varner - 2000 - Journal of Experimental and Theoretical Artificial Intelligence 12 (3):251-261.
    As artificial intelligence moves ever closer to the goal of producing fully autonomous agents, the question of how to design and implement an artificial moral agent (AMA) becomes increasingly pressing. Robots possessing autonomous capacities to do things that are useful to humans will also have the capacity to do things that are harmful to humans and other sentient beings. Theoretical challenges to developing artificial moral agents result both from controversies among ethicists about moral (...)
    75 citations
  10. Moral sensitivity and the limits of artificial moral agents. Joris Graff - 2024 - Ethics and Information Technology 26 (1):1-12.
    Machine ethics is the field that strives to develop ‘artificial moral agents’ (AMAs), artificial systems that can autonomously make moral decisions. Some authors have questioned the feasibility of machine ethics, by questioning whether artificial systems can possess moral competence, or the capacity to reach morally right decisions in various situations. This paper explores this question by drawing on the work of several moral philosophers (McDowell, Wiggins, Hampshire, and Nussbaum) who have characterised moral (...)
  11. An Experimental Discussion on the Possibility of Artificial Moral Agent in Neo-Confucianism - From the Point of View of Yulgok -. 최복희 - 2023 - Journal of the New Korean Philosophical Association 113:317-337.
    Although still at an experimental stage, this paper examines whether Neo-Confucian concepts of the mind can be formally described and constructed in a form that computers can handle. Building on discussions of the workings of the moral mind, I consider whether a simulation for implementing a Neo-Confucian moral agent is possible. In Neo-Confucianism, the mind can be conceptualized differently depending on whether one attends to the innate moral capacity that constitutes its essence or to the morality of the mind as it actually operates. Taking Yulgok’s view, which is closer to the latter, to be relatively empiricist, I use it as my example, on the grounds that his attempt to explain the mind as plainly as a physical object or the body should make it easier to bring into the debate on artificial moral agents (AMA, Artificial Moral Agent). Yulgok explained the operation of the mind as the ‘automatic pattern’ (機自爾) of gi (氣) and proposed methods of self-cultivation that make this pattern operate according to moral principles. I therefore first analyze how the operation of gi, the so-called ‘automatic pattern’, unfolds in the mind, and then examine whether the mind’s moral operation, that is, the prioritized manifestation of moral principles, can be formalized. Furthermore, judging that unconscious morality, one of the contested issues in the artificial moral agent debate, cannot be accepted at the level of self-cultivation (sugi, 修己) but is possible at the level of governing others (chiin, 治人), I examine Yulgok’s theory of political reform in order to discuss a chiin system as an impersonal model. I hope this preliminary essay will spark a full-fledged debate on the possibility of implementing a Neo-Confucian artificial moral agent.
  12. Philosophical Signposts for Artificial Moral Agent Frameworks. Robert James M. Boyles - 2017 - Suri 6 (2):92–109.
    This article focuses on a particular issue under machine ethics—that is, the nature of Artificial Moral Agents. Machine ethics is a branch of artificial intelligence that looks into the moral status of artificial agents. Artificial moral agents, on the other hand, are artificial autonomous agents that possess moral value, as well as certain rights and responsibilities. This paper demonstrates that attempts to fully develop a theory that could possibly account for the (...)
  13. Artificial moral agents: creative, autonomous, social. An approach based on evolutionary computation. Ioan Muntean & Don Howard - 2014 - In Johanna Seibt, Raul Hakli & Marco Nørskov (eds.), Frontiers in Artificial Intelligence and Applications.
  14. Critiquing the Reasons for Making Artificial Moral Agents. Aimee van Wynsberghe & Scott Robbins - 2018 - Science and Engineering Ethics:1-17.
    Many industry leaders and academics from the field of machine ethics would have us believe that the inevitability of robots coming to have a larger role in our lives demands that robots be endowed with moral reasoning capabilities. Robots endowed in this way may be referred to as artificial moral agents. Reasons often given for developing AMAs are: the prevention of harm, the necessity for public trust, the prevention of immoral use, such machines are better moral (...)
    44 citations
  15. Making moral machines: why we need artificial moral agents. Paul Formosa & Malcolm Ryan - forthcoming - AI and Society.
    As robots and Artificial Intelligences become more enmeshed in rich social contexts, it seems inevitable that we will have to make them into moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should seek to create such Artificial Moral Agents (AMAs). Recently, several papers have argued that we have strong reasons not to develop AMAs. In response, (...)
    10 citations
  16. On the Ethical Limitation of AMA (Artificial Moral Agent). 이향연 - 2021 - Journal of the Daedong Philosophical Association 95:103-118.
    This study examines whether the ethical approaches applicable to AI that have recently attracted attention are valid. Such an examination requires addressing both the fundamental question of whether AI can be ethical at all and a methodological review of how an AMA (artificial moral agent) could technically be implemented. This in turn includes the question of whether an AMA can be recognized as an autonomous entity, as well as the ethical approaches to AMAs that continue to be debated. I review the various discussions related to these issues and analyze the characteristics and limitations of each. All of these examinations concern the fundamental limitations of AMAs (...)
  17. Critiquing the Reasons for Making Artificial Moral Agents. Aimee van Wynsberghe & Scott Robbins - 2019 - Science and Engineering Ethics 25 (3):719-735.
    Many industry leaders and academics from the field of machine ethics would have us believe that the inevitability of robots coming to have a larger role in our lives demands that robots be endowed with moral reasoning capabilities. Robots endowed in this way may be referred to as artificial moral agents. Reasons often given for developing AMAs are: the prevention of harm, the necessity for public trust, the prevention of immoral use, such machines are better moral (...)
    45 citations
  18. Artificial virtue: the machine question and perceptions of moral character in artificial moral agents. Patrick Gamez, Daniel B. Shank, Carson Arnold & Mallory North - 2020 - AI and Society 35 (4):795-809.
    Virtue ethics seems to be a promising moral theory for understanding and interpreting the development and behavior of artificial moral agents. Virtuous artificial agents would blur traditional distinctions between different sorts of moral machines and could make a claim to membership in the moral community. Accordingly, we investigate the “machine question” by studying whether virtue or vice can be attributed to artificial intelligence; that is, are people willing to judge machines as possessing (...) character? An experiment describes situations where either human or AI agents engage in virtuous or vicious behavior and experiment participants then judge their level of virtue or vice. The scenarios represent different virtue ethics domains of truth, justice, fear, wealth, and honor. Quantitative and qualitative analyses show that moral attributions are weakened for AIs compared to humans, and the reasoning and explanations for the attributions are varied and more complex. On “relational” views of membership in the moral community, virtuous machines would indeed be included, even if the attributions to them are weakened. Hence, while our moral relationships with artificial agents may be of the same types, they may yet remain substantively different than our relationships to human beings.
    18 citations
  19. Artificial moral agents: saviors or destroyers? Review of Wendell Wallach and Colin Allen, Moral Machines: Teaching Robots Right from Wrong, Oxford University Press, 2009, xi + 275 pp, ISBN 978-0-19-537404-9. [REVIEW] Jeff Buechner - 2010 - Ethics and Information Technology 12 (4):363-370.
  20. Moral Agents or Mindless Machines? A Critical Appraisal of Agency in Artificial Systems. Fabio Tollon - 2019 - Hungarian Philosophical Review 4 (63):9-23.
    In this paper I provide an exposition and critique of Johnson and Noorman’s (2014) three conceptualizations of the agential roles artificial systems can play. I argue that two of these conceptions are unproblematic: that of causally efficacious agency and “acting for” or surrogate agency. Their third conception, that of “autonomous agency,” however, is one I have reservations about. The authors point out that there are two ways in which the term “autonomy” can be used: there is, firstly, the engineering (...)
    3 citations
  21. A Philosophical Project of Hybrid Approach to Artificial Moral Agents. 최현철 & 변순용 - 2019 - Journal of Ethics: The Korean Association of Ethics 1 (124):1-16.
  22. A neo-Aristotelian perspective on the need for artificial moral agents (AMAs). Alejo José G. Sison & Dulce M. Redín - 2023 - AI and Society 38 (1):47-65.
    We examine Van Wynsberghe and Robbins’ (JAMA 25:719-735, 2019) critique of the need for Artificial Moral Agents (AMAs) and its rebuttal by Formosa and Ryan (JAMA 10.1007/s00146-020-01089-6, 2020), set against a neo-Aristotelian ethical background. Neither Van Wynsberghe and Robbins’ (JAMA 25:719-735, 2019) essay nor Formosa and Ryan’s (JAMA 10.1007/s00146-020-01089-6, 2020) is explicitly framed within the teachings of a specific ethical school. The former appeals to the lack of “both empirical and intuitive support” (Van Wynsberghe and Robbins 2019, p. (...)
    3 citations
  23. Do androids dream of normative endorsement? On the fallibility of artificial moral agents. Frodo Podschwadek - 2017 - Artificial Intelligence and Law 25 (3):325-339.
    The more autonomous future artificial agents will become, the more important it seems to equip them with a capacity for moral reasoning and to make them autonomous moral agents. Some authors have even claimed that one of the aims of AI development should be to build morally praiseworthy agents. From the perspective of moral philosophy, praiseworthy moral agents, in any meaningful sense of the term, must be fully autonomous moral agents who endorse moral (...)
    4 citations
  24. A Prospective Framework for the Design of Ideal Artificial Moral Agents: Insights from the Science of Heroism in Humans. Travis J. Wiltshire - 2015 - Minds and Machines 25 (1):57-71.
    The growing field of machine morality has become increasingly concerned with how to develop artificial moral agents. However, there is little consensus on what constitutes an ideal moral agent, let alone an artificial one. Leveraging a recent account of heroism in humans, the aim of this paper is to provide a prospective framework for conceptualizing, and in turn designing, ideal artificial moral agents, namely those that would be considered heroic robots. First, an overview (...)
    7 citations
  25. An explanation space to align user studies with the technical development of Explainable AI. Garrick Cabour, Andrés Morales-Forero, Élise Ledoux & Samuel Bassetto - 2023 - AI and Society 38 (2):869-887.
    Providing meaningful and actionable explanations for end-users is a situated problem requiring the intersection of multiple disciplines to address social, operational, and technical challenges. However, the explainable artificial intelligence community has not commonly adopted or created tangible design tools that allow interdisciplinary work to develop reliable AI-powered solutions. This paper proposes a formative architecture that defines the explanation space from a user-inspired perspective. The architecture comprises five intertwined components to outline explanation requirements for a task: (1) the end-users’ mental (...)
  26. Artificial Moral Responsibility: How We Can and Cannot Hold Machines Responsible. Daniel W. Tigard - 2021 - Cambridge Quarterly of Healthcare Ethics 30 (3):435-447.
    Our ability to locate moral responsibility is often thought to be a necessary condition for conducting morally permissible medical practice, engaging in a just war, and other high-stakes endeavors. Yet, with increasing reliance upon artificially intelligent systems, we may be facing a widening responsibility gap, which, some argue, cannot be bridged by traditional concepts of responsibility. How then, if at all, can we make use of crucial emerging technologies? According to Colin Allen and Wendell Wallach, the advent of so-called ‘artificial moral agents’ (AMAs) is inevitable. Still, this notion may seem to push back the problem, leaving those who have an interest in developing autonomous technology with a dilemma. We may need to scale back our efforts at deploying AMAs (or at least maintain human oversight); otherwise, we must rapidly and drastically update our moral and legal norms in a way that ensures responsibility for potentially avoidable harms. This paper invokes contemporary accounts of responsibility in order to show how artificially intelligent systems might be held responsible. Although many theorists are concerned enough to develop artificial conceptions of agency or to exploit our present inability to regulate valuable innovations, the proposal here highlights the importance of—and outlines a plausible foundation for—a workable notion of artificial moral responsibility.
    13 citations
  27. Three Risks That Caution Against a Premature Implementation of Artificial Moral Agents for Practical and Economical Use. Christian Herzog - 2021 - Science and Engineering Ethics 27 (1):1-15.
    In the present article, I will advocate caution against developing artificial moral agents based on the notion that the utilization of preliminary forms of AMAs will potentially negatively feed back on the human social system and on human moral thought itself and its value—e.g., by reinforcing social inequalities, diminishing the breadth of employed ethical arguments and the value of character. While scientific investigations into AMAs pose no direct significant threat, I will argue against their premature utilization for (...)
    1 citation
  28. Should Moral Machines be Banned? A Commentary on van Wynsberghe and Robbins “Critiquing the Reasons for Making Artificial Moral Agents”. Bartek Chomanski - 2020 - Science and Engineering Ethics 26 (6):3469-3481.
    In a stimulating recent article for this journal (van Wynsberghe and Robbins in Sci Eng Ethics 25(3):719–735, 2019), Aimee van Wynsberghe and Scott Robbins mount a serious critique of a number of reasons advanced in favor of building artificial moral agents (AMAs). In light of their critique, vW&R make two recommendations: they advocate a moratorium on the commercialization of AMAs and suggest that the argumentative burden is now shifted onto the proponents of AMAs to come up with new (...)
    1 citation
  29. The Ethical Principles for the Development of Artificial Moral Agent - Focusing on the Top-down Approach -. 최현철, 신현주 & 변순용 - 2016 - Journal of Ethics: The Korean Association of Ethics 1 (111):31-53.
  30. Some Technical Challenges in Designing an Artificial Moral Agent. Jarek Gryz - 2020 - In Artificial Intelligence and Soft Computing. ICAISC 2020. Lecture Notes in Computer Science, vol 12416. Springer. pp. 481-491.
    Autonomous agents (robots) are no longer a subject of science fiction novels. Self-driving cars, for example, may be on our roads within a few years. These machines will necessarily interact with humans and in these interactions must take into account the moral outcome of their actions. Yet we are nowhere near designing a machine capable of autonomous moral reasoning. In some sense, this is understandable as commonsense reasoning turns out to be very hard to formalize. In this (...)
  31. Can Artificial Intelligence be an Autonomous Moral Agent? 신상규 - 2017 - Cheolhak-Korean Journal of Philosophy 132:265-292.
    The concept of a ‘moral agent’ has traditionally been applied only to personal beings possessed of free will, capable of taking responsibility for their own actions. The emergence of autonomous AI that can perform a variety of actions with moral implications, however, seems to demand a revision of this concept of agency. In this paper, I argue that AI satisfying certain requirements can be granted the status of a moral agent in a functional sense that does not presuppose personhood. To that extent, responsibility or accountability commensurate with that agency can also be ascribed to AI. To support this claim, the paper considers several possible (...)
  32. Artificial virtuous agents: from theory to machine implementation. Jakob Stenseke - 2021 - AI and Society:1-20.
    Virtue ethics has many times been suggested as a promising recipe for the construction of artificial moral agents due to its emphasis on moral character and learning. However, given the complex nature of the theory, hardly any work has de facto attempted to implement the core tenets of virtue ethics in moral machines. The main goal of this paper is to demonstrate how virtue ethics can be taken all the way from theory to machine implementation. To (...)
    4 citations
  33. Artificial virtuous agents in a multi-agent tragedy of the commons. Jakob Stenseke - 2022 - AI and Society:1-18.
    Although virtue ethics has repeatedly been proposed as a suitable framework for the development of artificial moral agents, it has been proven difficult to approach from a computational perspective. In this work, we present the first technical implementation of artificial virtuous agents in moral simulations. First, we review previous conceptual and technical work in artificial virtue ethics and describe a functionalistic path to AVAs based on dispositional virtues, bottom-up learning, and top-down eudaimonic reward. We then (...)
    1 citation
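    The three ingredients this abstract names (dispositional virtues, bottom-up learning, and top-down eudaimonic reward) can be made concrete with a toy sketch. The Python below is an invented illustration, not Stenseke's implementation: agents with a single 'temperance' disposition harvest a shared stock, and a group-level flourishing signal nudges each disposition; all dynamics and parameters are assumptions.

```python
# Toy sketch of virtue dispositions plus a eudaimonic learning signal.
# Invented for illustration; not the paper's model.

class VirtuousAgent:
    def __init__(self, temperance: float = 0.5, lr: float = 0.05):
        self.temperance = temperance  # disposition: restraint in harvesting
        self.lr = lr                  # bottom-up learning rate

    def harvest(self, stock: float) -> float:
        # A more temperate agent takes a smaller share of the common stock.
        return stock * (1 - self.temperance) * 0.2

    def learn(self, signal: float) -> None:
        # Adjust the disposition in the direction of the eudaimonic signal.
        self.temperance = min(1.0, max(0.0, self.temperance + self.lr * signal))

def simulate(agents, stock: float = 100.0, rounds: int = 50,
             regrowth: float = 1.05) -> float:
    for _ in range(rounds):
        taken = sum(a.harvest(stock) for a in agents)
        stock = max(0.0, stock - taken) * regrowth
        # Top-down eudaimonic signal: depletion of the commons is bad for the
        # whole group, so it pushes every disposition toward more restraint.
        signal = 1.0 if stock < 50.0 else -0.1
        for a in agents:
            a.learn(signal)
    return stock

print(simulate([VirtuousAgent() for _ in range(4)]))  # stock after 50 rounds
```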
  34. Artificial virtuous agents: from theory to machine implementation. Jakob Stenseke - 2023 - AI and Society 38 (4):1301-1320.
    Virtue ethics has many times been suggested as a promising recipe for the construction of artificial moral agents due to its emphasis on moral character and learning. However, given the complex nature of the theory, hardly any work has de facto attempted to implement the core tenets of virtue ethics in moral machines. The main goal of this paper is to demonstrate how virtue ethics can be taken all the way from theory to machine implementation. To (...)
    1 citation
  35. A minimalist model of the artificial autonomous moral agent (AAMA). Ioan Muntean & Don Howard - 2016 - In SSS-16 Symposium Technical Reports. Association for the Advancement of Artificial Intelligence. AAAI.
    This paper proposes a model for an artificial autonomous moral agent (AAMA), which is parsimonious in its ontology and minimal in its ethical assumptions. Starting from a set of moral data, this AAMA is able to learn and develop a form of moral competency. It resembles an “optimizing predictive mind,” which uses moral data (describing typical behavior of humans) and a set of dispositional traits to learn how to classify different actions (given a given (...)
    5 citations
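    As a rough illustration of the kind of approach this abstract describes (learning to classify actions from labelled moral data), the sketch below uses nothing more than a nearest-neighbour vote over invented feature vectors. It is a minimal stand-in under assumed features and labels, not Muntean and Howard's model.

```python
# Hypothetical sketch: classify actions as "right"/"wrong" from labelled
# moral data via k-nearest neighbours. Features and labels are invented.

import math

# Each action is a feature vector: (harm caused, benefit produced, honesty).
TRAINING_DATA = [
    ((0.9, 0.1, 0.2), "wrong"),
    ((0.8, 0.3, 0.9), "wrong"),
    ((0.1, 0.8, 0.9), "right"),
    ((0.2, 0.6, 0.7), "right"),
    ((0.0, 0.2, 1.0), "right"),
]

def classify(action, k: int = 3) -> str:
    """Label a new action by majority vote among its k nearest neighbours."""
    neighbours = sorted(TRAINING_DATA,
                        key=lambda ex: math.dist(ex[0], action))[:k]
    labels = [label for _, label in neighbours]
    return max(set(labels), key=labels.count)

print(classify((0.7, 0.2, 0.3)))  # a high-harm action -> "wrong"
```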
  36. Artificial morality: Top-down, bottom-up, and hybrid approaches. [REVIEW] Colin Allen, Iva Smit & Wendell Wallach - 2005 - Ethics and Information Technology 7 (3):149-155.
    A principal goal of the discipline of artificial morality is to design artificial agents to act as if they are moral agents. Intermediate goals of artificial morality are directed at building into AI systems sensitivity to the values, ethics, and legality of activities. The development of an effective foundation for the field of artificial morality involves exploring the technological and philosophical issues involved in making computers into explicit moral reasoners. The goal of this paper (...)
    62 citations
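    The top-down/bottom-up/hybrid distinction lends itself to a compact sketch. The following hypothetical Python (not from the paper) combines a hard-coded top-down constraint with a bottom-up learned score; the rule, weights, and feature names are all invented.

```python
# Hypothetical hybrid moral filter: top-down rules veto, a bottom-up
# learned score ranks what remains. Illustrative only.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Action:
    name: str
    harms_human: bool = False
    features: dict = field(default_factory=dict)

# Top-down: an explicit, human-authored constraint (e.g., "do no harm").
def violates_rules(action: Action) -> bool:
    return action.harms_human

# Bottom-up: a score learned from examples of approved behaviour.
# A trivial linear stand-in for a trained model.
WEIGHTS = {"helps_user": 2.0, "tells_truth": 1.0}

def learned_score(action: Action) -> float:
    return sum(WEIGHTS.get(k, 0.0) * v for k, v in action.features.items())

def choose(actions: list) -> Optional[Action]:
    permitted = [a for a in actions if not violates_rules(a)]  # top-down veto
    return max(permitted, key=learned_score, default=None)     # bottom-up rank

options = [
    Action("deceive", features={"helps_user": 1.0, "tells_truth": -1.0}),
    Action("coerce", harms_human=True, features={"helps_user": 1.0}),
    Action("assist", features={"helps_user": 1.0, "tells_truth": 1.0}),
]
best = choose(options)
print(best.name if best else "no permissible action")  # -> "assist"
```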
  37. Artificial Moral Cognition: Moral Functionalism and Autonomous Moral Agency. Ioan Muntean & Don Howard - 2017 - In Thomas Powers (ed.), Philosophy and Computing: Essays in Epistemology, Philosophy of Mind, Logic, and Ethics. Springer.
    This paper proposes a model of the Artificial Autonomous Moral Agent (AAMA), discusses a standard of moral cognition for AAMA, and compares it with other models of artificial normative agency. It is argued here that artificial morality is possible within the framework of a “moral dispositional functionalism.” This AAMA is able to “read” the behavior of human actors, available as collected data, and to categorize their moral behavior based on moral patterns (...)
    1 citation
  38. The Artificial Moral Advisor. The “Ideal Observer” Meets Artificial Intelligence. Alberto Giubilini & Julian Savulescu - 2018 - Philosophy and Technology 31 (2):169-188.
    We describe a form of moral artificial intelligence that could be used to improve human moral decision-making. We call it the “artificial moral advisor”. The AMA would implement a quasi-relativistic version of the “ideal observer” famously described by Roderick Firth. We describe similarities and differences between the AMA and Firth’s ideal observer. Like Firth’s ideal observer, the AMA is disinterested, dispassionate, and consistent in its judgments. Unlike Firth’s observer, the AMA is non-absolutist, because it would (...)
    32 citations
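    The functional idea behind the artificial moral advisor (apply the user's own standards disinterestedly and consistently to the facts of each option) can be sketched as follows. This is an invented toy, not Giubilini and Savulescu's system; the value names, weights, and options are assumptions.

```python
# Hypothetical sketch of a quasi-relativistic moral advisor: rank options
# by the *user's own* value weights, applied uniformly to the facts.

def advise(options, user_values):
    """Return the option that best fits the user's value weights."""
    def score(facts):
        return sum(user_values.get(value, 0.0) * degree
                   for value, degree in facts.items())
    return max(options, key=lambda o: score(o["facts"]))

options = [
    {"name": "donate locally",
     "facts": {"fairness": 0.4, "welfare": 0.5}},
    {"name": "donate to effective charity",
     "facts": {"fairness": 0.3, "welfare": 0.9}},
]

# Quasi-relativism: users with different weights get different, but each
# internally consistent, advice.
print(advise(options, {"welfare": 1.0, "fairness": 0.2})["name"])  # effective
print(advise(options, {"welfare": 0.2, "fairness": 1.0})["name"])  # local
```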
  39. Guilty Artificial Minds: Folk Attributions of Mens Rea and Culpability to Artificially Intelligent Agents. Michael T. Stuart & Markus Kneer - 2021 - Proceedings of the ACM on Human-Computer Interaction 5 (CSCW2).
    While philosophers hold that it is patently absurd to blame robots or hold them morally responsible [1], a series of recent empirical studies suggest that people do ascribe blame to AI systems and robots in certain contexts [2]. This is disconcerting: Blame might be shifted from the owners, users or designers of AI systems to the systems themselves, leading to the diminished accountability of the responsible human agents [3]. In this paper, we explore one of the potential underlying reasons for (...)
    2 citations
  40. Artificial agency, consciousness, and the criteria for moral agency: What properties must an artificial agent have to be a moral agent? [REVIEW] Kenneth Einar Himma - 2009 - Ethics and Information Technology 11 (1):19-29.
    In this essay, I describe and explain the standard accounts of agency, natural agency, artificial agency, and moral agency, as well as articulate what are widely taken to be the criteria for moral agency, supporting the contention that this is the standard account with citations from such widely used and respected professional resources as the Stanford Encyclopedia of Philosophy, Routledge Encyclopedia of Philosophy, and the Internet Encyclopedia of Philosophy. I then flesh out the implications of some of (...)
    70 citations
  41. What makes full artificial agents morally different. Erez Firt - forthcoming - AI and Society:1-10.
    In the research field of machine ethics, we commonly categorize artificial moral agents into four types, with the most advanced referred to as a full ethical agent, or sometimes a full-blown Artificial Moral Agent (AMA). This type has three main characteristics: autonomy, moral understanding and a certain level of consciousness, including intentional mental states, moral emotions such as compassion, the ability to praise and condemn, and a conscience. This paper aims to discuss (...)
  42. Ethics and artificial life: From modeling to moral agents. [REVIEW] John P. Sullins - 2005 - Ethics and Information Technology 7 (3):139-148.
    Artificial Life has two goals. One attempts to describe fundamental qualities of living systems through agent-based computer models. And the second studies whether or not we can artificially create living things in computational mediums that can be realized either virtually in software, or through biotechnology. The study of ALife has recently branched into two further subdivisions, one is “dry” ALife, which is the study of living systems “in silico” through the use of computer simulations, and the other (...)
    8 citations
  43. Consciousness and ethics: Artificially conscious moral agents. Wendell Wallach, Colin Allen & Stan Franklin - 2011 - International Journal of Machine Consciousness 3 (01):177-192.
  44. Varieties of Artificial Moral Agency and the New Control Problem. Marcus Arvan - 2022 - Humana.Mente - Journal of Philosophical Studies 15 (42):225-256.
    This paper presents a new trilemma with respect to resolving the control and alignment problems in machine ethics. Section 1 outlines three possible types of artificial moral agents (AMAs): (1) 'Inhuman AMAs' programmed to learn or execute moral rules or principles without understanding them in anything like the way that we do; (2) 'Better-Human AMAs' programmed to learn, execute, and understand moral rules or principles somewhat like we do, but correcting for various sources of human moral error; and (3) 'Human-Like AMAs' programmed to understand and apply moral values in broadly the same way that we do, with a human-like moral psychology. Sections 2–4 then argue that each type of AMA generates unique control and alignment problems that have not been fully appreciated. Section 2 argues that Inhuman AMAs are likely to behave in inhumane ways that pose serious existential risks. Section 3 then contends that Better-Human AMAs run a serious risk of magnifying some sources of human moral error by reducing or eliminating others. Section 4 then argues that Human-Like AMAs would not only likely reproduce human moral failures, but also plausibly be highly intelligent, conscious beings with interests and wills of their own who should therefore be entitled to similar moral rights and freedoms as us. This generates what I call the New Control Problem: ensuring that humans and Human-Like AMAs exert a morally appropriate amount of control over each other. Finally, Section 5 argues that resolving the New Control Problem would, at a minimum, plausibly require ensuring what Hume and Rawls term ‘circumstances of justice’ between humans and Human-Like AMAs. But, I argue, there are grounds for thinking this will be profoundly difficult to achieve. I thus conclude on a skeptical note. Different approaches to developing ‘safe, ethical AI’ generate subtly different control and alignment problems that we do not currently know how to adequately resolve, and which may or may not be ultimately surmountable.
  45. When is a robot a moral agent? John P. Sullins - 2006 - International Review of Information Ethics 6 (12):23-30.
    In this paper Sullins argues that in certain circumstances robots can be seen as real moral agents. A distinction is made between persons and moral agents such that it is not necessary for a robot to have personhood in order to be a moral agent. I detail three requirements for a robot to be seen as a moral agent. The first is achieved when the robot is significantly autonomous from any programmers or operators of (...)
    70 citations
  46. ETHICA EX MACHINA. Exploring artificial moral agency or the possibility of computable ethics. Rodrigo Sanz - 2020 - Zeitschrift für Ethik und Moralphilosophie 3 (2):223-239.
    Since the automation revolution of our technological era, diverse machines or robots have gradually begun to reconfigure our lives. With this expansion, it seems that those machines are now faced with a new challenge: more autonomous decision-making involving life or death consequences. This paper explores the philosophical possibility of artificial moral agency through the following question: could a machine obtain the cognitive capacities needed to be a moral agent? In this regard, I propose to expose, under (...)
  47. Norms and Causation in Artificial Morality. Laura Fearnley - forthcoming - Joint Proceedings of ACM IUI:1-4.
    There has been increasing interest in how to build Artificial Moral Agents (AMAs) that make moral decisions on the basis of causation rather than mere correlation. One promising avenue for achieving this is to use a causal modelling approach. This paper explores an open and important problem with such an approach; namely, the problem of what makes a causal model an appropriate model. I explore why we need to establish criteria for what makes a model appropriate, (...)
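    The contrast between causation and mere correlation that motivates this entry can be illustrated with a minimal structural causal model. The example below is hypothetical (the variables and structural equation are invented, not Fearnley's): it tests whether an action makes a counterfactual difference to an outcome by intervening on it.

```python
# Minimal structural causal model sketch: but-for (counterfactual)
# causation via intervention. Variables and equations are invented.

def outcome(brake_applied: bool, road_icy: bool) -> bool:
    """Structural equation: a collision occurs on ice unless braking occurs."""
    return road_icy and not brake_applied

def but_for_cause(action: bool, context: bool) -> bool:
    """Did the agent's action make a counterfactual difference?"""
    actual = outcome(action, context)
    counterfactual = outcome(not action, context)  # intervene on the action
    return actual != counterfactual

# On an icy road, braking vs. not braking changes whether a collision
# occurs, so the action counts as a but-for cause of the outcome.
print(but_for_cause(action=False, context=True))   # True
print(but_for_cause(action=False, context=False))  # False: no difference
```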
  48. Blame It on the AI? On the Moral Responsibility of Artificial Moral Advisors. Mihaela Constantinescu, Constantin Vică, Radu Uszkai & Cristina Voinea - 2022 - Philosophy and Technology 35 (2):1-26.
    Deep learning AI systems have proven a wide capacity to take over human-related activities such as car driving, medical diagnosing, or elderly care, often displaying behaviour with unpredictable consequences, including negative ones. This has raised the question of whether highly autonomous AI may qualify as morally responsible agents. In this article, we develop a set of four conditions that an entity needs to meet in order to be ascribed moral responsibility, by drawing on Aristotelian ethics and contemporary philosophical research. We (...)
    4 citations
  49. On the morality of artificial agents. Luciano Floridi & J. W. Sanders - 2004 - Minds and Machines 14 (3):349-379.
    Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality (...)
    287 citations
  50. A Theological Account of Artificial Moral Agency. Ximian Xu - 2023 - Studies in Christian Ethics 36 (3):642-659.
    This article seeks to explore the idea of artificial moral agency from a theological perspective. By drawing on the Reformed theology of archetype-ectype, it will demonstrate that computational artefacts are the ectype of human moral agents and, consequently, have a partial moral agency. In this light, human moral agents mediate and extend their moral values through computational artefacts, which are ontologically connected with humans and only related to limited particular moral issues. This (...) leitmotif opens up a way to deploy carebots into Christian pastoral care while maintaining the human agent's uniqueness and responsibility in pastoral caregiving practices.
1 — 50 / 986