Results for 'Artificial morality'

999 found
  1. Sustained Representation of Perspectival Shape.Jorge Morales, Axel Bax & Chaz Firestone - 2020 - Proceedings of the National Academy of Sciences of the United States of America 117 (26):14873–14882.
    Arguably the most foundational principle in perception research is that our experience of the world goes beyond the retinal image; we perceive the distal environment itself, not the proximal stimulation it causes. Shape may be the paradigm case of such “unconscious inference”: When a coin is rotated in depth, we infer the circular object it truly is, discarding the perspectival ellipse projected on our eyes. But is this really the fate of such perspectival shapes? Or does a tilted coin retain (...)
    11 citations
  2. Evolutionary and religious perspectives on morality.Artificial Intelligence - forthcoming - Zygon.
  3. Ties without Tethers.Artificial Heart Trial - 2007 - In Lisa A. Eckenwiler & Felicia Cohn (eds.), The Ethics of Bioethics: Mapping the Moral Landscape. Johns Hopkins University Press.
  4. Stress, Coping, and Resilience Before and After COVID-19: A Predictive Model Based on Artificial Intelligence in the University Environment.Francisco Manuel Morales-Rodríguez, Juan Pedro Martínez-Ramón, Inmaculada Méndez & Cecilia Ruiz-Esteban - 2021 - Frontiers in Psychology 12.
    The COVID-19 global health emergency has greatly impacted the educational field. Faced with unprecedented stress situations, professors, students, and families have employed various coping and resilience strategies throughout the confinement period. High and persistent stress levels are associated with other pathologies; hence, their detection and prevention are needed. Consequently, this study aimed to design a predictive model of stress in the educational field based on artificial intelligence that included certain sociodemographic variables, coping strategies, and resilience capacity, and to study (...)
    2 citations
  5. Information-seeking dialogue for explainable artificial intelligence: Modelling and analytics.Ilia Stepin, Katarzyna Budzynska, Alejandro Catala, Martín Pereira-Fariña & Jose M. Alonso-Moral - 2024 - Argument and Computation 15 (1):49-107.
    Explainable artificial intelligence has become a vitally important research field aiming, among other tasks, to justify predictions made by intelligent classifiers automatically learned from data. Importantly, efficiency of automated explanations may be undermined if the end user does not have sufficient domain knowledge or lacks information about the data used for training. To address the issue of effective explanation communication, we propose a novel information-seeking explanatory dialogue game following the most recent requirements to automatically generated explanations. Further, we generalise (...)
    1 citation
  6. Toward safe AI.Andres Morales-Forero, Samuel Bassetto & Eric Coatanea - 2023 - AI and Society 38 (2):685-696.
    Since some AI algorithms with high predictive power have impacted human integrity, safety has become a crucial challenge in adopting and deploying AI. Although it is impossible to prevent an algorithm from failing in complex tasks, it is crucial to ensure that it fails safely, especially if it is a critical system. Moreover, due to AI’s unbridled development, it is imperative to minimize the methodological gaps in these systems’ engineering. This paper uses the well-known Box-Jenkins method for statistical modeling as (...)
    1 citation
  7. Review of Carlos Montemayor's "The Prospect of a Humanitarian Artificial Intelligence: Agency and Value Alignment". London, 2023. Bloomsbury Academic, Bloomsbury Publishing. [REVIEW]Diego Morales - 2023 - Journal of Applied Philosophy 40 (4):766-768.
    Book review of Carlos Montemayor's "The Prospect of a Humanitarian Artificial Intelligence: Agency and Value Alignment".
  8. Problemática antropológica detrás de la discriminación generada a partir de los algoritmos de la inteligencia artificial.Gabriela Morales Ramírez - 2023 - Medicina y Ética 34 (2):429-480.
    Artificial intelligence is currently at an unprecedented stage of development, promising great benefits that extend across the various social spheres. One problem in this regard is the apparent neutrality of the algorithms used in its programming and their large-scale impact with respect to the discrimination generated by the biases embedded in them, which originate with their designers. This is the result of a partial view of reality and of the person themselves. The solution to the (...)
  9. Self-Esteem at University: Proposal of an Artificial Neural Network Based on Resilience, Stress, and Sociodemographic Variables.Juan Pedro Martínez-Ramón, Francisco Manuel Morales-Rodríguez, Cecilia Ruiz-Esteban & Inmaculada Méndez - 2022 - Frontiers in Psychology 13.
    Artificial intelligence is a useful predictive tool for a wide variety of fields of knowledge. Despite this, the educational field is still an environment that lacks a variety of studies that use this type of predictive tools. In parallel, it is postulated that the levels of self-esteem in the university environment may be related to the strategies implemented to solve problems. For these reasons, the aim of this study was to analyze the levels of self-esteem presented by teaching staff (...)
  10. An explanation space to align user studies with the technical development of Explainable AI.Garrick Cabour, Andrés Morales-Forero, Élise Ledoux & Samuel Bassetto - 2023 - AI and Society 38 (2):869-887.
    Providing meaningful and actionable explanations for end-users is a situated problem requiring the intersection of multiple disciplines to address social, operational, and technical challenges. However, the explainable artificial intelligence community has not commonly adopted or created tangible design tools that allow interdisciplinary work to develop reliable AI-powered solutions. This paper proposes a formative architecture that defines the explanation space from a user-inspired perspective. The architecture comprises five intertwined components to outline explanation requirements for a task: (1) the end-users’ mental (...)
  11. Hyperbolic Secant representation of the logistic function: Application to probabilistic Multiple Instance Learning for CT intracranial hemorrhage detection.Francisco M. Castro-Macías, Pablo Morales-Álvarez, Yunan Wu, Rafael Molina & Aggelos K. Katsaggelos - 2024 - Artificial Intelligence 331 (C):104115.
  12. Sensitive loss: Improving accuracy and fairness of face representations with discrimination-aware deep learning.Ignacio Serna, Aythami Morales, Julian Fierrez & Nick Obradovich - 2022 - Artificial Intelligence 305 (C):103682.
  13. Artificial Moral Responsibility: How We Can and Cannot Hold Machines Responsible.Daniel W. Tigard - 2021 - Cambridge Quarterly of Healthcare Ethics 30 (3):435-447.
    Our ability to locate moral responsibility is often thought to be a necessary condition for conducting morally permissible medical practice, engaging in a just war, and other high-stakes endeavors. Yet, with increasing reliance upon artificially intelligent systems, we may be facing a widening responsibility gap, which, some argue, cannot be bridged by traditional concepts of responsibility. How then, if at all, can we make use of crucial emerging technologies? According to Colin Allen and Wendell Wallach, the advent of so-called ‘artificial (...)
    13 citations
  14. Approximation of action theories and its application to conformant planning.Phan Huy Tu, Tran Cao Son, Michael Gelfond & A. Ricardo Morales - 2011 - Artificial Intelligence 175 (1):79-119.
  15. Artificial moral agents are infeasible with foreseeable technologies.Patrick Chisan Hew - 2014 - Ethics and Information Technology 16 (3):197-206.
    For an artificial agent to be morally praiseworthy, its rules for behaviour and the mechanisms for supplying those rules must not be supplied entirely by external humans. Such systems are a substantial departure from current technologies and theory, and are a low prospect. With foreseeable technologies, an artificial agent will carry zero responsibility for its behavior and humans will retain full responsibility.
    15 citations
  16. The Artificial Moral Advisor. The “Ideal Observer” Meets Artificial Intelligence.Alberto Giubilini & Julian Savulescu - 2018 - Philosophy and Technology 31 (2):169-188.
    We describe a form of moral artificial intelligence that could be used to improve human moral decision-making. We call it the “artificial moral advisor”. The AMA would implement a quasi-relativistic version of the “ideal observer” famously described by Roderick Firth. We describe similarities and differences between the AMA and Firth’s ideal observer. Like Firth’s ideal observer, the AMA is disinterested, dispassionate, and consistent in its judgments. Unlike Firth’s observer, the AMA is non-absolutist, because it would take into account (...)
    32 citations
  17. Artificial Moral Patients: Mentality, Intentionality, and Systematicity.Howard Nye & Tugba Yoldas - 2021 - International Review of Information Ethics 29:1-10.
    In this paper, we defend three claims about what it will take for an AI system to be a basic moral patient to whom we can owe duties of non-maleficence not to harm her and duties of beneficence to benefit her: (1) Moral patients are mental patients; (2) Mental patients are true intentional systems; and (3) True intentional systems are systematically flexible. We suggest that we should be particularly alert to the possibility of such systematically flexible true intentional systems developing (...)
    1 citation
  18. Artificial Morality: Virtuous Robots for Virtual Games.Peter Danielson - 1992 - London: Routledge.
    This book explores the role of artificial intelligence in the development of a claim that morality is person-made and rational. Professor Danielson builds moral robots that do better than amoral competitors in a tournament of games like the Prisoner's Dilemma and Chicken. The book thus engages in current controversies over the adequacy of the received theory of rational choice. It sides with Gauthier and McClennan, who extend the devices of rational choice to include moral constraint. Artificial Morality goes further, by promoting communication, testing and copying of principles and by stressing empirical tests.
    30 citations
  19. Artificial moral experts: asking for ethical advice to artificial intelligent assistants.Blanca Rodríguez-López & Jon Rueda - 2023 - AI and Ethics.
    In most domains of human life, we are willing to accept that there are experts with greater knowledge and competencies that distinguish them from non-experts or laypeople. Despite this fact, the very recognition of expertise curiously becomes more controversial in the case of “moral experts”. Do moral experts exist? And, if they indeed do, are there ethical reasons for us to follow their advice? Likewise, can emerging technological developments broaden our very concept of moral expertise? In this article, we begin (...)
  20. Artificial Moral Agents: Moral Mentors or Sensible Tools?Fabio Fossa - 2018 - Ethics and Information Technology (2):1-12.
    The aim of this paper is to offer an analysis of the notion of artificial moral agent (AMA) and of its impact on human beings’ self-understanding as moral agents. Firstly, I introduce the topic by presenting what I call the Continuity Approach. Its main claim holds that AMAs and human moral agents exhibit no significant qualitative difference and, therefore, should be considered homogeneous entities. Secondly, I focus on the consequences this approach leads to. In order to do this I (...)
    9 citations
  21. Artificial Morality: Virtuous Robots for Virtual Games.Peter Danielson - 1992 - Routledge.
    This book explores the role of artificial intelligence in the development of a claim that morality is person-made and rational. Professor Danielson builds moral robots that do better than amoral competitors in a tournament of games like the Prisoner's Dilemma and Chicken. The book thus engages in current controversies over the adequacy of the received theory of rational choice. It sides with Gauthier and McClennan, who extend the devices of rational choice to include moral constraint. _Artificial Morality_ goes (...)
    20 citations
  22. Artificial Moral Cognition: Moral Functionalism and Autonomous Moral Agency.Ioan Muntean & Don Howard - 2017 - In Thomas Powers (ed.), Philosophy and Computing: Essays in Epistemology, Philosophy of Mind, Logic, and Ethics. Springer.
    This paper proposes a model of the Artificial Autonomous Moral Agent (AAMA), discusses a standard of moral cognition for AAMA, and compares it with other models of artificial normative agency. It is argued here that artificial morality is possible within the framework of a “moral dispositional functionalism.” This AAMA is able to “read” the behavior of human actors, available as collected data, and to categorize their moral behavior based on moral patterns herein. The present model is (...)
     
    1 citation
  23. Artificial morality: Top-down, bottom-up, and hybrid approaches. [REVIEW]Colin Allen, Iva Smit & Wendell Wallach - 2005 - Ethics and Information Technology 7 (3):149-155.
    A principal goal of the discipline of artificial morality is to design artificial agents to act as if they are moral agents. Intermediate goals of artificial morality are directed at building into AI systems sensitivity to the values, ethics, and legality of activities. The development of an effective foundation for the field of artificial morality involves exploring the technological and philosophical issues involved in making computers into explicit moral reasoners. The goal of this (...)
    62 citations
  24. Artificial moral and legal personhood.John-Stewart Gordon - forthcoming - AI and Society:1-15.
    This paper considers the hotly debated issue of whether one should grant moral and legal personhood to intelligent robots once they have achieved a certain standard of sophistication based on such criteria as rationality, autonomy, and social relations. The starting point for the analysis is the European Parliament’s resolution on Civil Law Rules on Robotics and its recommendation that robots be granted legal status and electronic personhood. The resolution is discussed against the background of the so-called Robotics Open Letter, which (...)
    15 citations
  25. Artificial Moral Agents Within an Ethos of AI4SG.Bongani Andy Mabaso - 2020 - Philosophy and Technology 34 (1):7-21.
    As artificial intelligence (AI) continues to proliferate into every area of modern life, there is no doubt that society has to think deeply about the potential impact, whether negative or positive, that it will have. Whilst scholars recognise that AI can usher in a new era of personal, social and economic prosperity, they also warn of the potential for it to be misused towards the detriment of society. Deliberate strategies are therefore required to ensure that AI can be safely (...)
    5 citations
  26. Artificial morality and artificial law.Lothar Philipps - 1993 - Artificial Intelligence and Law 2 (1):51-63.
    The article investigates the interplay of moral rules in computer simulation. The investigation is based on two situations which are well-known to game theory: the prisoner's dilemma and the game of Chicken. The prisoner's dilemma can be taken to represent contractual situations, the game of Chicken represents a competitive situation on the one hand and the provision for a common good on the other. Unlike the rules usually used in game theory, each player knows the other's strategy. In that way, (...)
    1 citation
  27. Virtuous vs. utilitarian artificial moral agents.William A. Bauer - 2020 - AI and Society (1):263-271.
    Given that artificial moral agents—such as autonomous vehicles, lethal autonomous weapons, and automated financial trading systems—are now part of the socio-ethical equation, we should morally evaluate their behavior. How should artificial moral agents make decisions? Is one moral theory better suited than others for machine ethics? After briefly overviewing the dominant ethical approaches for building morality into machines, this paper discusses a recent proposal, put forward by Don Howard and Ioan Muntean (2016, 2017), for an artificial (...)
    13 citations
  28. A Normative Approach to Artificial Moral Agency.Dorna Behdadi & Christian Munthe - 2020 - Minds and Machines 30 (2):195-218.
    This paper proposes a methodological redirection of the philosophical debate on artificial moral agency in view of increasingly pressing practical needs due to technological development. This “normative approach” suggests abandoning theoretical discussions about what conditions may hold for moral agency and to what extent these may be met by artificial entities such as AI systems and robots. Instead, the debate should focus on how and to what extent such entities should be included in human practices normally assuming moral (...)
    19 citations
  29. Artificial moral agents: an intercultural perspective.Michael Nagenborg - 2007 - International Review of Information Ethics 7 (9):129-133.
    In this paper I will argue that artificial moral agents are a fitting subject of intercultural information ethics because of the impact they may have on the relationship between information rich and information poor countries. I will give a limiting definition of AMAs first, and discuss two different types of AMAs with different implications from an intercultural perspective. While AMAs following preset rules might raise concerns about digital imperialism, AMAs being able to adjust to their user's behavior will lead (...)
    5 citations
  30. Artificial Moral Agents: A Survey of the Current Status. [REVIEW]José-Antonio Cervantes, Sonia López, Luis-Felipe Rodríguez, Salvador Cervantes, Francisco Cervantes & Félix Ramos - 2020 - Science and Engineering Ethics 26 (2):501-532.
    One of the objectives in the field of artificial intelligence for some decades has been the development of artificial agents capable of coexisting in harmony with people and other systems. The computing research community has made efforts to design artificial agents capable of doing tasks the way people do, tasks requiring cognitive mechanisms such as planning, decision-making, and learning. The application domains of such software agents are evident nowadays. Humans are experiencing the inclusion of artificial agents (...)
    25 citations
  31. Un-making artificial moral agents.Deborah G. Johnson & Keith W. Miller - 2008 - Ethics and Information Technology 10 (2-3):123-133.
    Floridi and Sanders' seminal work, “On the morality of artificial agents”, has catalyzed attention around the moral status of computer systems that perform tasks for humans, effectively acting as “artificial agents.” Floridi and Sanders argue that the class of entities considered moral agents can be expanded to include computers if we adopt the appropriate level of abstraction. In this paper we argue that the move to distinguish levels of abstraction is far from decisive on this issue. We (...)
    36 citations
  32. ETHICA EX MACHINA. Exploring artificial moral agency or the possibility of computable ethics.Rodrigo Sanz - 2020 - Zeitschrift Für Ethik Und Moralphilosophie 3 (2):223-239.
    Since the automation revolution of our technological era, diverse machines or robots have gradually begun to reconfigure our lives. With this expansion, it seems that those machines are now faced with a new challenge: more autonomous decision-making involving life or death consequences. This paper explores the philosophical possibility of artificial moral agency through the following question: could a machine obtain the cognitive capacities needed to be a moral agent? In this regard, I propose to expose, under a normative-cognitive perspective, (...)
  33. Varieties of Artificial Moral Agency and the New Control Problem.Marcus Arvan - 2022 - Humana.Mente - Journal of Philosophical Studies 15 (42):225-256.
    This paper presents a new trilemma with respect to resolving the control and alignment problems in machine ethics. Section 1 outlines three possible types of artificial moral agents (AMAs): (1) 'Inhuman AMAs' programmed to learn or execute moral rules or principles without understanding them in anything like the way that we do; (2) 'Better-Human AMAs' programmed to learn, execute, and understand moral rules or principles somewhat like we do, but correcting for various sources of human moral error; and (3) (...)
  34. Philosophical Signposts for Artificial Moral Agent Frameworks.Robert James M. Boyles - 2017 - Suri 6 (2):92–109.
    This article focuses on a particular issue under machine ethics—that is, the nature of Artificial Moral Agents. Machine ethics is a branch of artificial intelligence that looks into the moral status of artificial agents. Artificial moral agents, on the other hand, are artificial autonomous agents that possess moral value, as well as certain rights and responsibilities. This paper demonstrates that attempts to fully develop a theory that could possibly account for the nature of Artificial (...)
  35. Prolegomena to any future artificial moral agent.Colin Allen & Gary Varner - 2000 - Journal of Experimental and Theoretical Artificial Intelligence 12 (3):251--261.
    As artificial intelligence moves ever closer to the goal of producing fully autonomous agents, the question of how to design and implement an artificial moral agent (AMA) becomes increasingly pressing. Robots possessing autonomous capacities to do things that are useful to humans will also have the capacity to do things that are harmful to humans and other sentient beings. Theoretical challenges to developing artificial moral agents result both from controversies among ethicists about moral theory itself, and from (...)
     
    75 citations
  36. Artificial moral agents: creative, autonomous, social. An approach based on evolutionary computation.Ioan Muntean & Don Howard - 2014 - In Johanna Seibt, Raul Hakli & Marco Nørskov (eds.), Frontiers in Artificial Intelligence and Applications.
  37. Can’t Bottom-up Artificial Moral Agents Make Moral Judgements?Robert James M. Boyles - 2024 - Filosofija. Sociologija 35 (1).
    This article examines if bottom-up artificial moral agents are capable of making genuine moral judgements, specifically in light of David Hume’s is-ought problem. The latter underscores the notion that evaluative assertions could never be derived from purely factual propositions. Bottom-up technologies, on the other hand, are those designed via evolutionary, developmental, or learning techniques. In this paper, the nature of these systems is looked into with the aim of preliminarily assessing if there are good reasons to suspect that, on (...)
  38. Critiquing the Reasons for Making Artificial Moral Agents.Aimee van Wynsberghe & Scott Robbins - 2018 - Science and Engineering Ethics:1-17.
    Many industry leaders and academics from the field of machine ethics would have us believe that the inevitability of robots coming to have a larger role in our lives demands that robots be endowed with moral reasoning capabilities. Robots endowed in this way may be referred to as artificial moral agents. Reasons often given for developing AMAs are: the prevention of harm, the necessity for public trust, the prevention of immoral use, such machines are better moral reasoners than humans, (...)
    44 citations
  39. On the Ethical Limitation of AMA (Artificial Moral Agent). 이향연 - 2021 - Journal of the Daedong Philosophical Association 95:103-118.
    This study examines whether the ethical approaches recently proposed for AI are sound. Such an examination requires addressing both the fundamental question of whether AI can be ethical at all and the methodological question of how an AMA (artificial moral agent) could be implemented technically. It also encompasses the question of whether an AMA can be recognized as an autonomous entity, along with the ethical approaches to AMAs that continue to be debated. I review the various discussions related to these issues and analyze the characteristics and limitations of each. All of these examinations concern the fundamental limits of the AMA (...)
  40. Making moral machines: why we need artificial moral agents.Paul Formosa & Malcolm Ryan - forthcoming - AI and Society.
    As robots and Artificial Intelligences become more enmeshed in rich social contexts, it seems inevitable that we will have to make them into moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should seek to create such Artificial Moral Agents (AMAs). Recently, several papers have argued that we have strong reasons not to develop AMAs. In response, we develop a (...)
    10 citations
  41. A Theological Account of Artificial Moral Agency.Ximian Xu - 2023 - Studies in Christian Ethics 36 (3):642-659.
    This article seeks to explore the idea of artificial moral agency from a theological perspective. By drawing on the Reformed theology of archetype-ectype, it will demonstrate that computational artefacts are the ectype of human moral agents and, consequently, have a partial moral agency. In this light, human moral agents mediate and extend their moral values through computational artefacts, which are ontologically connected with humans and only related to limited particular moral issues. This moral leitmotif opens up a way to (...)
  42. Norms and Causation in Artificial Morality.Laura Fearnley - forthcoming - Joint Proceedings of Acm Iui:1-4.
    There has been increasing interest in how to build Artificial Moral Agents (AMAs) that make moral decisions on the basis of causation rather than mere correlation. One promising avenue for achieving this is to use a causal modelling approach. This paper explores an open and important problem with such an approach; namely, the problem of what makes a causal model an appropriate model. I explore why we need to establish criteria for what makes a model appropriate, and offer up (...)
  43. Karol Wojtyla on Artificial Moral Agency and Moral Accountability.Richard A. Spinello - 2011 - The National Catholic Bioethics Quarterly 11 (3):469-491.
    As the notion of artificial moral agency gains popularity among ethicists, it threatens the unique status of the human person as a responsible moral agent. The philosophy of ontocentrism, popularized by Luciano Floridi, argues that biocentrism is too restrictive and must yield to a new philosophical vision that endows all beings with some intrinsic value. Floridi’s macroethics also regards more sophisticated digital entities such as robots as accountable moral agents. To refute these principles, this paper turns to the thought (...)
  44. Blame It on the AI? On the Moral Responsibility of Artificial Moral Advisors.Mihaela Constantinescu, Constantin Vică, Radu Uszkai & Cristina Voinea - 2022 - Philosophy and Technology 35 (2):1-26.
    Deep learning AI systems have proven a wide capacity to take over human-related activities such as car driving, medical diagnosing, or elderly care, often displaying behaviour with unpredictable consequences, including negative ones. This has raised the question whether highly autonomous AI may qualify as morally responsible agents. In this article, we develop a set of four conditions that an entity needs to meet in order to be ascribed moral responsibility, by drawing on Aristotelian ethics and contemporary philosophical research. We encode (...)
    4 citations
  45. Critiquing the Reasons for Making Artificial Moral Agents.Aimee van Wynsberghe & Scott Robbins - 2019 - Science and Engineering Ethics 25 (3):719-735.
    Many industry leaders and academics from the field of machine ethics would have us believe that the inevitability of robots coming to have a larger role in our lives demands that robots be endowed with moral reasoning capabilities. Robots endowed in this way may be referred to as artificial moral agents. Reasons often given for developing AMAs are: the prevention of harm, the necessity for public trust, the prevention of immoral use, such machines are better moral reasoners than humans, (...)
    45 citations
  46. The Ethical Principles for the Development of Artificial Moral Agent - Focusing on the Top-down Approach -. 최현철, 신현주 & 변순용 - 2016 - Journal of Ethics: The Korean Association of Ethics 1 (111):31-53.
  47. Contraception: Natural, Artificial, Moral.Snježana Prijić-Samaržija - 2011 - Filozofska Istrazivanja 31 (2):277-290.
  48. Artificial moral agents: saviors or destroyers?: Wendell Wallach and Colin Allen: Review of moral machines: teaching robots right from wrong. Oxford University Press, 2009, xi + 275 pp, ISBN 978-0-19-537404-9. [REVIEW]Jeff Buechner - 2010 - Ethics and Information Technology 12 (4):363-370.
  49. Artificial virtue: the machine question and perceptions of moral character in artificial moral agents.Patrick Gamez, Daniel B. Shank, Carson Arnold & Mallory North - 2020 - AI and Society 35 (4):795-809.
    Virtue ethics seems to be a promising moral theory for understanding and interpreting the development and behavior of artificial moral agents. Virtuous artificial agents would blur traditional distinctions between different sorts of moral machines and could make a claim to membership in the moral community. Accordingly, we investigate the “machine question” by studying whether virtue or vice can be attributed to artificial intelligence; that is, are people willing to judge machines as possessing moral character? An experiment describes (...)
    18 citations
  50. A neo-aristotelian perspective on the need for artificial moral agents (AMAs).Alejo José G. Sison & Dulce M. Redín - 2023 - AI and Society 38 (1):47-65.
    We examine Van Wynsberghe and Robbins' (JAMA 25:719-735, 2019) critique of the need for Artificial Moral Agents (AMAs) and its rebuttal by Formosa and Ryan (JAMA 10.1007/s00146-020-01089-6, 2020), set against a neo-Aristotelian ethical background. Neither Van Wynsberghe and Robbins' (JAMA 25:719-735, 2019) essay nor Formosa and Ryan's (JAMA 10.1007/s00146-020-01089-6, 2020) is explicitly framed within the teachings of a specific ethical school. The former appeals to the lack of “both empirical and intuitive support” (Van Wynsberghe and Robbins 2019, p. 721) (...)
    3 citations
1 — 50 / 999