Results for 'Artificial Morality'

999 found
  1. Artificial Morality: Virtuous Robots for Virtual Games. Peter Danielson - 1992 - London: Routledge.
    This book explores the role of artificial intelligence in the development of a claim that morality is person-made and rational. Professor Danielson builds moral robots that do better than amoral competitors in a tournament of games like the Prisoner's Dilemma and Chicken. The book thus engages in current controversies over the adequacy of the received theory of rational choice. It sides with Gauthier and McClennan, who extend the devices of rational choice to include moral constraint. Artificial (...) goes further, by promoting communication, testing and copying of principles and by stressing empirical tests.
    31 citations
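    Danielson's tournament setup can be illustrated with a minimal sketch. The Python below is an assumed reconstruction, not Danielson's code: the payoff values, round count, and strategy names are invented for illustration.

        # Sketch of an iterated Prisoner's Dilemma round-robin in the spirit of
        # Danielson's tournaments; all payoffs and strategies are illustrative.

        PAYOFFS = {  # (my move, their move) -> my score; C = cooperate, D = defect
            ("C", "C"): 3, ("C", "D"): 0,
            ("D", "C"): 5, ("D", "D"): 1,
        }

        def amoral(mine, theirs):
            """The straightforward maximizer: always defect."""
            return "D"

        def conditional_cooperator(mine, theirs):
            """A 'moral' strategy: cooperate first, then mirror the last move."""
            return "C" if not theirs else theirs[-1]

        def play(strat_a, strat_b, rounds=100):
            hist_a, hist_b, score_a, score_b = [], [], 0, 0
            for _ in range(rounds):
                a, b = strat_a(hist_a, hist_b), strat_b(hist_b, hist_a)
                score_a += PAYOFFS[(a, b)]
                score_b += PAYOFFS[(b, a)]
                hist_a.append(a)
                hist_b.append(b)
            return score_a, score_b

        def tournament(strategies):
            """Round-robin over all pairs; returns each strategy's total score."""
            totals = dict.fromkeys(strategies, 0)
            names = list(strategies)
            for i, x in enumerate(names):
                for y in names[i + 1:]:
                    sx, sy = play(strategies[x], strategies[y])
                    totals[x] += sx
                    totals[y] += sy
            return totals

        print(tournament({"amoral": amoral,
                          "cc1": conditional_cooperator,
                          "cc2": conditional_cooperator}))
        # {'amoral': 208, 'cc1': 399, 'cc2': 399}

    The amoral defector wins every pairwise encounter yet finishes last overall, which is the shape of the result Danielson reports for his moral robots.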
  2. Artificial Moral Responsibility: How We Can and Cannot Hold Machines Responsible. Daniel W. Tigard - 2021 - Cambridge Quarterly of Healthcare Ethics 30 (3):435-447.
    Our ability to locate moral responsibility is often thought to be a necessary condition for conducting morally permissible medical practice, engaging in a just war, and other high-stakes endeavors. Yet, with increasing reliance upon artificially intelligent systems, we may be facing a widening responsibility gap, which, some argue, cannot be bridged by traditional concepts of responsibility. How then, if at all, can we make use of crucial emerging technologies? According to Colin Allen and Wendell Wallach, the advent of so-called ‘artificial (...)
    13 citations
  3. Making moral machines: why we need artificial moral agents. Paul Formosa & Malcolm Ryan - forthcoming - AI and Society.
    As robots and Artificial Intelligences become more enmeshed in rich social contexts, it seems inevitable that we will have to make them into moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should seek to create such Artificial Moral Agents (AMAs). Recently, several papers have argued that we have strong reasons not to develop AMAs. In response, we develop a (...)
    11 citations
  4. The Artificial Moral Advisor. The “Ideal Observer” Meets Artificial Intelligence. Alberto Giubilini & Julian Savulescu - 2018 - Philosophy and Technology 31 (2):169-188.
    We describe a form of moral artificial intelligence that could be used to improve human moral decision-making. We call it the “artificial moral advisor”. The AMA would implement a quasi-relativistic version of the “ideal observer” famously described by Roderick Firth. We describe similarities and differences between the AMA and Firth’s ideal observer. Like Firth’s ideal observer, the AMA is disinterested, dispassionate, and consistent in its judgments. Unlike Firth’s observer, the AMA is non-absolutist, because it would take into account (...)
    36 citations
  5. Artificial moral agents are infeasible with foreseeable technologies. Patrick Chisan Hew - 2014 - Ethics and Information Technology 16 (3):197-206.
    For an artificial agent to be morally praiseworthy, its rules for behaviour and the mechanisms for supplying those rules must not be supplied entirely by external humans. Such systems are a substantial departure from current technologies and theory, and are a low prospect. With foreseeable technologies, an artificial agent will carry zero responsibility for its behavior and humans will retain full responsibility.
    15 citations
  6. Virtuous vs. utilitarian artificial moral agents. William A. Bauer - 2020 - AI and Society (1):263-271.
    Given that artificial moral agents—such as autonomous vehicles, lethal autonomous weapons, and automated financial trading systems—are now part of the socio-ethical equation, we should morally evaluate their behavior. How should artificial moral agents make decisions? Is one moral theory better suited than others for machine ethics? After briefly overviewing the dominant ethical approaches for building morality into machines, this paper discusses a recent proposal, put forward by Don Howard and Ioan Muntean (2016, 2017), for an artificial (...)
    14 citations
  7. Artificial moral experts: asking for ethical advice to artificial intelligent assistants. Blanca Rodríguez-López & Jon Rueda - 2023 - AI and Ethics.
    In most domains of human life, we are willing to accept that there are experts with greater knowledge and competencies that distinguish them from non-experts or laypeople. Despite this fact, the very recognition of expertise curiously becomes more controversial in the case of “moral experts”. Do moral experts exist? And, if they indeed do, are there ethical reasons for us to follow their advice? Likewise, can emerging technological developments broaden our very concept of moral expertise? In this article, we begin (...)
  8. Artificial Moral Patients: Mentality, Intentionality, and Systematicity. Howard Nye & Tugba Yoldas - 2021 - International Review of Information Ethics 29:1-10.
    In this paper, we defend three claims about what it will take for an AI system to be a basic moral patient to whom we can owe duties of non-maleficence not to harm her and duties of beneficence to benefit her: (1) Moral patients are mental patients; (2) Mental patients are true intentional systems; and (3) True intentional systems are systematically flexible. We suggest that we should be particularly alert to the possibility of such systematically flexible true intentional systems developing (...)
    1 citation
  9. Artificial Moral Agents Within an Ethos of AI4SG. Bongani Andy Mabaso - 2020 - Philosophy and Technology 34 (1):7-21.
    As artificial intelligence (AI) continues to proliferate into every area of modern life, there is no doubt that society has to think deeply about the potential impact, whether negative or positive, that it will have. Whilst scholars recognise that AI can usher in a new era of personal, social and economic prosperity, they also warn of the potential for it to be misused towards the detriment of society. Deliberate strategies are therefore required to ensure that AI can be safely (...)
    6 citations
  10. Artificial Moral Cognition: Moral Functionalism and Autonomous Moral Agency. Ioan Muntean & Don Howard - 2017 - In Thomas M. Powers (ed.), Philosophy and Computing: Essays in epistemology, philosophy of mind, logic, and ethics. Cham: Springer.
    This paper proposes a model of the Artificial Autonomous Moral Agent (AAMA), discusses a standard of moral cognition for AAMA, and compares it with other models of artificial normative agency. It is argued here that artificial morality is possible within the framework of a “moral dispositional functionalism.” This AAMA is able to “read” the behavior of human actors, available as collected data, and to categorize their moral behavior based on moral patterns herein. The present model is (...)
    1 citation
  11. Norms and Causation in Artificial Morality. Laura Fearnley - forthcoming - Joint Proceedings of ACM IUI:1-4.
    There has been an increasing interest in how to build Artificial Moral Agents (AMAs) that make moral decisions on the basis of causation rather than mere correlation. One promising avenue for achieving this is to use a causal modelling approach. This paper explores an open and important problem with such an approach; namely, the problem of what makes a causal model an appropriate model. I explore why we need to establish criteria for what makes a model appropriate, and offer up (...)
  12. Artificial Moral Agents: Moral Mentors or Sensible Tools? Fabio Fossa - 2018 - Ethics and Information Technology (2):1-12.
    The aim of this paper is to offer an analysis of the notion of artificial moral agent (AMA) and of its impact on human beings’ self-understanding as moral agents. Firstly, I introduce the topic by presenting what I call the Continuity Approach. Its main claim holds that AMAs and human moral agents exhibit no significant qualitative difference and, therefore, should be considered homogeneous entities. Secondly, I focus on the consequences this approach leads to. In order to do this I (...)
    11 citations
  13. Artificial moral agents: an intercultural perspective. Michael Nagenborg - 2007 - International Review of Information Ethics 7 (9):129-133.
    In this paper I will argue that artificial moral agents are a fitting subject of intercultural information ethics because of the impact they may have on the relationship between information rich and information poor countries. I will give a limiting definition of AMAs first, and discuss two different types of AMAs with different implications from an intercultural perspective. While AMAs following preset rules might raise concerns about digital imperialism, AMAs being able to adjust to their user's behavior will lead (...)
    5 citations
  14. A Normative Approach to Artificial Moral Agency. Dorna Behdadi & Christian Munthe - 2020 - Minds and Machines 30 (2):195-218.
    This paper proposes a methodological redirection of the philosophical debate on artificial moral agency in view of increasingly pressing practical needs due to technological development. This “normative approach” suggests abandoning theoretical discussions about what conditions may hold for moral agency and to what extent these may be met by artificial entities such as AI systems and robots. Instead, the debate should focus on how and to what extent such entities should be included in human practices normally assuming moral (...)
    19 citations
  15. Artificial morality: Top-down, bottom-up, and hybrid approaches. [REVIEW] Colin Allen, Iva Smit & Wendell Wallach - 2005 - Ethics and Information Technology 7 (3):149-155.
    A principal goal of the discipline of artificial morality is to design artificial agents to act as if they are moral agents. Intermediate goals of artificial morality are directed at building into AI systems sensitivity to the values, ethics, and legality of activities. The development of an effective foundation for the field of artificial morality involves exploring the technological and philosophical issues involved in making computers into explicit moral reasoners. The goal of this (...)
    64 citations
  16. Artificial Moral Agents: A Survey of the Current Status. [REVIEW] José-Antonio Cervantes, Sonia López, Luis-Felipe Rodríguez, Salvador Cervantes, Francisco Cervantes & Félix Ramos - 2020 - Science and Engineering Ethics 26 (2):501-532.
    One of the objectives in the field of artificial intelligence for some decades has been the development of artificial agents capable of coexisting in harmony with people and other systems. The computing research community has made efforts to design artificial agents capable of doing tasks the way people do, tasks requiring cognitive mechanisms such as planning, decision-making, and learning. The application domains of such software agents are evident nowadays. Humans are experiencing the inclusion of artificial agents (...)
    27 citations
  17. Artificial morality and artificial law. Lothar Philipps - 1993 - Artificial Intelligence and Law 2 (1):51-63.
    The article investigates the interplay of moral rules in computer simulation. The investigation is based on two situations which are well-known to game theory: the prisoner's dilemma and the game of Chicken. The prisoner's dilemma can be taken to represent contractual situations, the game of Chicken represents a competitive situation on the one hand and the provision for a common good on the other. Unlike the rules usually used in game theory, each player knows the other's strategy. In that way, (...)
    1 citation
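    The distinctive assumption in Philipps's simulations, that each player knows the other's strategy, can be sketched by letting strategies take the opponent's strategy as input. A hypothetical Python illustration, not the paper's own formulation (payoffs and strategy names are assumptions):

        # One-shot Prisoner's Dilemma with transparent (mutually known) strategies.
        PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
                   ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

        def cooperator(opponent):
            return "C"

        def defector(opponent):
            return "D"

        def conditional_cooperator(opponent):
            # Transparency lets a player probe how the opponent would treat
            # a cooperator before committing to its own move.
            return "C" if opponent(cooperator) == "C" else "D"

        def transparent_game(p1, p2):
            return PAYOFFS[(p1(p2), p2(p1))]

        print(transparent_game(conditional_cooperator, defector))    # (1, 1)
        print(transparent_game(conditional_cooperator, cooperator))  # (3, 3)
        print(transparent_game(cooperator, defector))                # (0, 5)

    The conditional cooperator avoids exploitation without forgoing mutual cooperation, and this only works because strategies are public, exactly the departure from standard game theory that the abstract describes.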
  18. Artificial moral and legal personhood. John-Stewart Gordon - forthcoming - AI and Society:1-15.
    This paper considers the hotly debated issue of whether one should grant moral and legal personhood to intelligent robots once they have achieved a certain standard of sophistication based on such criteria as rationality, autonomy, and social relations. The starting point for the analysis is the European Parliament’s resolution on Civil Law Rules on Robotics and its recommendation that robots be granted legal status and electronic personhood. The resolution is discussed against the background of the so-called Robotics Open Letter, which (...)
    17 citations
  19. Artificial virtue: the machine question and perceptions of moral character in artificial moral agents. Patrick Gamez, Daniel B. Shank, Carson Arnold & Mallory North - 2020 - AI and Society 35 (4):795-809.
    Virtue ethics seems to be a promising moral theory for understanding and interpreting the development and behavior of artificial moral agents. Virtuous artificial agents would blur traditional distinctions between different sorts of moral machines and could make a claim to membership in the moral community. Accordingly, we investigate the “machine question” by studying whether virtue or vice can be attributed to artificial intelligence; that is, are people willing to judge machines as possessing moral character? An experiment describes (...)
    18 citations
  20. Un-making artificial moral agents. Deborah G. Johnson & Keith W. Miller - 2008 - Ethics and Information Technology 10 (2-3):123-133.
    Floridi and Sanders' seminal work, “On the morality of artificial agents,” has catalyzed attention around the moral status of computer systems that perform tasks for humans, effectively acting as “artificial agents.” Floridi and Sanders argue that the class of entities considered moral agents can be expanded to include computers if we adopt the appropriate level of abstraction. In this paper we argue that the move to distinguish levels of abstraction is far from decisive on this issue. We (...)
    37 citations
  21. ETHICA EX MACHINA. Exploring artificial moral agency or the possibility of computable ethics. Rodrigo Sanz - 2020 - Zeitschrift für Ethik und Moralphilosophie 3 (2):223-239.
    Since the automation revolution of our technological era, diverse machines or robots have gradually begun to reconfigure our lives. With this expansion, it seems that those machines are now faced with a new challenge: more autonomous decision-making involving life or death consequences. This paper explores the philosophical possibility of artificial moral agency through the following question: could a machine obtain the cognitive capacities needed to be a moral agent? In this regard, I propose to expose, under a normative-cognitive perspective, (...)
  22. Philosophical Signposts for Artificial Moral Agent Frameworks. Robert James M. Boyles - 2017 - Suri 6 (2):92–109.
    This article focuses on a particular issue under machine ethics—that is, the nature of Artificial Moral Agents. Machine ethics is a branch of artificial intelligence that looks into the moral status of artificial agents. Artificial moral agents, on the other hand, are artificial autonomous agents that possess moral value, as well as certain rights and responsibilities. This paper demonstrates that attempts to fully develop a theory that could possibly account for the nature of Artificial (...)
    1 citation
  23. Critiquing the Reasons for Making Artificial Moral Agents. Aimee van Wynsberghe & Scott Robbins - 2018 - Science and Engineering Ethics:1-17.
    Many industry leaders and academics from the field of machine ethics would have us believe that the inevitability of robots coming to have a larger role in our lives demands that robots be endowed with moral reasoning capabilities. Robots endowed in this way may be referred to as artificial moral agents. Reasons often given for developing AMAs are: the prevention of harm, the necessity for public trust, the prevention of immoral use, such machines are better moral reasoners than humans, (...)
    45 citations
  24. Artificial Morality: Virtuous Robots for Virtual Games. Peter Danielson - 1992 - Routledge.
    This book explores the role of artificial intelligence in the development of a claim that morality is person-made and rational. Professor Danielson builds moral robots that do better than amoral competitors in a tournament of games like the Prisoner's Dilemma and Chicken. The book thus engages in current controversies over the adequacy of the received theory of rational choice. It sides with Gauthier and McClennan, who extend the devices of rational choice to include moral constraint. Artificial Morality goes (...)
    23 citations
  25. Varieties of Artificial Moral Agency and the New Control Problem. Marcus Arvan - 2022 - Humana.Mente - Journal of Philosophical Studies 15 (42):225-256.
    This paper presents a new trilemma with respect to resolving the control and alignment problems in machine ethics. Section 1 outlines three possible types of artificial moral agents (AMAs): (1) 'Inhuman AMAs' programmed to learn or execute moral rules or principles without understanding them in anything like the way that we do; (2) 'Better-Human AMAs' programmed to learn, execute, and understand moral rules or principles somewhat like we do, but correcting for various sources of human moral error; and (3) (...)
  26. Can’t Bottom-up Artificial Moral Agents Make Moral Judgements? Robert James M. Boyles - 2024 - Filosofija. Sociologija 35 (1).
    This article examines if bottom-up artificial moral agents are capable of making genuine moral judgements, specifically in light of David Hume’s is-ought problem. The latter underscores the notion that evaluative assertions could never be derived from purely factual propositions. Bottom-up technologies, on the other hand, are those designed via evolutionary, developmental, or learning techniques. In this paper, the nature of these systems is looked into with the aim of preliminarily assessing if there are good reasons to suspect that, on (...)
  27. Evolutionary and religious perspectives on morality. Artificial Intelligence - forthcoming - Zygon.
  28. Karol Wojtyla on Artificial Moral Agency and Moral Accountability. Richard A. Spinello - 2011 - The National Catholic Bioethics Quarterly 11 (3):469-491.
    As the notion of artificial moral agency gains popularity among ethicists, it threatens the unique status of the human person as a responsible moral agent. The philosophy of ontocentrism, popularized by Luciano Floridi, argues that biocentrism is too restrictive and must yield to a new philosophical vision that endows all beings with some intrinsic value. Floridi’s macroethics also regards more sophisticated digital entities such as robots as accountable moral agents. To refute these principles, this paper turns to the thought (...)
  29. On the Ethical Limitation of AMA (Artificial Moral Agent). 이향연 - 2021 - Journal of the Daedong Philosophical Association 95:103-118.
    This study examines whether the ethical approaches recently proposed for AI are sound. Such an examination requires addressing both the fundamental question of whether AI can be ethical at all and the methodological question of how an AMA (artificial moral agent) could be implemented technically. It also covers whether an AMA can be recognized as an autonomous entity, together with the ethical approaches to AMAs that remain under discussion. I review the relevant debates and analyze the characteristics and limitations of each. All of these examinations point to the fundamental limits of the AMA (...)
  30. Blame It on the AI? On the Moral Responsibility of Artificial Moral Advisors. Mihaela Constantinescu, Constantin Vică, Radu Uszkai & Cristina Voinea - 2022 - Philosophy and Technology 35 (2):1-26.
    Deep learning AI systems have proven a wide capacity to take over human-related activities such as car driving, medical diagnosing, or elderly care, often displaying behaviour with unpredictable consequences, including negative ones. This has raised the question whether highly autonomous AI may qualify as morally responsible agents. In this article, we develop a set of four conditions that an entity needs to meet in order to be ascribed moral responsibility, by drawing on Aristotelian ethics and contemporary philosophical research. We encode (...)
    5 citations
  31. Prolegomena to any future artificial moral agent. Colin Allen & Gary Varner - 2000 - Journal of Experimental and Theoretical Artificial Intelligence 12 (3):251-261.
    As artificial intelligence moves ever closer to the goal of producing fully autonomous agents, the question of how to design and implement an artificial moral agent (AMA) becomes increasingly pressing. Robots possessing autonomous capacities to do things that are useful to humans will also have the capacity to do things that are harmful to humans and other sentient beings. Theoretical challenges to developing artificial moral agents result both from controversies among ethicists about moral theory itself, and from (...)
    77 citations
  32. Artificial moral agents: creative, autonomous, social. An approach based on evolutionary computation. Ioan Muntean & Don Howard - 2014 - In Johanna Seibt, Raul Hakli & Marco Norskov (eds.), Sociable Robots and the Future of Social Relations: Proceedings of Robo-Philosophy. IOS Press.
  33. Artificial systems with moral capacities? A research design and its implementation in a geriatric care system. Catrin Misselhorn - 2020 - Artificial Intelligence 278 (C):103179.
    The development of increasingly intelligent and autonomous technologies will eventually lead to these systems having to face morally problematic situations. This gave rise to the development of artificial morality, an emerging field in artificial intelligence which explores whether and how artificial systems can be furnished with moral capacities. This will have a deep impact on our lives. Yet, the methodological foundations of artificial morality are still sketchy and often far off from possible applications. One (...)
    6 citations
  34. A Theological Account of Artificial Moral Agency. Ximian Xu - 2023 - Studies in Christian Ethics 36 (3):642-659.
    This article seeks to explore the idea of artificial moral agency from a theological perspective. By drawing on the Reformed theology of archetype-ectype, it will demonstrate that computational artefacts are the ectype of human moral agents and, consequently, have a partial moral agency. In this light, human moral agents mediate and extend their moral values through computational artefacts, which are ontologically connected with humans and only related to limited particular moral issues. This moral leitmotif opens up a way to (...)
  35. Critiquing the Reasons for Making Artificial Moral Agents. Aimee van Wynsberghe & Scott Robbins - 2019 - Science and Engineering Ethics 25 (3):719-735.
    Many industry leaders and academics from the field of machine ethics would have us believe that the inevitability of robots coming to have a larger role in our lives demands that robots be endowed with moral reasoning capabilities. Robots endowed in this way may be referred to as artificial moral agents. Reasons often given for developing AMAs are: the prevention of harm, the necessity for public trust, the prevention of immoral use, such machines are better moral reasoners than humans, (...)
    47 citations
  36. Stress, Coping, and Resilience Before and After COVID-19: A Predictive Model Based on Artificial Intelligence in the University Environment. Francisco Manuel Morales-Rodríguez, Juan Pedro Martínez-Ramón, Inmaculada Méndez & Cecilia Ruiz-Esteban - 2021 - Frontiers in Psychology 12.
    The COVID-19 global health emergency has greatly impacted the educational field. Faced with unprecedented stress situations, professors, students, and families have employed various coping and resilience strategies throughout the confinement period. High and persistent stress levels are associated with other pathologies; hence, their detection and prevention are needed. Consequently, this study aimed to design a predictive model of stress in the educational field based on artificial intelligence that included certain sociodemographic variables, coping strategies, and resilience capacity, and to study (...)
    2 citations
  37. A neo-Aristotelian perspective on the need for artificial moral agents (AMAs). Alejo José G. Sison & Dulce M. Redín - 2023 - AI and Society 38 (1):47-65.
    We examine Van Wynsberghe and Robbins' (JAMA 25:719-735, 2019) critique of the need for Artificial Moral Agents (AMAs) and its rebuttal by Formosa and Ryan (JAMA 10.1007/s00146-020-01089-6, 2020), set against a neo-Aristotelian ethical background. Neither Van Wynsberghe and Robbins' (JAMA 25:719-735, 2019) essay nor Formosa and Ryan's (JAMA 10.1007/s00146-020-01089-6, 2020) is explicitly framed within the teachings of a specific ethical school. The former appeals to the lack of “both empirical and intuitive support” (Van Wynsberghe and Robbins 2019, p. 721) (...)
    3 citations
  38. Extending the Is-ought Problem to Top-down Artificial Moral Agents. Robert James M. Boyles - 2022 - Symposion: Theoretical and Applied Inquiries in Philosophy and Social Sciences 9 (2):171–189.
    This paper further cashes out the notion that particular types of intelligent systems are susceptible to the is-ought problem, which espouses the thesis that no evaluative conclusions may be inferred from factual premises alone. Specifically, it focuses on top-down artificial moral agents, providing ancillary support to the view that these kinds of artifacts are not capable of producing genuine moral judgements. Such is the case given that machines built via the classical programming approach are always composed of two parts, (...)
  39. An Experimental Discussion on the Possibility of Artificial Moral Agent in Neo-Confucianism - From the Point of View of Yulgok -. 최복희 - 2023 - Journal of the New Korean Philosophical Association 113:317-337.
    Although only at a tentative stage, this paper aims to examine whether Neo-Confucian concepts of the mind can be described formally and cast in a form that a computer can handle. Building on discussions of how the moral mind operates, I consider whether a simulation for implementing a Neo-Confucian moral agent is possible. When Neo-Confucianism explains the concept of mind, the conceptualization differs depending on whether one attends to the innate moral capacity that is the mind's substance or to whether the mind as it actually operates is moral. Taking Yulgok's perspective, which is closer to the latter, to be comparatively empiricist, I use it as my example, on the thought that his attempt to explain the mind as plainly as a concrete thing, like the body, may make it easier to enter the debate over artificial moral agents (AMAs). Yulgok explained the operation of the mind as the 'self-activating pattern' [機自爾] of gi (氣) and proposed methods of self-cultivation that bring this pattern to operate on moral principles. I therefore first analyze how the operation of gi that he called the 'self-activating pattern' unfolds in the mind, and then examine whether the mind's moral operation, that is, the prior manifestation of moral principle, can be formalized. Further, judging in the course of this examination that unconscious morality, one point of contention in the AMA debate, cannot be accepted at the level of self-cultivation (sugi, 修己) but is possible at the level of governing others (chiin, 治人), I examine Yulgok's theory of political reform in order to discuss a chiin system as an impersonal model. I hope this preliminary essay will spark a full-fledged debate on the possibility of implementing a Neo-Confucian artificial moral agent.
  40. Artificial moral agents: saviors or destroyers?: Wendell Wallach and Colin Allen: Review of moral machines: teaching robots right from wrong. Oxford University Press, 2009, xi + 275 pp, ISBN 978-0-19-537404-9. [REVIEW] Jeff Buechner - 2010 - Ethics and Information Technology 12 (4):363-370.
  41. Ties without Tethers. Artificial Heart Trial - 2007 - In Lisa A. Eckenwiler & Felicia Cohn (eds.), The ethics of bioethics: mapping the moral landscape. Baltimore: Johns Hopkins University Press.
  42. The Ethical Principles for the Development of Artificial Moral Agent - Focusing on the Top-down Approach -. 최현철, 변순용 & 신현주 - 2016 - Journal of Ethics: The Korean Association of Ethics 1 (111):31-53.
    Robots that make moral decisions on their own are called 'artificial moral agents' (AMAs). Current approaches to providing an ethics for artificial moral agents fall into three broad types: the top-down approach, grounded in traditional utilitarian or deontological ethical theory; the bottom-up approach, which follows the methods of Kohlberg or Turing; and the hybrid approach, which seeks to combine the two. The top-down approach to designing an artificial moral agent selects a specific ethical theory and then derives a computational algorithm and system design capable of implementing it. The specific ethical theory selected here, where moral intuitions are uncertain (...)
    1 citation
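    What the top-down recipe this abstract describes amounts to in code can be sketched briefly. The Python below is a hypothetical illustration (rules, actions, and utility numbers are invented); it layers a deontological constraint under a utilitarian scorer purely for illustration, whereas a strict top-down design would encode one chosen theory.

        # Top-down AMA sketch: explicit ethical theory in, decision procedure out.
        from dataclasses import dataclass

        @dataclass
        class Action:
            name: str
            harms_person: bool       # feature tested by a deontological rule
            expected_welfare: float  # estimate used by a utilitarian scorer

        def permissible(action):
            """Deontological constraint: some actions are ruled out outright."""
            return not action.harms_person

        def decide(actions):
            """Among permissible actions, maximize expected welfare."""
            candidates = [a for a in actions if permissible(a)]
            if not candidates:
                raise ValueError("no permissible action available")
            return max(candidates, key=lambda a: a.expected_welfare)

        options = [Action("swerve", True, 9.0),
                   Action("brake", False, 6.0),
                   Action("continue", False, 2.0)]
        print(decide(options).name)  # brake

    The bottom-up alternative mentioned in the abstract would instead learn such a policy from examples rather than deriving it from explicitly stated rules.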
  43. Peter Danielson, Artificial Morality: Virtuous Robots for Virtual Games. Reviewed by Paul Viminitz - 1993 - Philosophy in Review 13 (5):223-225.
  44. A Prospective Framework for the Design of Ideal Artificial Moral Agents: Insights from the Science of Heroism in Humans. Travis J. Wiltshire - 2015 - Minds and Machines 25 (1):57-71.
    The growing field of machine morality has become increasingly concerned with how to develop artificial moral agents. However, there is little consensus on what constitutes an ideal moral agent, let alone an artificial one. Leveraging a recent account of heroism in humans, the aim of this paper is to provide a prospective framework for conceptualizing, and in turn designing, ideal artificial moral agents, namely those that would be considered heroic robots. First, an overview of what it (...)
    7 citations
  45. How computers extend artificial morality. Peter Danielson - 1998 - In Terrell Ward Bynum & James Moor (eds.), The Digital Phoenix: How Computers are Changing Philosophy. Cambridge: Blackwell.
  46. Do androids dream of normative endorsement? On the fallibility of artificial moral agents. Frodo Podschwadek - 2017 - Artificial Intelligence and Law 25 (3):325-339.
    The more autonomous future artificial agents will become, the more important it seems to equip them with a capacity for moral reasoning and to make them autonomous moral agents. Some authors have even claimed that one of the aims of AI development should be to build morally praiseworthy agents. From the perspective of moral philosophy, praiseworthy moral agents, in any meaningful sense of the term, must be fully autonomous moral agents who endorse moral rules as action-guiding. They need to (...)
    4 citations
  47. Autonomous Reboot: the challenges of artificial moral agency and the ends of Machine Ethics. Jeffrey White - manuscript
    Ryan Tonkens (2009) has issued a seemingly impossible challenge, to articulate a comprehensive ethical framework within which artificial moral agents (AMAs) satisfy a Kantian-inspired recipe - both "rational" and "free" - while also satisfying perceived prerogatives of Machine Ethics to create AMAs that are perfectly, not merely reliably, ethical. Challenges for machine ethicists have also been presented by Anthony Beavers and Wendell Wallach, who have pushed for the reinvention of traditional ethics in order to avoid "ethical nihilism" due (...)
  48. Three Risks That Caution Against a Premature Implementation of Artificial Moral Agents for Practical and Economical Use. Christian Herzog - 2021 - Science and Engineering Ethics 27 (1):1-15.
    In the present article, I will advocate caution against developing artificial moral agents based on the notion that the utilization of preliminary forms of AMAs will potentially negatively feed back on the human social system and on human moral thought itself and its value—e.g., by reinforcing social inequalities, diminishing the breadth of employed ethical arguments and the value of character. While scientific investigations into AMAs pose no direct significant threat, I will argue against their premature utilization for practical and (...)
    1 citation
  49. Contraception: Natural, Artificial, Moral. Snježana Prijić-Samaržija - 2011 - Filozofska Istrazivanja 31 (2):277-290.
  50. Moral sensitivity and the limits of artificial moral agents. Joris Graff - 2024 - Ethics and Information Technology 26 (1):1-12.
    Machine ethics is the field that strives to develop ‘artificial moral agents’ (AMAs), artificial systems that can autonomously make moral decisions. Some authors have questioned the feasibility of machine ethics, by questioning whether artificial systems can possess moral competence, or the capacity to reach morally right decisions in various situations. This paper explores this question by drawing on the work of several moral philosophers (McDowell, Wiggins, Hampshire, and Nussbaum) who have characterised moral competence in a manner inspired (...)
1 — 50 / 999