Results for 'Autonomous Moral Agents'

979 found
  1. Democracy and the autonomous moral agent. Keith Graham - 1982 - In Contemporary political philosophy: radical studies. New York: Cambridge University Press.
  2. Can Artificial Intelligence be an Autonomous Moral Agent? 신상규 - 2017 - Cheolhak-Korean Journal of Philosophy 132:265-292.
    The concept of a "moral agent" has traditionally been applied only to personal beings possessing the free will needed to take responsibility for their own actions. The emergence of autonomous AI capable of performing a variety of actions with moral implications, however, seems to demand a revision of this concept of agency. In this paper I argue that AI satisfying certain conditions can be granted the status of a moral agent in a functional sense that does not presuppose personhood, and that, to that extent, a responsibility or accountability befitting that agency can also be ascribed to AI. To support this claim, the paper considers a range of anticipated objections (...)
  3. A minimalist model of the artificial autonomous moral agent (AAMA). Ioan Muntean & Don Howard - 2016 - In SSS-16 Symposium Technical Reports. Association for the Advancement of Artificial Intelligence. AAAI.
    This paper proposes a model for an artificial autonomous moral agent (AAMA), which is parsimonious in its ontology and minimal in its ethical assumptions. Starting from a set of moral data, this AAMA is able to learn and develop a form of moral competency. It resembles an “optimizing predictive mind,” which uses moral data (describing typical behavior of humans) and a set of dispositional traits to learn how to classify different actions (given a given background (...)
     
  4. Information, Ethics, and Computers: The Problem of Autonomous Moral Agents. [REVIEW] Bernd Carsten Stahl - 2004 - Minds and Machines 14 (1):67-83.
    In modern technical societies computers interact with human beings in ways that can affect moral rights and obligations. This has given rise to the question whether computers can act as autonomous moral agents. The answer to this question depends on many explicit and implicit definitions that touch on different philosophical areas such as anthropology and metaphysics. The approach chosen in this paper centres on the concept of information. Information is a multi-facetted notion which is hard to (...)
  5. Questioning the idea of the individual as an autonomous moral agent. C. A. Bowers - 2012 - Journal of Moral Education 41 (3):301-310.
    This paper examines ways in which current moral values are influenced by earlier patterns of thinking carried forward in root metaphors whose meanings were often framed by the analogues settled upon in the past by thinkers who were influenced by the silences and prejudices of their culture. It is argued that such tacitly inherited metaphors reproduce the myth of the individual as a moral agent and that this both is ecologically unsustainable and undermines other important ways of understanding (...)
  6. Porous or Contextualized Autonomy? Knowledge Can Empower Autonomous Moral Agents. Eric Racine & Veljko Dubljević - 2016 - American Journal of Bioethics 16 (2):48-50.
  7. Information, ethics, and computers: the problem of autonomous moral agents. Bernd Carsten Stahl - 2004 - Minds and Machines 14:67-83.
     
  8. Artificial moral agents: creative, autonomous, social. An approach based on evolutionary computation. Ioan Muntean & Don Howard - 2014 - In Johanna Seibt, Raul Hakli & Marco Nørskov (eds.), Frontiers in Artificial Intelligence and Applications.
  9. Trusting autonomous vehicles as moral agents improves related policy support. Kristin F. Hurst & Nicole D. Sintov - 2022 - Frontiers in Psychology 13.
    Compared to human-operated vehicles, autonomous vehicles offer numerous potential benefits. However, public acceptance of AVs remains low. Using 4 studies, including 1 preregistered experiment, the present research examines the role of trust in AV adoption decisions. Using the Trust-Confidence-Cooperation model as a conceptual framework, we evaluate whether perceived integrity of technology—a previously underexplored dimension of trust that refers to perceptions of the moral agency of a given technology—influences AV policy support and adoption intent. We find that perceived technology (...)
  10. Virtuous vs. utilitarian artificial moral agents. William A. Bauer - 2020 - AI and Society (1):263-271.
    Given that artificial moral agents—such as autonomous vehicles, lethal autonomous weapons, and automated financial trading systems—are now part of the socio-ethical equation, we should morally evaluate their behavior. How should artificial moral agents make decisions? Is one moral theory better suited than others for machine ethics? After briefly overviewing the dominant ethical approaches for building morality into machines, this paper discusses a recent proposal, put forward by Don Howard and Ioan Muntean (2016, 2017), (...)
  11. Moral Agents or Mindless Machines? A Critical Appraisal of Agency in Artificial Systems. Fabio Tollon - 2019 - Hungarian Philosophical Review 4 (63):9-23.
    In this paper I provide an exposition and critique of Johnson and Noorman’s (2014) three conceptualizations of the agential roles artificial systems can play. I argue that two of these conceptions are unproblematic: that of causally efficacious agency and “acting for” or surrogate agency. Their third conception, that of “autonomous agency,” however, is one I have reservations about. The authors point out that there are two ways in which the term “autonomy” can be used: there is, firstly, the engineering (...)
  12. Can Groups Be Autonomous Rational Agents? A Challenge to the List-Pettit Theory. Vuko Andrić - 2014 - In Anita Konzelmann Ziv & Hans Bernhard Schmid (eds.), Institutions, Emotions, and Group Agents - Contributions to Social Ontology. Springer. pp. 343-353.
    Christian List and Philip Pettit argue that some groups qualify as rational agents over and above their members. Examples include churches, commercial corporations, and political parties. According to the theory developed by List and Pettit, these groups qualify as agents because they have beliefs and desires and the capacity to process them and to act on their basis. Moreover, the alleged group agents are said to be rational to a high degree and even to be fit to (...)
  13. Philosophical Signposts for Artificial Moral Agent Frameworks. Robert James M. Boyles - 2017 - Suri 6 (2):92-109.
    This article focuses on a particular issue under machine ethics—that is, the nature of Artificial Moral Agents. Machine ethics is a branch of artificial intelligence that looks into the moral status of artificial agents. Artificial moral agents, on the other hand, are artificial autonomous agents that possess moral value, as well as certain rights and responsibilities. This paper demonstrates that attempts to fully develop a theory that could possibly account for the (...)
  14. Un-making artificial moral agents. Deborah G. Johnson & Keith W. Miller - 2008 - Ethics and Information Technology 10 (2-3):123-133.
    Floridi and Sanders' seminal work, “On the morality of artificial agents”, has catalyzed attention around the moral status of computer systems that perform tasks for humans, effectively acting as “artificial agents.” Floridi and Sanders argue that the class of entities considered moral agents can be expanded to include computers if we adopt the appropriate level of abstraction. In this paper we argue that the move to distinguish levels of abstraction is far from decisive on this (...)
  15. Moral zombies: why algorithms are not moral agents. Carissa Véliz - 2021 - AI and Society 36 (2):487-497.
    In philosophy of mind, zombies are imaginary creatures that are exact physical duplicates of conscious subjects but for whom there is no first-personal experience. Zombies are meant to show that physicalism—the theory that the universe is made up entirely out of physical components—is false. In this paper, I apply the zombie thought experiment to the realm of morality to assess whether moral agency is something independent from sentience. Algorithms, I argue, are a kind of functional moral zombie, such (...)
  16. What is it like to encounter an autonomous artificial agent? Karsten Weber - 2013 - AI and Society 28 (4):483-489.
    Following up on Thomas Nagel’s paper “What is it like to be a bat?” and Alan Turing’s essay “Computing machinery and intelligence,” it shall be claimed that a successful interaction of human beings and autonomous artificial agents depends more on which characteristics human beings ascribe to the agent than on whether the agent really has those characteristics. It will be argued that Masahiro Mori’s concept of the “uncanny valley” as well as evidence from several empirical studies supports that (...)
  17. Do androids dream of normative endorsement? On the fallibility of artificial moral agents. Frodo Podschwadek - 2017 - Artificial Intelligence and Law 25 (3):325-339.
    The more autonomous future artificial agents will become, the more important it seems to equip them with a capacity for moral reasoning and to make them autonomous moral agents. Some authors have even claimed that one of the aims of AI development should be to build morally praiseworthy agents. From the perspective of moral philosophy, praiseworthy moral agents, in any meaningful sense of the term, must be fully autonomous agents who endorse moral rules as action-guiding. They need to do so because they assign a normative value to moral rules they follow, not because they fear external consequences or because moral behaviour is hardwired into them. Artificial agents capable of endorsing moral rule systems in this way are certainly conceivable. However, as this article argues, full moral autonomy also implies the option of deliberately acting immorally. Therefore, the reasons for a potential AMA to act immorally would not exhaust themselves in errors to identify the morally correct action in a given situation. Rather, the failure to act morally could be induced by reflection about the incompleteness and incoherence of moral rule systems themselves, and a resulting lack of endorsement of moral rules as action-guiding. An AMA questioning the moral framework it is supposed to act upon would fail to reliably act in accordance with moral standards.
  18. Possibility of AI as Moral Agent, Easy and Difficult Problem. 이상욱 - 2019 - Journal of the Society of Philosophical Studies 125:259-279.
    This paper divides the question of whether artificial intelligence can be regarded as a moral agent into an "easy problem" and a "hard problem" and seeks a solution to each. To bring out the character of the two problems, the case of autonomous vehicles is analyzed. The paper then uses the birth and subsequent transformation of the concept of the juridical person to show that extending existing ethical concepts is central to solving the "easy problem," and it argues that the solution to the "hard problem" will ultimately emerge constructively as solutions to the "easy problem" accumulate. Finally, it introduces the objection that, because the two problem-solving processes are essentially interconnected, the kind of solution proposed here is impossible, and it shows that this objection (...)
  19. Making moral machines: why we need artificial moral agents. Paul Formosa & Malcolm Ryan - forthcoming - AI and Society.
    As robots and Artificial Intelligences become more enmeshed in rich social contexts, it seems inevitable that we will have to make them into moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should seek to create such Artificial Moral Agents (AMAs). Recently, several papers have argued that we have strong reasons not to develop AMAs. In response, we (...)
  20. Prolegomena to any future artificial moral agent. Colin Allen & Gary Varner - 2000 - Journal of Experimental and Theoretical Artificial Intelligence 12 (3):251-261.
    As artificial intelligence moves ever closer to the goal of producing fully autonomous agents, the question of how to design and implement an artificial moral agent (AMA) becomes increasingly pressing. Robots possessing autonomous capacities to do things that are useful to humans will also have the capacity to do things that are harmful to humans and other sentient beings. Theoretical challenges to developing artificial moral agents result both from controversies among ethicists (...)
     
  21. Can Autonomous Agents Without Phenomenal Consciousness Be Morally Responsible? László Bernáth - 2021 - Philosophy and Technology 34 (4):1363-1382.
    It is an increasingly popular view among philosophers that moral responsibility can, in principle, be attributed to unconscious autonomous agents. This trend is already remarkable in itself, but it is even more interesting that most proponents of this view provide more or less the same argument to support their position. I argue that as it stands, the Extension Argument, as I call it, is not sufficient to establish the thesis that unconscious autonomous agents can be (...)
  22. When is a robot a moral agent? John P. Sullins - 2006 - International Review of Information Ethics 6 (12):23-30.
    In this paper Sullins argues that in certain circumstances robots can be seen as real moral agents. A distinction is made between persons and moral agents such that, it is not necessary for a robot to have personhood in order to be a moral agent. I detail three requirements for a robot to be seen as a moral agent. The first is achieved when the robot is significantly autonomous from any programmers or operators (...)
  23. Business Firms as Moral Agents: A Kantian Response to the Corporate Autonomy Problem. William Rehg - 2023 - Journal of Business Ethics 183 (4):999-1009.
    The idea that business firms qualify as group moral agents offers an attractive basis for understanding corporate moral responsibility. However, that idea gives rise to the “corporate autonomy problem” (CAP): if firms are moral agents, then it seems we must accept the implausible conclusion that firms have basic moral rights, such as the rights to life and liberty. The question, then, is how one might retain the fruitful idea of firms as moral agents, yet avoid CAP. A common approach to avoiding CAP appeals to specific features of human embodiment, such as vulnerability to pain, as the basis for attributing moral rights to human persons but not to firms. But that response has less purchase in a Kantian framework, which does not ground moral status in such particularities of human embodiment, but rather in the rational nature that humans share with other rational beings. To avoid CAP while retaining a (broadly) Kantian framework, one does better to rely on features of firms as cooperative, compositionally derivative moral agents, created for the pursuit of specific ends. As derivative agents, firms do not qualify as Kantian ends in themselves, and thus are not appropriate bearers of basic moral rights. To further clarify the level of consideration we owe to firms, I draw on Darwall’s distinction between recognition respect and moral esteem, arguing that we should not respect firms as unconditional ends in themselves, but rather esteem morally autonomous firms as collective achievements of their human members.
  24. Artificial Moral Cognition: Moral Functionalism and Autonomous Moral Agency. Ioan Muntean & Don Howard - 2017 - In Thomas Powers (ed.), Philosophy and Computing: Essays in Epistemology, Philosophy of Mind, Logic, and Ethics. Springer.
    This paper proposes a model of the Artificial Autonomous Moral Agent (AAMA), discusses a standard of moral cognition for AAMA, and compares it with other models of artificial normative agency. It is argued here that artificial morality is possible within the framework of a “moral dispositional functionalism.” This AAMA is able to “read” the behavior of human actors, available as collected data, and to categorize their moral behavior based on moral patterns herein. The present (...)
     
  25. Do Others Mind? Moral Agents Without Mental States. Fabio Tollon - 2021 - South African Journal of Philosophy 40 (2):182-194.
    As technology advances and artificial agents (AAs) become increasingly autonomous, start to embody morally relevant values and act on those values, there arises the issue of whether these entities should be considered artificial moral agents (AMAs). There are two main ways in which one could argue for AMA: using intentional criteria or using functional criteria. In this article, I provide an exposition and critique of “intentional” accounts of AMA. These accounts claim that moral agency should (...)
  26. Some Technical Challenges in Designing an Artificial Moral Agent. Jarek Gryz - 2020 - In Artificial Intelligence and Soft Computing. ICAISC 2020. Lecture Notes in Computer Science, vol 12416. Springer. pp. 481-491.
    Autonomous agents (robots) are no longer a subject of science fiction novels. Self-driving cars, for example, may be on our roads within a few years. These machines will necessarily interact with humans, and in these interactions they must take into account the moral outcomes of their actions. Yet we are nowhere near designing a machine capable of autonomous moral reasoning. In some sense, this is understandable as commonsense reasoning turns out to be very hard to formalize. (...)
  27. A neo-Aristotelian perspective on the need for artificial moral agents (AMAs). Alejo José G. Sison & Dulce M. Redín - 2023 - AI and Society 38 (1):47-65.
    We examine Van Wynsberghe and Robbins' (JAMA 25:719-735, 2019) critique of the need for Artificial Moral Agents (AMAs) and its rebuttal by Formosa and Ryan (JAMA 10.1007/s00146-020-01089-6, 2020), set against a neo-Aristotelian ethical background. Neither Van Wynsberghe and Robbins' (JAMA 25:719-735, 2019) essay nor Formosa and Ryan's (JAMA 10.1007/s00146-020-01089-6, 2020) is explicitly framed within the teachings of a specific ethical school. The former appeals to the lack of “both empirical and intuitive support” (Van Wynsberghe and Robbins 2019, p. (...)
  28. Machine and human agents in moral dilemmas: automation–autonomic and EEG effect. Federico Cassioli, Laura Angioletti & Michela Balconi - forthcoming - AI and Society:1-13.
    Automation is inherently tied to ethical challenges because of its potential involvement in morally loaded decisions. In the present research, participants (n = 34) took part in a moral multi-trial dilemma-based task where the agent (human vs. machine) and the behavior (action vs. inaction) factors were randomized. Self-report measures, in terms of morality, consciousness, responsibility, intentionality, and emotional impact evaluation were gathered, together with electroencephalography (delta, theta, beta, upper and lower alpha, and gamma powers) and peripheral autonomic (electrodermal activity, (...)
  29. Moral sensitivity and the limits of artificial moral agents. Joris Graff - 2024 - Ethics and Information Technology 26 (1):1-12.
    Machine ethics is the field that strives to develop ‘artificial moral agents’ (AMAs), artificial systems that can autonomously make moral decisions. Some authors have questioned the feasibility of machine ethics, by questioning whether artificial systems can possess moral competence, or the capacity to reach morally right decisions in various situations. This paper explores this question by drawing on the work of several moral philosophers (McDowell, Wiggins, Hampshire, and Nussbaum) who have characterised moral competence in (...)
  30. Autonomous Weapons Systems and the Moral Equality of Combatants. Michael Skerker, Duncan Purves & Ryan Jenkins - 2020 - Ethics and Information Technology 22 (3):197-209.
    To many, the idea of autonomous weapons systems (AWS) killing human beings is grotesque. Yet critics have had difficulty explaining why it should make a significant moral difference if a human combatant is killed by an AWS as opposed to being killed by a human combatant. The purpose of this paper is to explore the roots of various deontological concerns with AWS and to consider whether these concerns are distinct from any concerns that also apply to long-distance, (...)
  31. How Autonomous Are Collective Agents? Corporate Rights and Normative Individualism. Frank Hindriks - 2014 - Erkenntnis 79 (S9):1565-1585.
    Corporate responsibility requires a conception of collective agency on which collective agents are able to form moral judgments and act on them. In spite of claims to the contrary, existing accounts of collective agency fall short of this kind of corporate autonomy, as they fail to explain how collective agents might be responsive to moral reasons. I discuss how a recently proposed conception of shared valuing can be used for developing a solution to this problem. Although (...)
  32. Allowing autonomous agents freedom. A. J. Cronin - 2008 - Journal of Medical Ethics 34 (3):129-132.
    Living-donor kidney transplantation is the “gold standard” treatment for many individuals with end-stage renal failure. Superior outcomes for the graft and the transplant recipient have prompted the implementation of new strategies promoting living-donor kidney transplantation, and the number of such transplants has increased considerably over recent years. Living donors are undoubtedly exposed to risk. In his editorial “underestimating the risk in living kidney donation”, Walter Glannon suggests that more data on long-term outcomes for living donors are needed to determine whether (...)
  33. Modelling consciousness-dependent expertise in machine medical moral agents. Steve Torrance & Ron Chrisley - unknown
    It is suggested that some limitations of current designs for medical AI systems stem from the failure of those designs to address issues of artificial consciousness. Consciousness would appear to play a key role in the expertise, particularly the moral expertise, of human medical agents, including, for example, autonomous weighting of options in diagnosis; planning treatment; use of imaginative creativity to generate courses of action; sensorimotor flexibility and sensitivity; empathetic and morally appropriate responsiveness; and so on. Thus, (...)
  34. Accepting Moral Responsibility for the Actions of Autonomous Weapons Systems—a Moral Gambit. Mariarosaria Taddeo & Alexander Blanchard - 2022 - Philosophy and Technology 35 (3):1-24.
    In this article, we focus on the attribution of moral responsibility for the actions of autonomous weapons systems (AWS). To do so, we suggest that the responsibility gap can be closed if human agents can take meaningful moral responsibility for the actions of AWS. This is a moral responsibility attributed to individuals in a justified and fair way and which is accepted by individuals as an assessment of their own moral character. We argue that, (...)
  35. Autonomous Weapons and the Nature of Law and Morality: How Rule-of-Law-Values Require Automation of the Rule of Law. Duncan MacIntosh - 2016 - Temple International and Comparative Law Journal 30 (1):99-117.
    While Autonomous Weapons Systems have obvious military advantages, there are prima facie moral objections to using them. By way of general reply to these objections, I point out similarities between the structure of law and morality on the one hand and of automata on the other. I argue that these, plus the fact that automata can be designed to lack the biases and other failings of humans, require us to automate the formulation, administration, and enforcement of law as (...)
  36. Autonomous weapons systems and the moral equality of combatants. Michael Skerker, Duncan Purves & Ryan Jenkins - 2020 - Ethics and Information Technology 22 (3):197-209.
    To many, the idea of autonomous weapons systems (AWS) killing human beings is grotesque. Yet critics have had difficulty explaining why it should make a significant moral difference if a human combatant is killed by an AWS as opposed to being killed by a human combatant. The purpose of this paper is to explore the roots of various deontological concerns with AWS and to consider whether these concerns are distinct from any concerns that also apply to long-distance, human-guided (...)
  37. Autonomous Agents: From Self-Control to Autonomy. Michael McKenna - 2002 - Philosophical Review 111 (4):612.
    Alfred Mele’s Autonomous Agents offers a penetrating treatment of autonomy. Understood as an actual condition of self-rule, autonomy is nested within the range of freedom concepts often associated with discussions of moral responsibility. In part 1 of his two-part Autonomous Agents, Mele attempts to capture autonomy by exploring the upper reaches of self-control, where self-control is understood as the opposite of akrasia, that is, weakness of will. It is Mele’s contention that even an optimally self-controlled (...)
  38. Moral judgment in realistic traffic scenarios: moving beyond the trolley paradigm for ethics of autonomous vehicles. Dario Cecchini, Sean Brantley & Veljko Dubljević - forthcoming - AI and Society:1-12.
    The imminent deployment of autonomous vehicles requires algorithms capable of making moral decisions in relevant traffic situations. Some scholars in the ethics of autonomous vehicles hope to align such intelligent systems with human moral judgment. For this purpose, studies like the Moral Machine Experiment have collected data about human decision-making in trolley-like traffic dilemmas. This paper first argues that the trolley dilemma is an inadequate experimental paradigm for investigating traffic moral judgments because it does (...)
  39. The Moral Agency of Group Agents. Christopher Thompson - 2018 - Erkenntnis 83 (3):517-538.
    Christian List and Philip Pettit have recently developed a model of group agency on which an autonomous group agent can be formed, by deductive inference, from the beliefs and preferences of the individual group members. In this paper I raise doubts as to whether this type of group agent is a moral agent. The sentimentalist approach to moral responsibility sees a constitutive role for moral emotions, such as blame, guilt, and indignation, in our practices of attributing (...)
  40. Autonomous Reboot: the challenges of artificial moral agency and the ends of Machine Ethics. Jeffrey White - manuscript
    Ryan Tonkens (2009) has issued a seemingly impossible challenge, to articulate a comprehensive ethical framework within which artificial moral agents (AMAs) satisfy a Kantian inspired recipe - both "rational" and "free" - while also satisfying perceived prerogatives of Machine Ethics to create AMAs that are perfectly, not merely reliably, ethical. Challenges for machine ethicists have also been presented by Anthony Beavers and Wendell Wallach, who have pushed for the reinvention of traditional ethics in order to avoid "ethical nihilism" (...)
  41. Fire and Forget: A Moral Defense of the Use of Autonomous Weapons in War and Peace. Duncan MacIntosh - 2021 - In Jai Galliott, Duncan MacIntosh & Jens David Ohlin (eds.), Lethal Autonomous Weapons: Re-Examining the Law and Ethics of Robotic Warfare. Oxford University Press. pp. 9-23.
    Autonomous and automatic weapons would be fire and forget: you activate them, and they decide who, when and how to kill; or they kill at a later time a target you’ve selected earlier. Some argue that this sort of killing is always wrong. If killing is to be done, it should be done only under direct human control. (E.g., Mary Ellen O’Connell, Peter Asaro, Christof Heyns.) I argue that there are surprisingly many kinds of situation where this is false (...)
  42. Mental mechanisms, autonomous systems, and moral agency. William Bechtel & A. Abrahamsen - manuscript
    Mechanistic explanations of cognitive activities are ubiquitous in cognitive science. Humanist critics often object that mechanistic accounts of the mind are incapable of accounting for the moral agency exhibited by humans. We counter this objection by offering a sketch of how the mechanistic perspective can accommodate moral agency. We ground our argument in the requirement that biological systems be active in order to maintain themselves in nonequilibrium conditions. We discuss such consequences as a role for mental mechanisms in (...)
     
  43. Emergent Agent Causation. Juan Morales - 2023 - Synthese 201:138.
    In this paper I argue that many scholars involved in the contemporary free will debates have underappreciated the philosophical appeal of agent causation because the resources of contemporary emergentism have not been adequately introduced into the discussion. Whereas I agree that agent causation’s main problem has to do with its intelligibility, particularly with respect to the issue of how substances can be causally relevant, I argue that the notion of substance causation can be clearly articulated from an emergentist framework. According (...)
  44. Addictive agents and intracranial stimulation: Morphine, naloxone, and pressing for amygdaloid ICS.Sara E. Cruz-Morales & Larry D. Reid - 1980 - Bulletin of the Psychonomic Society 16 (3):199-200.
  45. Making us Autonomous: The Enactive Normativity of Morality.Cassandra Pescador Canales & Laura Mojica - 2022 - Topoi 41 (2):257-274.
    Any complete account of morality should be able to account for its characteristic normativity; we show that enactivism is able to do so while doing justice to the situated and interactive nature of morality. Moral normativity primarily arises in interpersonal interaction and is characterized by agents’ possibility of irrevocably changing each other’s autonomies, that is, the possibility of harming or expanding each other’s autonomy. We defend that moral normativity, as opposed to social and other forms of normativity, (...)
  46. A Conceptual and Computational Model of Moral Decision Making in Human and Artificial Agents.Wendell Wallach, Stan Franklin & Colin Allen - 2010 - Topics in Cognitive Science 2 (3):454-485.
    Recently, there has been a resurgence of interest in general, comprehensive models of human cognition. Such models aim to explain higher-order cognitive faculties, such as deliberation and planning. Given a computational representation, the validity of these models can be tested in computer simulations such as software agents or embodied robots. The push to implement computational models of this kind has created the field of artificial general intelligence (AGI). Moral decision making is arguably one of the most challenging tasks (...)
  47. Courage, Evidence, and Epistemic Virtue.Osvil Acosta-Morales - 2006 - Florida Philosophical Review 6 (1):8-16.
    I present here a case against the evidentialist approach that claims that in so far as our interests are epistemic what should guide our belief formation and revision is always a strict adherence to the available evidence. I go on to make the stronger claim that some beliefs based on admittedly “insufficient” evidence may exhibit epistemic virtue. I propose that we consider a form of courage to be an intellectual or epistemic virtue. It is through this notion of courage that (...)
     
  48. Toward Implementing the ADC Model of Moral Judgment in Autonomous Vehicles.Veljko Dubljević - 2020 - Science and Engineering Ethics 26 (5):2461-2472.
    Autonomous vehicles —and accidents they are involved in—attest to the urgent need to consider the ethics of artificial intelligence. The question dominating the discussion so far has been whether we want AVs to behave in a ‘selfish’ or utilitarian manner. Rather than considering modeling self-driving cars on a single moral system like utilitarianism, one possible way to approach programming for AI would be to reflect recent work in neuroethics. The agent–deed–consequence model :3–20, 2014a, Behav Brain Sci 37:487–488, 2014b) (...)
  49. J. Adam Carter. Autonomous Knowledge: Radical Enhancement, Autonomy & The Future of Knowing. Oxford: Oxford UP, 2022, 159 pp. [REVIEW]Felipe Morales Carbonell - 2023 - Revista de filosofía (Chile) 80:319-321.
    Does the possibility of cognitive enhancement mechanisms, such as the possibility of implanting beliefs in people's minds, raise new questions for epistemology? In this short volume, J. Adam Carter argues that it does. In particular, Carter argues that it forces us to consider the need for an additional condition in our characterizations of the concept of knowledge: in addition to being a form of justified true belief that satisfies an anti-Gettier condition, as most contemporary approaches accept (...)
  50. Moral Mechanisms.David Davenport - 2014 - Philosophy and Technology 27 (1):47-60.
    As highly intelligent autonomous robots are gradually introduced into the home and workplace, ensuring public safety becomes extremely important. Given that such machines will learn from interactions with their environment, standard safety engineering methodologies may not be applicable. Instead, we need to ensure that the machines themselves know right from wrong; we need moral mechanisms. Morality, however, has traditionally been considered a defining characteristic, indeed the sole realm of human beings; that which separates us from animals. But if (...)
1 — 50 / 979