Results for 'artificial moral agency'

1000+ found
  1. Review of Carlos Montemayor's "The Prospect of a Humanitarian Artificial Intelligence: Agency and Value Alignment". London: Bloomsbury Academic, 2023. [REVIEW]Diego Morales - 2023 - Journal of Applied Philosophy 40 (4):766-768.
    Book review of Carlos Montemayor's "The Prospect of a Humanitarian Artificial Intelligence: Agency and Value Alignment".
  2. A Normative Approach to Artificial Moral Agency.Dorna Behdadi & Christian Munthe - 2020 - Minds and Machines 30 (2):195-218.
    This paper proposes a methodological redirection of the philosophical debate on artificial moral agency in view of increasingly pressing practical needs due to technological development. This “normative approach” suggests abandoning theoretical discussions about what conditions may hold for moral agency and to what extent these may be met by artificial entities such as AI systems and robots. Instead, the debate should focus on how and to what extent such entities should be included in human (...)
    19 citations
  3. ETHICA EX MACHINA. Exploring artificial moral agency or the possibility of computable ethics.Rodrigo Sanz - 2020 - Zeitschrift Für Ethik Und Moralphilosophie 3 (2):223-239.
    Since the automation revolution of our technological era, diverse machines or robots have gradually begun to reconfigure our lives. With this expansion, it seems that those machines are now faced with a new challenge: more autonomous decision-making involving life or death consequences. This paper explores the philosophical possibility of artificial moral agency through the following question: could a machine obtain the cognitive capacities needed to be a moral agent? In this regard, I propose to expose, under (...)
  4. Varieties of Artificial Moral Agency and the New Control Problem.Marcus Arvan - 2022 - Humana.Mente - Journal of Philosophical Studies 15 (42):225-256.
    This paper presents a new trilemma with respect to resolving the control and alignment problems in machine ethics. Section 1 outlines three possible types of artificial moral agents (AMAs): (1) 'Inhuman AMAs' programmed to learn or execute moral rules or principles without understanding them in anything like the way that we do; (2) 'Better-Human AMAs' programmed to learn, execute, and understand moral rules or principles somewhat like we do, but correcting for various sources of human (...) error; and (3) 'Human-Like AMAs' programmed to understand and apply moral values in broadly the same way that we do, with a human-like moral psychology. Sections 2–4 then argue that each type of AMA generates unique control and alignment problems that have not been fully appreciated. Section 2 argues that Inhuman AMAs are likely to behave in inhumane ways that pose serious existential risks. Section 3 then contends that Better-Human AMAs run a serious risk of magnifying some sources of human moral error by reducing or eliminating others. Section 4 then argues that Human-Like AMAs would not only likely reproduce human moral failures, but also plausibly be highly intelligent, conscious beings with interests and wills of their own who should therefore be entitled to similar moral rights and freedoms as us. This generates what I call the New Control Problem: ensuring that humans and Human-Like AMAs exert a morally appropriate amount of control over each other. Finally, Section 5 argues that resolving the New Control Problem would, at a minimum, plausibly require ensuring what Hume and Rawls term ‘circumstances of justice’ between humans and Human-Like AMAs. But, I argue, there are grounds for thinking this will be profoundly difficult to achieve. I thus conclude on a skeptical note. 
    Different approaches to developing ‘safe, ethical AI’ generate subtly different control and alignment problems that we do not currently know how to adequately resolve, and which may or may not be ultimately surmountable.
  5. A Theological Account of Artificial Moral Agency.Ximian Xu - 2023 - Studies in Christian Ethics 36 (3):642-659.
    This article seeks to explore the idea of artificial moral agency from a theological perspective. By drawing on the Reformed theology of archetype-ectype, it will demonstrate that computational artefacts are the ectype of human moral agents and, consequently, have a partial moral agency. In this light, human moral agents mediate and extend their moral values through computational artefacts, which are ontologically connected with humans and only related to limited particular moral issues. (...)
  6. Karol Wojtyla on Artificial Moral Agency and Moral Accountability.Richard A. Spinello - 2011 - The National Catholic Bioethics Quarterly 11 (3):469-491.
    As the notion of artificial moral agency gains popularity among ethicists, it threatens the unique status of the human person as a responsible moral agent. The philosophy of ontocentrism, popularized by Luciano Floridi, argues that biocentrism is too restrictive and must yield to a new philosophical vision that endows all beings with some intrinsic value. Floridi’s macroethics also regards more sophisticated digital entities such as robots as accountable moral agents. To refute these principles, this paper (...)
  7. Kantian Moral Agency and the Ethics of Artificial Intelligence.Riya Manna & Rajakishore Nath - 2021 - Problemos 100:139-151.
    This paper discusses the philosophical issues pertaining to Kantian moral agency and artificial intelligence. Here, our objective is to offer a comprehensive analysis of Kantian ethics to elucidate the non-feasibility of Kantian machines. Meanwhile, the possibility of Kantian machines seems to contend with the genuine human Kantian agency. We argue that in machine morality, ‘duty’ should be performed with ‘freedom of will’ and ‘happiness’ because Kant narrated the human tendency of evaluating our ‘natural necessity’ through ‘happiness’ (...)
    1 citation
  8. Moral agency without responsibility? Analysis of three ethical models of human-computer interaction in times of artificial intelligence (AI).Alexis Fritz, Wiebke Brandt, Henner Gimpel & Sarah Bayer - 2020 - De Ethica 6 (1):3-22.
    Philosophical and sociological approaches in technology have increasingly shifted toward describing AI (artificial intelligence) systems as ‘(moral) agents,’ while also attributing ‘agency’ to them. It is only in this way – so their principal argument goes – that the effects of technological components in a complex human-computer interaction can be understood sufficiently in phenomenological-descriptive and ethical-normative respects. By contrast, this article aims to demonstrate that an explanatory model only achieves a descriptively and normatively satisfactory result if the (...)
    7 citations
  9. Artificial Moral Cognition: Moral Functionalism and Autonomous Moral Agency.Ioan Muntean & Don Howard - 2017 - In Thomas Powers (ed.), Philosophy and Computing: Essays in Epistemology, Philosophy of Mind, Logic, and Ethics. Springer.
    This paper proposes a model of the Artificial Autonomous Moral Agent (AAMA), discusses a standard of moral cognition for AAMA, and compares it with other models of artificial normative agency. It is argued here that artificial morality is possible within the framework of a “moral dispositional functionalism.” This AAMA is able to “read” the behavior of human actors, available as collected data, and to categorize their moral behavior based on moral patterns (...)
    1 citation
  10. Artificial Moral Responsibility: How We Can and Cannot Hold Machines Responsible.Daniel W. Tigard - 2021 - Cambridge Quarterly of Healthcare Ethics 30 (3):435-447.
    Our ability to locate moral responsibility is often thought to be a necessary condition for conducting morally permissible medical practice, engaging in a just war, and other high-stakes endeavors. Yet, with increasing reliance upon artificially intelligent systems, we may be facing a widening responsibility gap, which, some argue, cannot be bridged by traditional concepts of responsibility. How then, if at all, can we make use of crucial emerging technologies? According to Colin Allen and Wendell Wallach, the advent of so-called ‘artificial moral agents’ (AMAs) is inevitable. Still, this notion may seem to push back the problem, leaving those who have an interest in developing autonomous technology with a dilemma. We may need to scale back our efforts at deploying AMAs (or at least maintain human oversight); otherwise, we must rapidly and drastically update our moral and legal norms in a way that ensures responsibility for potentially avoidable harms. This paper invokes contemporary accounts of responsibility in order to show how artificially intelligent systems might be held responsible. Although many theorists are concerned enough to develop artificial conceptions of agency or to exploit our present inability to regulate valuable innovations, the proposal here highlights the importance of—and outlines a plausible foundation for—a workable notion of artificial moral responsibility.
    13 citations
  11. Autonomous Reboot: the challenges of artificial moral agency and the ends of Machine Ethics.Jeffrey White - manuscript
    Ryan Tonkens (2009) has issued a seemingly impossible challenge, to articulate a comprehensive ethical framework within which artificial moral agents (AMAs) satisfy a Kantian inspired recipe - both "rational" and "free" - while also satisfying perceived prerogatives of Machine Ethics to create AMAs that are perfectly, not merely reliably, ethical. Challenges for machine ethicists have also been presented by Anthony Beavers and Wendell Wallach, who have pushed for the reinvention of traditional ethics in order to avoid "ethical nihilism" (...)
  12. Can a Robot Pursue the Good? Exploring Artificial Moral Agency.Amy Michelle DeBaets - 2014 - Journal of Evolution and Technology 24 (3):76-86.
    In this essay I will explore an understanding of the potential moral agency of robots, arguing that the key characteristics of physical embodiment, adaptive learning, empathy in action, and a teleology toward the good are the primary necessary components for a machine to become a moral agent. In this context, other possible options will be rejected as necessary for moral agency, including simplistic notions of intelligence, computational power, and rule-following; complete freedom; a sense of God; (...)
    5 citations
  13. The Problem Of Moral Agency In Artificial Intelligence.Riya Manna & Rajakishore Nath - 2021 - 2021 IEEE Conference on Norbert Wiener in the 21st Century (21CW).
    Humans have invented intelligent machinery to enhance their rational decision-making procedure, which is why it has been named ‘augmented intelligence’. The usage of artificial intelligence (AI) technology is increasing enormously with every passing year, and it is becoming a part of our daily life. We are using this technology not only as a tool to enhance our rationality but also heightening them as the autonomous ethical agent for our future society. Norbert Wiener envisaged ‘Cybernetics’ with a view of a (...)
  14. Virtual moral agency, virtual moral responsibility: on the moral significance of the appearance, perception, and performance of artificial agents. [REVIEW]Mark Coeckelbergh - 2009 - AI and Society 24 (2):181-189.
  15. Moral Encounters of the Artificial Kind: Towards a non-anthropocentric account of machine moral agency.Fabio Tollon - 2019 - Dissertation, Stellenbosch University
    The aim of this thesis is to advance a philosophically justifiable account of Artificial Moral Agency (AMA). Concerns about the moral status of Artificial Intelligence (AI) traditionally turn on questions of whether these systems are deserving of moral concern (i.e. if they are moral patients) or whether they can be sources of moral action (i.e. if they are moral agents). On the Organic View of Ethical Status, being a moral patient (...)
    1 citation
  16. Artificial agency, consciousness, and the criteria for moral agency: What properties must an artificial agent have to be a moral agent? [REVIEW]Kenneth Einar Himma - 2009 - Ethics and Information Technology 11 (1):19-29.
    In this essay, I describe and explain the standard accounts of agency, natural agency, artificial agency, and moral agency, as well as articulate what are widely taken to be the criteria for moral agency, supporting the contention that this is the standard account with citations from such widely used and respected professional resources as the Stanford Encyclopedia of Philosophy, Routledge Encyclopedia of Philosophy, and the Internet Encyclopedia of Philosophy. I then flesh out (...)
    70 citations
  17. “Virtue Engineering” and Moral Agency: Will Post-Humans Still Need the Virtues?Fabrice Jotterand - 2011 - American Journal of Bioethics Neuroscience 2 (4):3-9.
    It is not the purpose of this article to evaluate the techno-scientific claims of the transhumanists. Instead, I question seriously the nature of the ethics and morals they claim can, or soon will, be manipulated artificially. I argue that while the possibility to manipulate human behavior via emotional processes exists, the question still remains concerning the content of morality. In other words, neural moral enhancement does not capture the fullness of human moral psychology, which includes moral capacity (...)
    28 citations
  18. Artificial Moral Agents: Moral Mentors or Sensible Tools?Fabio Fossa - 2018 - Ethics and Information Technology (2):1-12.
    The aim of this paper is to offer an analysis of the notion of artificial moral agent (AMA) and of its impact on human beings’ self-understanding as moral agents. Firstly, I introduce the topic by presenting what I call the Continuity Approach. Its main claim holds that AMAs and human moral agents exhibit no significant qualitative difference and, therefore, should be considered homogeneous entities. Secondly, I focus on the consequences this approach leads to. In order to (...)
    9 citations
  19. Moral Agents or Mindless Machines? A Critical Appraisal of Agency in Artificial Systems.Fabio Tollon - 2019 - Hungarian Philosophical Review 4 (63):9-23.
    In this paper I provide an exposition and critique of Johnson and Noorman’s (2014) three conceptualizations of the agential roles artificial systems can play. I argue that two of these conceptions are unproblematic: that of causally efficacious agency and “acting for” or surrogate agency. Their third conception, that of “autonomous agency,” however, is one I have reservations about. The authors point out that there are two ways in which the term “autonomy” can be used: there is, (...)
    3 citations
  20. Un-making artificial moral agents.Deborah G. Johnson & Keith W. Miller - 2008 - Ethics and Information Technology 10 (2-3):123-133.
    Floridi and Sanders’ seminal work, “On the morality of artificial agents,” has catalyzed attention around the moral status of computer systems that perform tasks for humans, effectively acting as “artificial agents.” Floridi and Sanders argue that the class of entities considered moral agents can be expanded to include computers if we adopt the appropriate level of abstraction. In this paper we argue that the move to distinguish levels of abstraction is far from decisive on this issue. (...)
    36 citations
  21. Manufacturing Morality: A general theory of moral agency grounding computational implementations: the ACTWith model.Jeffrey White - 2013 - In Computational Intelligence. Nova Publications. pp. 1-65.
    The ultimate goal of research into computational intelligence is the construction of a fully embodied and fully autonomous artificial agent. This ultimate artificial agent must not only be able to act, but it must be able to act morally. In order to realize this goal, a number of challenges must be met, and a number of questions must be answered, the upshot being that, in doing so, the form of agency to which we must aim in developing (...)
    1 citation
  22. Philosophical Signposts for Artificial Moral Agent Frameworks.Robert James M. Boyles - 2017 - Suri 6 (2):92–109.
    This article focuses on a particular issue under machine ethics—that is, the nature of Artificial Moral Agents. Machine ethics is a branch of artificial intelligence that looks into the moral status of artificial agents. Artificial moral agents, on the other hand, are artificial autonomous agents that possess moral value, as well as certain rights and responsibilities. This paper demonstrates that attempts to fully develop a theory that could possibly account for the (...)
  23. Blame It on the AI? On the Moral Responsibility of Artificial Moral Advisors.Mihaela Constantinescu, Constantin Vică, Radu Uszkai & Cristina Voinea - 2022 - Philosophy and Technology 35 (2):1-26.
    Deep learning AI systems have proven a wide capacity to take over human-related activities such as car driving, medical diagnosing, or elderly care, often displaying behaviour with unpredictable consequences, including negative ones. This has raised the question whether highly autonomous AI may qualify as morally responsible agents. In this article, we develop a set of four conditions that an entity needs to meet in order to be ascribed moral responsibility, by drawing on Aristotelian ethics and contemporary philosophical research. We (...)
    4 citations
  24. Presuppositions of Collective Moral Agency: Analogy, Architectonics, Justice, and Casuistry.David Ardagh - 2012 - Philosophy of Management 11 (2):5-28.
    This is the second of three papers with the overall title: “A Quasi-Personal Alternative to Some Anglo-American Pluralist Models of Organisations: Towards an Analysis of Corporate Self-Governance for Virtuous Organisations”.1 In the first paper, entitled: “Organisations as quasi-personal entities: from ‘governing’ of the self to organisational ‘self’-governance: a Neo-Aristotelian quasi-personal model of organisations”, the artificial corporate analogue of a natural person sketched there, was said to have quasi-directive, quasi-operational and quasi-enabling/resource-provision capacities. Its use of these capacities following joint deliberation (...)
    3 citations
  25. A Prospective Framework for the Design of Ideal Artificial Moral Agents: Insights from the Science of Heroism in Humans.Travis J. Wiltshire - 2015 - Minds and Machines 25 (1):57-71.
    The growing field of machine morality has become increasingly concerned with how to develop artificial moral agents. However, there is little consensus on what constitutes an ideal moral agent, let alone an artificial one. Leveraging a recent account of heroism in humans, the aim of this paper is to provide a prospective framework for conceptualizing, and in turn designing, ideal artificial moral agents, namely those that would be considered heroic robots. First, an overview of (...)
    7 citations
  26. A neo-Aristotelian perspective on the need for artificial moral agents (AMAs).Alejo José G. Sison & Dulce M. Redín - 2023 - AI and Society 38 (1):47-65.
    We examine Van Wynsberghe and Robbins’ (JAMA 25:719-735, 2019) critique of the need for Artificial Moral Agents (AMAs) and its rebuttal by Formosa and Ryan (JAMA 10.1007/s00146-020-01089-6, 2020), set against a neo-Aristotelian ethical background. Neither Van Wynsberghe and Robbins’ (JAMA 25:719-735, 2019) essay nor Formosa and Ryan’s (JAMA 10.1007/s00146-020-01089-6, 2020) is explicitly framed within the teachings of a specific ethical school. The former appeals to the lack of “both empirical and intuitive support” (Van Wynsberghe and Robbins 2019, p. (...)
    3 citations
  27. Do androids dream of normative endorsement? On the fallibility of artificial moral agents.Frodo Podschwadek - 2017 - Artificial Intelligence and Law 25 (3):325-339.
    The more autonomous future artificial agents will become, the more important it seems to equip them with a capacity for moral reasoning and to make them autonomous moral agents. Some authors have even claimed that one of the aims of AI development should be to build morally praiseworthy agents. From the perspective of moral philosophy, praiseworthy moral agents, in any meaningful sense of the term, must be fully autonomous moral agents who endorse moral (...)
    4 citations
  28. Group Agency and Artificial Intelligence.Christian List - 2021 - Philosophy and Technology (4):1-30.
    The aim of this exploratory paper is to review an under-appreciated parallel between group agency and artificial intelligence. As both phenomena involve non-human goal-directed agents that can make a difference to the social world, they raise some similar moral and regulatory challenges, which require us to rethink some of our anthropocentric moral assumptions. Are humans always responsible for those entities’ actions, or could the entities bear responsibility themselves? Could the entities engage in normative reasoning? Could they (...)
    21 citations
  29. Understanding Artificial Agency.Leonard Dung - forthcoming - Philosophical Quarterly.
    Which artificial intelligence (AI) systems are agents? To answer this question, I propose a multidimensional account of agency. According to this account, a system's agency profile is jointly determined by its level of goal-directedness and autonomy as well as its abilities for directly impacting the surrounding world, long-term planning and acting for reasons. Rooted in extant theories of agency, this account enables fine-grained, nuanced comparative characterizations of artificial agency. I show that this account has (...)
    1 citation
  30. A Critique of Some Anglo-American Models of Collective Moral Agency in Business.David Ardagh - 2013 - Philosophy of Management 12 (3):5-25.
    The paper completes a trilogy of papers, under the title: “A Quasi-Personal Alternative to Some Anglo-American Pluralist Models of Organisations: Towards an Analysis of Corporate Self-Governance for Virtuous Organisations”. The first two papers of the three are published in Philosophy of Management, Volumes 10,3 and 11,2. This last paper argues that three dominant Anglo-American organisational theories which see themselves as “business ethics-friendly,” are less so than they seem. It will be argued they present obstacles to collective corporate moral agency. They are: 1) the dominant “soft pluralist” organisational theory of Bolman and Deal, published in 1984 and more recently expressed in Reframing Organisations: Artistry, Choice, and Leadership, 5th edition, 2013, which is based on “reframing,” and which we will call reframing theory (RT); 2) the Business Ethics deployment of Stakeholder Management Theory (SMT) associated with R. Edward Freeman, and several colleagues, dominant in the same period (1984-); and 3) to a much lesser degree, an adapted version of SMT in the Integrated Social Contract Theory (ISCT) of Donaldson and Dunfee (Ties That Bind, Harvard Business School Press (1999)). This paper suggests a return, from RT, SMT, and ISCT, to an older “participative-structuralist” Neo-Aristotelian virtue-ethics based account, based on an analogy between “natural” persons, and organisations as “artificial” persons, with natural persons seen as “flat” architectonically related sets of capacity in complementary relation, and organisations as even flatter architectonic hierarchies of groups of incumbents in roles. This quasi-personal model preserves the possibility of corporate moral agency and some hierarchical and lateral order between leadership groups and other functional roles in the ethical governance of the whole corporation, as a collective moral agent.
    The quasi-person model would make possible assigning degrees of responsibility and a more coherent interface of Ethics, Organisational Ethics, and Management Theory; the reconfiguring of the place of business in society; an alternate ethico-political basis for Corporate Social Responsibility; and a rethinking of the design of the business corporate form, within the practice and institutions of business, but embedded in a state as representing the community.
    2 citations
  31. Human Goals Are Constitutive of Agency in Artificial Intelligence.Elena Popa - 2021 - Philosophy and Technology 34 (4):1731-1750.
    The question whether AI systems have agency is gaining increasing importance in discussions of responsibility for AI behavior. This paper argues that an approach to artificial agency needs to be teleological, and consider the role of human goals in particular if it is to adequately address the issue of responsibility. I will defend the view that while AI systems can be viewed as autonomous in the sense of identifying or pursuing goals, they rely on human goals and (...)
    6 citations
  32. Anthropological Crisis or Crisis in Moral Status: a Philosophy of Technology Approach to the Moral Consideration of Artificial Intelligence.Joan Llorca Albareda - 2024 - Philosophy and Technology 37 (1):1-26.
    The inquiry into the moral status of artificial intelligence (AI) is leading to prolific theoretical discussions. A new entity that does not share the material substrate of human beings begins to show signs of a number of properties that are nuclear to the understanding of moral agency. It makes us wonder whether the properties we associate with moral status need to be revised or whether the new artificial entities deserve to enter within the circle (...)
    1 citation
  33. Artificial consciousness: A perspective from the free energy principle.Wanja Wiese - manuscript
    Could a sufficiently detailed computer simulation of consciousness replicate consciousness? In other words, is performing the right computations sufficient for artificial consciousness? Or will there remain a difference between simulating and being a conscious system, because the right computations must be implemented in the right way? From the perspective of Karl Friston's free energy principle, self-organising systems (such as living organisms) share a set of properties that could be realised in artificial systems, but are not instantiated by computers (...)
  34. Sustained Representation of Perspectival Shape.Jorge Morales, Axel Bax & Chaz Firestone - 2020 - Proceedings of the National Academy of Sciences of the United States of America 117 (26):14873–14882.
    Arguably the most foundational principle in perception research is that our experience of the world goes beyond the retinal image; we perceive the distal environment itself, not the proximal stimulation it causes. Shape may be the paradigm case of such “unconscious inference”: When a coin is rotated in depth, we infer the circular object it truly is, discarding the perspectival ellipse projected on our eyes. But is this really the fate of such perspectival shapes? Or does a tilted coin retain (...)
    11 citations
  35. Action and Agency in Artificial Intelligence: A Philosophical Critique.Justin Nnaemeka Onyeukaziri - 2023 - Philosophia: International Journal of Philosophy (Philippine e-journal) 24 (1):73-90.
    The objective of this work is to explore the notion of “action” and “agency” in artificial intelligence (AI). It employs a metaphysical notion of action and agency as an epistemological tool in the critique of the notion of “action” and “agency” in artificial intelligence. Hence, both a metaphysical and a cognitive analysis are employed in the investigation of the quiddity and nature of action and agency per se, and how they are, by extension, employed in (...)
  36. Kantian Notion of freedom and Autonomy of Artificial Agency.Manas Kumar Sahu - 2021 - Prometeica - Revista De Filosofía Y Ciencias 23:136-149.
    The objective of this paper is to provide a critical analysis of the Kantian notion of freedom (especially the problem of the third antinomy and its resolution in the critique of pure reason); its significance in the contemporary debate on free-will and determinism, and the possibility of autonomy of artificial agency in the Kantian paradigm of autonomy. Kant's resolution of the third antinomy by positing the ground in the noumenal self resolves the problem of antinomies; however, invites an (...)
  37. Moral Judgments in the Age of Artificial Intelligence.Yulia W. Sullivan & Samuel Fosso Wamba - 2022 - Journal of Business Ethics 178 (4):917-943.
    The current research aims to answer the following question: “who will be held responsible for harm involving an artificial intelligence system?” Drawing upon the literature on moral judgments, we assert that when people perceive an AI system’s action as causing harm to others, they will assign blame to different entity groups involved in an AI’s life cycle, including the company, the developer team, and even the AI system itself, especially when such harm is perceived to be intentional. Drawing (...)
    2 citations
  38. Risk Imposition by Artificial Agents: The Moral Proxy Problem.Johanna Thoma - 2022 - In Silja Voeneky, Philipp Kellmeyer, Oliver Mueller & Wolfram Burgard (eds.), The Cambridge Handbook of Responsible Artificial Intelligence: Interdisciplinary Perspectives. Cambridge University Press.
    Where artificial agents are not liable to be ascribed true moral agency and responsibility in their own right, we can understand them as acting as proxies for human agents, as making decisions on their behalf. What I call the ‘Moral Proxy Problem’ arises because it is often not clear for whom a specific artificial agent is acting as a moral proxy. In particular, we need to decide whether artificial agents should be acting as (...)
    1 citation
  39. Agency as difference-making: causal foundations of moral responsibility.Johannes Himmelreich - 2015 - Dissertation, London School of Economics and Political Science
    We are responsible for some things but not for others. In this thesis, I investigate what it takes for an entity to be responsible for something. This question has two components: agents and actions. I argue for a permissive view about agents. Entities such as groups or artificially intelligent systems may be agents in the sense required for responsibility. With respect to actions, I argue for a causal view. The relation in virtue of which agents are responsible for actions is (...)
    4 citations
  40. Evolutionary and religious perspectives on morality.Artificial Intelligence - forthcoming - Zygon.
  41. Make Them Rare or Make Them Care: Artificial Intelligence and Moral Cost-Sharing.Blake Hereth & Nicholas Evans - 2023 - In Daniel Schoeni, Tobias Vestner & Kevin Govern (eds.), Ethical Dilemmas in the Global Defense Industry. Oxford University Press.
    The use of autonomous weaponry in warfare has increased substantially over the last twenty years and shows no sign of slowing. Our chapter raises a novel objection to the implementation of autonomous weapons, namely, that they eliminate moral cost-sharing. To grasp the basics of our argument, consider the case of uninhabited aerial vehicles that act autonomously (i.e., LAWS). Imagine that a LAWS terminates a military target and that five civilians die as a side effect of the LAWS bombing. Because (...)
  42. Ties without Tethers.Artificial Heart Trial - 2007 - In Lisa A. Eckenwiler & Felicia Cohn (eds.), The Ethics of Bioethics: Mapping the Moral Landscape. Johns Hopkins University Press.
  43. The Democratic Inclusion of Artificial Intelligence? Exploring the Patiency, Agency and Relational Conditions for Demos Membership.Ludvig Beckman & Jonas Hultin Rosenberg - 2022 - Philosophy and Technology 35 (2):1-24.
    Should artificial intelligences ever be included as co-authors of democratic decisions? According to the conventional view in democratic theory, the answer depends on the relationship between the political unit and the entity that is either affected or subjected to its decisions. The relational conditions for inclusion as stipulated by the all-affected and all-subjected principles determine the spatial extension of democratic inclusion. Thus, AI qualifies for democratic inclusion if and only if AI is either affected or subjected to decisions by (...)
  44. Moral artificial intelligence and machine puritanism.Jean-François Bonnefon - 2023 - Behavioral and Brain Sciences 46:e297.
    Puritanism may evolve into a technological variant based on norms of delegation of actions and perceptions to artificial intelligence. Instead of training self-control, people may be expected to cede their agency to self-controlled machines. The cost–benefit balance of this machine puritanism may be less aversive to wealthy individualistic democracies than the old puritanism they have abandoned.
  45. "I don't trust you, you faker!" On Trust, Reliance, and Artificial Agency.Fabio Fossa - 2019 - Teoria 39 (1):63-80.
    The aim of this paper is to clarify the extent to which relationships between Human Agents (HAs) and Artificial Agents (AAs) can be adequately defined in terms of trust. Since such relationships consist mostly in the allocation of tasks to technological products, particular attention is paid to the notion of delegation. In short, I argue that it would be more accurate to describe direct relationships between HAs and AAs in terms of reliance, rather than in terms of trust. However, (...)
    2 citations
  46. A moral analysis of intelligent decision-support systems in diagnostics through the lens of Luciano Floridi’s information ethics.Dmytro Mykhailov - 2021 - Human Affairs 31 (2):149-164.
    Contemporary medical diagnostics has a dynamic moral landscape, which includes a variety of agents, factors, and components. A significant part of this landscape is composed of information technologies that play a vital role in doctors’ decision-making. This paper focuses on the so-called Intelligent Decision-Support System that is widely implemented in the domain of contemporary medical diagnosis. The purpose of this article is twofold. First, I will show that the IDSS may be considered a moral agent in the practice (...)
    5 citations
  47. Stress, Coping, and Resilience Before and After COVID-19: A Predictive Model Based on Artificial Intelligence in the University Environment.Francisco Manuel Morales-Rodríguez, Juan Pedro Martínez-Ramón, Inmaculada Méndez & Cecilia Ruiz-Esteban - 2021 - Frontiers in Psychology 12.
    The COVID-19 global health emergency has greatly impacted the educational field. Faced with unprecedented stress situations, professors, students, and families have employed various coping and resilience strategies throughout the confinement period. High and persistent stress levels are associated with other pathologies; hence, their detection and prevention are needed. Consequently, this study aimed to design a predictive model of stress in the educational field based on artificial intelligence that included certain sociodemographic variables, coping strategies, and resilience capacity, and to study (...)
    2 citations
  48. Ethics of Driving Automation. Artificial Agency and Human Values.Fabio Fossa - 2023 - Cham: Springer.
    This book offers a systematic and thorough philosophical analysis of the ways in which driving automation crosses paths with ethical values. Upon introducing the different forms of driving automation and examining their relation to human autonomy, it provides readers with in-depth reflections on safety, privacy, moral judgment, control, responsibility, sustainability, and other ethical issues. As a human act, driving is undoubtedly a moral activity. Transferring it to artificial agents such as connected and automated vehicles necessarily raises many (...)
  49. Moral Machines: Teaching Robots Right From Wrong.Wendell Wallach & Colin Allen - 2008 - New York, US: Oxford University Press.
    Computers are already approving financial transactions, controlling electrical supplies, and driving trains. Soon, service robots will be taking care of the elderly in their homes, and military robots will have their own targeting and firing protocols. Colin Allen and Wendell Wallach argue that as robots take on more and more responsibility, they must be programmed with moral decision-making abilities, for our own safety. Taking a fast paced tour through the latest thinking about philosophical ethics and artificial intelligence, the (...)
    180 citations
  50. Information-seeking dialogue for explainable artificial intelligence: Modelling and analytics.Ilia Stepin, Katarzyna Budzynska, Alejandro Catala, Martín Pereira-Fariña & Jose M. Alonso-Moral - 2024 - Argument and Computation 15 (1):49-107.
    Explainable artificial intelligence has become a vitally important research field aiming, among other tasks, to justify predictions made by intelligent classifiers automatically learned from data. Importantly, efficiency of automated explanations may be undermined if the end user does not have sufficient domain knowledge or lacks information about the data used for training. To address the issue of effective explanation communication, we propose a novel information-seeking explanatory dialogue game following the most recent requirements to automatically generated explanations. Further, we generalise (...)
    1 citation