Results for 'Artificial agents'

993 found
  1. Artificial agents and the expanding ethical circle. Steve Torrance - 2013 - AI and Society 28 (4):399-414.
    I discuss the realizability and the ethical ramifications of Machine Ethics, from a number of different perspectives: I label these the anthropocentric, infocentric, biocentric and ecocentric perspectives. Each of these approaches takes a characteristic view of the position of humanity relative to other aspects of the designed and the natural worlds—or relative to the possibilities of ‘extra-human’ extensions to the ethical community. In the course of the discussion, a number of key issues emerge concerning the relation between technology and ethics, (...)
    8 citations
  2. On the morality of artificial agents. Luciano Floridi & J. W. Sanders - 2004 - Minds and Machines 14 (3):349-379.
    Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility (...)
    287 citations
  3. Artificial agents, good care, and modernity. Mark Coeckelbergh - 2015 - Theoretical Medicine and Bioethics 36 (4):265-277.
    When is it ethically acceptable to use artificial agents in health care? This article articulates some criteria for good care and then discusses whether machines as artificial agents that take over care tasks meet these criteria. Particular attention is paid to intuitions about the meaning of ‘care’, ‘agency’, and ‘taking over’, but also to the care process as a labour process in a modern organizational and financial-economic context. It is argued that while there is in principle (...)
    17 citations
  4. Affective Artificial Agents as sui generis Affective Artifacts. Marco Facchin & Giacomo Zanotti - forthcoming - Topoi.
    AI-based technologies are increasingly pervasive in a number of contexts. Our affective and emotional life makes no exception. In this article, we analyze one way in which AI-based technologies can affect them. In particular, our investigation will focus on affective artificial agents, namely AI-powered software or robotic agents designed to interact with us in affectively salient ways. We build upon the existing literature on affective artifacts with the aim of providing an original analysis of affective artificial (...)
    1 citation
  5. What makes full artificial agents morally different. Erez Firt - forthcoming - AI and Society:1-10.
    In the research field of machine ethics, we commonly categorize artificial moral agents into four types, with the most advanced referred to as a full ethical agent, or sometimes a full-blown Artificial Moral Agent (AMA). This type has three main characteristics: autonomy, moral understanding and a certain level of consciousness, including intentional mental states, moral emotions such as compassion, the ability to praise and condemn, and a conscience. This paper aims to discuss various aspects of full-blown AMAs (...)
  6. Artificial agents - personhood in law and philosophy. Samir Chopra - manuscript
    Thinking about how the law might decide whether to extend legal personhood to artificial agents provides a valuable testbed for philosophical theories of mind. Further, philosophical and legal theorising about personhood for artificial agents can be mutually informing. We investigate two case studies, drawing on legal discussions of the status of artificial agents. The first looks at the doctrinal difficulties presented by the contracts entered into by artificial agents. We conclude that it (...)
    4 citations
  7. Artificial agents among us: Should we recognize them as agents proper? Migle Laukyte - 2017 - Ethics and Information Technology 19 (1):1-17.
    In this paper, I discuss whether in a society where the use of artificial agents is pervasive, these agents should be recognized as having rights like those we accord to group agents. This kind of recognition I understand to be at once social and legal, and I argue that in order for an artificial agent to be so recognized, it will need to meet the same basic conditions in light of which group agents are (...)
    9 citations
  8. Artificial agents and their moral nature. Luciano Floridi - 2014 - In Peter Kroes (ed.), The moral status of technical artefacts. pp. 185–212.
    Artificial agents, particularly but not only those in the infosphere Floridi (Information – A very short introduction. Oxford University Press, Oxford, 2010a), extend the class of entities that can be involved in moral situations, for they can be correctly interpreted as entities that can perform actions with good or evil impact (moral agents). In this chapter, I clarify the concepts of agent and of artificial agent and then distinguish between issues concerning their moral behaviour vs. issues (...)
     
    2 citations
  9. Modelling Trust in Artificial Agents, A First Step Toward the Analysis of e-Trust. Mariarosaria Taddeo - 2010 - Minds and Machines 20 (2):243-257.
    This paper provides a new analysis of e-trust, trust occurring in digital contexts, among the artificial agents of a distributed artificial system. The analysis endorses a non-psychological approach and rests on a Kantian regulative ideal of a rational agent, able to choose the best option for itself, given a specific scenario and a goal to achieve. The paper first introduces e-trust describing its relevance for the contemporary society and then presents a new theoretical analysis (...)
    40 citations
  10. Developing artificial agents worthy of trust: “Would you buy a used car from this artificial agent?”. [REVIEW] F. S. Grodzinsky, K. W. Miller & M. J. Wolf - 2011 - Ethics and Information Technology 13 (1):17-27.
    There is a growing literature on the concept of e-trust and on the feasibility and advisability of “trusting” artificial agents. In this paper we present an object-oriented model for thinking about trust in both face-to-face and digitally mediated environments. We review important recent contributions to this literature regarding e-trust in conjunction with presenting our model. We identify three important types of trust interactions and examine trust from the perspective of a software developer. Too often, the primary focus of (...)
    17 citations
  11. Modeling artificial agents’ actions in context – a deontic cognitive event ontology. Miroslav Vacura - 2020 - Applied Ontology 15 (4):493-527.
    Although there have been efforts to integrate Semantic Web technologies and artificial agents related AI research approaches, they remain relatively isolated from each other. Herein, we introduce a new ontology framework designed to support the knowledge representation of artificial agents’ actions within the context of the actions of other autonomous agents and inspired by standard cognitive architectures. The framework consists of four parts: 1) an event ontology for information pertaining to actions and events; 2) an (...)
  12. Risk Imposition by Artificial Agents: The Moral Proxy Problem. Johanna Thoma - 2022 - In Silja Voeneky, Philipp Kellmeyer, Oliver Mueller & Wolfram Burgard (eds.), The Cambridge Handbook of Responsible Artificial Intelligence: Interdisciplinary Perspectives. Cambridge University Press.
    Where artificial agents are not liable to be ascribed true moral agency and responsibility in their own right, we can understand them as acting as proxies for human agents, as making decisions on their behalf. What I call the ‘Moral Proxy Problem’ arises because it is often not clear for whom a specific artificial agent is acting as a moral proxy. In particular, we need to decide whether artificial agents should be acting as proxies (...)
    1 citation
  13. Artificial agents’ explainability to support trust: considerations on timing and context. Guglielmo Papagni, Jesse de Pagter, Setareh Zafari, Michael Filzmoser & Sabine T. Koeszegi - 2023 - AI and Society 38 (2):947-960.
    Strategies for improving the explainability of artificial agents are a key approach to support the understandability of artificial agents’ decision-making processes and their trustworthiness. However, since explanations are not inclined to standardization, finding solutions that fit the algorithmic-based decision-making processes of artificial agents poses a compelling challenge. This paper addresses the concept of trust in relation to complementary aspects that play a role in interpersonal and human–agent relationships, such as users’ confidence and their perception (...)
  14. Ethics and consciousness in artificial agents. Steve Torrance - 2008 - AI and Society 22 (4):495-521.
    In what ways should we include future humanoid robots, and other kinds of artificial agents, in our moral universe? We consider the Organic view, which maintains that artificial humanoid agents, based on current computational technologies, could not count as full-blooded moral agents, nor as appropriate targets of intrinsic moral concern. On this view, artificial humanoids lack certain key properties of biological organisms, which preclude them from having full moral status. Computationally controlled systems, however advanced (...)
    58 citations
  15. Artificial agents in social cognitive sciences. Thierry Chaminade & Jessica K. Hodgins - 2006 - Interaction Studies 7 (3):347-353.
  16. Film. Mirrors of nature: artificial agents in real life and virtual worlds. Paul Dumouchel - 2015 - In Scott Cowdell, Chris Fleming & Joel Hodge (eds.), Mimesis, movies, and media. London: Bloomsbury Academic.
     
  17. The ethics of designing artificial agents. Frances S. Grodzinsky, Keith W. Miller & Marty J. Wolf - 2008 - Ethics and Information Technology 10 (2-3):115-121.
    In their important paper “Autonomous Agents”, Floridi and Sanders use “levels of abstraction” to argue that computers are or may soon be moral agents. In this paper we use the same levels of abstraction to illuminate differences between human moral agents and computers. In their paper, Floridi and Sanders contributed definitions of autonomy, moral accountability and responsibility, but they have not explored deeply some essential questions that need to be answered by computer scientists who design artificial (...)
    20 citations
  18. Hiding Behind Machines: Artificial Agents May Help to Evade Punishment. Till Feier, Jan Gogoll & Matthias Uhl - 2022 - Science and Engineering Ethics 28 (2):1-19.
    The transfer of tasks with sometimes far-reaching implications to autonomous systems raises a number of ethical questions. In addition to fundamental questions about the moral agency of these systems, behavioral issues arise. We investigate the empirically accessible question of whether the imposition of harm by an agent is systematically judged differently when the agent is artificial and not human. The results of a laboratory experiment suggest that decision-makers can actually avoid punishment more easily by delegating to machines than by (...)
    2 citations
  19. Social Cognition and Artificial Agents. Anna Strasser - 2017 - In Vincent C. Müller (ed.), Philosophy and theory of artificial intelligence 2017. Berlin: Springer. pp. 106-114.
    Standard notions in philosophy of mind have a tendency to characterize socio-cognitive abilities as if they were unique to sophisticated human beings. However, assuming that it is likely that we are soon going to share a large part of our social lives with various kinds of artificial agents, it is important to develop a conceptual framework providing notions that are able to account for various types of social agents. Recent minimal approaches to socio-cognitive abilities such as mindreading (...)
    1 citation
  20. Social Cognition and Artificial Agents. Anna Strasser - 2017 - In Vincent C. Müller (ed.), Philosophy and theory of artificial intelligence 2017. Berlin: Springer. pp. 106-114.
    Standard notions in philosophy of mind have a tendency to characterize socio-cognitive abilities as if they were unique to sophisticated human beings. However, assuming that it is likely that we are soon going to share a large part of our social lives with various kinds of artificial agents, it is important to develop a conceptual framework providing notions that are able to account for various types of social agents. Recent minimal approaches to socio-cognitive abilities such as mindreading (...)
    3 citations
  21. The epistemological foundations of artificial agents. Nicola Lacey & M. Lee - 2003 - Minds and Machines 13 (3):339-365.
    A situated agent is one which operates within an environment. In most cases, the environment in which the agent exists will be more complex than the agent itself. This means that an agent, human or artificial, which wishes to carry out non-trivial operations in its environment must use techniques which allow an unbounded world to be represented within a cognitively bounded agent. We present a brief description of some important theories within the fields of epistemology and metaphysics. We then (...)
  22. Guilty Artificial Minds: Folk Attributions of Mens Rea and Culpability to Artificially Intelligent Agents. Michael T. Stuart & Markus Kneer - 2021 - Proceedings of the ACM on Human-Computer Interaction 5 (CSCW2).
    While philosophers hold that it is patently absurd to blame robots or hold them morally responsible [1], a series of recent empirical studies suggest that people do ascribe blame to AI systems and robots in certain contexts [2]. This is disconcerting: Blame might be shifted from the owners, users or designers of AI systems to the systems themselves, leading to the diminished accountability of the responsible human agents [3]. In this paper, we explore one of the potential underlying reasons (...)
    2 citations
  23. How should artificial agents make risky choices on our behalf? Johanna Thoma - 2021 - LSE Philosophy Blog.
  24. On social laws for artificial agent societies: off-line design. Yoav Shoham & Moshe Tennenholtz - 1995 - Artificial Intelligence 73 (1-2):231-252.
  25. The Epistemological Foundations of Artificial Agents. Nick J. Lacey & M. H. Lee - 2003 - Minds and Machines 13 (3):339-365.
    A situated agent is one which operates within an environment. In most cases, the environment in which the agent exists will be more complex than the agent itself. This means that an agent, human or artificial, which wishes to carry out non-trivial operations in its environment must use techniques which allow an unbounded world to be represented within a cognitively bounded agent. We present a brief description of some important theories within the fields of epistemology and metaphysics. We then (...)
  26. This “Ethical Trap” Is for Roboticists, Not Robots: On the Issue of Artificial Agent Ethical Decision-Making. Keith W. Miller, Marty J. Wolf & Frances Grodzinsky - 2017 - Science and Engineering Ethics 23 (2):389-401.
    In this paper we address the question of when a researcher is justified in describing his or her artificial agent as demonstrating ethical decision-making. The paper is motivated by the amount of research being done that attempts to imbue artificial agents with expertise in ethical decision-making. It seems clear that computing systems make decisions, in that they make choices between different options; and there is scholarship in philosophy that addresses the distinction between ethical decision-making and general decision-making. (...)
    8 citations
  27. On the Moral Equality of Artificial Agents. Christopher Wareham - 2011 - International Journal of Technoethics 2 (1):35-42.
    Artificial agents such as robots are performing increasingly significant ethical roles in society. As a result, there is a growing literature regarding their moral status with many suggesting it is justified to regard manufactured entities as having intrinsic moral worth. However, the question of whether artificial agents could have the high degree of moral status that is attributed to human persons has largely been neglected. To address this question, the author developed a respect-based account of the (...)
    5 citations
  28. A Pragmatic Approach to the Intentional Stance: Semantic, Empirical and Ethical Considerations for the Design of Artificial Agents. Guglielmo Papagni & Sabine Koeszegi - 2021 - Minds and Machines 31 (4):505-534.
    Artificial agents are progressively becoming more present in everyday-life situations and more sophisticated in their interaction affordances. In some specific cases, like Google Duplex, GPT-3 bots or Deep Mind’s AlphaGo Zero, their capabilities reach or exceed human levels. The use contexts of everyday life necessitate making such agents understandable by laypeople. At the same time, displaying human levels of social behavior has kindled the debate over the adoption of Dennett’s ‘intentional stance’. By means of a comparative analysis (...)
    6 citations
  29. The ethics of designing artificial agents. Frances S. Grodzinsky, Keith W. Miller & Marty J. Wolf - 2008 - Ethics and Information Technology 10 (2-3):112-121.
    In their important paper “Autonomous Agents”, Floridi and Sanders use “levels of abstraction” to argue that computers are or may soon be moral agents. In this paper we use the same levels of abstraction to illuminate differences between human moral agents and computers. In their paper, Floridi and Sanders contributed definitions of autonomy, moral accountability and responsibility, but they have not explored deeply some essential questions that need to be answered by computer scientists who design artificial (...)
     
    1 citation
  30. Demonstrating sensemaking emergence in artificial agents: A method and an example. Olivier L. Georgeon & James B. Marshall - 2013 - International Journal of Machine Consciousness 5 (2):131-144.
    We propose an experimental method to study the possible emergence of sensemaking in artificial agents. This method involves analyzing the agent's behavior in a test bed environment that presents regularities in the possibilities of interaction afforded to the agent, while the agent has no presuppositions about the underlying functioning of the environment that explains such regularities. We propose a particular environment that permits such an experiment, called the Small Loop Problem. We argue that the agent's behavior demonstrates sensemaking (...)
  31. Meaning in Artificial Agents: The Symbol Grounding Problem Revisited. Dairon Rodríguez, Jorge Hermosillo & Bruno Lara - 2012 - Minds and Machines 22 (1):25-34.
    The Chinese room argument has presented a persistent headache in the search for Artificial Intelligence. Since it first appeared in the literature, various interpretations have been made, attempting to understand the problems posed by this thought experiment. Throughout all this time, some researchers in the Artificial Intelligence community have seen Symbol Grounding as proposed by Harnad as a solution to the Chinese room argument. The main thesis in this paper is that although related, these two issues present different (...)
    2 citations
  32. What is it like to encounter an autonomous artificial agent? Karsten Weber - 2013 - AI and Society 28 (4):483-489.
    Following up on Thomas Nagel’s paper “What is it like to be a bat?” and Alan Turing’s essay “Computing machinery and intelligence,” it shall be claimed that a successful interaction of human beings and autonomous artificial agents depends more on which characteristics human beings ascribe to the agent than on whether the agent really has those characteristics. It will be argued that Masahiro Mori’s concept of the “uncanny valley” as well as evidence from several empirical studies supports that (...)
    3 citations
  33. Bio-Agency and the Possibility of Artificial Agents. Anne Sophie Meincke - 2018 - In Antonio Piccolomini D’Aragona, Martin Carrier, Roger Deulofeu, Axel Gelfert, Jens Harbecke, Paul Hoyningen-Huene, Lara Huber, Peter Hucklenbroich, Ludger Jansen, Elizaveta Kostrova, Keizo Matsubara, Anne Sophie Meincke, Andrea Reichenberger, Kian Salimkhani & Javier Suárez (eds.), Philosophy of Science: Between the Natural Sciences, the Social Sciences, and the Humanities. Cham: Springer Verlag. pp. 65-93.
    Within the philosophy of biology, recently promising steps have been made towards a biologically grounded concept of agency. Agency is described as bio-agency: the intrinsically normative adaptive behaviour of human and non-human organisms, arising from their biological autonomy. My paper assesses the bio-agency approach by examining criticism recently directed by its proponents against the project of embodied robotics. Defenders of the bio-agency approach have claimed that embodied robots do not, and for fundamental reasons cannot, qualify as artificial agents (...)
    4 citations
  34. A Conceptual and Computational Model of Moral Decision Making in Human and Artificial Agents. Wendell Wallach, Stan Franklin & Colin Allen - 2010 - Topics in Cognitive Science 2 (3):454-485.
    Recently, there has been a resurgence of interest in general, comprehensive models of human cognition. Such models aim to explain higher-order cognitive faculties, such as deliberation and planning. Given a computational representation, the validity of these models can be tested in computer simulations such as software agents or embodied robots. The push to implement computational models of this kind has created the field of artificial general intelligence (AGI). Moral decision making is arguably one of the most challenging tasks (...)
    24 citations
  35. Can a Robot Lie? Exploring the Folk Concept of Lying as Applied to Artificial Agents. Markus Kneer - 2021 - Cognitive Science 45 (10):e13032.
    The potential capacity for robots to deceive has received considerable attention recently. Many papers explore the technical possibility for a robot to engage in deception for beneficial purposes (e.g., in education or health). In this short experimental paper, I focus on a more paradigmatic case: robot lying (lying being the textbook example of deception) for nonbeneficial purposes as judged from the human point of view. More precisely, I present an empirical experiment that investigates the following three questions: (a) Are ordinary (...)
    5 citations
  36. Can we Develop Artificial Agents Capable of Making Good Moral Decisions?: Wendell Wallach and Colin Allen: Moral Machines: Teaching Robots Right from Wrong, Oxford University Press, 2009, xi + 273 pp, ISBN: 978-0-19-537404-9. Herman T. Tavani - 2011 - Minds and Machines 21 (3):465-474.
  37. Privacy and artificial agents, or, is Google reading my email? Samir Chopra & Laurence White - manuscript
    In Proceedings of the International Joint Conference on Artificial Intelligence, 2007.
  38. The Puzzle of Evaluating Moral Cognition in Artificial Agents. Madeline G. Reinecke, Yiran Mao, Markus Kunesch, Edgar A. Duéñez-Guzmán, Julia Haas & Joel Z. Leibo - 2023 - Cognitive Science 47 (8):e13315.
    In developing artificial intelligence (AI), researchers often benchmark against human performance as a measure of progress. Is this kind of comparison possible for moral cognition? Given that human moral judgment often hinges on intangible properties like “intention” which may have no natural analog in artificial agents, it may prove difficult to design a “like‐for‐like” comparison between the moral behavior of artificial and human agents. What would a measure of moral behavior for both humans and AI (...)
  39. Categorization in artificial agents: Guidance on empirical research? William S.-Y. Wang & Tao Gong - 2005 - Behavioral and Brain Sciences 28 (4):511-512.
    By comparing mechanisms in nativism, empiricism, and culturalism, the target article by Steels & Belpaeme (S&B) emphasizes the influence of communicational constraint on sharing color categories. Our commentary suggests deeper considerations of some of their claims, and discusses some modifications that may help in the study of communicational constraints in both humans and robots.
  40. Bio-Agency and the Possibility of Artificial Agents. Anne Sophie Meincke - 2018 - In Alexander Christian, David Hommen, Nina Retzlaff & Gerhard Schurz (eds.), Philosophy of Science - Between the Natural Sciences, the Social Sciences, and the Humanities. Selected Papers from the 2016 conference of the German Society of Philosophy of Science. Dordrecht, Netherlands: pp. 65-93.
    Within the philosophy of biology, recently promising steps have been made towards a biologically grounded concept of agency. Agency is described as bio-agency: the intrinsically normative adaptive behaviour of human and non-human organisms, arising from their biological autonomy. My paper assesses the bio-agency approach by examining criticism recently directed by its proponents against the project of embodied robotics. Defenders of the bio-agency approach have claimed that embodied robots do not, and for fundamental reasons cannot, qualify as artificial agents (...)
     
    1 citation
  41. Understanding Sophia? On human interaction with artificial agents. Thomas Fuchs - 2024 - Phenomenology and the Cognitive Sciences 23 (1):21-42.
    Advances in artificial intelligence (AI) create an increasing similarity between the performance of AI systems or AI-based robots and human communication. They raise the questions: whether it is possible to communicate with, understand, and even empathically perceive artificial agents; whether we should ascribe actual subjectivity and thus quasi-personal status to them beyond a certain level of simulation; what will be the impact of an increasing dissolution of the distinction between simulated and real encounters. (1) To answer these (...)
    2 citations
  42. Representation in natural and artificial agents. M. Bickhard - 1999 - In Edwina Taborsky (ed.), Semiosis. Evolution. Energy: Towards a Reconceptualization of the Sign. Shaker Verlag. pp. 15-26.
  43. The influence of epistemology on the design of artificial agents. Mark Lee & Nick Lacey - 2003 - Minds and Machines 13 (3):367-395.
    Unlike natural agents, artificial agents are, to varying extent, designed according to sets of principles or assumptions. We argue that the designer's philosophical position on truth, belief and knowledge has far-reaching implications for the design and performance of the resulting agents. Of the many sources of design information and background we believe philosophical theories are under-rated as valuable influences on the design process. To explore this idea we have implemented some computer-based agents with their (...)
    3 citations
  44. Beyond persons: extending the personal/subpersonal distinction to non-rational animals and artificial agents. Manuel de Pinedo-Garcia & Jason Noble - 2008 - Biology and Philosophy 23 (1):87-100.
    The distinction between personal level explanations and subpersonal ones has been subject to much debate in philosophy. We understand it as one between explanations that focus on an agent’s interaction with its environment, and explanations that focus on the physical or computational enabling conditions of such an interaction. The distinction, understood this way, is necessary for a complete account of any agent, rational or not, biological or artificial. In particular, we review some recent research in Artificial Life that (...)
    7 citations
  45. Towards socially-competent and culturally-adaptive artificial agents. Chiara Bassetti, Enrico Blanzieri, Stefano Borgo & Sofia Marangon - 2022 - Interaction Studies 23 (3):469-512.
    The development of artificial agents for social interaction pushes to enrich robots with social skills and knowledge about (local) social norms. One possibility is to distinguish the expressive and the functional orders during a human-robot interaction. The overarching aim of this work is to set a framework to make the artificial agent socially-competent beyond dyadic interaction – interaction in varying multi-party social situations – and beyond individual-based user personalization, thereby enlarging the current conception of “culturally-adaptive”. The core (...)
    1 citation
  46. Trust and multi-agent systems: applying the diffuse, default model of trust to experiments involving artificial agents. [REVIEW] Jeff Buechner & Herman T. Tavani - 2011 - Ethics and Information Technology 13 (1):39-51.
    We argue that the notion of trust, as it figures in an ethical context, can be illuminated by examining research in artificial intelligence on multi-agent systems in which commitment and trust are modeled. We begin with an analysis of a philosophical model of trust based on Richard Holton’s interpretation of P. F. Strawson’s writings on freedom and resentment, and we show why this account of trust is difficult to extend to artificial agents (AAs) as well as to (...)
    15 citations
  47. Artificial agency, consciousness, and the criteria for moral agency: What properties must an artificial agent have to be a moral agent? [REVIEW] Kenneth Einar Himma - 2009 - Ethics and Information Technology 11 (1):19-29.
    In this essay, I describe and explain the standard accounts of agency, natural agency, artificial agency, and moral agency, as well as articulate what are widely taken to be the criteria for moral agency, supporting the contention that this is the standard account with citations from such widely used and respected professional resources as the Stanford Encyclopedia of Philosophy, Routledge Encyclopedia of Philosophy, and the Internet Encyclopedia of Philosophy. I then flesh out the implications of some of these well-settled (...)
    70 citations
  48. Arguments as Drivers of Issue Polarisation in Debates Among Artificial Agents. Felix Kopecky - 2022 - Journal of Artificial Societies and Social Simulation 25 (1).
    Can arguments and their properties influence the development of issue polarisation in debates among artificial agents? This paper presents an agent-based model of debates with logical constraints based on the theory of dialectical structures. Simulations on this model reveal that the exchange of arguments can drive polarisation even without social influence, and that the usage of different argumentation strategies can influence the obtained levels of polarisation.
    1 citation
  49. Learning to Manipulate and Categorize in Human and Artificial Agents. Giuseppe Morlino, Claudia Gianelli, Anna M. Borghi & Stefano Nolfi - 2015 - Cognitive Science 39 (1):39-64.
    This study investigates the acquisition of integrated object manipulation and categorization abilities through a series of experiments in which human adults and artificial agents were asked to learn to manipulate two-dimensional objects that varied in shape, color, weight, and color intensity. The analysis of the obtained results and the comparison of the behavior displayed by human and artificial agents allowed us to identify the key role played by features affecting the agent/environment interaction, the relation between category (...)
    4 citations
  50. The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents. [REVIEW] Nick Bostrom - 2012 - Minds and Machines 22 (2):71-85.
    This paper discusses the relation between intelligence and motivation in artificial agents, developing and briefly arguing for two theses. The first, the orthogonality thesis, holds (with some caveats) that intelligence and final goals (purposes) are orthogonal axes along which possible artificial intellects can freely vary—more or less any level of intelligence could be combined with more or less any final goal. The second, the instrumental convergence thesis, holds that as long as they possess a sufficient level of (...)
    34 citations
1 — 50 / 993