Results for 'Intelligent agents'

993 found
  1. Intelligent agents as innovations.Alexander Serenko & Brian Detlor - 2004 - AI and Society 18 (4):364-381.
    This paper explores the treatment of intelligent agents as innovations. Past writings in the area of intelligent agents focus on the technical merits and internal workings of agent-based solutions. By adopting a perspective on agents from an innovations point of view, a new and novel description of agents is put forth in terms of their degrees of innovativeness, competitive implications, and perceived characteristics. To facilitate this description, a series of innovation-based theoretical models are utilized (...)
  2. A Value-Sensitive Design Approach to Intelligent Agents.Steven Umbrello & Angelo Frank De Bellis - 2018 - In Roman Yampolskiy (ed.), Artificial Intelligence Safety and Security. CRC Press. pp. 395-410.
    This chapter proposes a novel design methodology called Value-Sensitive Design (VSD) and its potential application to the field of artificial intelligence research and design. It discusses the imperatives in adopting a design philosophy that embeds values into the design of artificial agents at the early stages of AI development. Because of the high stakes in the unmitigated design of artificial agents, this chapter proposes that even though VSD may turn out to be a less-than-optimal design methodology, it currently (...)
  3. Guilty Artificial Minds: Folk Attributions of Mens Rea and Culpability to Artificially Intelligent Agents.Michael T. Stuart & Markus Kneer - 2021 - Proceedings of the ACM on Human-Computer Interaction 5 (CSCW2).
    While philosophers hold that it is patently absurd to blame robots or hold them morally responsible [1], a series of recent empirical studies suggest that people do ascribe blame to AI systems and robots in certain contexts [2]. This is disconcerting: Blame might be shifted from the owners, users or designers of AI systems to the systems themselves, leading to the diminished accountability of the responsible human agents [3]. In this paper, we explore one of the potential underlying reasons (...)
  4. Are intelligible agents square?Clea F. Rees - unknown
    In How We Get Along, J. David Velleman argues for two related theses: first, that ‘making sense’ of oneself to oneself and others is a constitutive aim of action; second, that this fact about action grounds normativity. Examining each thesis in turn, I argue against the first that an agent may deliberately act in ways which make sense in terms of neither her self-conception nor others' conceptions of her. Against the second thesis, I argue that some vices are such that (...)
  5. Intelligent agents and liability: Is it a doctrinal problem or merely a problem of explanation? [REVIEW]Emad Abdel Rahim Dahiyat - 2010 - Artificial Intelligence and Law 18 (1):103-121.
    The question of liability in the case of using intelligent agents is far from simple, and cannot sufficiently be answered by deeming the human user as being automatically responsible for all actions and mistakes of his agent. Therefore, this paper is specifically concerned with the significant difficulties which might arise in this regard especially if the technology behind software agents evolves, or is commonly used on a larger scale. Furthermore, this paper contemplates whether or not it is (...)
  6. Intelligent agent supporting human–multi-robot team collaboration.Ariel Rosenfeld, Noa Agmon, Oleg Maksimov & Sarit Kraus - 2017 - Artificial Intelligence 252 (C):211-231.
  7. Socially Intelligent Agents: Towards a Science of Social Minds.K. Dautenhahn - forthcoming - Minds and Machines.
  8. Intelligent agents and contracts: Is a conceptual rethink imperative? [REVIEW]Emad Abdel Rahim Dahiyat - 2007 - Artificial Intelligence and Law 15 (4):375-390.
    The emergence of intelligent software agents that operate autonomously with little or no human intervention has generated many doctrinal questions at a conceptual level and has challenged the traditional rules of contract, especially those relating to intention as an essential requirement of any contract conclusion. In this paper, we will try to explore some of these challenges, and shed light on the conflict between the traditional contract theory and the transactional practice in the case of using (...) software agents. We will try further to examine how intelligent software agents differ from other software applications, and then consider how such differences are legally relevant. This paper, however, is not intended to provide the final answer to all questions and challenges in this regard, but to identify the main components, and provide perspectives on how to deal with such issues.
  9. Unplanned effects of intelligent agents on Internet use: a social informatics approach. [REVIEW]Alexander Serenko, Umar Ruhi & Mihail Cocosila - 2007 - AI and Society 21 (1-2):141-166.
    This paper instigates a discourse on the unplanned effects of intelligent agents in the context of their use on the Internet. By utilizing a social informatics framework as a lens of analysis, the study identifies several unanticipated consequences of using intelligent agents for information- and commerce-based tasks on the Internet. The effects include those that transpire over time at the organizational level, such as e-commerce transformation, operational encumbrance and security overload, as well as those that emerge (...)
  10. Modelling socially intelligent agents.Bruce Edmonds - manuscript
    The perspective of modelling agents rather than using them for a specified purpose entails a difference in approach; in particular, an emphasis on veracity as opposed to efficiency. An approach using evolving populations of mental models is described that goes some way to meet these concerns. It is then argued that social intelligence is not merely intelligence plus interaction but should allow for individual relationships to develop between agents. This means that, at least, agents must be able (...)
  11. Science with Artificially Intelligent Agents: The Case of Gerrymandered Hypotheses.Ioannis Votsis - unknown
    Barring some civilisation-ending natural or man-made catastrophe, future scientists will likely incorporate fully fledged artificially intelligent agents in their ranks. Their tasks will include the conjecturing, extending and testing of hypotheses. At present human scientists have a number of methods to help them carry out those tasks. These range from the well-articulated, formal and unexceptional rules to the semi-articulated rules-of-thumb and intuitive hunches. If we are to hand over at least some of the aforementioned tasks to artificially (...) agents, we need to find ways to make explicit and ultimately formal, not to mention computable, the more obscure of the methods that scientists currently employ with some measure of success in their inquiries. The focus of this talk is a problem for which the available solutions are at best semi-articulated and far from perfect. It concerns the question of how to conjecture new hypotheses or extend existing ones such that they do not save phenomena in gerrymandered or ad hoc ways. This talk puts forward a fully articulated formal solution to this problem by specifying what it is about the internal constitution of the content of a hypothesis that makes it gerrymandered or ad hoc. In doing so, it helps prepare the ground for the delegation of a full gamut of investigative duties to the artificially intelligent scientists of the future.
  12. EMIA: Emotion Model for Intelligent Agent.Krishna Asawa & Shikha Jain - 2015 - Journal of Intelligent Systems 24 (4):449-465.
    Emotions play a significant role in human cognitive processes such as attention, motivation, learning, memory, and decision making. Many researchers have worked in the field of incorporating emotions in a cognitive agent. However, each model has its own merits and demerits. Moreover, most studies on emotion focus on steady-state emotions rather than on emotion switching. Thus, in this article, a domain-independent computational model of emotions for an intelligent agent is proposed that has modules for emotion elicitation, emotion regulation, and emotion transition. The (...)
  13. Parameterizing mental model ascription across intelligent agents.Marjorie McShane - 2014 - Interaction Studies 15 (3):404-425.
    Mental model ascription – also called mindreading – is the process of inferring the mental states of others, which happens as a matter of course in social interactions. But although ubiquitous, mindreading is presumably a highly variable process: people mindread to different extents and with different results. We hypothesize that human mindreading ability relies on a large number of personal and contextual features: the inherent abilities of specific individuals, their current physical and mental states, their knowledge of the domain of (...)
  14. Parameterizing mental model ascription across intelligent agents.Marjorie McShane - 2014 - Interaction Studies 15 (3):404-425.
    Mental model ascription – also called mindreading – is the process of inferring the mental states of others, which happens as a matter of course in social interactions. But although ubiquitous, mindreading is presumably a highly variable process: people mindread to different extents and with different results. We hypothesize that human mindreading ability relies on a large number of personal and contextual features: the inherent abilities of specific individuals, their current physical and mental states, their knowledge of the domain of (...)
  15. Aliens in the Space of Reasons? On the Interaction Between Humans and Artificial Intelligent Agents.Bert Heinrichs & Sebastian Knell - 2021 - Philosophy and Technology 34 (4):1569-1580.
    In this paper, we use some elements of the philosophical theories of Wilfrid Sellars and Robert Brandom for examining the interactions between humans and machines. In particular, we adopt the concept of the space of reasons for analyzing the status of artificial intelligent agents (AIAs). One could argue that AIAs, like the widely used recommendation systems, have already entered the space of reasons, since they seem to make knowledge claims that we use as premises for further claims. This, in (...)
  16. Creative collaboration within heterogeneous human/intelligent agent teams.Christopher Kaczmarek - 2021 - Technoetic Arts 19 (3):269-281.
    As we move towards a world that is using machine learning and nascent artificial intelligence to analyse and, in many ways, guide most aspects of our lives, new forms of heterogeneous collaborative teams that include human/intelligent machine agents will become not just possible, but an inevitable part of our shared world. The conscious participation of the arts in the conversation about, and development and implementation of, these new collaborative possibilities is crucial, as the arts serve as our best (...)
  17. Instruments, agents, and artificial intelligence: novel epistemic categories of reliability.Eamon Duede - 2022 - Synthese 200 (6):1-20.
    Deep learning (DL) has become increasingly central to science, primarily due to its capacity to quickly, efficiently, and accurately predict and classify phenomena of scientific interest. This paper seeks to understand the principles that underwrite scientists’ epistemic entitlement to rely on DL in the first place and argues that these principles are philosophically novel. The question of this paper is not whether scientists can be justified in trusting in the reliability of DL. While today’s artificial intelligence exhibits characteristics common to (...)
  18. Flourishing Ethics and identifying ethical values to instill into artificially intelligent agents.Nesibe Kantar & Terrell Ward Bynum - 2022 - Metaphilosophy 53 (5):599-604.
    The present paper uses a Flourishing Ethics analysis to address the question of which ethical values and principles should be “instilled” into artificially intelligent agents. This is an urgent question that is still being asked seven decades after philosopher/scientist Norbert Wiener first asked it. An answer is developed by assuming that human flourishing is the central ethical value, which other ethical values, and related principles, can be used to defend and advance. The upshot is that Flourishing Ethics can (...)
  19. Context for language understanding by intelligent agents.Marjorie McShane & Sergei Nirenburg - 2019 - Applied Ontology 14 (4):415-449.
  20. Oscar: A cognitive architecture for intelligent agents.John Pollock - manuscript
    The “grand problem” of AI has always been to build artificial agents with human-like intelligence. That is the stuff of science fiction, but it is also the ultimate aspiration of AI. In retrospect, we can understand what a difficult problem this is, so since its inception AI has focused more on small manageable problems, with the hope that progress there will have useful implications for the grand problem. Now there is a resurgence of interest in tackling the grand problem (...)
  21. Interacting with Machines: Can an Artificially Intelligent Agent Be a Partner?Philipp Schmidt & Sophie Loidolt - 2023 - Philosophy and Technology 36 (3):1-32.
    In the past decade, the fields of machine learning and artificial intelligence (AI) have seen unprecedented developments that raise human-machine interactions (HMI) to the next level. Smart machines, i.e., machines endowed with artificially intelligent systems, have lost their character as mere instruments. This, at least, seems to be the case if one considers how humans experience their interactions with them. Smart machines are construed to serve complex functions involving increasing degrees of freedom, and they generate solutions not fully anticipated by (...)
  22. Introduction to Special Issue: Mental model ascription by intelligent agents.Marjorie McShane - 2014 - Interaction Studies 15 (3):vii-xii.
  23. Introduction to Special Issue: Mental model ascription by intelligent agents.Marjorie McShane - 2014 - Interaction Studies 15 (3):vii-xii.
  24. Oscar: A cognitive architecture for intelligent agents.John Pollock - 1990
    The “grand problem” of AI has always been to build artificial agents of human-level intelligence, capable of operating in environments of real-world complexity. OSCAR is a cognitive architecture for such agents, implemented in LISP. OSCAR is based on my extensive work in philosophy concerning both epistemology and rational decision making. This paper provides a detailed overview of OSCAR. The main conclusions are that such agents must be capable of operating against a background of pervasive ignorance, because the (...)
  25. Intelligence via ultrafilters: structural properties of some intelligence comparators of deterministic Legg-Hutter agents.Samuel Alexander - 2019 - Journal of Artificial General Intelligence 10 (1):24-45.
    Legg and Hutter, as well as subsequent authors, considered intelligent agents through the lens of interaction with reward-giving environments, attempting to assign numeric intelligence measures to such agents, with the guiding principle that a more intelligent agent should gain higher rewards from environments in some aggregate sense. In this paper, we consider a related question: rather than measure the numeric intelligence of one Legg-Hutter agent, how can we compare the relative intelligence of two Legg-Hutter agents? (...)
  26. Normal = Normative? The role of intelligent agents in norm innovation.Marco Campenní, Giulia Andrighetto, Federico Cecconi & Rosaria Conte - 2009 - Mind and Society 8 (2):153-172.
    The necessity to model the mental ingredients of norm compliance is a controversial issue within the study of norms. So far, the simulation-based study of norm emergence has shown a prevailing tendency to model norm conformity as a thoughtless behavior, emerging from social learning and imitation rather than from specific, norm-related mental representations. In this paper, the opposite stance—namely, a view of norms as hybrid, two-faceted phenomena, including a behavioral/social and an internal/mental side—is taken. Such a view is aimed at (...)
  27. Do Men Have No Need for “Feminist” Artificial Intelligence? Agentic and Gendered Voice Assistants in the Light of Basic Psychological Needs.Laura Moradbakhti, Simon Schreibelmayr & Martina Mara - 2022 - Frontiers in Psychology 13.
    Artificial Intelligence is supposed to perform tasks autonomously, make competent decisions, and interact socially with people. From a psychological perspective, AI can thus be expected to impact users’ three Basic Psychological Needs, namely autonomy, competence, and relatedness to others. While research highlights the fulfillment of these needs as central to human motivation and well-being, their role in the acceptance of AI applications has hitherto received little consideration. Addressing this research gap, our study examined the influence of BPN Satisfaction on Intention (...)
  28. Special issue on logics for intelligent agents and multi-agent systems.Mehmet A. Orgun, Guido Governatori, Chuchang Liu, Mark Reynolds & Abdul Sattar - 2011 - Journal of Applied Logic 9 (4):221-222.
  29. Integrating representation learning and skill learning in a human-like intelligent agent.Nan Li, Noboru Matsuda, William W. Cohen & Kenneth R. Koedinger - 2015 - Artificial Intelligence 219 (C):67-91.
  30. A generic distributed simulation system for intelligent agent design and evaluation.John Anderson - forthcoming - Proceedings of the Tenth Conference on AI, Simulation and Planning, AIS-2000, Society for Computer Simulation International.
  31. Emotional Intelligence and Coping Mechanisms among Selected Call Center Agents in Cebu City (2nd edition).Mark Anthony Polinar - 2023 - International Journal of Open-Access, Interdisciplinary and New Educational Discoveries of ETCOR Educational Research Center (3):827-838.
    This study evaluated how call center agents manage their emotions when interacting with customers with different emotional states. The coping mechanisms employees develop through experience can impact their communication and satisfaction with customer service. A study was conducted using a descriptive-correlational design in three Business Process Outsourcing companies in Cebu City, Philippines. The study aimed to determine employees' agreement and effectiveness in self-awareness, self-management, social awareness, and relationship management. An online sample size calculator was used to gather data, and (...)
  32. Intelligent virtual agents as language trainers facilitate multilingualism.Manuela Macedonia, Iris Groher & Friedrich Roithmayr - 2014 - Frontiers in Psychology 5:86783.
    In this paper we introduce a new generation of language trainers: intelligent virtual agents (IVAs) with human appearance and the capability to teach foreign language vocabulary. We report results from studies that we have conducted with Billie, an IVA employed as a vocabulary trainer, as well as research findings on the acceptance of the agent as a trainer by adults and children. The results show that Billie can train humans as well as a human teacher can and that (...)
  33. Agents preserving privacy on intelligent transportation systems according to EU law.Javier Carbo, Juanita Pedraza & Jose M. Molina - forthcoming - Artificial Intelligence and Law:1-34.
    Intelligent Transportation Systems are expected to automate how parking slots are booked by trucks. The intrinsic dynamic nature of this problem, the need for explanations and the inclusion of private data justify an agent-based solution. Agents solving this problem act with Belief-Desire-Intention reasoning, and are implemented with JASON. The privacy of trucks is protected by sharing a list of parkings ordered by preference. Furthermore, the process of assigning parking slots takes into account legal requirements on breaks and (...)
  34. Universal Agent Mixtures and the Geometry of Intelligence.Samuel Allen Alexander, David Quarel, Len Du & Marcus Hutter - 2023 - Aistats.
    Inspired by recent progress in multi-agent Reinforcement Learning (RL), in this work we examine the collective intelligent behaviour of theoretical universal agents by introducing a weighted mixture operation. Given a weighted set of agents, their weighted mixture is a new agent whose expected total reward in any environment is the corresponding weighted average of the original agents' expected total rewards in that environment. Thus, if RL agent intelligence is quantified in terms of performance across environments, the (...)
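    The mixture property stated in this abstract (the mixture agent's expected total reward in any environment equals the weighted average of the component agents' expected total rewards) can be checked with a minimal numerical sketch; the function and variable names below are illustrative assumptions, not code from the paper:

    ```python
    # Illustrative sketch of the weighted-mixture property (names are assumed,
    # not taken from the paper). The performance of agent i in environment e is
    # summarized by its expected total reward V[i][e]; the w-weighted mixture
    # picks agent i with probability w[i] and then acts as that agent, so its
    # expected total reward is the weighted average of the components' rewards.

    def mixture_value(values, weights):
        """Expected total reward of the weighted mixture in one environment."""
        assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
        return sum(w * v for w, v in zip(weights, values))

    # Two component agents evaluated across three environments.
    V = [[1.0, 0.0, 4.0],   # agent A's expected total rewards
         [3.0, 2.0, 0.0]]   # agent B's expected total rewards
    w = [0.25, 0.75]

    # In each environment the mixture's value is the weighted average.
    mix = [mixture_value([V[0][e], V[1][e]], w) for e in range(3)]
    print(mix)  # [2.5, 1.5, 1.0]
    ```

    Because the averaging holds environment by environment, any intelligence measure defined as an aggregate of per-environment values inherits the same linearity, which appears to be the geometric point the abstract describes.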
  35. Artificial intelligence and conversational agent evolution – a cautionary tale of the benefits and pitfalls of advanced technology in education, academic research, and practice.Curtis C. Cain, Carlos D. Buskey & Gloria J. Washington - 2023 - Journal of Information, Communication and Ethics in Society 21 (4):394-405.
    Purpose The purpose of this paper is to demonstrate the advancements in artificial intelligence (AI) and conversational agents, emphasizing their potential benefits while also highlighting the need for vigilant monitoring to prevent unethical applications. Design/methodology/approach As AI becomes more prevalent in academia and research, it is crucial to explore ways to ensure ethical usage of the technology and to identify potentially unethical usage. This manuscript uses a popular AI chatbot to write the introduction and parts of the body of (...)
  36. Rae Earnshaw and John Vince (eds): Intelligent agents for mobile and virtual media. [REVIEW]Richard Ennals - 2004 - AI and Society 18 (1):84-85.
  37. Conversational Artificial Intelligence in Psychotherapy: A New Therapeutic Tool or Agent?Jana Sedlakova & Manuel Trachsel - 2022 - American Journal of Bioethics 23 (5):4-13.
    Conversational artificial intelligence (CAI) presents many opportunities in the psychotherapeutic landscape—such as therapeutic support for people with mental health problems and without access to care. The adoption of CAI poses many risks that need in-depth ethical scrutiny. The objective of this paper is to complement current research on the ethics of AI for mental health by proposing a holistic, ethical, and epistemic analysis of CAI adoption. First, we focus on the question of whether CAI is rather a tool or an (...)
  38. Can Artificial Intelligence be an Autonomous Moral Agent? 신상규 - 2017 - Cheolhak-Korean Journal of Philosophy 132:265-292.
    The concept of a 'moral agent' has traditionally been applied only to personal beings with free will who can take responsibility for their own actions. However, the emergence of autonomous AI capable of performing a variety of actions with moral implications seems to demand a revision of this concept of agency. In this paper, I argue that an AI satisfying certain requirements can be granted the status of a moral agent in a functional sense that does not presuppose personhood. To that extent, it also becomes possible to attribute to AI a responsibility or accountability commensurate with its agency. To support this claim, the paper examines several anticipated objections (...)
  39. Measuring the intelligence of an idealized mechanical knowing agent.Samuel Alexander - 2020 - Lecture Notes in Computer Science 12226.
    We define a notion of the intelligence level of an idealized mechanical knowing agent. This is motivated by efforts within artificial intelligence research to define real-number intelligence levels of complicated intelligent systems. Our agents are more idealized, which allows us to define a much simpler measure of intelligence level for them. In short, we define the intelligence level of a mechanical knowing agent to be the supremum of the computable ordinals that have codes the agent knows to (...)
  40. GRASP agents: social first, intelligent later.Gert Jan Hofstede - 2019 - AI and Society 34 (3):535-543.
    This paper urges that if we wish to give social intelligence to our agents, it pays to look at how we acquired our social intelligence ourselves. We are born with drives and motives that are innate and deeply social. Next, as children we are socialized to acquire norms and values and to understand rituals large and small. These social elements are the core of our being. We capture them in the acronym GRASP: Groups, Rituals, Affiliation, Status, Power. As a (...)
  41. Embodied Intelligence: Smooth Coping in the Learning Intelligent Decision Agent Cognitive Architecture.Christian Kronsted, Sean Kugele, Zachariah A. Neemeh, Kevin J. Ryan & Stan Franklin - 2022 - Frontiers in Psychology 13.
    Much of our everyday, embodied action comes in the form of smooth coping. Smooth coping is skillful action that has become habituated and ingrained, generally placing less stress on cognitive load than considered and deliberative thought and action. When performed with skill and expertise, walking, driving, skiing, musical performances, and short-order cooking are all examples of the phenomenon. Smooth coping is characterized by its rapidity and relative lack of reflection, both being hallmarks of automatization. Deliberative and reflective actions provide the (...)
  42. Can agent-causation be rendered intelligible?: an essay on the etiology of free action.Andrei A. Buckareff - 1999 - Dissertation, Texas A&M University
    The doctrine of agent-causation has been suggested by many interested in defending libertarian theories of free action to provide the conceptual apparatus necessary to make the notion of incompatibilist freedom intelligible. In the present essay the conceptual viability of the doctrine of agent-causation will be assessed. It will be argued that agent-causation is, insofar as it is irreducible to event-causation, mysterious at best, totally unintelligible at worst. First, the arguments for agent-causation made by such eighteenth-century luminaries as Samuel Clarke and (...)
  43. Artificial Intelligence in Service of Human Needs: Pragmatic First Steps Toward an Ethics for Semi-Autonomous Agents.Travis N. Rieder, Brian Hutler & Debra J. H. Mathews - 2020 - American Journal of Bioethics Neuroscience 11 (2):120-127.
  44. Agents of History: Autonomous agents and crypto-intelligence.Bernard Dionysius Geoghegan - 2008 - Interaction Studies 9 (3):403-414.
    World War II research into cryptography and computing produced methods, instruments and research communities that informed early research into artificial intelligence and semi-autonomous computing. Alan Turing and Claude Shannon in particular adapted this research into early theories and demonstrations of AI based on computers’ abilities to track, predict and compete with opponents. This formed a loosely bound collection of techniques, paradigms, and practices I call crypto-intelligence. Subsequent researchers such as Joseph Weizenbaum adapted crypto-intelligence but also reproduced aspects of its antagonistic (...)
  45. Agents of History: Autonomous agents and crypto-intelligence.Bernard Dionysius Geoghegan - 2008 - Interaction Studies 9 (3):403-414.
    World War II research into cryptography and computing produced methods, instruments and research communities that informed early research into artificial intelligence and semi-autonomous computing. Alan Turing and Claude Shannon in particular adapted this research into early theories and demonstrations of AI based on computers’ abilities to track, predict and compete with opponents. This formed a loosely bound collection of techniques, paradigms, and practices I call crypto-intelligence. Subsequent researchers such as Joseph Weizenbaum adapted crypto-intelligence but also reproduced aspects of its antagonistic (...)
  46. Artificial Intelligence and Agentive Cognition: A Logico-linguistic Approach.Aziz Zambak & Roger Vergauwen - 2009 - Logique et Analyse 52 (205):57-96.
  47. Automatic Classification for Grouping Designs in Fashion Design Recommendation Agent System.Kyung-Yong Jung - 2006 - In O. Stock & M. Schaerf (eds.), Lecture Notes in Computer Science. Springer Verlag. pp. 4251--310.
  48. Agents and Artificial Intelligence.Jasper van den Herik, A. Rocha & J. Filipe (eds.) - 2017 - Springer.
  49. Artificial intelligence as a discursive practice: the case of embodied software agent systems. [REVIEW]Sean Zdenek - 2003 - AI and Society 17 (3-4):340-363.
    In this paper, I explore some of the ways in which Artificial Intelligence (AI) is mediated discursively. I assume that AI is informed by an “ancestral dream” to reproduce nature by artificial means. This dream drives the production of “cyborg discourse”, which hinges on the belief that human nature (especially intelligence) can be reduced to symbol manipulation and hence replicated in a machine. Cyborg discourse, I suggest, produces AI systems by rhetorical means; it does not merely describe AI systems or (...)
  50. An Analysis of the Interaction Between Intelligent Software Agents and Human Users.Christopher Burr, Nello Cristianini & James Ladyman - 2018 - Minds and Machines 28 (4):735-774.
    Interactions between an intelligent software agent (ISA) and a human user are ubiquitous in everyday situations such as access to information, entertainment, and purchases. In such interactions, the ISA mediates the user’s access to the content, or controls some other aspect of the user experience, and is not designed to be neutral about outcomes of user choices. Like human users, ISAs are driven by goals, make autonomous decisions, and can learn from experience. Using ideas from bounded rationality, we frame these (...)
1 — 50 / 993