References

  • Language and Intelligence. Carlos Montemayor - 2021 - Minds and Machines 31 (4):471-486.
    This paper explores aspects of GPT-3 that have been discussed as harbingers of artificial general intelligence and, in particular, linguistic intelligence. After introducing key features of GPT-3 and assessing its performance in the light of the conversational standards set by Alan Turing in his seminal paper from 1950, the paper elucidates the difference between clever automation and genuine linguistic intelligence. A central theme of this discussion on genuine conversational intelligence is that members of a linguistic community never merely respond “algorithmically” (...)
  • Claims and Challenges in Evaluating Human-Level Intelligent Systems. John E. Laird, Robert Wray, Robert Marinier & Pat Langley - 2009 - In B. Goertzel, P. Hitzler & M. Hutter (eds.), Proceedings of the Second Conference on Artificial General Intelligence. Atlantis Press.
  • Advantages of Artificial Intelligences, Uploads, and Digital Minds. Kaj Sotala - 2012 - International Journal of Machine Consciousness 4 (1):275-291.
    I survey four categories of factors that might give a digital mind, such as an upload or an artificial general intelligence, an advantage over humans. Hardware advantages include greater serial speeds and greater parallel speeds. Self-improvement advantages include improvement of algorithms, design of new mental modules, and modification of motivational system. Co-operative advantages include copyability, perfect co-operation, improved communication, and transfer of skills. Human handicaps include computational limitations and faulty heuristics, human-centric biases, and socially motivated cognition. The shape of hardware (...)
  • Computational Functionalism for the Deep Learning Era. Ezequiel López-Rubio - 2018 - Minds and Machines 28 (4):667-688.
    Deep learning is a kind of machine learning which happens in a certain type of artificial neural networks called deep networks. Artificial deep networks, which exhibit many similarities with biological ones, have consistently shown human-like performance in many intelligent tasks. This poses the question whether this performance is caused by such similarities. After reviewing the structure and learning processes of artificial and biological neural networks, we outline two important reasons for the success of deep learning, namely the extraction of successively (...)
  • Artificial Interdisciplinarity: Artificial Intelligence for Research on Complex Societal Problems. Seth D. Baum - 2020 - Philosophy and Technology 34 (1):45-63.
    This paper considers the question: In what ways can artificial intelligence assist with interdisciplinary research for addressing complex societal problems and advancing the social good? Problems such as environmental protection, public health, and emerging technology governance do not fit neatly within traditional academic disciplines and therefore require an interdisciplinary approach. However, interdisciplinary research poses large cognitive challenges for human researchers that go beyond the substantial challenges of narrow disciplinary research. The challenges include epistemic divides between disciplines, the massive bodies of (...)
  • Intelligence as Accurate Prediction. Trond A. Tjøstheim & Andreas Stephens - 2021 - Review of Philosophy and Psychology 1:1-25.
    This paper argues that intelligence can be approximated by the ability to produce accurate predictions. It is further argued that general intelligence can be approximated by context dependent predictive abilities combined with the ability to use working memory to abstract away contextual information. The flexibility associated with general intelligence can be understood as the ability to use selective attention to focus on specific aspects of sensory impressions to identify patterns, which can then be used to predict events in novel situations (...)
  • Approval-Directed Agency and the Decision Theory of Newcomb-Like Problems. Caspar Oesterheld - 2019 - Synthese 198 (Suppl 27):6491-6504.
    Decision theorists disagree about how instrumentally rational agents, i.e., agents trying to achieve some goal, should behave in so-called Newcomb-like problems, with the main contenders being causal and evidential decision theory. Since the main goal of artificial intelligence research is to create machines that make instrumentally rational decisions, the disagreement pertains to this field. In addition to the more philosophical question of what the right decision theory is, the goal of AI poses the question of how to implement any given (...)
  • Intuition, Intelligence, Data Compression. Jens Kipper - 2019 - Synthese 198 (Suppl 27):6469-6489.
    The main goal of my paper is to argue that data compression is a necessary condition for intelligence. One key motivation for this proposal stems from a paradox about intuition and intelligence. For the purposes of this paper, it will be useful to consider playing board games—such as chess and Go—as a paradigm of problem solving and cognition, and computer programs as a model of human cognition. I first describe the basic components of computer programs that play board games, namely (...)
  • Reframing Ethical Theory, Pedagogy, and Legislation to Bias Open Source AGI Towards Friendliness and Wisdom. John Gray Cox - 2015 - Journal of Evolution and Technology 25 (2):39-54.
    Hopes for biasing the odds towards the development of AGI that is human-friendly depend on finding and employing ethical theories and practices that can be incorporated successfully in the construction; programming and/or developmental growth; education and mature life world of future AGI. Mainstream ethical theories are ill-adapted for this purpose because of their mono-logical decision procedures which aim at “Golden rule” style principles and judgments which are objective in the sense of being universal and absolute. A much more helpful framework (...)
  • What Is Intelligence in the Context of AGI? Dan J. Bruiger - manuscript
    Lack of coherence in concepts of intelligence has implications for artificial intelligence. ‘Intelligence’ is an abstraction grounded in human experience while supposedly freed from the embodiment that is the basis of that experience. In addition to physical instantiation, embodiment is a condition of dependency, of an autopoietic system upon an environment, which thus matters to the system itself. The autonomy and general capability sought in artificial general intelligence implies artificially re-creating the organism’s natural condition of embodiment. That may not be (...)
  • Extending Environments to Measure Self-Reflection in Reinforcement Learning. Samuel Allen Alexander, Michael Castaneda, Kevin Compher & Oscar Martinez - manuscript
    We consider an extended notion of reinforcement learning in which the environment can simulate the agent and base its outputs on the agent's hypothetical behavior. Since good performance usually requires paying attention to whatever things the environment's outputs are based on, we argue that for an agent to achieve on-average good performance across many such extended environments, it is necessary for the agent to self-reflect. Thus, an agent's self-reflection ability can be numerically estimated by running the agent through a battery (...)
  • Towards a Unified Framework for Developing Ethical and Practical Turing Tests. Balaji Srinivasan & Kushal Shah - 2019 - AI and Society 34 (1):145-152.
    Since Turing proposed the first test of intelligence, several modifications have been proposed with the aim of making Turing’s proposal more realistic and applicable in the search for artificial intelligence. In the modern context, it turns out that some of these definitions of intelligence and the corresponding tests merely measure computational power. Furthermore, in the framework of the original Turing test, for a system to prove itself to be intelligent, a certain amount of deceit is implicitly required which can have (...)
  • In Search of the Moral Status of AI: Why Sentience Is a Strong Argument. Martin Gibert & Dominic Martin - 2021 - AI and Society 1:1-12.
    Is it OK to lie to Siri? Is it bad to mistreat a robot for our own pleasure? Under what condition should we grant a moral status to an artificial intelligence system? This paper looks at different arguments for granting moral status to an AI system: the idea of indirect duties, the relational argument, the argument from intelligence, the arguments from life and information, and the argument from sentience. In each but the last case, we find unresolved issues with the (...)
  • Safety Engineering for Artificial General Intelligence. Roman Yampolskiy & Joshua Fox - 2013 - Topoi 32 (2):217-226.
    Machine ethics and robot rights are quickly becoming hot topics in artificial intelligence and robotics communities. We will argue that attempts to attribute moral agency and assign rights to all intelligent machines are misguided, whether applied to infrahuman or superhuman AIs, as are proposals to limit the negative effects of AIs by constraining their behavior. As an alternative, we propose a new science of safety engineering for intelligent artificial agents based on maximizing for what humans value. In particular, we challenge (...)
  • Plans or Outcomes: How Do We Attribute Intelligence to Others? Marta Kryven, Tomer D. Ullman, William Cowan & Joshua B. Tenenbaum - 2021 - Cognitive Science 45 (9):e13041.
  • Revisiting Turing and His Test: Comprehensiveness, Qualia, and the Real World. Vincent C. Müller & Aladdin Ayesh (eds.) - 2012 - AISB.
    Proceedings of the papers presented at the Symposium on "Revisiting Turing and his Test: Comprehensiveness, Qualia, and the Real World" at the 2012 AISB and IACAP Symposium that was held in the Turing year 2012, 2–6 July at the University of Birmingham, UK. Ten papers. - http://www.pt-ai.org/turing-test --- Daniel Devatman Hromada: From Taxonomy of Turing Test-Consistent Scenarios Towards Attribution of Legal Status to Meta-modular Artificial Autonomous Agents - Michael Zillich: My Robot is Smarter than Your Robot: On the Need for (...)
  • Artificial Superintelligence and Its Limits: Why AlphaZero Cannot Become a General Agent. Karim Jebari & Joakim Lundborg - forthcoming - AI and Society.
    An intelligent machine surpassing human intelligence across a wide set of skills has been proposed as a possible existential catastrophe. Among those concerned about existential risk related to artificial intelligence, it is common to assume that AI will not only be very intelligent, but also be a general agent. This article explores the characteristics of machine agency, and what it would mean for a machine to become a general agent. In particular, it does so by articulating some important differences between (...)
  • Artificial Intelligence, Values, and Alignment. Iason Gabriel - 2020 - Minds and Machines 30 (3):411-437.
    This paper looks at philosophical questions that arise in the context of AI alignment. It defends three propositions. First, normative and technical aspects of the AI alignment problem are interrelated, creating space for productive engagement between people working in both domains. Second, it is important to be clear about the goal of alignment. There are significant differences between AI that aligns with instructions, intentions, revealed preferences, ideal preferences, interests and values. A principle-based approach to AI alignment, which combines these elements (...)
  • Building Thinking Machines by Solving Animal Cognition Tasks. Matthew Crosby - 2020 - Minds and Machines 30 (4):589-615.
    In ‘Computing Machinery and Intelligence’, Turing, sceptical of the question ‘Can machines think?’, quickly replaces it with an experimentally verifiable test: the imitation game. I suggest that for such a move to be successful the test needs to be relevant, expansive, solvable by exemplars, unpredictable, and lead to actionable research. The Imitation Game is only partially successful in this regard and its reliance on language, whilst insightful for partially solving the problem, has put AI progress on the wrong foot, prescribing (...)
  • Can Machines Think? An Old Question Reformulated. Achim Hoffmann - 2010 - Minds and Machines 20 (2):203-212.
    This paper revisits the often-debated question Can machines think? It is argued that the usual identification of machines with the notion of algorithm has been both counter-intuitive and counter-productive. This is based on the fact that the notion of algorithm just requires an algorithm to contain a finite but arbitrary number of rules. It is argued that intuitively people tend to think of an algorithm as having a rather limited number of rules. The paper will further propose a modification (...)
  • Twenty Years Beyond the Turing Test: Moving Beyond the Human Judges Too. José Hernández-Orallo - 2020 - Minds and Machines 30 (4):533-562.
    In the last 20 years the Turing test has been left further behind by new developments in artificial intelligence. At the same time, however, these developments have revived some key elements of the Turing test: imitation and adversarialness. On the one hand, many generative models, such as generative adversarial networks, build imitators under an adversarial setting that strongly resembles the Turing test. The term “Turing learning” has been used for this kind of setting. On the other hand, AI benchmarks are (...)
  • Machines and the Moral Community. Erica L. Neely - 2013 - Philosophy and Technology 27 (1):97-111.
    A key distinction in ethics is between members and nonmembers of the moral community. Over time, our notion of this community has expanded as we have moved from a rationality criterion to a sentience criterion for membership. I argue that a sentience criterion is insufficient to accommodate all members of the moral community; the true underlying criterion can be understood in terms of whether a being has interests. This may be extended to conscious, self-aware machines, as well as to any (...)
  • How Feasible Is the Rapid Development of Artificial Superintelligence? Kaj Sotala - 2017 - Physica Scripta 92 (11).
    What kinds of fundamental limits are there in how capable artificial intelligence (AI) systems might become? Two questions in particular are of interest: (1) How much more capable could AI become relative to humans, and (2) how easily could superhuman capability be acquired? To answer these questions, we will consider the literature on human expertise and intelligence, discuss its relevance for AI, and consider how AI could improve on humans in two major aspects of thought and expertise, namely simulation and (...)
  • On Potential Cognitive Abilities in the Machine Kingdom. José Hernández-Orallo & David L. Dowe - 2013 - Minds and Machines 23 (2):179-210.
    Animals, including humans, are usually judged on what they could become, rather than what they are. Many physical and cognitive abilities in the ‘animal kingdom’ are only acquired (to a given degree) when the subject reaches a certain stage of development, which can be accelerated or spoilt depending on how the environment, training or education is. The term ‘potential ability’ usually refers to how quick and likely the process of attaining the ability is. In principle, things should not be different (...)
  • The Archimedean Trap: Why Traditional Reinforcement Learning Will Probably Not Yield AGI. Samuel Allen Alexander - 2020 - Journal of Artificial General Intelligence 11 (1):70-85.
    After generalizing the Archimedean property of real numbers in such a way as to make it adaptable to non-numeric structures, we demonstrate that the real numbers cannot be used to accurately measure non-Archimedean structures. We argue that, since an agent with Artificial General Intelligence (AGI) should have no problem engaging in tasks that inherently involve non-Archimedean rewards, and since traditional reinforcement learning rewards are real numbers, therefore traditional reinforcement learning probably will not lead to AGI. We indicate two possible ways (...)
  • Intelligence via Ultrafilters: Structural Properties of Some Intelligence Comparators of Deterministic Legg-Hutter Agents. Samuel Alexander - 2019 - Journal of Artificial General Intelligence 10 (1):24-45.
    Legg and Hutter, as well as subsequent authors, considered intelligent agents through the lens of interaction with reward-giving environments, attempting to assign numeric intelligence measures to such agents, with the guiding principle that a more intelligent agent should gain higher rewards from environments in some aggregate sense. In this paper, we consider a related question: rather than measure the numeric intelligence of one Legg-Hutter agent, how can we compare the relative intelligence of two Legg-Hutter agents? We propose an elegant answer (...)
  • Reward Is Enough. David Silver, Satinder Singh, Doina Precup & Richard S. Sutton - forthcoming - Artificial Intelligence:103535.
  • Minimum Message Length and Statistically Consistent Invariant (Objective?) Bayesian Probabilistic Inference—From (Medical) “Evidence”. David L. Dowe - 2008 - Social Epistemology 22 (4):433-460.
    “Evidence” in the form of data collected and analysis thereof is fundamental to medicine, health and science. In this paper, we discuss the “evidence-based” aspect of evidence-based medicine in terms of statistical inference, acknowledging that this latter field of statistical inference often also goes by various near-synonymous names—such as inductive inference (amongst philosophers), econometrics (amongst economists), machine learning (amongst computer scientists) and, in more recent times, data mining (in some circles). Three central issues to this discussion of “evidence-based” are (i) (...)
  • Ako a čím sa od seba odlišujú slabo, stredne a silne usmernené procesy [How and in What Ways Weakly, Moderately, and Strongly Directed Processes Differ from One Another]. Robert Burgan - 2012 - E-Logos 19 (1):1-31.
  • A Formal Mathematical Model of Cognitive Radio. Ramy A. Fathy, Ahmed A. Abdel-Hafez & Abd El-Halim A. Zekry - 2013 - International Journal of Computer and Information Technology 2 (4).
  • 20 Years After The Embodied Mind - Why Is Cognitivism Alive and Kicking? Vincent C. Müller - 2013 - In Blay Whitby & Joel Parthemore (eds.), Re-Conceptualizing Mental "Illness": The View from Enactivist Philosophy and Cognitive Science - AISB Convention 2013. AISB. pp. 47-49.
    I want to suggest that the major influence of classical arguments for embodiment like "The Embodied Mind" by Varela, Thompson & Rosch (1991) has been a changing of positions rather than a refutation: Cognitivism has found ways to retreat and regroup at positions that have better fortification, especially when it concerns theses about artificial intelligence or artificial cognitive systems. For example: a) 'Agent-based cognitivism' that understands humans as taking in representations of the world, doing rule-based processing and then acting on (...)
  • The Troublesome Explanandum in Plantinga’s Argument Against Naturalism. Yingjin Xu - 2011 - International Journal for Philosophy of Religion 69 (1):1-15.
    Intending to have a constructive dialogue with the combination of evolutionary theory (E) and metaphysical naturalism (N), Alvin Plantinga’s evolutionary argument against naturalism (EAAN) takes the reliability of human cognition (in normal environments) as a purported explanandum and E&N as a purported explanans. Then, he considers whether E&N can offer a good explanans for this explanandum, and his answer is negative (an answer employed by him to produce a defeater for N). But I will argue that the whole EAAN goes (...)
  • An Object-Oriented View on Problem Representation as a Search-Efficiency Facet: Minds Vs. Machines. [REVIEW] Reza Zamani - 2010 - Minds and Machines 20 (1):103-117.
    From an object-oriented perspective, this paper investigates the interdisciplinary aspects of problem representation as well as the differences between representation of problems in the mind and that in the machine. By defining an object as a combination of a symbol-structure and its associated operations, it shows how the representation of problems can become related to control, which conducts the search in finding a solution. Different types of representation of problems in the machine are classified into four categories, and in a similar (...)
  • The Assumptions on Knowledge and Resources in Models of Rationality. Pei Wang - 2011 - International Journal of Machine Consciousness 3 (1):193-218.