About this topic
Summary: See the category "Philosophy of Artificial Intelligence."
Key works: See the category "Philosophy of Artificial Intelligence" for key works.
Introductions: See the category "Philosophy of Artificial Intelligence" for introductions.
Contents
170 found (showing 1–50)
  1. Revised: From Color, to Consciousness, toward Strong AI.Xinyuan Gu - manuscript
    This article cohesively discusses three topics: color and its perception, the yet-to-be-solved hard problem of consciousness, and the theoretical possibility of strong AI. First, the article restores color to the physical world by giving cross-species evidence. Second, the article proposes a dual-field with function Q hypothesis (DFFQ) which might explain the ‘first-person point of view’ and thus the hard problem of consciousness. Finally, the article discusses what DFFQ might bring to artificial intelligence and how it might allow strong (...)
  2. Conditional and Modal Reasoning in Large Language Models.Wesley H. Holliday & Matthew Mandelkern - manuscript
    The reasoning abilities of large language models (LLMs) are the topic of a growing body of research in artificial intelligence and cognitive science. In this paper, we probe the extent to which a dozen LLMs are able to distinguish logically correct inferences from logically fallacious ones. We focus on inference patterns involving conditionals (e.g., 'If Ann has a queen, then Bob has a jack') and epistemic modals (e.g., 'Ann might have an ace', 'Bob must have a king'). These inference patterns (...)
  3. The Unlikeliest of Duos; Why Super Intelligent AI Will Cooperate with Humans.Griffin Pithie - manuscript
    The focus of this article is the "good-will theory", which explains the effect humans can have on the safety of AI, along with how it is in the best interest of a superintelligent AI to work alongside humans rather than overpower them. Future papers dealing with the good-will theory will be published and will discuss different talking points regarding possible or real objections to the theory.
  4. Probable General Intelligence algorithm.Anton Venglovskiy - manuscript
    The article contains a description of a generalized and constructive formal model of the processes of subjective and creative thinking. According to the author, the algorithm presented in the article is capable of real and arbitrarily complex thinking and is potentially able to report on the presence of consciousness.
  5. “Even an AI could do that”.Emanuele Arielli - forthcoming - Http://Manovich.Net/Index.Php/Projects/Artificial-Aesthetics.
    Chapter 1 of the ongoing online publication "Artificial Aesthetics: A Critical Guide to AI, Media and Design", Lev Manovich and Emanuele Arielli -/- Book information: Assume you're a designer, an architect, a photographer, a videographer, a curator, an art historian, a musician, a writer, an artist, or any other creative professional or student. Perhaps you're a digital content creator who works across multiple platforms. Alternatively, you could be an art historian, curator, or museum professional. -/- You may be wondering how (...)
  6. Will AI avoid exploitation? Artificial general intelligence and expected utility theory.Adam Bales - forthcoming - Philosophical Studies:1-20.
    A simple argument suggests that we can fruitfully model advanced AI systems using expected utility theory. According to this argument, an agent will need to act as if maximising expected utility if they’re to avoid exploitation. Insofar as we should expect advanced AI to avoid exploitation, it follows that we should expect advanced AI to act as if maximising expected utility. I spell out this argument more carefully and demonstrate that it fails, but show that the manner of its failure (...)
    2 citations
  7. Designing new Intelligent Machines (COMETT European Symposium, Liège April 1992).D. M. Dubois - forthcoming - Communication and Cognition-Artificial Intelligence.
  8. How to deal with risks of AI suffering.Leonard Dung - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    Suffering is bad. This is why, ceteris paribus, there are strong moral reasons to prevent suffering. Moreover, typically, those moral reasons are stronger when the amount of suffering at st...
    1 citation
  9. The Simulation Hypothesis, Social Knowledge, and a Meaningful Life.Grace Helton - forthcoming - Oxford Studies in Philosophy of Mind.
    (Draft of Feb 2023, see upcoming issue for Chalmers' reply) In Reality+: Virtual Worlds and the Problems of Philosophy, David Chalmers argues, among other things, that: if we are living in a full-scale simulation, we would still enjoy broad swathes of knowledge about non-psychological entities, such as atoms and shrubs; and, our lives might still be deeply meaningful. Chalmers views these claims as at least weakly connected: The former claim helps forestall a concern that if objects in the simulation are (...)
    1 citation
  10. Making AI Intelligible: Philosophical Foundations. By Herman Cappelen and Josh Dever. [REVIEW]Nikhil Mahant - forthcoming - Philosophical Quarterly.
    Linguistic outputs generated by modern machine-learning neural net AI systems seem to have the same contents—i.e., meaning, semantic value, etc.—as the corresponding human-generated utterances and texts. Building upon this essential premise, Herman Cappelen and Josh Dever's Making AI Intelligible sets for itself the task of addressing the question of how AI-generated outputs have the contents that they seem to have (henceforth, ‘the question of AI Content’). In pursuing this ambitious task, the book makes several high-level, framework observations about how a (...)
  11. Explaining Explanations in AI.Brent Mittelstadt - forthcoming - FAT* 2019 Proceedings 1.
    Recent work on interpretability in machine learning and AI has focused on the building of simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break. However, when considering any such model it’s important to remember Box’s maxim that "All models are wrong but some are useful." We focus on (...)
    42 citations
  12. Axe the X in XAI: A Plea for Understandable AI.Andrés Páez - forthcoming - In Juan Manuel Durán & Giorgia Pozzi (eds.), Philosophy of science for machine learning: Core issues and new perspectives. Springer.
    In a recent paper, Erasmus et al. (2021) defend the idea that the ambiguity of the term “explanation” in explainable AI (XAI) can be solved by adopting any of four different extant accounts of explanation in the philosophy of science: the Deductive Nomological, Inductive Statistical, Causal Mechanical, and New Mechanist models. In this chapter, I show that the authors’ claim that these accounts can be applied to deep neural networks as they would to any natural phenomenon is mistaken. I also (...)
  13. Unjustified Sample Sizes and Generalizations in Explainable AI Research: Principles for More Inclusive User Studies.Uwe Peters & Mary Carman - forthcoming - IEEE Intelligent Systems.
    Many ethical frameworks require artificial intelligence (AI) systems to be explainable. Explainable AI (XAI) models are frequently tested for their adequacy in user studies. Since different people may have different explanatory needs, it is important that participant samples in user studies are large enough to represent the target population to enable generalizations. However, it is unclear to what extent XAI researchers reflect on and justify their sample sizes or avoid broad generalizations across people. We analyzed XAI user studies (N = (...)
    1 citation
  14. Some resonances between Eastern thought and Integral Biomathics in the framework of the WLIMES formalism for modelling living systems.Plamen L. Simeonov & Andree C. Ehresmann - forthcoming - Progress in Biophysics and Molecular Biology 131 (Special).
    Forty-two years ago, Capra published “The Tao of Physics” (Capra, 1975). In this book (page 17) he writes: “The exploration of the atomic and subatomic world in the twentieth century has …. necessitated a radical revision of many of our basic concepts” and that, unlike ‘classical’ physics, the sub-atomic and quantum “modern physics” shows resonances with Eastern thoughts and “leads us to a view of the world which is very similar to the views held by mystics of all ages and (...)
    1 citation
  15. Augustine and an artificial soul.Jeffrey White - forthcoming - Embodied Intelligence 2023.
    Prior work proposes a view of the development of purpose and the source of meaning in life as a more or less temporally distal project: an ideal self-situation in terms of which intermediate situations are experienced and prospects evaluated. This work considers Augustine on ensoulment alongside current work on the self as adapted routines to common social regularities of the sort that Augustine found deficient. How can we account for such diversity of self-reported value orientation in terms of common structural dynamics differently developed, embodied (...)
  16. Unveiling the Creation of AI-Generated Artworks: Broadening Worringerian Abstraction and Empathy Beyond Contemplation.Leonardo Arriagada - 2024 - Estudios Artísticos 10 (16):142-158.
    In his groundbreaking work, Abstraction and Empathy, Wilhelm Worringer delved into the intricacies of various abstract and figurative artworks, contending that they evoke distinct impulses in the human audience—specifically, the urges towards abstraction and empathy. This article asserts the presence of empirical evidence supporting the extension of Worringer’s concepts beyond the realm of art appreciation to the domain of art-making. Consequently, it posits that abstraction and empathy serve as foundational principles guiding the production of both abstract and figurative art. This (...)
  17. ChatGPT and the Technology-Education Tension: Applying Contextual Virtue Epistemology to a Cognitive Artifact.Guido Cassinadri - 2024 - Philosophy and Technology 37 (14):1-28.
    According to virtue epistemology, the main aim of education is the development of the cognitive character of students (Pritchard, 2014, 2016). Given the proliferation of technological tools such as ChatGPT and other LLMs for solving cognitive tasks, how should educational practices incorporate the use of such tools without undermining the cognitive character of students? Pritchard (2014, 2016) argues that it is possible to properly solve this ‘technology-education tension’ (TET) by combining the virtue epistemology framework with the theory of extended cognition (...)
  18. The argument for near-term human disempowerment through AI.Leonard Dung - 2024 - AI and Society:1-14.
    Many researchers and intellectuals warn about extreme risks from artificial intelligence. However, these warnings typically come without systematic arguments in their support. This paper provides an argument that AI will lead to the permanent disempowerment of humanity, e.g. human extinction, by 2100. It rests on four substantive premises which it motivates and defends: first, the speed of advances in AI capability, as well as the capability level current systems have already reached, suggest that it is practically possible to build AI systems (...)
    1 citation
  19. Understanding Sophia? On human interaction with artificial agents.Thomas Fuchs - 2024 - Phenomenology and the Cognitive Sciences 23 (1):21-42.
    Advances in artificial intelligence (AI) create an increasing similarity between the performance of AI systems or AI-based robots and human communication. They raise the questions: whether it is possible to communicate with, understand, and even empathically perceive artificial agents; whether we should ascribe actual subjectivity and thus quasi-personal status to them beyond a certain level of simulation; what will be the impact of an increasing dissolution of the distinction between simulated and real encounters. (1) To answer these questions, the paper (...)
    2 citations
  20. Universal Agent Mixtures and the Geometry of Intelligence.Samuel Allen Alexander, David Quarel, Len Du & Marcus Hutter - 2023 - AISTATS.
    Inspired by recent progress in multi-agent Reinforcement Learning (RL), in this work we examine the collective intelligent behaviour of theoretical universal agents by introducing a weighted mixture operation. Given a weighted set of agents, their weighted mixture is a new agent whose expected total reward in any environment is the corresponding weighted average of the original agents' expected total rewards in that environment. Thus, if RL agent intelligence is quantified in terms of performance across environments, the weighted mixture's intelligence is (...)
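    A schematic rendering of the mixture property stated in this abstract, with illustrative notation that is not necessarily the authors': writing \(V^{\pi}_{\mu}\) for the expected total reward of agent \(\pi\) in environment \(\mu\), a weighted mixture \(\pi_{\mathrm{mix}}\) of agents \(\pi_1, \dots, \pi_n\) with weights \(w_i \ge 0\), \(\sum_i w_i = 1\), satisfies
    \[ V^{\pi_{\mathrm{mix}}}_{\mu} \;=\; \sum_{i=1}^{n} w_i \, V^{\pi_i}_{\mu} \qquad \text{for every environment } \mu. \]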
  21. La scorciatoia.Nello Cristianini - 2023 - Bologna: Il Mulino.
    La scorciatoia: how machines became intelligent without thinking in a human way. -/- Our creatures are different from us and sometimes stronger; to live with them we must learn to know them. They screen CVs, grant mortgages, and choose the news we read: intelligent machines have entered our lives, but they are not what we expected. They do many of the things we wanted, and even a few more, yet we cannot understand them or reason with them, because their behavior is (...)
  22. Explaining Go: Challenges in Achieving Explainability in AI Go Programs.Zack Garrett - 2023 - Journal of Go Studies 17 (2):29-60.
    There has been a push in recent years to provide better explanations for how AIs make their decisions. Most of this push has come from the ethical concerns that go hand in hand with AIs making decisions that affect humans. Outside of the strictly ethical concerns that have prompted the study of explainable AIs (XAIs), there has been research interest in the mere possibility of creating XAIs in various domains. In general, the more accurate we make our models the harder (...)
  23. The best game in town: The reemergence of the language-of-thought hypothesis across the cognitive sciences.Jake Quilty-Dunn, Nicolas Porot & Eric Mandelbaum - 2023 - Behavioral and Brain Sciences 46:e261.
    Mental representations remain the central posits of psychology after many decades of scrutiny. However, there is no consensus about the representational format(s) of biological cognition. This paper provides a survey of evidence from computational cognitive psychology, perceptual psychology, developmental psychology, comparative psychology, and social psychology, and concludes that one type of format that routinely crops up is the language-of-thought (LoT). We outline six core properties of LoTs: (i) discrete constituents; (ii) role-filler independence; (iii) predicate–argument structure; (iv) logical operators; (v) inferential (...)
    9 citations
  24. How far can we get in creating a digital replica of a philosopher?Anna Strasser, Eric Schwitzgebel & Matthew Crosby - 2023 - In Raul Hakli, Pekka Mäkelä & Johanna Seibt (eds.), Social Robots in Social Institutions. Proceedings of Robophilosophy 2022. IOS PRESS. pp. 371-380.
    Can we build machines with which we can have interesting conversations? Observing the new optimism in AI regarding deep learning and new language models, we set ourselves an ambitious goal: we want to find out how far we can get in creating a digital replica of a philosopher. This project has two aims: one, more technical, investigates how the best model can be built; the other, more philosophical, explores the limits and risks that accompany the creation (...)
  25. Pseudo-visibility: A Game Mechanic Involving Willful Ignorance.Samuel Allen Alexander & Arthur Paul Pedersen - 2022 - FLAIRS-35.
    We present a game mechanic called pseudo-visibility for games inhabited by non-player characters (NPCs) driven by reinforcement learning (RL). NPCs are incentivized to pretend they cannot see pseudo-visible players: the training environment simulates an NPC to determine how the NPC would act if the pseudo-visible player were invisible, and penalizes the NPC for acting differently. NPCs are thereby trained to selectively ignore pseudo-visible players, except when they judge that the reaction penalty is an acceptable tradeoff (e.g., a guard might accept (...)
    1 citation
  26. AI-aesthetics and the Anthropocentric Myth of Creativity.Emanuele Arielli & Lev Manovich - 2022 - NODES 1 (19-20).
    Since the beginning of the 21st century, technologies like neural networks, deep learning and “artificial intelligence” (AI) have gradually entered the artistic realm. We witness the development of systems that aim to assess, evaluate and appreciate artifacts according to artistic and aesthetic criteria or by observing people’s preferences. In addition to that, AI is now used to generate new synthetic artifacts. When a machine paints a Rembrandt, composes a Bach sonata, or completes a Beethoven symphony, we say that this is (...)
    2 citations
  27. Extending the Is-ought Problem to Top-down Artificial Moral Agents.Robert James M. Boyles - 2022 - Symposion: Theoretical and Applied Inquiries in Philosophy and Social Sciences 9 (2):171–189.
    This paper further cashes out the notion that particular types of intelligent systems are susceptible to the is-ought problem, which espouses the thesis that no evaluative conclusions may be inferred from factual premises alone. Specifically, it focuses on top-down artificial moral agents, providing ancillary support to the view that these kinds of artifacts are not capable of producing genuine moral judgements. Such is the case given that machines built via the classical programming approach are always composed of two parts, namely: (...)
  28. The Bias Dilemma: The Ethics of Algorithmic Bias in Natural-Language Processing.Oisín Deery & Katherine Bailey - 2022 - Feminist Philosophy Quarterly 8 (3).
    Addressing biases in natural-language processing (NLP) systems presents an underappreciated ethical dilemma, which we think underlies recent debates about bias in NLP models. In brief, even if we could eliminate bias from language models or their outputs, we would thereby often withhold descriptively or ethically useful information, despite avoiding perpetuating or amplifying bias. Yet if we do not debias, we can perpetuate or amplify bias, even if we retain relevant descriptively or ethically useful information. Understanding this dilemma provides for a (...)
  29. Why is Information Retrieval a Scientific Discipline?Robert W. P. Luk - 2022 - Foundations of Science 27 (2):427-453.
    It is relatively easy to state that information retrieval (IR) is a scientific discipline, but it is rather difficult to understand why it is a science, because what counts as science is still under debate in the philosophy of science. To be able to convince others that IR is science, our ability to explain why is crucial. To explain why IR is a scientific discipline, we use a theory and a model of scientific study, which were proposed recently. The explanation involves mapping the (...)
    2 citations
  30. ANNs and Unifying Explanations: Reply to Erasmus, Brunet, and Fisher.Yunus Prasetya - 2022 - Philosophy and Technology 35 (2):1-9.
    In a recent article, Erasmus, Brunet, and Fisher (2021) argue that Artificial Neural Networks (ANNs) are explainable. They survey four influential accounts of explanation: the Deductive-Nomological model, the Inductive-Statistical model, the Causal-Mechanical model, and the New-Mechanist model. They argue that, on each of these accounts, the features that make something an explanation are invariant with regard to the complexity of the explanans and the explanandum. Therefore, they conclude, the complexity of ANNs (and other Machine Learning models) does not make them (...)
    1 citation
  31. A plea for integrated empirical and philosophical research on the impacts of feminized AI workers.Hannah Read, Javier Gomez-Lavin, Andrea Beltrama & Lisa Miracchi Titus - 2022 - Analysis (1):89-97.
    Feminist philosophers have long emphasized the ways in which women’s oppression takes a variety of forms depending on complex combinations of factors. These include women’s objectification, dehumanization and unjust gendered divisions of labour caused in part by sexist ideologies regarding women’s social role. This paper argues that feminized artificial intelligence (feminized AI) poses new and important challenges to these perennial feminist philosophical issues. Despite the recent surge in theoretical and empirical attention paid to the ethics of AI in general, a (...)
  32. Machine learning in scientific grant review: algorithmically predicting project efficiency in high energy physics.Vlasta Sikimić & Sandro Radovanović - 2022 - European Journal for Philosophy of Science 12 (3):1-21.
    As more objections have been raised against grant peer-review for being costly and time-consuming, the legitimate question arises whether machine learning algorithms could help assess the epistemic efficiency of the proposed projects. As a case study, we investigated whether project efficiency in high energy physics can be algorithmically predicted based on the data from the proposal. To analyze the potential of algorithmic prediction in HEP, we conducted a study on data about the structure and outcomes of HEP experiments with the (...)
    1 citation
  33. Metaphysics, Meaning, and Morality: A Theological Reflection on A.I.Jordan Joseph Wales - 2022 - Journal of Moral Theology 11 (Special Issue 1):157-181.
    Theologians often reflect on the ethical uses and impacts of artificial intelligence, but when it comes to artificial intelligence techniques themselves, some have questioned whether much exists to discuss in the first place. If the significance of computational operations is attributed rather than intrinsic, what are we to say about them? Ancient thinkers—namely Augustine of Hippo (lived 354–430)—break the impasse, enabling us to draw forth the moral and metaphysical significance of current developments like the “deep neural networks” that are responsible (...)
    1 citation
  34. On a Possible Basis for Metaphysical Self-development in Natural and Artificial Systems.Jeffrey White - 2022 - Filozofia i Nauka. Studia Filozoficzne I Interdyscyplinarne 10:71-100.
    Recent research into the nature of self in artificial and biological systems raises interest in a uniquely determining immutable sense of self, a “metaphysical ‘I’” associated with inviolable personal values and moral convictions that remain constant in the face of environmental change, distinguished from an object “me” that changes with its environment. Complementary research portrays processes associated with self as multimodal routines selectively enacted on the basis of contextual cues informing predictive self or world models, with the notion of the (...)
    1 citation
  35. Measuring Intelligence and Growth Rate: Variations on Hibbard's Intelligence Measure.Samuel Alexander & Bill Hibbard - 2021 - Journal of Artificial General Intelligence 12 (1):1-25.
    In 2011, Hibbard suggested an intelligence measure for agents who compete in an adversarial sequence prediction game. We argue that Hibbard’s idea should actually be considered as two separate ideas: first, that the intelligence of such agents can be measured based on the growth rates of the runtimes of the competitors that they defeat; and second, one specific (somewhat arbitrary) method for measuring said growth rates. Whereas Hibbard’s intelligence measure is based on the latter growth-rate-measuring method, we survey other methods (...)
    1 citation
  36. Machine Learning and the Future of Scientific Explanation.Florian J. Boge & Michael Poznic - 2021 - Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 52 (1):171-176.
    The workshop “Machine Learning: Prediction Without Explanation?” brought together philosophers of science and scholars from various fields who study and employ Machine Learning (ML) techniques, in order to discuss the changing face of science in the light of ML's constantly growing use. One major focus of the workshop was on the impact of ML on the concept and value of scientific explanation. One may speculate whether ML’s increased use in science exemplifies a paradigmatic turn towards mere pattern recognition and prediction (...)
    4 citations
  37. Short-circuiting the definition of mathematical knowledge for an Artificial General Intelligence.Samuel Alexander - 2020 - CIFMA.
    We propose that, for the purpose of studying theoretical properties of the knowledge of an agent with Artificial General Intelligence (that is, the knowledge of an AGI), a pragmatic way to define such an agent’s knowledge (restricted to the language of Epistemic Arithmetic, or EA) is as follows. We declare an AGI to know an EA-statement φ if and only if that AGI would include φ in the resulting enumeration if that AGI were commanded: “Enumerate all the EA-sentences which you (...)
    1 citation
  38. Measuring the intelligence of an idealized mechanical knowing agent.Samuel Alexander - 2020 - Lecture Notes in Computer Science 12226.
    We define a notion of the intelligence level of an idealized mechanical knowing agent. This is motivated by efforts within artificial intelligence research to define real-number intelligence levels of complicated intelligent systems. Our agents are more idealized, which allows us to define a much simpler measure of intelligence level for them. In short, we define the intelligence level of a mechanical knowing agent to be the supremum of the computable ordinals that have codes the agent knows to be codes (...)
    3 citations
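    One way to render the definition sketched in this abstract as a formula, with illustrative notation that is not the author's: for an idealized mechanical knowing agent \(K\), its intelligence level is the supremum of the computable ordinals for which \(K\) knows some code to be a code,
    \[ \mathrm{Int}(K) \;=\; \sup\{\alpha \;:\; \alpha \text{ a computable ordinal, and } K \text{ knows of some } e \text{ that } e \text{ codes } \alpha\}. \]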
  39. Self-referential theories.Samuel A. Alexander - 2020 - Journal of Symbolic Logic 85 (4):1687-1716.
    We study the structure of families of theories in the language of arithmetic extended to allow these families to refer to one another and to themselves. If a theory contains schemata expressing its own truth and expressing a specific Turing index for itself, and contains some other mild axioms, then that theory is untrue. We exhibit some families of true self-referential theories that barely avoid this forbidden pattern.
  40. The Archimedean trap: Why traditional reinforcement learning will probably not yield AGI.Samuel Allen Alexander - 2020 - Journal of Artificial General Intelligence 11 (1):70-85.
    After generalizing the Archimedean property of real numbers in such a way as to make it adaptable to non-numeric structures, we demonstrate that the real numbers cannot be used to accurately measure non-Archimedean structures. We argue that, since an agent with Artificial General Intelligence (AGI) should have no problem engaging in tasks that inherently involve non-Archimedean rewards, and since traditional reinforcement learning rewards are real numbers, therefore traditional reinforcement learning probably will not lead to AGI. We indicate two possible ways (...)
    1 citation
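    For reference, the classical Archimedean property of the real numbers, which the paper generalizes to non-numeric structures, reads in its standard textbook form (not the paper's generalized version):
    \[ \forall x, y \in \mathbb{R}_{>0} \;\; \exists n \in \mathbb{N} \;:\; nx > y. \]
    A reward structure that violates a suitably generalized version of this property is non-Archimedean in the abstract's sense; the paper's claim is that real-valued rewards cannot measure such structures accurately.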
  41. CG-Art. Una discusión estética sobre la relación entre creatividad artística y computación.Leonardo Arriagada - 2020 - In Jorge Mauricio Molina Mejía, Pablo Valdivia Martin & René Alejandro Venegas Velásquez (eds.), Actas III Congreso Internacional de Lingüística Computacional y de Corpus - CILCC 2020. Universidad de Antioquía y University of Groningen. pp. 261-264.
    In the era of artificial intelligence (AI), more than a few have asked whether a machine can create art. In this regard, the cognitive researcher Margaret Boden (2011) has defined a special type of art by relating the concepts of "creativity" and "computation". Thus, computer-generated art is that in which "the artwork results from some computer program being left to run by itself, with minimal or zero interference from a human being" (p. 141). One of the (...)
  42. A Critical Reflection on Automated Science: Will Science Remain Human?Marta Bertolaso & Fabio Sterpetti (eds.) - 2020 - Cham: Springer.
    This book provides a critical reflection on automated science and addresses the question of whether the computational tools we have developed in recent decades are changing the way we humans do science. More concretely: can machines replace scientists in crucial aspects of scientific practice? The contributors to this book rethink and refine some of the main concepts by which science is understood, drawing a fascinating picture of the developments we expect over the next decades of human-machine co-evolution. The volume covers examples from (...)
  43. WG-A: A Framework for Exploring Analogical Generalization and Argumentation.Michael Cooper, Lindsey Fields, Marc Gabriel Badilla & John Licato - 2020 - CogSci 2020.
    Reasoning about analogical arguments is known to be subject to a variety of cognitive biases, as well as to a lack of clarity about which factors can be considered strengths or weaknesses of an analogical argument. This can make it difficult both to design empirical experiments to study how people reason about analogical arguments, and to develop scalable tutoring tools for teaching how to reason about and analyze analogical arguments. To address these concerns, we describe WG-A (Warrant Game — Analogy), a framework for people (...)
  44. Why Attention is Not Explanation: Surgical Intervention and Causal Reasoning about Neural Models.Christopher Grimsley, Elijah Mayfield & Julia Bursten - 2020 - Proceedings of the 12th Conference on Language Resources and Evaluation.
    As the demand for explainable deep learning grows in the evaluation of language technologies, the value of a principled grounding for those explanations grows as well. Here we study the state-of-the-art in explanation for neural models for natural-language processing (NLP) tasks from the viewpoint of philosophy of science. We focus on recent evaluation work that finds brittleness in explanations obtained through attention mechanisms. We harness philosophical accounts of explanation to suggest broader conclusions from these studies. From this analysis, we assert the (...)
    1 citation
  45. Intelligence via ultrafilters: structural properties of some intelligence comparators of deterministic Legg-Hutter agents.Samuel Alexander - 2019 - Journal of Artificial General Intelligence 10 (1):24-45.
    Legg and Hutter, as well as subsequent authors, considered intelligent agents through the lens of interaction with reward-giving environments, attempting to assign numeric intelligence measures to such agents, with the guiding principle that a more intelligent agent should gain higher rewards from environments in some aggregate sense. In this paper, we consider a related question: rather than measure numeric intelligence of one Legg-Hutter agent, how can we compare the relative intelligence of two Legg-Hutter agents? We propose an elegant answer (...)
    3 citations
  46. Legg-Hutter universal intelligence implies classical music is better than pop music for intellectual training.Samuel Alexander - 2019 - The Reasoner 13 (11):71-72.
    In their thought-provoking paper, Legg and Hutter consider a certain abstraction of an intelligent agent, and define a universal intelligence measure, which assigns every such agent a numerical intelligence rating. We will briefly summarize Legg and Hutter’s paper, and then give a tongue-in-cheek argument that if one’s goal is to become more intelligent by cultivating music appreciation, then it is better to use classical music (such as Bach, Mozart, and Beethoven) than to use more recent pop music. The (...)
  47. Considerazioni sull’infosfera. S&F: a colloquio con Luciano Floridi.Luciano Floridi & Christian Fuschetto - 2019 - Scientia et Fides 22:131–136.
    New developments in the field of communication and information technology will profoundly reshape the answers to questions of deep interest for humanity and philosophy. Who are we, and what kinds of relationships do we establish among ourselves? The boundaries between real life and virtual life tend to vanish. We are progressively becoming part of a global “infosphere”. This candid interview with Professor Floridi tries to shed some light on these issues by considering the philosophical framework developed by the “infosphere philosopher”.
  48. Gods of Transhumanism.Alex V. Halapsis - 2019 - Anthropological Measurements of Philosophical Research 16:78-90.
    The purpose of the article is to identify the religious factor in the teaching of transhumanism, to determine its role in the ideology of this current of thought, and to identify the possible limits of technological interference in human nature. Theoretical basis. The methodological basis of the article is the idea of transhumanism. Originality. In the foreseeable future, robots will be able to pass the Turing test, become “electronic personalities” and gain political rights, although the question of the possibility of machine (...)
    2 citations
  49. The Pragmatic Turn in Explainable Artificial Intelligence (XAI).Andrés Páez - 2019 - Minds and Machines 29 (3):441-459.
    In this paper I argue that the search for explainable models and interpretable decisions in AI must be reformulated in terms of the broader project of offering a pragmatic and naturalistic account of understanding in AI. Intuitively, the purpose of providing an explanation of a model or a decision is to make it understandable to its stakeholders. But without a previous grasp of what it means to say that an agent understands a model or a decision, the explanatory strategies will (...)
    30 citations
  50. Machine Decisions and Human Consequences.Teresa Scantamburlo, Andrew Charlesworth & Nello Cristianini - 2019 - In Karen Yeung & Martin Lodge (eds.), Algorithmic Regulation. Oxford: Oxford University Press.
    As we increasingly delegate decision-making to algorithms, whether directly or indirectly, important questions emerge in circumstances where those decisions have direct consequences for individual rights and personal opportunities, as well as for the collective good. A key problem for policymakers is that the social implications of these new methods can only be grasped if there is an adequate comprehension of their general technical underpinnings. The discussion here focuses primarily on the case of enforcement decisions in the criminal justice system, but (...)
    3 citations