Edited by Michael Rescorla (University of California, Los Angeles)
About this topic
Summary
Does computation
require representation? To what extent should representation figure within
computational models? Can representational properties causally influence
computation? How central an explanatory role should semantics occupy within
computational psychology? Is the mind a “syntax-driven” machine? Can
computational models help elucidate the nature of representation? Can they help
us reduce the intentional to the non-intentional? What semantic frameworks are
most useful for computer science and Artificial Intelligence? Can we build an
artificial computing machine that thinks? How might the construction of such a
machine illuminate the mind, including our capacity to represent? Is mental
activity best modeled through “classical” computation, through “connectionist”
computation, or through some other framework?
Key works
The seminal article Turing 1936 introduces the
Turing machine, thereby laying the foundation for all subsequent research on
computation within computer science, recursion theory, Artificial Intelligence,
cognitive psychology, and philosophy. Putnam 1967 introduces philosophers to the
thesis that Turing-style computation provides illuminating models of mental
activity. Fodor 1975 develops Putnam’s suggestion, combining it with the
traditional picture of the mind as a representational organ. Fodor’s subsequent
writings, including Fodor 1981 and many other articles and books, investigate the
relation between mental computation and mental representation. Stich 1983 combines
a computational approach to the mind with eliminativism
regarding intentionality. Dennett 1981 advocates a broadly instrumentalist
approach to intentionality. Searle 1980 is a widely discussed critique of the
computational approach, centered on the relation between syntax and semantics. Putnam 1975 introduces the Twin Earth thought experiment, which crucially
informs much of the subsequent literature on computation and representation. Burge 1982 applies the Twin Earth thought experiment to mental representation (whereas
Putnam initially applied it only to linguistic representation).
Introductions
The first three chapters of Rogers 1987 present
the foundations of computation theory, with an emphasis on the Turing machine. Fodor 1981 offers a good (albeit opinionated) introduction to issues
surrounding computation and mental representation.
Horst 2005 and Pitt 2020 offer helpful surveys of the contemporary literature.
Machina sapiens - l'algoritmo che ci ha rubato il segreto della conoscenza. Can machines think? This unsettling question, posed by Alan Turing in 1950, may have found an answer: today one can converse with a computer without being able to distinguish it from a human being. New intelligent agents such as ChatGPT have proved capable of performing tasks that go far beyond the original intentions of their creators, and we still do not know why: while they were trained for some abilities, others emerged spontaneously as they read thousands of books and millions of web pages. Is this the secret of knowledge, and is it now in the hands of our creatures? What else may emerge as we continue down this path?
What does it mean to be human? Philosophers and theologians have been wrestling with this question for centuries. Recent advances in cognition, neuroscience, artificial intelligence and robotics have yielded insights that bring us even closer to an answer. There are now computer programs that can accurately recognize faces, engage in conversation, and even compose music. There are also robots that can walk up a flight of stairs, work cooperatively with each other and express emotion. If machines can do everything we can, does that mean we are machines?
This book examines whether an artificial person can be constructed and if so, what that might tell us about our future and ourselves. Different human capacities such as perception, creativity, consciousness, social behavior, and free will are described in separate chapters. Technological advances in these areas are summarized and compared to our own abilities. The book adopts a multi-disciplinary approach, with a naturalistic perspective drawn from biology and psychology matched against a technological perspective based on computer science and robotics.
Beyond current existential technology: intelligent anarchy and the cogent explanation for what humans identify as ‘representation’, and, therefore, materialization and identification (interpretation, intention, attention).
Recent post-cognitivist approaches have levelled harsh criticisms at the notion of mental representation, seeking instead to think of mind and cognition in terms of the embodied actions of an organism in its environment. Although we agree with this conception, it is not clear that it necessarily entails rejecting every kind of representational vocabulary. The aim of this article is to argue that representations can buy us an additional explanatory dimension not available by other means, and to suggest that, at least in some cases, they can take part in the explanation of cognitive performances or capacities. The notion of representation presented, as we make clear throughout the article, does not violate the methodological precepts most dear to 4E cognition in general and to enactivism in particular, and can therefore be used as a useful theoretical tool in investigations into the embodied and situated nature of mind.
We distinguish between different problems of “aboutness”: the “hard” problem of explaining the everyday phenomenon of intentionality and three less challenging “easy” sets of problems concerning the posits of folk psychology, the notions of representation invoked in the mind‐brain sciences, and the intensionality (with an “s”) of mental language. The problem of intentionality is especially hard in that, as is the case with the hard problem of phenomenal consciousness, there is no clear path to a solution using current methods. We argue that naturalistic theories of mental representation do not address the hard problem—either they are only intended to address the easy problems, or the claims they make help address the problem of intentionality only under undefended and prima facie implausible assumptions to the effect that the hard problem reduces to some combination of the easy problems. We offer a positive account of what would be required to properly face up to the problem of intentionality.
This commentary critically examines the view of the relationship between perception and memory in Ned Block's *The Border Between Seeing and Thinking*. It argues that visual working memory often stores the outputs of perception without altering their formats, allowing online visual perception to access these memory representations in computations that unfold over longer timescales and across eye movements. Since Block concedes that visual working memory representations are not iconic, we should not think of perceptual representations as exclusively iconic either.
Peter Godfrey-Smith recently introduced the idea of representational ‘organization’. When a collection of representations form an organized family, similar representational vehicles carry similar contents. For example, where neural firing rate represents numerosity (an analogue magnitude representation), similar firing rates represent similar numbers of items. Organization has been elided with structural representation, but the two are in fact distinct. An under-appreciated merit of representational organization is the way it facilitates computational processing. Representations from different organized families can interact, for example to perform addition. Their being organized allows them to implement a useful computation. Many of the cases where organization has seemed significant, but which fall short of structural representation, are cases where representational organization underpins a computationally useful processing structure.
An approach to implementing variational Bayesian inference in biological systems is considered, under which the thermodynamic free energy of a system directly encodes its variational free energy. In the case of the brain, this assumption places constraints on the neuronal encoding of generative and recognition densities, in particular requiring a stochastic population code. The resulting relationship between thermodynamic and variational free energies is prefigured in mind–brain identity theses in philosophy and in the Gestalt hypothesis of psychophysical isomorphism.
It is commonly assumed that images, whether in the world or in the head, do not have a privileged analysis into constituent parts. They are thought to lack the sort of syntactic structure necessary for representing complex contents and entering into sophisticated patterns of inference. I reject this assumption. “Image grammars” are models in computer vision that articulate systematic principles governing the form and content of images. These models are empirically credible and can be construed as literal grammars for images. Images can have rich syntactic structure, though of a markedly different form than sentences in language.
The technology of representing information about the external and internal world is constantly developing, and it has now reached the level of depicting reality in manifold manifestations and dimensions previously inaccessible to human perception. Language, text, photography, sound recording, and now artificial intelligence technology for modelling human subjectivity and describing it in a form accessible to human understanding, have become epochal events in information theory. However, although at the present stage of its development it makes it possible to operate with ever-increasing volumes of information, this does not bring its theorists closer to grasping the essence of what is defined as reality, being, and consciousness. But since there is no way to achieve this goal other than studying how it occurs in the perception and transformation of information in living beings, and in humans in particular, it is necessary to understand the principles of this process.
I argue that ML models used in science function as highly idealized toy models. If we treat ML models as a type of highly idealized toy model, then we can deploy standard representational and epistemic strategies from the toy model literature to explain why ML models can still provide epistemic success despite their lack of similarity to their targets.
Despite its centrality in the philosophy of cognitive science, there has been little prior philosophical work engaging with the notion of representation in contemporary NLP practice. This paper attempts to fill that lacuna: drawing on ideas from cognitive science, I introduce a framework for evaluating the representational claims made about components of neural NLP models, proposing three criteria with which to evaluate whether a component of a model represents a property and operationalising these criteria using probing classifiers, a popular analysis technique in NLP (and deep learning more broadly). The project of operationalising a philosophically-informed notion of representation should be of interest to both philosophers of science and NLP practitioners. It affords philosophers a novel testing-ground for claims about the nature of representation, and helps NLPers organise the large literature on probing experiments, suggesting novel avenues for empirical research.
Some defenders of so-called `artificial intelligence' believe that machines can understand language. In particular, Søgaard has argued in his "Understanding models understanding language" (2022) for a thesis of this sort. His idea is that (1) where there is semantics there is also understanding and (2) machines are not only capable of what he calls `inferential semantics', but even that they can (with the help of inputs from sensors) `learn' referential semantics. We show that he goes wrong because he pays insufficient attention to the difference between language as used by humans and the sequences of inert symbols which arise when language is stored on hard drives or in books in libraries.
The explanation for everything in Nature, everything in human history, future, and/or past, is the conservation of a circle, proven by the circular-linear relationship between the solstice and the equinox.
This book surveys and examines the most famous philosophical arguments against building a machine with human-level intelligence. From claims and counter-claims about the ability to implement consciousness, rationality, and meaning, to arguments about cognitive architecture, the book presents a vivid history of the clash between philosophy and AI. Tellingly, the AI Wars are mostly quiet now. Explaining this crucial fact opens new paths to understanding the current resurgence of AI (especially deep learning AI and robotics), what happens when philosophy meets science, and the role of philosophy in the culture in which it is embedded.
Organising the arguments into four core topics - 'Is AI Possible?', 'Architectures of the Mind', 'Mental Semantics and Mental Symbols' and 'Rationality and Creativity' - the book shows the debate that played out between philosophers on both sides of the question, as well as the debate between philosophers and the AI scientists and engineers building AI systems. Up-to-date and forward-looking, the book is packed with fresh insights and supporting material, including:
- Accessible introductions to each war, explaining the background behind the main arguments against AI
- Chapters detailing what happened in the AI wars, the legacy of the attacks, and what new controversies are on the horizon
- An extensive bibliography of key readings.
Context sensitivity is one of the distinctive marks of human intelligence. Understanding the flexible way in which humans think and act in a potentially infinite number of circumstances, even though they are only finite and limited beings, is a central challenge for the philosophy of mind and cognitive science, particularly for those using representational theories. In this work, the frame problem, that is, the challenge of explaining how human cognition efficiently distinguishes what is relevant from what is not in each context, has been adopted as a guide. Using it, we describe a fundamental tension between context sensitivity and the mental representations used in theories of cognition. The first chapter discusses the nature of the frame problem, as well as the reasons for its persistence. In the second and third chapters, the problem is used as a measuring tool to inquire into a few representational approaches and assess how well suited they are to dealing with context dependencies. The problems found are then correlated with the frame problem. Throughout the discussion, we try to show that 1) none of the evaluated approaches is capable of dealing with context sensitivity in a proper manner, but 2) that is not a reason to think that the frame problem constitutes an argument against representational approaches in general, and 3) that it constitutes a fundamental conceptual tool in contemporary research.
Talk of “mental representations” is ubiquitous in the philosophy of mind, psychology, and cognitive science. A slogan common to many different approaches says that representations “stand in for” the things they represent. This slogan also attaches to most talk of “internal models” in cognitive science. We argue that this slogan is either false or uninformative. We then offer a new slogan that aims to do better. The new slogan ties the role of representations to the cognitive role played by the deliverances of perception. After clarifying the new slogan and warding off some misunderstandings, we discuss how the new slogan still captures the seed of truth in the old, point to some specific misunderstandings that can be avoided, and then suggest some ways that the new slogan is useful in the project of giving a satisfying philosophical theory of representation.
This paper investigates the nature of dispositional properties in the context of artificial intelligence systems. We start by examining the distinctive features of natural dispositions according to criteria introduced by McGeer (2018) for distinguishing between object-centered dispositions (i.e., properties like ‘fragility’) and agent-based abilities, including both ‘habits’ and ‘skills’ (a.k.a. ‘intelligent capacities’, Ryle 1949). We then explore to what extent the distinction applies to artificial dispositions in the context of two very different kinds of artificial systems, one based on rule-based classical logic and the other on reinforcement learning. Here we defend three substantive claims. First, we argue that artificial systems are not equal in the kinds of dispositional properties they instantiate. In particular, we show that logical systems instantiate merely object-centered dispositions whereas reinforcement learning systems allow for the instantiation of agent-based abilities. Second, we explore the similarities and differences between the agent-centered abilities of artificial systems and those of humans, especially as relates to the important distinction made in the human case between habits and skills/intelligent capacities. The upshot is that the agent-centered abilities of truly intelligent artificial systems are distinctive enough to constitute a third type of agent-based ability — blended agent-based ability — raising substantial questions as to how we understand the nature of their agency. Third, we explore one aspect of this problem, focussing on whether systems of this type are properly considered ‘responsible agents’, at least in some contexts and for some purposes. The ramifications of our analysis will turn out to be directly relevant to various ethical concerns of artificial intelligence.
We illustrate how a variety of logical methods and techniques provide useful, though currently underappreciated, tools in the foundations and applications of reasoning under uncertainty. The field is vast, spanning logic, artificial intelligence, statistics, and decision theory. Rather than (hopelessly) attempting a comprehensive survey, we focus on a handful of telling examples. While most of our attention will be devoted to frameworks in which uncertainty is quantified probabilistically, we will also touch upon generalisations of probability measures of uncertainty, which have attracted significant interest in the past few decades.
Growing demand for broadly accessible mental health care, together with the rapid development of new technologies, has triggered discussions about the feasibility of psychotherapeutic interventions based on interactions with Conversational Artificial Intelligence (CAI). Many authors argue that while currently available CAI can be a useful supplement for human-delivered psychotherapy, it is not yet capable of delivering fully fledged psychotherapy on its own. The goal of this paper is to investigate the most important obstacles on the way to developing CAI systems capable of delivering psychotherapy in the future. To this end, we formulate and discuss three challenges central to this quest. Firstly, we might not be able to develop effective AI-based psychotherapy unless we deepen our understanding of what makes human-delivered psychotherapy effective. Secondly, assuming that it requires building a therapeutic relationship, it is not clear whether psychotherapy can be delivered by non-human agents. Thirdly, conducting psychotherapy might be a problem too complicated for narrow AI, i.e., AI proficient in dealing with only relatively simple and well-delineated tasks. If this is the case, we should not expect CAI to be capable of delivering fully fledged psychotherapy until so-called “general” or “human-like” AI is developed. While we believe that all these challenges can ultimately be overcome, we think that being mindful of them is crucial to ensure well-balanced and steady progress on our path to AI-based psychotherapy.
We are residents of the Internet, sketching through it the outlines of the world we desire, and acting out personalities as far removed from us as can be; we falsely realize dreams that may be out of reach, and we believe one another's lies and idealizations; we enjoy words without deeds, hearts without emotions, paradises without bliss, tongues in the darkness of closed mouths that speak through the movements of fingers, and a freedom surrounded by fences of illusion. Without the Internet, most people would surely appear at their natural size, which we do not know, or rather, which we know and ignore! There is no doubt that the emergence of the Internet and the widening scope of its uses represents a unique and growing event in humanity's civilizational journey, changing the way human beings live their lives. Yet no definition of the Internet so far includes any reference to virtual reality, even though we already live with it and in it: we interact and exchange information, buy and sell, play, laugh and cry, and conduct the finest details of our lives online; everything we once did through bodily movements in physical space we have delegated to our minds! Perhaps this is what the word "metaverse" captures, a word first used by the American science-fiction writer Neal Stephenson in his novel Snow Crash (1992) to denote humans interacting with one another and with software in a three-dimensional virtual space resembling the actual world.
Various writers have attempted to use the sender-receiver formalism to account for the representational capacities of biological systems. This paper has two goals. First, I argue that the sender-receiver approach to representation cannot be complete. The mammalian circadian system represents the time of day, yet it does not control circadian behaviours by producing signals with time of day content. Informative signalling need not be the basis of our most basic representational capacities. Second, I argue that representational capacities are primarily about control, and only when specific conditions obtain does this control require informative signalling.
Primatology tells us that about seven million years ago a split began in primate evolution, a split that led to the chimpanzee and human lineages (the pan-homo split). During these millions of years our human lineage has developed capacities that our chimpanzee cousins do not possess, like reflective self-consciousness and language. We present here an evolutionary scenario that proposes a rationale for the pan-homo split. It is based on a pre-human anxiety that may have barred access to self-consciousness for the chimpanzee lineage. The starting point of the scenario is the capability our pre-human ancestors had for an elementary identification with conspecifics. We consider that the evolution of that capability led to self-consciousness when identifications with conspecifics brought our ancestors to represent their own entity as existing, just as conspecifics were represented. But the same identification process also took place with endangered and dying conspecifics, producing a huge increase in anxiety and a source of significant mental suffering that our ancestors had to limit. Our hypothesis is that different modes of anxiety limitation led to the pan-homo split. On one side, the chimpanzee lineage would have limited an unbearable mental suffering by stopping the development of identifications with conspecifics, thereby also stopping a possible evolution toward self-consciousness. This anxiety-limitation process has led to today's chimpanzees, which possess a very limited consciousness of themselves. On the other side, our human lineage would have successfully developed anxiety-limitation tools like caring, pleasure, anticipation, communication and imitation, tools that accelerated the evolution of our lineage toward the human mind. The proposed pan-homo split process complements an existing evolutionary scenario for self-consciousness that has positioned anxiety management as a key contributor to the build-up of our human minds.
Such an overall perspective makes anxiety management a major source of many of our motivations and mental states, much more so than assumed thus far. Continuations are proposed for a better understanding of our modes of anxiety limitation (including evil behaviors), and also to introduce a possible evolutionary nature of phenomenal consciousness.
As testing of ChatGPT has shown, this form of artificial intelligence has the potential to develop, which requires improving the software and other technical infrastructure that allows it to learn, i.e., to acquire and use new knowledge, to contact its developers with suggestions for improvement, or to reprogram itself without their participation.
While feminist critiques of AI are increasingly common in the scholarly literature, they are by no means new. Alison Adam’s Artificial Knowing (1998) brought a feminist social and epistemological stance to the analysis of AI, critiquing the symbolic AI systems of her day and proposing constructive alternatives. In this paper, we seek to revisit and renew Adam’s arguments and methodology, exploring their resonances with current feminist concerns and their relevance to contemporary machine learning. Like Adam, we ask how new AI methods could be adapted for feminist purposes and what role new technologies might play in addressing concerns raised by feminist epistemologists and theorists about algorithmic systems. In particular, we highlight distributed and federated learning as providing partial solutions to the power-oriented concerns that have stymied efforts to make machine learning systems more representative and pluralist.
This is the proceedings of the "1st Turkish Conference on AI and ANNs," K. Oflazer, V. Akman, H. A. Guvenir, and U. Halici (editors). The conference was held at Bilkent University, Bilkent, Ankara on 25-26 June 1992. -/- Language of contributions: English and Turkish.
The study of belief is expanding and involves a growing set of disciplines and research areas. These research programs attempt to shed light on the process of believing, understood as a central human cognitive function. Computational systems and, in particular, what we commonly understand as Artificial Intelligence (AI), can provide some insights on how beliefs work as either a linear process or as a complex system. However, the computational approach has undergone some scrutiny, in particular about the differences between what is distinctively human and what can be inferred from AI systems. The present article investigates to what extent recent developments in AI provide new elements to the debate and clarify the process of belief acquisition, consolidation, and recalibration. The article analyses and debates current issues and topics of investigation such as: different models to understand belief, the exploration of belief in an automated reasoning environment, the case of religious beliefs, and future directions of research.
Under what conditions does machine learning (ML) model opacity inhibit the possibility of explaining and understanding phenomena? In this article, I argue that nonepistemic values give shape to the ML opacity problem even if we keep researcher interests fixed. Treating ML models as an instance of doing model-based science to explain and understand phenomena reveals that there is (i) an external opacity problem, where the presence of inductive risk imposes higher standards on externally validating models, and (ii) an internal opacity problem, where greater inductive risk demands a higher level of transparency regarding the inferences the model makes.
Recent influential accounts of temporal representation—the use of mental representations with explicit temporal contents, such as before and after relations and durations—sharply distinguish representation from mere sensitivity. A common, important picture of inter-temporal rationality is that it consists in maximizing total expected discounted utility across time. By analyzing reinforcement learning algorithms, this article shows that, given such notions of temporal representation and inter-temporal rationality, it would be possible for an agent to achieve inter-temporal rationality without temporal representation. It then explores potential upshots of this result for theorizing about rationality and representation.
Detecting quality in large unstructured datasets requires capacities far beyond the limits of human perception and communicability and, as a result, there is an emerging trend towards increasingly complex analytic solutions in data science to cope with this problem. This new trend towards analytic complexity represents a severe challenge for the principle of parsimony (Occam’s razor) in science. This review article combines insight from various domains such as physics, computational science, data engineering, and cognitive science to review the specific properties of big data. Problems for detecting data quality without losing the principle of parsimony are then highlighted on the basis of specific examples. Computational building block approaches for data clustering can help to deal with large unstructured datasets in minimized computation time, and meaning can be extracted rapidly from large sets of unstructured image or video data parsimoniously through relatively simple unsupervised machine learning algorithms. The article then reviews why we still largely lack the expertise to exploit big data wisely, whether to extract relevant information for specific tasks, to recognize patterns and generate new information, or simply to store and further process large amounts of sensor data, and brings forward examples illustrating why we need subjective views and pragmatic methods to analyze big data contents. The review concludes on how cultural differences between East and West are likely to affect the course of big data analytics, and the development of increasingly autonomous artificial intelligence (AI) aimed at coping with the big data deluge in the near future. Keywords: big data; non-dimensionality; applied data science; paradigm shift; artificial intelligence; principle of parsimony (Occam’s razor).
Mental representation is one of the core theoretical constructs within cognitive science and, together with the introduction of the computer as a model for the mind, is responsible for enabling the ‘cognitive turn’ in psychology and associated fields. Conceiving of cognitive processes, such as perception, motor control, and reasoning, as processes that consist in the manipulation of contentful vehicles representing the world has allowed us to refine our explanations of behavior and has led to tremendous empirical advancements. Despite the central role that the concept plays in cognitive science, there is no unanimously accepted characterization of mental representation. Technological and methodological progress in the cognitive sciences has produced numerous computational models of the brain and mind, many of which have introduced mutually incompatible notions of mental representation. This proliferation has led some philosophers to question the metaphysical status and explanatory usefulness of the notion. This book contains state-of-the-art chapters on the topic of mental representation, assembling some of the leading experts in the field and allowing them to engage in meaningful exchanges over some of the most contentious questions. The collection gathers both proponents and critics of the concept of mental representation, allowing them to engage with topics such as the ontological status of representations, the possibility of formulating a general account of mental representation which would fit our best explanatory practices, and the possibility of delivering such an account in fully naturalistic terms.
A. Newell and H. A. Simon were two of the most influential scientists in the emerging field of artificial intelligence (AI) from the late 1950s through to the early 1990s. This paper reviews their crucial contribution to this field, namely to symbolic AI. This contribution consisted mostly in their quest for the implementation of general intelligence and (commonsense) knowledge in artificial thinking or reasoning artifacts, a project they shared with many other scientists but that in their case was theoretically based on the idiosyncratic notions of symbol systems and the representational abilities they give rise to, in particular with respect to knowledge. While focusing on the period 1956-1982, this review cites both earlier and later literature and attempts to make visible their potential relevance to today's greatest unifying AI challenge, to wit, the design of wholly autonomous artificial agents (a.k.a. robots) that are not only rational and ethical, but also self-conscious.
In light of the pervasive development of new technologies, such as NBIC (nanotechnology, biotechnology, information technology, and cognitive science), it is imperative to produce a coherent and deep reflection on human nature, on human intelligence, and on the limits of both, in order to respond successfully to certain technical arguments that strive to depict humanity as a purely mechanical system. For this purpose, it is worth turning to the epistemology and metaphysics of Thomas Aquinas as a stable philosophical reference on human nature. Indeed, we find in the works of Aquinas some of the most productive elements that could form a basis for our deeper understanding of, and possibly even solutions to, some of the most perplexing questions raised in our times by the existence of AI.
Cryptocurrency is just the tip of a never-melting iceberg…because everything in Nature is connected to everything else by an always-conserved (and uber-simple) circle. Giving us, finally, an explanation (and, technically, a use-case, and proof) for a 'self'.
The power of representation and the representation of power, and the exploding NFT market. Euclid's error and the mathematics behind representation, identification, and interpretation.
Talk of ‘robustness’ remains vague, despite the fact that it is clearly an important parameter in evaluating models in general and game-theoretic results in particular. Here we want to make it a bit less vague by offering a graphic measure for a particular kind of robustness—‘matrix robustness’—using a three-dimensional display of the universe of 2 x 2 game theory. In a display of this form, familiar games such as the Prisoner’s Dilemma, Stag Hunt, Chicken and Deadlock appear as volumes, making comparison easy regarding the extent of different game-theoretic effects. We illustrate such a comparison in robustness between the triumph of Tit for Tat in a spatialized environment (Grim 1995, Grim, Mar, and St. Denis 1998) and a spatialized modeling of the Contact Hypothesis regarding prejudice reduction (Grim et al. 2005a, 2005b). The geometrical representation of relative robustness also offers a possibility for links between geometrical theorems and results regarding robustness in game theory.
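For readers unfamiliar with the 2 x 2 games the abstract names, a minimal sketch (illustrative only, not the authors' code; the payoff values and function name are conventional assumptions) shows one such game, the Prisoner's Dilemma, as a payoff matrix together with the dominance check that makes mutual defection its equilibrium:

```python
# A 2 x 2 game as a payoff matrix. Moves: 0 = Cooperate, 1 = Defect.
# PD[(row_move, col_move)] = (row_payoff, col_payoff), with the usual
# Prisoner's Dilemma ordering: temptation > reward > punishment > sucker.
PD = {
    (0, 0): (3, 3),  # mutual cooperation
    (0, 1): (0, 5),  # row cooperates, column defects
    (1, 0): (5, 0),  # row defects, column cooperates
    (1, 1): (1, 1),  # mutual defection
}

def strictly_dominates(game, move_a, move_b):
    """True if the row player's move_a beats move_b against every column move."""
    return all(game[(move_a, c)][0] > game[(move_b, c)][0] for c in (0, 1))

print(strictly_dominates(PD, 1, 0))  # True: Defect strictly dominates Cooperate
```

Varying the four payoff pairs sweeps through the other named games (Stag Hunt, Chicken, Deadlock); the article's three-dimensional display treats regions of this payoff space as volumes.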
CAT4 is proposed as a general method for representing information, enabling a powerful programming method for large-scale information systems. It enables generalised machine learning, software automation and novel AI capabilities. This is Part 3 of a five-part introduction. The focus here is on explaining the semantic model for CAT4. Points in CAT4 graphs represent facts. We introduce all the formal (data) elements used in the classic semantic model: sense or intension (1st and 2nd joins), reference (3rd join), functions (4th join), time and truth (logical fields), and symbolic content (name/value fields). Concepts are introduced through examples alternating with theoretical discussion. Some concepts are assumed from Parts 1 and 2, but key ideas are re-introduced. The purpose is to explain the CAT4 interpretation, and why the data structure and CAT4 axioms have been chosen: to make the semantic model consistent and complete. We start with methods to translate information from database tables into graph DBs and into CAT4, then present a method for translating natural language into CAT4. We conclude with a comparison of the system with an advanced semantic logic, the hyper-intensional logic TIL, which also aims to translate NL into a logical calculus. The CAT4 Natural Language Translator is discussed in further detail in Part 4, when we introduce functions more formally. Part 5 discusses software design considerations.
Semantics based on representational theories of mind has met challenges recently. Traditional accounts consider meaning as an entity with semantic properties, i.e., a mental object that denotes or represents a real-world object. The paper discusses ways of constructing meaning without representations, as shown in Rapaport’s syntactic semantics and Rosenberg’s eliminative theory of mind and language.
The Frame Problem is the problem of how one can design a machine to use information so as to behave competently, with respect to the kinds of tasks a genuinely intelligent agent can reliably, effectively perform. I will argue that the way the Frame Problem is standardly interpreted, and so the strategies considered for attempting to solve it, must be updated. We must replace overly simplistic and reductionist assumptions with more sophisticated and plausible ones. In particular, the standard interpretation assumes that mental processes are identical to certain kinds of computational processes, and so solving the Frame Problem is a matter of finding a computational architecture that can effectively represent relations of semantic relevance. Instead, we must take seriously the possibility that the way in which intelligent agents use information is inherently different. Whereas intelligent agents are plausibly genuinely causally sensitive to semantic properties as such (to what they perceive, desire, believe, intend, etc.), computational systems can only be causally sensitive to the formal features that represent these properties. Indeed, it is this very substitution of formal generalizations for genuinely semantic ones that is responsible for the way current AI systems are brittle, inflexible, and highly specialized. What we need is a more sophisticated way of investigating the relationship between computational information processing and genuinely semantic information use, so that these two senses of using information are not conflated, but instead the question of how they are related to one another can be studied directly. I apply the generative methodology I have developed elsewhere for cognitive science and AI research (Miracchi, 2017, 2019a) to show how the Frame Problem can be appropriately updated.
This paper aims to provide a mathematically tractable background against which to model both modal cognitivism and modal expressivism. I argue that epistemic modal algebras, endowed with a hyperintensional, topic-sensitive epistemic two-dimensional truthmaker semantics, comprise a materially adequate fragment of the language of thought. I then demonstrate how modal expressivism can be regimented by modal coalgebraic automata, to which the above epistemic modal algebras are categorically dual. I examine five methods for modeling the dynamics of conceptual engineering for intensions and hyperintensions. I develop a novel topic-sensitive truthmaker semantics for dynamic epistemic logic, and develop a novel dynamic epistemic two-dimensional hyperintensional semantics. I then examine the virtues unique to the modal expressivist approach here proffered in the setting of the foundations of mathematics, by contrast to competing approaches based upon both the inferentialist approach to concept-individuation and the codification of speech acts via intensional semantics.
Although there have been efforts to integrate Semantic Web technologies and AI research on artificial agents, the two remain relatively isolated from each other. Herein, we introduce a new ontology framework designed to support the knowledge representation of artificial agents’ actions within the context of the actions of other autonomous agents, inspired by standard cognitive architectures. The framework consists of four parts: 1) an event ontology for information pertaining to actions and events; 2) an epistemic ontology containing facts about knowledge, beliefs, perceptions and communication; 3) an ontology concerning future intentions, desires, and aversions; and, finally, 4) a deontic ontology for modeling obligations and prohibitions which limit agents’ actions. The architecture of the ontology framework is inspired by deontic cognitive event calculus as well as epistemic and deontic logic. We also describe a case study in which the proposed DCEO ontology supports autonomous vehicle navigation.
The electric activities of cortical pyramidal neurons are supported by structurally stable, morphologically complex axo-dendritic trees. Anatomical differences between axons and dendrites in regard to their length or caliber reflect the underlying functional specializations, for input or output of neural information, respectively. For a proper assessment of the computational capacity of pyramidal neurons, we have analyzed an extensive dataset of three-dimensional digital reconstructions from the NeuroMorpho.Org database, and quantified basic dendritic or axonal morphometric measures in different regions and layers of the mouse, rat or human cerebral cortex. Physical estimates of the total number and type of ions involved in neuronal electric spiking based on the obtained morphometric data, combined with energetics of neurotransmitter release and signaling fueled by glucose consumed by the active brain, support highly efficient cerebral computation performed at the thermodynamically allowed Landauer limit for implementation of irreversible logical operations. Individual proton tunneling events in voltage-sensing S4 protein alpha-helices of Na+, K+ or Ca2+ ion channels are ideally suited to serve as single Landauer elementary logical operations that are then amplified by selective ionic currents traversing the open channel pores. This miniaturization of computational gating allows the execution of over 1.2 zetta logical operations per second in the human cerebral cortex without the released heat combusting the brain.
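The Landauer limit invoked in the abstract above sets the minimum heat dissipated per irreversible bit operation, E = k_B T ln 2. A back-of-the-envelope sketch (assumptions mine, not the authors': the 20 W power budget and 310 K temperature are rough conventional figures) shows why this bound puts the brain's thermodynamic ceiling in the zetta-operations-per-second range the article cites:

```python
import math

# Landauer limit: minimum energy dissipated per irreversible logical
# operation at temperature T, E = k_B * T * ln(2).
k_B = 1.380649e-23   # Boltzmann constant, J/K (exact SI value)
T = 310.0            # approximate brain temperature, K (assumed)
E_landauer = k_B * T * math.log(2)   # joules per irreversible bit operation

# Dividing a rough cortical power budget by this energy gives the ceiling
# on irreversible operations per second allowed by thermodynamics.
power_watts = 20.0   # assumed rough figure for brain power consumption
ops_per_second = power_watts / E_landauer

print(f"{E_landauer:.3e} J per operation")
print(f"~{ops_per_second:.2e} irreversible ops/s ceiling")
```

At 310 K the limit is about 3e-21 J per operation, so a 20 W budget permits on the order of 10^21 (zetta) operations per second, consistent with the scale of the abstract's claim; the article's point is that actual cortical computation approaches this ceiling.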
In recent years, a number of different disciplines have begun to investigate the fundamental role context appears to play in a number of cognitive phenomena. Traditionally, linguistics, and the fields of communication and pragmatics in particular, have been the areas that have focused the most on contextual effects. Context has increasingly been studied for its role in influencing mental concepts, with some scholars considering it constitutive for most – if not all – concepts. Cognitive neuroscience is now starting to consider in a systematic way how context interacts with neural responses, although this research is still scattered and concentrated in a small number of specific cases only. In this chapter, we attempt to tie these three levels together, since only from their integration can a comprehensive explanation of how context affects cognition be constructed. The way context drives language comprehension depends on the effects of context on the conceptual scaffolding of the listener, which, in turn, is the result of his neural responses in combination with context. These neural responses derive from learning throughout the history of experiences of the individual, and the association between possible contexts and heard utterances. The road we take to accomplishing the multi-level integration between what appear to be distant domains is a computational one. This approach meets with the mechanistic framework of explanation, which is currently held as the most appropriate way of approaching cognitive phenomena that are often characterized by a multiplicity of levels, as is the case with context.
Structural representations are still the best option on the market in cognitive science, but in their traditional form, derived from classical measurement theory, they are affected by a number of serious drawbacks, including being unable to account for context. We suggest a different account of structural similarity, one informed by current neuroscience, where the homomorphic relations required for structural similarity are derived from neural population coding. In a preliminary mathematical sketch, we indicate how this approach can construct neural aggregations that are sensitive to context.
Research in artificial intelligence (AI) has led both to a revision of the challenges of AI's initial programme and to a keener awareness of the peculiarities and limitations of human cognition. The two are linked, as a careful rereading of Turing's test makes clear in light of Searle's Chinese room apologue and Dreyfus's suggestions; in both cases, the ideal had to be turned into an operating mode. In order to meet these more pragmatic challenges, AI does not hesitate to link together operations of various levels and functionalities, more specific or more general. The challenges are not met by an operating formal system that possesses all the learning skills from the outset but, for instance in simulation, by the dynamics of a succession of solutions open to adjustments as well as to reflexive repeats.