The present paper examines one of the most thorough articles on the intelligence of GPT, research conducted by engineers at Microsoft. Although there is a great deal of value in their work, I will argue that, for familiar philosophical reasons, their methodology, ‘Black-box Interpretability’, is wrongheaded. But there is a better way. An exciting discipline of ‘Inner Interpretability’ (also sometimes called ‘White-box Interpretability’) is emerging that aims to uncover the internal activations and weights of models in order to understand what they represent and the algorithms they implement. In my view, a crucial mistake in Black-box Interpretability is the failure to appreciate that how processes are carried out matters when it comes to intelligence and understanding. I can’t pretend to have a full story that provides both necessary and sufficient conditions for being intelligent, but I do think that Inner Interpretability dovetails nicely with plausible philosophical views of what intelligence requires. So the conclusion is modest, but the important point, in my view, is seeing how to get the research on the right track. Towards the end of the paper, I show how some of the philosophical concepts can be used to further refine how Inner Interpretability is approached, so the paper helps draw out a profitable future two-way exchange between philosophers and computer scientists.
The algorithm described in this short paper is a simplified formal representation of consciousness that may be applied in the fields of psychology and artificial intelligence.
Information and meaning are present everywhere around us and within ourselves. Several disciplines have sought to link information and meaning: semiotics, phenomenology, analytic philosophy, and psychology. Yet no general account of the notion of meaning is available. We propose to fill this gap with a systemic approach to meaning generation.
Information and meaning are present everywhere around us and within ourselves. Specific studies have been implemented to link information and meaning (linguistics, biosemiotics, psychology, psychiatry, cognition, artificial intelligence, and others), but no general coverage is available for the notion of meaning. We propose to fill this gap with a system approach to meaning generation in an evolutionary background. This short paper summarizes that system approach, in which a Meaning Generator System (MGS) based on internal constraint satisfaction has been introduced. The MGS can be used for animals (with “stay alive” related constraints), for humans (with “look for happiness” type constraints), and for artificial agents with programmed constraints. Definitions of agency and autonomy are made available based on internal constraint satisfaction. Using the MGS with the Turing Test shows why today’s computers cannot think as humans do. The MGS also allows us to introduce evolutionary scenarios for cognition, intentionality, and self-consciousness, with an entry point to a human-specific anxiety. Continuations are proposed.
Linguistic outputs generated by modern machine-learning neural net AI systems seem to have the same contents—i.e., meaning, semantic value, etc.—as the corresponding human-generated utterances and texts. Building upon this essential premise, Herman Cappelen and Josh Dever's Making AI Intelligible sets for itself the task of addressing the question of how AI-generated outputs have the contents that they seem to have (henceforth, ‘the question of AI Content’). In pursuing this ambitious task, the book makes several high-level, framework observations about how a meta-semantic account of the content of AI-generated outputs should proceed.
Quilty-Dunn et al. argue that deep convolutional neural networks (DCNNs) optimized for image classification exemplify structural disanalogies to human vision. A different kind of artificial vision – found in reinforcement-learning agents navigating artificial three-dimensional environments – can be expected to be more human-like. Recent work suggests that language-like representations substantially improve these agents’ performance, lending some indirect support to the language-of-thought hypothesis (LoTH).
This article will focus on the mechanistic origins of the computer metaphor, which forms the conceptual framework for the methodology of the cognitive sciences, some areas of artificial intelligence, and the philosophy of mind. The connection between the history of computing technology, epistemology, and the philosophy of mind is expressed through the metaphorical vocabularies of the philosophical discourse of a particular era. The conceptual clarification of this connection and the substantiation of the mechanistic components of the computer metaphor is the main goal of this article. We substantiate the claim that the invention of mechanical computing devices, which has a long history in the European engineering tradition, created the prerequisites for the emergence of machine functionalism in the modern philosophy of mind. The idea of multiple realization stems from the principle that a formal symbol system prescribes rules for the use of rational abstractions through the physical architecture of a computational engine. The article considers the reasons for the conceptual shift and reveals the semantic foundations for the metaphorical transfer of the properties of abstract objects from automata theory to the field of modern philosophy of mind. The criticism of the philosophical program of machine functionalism, and the ways it may be defended by changing the content of the metaphor “mind as machine”, are analyzed. The reasons for the stability of the information-computational approach in the cognitive sciences are also disclosed and explained.
The lack of explainability of machine learning (ML) techniques poses operational, legal, and ethical problems. One of the main objectives of our project is to provide ethical explanations of the outputs generated by an ML-based application, considered as a black box. The first step of this project, presented in this article, consists in showing that the validation of these black boxes differs epistemologically from the validation carried out in the mathematical and causal modeling of a physical phenomenon. The major difference is that an ML method does not claim to represent a causal relationship between its input and output parameters. After proposing a clarification and adaptation of the notions of interpretability and explainability as they appear in the already abundant literature on the subject, we show in this article the value of drawing the epistemological distinctions between the different epistemic functions of a model, on the one hand, and between the epistemic function and the use of a model, on the other. Finally, the last part of this article presents our ongoing work on the evaluation of an explanation, which may be more persuasive than informative and can thereby raise ethical problems.
In this article I sketch the history of artificial intelligence in the era of early modern philosophy, the seventeenth and eighteenth centuries. The topics I present are somewhat separate from one another, but what they share is the idea of computation or automation, a kind of mechanical calculation or operation that can be regarded as an early starting point for artificial intelligence. It should be noted, however, that mere computation, that is, information processing as such, is not sufficient for artificial intelligence. All of these endeavors are marked by a certain epistemological optimism: automated thinking is believed to yield more high-quality knowledge, and perhaps also new kinds of thoughts, as the thinking process becomes more fluent. The early history of artificial intelligence is thus tied precisely to the mechanization of human thought and to the belief that it can mitigate the limitations of the human mind and help us think better. What the histories of artificial intelligence and the computer have in common, in any case, is the development of computation.
Can an AGI create a more intelligent AGI? Under idealized assumptions, for a certain theoretical type of intelligence, our answer is: “Not without outside help”. This is a paper on the mathematical structure of AGI populations when parent AGIs create child AGIs. We argue that such populations satisfy a certain biological law. Motivated by observations of sexual reproduction in seemingly asexual species, the Knight-Darwin Law states that it is impossible for one organism to asexually produce another, which asexually produces another, and so on forever: any sequence of organisms (each one a child of the previous) must contain occasional multi-parent organisms, or must terminate. By proving that a certain measure (arguably an intelligence measure) decreases when an idealized parent AGI single-handedly creates a child AGI, we argue that a similar Law holds for AGIs.
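The abstract's statement of the Knight-Darwin Law can be glossed formally as follows (my paraphrase in a hypothetical notation, not the paper's own): writing $x \vdash y$ for "$x$ asexually produces $y$", the Law asserts that there is no infinite chain

\[
x_1 \vdash x_2 \vdash x_3 \vdash \cdots
\]

that is, every sequence of single-parent descent must either terminate or be interrupted by a multi-parent organism.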
After generalizing the Archimedean property of real numbers in such a way as to make it adaptable to non-numeric structures, we demonstrate that the real numbers cannot be used to accurately measure non-Archimedean structures. We argue that, since an agent with Artificial General Intelligence (AGI) should have no problem engaging in tasks that inherently involve non-Archimedean rewards, and since traditional reinforcement learning rewards are real numbers, therefore traditional reinforcement learning probably will not lead to AGI. We indicate two possible ways traditional reinforcement learning could be altered to remove this roadblock.
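For reference, the classical Archimedean property of the reals that the abstract generalizes can be stated in its standard textbook form (this is not the paper's generalization):

\[
\forall\, x, y \in \mathbb{R}_{>0} \;\; \exists\, n \in \mathbb{N} : \; n x > y .
\]

A structure is non-Archimedean when this fails, for instance when it contains an infinitesimal $\varepsilon > 0$ with $n\varepsilon < 1$ for every $n \in \mathbb{N}$; real-valued rewards cannot faithfully rank outcomes against such elements.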
This article is a knowledge technology case study of DNAOS, a distributed platform for Knowledge Resource Entitlement, Modeling, Management, and Sharing (KREMMS). Some historical aspects of its design, development, and release are briefly discussed, after which the DNAOS technology is commented upon from the specific viewpoint of KREMMS. At the core of this platform is the conception of knowledge as a natural phenomenon, which conception is reflected in the ontology of this technology: Fundamental knowledge structures and structuring principles, believed to be “natural,” including qualification, resources, and relationships, are considered theoretically and modeled computationally. Finally, with the help of images, the reader is briefly introduced to some DNAOS capabilities, touching on transaction, application, data, model, and interface.
Some years ago I reached the point where I can usually tell, from the title of a book, or at least from the chapter titles, what kinds of philosophical mistakes will be made and how frequently. In the case of nominally scientific works these may be largely restricted to certain chapters which wax philosophical or try to draw general conclusions about the meaning or long-term significance of the work. Normally, however, the scientific matters of fact are generously interwoven with philosophical gibberish about what these facts mean. The clear distinctions that Wittgenstein described some 80 years ago between scientific questions and their descriptions by various language games are seldom taken into account, so one is alternately thrilled by the science and dismayed by its incoherent analysis. So it is with this volume. If one is to create a mind more or less like ours, one needs a logical structure for rationality and an understanding of the two systems of thought (dual-process theory). If one is to philosophize about this, one must understand the distinction between scientific questions of fact and the philosophical question of how language works in the context at issue, and how to avoid the pitfalls of reductionism and scientism; but Kurzweil, like most students of behavior, is largely clueless. He is enchanted by models, theories, and concepts, and by the urge to explain, whereas Wittgenstein showed us that we only need to describe, and that theories, concepts, etc. are merely ways of using language (language games) which have value only insofar as they have a clear test (clear truthmakers, or, as John Searle (the most famous critic of AI) likes to say, clear conditions of satisfaction (COS)). I have tried to provide a start on this in my recent writings.
Those who want a comprehensive up-to-date framework for human behavior from the modern two-systems view may consult my book 'The Logical Structure of Philosophy, Psychology, Mind and Language in Ludwig Wittgenstein and John Searle' 2nd ed (2019). Those interested in more of my writings may see 'Talking Monkeys--Philosophy, Psychology, Science, Religion and Politics on a Doomed Planet--Articles and Reviews 2006-2019' 3rd ed (2019), 'Suicidal Utopian Delusions in the 21st Century' 4th ed (2019), and others.
A few years ago I reached the point where I can usually tell, from the title of a book, or at least from the chapter titles, what kinds of philosophical mistakes will be made and how frequently. In the case of nominally scientific works these may be largely restricted to certain chapters which wax philosophical or try to draw general conclusions about the meaning or long-term significance of the work. Normally, however, the scientific matters of fact are generously interwoven with philosophical gibberish about what these facts mean. The clear distinctions that Wittgenstein described some 80 years ago between scientific questions and their descriptions by various language games are seldom considered, so one is alternately thrilled by the science and dismayed by its incoherent analysis. So it is with this volume. If one is to create a mind more or less like ours, one needs a logical structure for rationality and an understanding of the two systems of thought (dual-process theory). If one is to philosophize about this, one must understand the distinction between scientific questions of fact and the philosophical question of how language works in the context at issue, and how to avoid the pitfalls of reductionism and scientism; but Kurzweil, like most students of behavior, is largely clueless. He is enchanted by models, theories, and concepts, and by the urge to explain, whereas Wittgenstein showed us that we only need to describe, and that theories, concepts, etc. are merely ways of using language (language games) which have value only insofar as they have a clear test (clear truthmakers, or, as John Searle (the most famous critic of AI) likes to say, clear conditions of satisfaction (COS)). I have tried to provide a start on this in my recent writings. Those who want a comprehensive up-to-date framework for human behavior from the modern two-systems view may consult my book 'The Logical Structure of Philosophy, Psychology, Mind and Language in Ludwig Wittgenstein and John Searle' 2nd ed (2019). Those interested in more of my writings may see 'Talking Monkeys--Philosophy, Psychology, Science, Religion and Politics on a Doomed Planet--Articles and Reviews 2006-2019' 3rd ed (2019), 'Suicidal Utopian Delusions in the 21st Century' 4th ed (2019), and others. Also, as usual in 'factual' accounts of AI/robotics, he gives no time to the very real threats to our privacy, safety, and even survival from the increasing 'androidization' of society that are prominent in other authors (Bostrom, Hawking, etc.), so I offer a few comments on the quite suicidal utopian delusions of 'nice' androids, humanoids, and artificial intelligence. I take it for granted that technical progress in electronics, robotics, and AI will bring major changes in society. However, I think the changes coming from genetic engineering are at least as great, and potentially far greater, since they will enable us to change entirely who we are. And it will be feasible to modify our genes, or those of other monkeys, to create super-smart/super-strong species. As with other technologies, any country that resists will be left behind. But will it be socially and economically feasible to implement biobots or superhumans on a massive scale? And even if so, it seems unlikely, economically or socially, that we can prevent the collapse of industrial civilization from overpopulation, resource depletion, climate change, and perhaps the tyrannical rule of the seven sociopaths who govern China. So, ignoring the philosophical mistakes of this volume and directing our attention only to the science, what we have here is another suicidal utopian delusion rooted in a failure to grasp basic biology, psychology, and human ecology, the same delusions that are destroying America and the world. I can see a remote possibility that the world may be saved, but not by AI/robotics, CRISPR, or by neo-Marxism, diversity, and equality.
Some years ago I reached the point where I can usually tell, from the title of a book, or at least from the chapter titles, what kinds of philosophical mistakes will be made and how frequently. In the case of nominally scientific works these may be largely restricted to certain chapters which wax philosophical or try to draw general conclusions about the meaning or long-term significance of the work. Normally, however, the scientific matters of fact are generously interwoven with philosophical gibberish about what these facts mean. The clear distinctions that Wittgenstein described some 80 years ago between scientific questions and their descriptions by various language games are seldom taken into account, so one is alternately awed by the science and dismayed by its incoherent analysis. So it is with this volume. If one is to create a mind more or less like ours, one needs a logical structure for rationality and an understanding of the two systems of thought (dual-process theory). If one is to philosophize about this, one must understand the distinction between scientific questions of fact and the philosophical question of how language works in the context at issue, and how to avoid the pitfalls of reductionism and scientism; but Kurzweil, like most students of behavior, is largely clueless. He is enchanted by models, theories, and concepts, and by the urge to explain, whereas Wittgenstein showed us that we only need to describe, and that theories, concepts, etc. are merely ways of using language (language games) which have value only insofar as they have a clear test (clear truthmakers, or, as John Searle (the most famous critic of AI) likes to say, clear conditions of satisfaction (COS)). I have tried to provide a start on this in my recent writings.
Those who want a comprehensive up-to-date framework for human behavior from the modern two-systems view may consult my book 'The Logical Structure of Philosophy, Psychology, Mind and Language in Ludwig Wittgenstein and John Searle' 2nd ed (2019). Those interested in more of my writings may see 'Talking Monkeys--Philosophy, Psychology, Science, Religion and Politics on a Doomed Planet--Articles and Reviews 2006-2019' 3rd ed (2019) and 'Suicidal Utopian Delusions in the 21st Century' 5th ed (2019).
In this volume, a range of high-profile researchers in philosophy of mind, philosophy of cognitive science, and empirical cognitive science critically engage with Clark's work across the themes of: Extended, Embodied, Embedded, Enactive, and Affective Minds; Natural Born Cyborgs; and Perception, Action, and Prediction. Daniel Dennett provides a foreword on the significance of Clark's work, and Clark replies to each section of the book, thus advancing the current literature with original contributions that will form the basis for new discussions, debates, and directions in the discipline.
Some years ago I reached the point where I can usually tell, from the title of a book, or at least from the chapter titles, what kinds of philosophical mistakes will be made and how frequently. In the case of nominally scientific works these may be largely restricted to certain chapters which wax philosophical or try to draw general conclusions about the meaning or long-term significance of the work. Normally, however, the scientific matters of fact are generously interwoven with philosophical jargon about what these facts mean. The clear distinctions that Wittgenstein described some 80 years ago between scientific questions and their descriptions by various language games are seldom taken into account, so one is alternately impressed by the science and dismayed by its incoherent analysis. So it is with this volume. If one is to create a mind more or less like ours, one needs a logical structure for rationality and an understanding of the two systems of thought (dual-process theory). If one is to philosophize about this, one must understand the distinction between scientific questions of fact and the philosophical question of how language works in the context at issue, and how to avoid the pitfalls of reductionism and scientism; but Kurzweil, like most students of behavior, is largely clueless. He is enchanted by models, theories, and concepts, and by the urge to explain, whereas Wittgenstein showed us that we only need to describe, and that theories, concepts, etc. are merely ways of using language (language games) which have value only insofar as they have a clear test (clear truthmakers, or, as John Searle (the most famous critic of AI) likes to say, clear conditions of satisfaction (COS)). I have tried to provide a start on this in my recent writings.
Those who want a comprehensive up-to-date framework for human behavior from the modern two-systems view may consult my books: Talking Monkeys 3rd ed (2019), The Logical Structure of Philosophy, Psychology, Mind and Language in Ludwig Wittgenstein and John Searle 2nd ed (2019), Suicide by Democracy 4th ed (2019), Understanding the Connections between Science, Philosophy, Psychology, Religion, Politics and Economics: Articles and Reviews 2006-2019 (2019), Suicidal Utopian Delusions in the 21st Century 5th ed (2019), The Logical Structure of Human Behavior (2019), The Logical Structure of Consciousness (2019), and others.
The neural vehicles of mental representation play an explanatory role in cognitive psychology that their realizers do not. In this paper, I argue that the individuation of realizers as vehicles of representation restricts the sorts of explanations in which they can participate. I illustrate this with reference to Rupert’s (2011) claim that representational vehicles can play an explanatory role in psychology in virtue of their quantity or proportion. I propose that such quantity-based explanatory claims can apply only to realizers and not to vehicles, in virtue of the particular causal role that vehicles play in psychological explanations.
The paper introduces an extension of the proposal according to which conceptual representations in cognitive agents should be understood as heterogeneous proxytypes. The main contribution of this paper is that it details how to reconcile, under a heterogeneous representational perspective, different theories of typicality about conceptual representation and reasoning. In particular, it provides a novel theoretical hypothesis, as well as a novel categorization algorithm called DELTA, showing how to integrate the representational and reasoning assumptions of the theory-theory of concepts with those ascribed to the prototype and exemplar-based theories.
In this paper we identify and characterize two problematic aspects affecting the representational level of cognitive architectures (CAs), namely the limited size and the homogeneous typology of the encoded and processed knowledge. We argue that these aspects constitute not only a technological problem that, in our opinion, should be addressed in order to build artificial agents able to exhibit intelligent behaviours in general scenarios, but also an epistemological one, since they limit the plausibility of comparing the CAs' knowledge representation and processing mechanisms with those employed by humans in their everyday activities. In the final part of the paper, further directions of research are explored, trying to address current limitations and future challenges.
This paper presents a theoretical study of the binary oppositions underlying the mechanisms of natural computation understood as dynamical processes on natural information morphologies. Of special interest are the oppositions of discrete vs. continuous, structure vs. process, and differentiation vs. integration. The framework used is that of computing nature, where all natural processes at different levels of organisation are computations over informational structures. The interactions at different levels of granularity/organisation in nature, and the character of the phenomena that unfold through those interactions, are modeled from the perspective of an observing agent. This brings us to the movement from binary oppositions to dynamic networks built upon mutually related binary oppositions, where each node has several properties.
In this article we present an advanced version of Dual-PECCS, a cognitively-inspired knowledge representation and reasoning system aimed at extending the capabilities of artificial systems in conceptual categorization tasks. It combines different sorts of common-sense categorization (prototypical and exemplar-based categorization) with standard monotonic categorization procedures. These different types of inferential procedures are reconciled according to the tenets of the dual process theory of reasoning. From a representational perspective, on the other hand, the system relies on the hypothesis of conceptual structures represented as heterogeneous proxytypes. Dual-PECCS has been experimentally assessed in a task of conceptual categorization where a target concept illustrated by a simple common-sense linguistic description had to be identified by resorting to a mix of categorization strategies, and its output has been compared to human responses. The obtained results suggest that our approach can be beneficial in improving the representational and reasoning conceptual capabilities of standard cognitive artificial systems, and, in addition, that it may plausibly be applied to different general computational models of cognition. The current version of the system, in fact, extends our previous work, in that Dual-PECCS is now integrated and tested in two cognitive architectures, ACT-R and CLARION, implementing different assumptions about the underlying invariant structures governing human cognition. This integration allowed us to extend our previous evaluation.
In this paper, I show how semantic factors constrain the understanding of the computational phenomena to be explained so that they help build better mechanistic models. In particular, understanding what cognitive systems may refer to is important in building better models of cognitive processes. For that purpose, a recent study of some phenomena in rats that are capable of ‘entertaining’ future paths (Pfeiffer and Foster 2013) is analyzed. The case shows that the mechanistic account of physical computation may be complemented with semantic considerations, and in many cases, it actually should.
In this paper, the role of the environment and physical embodiment of computational systems for explanatory purposes will be analyzed. In particular, the focus will be on cognitive computational systems, understood in terms of mechanisms that manipulate semantic information. It will be argued that the role of the environment has long been appreciated, in particular in the work of Herbert A. Simon, which has inspired the mechanistic view on explanation. From Simon’s perspective, the embodied view on cognition seems natural but it is nowhere near as critical as its proponents suggest. The only point of difference between Simon and embodied cognition is the significance of body-based off-line cognition; however, it will be argued that it is notoriously over-appreciated in the current debate. The new mechanistic view on explanation suggests that even if it is critical to situate a mechanism in its environment and study its physical composition, or realization, it is also stressed that not all detail counts, and that some bodily features of cognitive systems should be left out from explanations.
This article focuses on the methodological basis for criticism of computationalism and the “computer metaphor” in the philosophy of the cognitive sciences. We suppose that the computational paradigm is a direct consequence of the theoretical confusion of phenomenal and cognitive kinds of experience. Cognitive processes, considered as forms amenable to computational description, are available for computer modelling; this accounts for the strong position of the computer metaphor in neuroscience. In our opinion, the key problem is the vague ontological nature of the symbols which form the computational operations in cognitive procedures. Despite the successful development of neuroscience, it is still impossible to explain the meaning of the content of mental states. The article provides a detailed analysis of the critical approaches to computational models of consciousness. Special attention is given to the comparison of data integration in artificial intellectual systems with the semantic aspects of phenomenal consciousness. In the first case, the foundations of inference are hierarchies of classes, rule protocols, and the application of heuristics and strategies; in the second, knowledge is formed by qualia, metaphorical conceptualization, and the pragmatic level of communication. Natural principles of knowledge formation remain out of reach for machine intellectual procedures.
As has emerged from philosophical analyses and cognitive research, most concepts exhibit typicality effects and resist efforts to define them in terms of necessary and sufficient conditions. This also holds for many medical concepts. It is a problem for the design of computer science ontologies, since the knowledge representation formalisms commonly adopted in this field (first and foremost the Web Ontology Language, OWL) do not allow for the representation of concepts in terms of typical traits. The need to represent concepts in terms of typical traits concerns almost every domain of real-world knowledge, including medical domains. In particular, in this article we consider the domain of mental disorders, starting from the DSM-5 descriptions of some specific disorders. We favour a hybrid approach to concept representation, in which ontology-oriented formalisms are combined with a geometric representation of knowledge based on conceptual spaces. As a preliminary step toward applying our proposal to mental-disorder concepts, we have started to develop an OWL ontology of the schizophrenia spectrum which is as close as possible to the DSM-5 descriptions.
Androids are "automatons with human form", while robots are "automatic devices capable of manipulating objects or carrying out operations according to a program". Thus we can say that an android may be considered a robot, but not every robot is an android. Owing to the diversity of genders, the term gynoid was created, separating androids of masculine appearance (andros) from those of feminine appearance (ginos). Their spread came through science fiction: in the books of Isaac Asimov, in television series such as Star Trek, in films such as Star Wars and A.I.: Artificial Intelligence, and in various other media of culture and entertainment, including theater. The works cited feature androids that feel, that perceive the world, even if in their own way, and thereby interact with the environment and, logically, with the objects around them. For this essay, the principal "study android" will be Data, from Star Trek: The Next Generation, and also some later films. This short essay thus aims to verify whether perception, sensory extension and, why not, Bergson's relation between Body and Spirit can be applied to the robots and androids of fiction, and perhaps those of non-fiction. To do so, we will need to traverse all the senses, memory, and perhaps even dreams, for as Philip K. Dick asked in the book that inspired Blade Runner: "Do Androids Dream of Electric Sheep?"
This book is a collection of specially commissioned chapters from philosophers, economists, political and behavioral economists, cognitive and organizational psychologists, computer scientists, sociologists and permutations thereof as befits the polymathic subject of this book – Herbert Simon.
Subjective probability plays an increasingly important role in many fields concerned with human cognition and behavior. Yet there have been significant criticisms of the idea that probabilities could actually be represented in the mind. This paper presents and elaborates a view of subjective probability as a kind of sampling propensity associated with internally represented generative models. The resulting view answers some of the best-known criticisms of subjective probability, and is also supported by empirical work in neuroscience and behavioral psychology. The repercussions of the view for how we conceive of many ordinary instances of subjective probability, and how it relates to more traditional conceptions of subjective probability, are discussed in some detail.
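The core idea of probability as a sampling propensity can be illustrated with a minimal sketch (my illustration, not the paper's model): an agent's subjective probability for an event is identified with the frequency at which samples drawn from its internal generative model satisfy that event. The function names and the toy two-dice model below are hypothetical.

```python
import random

def generative_model(rng):
    # A toy internal model of a chance setup: the sum of two fair six-sided dice.
    return rng.randint(1, 6) + rng.randint(1, 6)

def sampled_probability(model, event, n=100_000, seed=0):
    # Subjective probability as a sampling propensity: the fraction of
    # samples drawn from the generative model that satisfy `event`.
    rng = random.Random(seed)
    return sum(event(model(rng)) for _ in range(n)) / n

estimate = sampled_probability(generative_model, lambda total: total > 9)
exact = 6 / 36  # six of the 36 equiprobable outcomes sum to more than 9
print(estimate, exact)
```

On this picture the agent never stores a numeric probability; the number is recovered only by querying the model through sampling, which is why the estimate approaches the exact value as the number of samples grows.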
This chapter argues that Simon anticipated what has emerged as the consensus view about human cognition: embodied functionalism. According to embodied functionalism, cognitive processes appear at a distinctively cognitive level; types of cognitive processes (such as proving a theorem) are not identical to kinds of neural processes, because the former can take various physical forms in various individual thinkers. Nevertheless, the distinctive characteristics of such processes — their causal structures — are determined by fine-grained properties shared by the various, often especially bodily related, physical processes that realize them. Simon's apparently anti-embodiment views are surveyed and shown to be consistent with his many claims that lend themselves to an embodied interpretation and that, to a significant extent, helped to lay the groundwork for an embodied cognitive science.
The question ‘What is computation?’ might seem a trivial one to many, but there is far from consensus on it in philosophy of mind, cognitive science, and even physics. The lack of consensus leads to some interesting, yet contentious, claims, such as that cognition, or even the universe, is computational. Some have argued, though, that computation is a subjective phenomenon: whether or not a physical system is computational, and if so, which computation it performs, is entirely a matter of an observer choosing to view it as such. According to one view, which we dub bold anti-realist pancomputationalism, every physical object computes every computer program. According to another, more modest view, some computational systems can be ascribed multiple computational descriptions. We argue that the first view is misguided, and that the second view need not entail observer-relativity of computation. At least to a large extent, computation is an objective phenomenon. Construing computation as a form of information processing, we argue that information-processing considerations determine what type of computation takes place in physical systems.
This article addresses an open problem in the area of cognitive systems and architectures: namely, the problem of handling (in terms of processing and reasoning capabilities) complex knowledge structures that are at least plausibly comparable, both in size and in the typology of the encoded information, to the knowledge that humans process daily in executing everyday activities. Handling a huge amount of knowledge, and selectively retrieving it according to the needs emerging in different situational scenarios, is an important aspect of human intelligence. For this task, in fact, humans adopt a wide range of heuristics (Gigerenzer & Todd) due to their "bounded rationality" (Simon, 1957). From this perspective, one of the requirements that should be considered in the design, realization, and evaluation of intelligent cognitively-inspired systems is their ability to heuristically identify and retrieve, from the general knowledge stored in their artificial Long Term Memory (LTM), the knowledge that is synthetically and contextually relevant. This requirement, however, is often neglected. Currently, artificial cognitive systems and architectures are not able, de facto, to deal with complex knowledge structures that are even slightly comparable to the knowledge heuristically managed by humans. In this paper I will argue that this is not only a technological problem but also an epistemological one, and I will briefly sketch a proposal for a possible solution.
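The requirement of heuristic, context-sensitive retrieval from an artificial LTM can be sketched in a few lines. The store, the cue format, and the overlap-based relevance score below are illustrative assumptions, not the proposal made in the paper:

```python
# A minimal sketch of heuristic retrieval from an artificial Long Term
# Memory (LTM). Items are tagged with situational features; retrieval
# scores each item by how many features it shares with the current cues.
LTM = {
    "make-coffee": {"kitchen", "morning", "drink"},
    "fix-bicycle": {"garage", "tools", "transport"},
    "brew-tea":    {"kitchen", "drink", "evening"},
}

def retrieve(context_cues, k=1):
    """Return the k items whose features best overlap the current
    situational cues (a simple, fast-and-frugal relevance heuristic)."""
    scored = sorted(
        LTM,
        key=lambda item: len(LTM[item] & context_cues),
        reverse=True,
    )
    return scored[:k]

# In a kitchen in the morning, coffee-making knowledge scores highest.
best = retrieve({"kitchen", "morning"})
```

The point of the sketch is only that retrieval is selective and cheap: nothing like exhaustive inference over the whole store is performed, in line with the bounded-rationality constraint the abstract invokes.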
Perception is a first-person internal sensation induced within the nervous system at the time of arrival of sensory stimuli from objects in the environment. Lack of access to first-person properties has limited the view of perception as an emergent property, and it is currently studied using third-person observed findings from various levels. One feasible approach to understanding its mechanism is to build a hypothesis about the specific conditions and required circuit features of the nodal points where the mechanistic operation of perception takes place for one type of sensation in one species, and to verify the presence of comparable circuit properties for perceiving a different sensation in a different species. The present work explains visual perception in the mammalian nervous system from a first-person frame of reference and provides explanations for the homogeneity of perception of visual stimuli above the flicker fusion frequency, the perception of objects at locations different from their actual position, smooth pursuit and saccadic eye movements, the perception of object borders, and the perception of pressure phosphenes. Using results from temporal resolution studies and the known details of visual cortical circuitry, explanations are provided for (a) the perception of rapidly changing visual stimuli, (b) how objects are perceived in the correct orientation even though, on the third-person view, activity from the visual stimulus reaches the cortices in an inverted manner, and (c) the functional significance of the well-conserved columnar organization of the visual cortex.
A comparable circuitry detected in a different nervous system in a remote species, the olfactory circuitry of the fruit fly Drosophila melanogaster, provides an opportunity to explore circuit functions using genetic manipulations, which, along with high-resolution microscopic techniques and lipid membrane interaction studies, will be able to verify the structure-function details of the presented mechanism of perception.
In this paper a possible general framework for the representation of concepts in cognitive artificial systems and cognitive architectures is proposed. The framework is inspired by the so-called proxytype theory of concepts and combines it with the heterogeneity approach to concept representation, according to which concepts do not constitute a unitary phenomenon. The contribution of the paper is twofold: on the one hand, it aims at providing a novel theoretical hypothesis for the debate about concepts in the cognitive sciences by establishing unexplored connections between different theories; on the other hand, it aims at sketching a computational characterization of the problem of concept representation in cognitively inspired artificial systems and in cognitive architectures.
In the last thirty years, a relatively large group of cognitive scientists have begun characterising the mind in terms of two distinct, relatively autonomous systems. Dual Process Theories were developed to account for paradoxes in the empirical results of studies, mainly on reasoning. Such Dual Process Theories generally agree that System 1 is rapid, automatic, parallel, and heuristic-based, while System 2 is slow, capacity-demanding, sequential, and related to consciousness. While System 2 can still be reasonably well understood from a traditional cognitivist approach, I will argue that System 1 processing must be comprehended within an Embodied Embedded approach to Cognition.
Concept representation is still an open problem in the field of ontology engineering and, more generally, of knowledge representation. In particular, the issue of representing "non-classical" concepts, i.e. concepts that cannot be defined in terms of necessary and sufficient conditions, remains unresolved. In this paper we review empirical evidence from cognitive psychology, according to which concept representation is not a unitary phenomenon. On this basis, we sketch some proposals for concept representation, taking into account suggestions from psychological research. In particular, it seems that human beings employ both prototype-based and exemplar-based representations in order to represent non-classical concepts. We suggest that a similar, hybrid prototype-exemplar approach could also prove useful in the field of knowledge representation technology. Finally, we propose conceptual spaces as a suitable framework for developing some aspects of this proposal.
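The contrast between the two kinds of representation, and a hybrid of them, can be sketched as follows. The feature vectors, the "bird" category, and the weighting are illustrative assumptions, not the authors' proposal: a prototype score compares an item to the category centroid, an exemplar score compares it to the nearest stored instance, and the hybrid mixes the two.

```python
import math

# Hypothetical 2-D feature vectors for a "bird" category (toy data;
# the last exemplar is an atypical, penguin-like member).
BIRD_EXEMPLARS = [(0.9, 0.8), (0.7, 0.9), (0.8, 0.6), (0.2, 0.9)]

def prototype_score(x, exemplars):
    """Prototype theory: similarity to the category centroid."""
    n = len(exemplars)
    centroid = tuple(sum(e[i] for e in exemplars) / n for i in range(2))
    return -math.dist(x, centroid)

def exemplar_score(x, exemplars):
    """Exemplar theory: similarity to the closest stored instance."""
    return -min(math.dist(x, e) for e in exemplars)

def hybrid_score(x, exemplars, w=0.5):
    """Hybrid representation: a weighted mix of both strategies."""
    return w * prototype_score(x, exemplars) + (1 - w) * exemplar_score(x, exemplars)
```

An atypical item far from the centroid but identical to a stored exemplar still scores well on the exemplar term, which is the kind of behavior that motivates keeping both representations.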
This essay describes computational semantic networks for a philosophical audience and surveys several approaches to semantic-network semantics. In particular, propositional semantic networks are discussed; it is argued that only a fully intensional, Meinongian semantics is appropriate for them; and several Meinongian systems are presented.
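A propositional semantic network treats propositions themselves as nodes, so the network can represent beliefs about beliefs. The sketch below is a generic illustration under assumed names, not the notation of any particular system the essay surveys:

```python
# A minimal propositional semantic network: each proposition is a node
# with labeled arcs to its constituents, and may itself be the target
# of an arc from another proposition (enabling nested attitudes).
network = {}

def assert_prop(pid, **arcs):
    """Add a proposition node with labeled arcs to its constituents."""
    network[pid] = dict(arcs)
    return pid

# "John believes that Mary is tall."
m1 = assert_prop("m1", relation="tall", argument="Mary")
m2 = assert_prop("m2", relation="believes", agent="John", object="m1")

def constituents(pid):
    """Follow the labeled arcs one level out from a proposition node."""
    return network[pid]
```

Because "m1" is a node rather than a truth value, the network can hold it as the object of John's belief without asserting it, which is one reason an intensional semantics is a natural fit for such structures.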
Recent research in computational neuroscience has demonstrated that we now possess the ability to simulate neural systems in significant detail and on a large scale. Simulations on the scale of a human brain have recently been reported. The ability to simulate entire brains (or significant portions thereof) would be a revolutionary scientific advance, with substantial benefits for brain science. However, the prospect of whole-brain simulation comes with a set of new and unique ethical questions. In the present paper, we briefly outline some of those problems and emphasize the need to begin considering the ethical aspects of computational neuroscience.
One feature of vague predicates is that, as far as appearances go, they lack sharp application boundaries. I argue that we would not be able to locate boundaries even if vague predicates had sharp ones. I do so by developing an idealized cognitive model of a categorization faculty which has mobile and dynamic sortals ('classes', 'concepts' or 'categories') and formally proving that the degree of precision with which the boundaries of such sortals can be located is inversely constrained by their flexibility. Given the literature, it is plausible that we are appropriately like the model. Hence, an inability to locate sharp boundaries does not necessarily mean there are none; boundaries could be sharp, and it is plausible that we would nevertheless be unable to locate them.
Mental representations, Swiatczak (Minds Mach 21:19–32, 2011) argues, are fundamentally biochemical and their operations depend on consciousness; hence the computational theory of mind, based as it is on multiple realisability and purely syntactic operations, must be wrong. Swiatczak, however, is mistaken. Computation, properly understood, can afford descriptions/explanations of any physical process, and since Swiatczak accepts that consciousness has a physical basis, his argument against computationalism must fail. Of course, we may not have much idea how consciousness (itself a rather unclear plurality of notions) might be implemented, but we do have a hypothesis: that all of our mental life, including consciousness, is the result of computational processes and so not tied to a biochemical substrate. Like it or not, the computational theory of mind remains the only game in town. (David Davenport, Minds and Machines, DOI 10.1007/s11023-012-9271-5.)
The proponents of machine consciousness predicate the mental life of a machine, if any, exclusively on its formal, organizational structure, rather than on its physical composition. Given that matter is organized on a range of levels in time and space, this generic stance must be further constrained by a principled choice of levels on which the posited structure is supposed to reside. Indeed, not only must the formal structure fit well the physical system that realizes it, but it must do so in a manner that is determined by the system itself, simply because the mental life of a machine cannot be up to an external observer. To illustrate just how tall this order is, we carefully analyze the scenario in which a digital computer simulates a network of neurons. We show that the formal correspondence between the two systems thereby established is at best partial, and, furthermore, that it is fundamentally incapable of realizing both some of the essential properties of actual neuronal systems and some of the fundamental properties of experience. Our analysis suggests that, if machine consciousness is at all possible, conscious experience can only be instantiated in a class of machines that are entirely different from digital computers, namely, time-continuous, open, analog dynamical systems.
The problem of concept representation is relevant for many sub-fields of cognitive research, including psychology and philosophy, as well as artificial intelligence. In particular, in recent years it has received a great deal of attention within the field of knowledge representation, due to its relevance for both knowledge engineering and ontology-based technologies. However, the notion of a concept itself turns out to be highly disputed and problematic. In our opinion, one cause of this state of affairs is that the notion of a concept is, to some extent, heterogeneous, and encompasses different cognitive phenomena. This results in a tension between conflicting requirements, such as compositionality on the one hand and the need to represent prototypical information on the other. In some ways, artificial intelligence research shows traces of this situation. In this paper, we propose an analysis of this current state of affairs. Since it is our opinion that a mature methodology for knowledge representation and knowledge engineering should also take advantage of the empirical results of cognitive psychology concerning human abilities, we outline some proposals for concept representation in formal ontologies which take into account suggestions from psychological research. Our basic assumption is that knowledge representation systems whose design takes into account evidence from experimental psychology may give better results in many applications.
We argued [Since this argument appeared in other journals, I am reprising it here, almost verbatim.] (Fulda in J Law Info Sci 2:230–232, 1991/AI & Soc 8(4):357–359, 1994) that the paradox of the preface suggests a reason why machines cannot, will not, and should not be allowed to judge criminal cases. The argument merely shows that they cannot now and will not soon or easily be so allowed. The author, in fact, now believes that when—and only when—they are ready they actually should be so allowed, in the interests of justice. Both the original argument applied and this detailed reconsideration applies exclusively to trial courts, and both specifically exclude(d) sentencing. The argument highlights some key relevant differences between minds and machines and attempts, also, to explain why automation is of far greater import for the first-level justice system (trial courts) than for higher courts. A final section discusses why sentencing was, is, and should be excluded.
Research is starting to identify correlations between consciousness and some of the spatiotemporal patterns in the physical brain. For theoretical and practical reasons, the results of experiments on the correlates of consciousness have ambiguous interpretations. At any point in time a number of hypotheses about the correlates of consciousness in the brain co-exist, all of which are compatible with the current experimental results. This paper argues that consciousness should be attributed to any system that exhibits spatiotemporal physical patterns matching the hypotheses about the correlates of consciousness that are compatible with the current experimental results. Some computers running some programs should be attributed consciousness because they produce spatiotemporal patterns in the physical world that match those that are potentially linked with consciousness in the human brain.
Two of the most important concepts in contemporary philosophy of mind are computation and consciousness. This paper explores whether there is a strong relationship between these concepts in the following sense: is a computational theory of consciousness possible? That is, is the right kind of computation sufficient for the instantiation of consciousness? In this paper, I argue that the abstract nature of computational processes precludes computations from instantiating the concrete properties constitutive of consciousness. If this is correct, then not only is there no viable computational theory of consciousness, but the Human Mental State Multiple Realizability in Silicon Thesis is almost certainly false.
John Pollock (1940–2009) was an influential American philosopher who made important contributions to various fields, including epistemology and cognitive science. In the last 25 years of his life, he also contributed to the computational study of defeasible reasoning and practical cognition in artificial intelligence. He developed one of the first formal systems for argumentation-based inference and he put many issues on the research agenda that are still relevant for the argumentation community today. This paper presents an appreciation of Pollock's work on defeasible reasoning and its relevance for the computational study of argument. In our opinion, Pollock deserves to be remembered as one of the founding fathers of the field of computational argument, while, moreover, his work contains important lessons for current research in this field, reminding us of the richness of its object of study.
Many cognitive scientists, having discovered that some computational-level characterization f of a cognitive capacity φ is intractable, invoke heuristics as algorithmic-level explanations of how cognizers compute f. We argue that such explanations are actually dysfunctional, and rebut five possible objections. We then propose computational-level theory revision as a principled and workable alternative.