In this article, I make a novel argument for scientific antirealism. My argument is as follows: (1) the best human chess players would lose to the best computer chess programs; (2) if the best human chess players would lose to the best computer chess programs, then there is good reason to think that the best human chess players do not understand how to make winning moves; (3) if there is good reason to think that the best human chess players do not understand how to make winning moves, then there is good reason to think that the best human theories about unobservables are wrong; therefore, (4) there is good reason to think that the best human theories about unobservables are wrong. The article is divided into three sections. In the first, I outline the backdrop for my argument. In the second, I explain my argument. In the third, I consider some objections.
Deep learning algorithms are rapidly changing the way in which audiovisual media can be produced. Synthetic audiovisual media generated with deep learning—often subsumed colloquially under the label “deepfakes”—have a number of impressive characteristics; they are increasingly trivial to produce, and can be indistinguishable from real sounds and images recorded with a sensor. Much attention has been dedicated to ethical concerns raised by this technological development. Here, I focus instead on a set of issues related to the notion of synthetic audiovisual media, its place within a broader taxonomy of audiovisual media, and how deep learning techniques differ from more traditional approaches to media synthesis. After reviewing important etiological features of deep learning pipelines for media manipulation and generation, I argue that “deepfakes” and related synthetic media produced with such pipelines do not merely offer incremental improvements over previous methods, but challenge traditional taxonomical distinctions, and pave the way for genuinely novel kinds of audiovisual media.
This paper analyzes the rapid and unexpected rise of deep learning within Artificial Intelligence and its applications. It tackles the possible reasons for this remarkable success, providing candidate paths towards a satisfactory explanation of why it works so well, at least in some domains. A historical account is given of the ups and downs which have characterized neural network research and its evolution from “shallow” to “deep” learning architectures. A precise account of “success” is given, in order to sieve out aspects pertaining to marketing or the sociology of research; the remaining aspects seem to certify a genuine value of deep learning, calling for explanation. The two main alleged propelling factors for deep learning, namely computing hardware performance and neuroscience findings, are scrutinized and evaluated as relevant but insufficient for a comprehensive explanation. We review various attempts that have been made to provide mathematical foundations able to justify the efficiency of deep learning, and we deem this the most promising road to follow, even if the current achievements are too scattered and hold only for very limited classes of deep neural models. The authors’ take is that most of what explains why deep learning works at all, and even very well, across so many domains of application is still to be understood, and further research addressing the theoretical foundations of artificial learning is very much needed.
Organizational learning can be described as a transfer of individuals’ cognitive mental models to shared mental models. Employees seeking the same colleagues for advice are structurally equivalent, and the aim of the paper is to study whether this concept can act as a conduit for organizational learning. It is argued that the mimicking of colleagues’ advice-seeking structures will induce structural equivalence and transfer the accuracy of individuals’ cognitive mental models to shared mental models. Taking a dyadic level of analysis, the authors revisit a classical case and present novel data analyses. The empirical results indicate that the mimicking of advice-seeking structures can alter cognitive accuracy. The authors discuss the findings’ implications for organizational learning theory and practice, address the study’s limitations, and suggest avenues for future research.
A comprehensive introduction to the Language of Thought Hypothesis (LOTH), accessible to general audiences. LOTH is an empirical thesis about thought and thinking. To explain them, it postulates a physically realized system of representations that have a combinatorial syntax (and semantics) such that operations on representations are causally sensitive only to the syntactic properties of representations. According to LOTH, thought is, roughly, the tokening of a representation that has a syntactic (constituent) structure with an appropriate semantics. Thinking thus consists in syntactic operations defined over representations. Most of the arguments for LOTH derive their strength from their ability to explain certain empirical phenomena, such as the productivity and systematicity of thought and thinking.
This paper surveys applications of logical methods in the cognitive sciences. Special attention is paid to non-monotonic logics and complexity theory. We argue that these particular tools have been useful in clarifying the debate between symbolic and connectionist models of cognition.
The following three theses are inconsistent: (1) (Paradigmatic) connectionist systems perform computations. (2) Performing computations requires executing programs. (3) Connectionist systems do not execute programs. Many authors embrace (2). This leads them to a dilemma: either connectionist systems execute programs or they don't compute. Accordingly, some authors attempt to deny (1), while others attempt to deny (3). But as I will argue, there are compelling reasons to accept both (1) and (3). So, we should replace (2) with a more satisfactory account of computation. Once we do, we can see more clearly what is peculiar to connectionist computation.
Although connectionism is advocated by its proponents as an alternative to the classical computational theory of mind, doubts persist about its _computational_ credentials. Our aim is to dispel these doubts by explaining how connectionist networks compute. We first develop a generic account of computation—no easy task, because computation, like almost every other foundational concept in cognitive science, has resisted canonical definition. We opt for a characterisation that does justice to the explanatory role of computation in cognitive science. Next we examine what might be regarded as the “conventional” account of connectionist computation. We show why this account is inadequate and hence fosters the suspicion that connectionist networks aren’t genuinely computational. Lastly, we turn to the principal task of the paper: the development of a more robust portrait of connectionist computation. The basis of this portrait is an explanation of the representational capacities of connection weights, supported by an analysis of the weight configurations of a series of simulated neural networks.
In this paper I describe basic features of traditional (British) emergentism and Popper’s emergentist theory of consciousness and compare them to the contemporary versions of emergentism present in the connectionist approach in cognitive science. I argue that despite their similarities, the traditional form, as well as Popper’s theory, belongs to strong causal emergentism and yields radically different ontological consequences compared to the weaker, contemporary version present in cognitive science. Strong causal emergentism denies the causal closure of the physical domain and introduces genuinely new mental causal powers and genuine downward causation, while weak emergentism provides new insights into understanding mechanisms and explanations that are compatible with physicalism.
Newell proposed that cognitive theories be developed in an effort to satisfy multiple criteria and to avoid theoretical myopia. He provided two overlapping lists of 13 criteria that the human cognitive architecture would have to satisfy in order to be functional. We have distilled these into 12 criteria: flexible behavior, real-time performance, adaptive behavior, vast knowledge base, dynamic behavior, knowledge integration, natural language, learning, development, evolution, and brain realization. There would be greater theoretical progress if we evaluated theories by a broad set of criteria such as these and attended to the weaknesses such evaluations revealed. To illustrate how theories can be evaluated we apply these criteria to both classical connectionism and the ACT-R theory. The strengths of classical connectionism on this test derive from its intense effort in addressing empirical phenomena in such domains as language and cognitive development. Its weaknesses derive from its failure to acknowledge a symbolic level to thought. In contrast, ACT-R includes both symbolic and subsymbolic components. The strengths of the ACT-R theory derive from its tight integration of the symbolic component with the subsymbolic component. Its weaknesses largely derive from its failure, as yet, to adequately engage in intensive analyses of issues related to certain criteria on Newell's list. Key Words: cognitive architecture; connectionism; hybrid systems; language; learning; symbolic systems.
In this article, we highlight three questions: (1) Does human cognition rely on structured internal representations? (2) How should theories, models and data relate? (3) In what ways might embodiment, action and dynamics matter for understanding the mind and the brain?
In this paper we defend a position we call radical connectionism. Radical connectionism claims that cognition _never_ implicates an internal symbolic medium, not even when natural language plays a part in our thought processes. On the face of it, such a position renders the human capacity for abstract thought quite mysterious. However, we argue that connectionism is committed to an analog conception of neural computation, and that representation of the abstract is no more problematic for a system of analog vehicles than for a symbol system. Natural language is therefore not required as a representational medium for abstract thought. Since natural language is arguably not a representational medium _at all_, but a conventionally governed scheme of communicative signals, we suggest that the role of internalised (i.e., self-directed) language is best conceived in terms of the coordination and control of cognitive activities within the brain.
Connectionist networks have been used to model a wide range of cognitive phenomena, including developmental, neuropsychological and normal adult behaviours. They have offered radical alternatives to traditional accounts of well-established facts about cognition. The primary source of the success of these models is their sensitivity to statistical regularities in their training environment. This paper provides a brief description of the connectionist toolbox and how this has developed over the past two decades, with particular reference to the problem of reading aloud.
In Book I, part iv, section 2 of the Treatise, "Of scepticism with regard to the senses," Hume presents two different answers to the question of how we come to believe in the continued existence of unperceived objects. He rejects his first answer shortly after its formulation, and the remainder of the section articulates an alternative account of the development of the belief. The account that Hume adopts, however, is susceptible to a number of insurmountable objections, which motivates a reassessment of his original proposal. This paper defends a version of Hume's initial explanation of the belief in continued existence and examines some of its philosophical implications.
In the late 1980s, there were many who heralded the emergence of connectionism as a new paradigm – one which would eventually displace the classically symbolic methods then dominant in AI and Cognitive Science. At present, there remain influential connectionists who continue to defend connectionism as a more realistic paradigm for modeling cognition, at all levels of abstraction, than the classical methods of AI. Not infrequently, one encounters arguments along these lines: given what we know about neurophysiology, it is just not plausible to suppose that our brains are digital computers. Thus, they could not support a classical architecture. I argue here for a middle ground between connectionism and classicism. I assume, for argument's sake, that some form(s) of connectionism can provide reasonably approximate models – at least for lower-level cognitive processes. Given this assumption, I argue on theoretical and empirical grounds that most human mental skills must reside in separate connectionist modules or sub-networks. Ultimately, it is argued that the basic tenets of connectionism, in conjunction with the fact that humans often employ novel combinations of skill modules in rule following and problem solving, lead to the plausible conclusion that, in certain domains, high level cognition requires some form of classical architecture. During the course of argument, it emerges that only an architecture with classical structure could support the novel patterns of information flow and interaction that would exist among the relevant set of modules. Such a classical architecture might very well reside in the abstract levels of a hybrid system whose lower-level modules are purely connectionist.
Green offers us two options: either connectionist models are literal models of brain activity or they are mere instruments, with little or no ontological significance. According to Green, only the first option renders connectionist models genuinely explanatory. I think there is a third possibility. Connectionist models are not literal models of brain activity, but neither are they mere instruments. They are abstract, IDEALISED models of the brain that are capable of providing genuine explanations of cognitive phenomena.
It is widely assumed that common sense psychological explanations of human action are a species of causal explanation. I argue against this construal, drawing on Ramsey et al.'s paper, “Connectionism, eliminativism, and the future of folk psychology”. I argue that if certain connectionist models are correct, then mental states cannot be identified with functionally discrete causes of behavior, and I respond to some recent attempts to deny this claim. However, I further contend that our common sense psychological practices are not committed to the falsity of such connectionist models. The paper concludes that common sense psychology is not committed to the identification of mental states with functionally discrete causes of behavior, and hence that common sense psychology is not committed to the causal account of action explanation.
There is a distinction between locality and modularity. These two terms have often been used interchangeably in the target article and commentary. Using this distinction, we argue in favor of modularity. In addition, we argue that both PDP-type networks and box-and-arrow models have their own strengths and pitfalls.
This volume provides a critical assessment of the wide spectrum of Hayek's celebrated work as economist and social philosopher. Included are papers on Hayek's early writings in the field of monetary economics, on which his later campaign against inflation, his controversial proposal for competing currencies, and his negative view of the impact of trade unions on the economy are based. Hayek's social philosophy, often regarded as the centrepiece of his famous work, and the fundamental findings about human thinking, society, the market system and social rules of conduct on which it is based, are evaluated by leading contemporary social philosophers. The volume leaves little doubt as to the considerable impact of Hayek's thinking on economic policy and social philosophy.
This is an overview of recent philosophical discussion about connectionism and the foundations of cognitive science. Connectionist modeling in cognitive science is described. Three broad conceptions of the mind are characterized, and their comparative strengths and weaknesses are discussed: (1) the classical computation conception in cognitive science; (2) a popular foundational interpretation of connectionism that John Tienson and I call “non‐sentential computationalism”; and (3) an alternative interpretation of connectionism we call “dynamical cognition.” Also discussed are two recent philosophical attempts to enlist connectionism in defense of eliminativism about folk psychology.
I explain why, within the nonclassical framework for cognitive science we describe in the book, cognitive-state transitions can fail to be tractably computable even if they are subserved by a discrete dynamical system whose mathematical-state transitions are tractably computable. I distinguish two ways that cognitive processing might conform to programmable rules in which all operations that apply to representation-level structure are primitive, and two corresponding constraints on models of cognition. Although Litch is correct in maintaining that classical cognitive science is not committed to the first constraint, it is committed to the second. This fact constitutes an illuminating gloss on our claim that one foundational assumption of classicism is that human cognition conforms to programmable, representation-level, rules.
Any analysis of the concept of computation as it occurs in the context of a discussion of the computational model of the mind must be consonant with the philosophic burden traditionally carried by that concept as providing a bridge between a physical and a psychological description of an agent. With this analysis in hand, one may ask the question: are connectionist-based systems consistent with the computational model of the mind? The answer depends upon which of several versions of connectionism one presupposes: non-learning connectionist-based systems as simulated on digital computers are consistent with the computational model of the mind, whereas connectionist-based systems (/dynamical systems) qua analog systems are not.
The reemergence of connectionism has profoundly altered the philosophy of mind. Paul Churchland has argued that it should equally transform the philosophy of science. He proposes that connectionism offers radical and useful new ways of understanding theories and explanations.
It is not widely realised that Turing was probably the first person to consider building computing machines out of simple, neuron-like elements connected together into networks in a largely random manner. Turing called his networks unorganised machines. By the application of what he described as appropriate interference, mimicking education, an unorganised machine can be trained to perform any task that a Turing machine can carry out, provided the number of neurons is sufficient. Turing proposed simulating both the behaviour of the network and the training process by means of a computer program. We outline Turing's connectionist project of 1948.
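The simplest of Turing's unorganised machines (the A-type) is a randomly wired network of two-input NAND units updated in synchrony with a central clock. As a minimal sketch of such a machine—the node count, wiring density, and seed below are arbitrary choices, and the "appropriate interference" used to train B-type machines is not modelled:

```python
import random

def make_unorganised_machine(n_nodes, seed=0):
    """Randomly wire a network of two-input NAND units (an A-type machine)."""
    rng = random.Random(seed)
    # Each unit reads the previous outputs of two randomly chosen units.
    wiring = [(rng.randrange(n_nodes), rng.randrange(n_nodes))
              for _ in range(n_nodes)]
    state = [rng.randint(0, 1) for _ in range(n_nodes)]
    return wiring, state

def step(wiring, state):
    """Synchronous update: each unit outputs the NAND of its two inputs."""
    return [1 - (state[a] & state[b]) for a, b in wiring]

wiring, state = make_unorganised_machine(8)
for _ in range(5):  # run the machine for a few clock ticks
    state = step(wiring, state)
```

Because the update is deterministic and the state space finite, such a machine always settles into a cycle of states; training a B-type machine amounts to switching individual connections on or off until a useful cycle of behaviour emerges.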
I sketch a theory of cognitive representation from recent "connectionist" cognitive science. I then argue that (i) this theory is reducible to neuroscientific theories, yet (ii) its kinds are multiply realized at a neurobiological level. This argument demonstrates that multiple realizability alone is no barrier to the reducibility of psychological theories. I conclude that the multiple realizability argument, the most influential argument against psychophysical reductionism, should be abandoned.
Recent work in the methodology of connectionist explanation has focussed on the notion of levels of explanation. Specific issues in connectionism here intersect with wider areas of debate in the philosophy of psychology and the philosophy of science generally. The issues I raise in this chapter, then, are not unique to cognitive science; but they arise in new and important contexts when connectionism is taken seriously as a model of cognition. The general questions are the relation between levels and the status of levels which have no obvious relation to others. In speaking of levels, what is the connection, if there is one, between explanation and ontology? Which, if any, concept of reduction is applicable to connectionist systems? What kind of legitimacy can the constructs of common sense psychology, or of that version of intentional realism represented by classical symbol-systems AI, have in a full-scale connectionist theory of mind?
Fundamental assumptions behind qualitative modelling are critically considered and some inherent problems in that modelling approach are outlined. The problems stem from the assumption that there exists a sufficient set of symbols representing the fundamental features of the physical world; that assumption causes serious difficulties when modelling continuous systems. For cases not suited to qualitative modelling, an alternative approach to building intelligent systems is proposed, one that combines neural networks with quantitative modelling.
This paper has a two-fold aim. First, it reinforces a version of the "syntactic argument" given in Aizawa (1994). This argument shows that connectionist networks do not provide a means of implementing representations without rules. Horgan and Tienson have responded to the syntactic argument in their book and in another paper (Horgan & Tienson, 1993), but their responses do not meet the challenge posed by my formulation of the syntactic argument. My second aim is to describe a kind of cognitive architecture. This architecture might be called a computational architecture, but it is not a rules-and-representations architecture nor the representations-without-rules architecture that Horgan and Tienson wish to endorse.
Connectionism—also known as parallel distributed processing, or neural network modeling—offers promise as a framework to unite clinical and cognitive psychology, and as a tool for studying conscious and unconscious mental activity. This paper describes a neural network model of the case study of Lucy R., from Freud and Breuer's Studies on Hysteria. Though very simple in architecture, the network spontaneously displays analogues of repression and hallucination, corresponding to Lucy R.'s symptoms. Salient elements of Lucy's conscious experience are represented in the model by the activations of neuronlike processors in a fully interconnected network, without hidden units. The model learns to associate elements of experience that were associated in Lucy's case history. Some of these configurations of elements were traumatic for Lucy; trauma is modeled in the network by "emphatic learning," learning accomplished at an abnormally high learning rate. The model suggests that changing associations among conscious elements are sufficient to generate the symptoms Freud observed: apparent repression, hallucination, and recovery through therapy. In the case of Lucy R., Freud's theoretical inference regarding active but unconscious thought is not required by his data. Instead, the unconscious can be understood as a set of complex dispositions embodied in connections between elements of conscious experience.
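The architecture described—a fully interconnected network without hidden units, with "emphatic learning" modelled as an abnormally high learning rate—can be sketched as a simple Hebbian associative network. The patterns, rates, and pattern sizes below are hypothetical stand-ins for illustration, not data from the case study or from the paper's own simulations:

```python
import numpy as np

def train(patterns, rates):
    """Hebbian learning over a fully interconnected layer (no hidden units).
    Each pattern is stored at its own learning rate; an abnormally high
    rate plays the role of 'emphatic' (traumatic) learning."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p, eta in zip(patterns, rates):
        W += eta * np.outer(p, p)
    np.fill_diagonal(W, 0.0)  # no self-connections
    return W

def recall(W, cue, steps=10):
    """Settle from a partial cue toward a stored association."""
    s = cue.astype(float)
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0  # break ties toward +1
    return s

# Hypothetical +/-1 patterns: an ordinary memory and a "traumatic" one,
# the latter stored at five times the normal learning rate.
ordinary = np.array([1, -1, 1, -1, 1, -1])
trauma = np.array([1, 1, -1, -1, 1, 1])
W = train(np.stack([ordinary, trauma]), rates=[1.0, 5.0])

# A partial cue matching the traumatic pattern reinstates it in full,
# an analogue of intrusive recall from a fragmentary reminder.
completed = recall(W, np.array([1, 1, 0, 0, 0, 0]))
```

Because the strongly weighted pattern dominates the settling dynamics, even a weak or ambiguous cue tends to be pulled toward it—one way a purely associative network, with no hidden "unconscious" layer, can mimic symptom-like behaviour.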