In the late summer of 1998, the authors, a cognitive scientist and a logician, started talking about the relevance of modern mathematical logic to the study of human reasoning, and we have been talking ever since. This book is an interim report of that conversation. It argues that results such as those on the Wason selection task, purportedly showing the irrelevance of formal logic to actual human reasoning, have been widely misinterpreted, mainly because the picture of logic current in psychology and cognitive science is completely mistaken. We aim to give the reader a more accurate picture of mathematical logic and, in doing so, hope to show that logic, properly conceived, is still a very helpful tool in cognitive science. The main thrust of the book is therefore constructive. We give a number of examples in which logical theorizing helps in understanding and modeling observed behavior in reasoning tasks, deviations of that behavior in a psychiatric disorder (autism), and even the roots of that behavior in the evolution of the brain.
We review the various explanations that have been offered to account for subjects' behaviour in Wason's famous selection task. We argue that one element that is lacking is a good understanding of subjects' semantics for the key expressions involved, and an understanding of how this semantics is affected by the demands the task puts upon the subject's cognitive system. We make novel proposals in these terms for explaining the major content effects of deontic materials. Throughout we illustrate with excerpts from tutorial dialogues which motivate the kinds of analysis proposed. Our long-term goal is an integration of the various insights about conditional reasoning on offer from different cognitive science methodologies. The purpose of this paper is to try to draw the attention of logicians and semanticists to this area, since we believe that empirical investigation of the cognitive processes involved could benefit from semantic analyses.
Although Kant (1998) envisaged a prominent role for logic in the argumentative structure of his Critique of Pure Reason, logicians and philosophers have generally taken a dim view of Kant's general, formal and transcendental logics. We argue that Kant's transcendental logic is a logic in the strict formal sense, albeit with a semantics and a definition of validity that are vastly more complex than those of first-order logic. The main technical application of the formalism developed here is a formal proof that Kant's logic is after all a distinguished subsystem of first-order logic, namely what is known as geometric logic.
Compositionality remains effective as an explanation of cases in which processing complexity increases due to syntactic factors only. It falls short of accounting for situations in which complexity arises from interactions with the sentence or discourse context, perceptual cues, and stored knowledge. The idea of compositionality as a methodological principle is appealing, but imputing the complexity to one component of the grammar or another, instead of enriching the notion of composition, is not always an innocuous move that leads to fully equivalent theories. Compositionality sets an upper bound on the degree of informational encapsulation that can be posited by modular or component-based theories of language: simple composition ties in with a strongly modular take on meaning assembly, which is seen as sealed off from information streams other than the lexicon and the syntax.
We present a faithful axiomatization of von Mises' notion of a random sequence, using an abstract independence relation. A byproduct is a quantifier elimination theorem for Friedman's "almost all" quantifier in terms of this independence relation.
We sketch four applications of Marr's levels‐of‐analysis methodology to the relations between logic and experimental data in the cognitive neuroscience of language and reasoning. The first part of the paper illustrates the explanatory power of computational level theories based on logic. We show that a Bayesian treatment of the suppression task in reasoning with conditionals is ruled out by EEG data, supporting instead an analysis based on defeasible logic. Further, we describe how results from an EEG study on temporal prepositions can be reanalyzed using formal semantics, addressing a potential confound. The second part of the article demonstrates the predictive power of logical theories drawing on EEG data on processing progressive constructions and on behavioral data on conditional reasoning in people with autism. Logical theories can constrain processing hypotheses all the way down to neurophysiology, and conversely neuroscience data can guide the selection of alternative computational level models of cognition.
We present a critical discussion of the claim (most forcefully propounded by Chaitin) that algorithmic information theory sheds new light on Gödel's first incompleteness theorem.
We investigate various ways of introducing axioms for randomness in set theory. The results show that these axioms, when added to ZF, imply the failure of AC. But the axiom of extensionality plays an essential role in the derivation, and a deeper analysis may ultimately show that randomness is incompatible with extensionality.
In this paper we present a semantic analysis of the imperfective paradox based on the Event Calculus, a planning formalism characterizing a class of models which can be computed by connectionist networks. We report the results of a questionnaire that support the semantic theory and suggest that different aspectual classes of VPs in the progressive give rise to different entailment patterns. Further, a processing model is outlined, combining the semantic analysis with the psycholinguistic principle of immediacy in the framework of recurrent networks. The model is used to derive predictions concerning the electrophysiological correlates of the computations described by the Event Calculus.
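The Event Calculus mentioned above can be illustrated with a minimal interpreter. The sketch below is a toy example of ours, not the formalism of the paper: the predicate names (initiates, terminates, happens, holds_at) follow standard Event Calculus usage, but the 'writing' scenario and the encoding are invented for illustration.

```python
# Toy Event Calculus: fluents become true when initiated by an event and
# persist by inertia until terminated.  Hypothetical scenario: "John was
# writing a letter" -- the activity starts at time 1 and is interrupted at 5.
initiates = {("start_writing", "writing")}   # (event, fluent): event makes fluent true
terminates = {("interrupt", "writing")}      # (event, fluent): event makes fluent false
happens = {("start_writing", 1), ("interrupt", 5)}  # (event, time) occurrences

def holds_at(fluent, t):
    """A fluent holds at t if some earlier event initiated it and no
    intervening event terminated it (the common-sense law of inertia)."""
    state = False
    for e, te in sorted(happens, key=lambda p: p[1]):
        if te >= t:          # only events strictly before t matter
            break
        if (e, fluent) in initiates:
            state = True
        if (e, fluent) in terminates:
            state = False
    return state

# 'writing' holds strictly between the initiating and terminating events:
print([t for t in range(8) if holds_at("writing", t)])  # → [2, 3, 4, 5]
```

The interrupted activity never reaches completion, which is the kind of distinction the imperfective paradox turns on: the progressive is true of the ongoing activity even though the culminated event never occurs.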
We review briefly the attempts to define random sequences. These attempts suggest two theorems: one concerning the number of subsequence selection procedures that transform a random sequence into a random sequence; the other concerning the relationship between definitions of randomness based on subsequence selection and those based on statistical tests.
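The notion of subsequence selection alluded to here can be made concrete. The following sketch is illustrative only: it uses a pseudorandom stand-in for a random sequence, and `select_after_pattern` is our own hypothetical place selection (it decides whether to select a bit using only the bits seen so far), not a definition from the paper.

```python
import random

# For a von Mises-style random sequence, the limiting relative frequency of
# 1s should be invariant under admissible subsequence selections.
random.seed(0)
seq = [random.randint(0, 1) for _ in range(100_000)]  # pseudorandom stand-in

def select_after_pattern(seq, pattern):
    """Select exactly the bits that immediately follow an occurrence of
    `pattern` -- a place selection depending only on the preceding bits."""
    k = len(pattern)
    return [seq[i] for i in range(k, len(seq)) if seq[i - k:i] == pattern]

def freq(s):
    return sum(s) / len(s)

print(round(freq(seq), 3))                                # overall frequency of 1s
print(round(freq(select_after_pattern(seq, [1, 1])), 3))  # frequency after seeing 1,1
```

Both frequencies come out near 0.5: selecting "the bit after two 1s" confers no predictive advantage, which is the invariance property a genuinely random sequence must exhibit.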
Executive function has become an important concept in explanations of psychiatric disorders, but we currently lack comprehensive models of normal executive function and of its malfunctions. Here we illustrate how defeasible logical analysis can aid progress in this area. We illustrate using autism and attention deficit hyperactivity disorder (ADHD) as example disorders, and show how logical analysis reveals commonalities between linguistic and non-linguistic behaviours within each disorder, and how contrasting sub-components of executive function are involved across disorders. The analysis also shows that logical methods are as applicable to fast, automatic and unconscious reasoning as they are to slow, deliberate cogitation.
This reply to Oaksford and Chater's critical discussion of our use of logic programming (LP) to model and predict patterns of conditional reasoning will frame the dispute in terms of the semantics of the conditional. We begin by outlining some common features of LP and probabilistic conditionals in knowledge-rich reasoning over long-term memory knowledge bases. For both, context determines causal strength; there are inferences from the absence of certain evidence; and both have analogues of the Ramsey test. Some current work shows how a combination of counting defeaters and statistics from network monitoring can provide the information for graded responses from LP reasoning. With this much introduction, we then respond to O&C's specific criticisms and misunderstandings.
We aim to show that Kant's theory of time is consistent by providing axioms whose models validate all synthetic a priori principles for time proposed in the Critique of Pure Reason. In this paper we focus on the distinction between time as form of intuition and time as formal intuition, for which Kant's own explanations are all too brief. We provide axioms that allow us to construct 'time as formal intuition' as a pair of continua, corresponding to time as 'inner sense' and the external representation of time as a line. Both continua are replete with infinitesimals, which we use to elucidate an enigmatic discussion of 'rest' in the Metaphysical Foundations of Natural Science. Our main formal tools are Alexandroff topologies, inverse systems and the ring of dual numbers.
In this article we provide a mathematical model of Kant's temporal continuum that satisfies the (not obviously consistent) synthetic a priori principles for time that Kant lists in the Critique of Pure Reason (CPR), the Metaphysical Foundations of Natural Science (MFNS), the Opus Postumum and the notes and fragments published after his death. The continuum so obtained has some affinities with the Brouwerian continuum, but it also has 'infinitesimal intervals' consisting of nilpotent infinitesimals, which capture Kant's theory of rest and motion in MFNS. While constructing the model, we establish a concordance between the informal notions of Kant's theory of the temporal continuum and formal correlates to these notions in the mathematical theory. Our mathematical reconstruction of Kant's theory of time allows us to understand what 'faculties and functions' must be in place for time to satisfy all the synthetic a priori principles for time mentioned. We present here a mathematically precise account of Kant's transcendental argument for time in the CPR and of the relation between the categories, the synthetic a priori principles for time, and the unity of apperception; it is the most precise account of this relation to date. We focus our exposition on a mathematical analysis of Kant's informal terminology, but for reasons of space most theorems are explained rather than formally proven; formal proofs are available in (Pinosio, 2017). The analysis presented in this paper is related to the more general project of developing a formalization of Kant's critical philosophy (Achourioti & van Lambalgen, 2011). A formal approach can shed light on the most controversial concepts of Kant's theoretical philosophy, and is a valuable exegetical tool in its own right.
However, we wish to make clear that mathematical formalization cannot displace traditional exegetical methods; rather, it is an exegetical tool in its own right, which works best when coupled with a keen awareness of the subtleties involved in understanding the philosophical issues at hand. In this case, a virtuous 'hermeneutic circle' between mathematical formalization and philosophical discourse arises.
This essay attempts to develop a psychologically informed semantics of perception reports whose predictions match the linguistic data. As suggested by the quotation from Miller and Johnson-Laird, we take a hallmark of perception to be its fallible nature; the resulting semantics thus necessarily differs from situation semantics. On the psychological side, our main inspiration is Marr's (1982) theory of vision, which can easily accommodate fallible perception. In Marr's theory, vision is a multi-layered process. The different layers have filters of different gradation, which makes vision at each of them approximate. On the logical side, our task is therefore twofold: to formalise the layers and the ways in which they may refine each other.
We show how sequent calculi for some generalized quantifiers can be obtained by generalizing the Herbrand approach to ordinary first-order proof theory. Typical of the Herbrand approach, as compared to plain sequent calculus, is increased control over relations of dependence between variables. In the case of generalized quantifiers, explicit attention to relations of dependence becomes indispensable for setting up proof systems. It is shown that this can be done by turning variables into structured objects, governed by various types of structural rules. These structured variables are interpreted semantically by means of a dependence relation. This relation is an analogue of the accessibility relation in modal logic. We then isolate a class of axioms for generalized quantifiers which correspond to first-order conditions on the dependence relation.
The paper traces some of the assumptions that have informed conservative naturalism in linguistic theory, critically examines their justification, and proposes a more liberal alternative.
Oaksford & Chater (O&C) advocate Bayesian probability as a way to deal formally with the pervasive nonmonotonicity of common sense reasoning. We show that some forms of nonmonotonicity cannot be treated by Bayesian methods.
The paper addresses the way in which modern linguistics, in particular but not exclusively the generative tradition, has constructed its core concepts. It argues that a particular form of construction, reminiscent of, but crucially different from, abstraction, which is dubbed 'idealisation', plays a central role here. The resemblances and differences between abstractions and idealisations are investigated, and consequences of the reliance on idealisations are reviewed.
In this paper we provide a mathematical model of Kant's temporal continuum that yields formal correlates for Kant's informal treatment of this concept in the Critique of Pure Reason and in other works of his critical period. We show that the formal model satisfies Kant's synthetic a priori principles for time and that it even illuminates what "faculties and functions" must be in place, as "conditions for the possibility of experience", for time to satisfy such principles. We then present a mathematically precise account of Kant's transcendental theory of time, the most precise account to date. Moreover, we show that the Kantian continuum which we obtain has some affinities with the Brouwerian continuum but that it also has "infinitesimal intervals" consisting of nilpotent infinitesimals; these allow us to capture Kant's theory of rest and motion in the Metaphysical Foundations of Natural Science. While our focus is on Kant's theory of time, the material in this paper is more generally relevant for the problem of developing a rigorous theory of the phenomenological continuum, in the tradition of Whitehead, Russell, and Weyl among others.
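The "infinitesimal intervals" of nilpotent infinitesimals mentioned here can be modelled by dual numbers a + b·ε with ε² = 0. The class below is a minimal illustrative sketch of that algebra, not the paper's construction.

```python
class Dual:
    """Dual numbers a + b*eps with eps**2 == 0: a minimal model of
    nilpotent infinitesimals (illustrative sketch only)."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b  # standard part, infinitesimal part

    def __add__(self, other):
        return Dual(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a1 + b1*eps)(a2 + b2*eps) = a1*a2 + (a1*b2 + b1*a2)*eps,
        # because the eps**2 term vanishes by nilpotency
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

    def __repr__(self):
        return f"{self.a} + {self.b}*eps"

eps = Dual(0.0, 1.0)
print(eps * eps)   # → 0.0 + 0.0*eps : nonzero as an element, yet squares to zero
```

A quantity can thus be zero "to first order" while carrying a nonzero infinitesimal part b·ε, the sort of distinction that makes a theory of rest and motion over infinitesimal intervals expressible.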
This article aims to achieve two goals: to show that probability is not the only way of dealing with uncertainty; and to provide evidence that logic-based methods can support reasoning with uncertainty well. For the latter claim, two paradigmatic examples are presented: logic programming with Kleene semantics for modelling reasoning from information in a discourse to an interpretation of the state of affairs of the intended model, and a neural-symbolic implementation of input/output logic for dealing with uncertainty in dynamic normative contexts.
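The strong Kleene three-valued semantics mentioned above can be sketched in a few lines; encoding the third value ("unknown") as Python's None is our own illustrative choice, not the paper's implementation.

```python
# Strong Kleene connectives over {True, False, None}, with None = 'unknown'.
def k_not(p):
    return None if p is None else not p

def k_and(p, q):
    # A conjunction is false as soon as one conjunct is false,
    # regardless of whether the other is unknown.
    if p is False or q is False:
        return False
    if p is None or q is None:
        return None
    return True

def k_or(p, q):
    # Defined via the De Morgan dual of conjunction.
    return k_not(k_and(k_not(p), k_not(q)))

print(k_and(True, None))   # → None : conjoining with an unknown stays unknown
print(k_or(True, None))    # → True : one true disjunct settles a disjunction
```

The asymmetry on display (False is decisive for conjunction, True for disjunction) is what lets a reasoner draw definite conclusions from partial information, which is the point of pairing this semantics with logic programming.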
Sieg has proposed axioms for computability whose models can be reduced to Turing machines. This lecture will investigate to what extent these axioms hold for reasoning. In particular we focus on the requirement that the configurations that a computing agent (whether human or machine) operates on must be 'immediately recognisable'. If one thinks of reasoning as derivation in a calculus, this requirement is satisfied; but even in contexts which are only slightly less formal, the requirement cannot be met. Our main example will be the Wason selection task, a propositional reasoning task in which, in a typical (undergraduate) subject group, only around 5% arrive at the answer dictated by classical logic. The instructions for this task (as well as for other standard tasks in the psychology of reasoning, such as syllogisms) do not contain any 'immediately recognisable' configurations. The subject must try to find an interpretation of the task by making the various elements in the instructions cohere, in effect solving a difficult constraint satisfaction problem which has no unique solution. The subject has given a complete interpretation of the task if she can formulate the problem posed in the task as a theorem to be proved. The complexity of such theorems can be quite high; e.g. for the propositional Wason selection task the theorem can be in $\Sigma^1_3$. This sounds implausible, but we'll present experimental data confirming this point.
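For concreteness, the "answer dictated by classical logic" in the standard selection task (cards showing A, K, 4, 7; rule "if a card has a vowel on one side, it has an even number on the other") can be computed by brute force. The sketch below illustrates only this classical-logic benchmark, not the interpretive process the abstract describes; the function names and the set of candidate hidden faces are our own.

```python
# Each card has a letter on one side and a number on the other.  A card must
# be turned over iff some possible hidden face would falsify the rule
# "if vowel on one side, then even number on the other".
def is_vowel(x):
    return isinstance(x, str) and x in "AEIOU"

def is_even_number(x):
    return isinstance(x, int) and x % 2 == 0

def falsifies(face1, face2):
    """The rule fails on a card iff one side is a vowel and the other
    side is not an even number."""
    return (is_vowel(face1) and not is_even_number(face2)) or \
           (is_vowel(face2) and not is_even_number(face1))

def hidden_candidates(face):
    # the hidden side's type is fixed: letters pair with numbers
    return [2, 7] if isinstance(face, str) else ["A", "K"]

visible = ["A", "K", 4, 7]
must_turn = [v for v in visible
             if any(falsifies(v, h) for h in hidden_candidates(v))]
print(must_turn)   # → ['A', 7]
```

Only the vowel and the odd number need turning: the K and the 4 cannot falsify the conditional whatever is on their hidden sides, which is exactly the modus tollens component most subjects miss.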
ADHD is a psychiatric disorder characterised by persistent and developmentally inappropriate levels of inattention, impulsivity and hyperactivity. It is known that children with ADHD tend to produce incoherent discourses, e.g. by narrating events out of sequence. Here the aetiology of ADHD becomes of interest. One prominent theory is that ADHD is an executive function disorder, showing deficiencies of planning. Given the close link between planning, verb tense and discourse coherence postulated in van Lambalgen and Hamm (The proper treatment of events, 2004), we predicted specific deviations in the verb tenses produced by children with ADHD. Here we report on an experiment corroborating these predictions.
Gaisi Takeuti has recently proposed a new operation ⫫ : $\mathcal{P}(L) \rightarrow L$ on orthomodular lattices L. The properties of ⫫ suggest that the value of ⫫(A) corresponds to the degree to which the elements of A behave classically. To make this idea precise, we investigate the connection between structural properties of orthomodular lattices L and the existence of two-valued homomorphisms on L.