Looking at proof theory as an attempt to ‘code’ the general pattern of the logical steps of a mathematical proof, the question of what kind of rules can make the meaning of a logical connective completely explicit does not seem to have been answered satisfactorily. The lambda calculus seems to have been more coherent simply because the use of ‘λ’ together with its projection ‘apply’ is specified by what can be called a ‘reduction’ rule: β-conversion. We attempt to analyse the role of proof rules, making use of a set of formal rules designed to capture both the notions of proof theory and those of the lambda calculus: Martin-Löf's Intuitionistic Type Theory.
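To make the role of the ‘reduction’ rule concrete, here is a minimal sketch in Haskell (an illustrative encoding, not taken from the abstract or from Martin-Löf's theory) of β-conversion: an application whose operator is an abstraction rewrites to the body with the argument substituted for the bound variable.

```haskell
-- Minimal untyped lambda terms, assumed for illustration only.
data Term
  = Var String
  | Lam String Term
  | App Term Term
  deriving (Show, Eq)

-- Naive substitution: capture-avoidance is elided for brevity, so it is
-- adequate only when bound variable names are all distinct.
subst :: String -> Term -> Term -> Term
subst x s (Var y)   = if x == y then s else Var y
subst x s (Lam y b) = if x == y then Lam y b else Lam y (subst x s b)
subst x s (App f a) = App (subst x s f) (subst x s a)

-- One beta-step at the root: 'apply' meets 'lambda'.
betaStep :: Term -> Maybe Term
betaStep (App (Lam x b) a) = Just (subst x a b)
betaStep _                 = Nothing

-- Example: (\x. x) y  ~>  y
main :: IO ()
main = print (betaStep (App (Lam "x" (Var "x")) (Var "y")))
```

The point of the sketch is only that the pair ‘λ’/‘apply’ is given its use by this rewrite, which is the sense of ‘specified by a reduction rule’ in the abstract.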
The intention here is that of giving a formal underpinning to the idea of ‘meaning-is-use’ which, even if based on proofs, is rather different from proof-theoretic semantics as in the Dummett–Prawitz tradition. Instead, it is based on the idea that the meanings of logical constants are given by the explanation of immediate consequences, which in formalistic terms means the effect of elimination rules on the result of introduction rules, i.e. the so-called reduction rules. For that we suggest an extension to the Curry–Howard interpretation which draws on the idea of labelled deduction, and brings back Frege's device of variable-abstraction to operate on the labels (i.e., proof-terms) alongside formulas of predicate logic.
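As a rough illustration of ‘the effect of elimination rules on the result of introduction rules’, the following standard reduction rules (written in a generic labelled style, not necessarily the authors' own notation) show an elimination acting on a freshly introduced conjunction and implication:

```latex
\[
  \mathrm{FST}(\langle a, b \rangle) \;\triangleright_{\beta}\; a : A
  \qquad\qquad
  \mathrm{APP}(\lambda x.\,f(x),\, a) \;\triangleright_{\beta}\; f(a) : B
\]
```

Read the first as: projecting from a pair built from a : A and b : B returns the label a; and the second as: applying an abstraction built from f(x) : B (with x : A) to a : A returns f(a). On the view described above, it is these reductions, rather than the introduction rules alone, that fix the meaning of ∧ and →.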
We are concerned with showing how ‘labelled’ Natural Deduction presentation systems based on an extension of the so-called Curry–Howard functional interpretation can help us understand and generalise most of the deduction calculi designed to deal with the logical notion of existential quantification. We present the labelling mechanism for ‘∃’ using what we call ‘ε-terms’, which have the form of ‘εx.(f(x), a)’ in a dual form to the ‘Λx.f(x)’ terms of ‘∀’, in the sense that the ‘witness’ is chosen at the time of assertion/introduction. With the intention of demonstrating the generality of our framework, we provide a brief comparison with some other calculi of existentials, including the model-theoretic interpretations of Skolem and Herbrand, indicating the way in which some of the classical results on prenex normal forms can be extended to a wide range of logics. The essential ingredient is to look at the conceptual framework of labelled natural deduction as an attempted reconstruction of Frege's functional calculus, where the devices of the Begriffsschrift work separately from, yet in harmony with, those of the Grundgesetze.
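To make the ‘witness chosen at introduction’ point concrete, here is a rough Haskell sketch of the duality the abstract describes, using an ordinary existential/universal encoding rather than the paper's ε-term calculus; all names here are illustrative assumptions.

```haskell
{-# LANGUAGE ExistentialQuantification, RankNTypes #-}

-- 'exists x. p x': the witness is fixed when the proof is introduced.
data Exists p = forall x. Pack x (p x)

-- 'forall x. p x': the witness is supplied only when the proof is eliminated.
newtype Forall p = Abs (forall x. p x)

-- Elimination for the existential: consume the packaged witness and evidence.
unpack :: Exists p -> (forall x. x -> p x -> r) -> r
unpack (Pack x px) k = k x px

-- Elimination for the universal: instantiate at any witness the caller chooses.
instantiate :: Forall p -> p x
instantiate (Abs f) = f

-- Toy use: evidence 'p x' is a list of x's; pack a witness with its evidence.
example :: Exists []
example = Pack (3 :: Int) [3, 3]

main :: IO ()
main = print (unpack example (\_ evidence -> length evidence))  -- prints 2
```

The design point is simply that Pack commits to a witness when the existential is built, whereas Abs postpones that choice to the caller of instantiate; the ε-terms above play the role of Pack, dually to the Λ-abstractions of ‘∀’.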
Edited in collaboration with FoLLI, the Association of Logic, Language and Information, this book constitutes the 4th volume of the FoLLI LNAI subline, containing the refereed proceedings of the 15th International Workshop on Logic, Language, Information and Computation, WoLLIC 2008, held in Edinburgh, UK, in July 2008. The 21 revised full papers presented, together with the abstracts of 7 tutorials and invited lectures, were carefully reviewed and selected from numerous submissions. The papers cover all pertinent subjects in computer science with particular interest in cross-disciplinary topics. Typical areas of interest are: foundations of computing and programming; novel computation models and paradigms; broad notions of proof and belief; formal methods in software and hardware development; logical approach to natural language and reasoning; logics of programs, actions and resources; foundational aspects of information organization, search, flow, sharing, and protection.
Edited in collaboration with FoLLI, the Association of Logic, Language and Information, this book constitutes the refereed proceedings of the 27th International Workshop on Logic, Language, Information and Computation, WoLLIC 2021, held as a virtual event in October 2021. The 25 full papers presented, together with 6 invited lectures, were carefully reviewed and selected from 50 submissions. The idea is to have a forum which is large enough for the many possible interactions between logic and the sciences related to information and computation.
Dividing chains have been used as conditions to isolate adequate subclasses of simple theories. In the first part of this paper we present an introduction to the area. We give an overview of fundamental notions and present proofs of some of the basic and well-known facts related to dividing chains in simple theories. In the second part we discuss various characterizations of the subclass of low theories. Our main theorem generalizes and slightly extends a well-known fact about the connection between dividing chains and Morley sequences (in our case: independent sequences). Moreover, we are able to give a proof that is shorter than the original one. This result motivates us to introduce a special property of formulas concerning independent dividing chains: for any dividing chain there exists an independent dividing chain of the same length. We study this property in the context of low, short and ω-categorical simple theories, outline some examples, and define subclasses of low and short theories that imply this property. The results give rise to further studies of the relationships between some subclasses of simple theories.
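For readers outside the area, the following states the notion being chained, as it is commonly given in the simplicity-theory literature; it is background only, not necessarily the paper's exact formulation.

```latex
% phi(x,a) divides over A if, for some k < omega, there is an A-indiscernible
% sequence (a_i)_{i<omega} with a_0 = a such that {phi(x,a_i) : i < omega} is
% k-inconsistent.  A dividing chain of length alpha is then a sequence
% (phi_i, a_i)_{i<alpha} such that
\[
  \{\varphi_i(x,a_i) : i<\alpha\}\ \text{is consistent, and each}\
  \varphi_i(x,a_i)\ \text{divides over}\ \{a_j : j<i\}.
\]
```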
The lambda calculus is a universal programming language. It can represent the computable functions, and as such offers a formal counterpart to the point of view of functions as rules. Terms represent functions, and this allows for the application of a term/function to any other term/function, including itself. The calculus can be seen as a formal theory with certain pre-established axioms and inference rules, which can be interpreted by models. Dana Scott proposed the first non-trivial model of the extensional lambda calculus, known as $D_{\infty}$, to represent the $\lambda$-terms as the typical functions of set theory, where it is not allowed to apply a function to itself. Here we propose a construction of an $\infty$-groupoid from any lambda model endowed with a topology. We apply this construction to the particular case of $D_{\infty}$, and we see that the Scott topology does not provide enough information about the relationship between higher homotopies. This motivates a new line of research focused on the exploration of $\lambda$-models with the structure of a non-trivial $\infty$-groupoid, in order to generalize proofs of term conversion to higher proofs in the $\lambda$-calculus.
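As a small aside on the self-application mentioned above, the following Haskell sketch (an assumed encoding, not taken from the paper) shows how wrapping a function space in a recursive type recovers ‘x applied to x’, much as $D_{\infty}$ solves $D \cong [D \to D]$ up to isomorphism where plain set-theoretic functions cannot.

```haskell
-- A type U isomorphic to its own function space U -> U,
-- so that a term may be applied to itself.
newtype U = U { unU :: U -> U }

-- selfApply x = x x, written through the U wrapper.
selfApply :: U -> U
selfApply x = unU x x

-- The identity as an inhabitant of U; selfApply identityU reduces to identityU.
identityU :: U
identityU = U (\y -> y)

main :: IO ()
main = seq (selfApply identityU) (putStrLn "selfApply identityU is well-defined")
```

By contrast, selfApply (U selfApply) loops forever, which is the familiar Ω combinator in this encoding; it is exactly this kind of term that has no direct counterpart among set-theoretic functions.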