Bob Hale has defended a new conception of properties that is broadly Fregean in two key respects. First, like Frege, Hale insists that every property can be defined by an open formula. Second, like Frege, but unlike later definabilists, Hale seeks to justify full impredicative property comprehension. The most innovative part of his defense, we think, is a “definability constraint” that can serve as an implicit definition of the domain of properties. We make this constraint formally precise and prove that it fails to characterize the domain uniquely. Thus, we conclude, there is no easy road to impredicative definabilism.
This is a magazine article discussing the philosophy of mathematics and arguing for mathematical conventionalism, written for a non-academic audience. (As often happens with popular articles, the editors made some changes that I'm not completely happy with, e.g., the titled section headings and the subtitle.)
In a recent series of articles, Beall has developed the view that FDE is the formal system most deserving of the honorific “Logic”. The Simple Argument for this view is a cost-benefit analysis: the view that FDE is Logic has no drawbacks and it has some benefits when compared with any of its rivals. In this paper, I argue that both premises of the Simple Argument are mistaken. I use this as an opportunity to further reflect on how such arguments can be bolstered to provide more substantial and productive support for revisionary theses about Logic.
The book provides the first full-length exploration of fuzzy computability. It describes the notion of fuzziness and presents the foundations of computability theory. It then presents the various approaches to fuzzy computability. This text provides a glimpse into the different approaches in this area, which is important for researchers who need a clear view of the field. It contains a detailed literature review, and the author includes all proofs to make the presentation accessible. Ideas for future research and exploration are also provided. Students and researchers in computer science and mathematics will benefit from this work.
Mathematical pluralism can take one of three forms: (1) every consistent mathematical theory consists of truths about its own domain of individuals and relations; (2) every mathematical theory, consistent or inconsistent, consists of truths about its own (possibly uninteresting) domain of individuals and relations; and (3) the principal philosophies of mathematics are each based upon an insight or truth about the nature of mathematics that can be validated. (1) includes the multiverse approach to set theory. (2) helps us to understand the significance of the distinguished non-logical individual and relation terms of even inconsistent theories. (3) is a metaphilosophical form of mathematical pluralism and hasn't been discussed in the literature. In what follows, I show how the analysis of theoretical mathematics in object theory exhibits all three forms of mathematical pluralism.
It is well known that the set of algebraic numbers (let us call it A) is countable. In this paper, instead of the classical terminology of cardinals proposed by Cantor, a recently introduced methodology using ①-based infinite numbers is applied to measure the set A (where the number ① is called grossone). Our interest in this methodology is explained by the fact that in certain cases where cardinals allow one to say only whether a set is countable or has the cardinality of the continuum, the ①-based methodology can provide a more accurate measurement of infinite sets. In this article, lower and upper estimates of the number of elements of A are obtained. Both estimates are expressed in ①-based numbers.
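The countability of A that this abstract takes as its starting point rests on standard stage-by-stage bookkeeping (this is ordinary Cantor-style counting, not the ①-based methodology the paper develops): every algebraic number is a root of an integer polynomial, and for each bound on degree and coefficient size there are only finitely many such polynomials, each with at most degree-many roots. A minimal sketch of that counting argument, with a function name chosen here for illustration:

```python
def upper_bound_algebraics(max_deg, max_height):
    """Upper bound on the number of algebraic numbers arising as roots of
    integer polynomials with degree <= max_deg and coefficients in
    [-max_height, max_height] (leading coefficient nonzero).

    Each degree-n polynomial has at most n roots, so each finite stage
    contributes a finite count; A is the union of these stages, hence
    countable."""
    total = 0
    for n in range(1, max_deg + 1):
        # leading coefficient: nonzero, so 2*max_height choices;
        # each of the remaining n coefficients: 2*max_height + 1 choices
        polys = 2 * max_height * (2 * max_height + 1) ** n
        total += n * polys
    return total
```

Summing these finite stage counts is what makes A a countable union of finite sets; the paper's contribution is to replace this coarse cardinal verdict with sharper ①-based estimates.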
The Russellian argument against the possibility of absolutely unrestricted quantification can be answered by the partisan of that quantification in an apparently easy way, namely, by arguing that the objects used in the argument do not exist because they are defined in a viciously circular fashion. We show that, taking this contention as a premise and relying on an extremely intuitive Principle of Determinacy, it is possible to devise a reductio of the possibility of absolutely unrestricted quantification. Therefore, there are intuitive reasons to believe that the counter-argument fails to support the possibility of absolutely unrestricted quantification.
Cognition involves physical stimulation, neural coding, mental conception, and conscious perception. Beyond the neural coding of physical stimuli, it is not clear how exactly these component processes constitute cognition. Within the mathematical sciences, category theory provides tools such as category, functor, and adjointness, which are indispensable in the explication of the mathematical calculations involved in acquiring mathematical knowledge. More specifically, functorial semantics, in showing that theories and models can be construed as categories and functors, respectively, and in establishing the adjointness between abstraction (of theories) and interpretation (to obtain models), mathematically accounts for knowing-within-mathematics. Here we show that mathematical knowing recapitulates--in an elementary form--ordinary cognition. The process of going from particulars (physical stimuli) to their concrete models (conscious percepts) via abstract theories (mental concepts) and measured properties (neural coding) is common to both mathematical knowing and ordinary cognition. Our investigation of the similarity between knowing-within-mathematics and knowing-in-general leads us to make a case for the development of the basic science of cognition in terms of the functorial semantics of mathematical knowing.
We call for a change of attitude towards reviews of the scientific literature. We begin with an acknowledgement of reviews as pathways for the advancement of our scientific understanding of reality. The significance of the scientific struggle of putting together pieces of knowledge into parts of a cohesive body of understanding is recognized, and yet undervalued, especially in the empirical sciences. Here we propose a nudge: prefacing the insights gained in reviewing the literature with 'Our review reveals' (or an equivalent phrase), which can bring about the desired cultural shift in the practice of science. The resulting elevation of the status of reviews to that of original findings would also help to smooth the undesirable schism between theorists and experimentalists.
Fourth European Congress of Mathematics, Stockholm, Sweden, June 27 – July 2, 2004. Contributed papers. L. Carleson's celebrated theorem of 1965 asserts the pointwise convergence of the partial Fourier sums of square integrable functions. The Fourier transform has a formulation on each of the Euclidean groups R, Z and T. Carleson's original proof worked on T. Fefferman's proof translates very easily to R. Máté extended Carleson's proof to Z. Each of the statements of the theorem can be stated in terms of a maximal Fourier multiplier theorem. Inequalities for such operators can be transferred between these three Euclidean groups, as was done by P. Auscher and M. J. Carro. But L. Carleson's original proof, like the other known proofs, is very long and very complicated. We give a very short and very “simple” proof of this fact. Our proof uses only the PNSA technique developed in Part I, and does not use the complicated technical constructions that are unavoidable in the purely standard approach to these problems. In contrast to Carleson's method, which is based on profound properties of trigonometric series, the proposed approach is quite general and allows one to study a wide class of analogous problems for general orthogonal series.
The systems of arithmetic discussed in this work are non-elementary theories. In this paper, natural numbers are characterized axiomatically in two different ways. We begin by recalling the classical set P of axioms of Peano's arithmetic of natural numbers proposed in 1889 (including such primitive notions as: set of natural numbers, zero, successor of natural number) and compare it with the set W of axioms of this arithmetic (including primitive notions such as: set of natural numbers and the relation of inequality) proposed by Witold Wilkosz, a Polish logician, philosopher and mathematician, in 1932. The axioms W are those of ordered sets without a largest element, in which every non-empty set has a least element, and every set bounded from above has a greatest element. We show that P and W are equivalent and also that the systems of arithmetic based on W or on P are categorical and consistent. There follows a set of intuitive axioms PI of integer arithmetic, modelled on P and proposed by B. Iwanuś, as well as a set of axioms WI of this arithmetic, modelled on the W axioms, PI and WI being also equivalent, categorical and consistent. We also discuss the problem of the independence of these sets of axioms, which was dealt with earlier.
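The order-theoretic content of the axiom set W described above can be sketched in symbols. The notation below is chosen here for illustration and Wilkosz's original formulation may differ; the three conditions are exactly those the abstract lists for a set N linearly ordered by <:

```latex
\begin{align*}
&\text{(W1)}\quad \forall x \in N\ \exists y \in N\ (x < y)
  &&\text{no largest element}\\
&\text{(W2)}\quad \forall S \subseteq N\ \bigl(S \neq \emptyset \rightarrow
  \exists m \in S\ \forall x \in S\ (m \leq x)\bigr)
  &&\text{every non-empty set has a least element}\\
&\text{(W3)}\quad \forall S \subseteq N\ \bigl(S \neq \emptyset \wedge
  \exists b \in N\ \forall x \in S\ (x \leq b) \rightarrow
  \exists g \in S\ \forall x \in S\ (x \leq g)\bigr)
  &&\text{bounded sets have a greatest element}
\end{align*}
```

Quantification over subsets S in (W2) and (W3) is what makes the theory non-elementary, as the opening sentence of the abstract notes, and is also what makes categoricity possible.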
In this survey, a recent computational methodology paying special attention to the separation of mathematical objects from the numeral systems involved in their representation is described. It has been introduced with the intention of allowing one to work with infinities and infinitesimals numerically in a unique computational framework in all the situations requiring these notions. The methodology does not contradict Cantor's and non-standard analysis views and is based on Euclid's Common Notion no. 5, “The whole is greater than the part”, applied to all quantities (finite, infinite, and infinitesimal) and to all sets and processes (finite and infinite). The methodology uses a computational device called the Infinity Computer (patented in the USA and EU) working numerically (recall that traditional theories work with infinities and infinitesimals only symbolically) with infinite and infinitesimal numbers that can be written in a positional numeral system with an infinite radix. It is argued that the numeral systems involved in computations limit our capabilities to compute and lead to ambiguities in theoretical assertions as well. The introduced methodology makes it possible to use the same numeral system for measuring infinite sets and for working with divergent series, probability, fractals, optimization problems, numerical differentiation, ODEs, etc. (recall that traditionally different numerals, such as the lemniscate ∞ and Aleph zero, are used in different situations related to infinity). Numerous numerical examples and theoretical illustrations are given. The accuracy of the achieved results is continuously compared with that obtained by traditional tools used to work with infinities and infinitesimals. In particular, it is shown that the new approach allows one to observe the mathematical objects involved in the Continuum Hypothesis and the Riemann zeta function with a higher accuracy than is done by traditional tools.
It is stressed that the hardness of both problems is not related to their nature but is a consequence of the weakness of the traditional numeral systems used to study them. It is shown that the introduced methodology and numeral system change our perception of the mathematical objects studied in the two problems.
In this paper Russell's definition of number is criticized. Russell's assertion that a number is a particular kind of set implies that a number has the properties of a set. It is argued that this would imply that a number contains elements, and that this does not conform to our intuitive notion of number. An alternative definition is presented in which number is not seen as an object but rather as a process: it is related to the act of counting and is tightly bound up with the idea of time. Working from the idea that the description of a thing is not the thing itself, it is argued that a function should not be seen as a subset of the Cartesian product of two sets, although it can be described in this way. Number is then defined as a particular type of bijective function rather than a set. Definitions of equality and addition are developed. In defining addition an interesting error in Russell's definition of addition is corrected.
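The process view sketched in the abstract (counting as an act carried out step by step in time, building a bijection with an initial segment of the naturals) can be made concrete. The following is an illustration of the idea only, not the paper's formal definition; the function name is chosen here:

```python
def count_by_pairing(items):
    """Illustrative: counting as the *process* of constructing a bijection
    between the items and an initial segment {1, ..., n} of the naturals,
    one tick at a time, rather than as a completed set-theoretic object."""
    tally = 0
    pairing = {}
    for item in items:
        tally += 1          # each act of counting is a step in time
        pairing[item] = tally
    return tally, pairing
```

On this picture the "number" of a collection is the final state of the counting process, and the returned `pairing` records the bijection that the process built along the way.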
This book is a welcome contribution to the literature on Kant's philosophy of mathematics in two particular respects. First, the author systematically traces the development of Kant's thought on mathematics from the very early pre-Critical writings through to the Critical philosophy. Second, it puts forward a challenge to contemporary Anglo-Saxon commentators on Kant's philosophy of mathematics which merits consideration. A central theme of the book is that an adequate understanding of Kant's pronouncements on mathematics must begin with the recognition that mathematics in Kant's time was poised at the beginning of what Pierobon calls the ‘algebraic revolution’ of the nineteenth century. For Kant, Euclidean geometry, with its heavy reliance on the geometric image, was the paradigm of certainty. The algebraic revolution of the nineteenth century replaced that paradigm with an algebraic formalism, thereby freeing mathematics from any connection to the geometric image, and also severing the link to intuition. Pierobon describes this as the ‘divergence between the image and writing [l'écriture]’. So great was the shift, Pierobon suggests, that, after the developments of the nineteenth century, it became difficult to find any sense in Kant's conception of mathematics as sensible knowledge. This, certainly, was the view of Russell, who notoriously claimed in Mysticism and Logic that modern developments in logic dealt a ‘fatal blow to the Kantian philosophy’ and that ‘the whole doctrine of a priori intuitions, by which Kant explained the possibility of pure mathematics, is wholly inapplicable to mathematics in its present form’.1 Pierobon claims, though, that much of the Anglo-Saxon commentary on Kant's philosophy of mathematics begins from this ‘rationalist and logicist’ position, reading Kant's philosophy of mathematics from a post-algebraic-revolution perspective.
This book attempts to offer a corrective to that position by offering a Kantian conception of mathematics ….
The present paper is concerned with a ramified type theory (cf., e.g., Lorenzen 1955, Russell, Schütte, Weyl) in a cumulative version. §0 deals with reasoning in first order languages. is introduced as a first order set.
Published in 1903, this book was the first comprehensive treatise on the logical foundations of mathematics written in English. It sets forth, as far as possible without mathematical and logical symbolism, the grounds in favour of the view that mathematics and logic are identical. It proposes simply that what is commonly called mathematics consists merely of later deductions from logical premises. It provided the thesis for which _Principia Mathematica_ provided the detailed proof, and introduced the work of Frege to a wider audience. In addition to the new introduction by John Slater, this edition contains Russell's introduction to the 1937 edition, in which he defends his position against his formalist and intuitionist critics.
This largely expository lecture deals with aspects of traditional solid geometry suitable for applications in logic courses. Polygons are plane or two-dimensional; the simplest are triangles. Polyhedra [or polyhedrons] are solid or three-dimensional; the simplest are tetrahedra [or triangular pyramids, made of four triangles].

A regular polygon has equal sides and equal angles. A polyhedron having congruent faces and congruent [polyhedral] angles is not called regular, as some might expect; rather it is said to be subregular—a word coined for this lecture. To repeat, a subregular polyhedron has congruent faces and congruent [polyhedral] angles. A subregular polyhedron whose faces are all regular polygons is regular—using standard terminology.

Geometers before Euclid showed that there are “essentially” only five regular polyhedra: every regular polyhedron is a tetrahedron (4 faces), a hexahedron or cube (6 faces), an octahedron (8 faces), a dodecahedron (12 faces), or an icosahedron (20 faces).

The first question is whether there are subregular polyhedra that are not regular. For example, are there tetrahedra having congruent angles and congruent triangular faces but whose faces are not equilateral triangles?

Another question is the classification of subregular polyhedra, if they exist. For example, considering the fact that the regular tetrahedra all have equilateral triangles as faces, we ask which triangles other than equilaterals are faces of subregular tetrahedra. Similarly, considering the fact that the regular hexahedra all have squares as faces, we ask which quadrangles other than squares are faces of subregular hexahedra.

After introductory remarks that include historical and philosophical points, we concentrate on tetrahedra. A triangle that is congruent to each of the four faces of a tetrahedron is called a generator of the tetrahedron.
The main result proved is that every acute triangle is a generator of a subregular tetrahedron. The proof includes an algorithm (implementable with scissors and paper) that constructs from any given acute triangle a subregular tetrahedron whose faces are congruent to the given triangle.

Algorithm: Given any acute triangle, construct a similar triangle whose sides are double the sides of the given triangle. Draw the three lines connecting the three midpoints of the sides (making four triangles congruent to the given triangle: a central triangle surrounded by three peripheral triangles). Make three “hinges” along the lines connecting the midpoints. “Fold” the peripheral triangles together (into a tetrahedron). [LIGHTLY EDITED VERSION OF PRINTED ABSTRACT]

Acknowledgements: William Lawvere, Colin McLarty, Irvin Miller, Frango Nabrasa, Lawrence Spector, Roberto Torretti, and Richard Vesley.
This thesis is an examination of Frege's logicism, and of a number of objections which are widely viewed as refutations of the logicist thesis. In the view offered here, logicism is designed to provide answers to two questions: that of the nature of arithmetical truth, and that of the source of arithmetical knowledge. The first objection dealt with here is the view that logicism is not an epistemologically significant thesis, because the epistemological status of logic itself is not well understood. I argue to the contrary that on Frege's conception of logic, logicism is of clear epistemological importance. The second objection examined is the claim that Gödel's first incompleteness theorem falsifies logicism. I argue that the incompleteness theorem has no impact on logicism unless the logicist is compelled to hold that logic is recursively enumerable. I argue, further, that there is no reason to impose this requirement on logicism. The third objection concerns Russell's paradox. I argue that the paradox is devastating to Frege's conception of numbers, but not to his logicist project. I suggest that the appropriate course for a post-Fregean logicist to follow is one which divorces itself from Frege's platonism. The conclusion of this thesis is that logicism has of late been too easily dismissed. Though several critical aspects of Frege's logicism must be altered in light of recent results, the central Fregean thesis is still an important and promising view about the nature of arithmetic and arithmetical knowledge.
In this thesis, I discuss the philosophical foundations of Hilbert's Consistency Programme of the 1920s, in the light of the incompleteness theorems of Gödel. I begin by locating the Consistency Programme within Hilbert's broader foundational project. I show that Hilbert's main aim was to establish that classical mathematics, and in particular classical analysis, is a conservative extension of finitary mathematics. Accepting the standard identification of finitary mathematics with primitive recursive arithmetic, and of classical analysis with second-order arithmetic, I report upon some recent work which shows that Hilbert's aim can almost be realized. I then discuss the philosophical significance of this startling fact. I describe Hilbert as seeking a middle way between two mathematically revisionary positions in the philosophy of mathematics--a kind of proto-intuitionism and an extreme realism, associated with the views of Kronecker and Frege respectively. I outline a Hilbertian alternative to these positions. The result is a moderate realism that owes much to Quine. I defend it against certain objections, and display its virtues in a series of comparisons with alternatives currently influential in the literature. In Chapter Two, I discuss the special status the Hilbertian gives to finitary mathematics. I argue that two ways of justifying this special status--by claiming that finitary mathematics is ontologically special, since it is committed only to expressions, and by claiming that finitary mathematics is epistemologically special, since its results are especially evident--are in fact hopeless. I then defend an alternative justification, drawing in part on Gödel's well-known discussion of mathematical intuition. In Chapter Three, I discuss the implications of incompleteness for the Hilbertian philosophy of mathematics.
I argue, against some recent work by Michael Detlefsen, that the incompleteness theorems show definitively that Hilbert's Programme cannot be carried out in full generality. Drawing on recent work by Warren Goldfarb, I show that this conclusion follows from the First Incompleteness Theorem, and can be established without any controversial appeal to the semantic value of undecidable sentences. However, I argue that the fact of incompleteness adds to, rather than detracts from, the attractiveness of the basic Hilbertian position on the nature of mathematics.
The dissertation studies the mathematical strength of strict constructivism, a finitistic fragment of Bishop's constructivism, and explores its implications in the philosophy of mathematics. It consists of two chapters and four appendixes. Chapter 1 presents strict constructivism, shows that it is within the spirit of finitism, and explains how to represent sets, functions and elementary calculus in strict constructivism. Appendix A proves that the essentials of Bishop and Bridges' book Constructive Analysis can be developed within strict constructivism. Appendix B further develops, within strict constructivism, the essentials of the functional analysis applied in quantum mechanics, including the spectral theorem, Stone's theorem, and the self-adjointness of some common quantum mechanical operators. Some comparisons with other related work, in particular a comparison with S. Simpson's partial realization of Hilbert's program and a discussion of the relevance of M. B. Pour-El and J. I. Richards' negative results in recursive analysis, are given in Appendix C. Chapter 2 explores the possible philosophical implications of these technical results. It first suggests a fictionalistic account of the ontology of pure mathematics. This leaves a puzzle about how truths about fictional mathematical entities are applicable to science. The chapter then explains that for those applications of mathematics that can be reduced to applications of strict constructivism, fictional entities can be eliminated in the applications and the puzzle of applicability can be resolved. Therefore, if strict constructivism were essentially sufficient for all scientific applications, the applicability of mathematics in science would be accountable. The chapter then argues that the reduction of mathematics to strict constructivism also reduces the epistemological question about mathematics to that about elementary arithmetic.
The dissertation ends with a suggestion that a proper epistemological basis for arithmetic is perhaps a mixture of Mill's empiricism and Kantian views.
The thesis critically examines the question of the philosophical coherence of finitism, the view which seeks to interpret mathematics without postulating an actual infinity of mathematical objects. It is argued that a widely accepted characterization of finitism, most recently expounded by Tait, is inadequate, and a new characterization based on the notion of elementary abstraction is proposed. It is further argued that the notion of elementary abstraction better explains the bearing of Gödel's incompleteness theorems on the issue of the coherence of finitism. By abstraction is meant a procedure by which one recognizes or establishes that some infinite process exhibits a distinctive uniformity. If the procedure does not require one to conceive of the integers in any other way than as the result of simple iterations, we call such abstraction elementary. Six different formal models, exploring a variety of ways in which this notion can be made precise, are proposed as possible formal characterizations of finitistic truth. It is proved that all these models are equivalent with respect to finitistically meaningful sentences. This is taken as strong evidence that the notion of elementary abstraction is determinate enough to provide a basis for a coherent philosophical formulation of finitism.
The aim of the authors is to present a comprehensive study of the basis of intuitionistic mathematics by means of modern meta-mathematical devices. The first author, for whom this book is a capstone of twenty years' work on the subject, contributes three chapters on a formal system of intuitionistic analysis, notions of realizability, and order in the continuum; the second provides an analysis of the intuitionistic continuum. An extensive bibliography which includes references to almost every article on the subject makes this book especially valuable; for those interested in intuitionism and its relations to classical mathematics this work will be essential.—P. J. M.
Revision sequences are a kind of transfinite sequence introduced by Herzberger and Gupta in 1982 as the main mathematical tool for developing their respective revision theories of truth. We generalise revision sequences to the notion of cofinally invariant sequences, showing that several known facts about Herzberger's and Gupta's theories also hold for this more abstract kind of sequence, and providing new and more informative proofs of the old results.
The Argument: L. E. J. Brouwer and David Hilbert, two titans of twentieth-century mathematics, clashed dramatically in the 1920s. Though they were both Kantian constructivists, their notorious Grundlagenstreit centered on sharp differences about the foundations of mathematics: Brouwer was prepared to revise the content and methods of mathematics (his “Intuitionism” did just that, radically), while Hilbert's Program was designed to preserve and constructively secure all of classical mathematics. Hilbert's interests and polemics at the time led to at least three misconstruals of intuitionism, misconstruals which last to our own time: current literature often portrays popular views of intuitionism as the product of Brouwer's idiosyncratic subjectivism; modern logicians view intuitionism as simply applying a non-standard formal logic to mathematics; and contemporary philosophers see that logic as based upon a pure assertabilist theory of meaning. These pictures stem from the way Hilbert structured the controversy. Even though Brouwer's own work and behavior occasionally reinforce these pictures, they are nevertheless inaccurate accounts of his approach to mathematics. However, the framework provided by the Brouwer-Hilbert debate itself does not supply an adequate correction of these inaccuracies. For, even if we eliminate these mistakes within that framework, Brouwer's position would still appear fragmented and internally inconsistent. I propose a Kantian framework — not from Kant's philosophy of mathematics but from his general metaphysics — which does show the coherence and consistency of Brouwer's views. I also suggest that expanding the context of the controversy in this way will illuminate Hilbert's views as well and will even shed light upon Kant's philosophy.