We study the logical and computational properties of basic theorems of uncountable mathematics, including the Cousin and Lindelöf lemmas, published in 1895 and 1903 respectively. Historically, these lemmas were among the first formulations of open-cover compactness and the Lindelöf property, respectively. These notions are of great conceptual importance: the former is commonly viewed as a way of treating uncountable sets like e.g. $[0,1]$ as “almost finite”, while the latter allows one to treat uncountable sets like e.g. $\mathbb{R}$ as “almost countable”. This reduction of the uncountable to the finite/countable turns out to have a considerable logical and computational cost: we show that the aforementioned lemmas, and many related theorems, are extremely hard to prove, while the associated sub-covers are extremely hard to compute. Indeed, in terms of the standard scale (based on comprehension axioms), a proof of these lemmas requires at least the full extent of second-order arithmetic, a system originating from Hilbert–Bernays’ Grundlagen der Mathematik. This observation has far-reaching implications for the Grundlagen’s spiritual successor, the program of Reverse Mathematics, and the associated Gödel hierarchy. We also show that the Cousin lemma is essential for the development of the gauge integral, a generalization of the Lebesgue and improper Riemann integrals that also uniquely provides a direct formalization of Feynman’s path integral.
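For concreteness, the Cousin lemma admits the following standard formulation in terms of gauges; this is also the form needed for the gauge integral mentioned above.
\[
\textsf{Cousin's lemma:}\quad (\forall \delta:[0,1]\to\mathbb{R}^{+})(\exists x_{1},\dots,x_{k}\in[0,1])\Big([0,1]\subseteq \textstyle\bigcup_{i\le k} B\big(x_{i},\delta(x_{i})\big)\Big),
\]
i.e. the uncountable open cover $\bigcup_{x\in[0,1]}B(x,\delta(x))$ always has a finite sub-cover; the gauge integral is then defined via Riemann sums over tagged partitions that are $\delta$-fine in this sense.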
We investigate the connections between computability theory and Nonstandard Analysis. In particular, we investigate the following two topics and show that they are intimately related. (1) A basic property of Cantor space $2^{\mathbb{N}}$ is Heine–Borel compactness: for any open covering of $2^{\mathbb{N}}$, there is a finite sub-covering. A natural question is: how hard is it to compute such a finite sub-covering? We make this precise by analysing the complexity of so-called fan functionals that, given any $G:2^{\mathbb{N}}\to\mathbb{N}$, output a finite sequence $\langle f_0, \ldots, f_n \rangle$ in $2^{\mathbb{N}}$ such that the neighbourhoods defined from $\overline{f_i}\,G(f_i)$ for $i \le n$ form a covering of $2^{\mathbb{N}}$. (2) A basic property of Cantor space in Nonstandard Analysis is Abraham Robinson’s nonstandard compactness, i.e., that every binary sequence is “infinitely close” to a standard binary sequence. We analyse the strength of this nonstandard compactness property of Cantor space, compared to the other axioms of Nonstandard Analysis and usual mathematics. Our study of (1) yields exotic objects in computability theory, while (2) leads to surprising results in Reverse Mathematics. We stress that (1) and (2) are highly intertwined, i.e., our study is holistic in nature in that results in computability theory yield results in Nonstandard Analysis and vice versa.
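Spelled out, the defining property of such a fan functional $\Theta$ is the following, where $\overline{f}\,m$ denotes the initial segment $\langle f(0),\dots,f(m-1)\rangle$ and $[\sigma]$ the set of binary sequences extending $\sigma$; this merely restates the covering condition from the abstract in symbols.
\[
(\forall G:2^{\mathbb{N}}\to\mathbb{N})(\forall f\in 2^{\mathbb{N}})(\exists i\le n)\big(f\in[\overline{f_i}\,G(f_i)]\big),\quad\text{where } \Theta(G)=\langle f_0,\ldots,f_n\rangle.
\]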
In his remarkable paper Formalism 64, Robinson defends his eponymous position concerning the foundations of mathematics, as follows: (i) any mention of infinite totalities is literally meaningless, but (ii) we should act as if infinite totalities really existed. Being the originator of Nonstandard Analysis, it stands to reason that Robinson would have often been faced with the opposing position that ‘some infinite totalities are more meaningful than others’, the textbook example being that of infinitesimals. For instance, Bishop and Connes have made such claims regarding infinitesimals, and Nonstandard Analysis in general, going as far as calling the latter respectively a debasement of meaning and virtual, while accepting as meaningful other infinite totalities and the associated mathematical framework. We shall study the critique of Nonstandard Analysis by Bishop and Connes, and observe that these authors equate ‘meaning’ and ‘computational content’, though their interpretations of said content vary. As we will see, Bishop and Connes claim that the presence of ideal objects in Nonstandard Analysis yields the absence of meaning. We will debunk the Bishop–Connes critique by establishing the contrary, namely that the presence of ideal objects in Nonstandard Analysis yields the ubiquitous presence of computational content. In particular, infinitesimals provide an elegant shorthand for expressing computational content. To this end, we introduce a direct translation between a large class of theorems of Nonstandard Analysis and theorems rich in computational content, similar to the ‘reversals’ from the foundational program Reverse Mathematics. The latter also plays an important role in gauging the scope of this translation.
We study the logical and computational properties of basic theorems of uncountable mathematics, in particular Pincherle's theorem, published in 1882. This theorem states that a locally bounded function is bounded on certain domains, i.e. it is one of the first ‘local-to-global’ principles. It is well-known that such principles in analysis are intimately connected to (open-cover) compactness, but we nonetheless exhibit fundamental differences between compactness and Pincherle's theorem. For instance, the main question of Reverse Mathematics, namely which set existence axioms are necessary to prove Pincherle's theorem, does not have a unique or unambiguous answer, in contrast to compactness. We establish similar differences for the computational properties of compactness and Pincherle's theorem, and the same holds for other local-to-global principles, even going back to Weierstrass. We also greatly sharpen the known computational power of compactness, which is, however, for the most part shared with Pincherle's theorem. Finally, since countable choice plays an important role in the previous results, we study this axiom together with the intimately related Lindelöf lemma.
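As a sketch (this is the natural modern rendering for the unit interval, not a verbatim quotation of the 1882 statement), Pincherle's theorem says that local boundedness implies boundedness:
\[
(\forall x\in[0,1])(\exists N)(\forall y\in[0,1])\big(|x-y|<\tfrac{1}{N}\rightarrow |f(y)|\le N\big)\ \Longrightarrow\ (\exists M)(\forall x\in[0,1])\big(|f(x)|\le M\big).
\]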
Reverse mathematics is a program in the foundations of mathematics founded by Friedman and developed extensively by Simpson and others. The aim of RM is to find the minimal axioms needed to prove a theorem of ordinary, that is, non-set-theoretic, mathematics. As suggested by the title, this paper deals with two RM-phenomena, namely, splittings and disjunctions. As to splittings, there are some examples in RM of theorems A, B, C such that A ↔ (B ∧ C), that is, A can be split into two independent parts B and C. As to disjunctions, there are examples in RM of theorems D, E, F such that D ↔ (E ∨ F), that is, D can be written as the disjunction of two independent parts E and F. By contrast, we show in this paper that there is a plethora of splittings and disjunctions in Kohlenbach’s higher-order RM.
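A well-known second-order example of a splitting (my illustration, not drawn from the abstract) is Ramsey's theorem for pairs, which splits over the base theory $\mathsf{RCA}_0$ as
\[
\mathsf{RT}^2_2 \leftrightarrow (\mathsf{SRT}^2_2 \wedge \mathsf{COH}),
\]
where $\mathsf{SRT}^2_2$ is the restriction of $\mathsf{RT}^2_2$ to stable colourings and $\mathsf{COH}$ is the cohesiveness principle.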
Cantor’s first set theory paper (1874) establishes the uncountability of $\mathbb{R}$. We study this most basic mathematical fact formulated in the language of higher-order arithmetic. In particular, we investigate the logical and computational properties of $\mathsf{NIN}$ (resp. $\mathsf{NBI}$), i.e., the third-order statement ‘there is no injection (resp. bijection) from $[0,1]$ to $\mathbb{N}$’. Working in Kohlenbach’s higher-order Reverse Mathematics, we show that $\mathsf{NIN}$ and $\mathsf{NBI}$ are hard to prove in terms of (conventional) comprehension axioms, while many basic theorems, like Arzelà’s convergence theorem for the Riemann integral (1885), are shown to imply $\mathsf{NIN}$ and/or $\mathsf{NBI}$. Working in Kleene’s higher-order computability theory based on S1–S9, we show that the following fourth-order process based on $\mathsf{NIN}$ is similarly hard to compute: for a given $[0,1]\rightarrow\mathbb{N}$-function, find reals in the unit interval that map to the same natural number.
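In symbols, the weaker principle is a direct rendering of the statement above:
\[
\mathsf{NIN}:\quad (\forall Y:[0,1]\to\mathbb{N})(\exists x,y\in[0,1])\big(x\neq y\wedge Y(x)=Y(y)\big),
\]
while $\mathsf{NBI}$ only requires that each such $Y$ fail to be a bijection, i.e. that $Y$ be either not injective or not surjective.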
Elementary Recursive Nonstandard Analysis, in short ERNA, is a constructive system of nonstandard analysis proposed around 1995 by Patrick Suppes and Richard Sommer, who also proved its consistency inside PRA. It is based on an earlier system developed by Rolando Chuaqui and Patrick Suppes, of which Michal Rössler and Emil Jeřábek have recently proposed a weakened version. We add a Π₁-transfer principle to ERNA and prove the consistency of the extended theory inside PRA. In this extension of ERNA, a Σ₁-supremum principle ‘up to infinitesimals’ and some well-known calculus results for sequences are deduced. Finally, we prove that transfer is ‘too strong’ for finitism by reconsidering Rössler and Jeřábek's conclusions.
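A sketch of the transfer principle at stake (the exact formulation in ERNA involves its term language and parameter conditions, so this is only the rough shape): for quantifier-free formulas $\varphi$ with standard parameters,
\[
\Pi_1\text{-}\mathsf{TRANS}:\quad (\forall^{\mathrm{st}}n)\,\varphi(n)\ \rightarrow\ (\forall n)\,\varphi(n),
\]
i.e. universal statements verified for all standard numbers transfer to all numbers, including the infinite ones.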
Elementary Recursive Nonstandard Analysis, in short ERNA, is a constructive system of nonstandard analysis with a PRA consistency proof, proposed around 1995 by Patrick Suppes and Richard Sommer. Recently, the author showed the consistency of ERNA with several transfer principles and proved results of nonstandard analysis in the resulting theories (see [12] and [13]). Here, we show that Weak König's lemma (WKL) and many of its equivalent formulations over RCA₀ from Reverse Mathematics (see [21] and [22]) can be ‘pushed down’ into the weak theory ERNA, while preserving the equivalences, but at the price of replacing equality with equality ‘up to infinitesimals’. It turns out that ERNA plays the role of RCA₀ and that transfer for universal formulas corresponds to WKL.
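For reference, WKL is the statement, standard in the RM literature, that
\[
\text{every infinite binary tree } T\subseteq 2^{<\mathbb{N}} \text{ has an infinite path } f\in 2^{\mathbb{N}},
\]
and adding it to RCA₀ yields WKL₀, the second of the ‘Big Five’ systems of Reverse Mathematics.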
Elementary Recursive Nonstandard Analysis, in short ERNA, is a constructive system of nonstandard analysis with a PRA consistency proof, proposed around 1995 by Patrick Suppes and Richard Sommer. It is based on an earlier system developed by Rolando Chuaqui and Patrick Suppes. Here, we discuss the inherent problems and limitations of the classical nonstandard framework and propose a much-needed refinement of ERNA in the spirit of Karel Hrbacek’s stratified set theory. We study the metamathematics of this refinement and its extensions. In particular, we consider several transfer principles, both classical and ‘stratified’, which turn out to be related. Finally, we show that the resulting theory allows for a truly general, elegant and elementary treatment of basic analysis.
The program of Reverse Mathematics (Simpson 2009) has provided us with the insight that most theorems of ordinary mathematics are either equivalent to one of a select few logical principles, or provable in a weak base theory. In this paper, we study the properties of the Dirac delta function (Dirac 1927; Schwartz 1951) in two settings of Reverse Mathematics. In particular, we consider the Dirac Delta Theorem, which formalizes the well-known property ${\int_\mathbb{R}f(x)\delta(x)\,dx=f(0)}$ of the Dirac delta function. We show that the Dirac Delta Theorem is equivalent to weak weak König’s lemma (see Yu and Simpson in Arch Math Log 30(3):171–180, 1990) in classical Reverse Mathematics. This further validates the status of WWKL₀ as one of the ‘Big’ systems of Reverse Mathematics. In the context of ERNA’s Reverse Mathematics (Sanders in J Symb Log 76(2):637–664, 2011), we show that the Dirac Delta Theorem is equivalent to the Universal Transfer Principle. Since the Universal Transfer Principle corresponds to WKL, it seems that, in ERNA’s Reverse Mathematics, the principles corresponding to WKL and WWKL coincide. Hence, ERNA’s Reverse Mathematics is actually coarser than classical Reverse Mathematics, although the base theory has lower first-order strength.
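To illustrate the defining property in nonstandard terms (my illustration, not necessarily the formalization used in the paper): take an infinitesimal $\varepsilon>0$ and let $\delta_\varepsilon(x)=\frac{1}{2\varepsilon}$ for $|x|\le\varepsilon$ and $\delta_\varepsilon(x)=0$ otherwise. Then for any standard continuous $f$,
\[
\int_{\mathbb{R}}f(x)\,\delta_\varepsilon(x)\,dx=\frac{1}{2\varepsilon}\int_{-\varepsilon}^{\varepsilon}f(x)\,dx=f(\xi)\ \text{for some }|\xi|\le\varepsilon,\ \text{and hence}\ \approx f(0),
\]
by the mean value theorem for integrals and the continuity of $f$ at $0$.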
An important open problem in Reverse Mathematics is the reduction of the first-order strength of the base theory from IΣ₁ to IΔ₀ + exp. The system ERNA, a version of Nonstandard Analysis based on the system IΔ₀ + exp, provides a partial solution to this problem. Indeed, weak König's lemma and many of its equivalent formulations from Reverse Mathematics can be ‘pushed down’ into ERNA, while preserving the equivalences, but at the price of replacing equality with ‘≈’, i.e. infinitesimal proximity. The logical principle corresponding to weak König's lemma is the universal transfer principle from Nonstandard Analysis. Here, we consider the intermediate and mean value theorems and their uniform generalizations. We show that ERNA's Reverse Mathematics mirrors the situation in classical Reverse Mathematics. This further supports our claim from Sanders [19] that the Reverse Mathematics of ERNA plus universal transfer is a copy up to infinitesimals of that of WKL₀. We discuss some of the philosophical implications of our results.
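The ‘up to infinitesimals’ weakening can be conveyed by a sketch of the intermediate value theorem in this setting (the general shape only, not the paper's exact formulation): for suitable $f$ with $f(0)<0<f(1)$,
\[
(\exists x\in[0,1])\,\big(f(x)\approx 0\big),
\]
i.e. the exact zero guaranteed classically is replaced by a point where $f$ is infinitesimally close to zero.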
Reverse Mathematics (RM hereafter) is a program in the foundations of mathematics where the aim is to identify the minimal axioms needed to prove a given theorem from ordinary, i.e., non-set-theoretic, mathematics. This program has unveiled surprising regularities: the minimal axioms are very often equivalent to the theorem over the base theory, a weak system of ‘computable mathematics’, while most theorems are either provable in this base theory, or equivalent to one of only four logical systems. The latter plus the base theory are called the ‘Big Five’, and the associated equivalences are robust in the sense of Montalbán, i.e., stable under small variations of the theorems at hand. Working in Kohlenbach’s higher-order RM, we obtain two new and long series of equivalences based on theorems due to Bolzano, Weierstrass, Jordan, and Cantor; these equivalences are extremely robust and have no counterpart among the Big Five systems. Thus, higher-order RM is much richer than its second-order cousin, boasting at least two extra ‘Big’ systems.
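A representative theorem from one of these series (hedged: the higher-order formulation studied in the paper may differ in details) is Jordan's decomposition theorem:
\[
f:[0,1]\to\mathbb{R} \text{ has bounded variation}\ \Longrightarrow\ (\exists g,h:[0,1]\to\mathbb{R}\ \text{non-decreasing})\,(f=g-h).
\]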
Constructive Analysis and Nonstandard Analysis are often characterized as completely antipodal approaches to analysis. We discuss the possibility of capturing the central notion of Constructive Analysis (i.e. algorithm, finite procedure or explicit construction) by a simple concept inside Nonstandard Analysis. To this end, we introduce Ω-invariance and argue that it partially satisfies our goal. Our results provide a dual approach to Erik Palmgren's development of Nonstandard Analysis inside constructive mathematics.
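A hedged sketch of the notion (reconstructed from the surrounding literature; the official definition carries additional conditions): an object $\psi(n,\omega)$ defined from a parameter $n$ and an infinite hypernatural $\omega$ is Ω-invariant if its value does not depend on the choice of the infinite number, i.e.
\[
(\forall\ \text{infinite}\ \omega,\omega')\ \big(\psi(n,\omega)=\psi(n,\omega')\big),
\]
so that, intuitively, $\psi$ determines its output from a finite, $\omega$-independent amount of information.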
Reverse mathematics is a program in the foundations of mathematics. It provides an elegant classification in which the majority of theorems of ordinary mathematics fall into only five categories, based on the “big five” logical systems. Recently, a lot of effort has been directed toward finding exceptional theorems, that is, those which fall outside the big five. The so-called reverse mathematics zoo is a collection of such exceptional theorems. It was previously shown that a number of uniform versions of the zoo theorems, that is, where a functional computes the objects stated to exist, fall in the third big five category, arithmetical comprehension, inside Kohlenbach’s higher-order reverse mathematics. In this paper, we extend and refine these previous results. In particular, we establish analogous results for recent additions to the reverse mathematics zoo, thus establishing that the latter disappear at the uniform level. Furthermore, we show that the aforementioned equivalences can be proved using only intuitionistic logic. Perhaps most surprisingly, these explicit equivalences are extracted from nonstandard equivalences in Nelson’s internal set theory, and we show that the nonstandard equivalences can be recovered from the explicit ones. Finally, the following zoo theorems are studied in this paper: $\Pi^0_1\mathsf{G}$, FIP, 1-GEN, OPT, AMT, SADS, AST, NCS, and KPT.
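The passage from a zoo theorem to its uniform version follows a simple pattern (schematically; the actual definitions impose the appropriate type restrictions):
\[
(\forall x)(\exists y)\,\varphi(x,y)\quad\rightsquigarrow\quad(\exists\Phi)(\forall x)\,\varphi\big(x,\Phi(x)\big),
\]
i.e. a single functional $\Phi$ uniformly computes, for each instance $x$, the object $y$ stated to exist.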
In nonstandard mathematics, the predicate ‘x is standard’ is fundamental. Recently, ‘relative’ or ‘stratified’ nonstandard theories have been developed in which this predicate is replaced with ‘x is y-standard’. Thus, objects are not standard in an absolute sense, but standard relative to other objects, and there is a whole stratified universe of ‘levels’ or ‘degrees’ of standardness. Here, we study stratified nonstandard arithmetic and the related transfer principle. Using the latter, we obtain the ‘reduction theorem’, which states that arithmetical formulas can be reduced to equivalent bounded formulas. Surprisingly, the reduction theorem is also equivalent to the transfer principle. As applications, we obtain a truth definition for arithmetical sentences and we formalize Nelson's notion of impredicativity.
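The mechanism behind the reduction theorem can be sketched as follows (an illustration of the idea, assuming the relevant transfer principle; not the paper's exact statement): for a bounded formula $\varphi$ whose parameters are $y$-standard and any $\omega$ which is infinite relative to $y$,
\[
(\exists n)\,\varphi(n)\ \leftrightarrow\ (\exists n\le\omega)\,\varphi(n),
\]
and iterating this across the stratified levels replaces every unbounded quantifier in an arithmetical formula by a bounded one.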
Reverse mathematics is a program in the foundations of mathematics founded by Friedman and developed extensively by Simpson and others. The aim of RM is to find the minimal axioms needed to prove a theorem of ordinary, that is, non-set-theoretic, mathematics. As suggested by the title, this paper deals with the study of the topological notions of dimension and paracompactness, inside Kohlenbach’s higher-order RM. As to splittings, there are some examples in RM of theorems A, B, C such that A ↔ (B ∧ C), that is, A can be split into two independent parts B and C, and the aforementioned topological notions give rise to a number of splittings involving highly natural A, B, C. Nonetheless, the higher-order picture is markedly different from the second-order one: in terms of comprehension axioms, the proof in higher-order RM of, for example, the paracompactness of the unit interval requires full second-order arithmetic, while the second-order/countable version of paracompactness of the unit interval is provable in the base theory RCA₀. We obtain similarly “exceptional” results for the Urysohn identity, the Lindelöf lemma, and partitions of unity. We show that our results exhibit a certain robustness, in that they do not depend on the exact definition of cover, even in the absence of the axiom of choice.
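For reference, the Urysohn identity expresses that the three classical notions of dimension coincide for suitably nice (e.g. separable metric) spaces:
\[
\mathrm{ind}\,X=\mathrm{Ind}\,X=\dim X,
\]
where ind, Ind, and dim denote the small inductive, large inductive, and covering dimension respectively.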
Kohlenbach's proof mining program deals with the extraction of effective information from typically ineffective proofs. Proof mining has its roots in Kreisel's pioneering work on the so-called unwinding of proofs. The proof mining of classical mathematics is rather restricted in scope, due to the existence of sentences without computational content which are provable from the law of excluded middle and which involve only two quantifier alternations. By contrast, we show that the proof mining of classical Nonstandard Analysis has a very large scope. In particular, we will observe that this scope includes any theorem of pure Nonstandard Analysis, where ‘pure’ means that only nonstandard definitions are used. In this note, we survey results in analysis, computability theory, and Reverse Mathematics.
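A standard example of such a sentence (the usual one in this context; the paper may use a different witness) is, for $f:\mathbb{N}\to\mathbb{N}$,
\[
(\exists n)\big(f(n)=0\ \rightarrow\ (\forall m)(f(m)=0)\big),
\]
which is provable by the law of excluded middle, while no computable (indeed, no continuous) functional can produce the witness $n$ from $f$.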