Reverse mathematics studies which subsystems of second-order arithmetic are equivalent to key theorems of ordinary, non-set-theoretic mathematics. The main philosophical application of reverse mathematics proposed thus far is foundational analysis, which explores the limits of different foundations for mathematics in a formally precise manner. This paper gives a detailed account of the motivations and methodology of foundational analysis, which have heretofore been largely left implicit in the practice. It then shows how this account can be fruitfully applied in the evaluation of major foundational approaches by a careful examination of two case studies: a partial realization of Hilbert’s program due to Simpson, and predicativism in the extended form due to Feferman and Schütte.

Shore [2010, 2013] proposes that equivalences in reverse mathematics be proved in the same way as inequivalences, namely by considering only omega-models of the systems in question. Shore refers to this approach as computational reverse mathematics. This paper shows that despite some attractive features, computational reverse mathematics is inappropriate for foundational analysis, for two major reasons. Firstly, the computable entailment relation employed in computational reverse mathematics does not preserve justification for the foundational programs above. Secondly, computable entailment is a Pi-1-1-complete relation, and hence employing it commits one to theoretical resources which outstrip those available within any foundational approach that is proof-theoretically weaker than Pi-1-1-CA0.
What has happened to the problem of the cardinality of the continuum since Gödel (1938) and Cohen (1964)? Attempts to answer this question can be found in the articles by José Alfredo Amor (1946-2011), "El Problema del continuo después de Cohen (1964-2004)"; by Carlos Di Prisco, "Are we closer to a solution of the continuum problem"; and by Joan Bagaria, "Natural axioms of set theory and the continuum problem", all of which can be found in the digital library of my blog on Mathematical Logic and Foundations of Mathematics. Important and up-to-date information on the subject is also available in the entry "The Continuum Hypothesis" of the Stanford Encyclopedia of Philosophy website. This brief note discusses the topic in an expository manner.
Context: Consistency of mathematical constructions in numerical analysis, and the application of computerized proofs in the light of the occurrence of numerical chaos in simple systems. Purpose: To show that a computer in general, and a numerical analysis in particular, can add its own peculiarities to the subject under study. Hence the need for thorough theoretical studies of chaos in numerical simulation, and hence a questioning of what, e.g., a numerical disproof of a theorem in physics or a prediction in numerical economics could mean. Method: A simple algebraic model system is subjected to a deeper structure of underlying variables. With an algorithm simulating the steps in taking a limit of second-order difference quotients, the error terms are studied against the background of their algebraic expression. Results: With the algorithm applied to a simple quadratic polynomial system we found unstably amplified round-off errors. The possibility of numerical chaos is already known, but not in so simple a system as the one used in our paper. The amplification of the errors implies that it is not possible, with computer means, to constructively show that the algebra and the numerical analysis will ‘in the long run’ converge to each other and the error term will vanish. The algebraic vanishing of the error term cannot be demonstrated with the use of the computer because the round-off errors are amplified. In philosophical terms, the amplification of the round-off error is equivalent to the continuum hypothesis. This means that the requirement of (numerical) construction of mathematical objects is no safeguard against inference-only conclusions about qualities of (numerical) mathematical objects. Unstably amplified round-off errors are the same type of problem as the ordering in size of transfinite cardinal numbers. The difference is that the former problem is created within the requirements of constructive mathematics. This can be seen as the reward for working in a numerically constructive way.
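The amplification phenomenon described above can be illustrated with a minimal sketch. This is not the authors' actual algorithm; the test function f(x) = x² and the sample points are assumptions chosen for illustration. A second-order central difference quotient should converge to f''(x) = 2, but below a critical step size the round-off error is amplified instead of reduced:

```python
# Minimal illustration (not the paper's algorithm) of round-off amplification
# in a second-order central difference quotient for f(x) = x^2, f''(x) = 2.

def f(x):
    return x * x

def second_difference(x, h):
    # (f(x+h) - 2 f(x) + f(x-h)) / h^2 equals f''(x) in exact arithmetic
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

def max_error(h, points=(1.0, 1.3, 1.7, 2.1, 3.0)):
    # worst deviation from the exact second derivative over some sample points
    return max(abs(second_difference(x, h) - 2.0) for x in points)

for h in (1e-2, 1e-4, 1e-6, 1e-8):
    print(f"h = {h:.0e}   max |error| = {max_error(h):.3e}")
```

Shrinking h below roughly the square root of machine epsilon makes the cancellation in the numerator dominate, so the computed quotient moves away from 2 rather than toward it, in line with the paper's point that the vanishing of the error term cannot be demonstrated on the machine.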
In this paper, we use the control method of the maximal fractional integral and obtain the boundedness of the higher-order commutator generated by the maximal Bochner-Riesz operator on Morrey spaces. Moreover, we obtain its continuity from Morrey spaces to Lipschitz spaces and from Morrey spaces to BMO spaces.
One shortcoming of the chain rule is that it does not iterate: it gives the derivative of f(g(x)), but not (directly) the second or higher-order derivatives. We present iterated differentials and a version of the multivariable chain rule which iterates to any desired level of derivative. We first present this material informally, and later discuss how to make it rigorous (a discussion which touches on the formal foundations of calculus). We also suggest a finite calculus chain rule (contrary to Graham, Knuth, and Patashnik's claim that "there's no corresponding chain rule of finite calculus").
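The second-order case of such an iterated chain rule, (f∘g)''(x) = f''(g(x))·g'(x)² + f'(g(x))·g''(x), can be checked numerically. The sketch below is illustrative only; the choices f = sin and g(x) = x², and the sample point, are assumptions not taken from the paper:

```python
import math

# Verify the second-order chain rule (f o g)''(x) = f''(g(x)) g'(x)^2 + f'(g(x)) g''(x)
# for the illustrative choice f = sin, g(x) = x^2.

def composite(x):
    return math.sin(x * x)

def second_derivative_chain_rule(x):
    g, g1, g2 = x * x, 2.0 * x, 2.0          # g, g', g''
    f1, f2 = math.cos(g), -math.sin(g)       # f', f'' evaluated at g(x)
    return f2 * g1 * g1 + f1 * g2

def second_difference(x, h=1e-4):
    # independent check via a central second difference quotient
    return (composite(x + h) - 2.0 * composite(x) + composite(x - h)) / (h * h)

x = 0.7
print(second_derivative_chain_rule(x), second_difference(x))
```

The two values agree to several digits, which is what the closed-form second-order rule predicts; higher orders follow the same pattern with more terms (Faà di Bruno's formula).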
In Bertrand Russell's 1903 Principles of Mathematics, he offers an apparently devastating criticism of the neo-Kantian Hermann Cohen's Principle of the Infinitesimal Method and its History (PIM). Russell's criticism is motivated by his concern that Cohen's account of the foundations of calculus saddles mathematics with the paradoxes of the infinitesimal and continuum, and thus threatens the very idea of mathematical truth. This paper defends Cohen against that objection of Russell's, and argues that properly understood, Cohen's views of limits and infinitesimals do not entail the paradoxes of the infinitesimal and continuum. Essential to that defense is an interpretation, developed in the paper, of Cohen's positions in the PIM as deeply rationalist. The interest in developing this interpretation is not just that it reveals how Cohen's views in the PIM avoid the paradoxes of the infinitesimal and continuum. It also reveals some of what is at stake, both historically and philosophically, in Russell's criticism of Cohen.
The concept of ‘ideas’ plays a central role in philosophy. This research analyzes the genesis of the idea of continuity and its essential role in intellectual history. The main question of this research is how the idea of continuity came into the human cognitive system. In this context, we analyzed the epistemological function of this idea. In intellectual history, the idea of continuity was first introduced by Leibniz. After him, this idea, as a paradigm, formed the base of several fundamental scientific conceptions. This idea also allowed mathematicians to justify the nature of the real numbers, which was one of the central questions and intellectual discussions in the history of mathematics. For this reason, we analyzed how Dedekind’s idea of continuity was used for this justification. As a result, it can be said that several fundamental conceptions in intellectual history, philosophy, and mathematics could not have arisen without the existence of the idea of continuity. However, this idea is neither a purely philosophical nor a purely mathematical idea; it is an interdisciplinary concept. For this reason, we classify it as a mathematical and philosophical invariant.
Tanabe Hajime (1885-1962) in his later years explored the so-called "dialectical" interpretation of complex analysis, an important part of his philosophy of mathematics that has previously been criticized as lacking mathematical accuracy and philosophical importance. I interpret his elaboration on complex analysis as an attempt to develop Leibniz's theory of the individual notion and to supplement Hegel's view of higher analysis with developments in mathematics such as the theory of analytic continuation and Riemann surfaces. This interpretation shows the previously underrated philosophico-mathematical significance of Tanabe's argument.
Foundations of Science recently published a rebuttal to a portion of our essay it published two years ago. The author, G. Schubring, argues that our 2013 text treated unfairly his 2005 book, Conflicts between generalization, rigor, and intuition. He further argues that our attempt to show that Cauchy is part of a long infinitesimalist tradition confuses text with context and thereby misunderstands the significance of Cauchy’s use of infinitesimals. Here we defend our original analysis of various misconceptions and misinterpretations concerning the history of infinitesimals and, in particular, the role of infinitesimals in Cauchy’s mathematics. We show that Schubring misinterprets Proclus, Leibniz, and Klein on non-Archimedean issues, ignores the Jesuit context of Moigno’s flawed critique of infinitesimals, and misrepresents, to the point of caricature, the pioneering Cauchy scholarship of D. Laugwitz.
The paper is an introduction to geometric algebra and geometric calculus for those with a knowledge of undergraduate mathematics. No knowledge of physics is required. The section Further Study lists many papers available on the web.
This article examines Stefan Banach’s contributions to the field of functional analysis based on the concept of structure and the multiply-flavored expression of generality that arises in his work on linear operations. More specifically, it discusses the two stages in the process by which Banach elaborated a new framework for functional analysis where structures were bound to play an essential role. It considers whether Banach spaces, or complete normed vector spaces, were born in Banach’s first paper, the 1922 doctoral dissertation On operations on abstract spaces and their application to integral equations. It also analyzes what appears to be the core of Banach’s 1922 article and the transformation into a general setting that it represents. The main achievements of Banach’s dissertation, as well as all the essential features that bear witness to the birth of a new theory, are concentrated in the study of linear operations.
The derivative is a basic concept of differential calculus. However, if we calculate the derivative as change in distance over change in time, the result at any instant is 0/0, which seems meaningless. Hence, Newton and Leibniz used the limit to determine the derivative. Their method is valid in practice, but it is not easy to accept intuitively. Thus, this article describes a novel method of differential calculus based on the double contradiction, which is easier to accept intuitively. Next, the geometrical meaning of the double contradiction is considered as follows. A tangent at a point on a convex curve is approximated. Then, the slope of the tangent at the point is sandwiched by two kinds of lines. The first kind of line crosses the curve at the original point and at a point to the right of it. The second kind of line crosses the curve at the original point and at a point to the left of it. Then, the double contradiction can be applied, and the slope of the tangent is determined as a single value. Finally, the meaning of this method for the foundation of mathematics is considered. We reflect on Dehaene’s notion that the foundation of mathematics is based on intuitions, which evolve independently. Hence, there may be gaps between intuitions. In fact, the Ancient Greeks identified an inconsistency between arithmetic and geometry. However, Eudoxus developed the theory of proportion, which is equivalent to the Dedekind cut. This allows an irrational number to be approximated by rational numbers as precisely as desired. Simultaneously, we can define the irrational number by the double contradiction, although its existence is not guaranteed. Further, the area of a curved figure is approximated and defined by rectilinear figures using the double contradiction.
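The sandwiching construction described above can be sketched numerically. The convex curve f(x) = x² and the point x₀ = 1 below are illustrative assumptions, not taken from the article; the tangent slope there is 2:

```python
# For the convex curve f(x) = x^2, the slope of a secant through (x0, f(x0))
# and a point to the left lies below the tangent slope f'(x0) = 2 x0, while
# a secant through a point to the right lies above it; both converge to it.

def f(x):
    return x * x

def secant_slope(x0, x1):
    return (f(x1) - f(x0)) / (x1 - x0)

x0, tangent = 1.0, 2.0
for h in (0.5, 0.1, 0.01, 0.001):
    left = secant_slope(x0, x0 - h)   # crosses at x0 and a point to the left
    right = secant_slope(x0, x0 + h)  # crosses at x0 and a point to the right
    assert left < tangent < right     # the tangent slope is sandwiched
    print(h, left, right)
```

For this curve the left and right secant slopes are 2 − h and 2 + h, so the single value trapped between the two families of lines is exactly the tangent slope, which is the determination the double contradiction is meant to capture.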
Filtration combustion is described by Laplacian growth without surface tension. These equations have elegant analytical solutions that replace the complex integro-differential motion equations by simple differential equations of pole motion in a complex plane. The main problem with such a solution is the existence of finite time singularities. To prevent such singularities, nonzero surface tension is usually used. However, nonzero surface tension does not exist in filtration combustion, and this destroys the analytical solutions. However, a more elegant approach exists for solving the problem. First, we can introduce a small amount of pole noise to the system. Second, for regularisation of the problem, we throw out all new poles that can produce a finite time singularity. It can be strictly proved that the asymptotic solution for such a system is a single finger. Moreover, the qualitative consideration demonstrates that a finger with 1/2 of the channel width is statistically stable. Therefore, all properties of such a solution are exactly the same as those of the solution with nonzero surface tension under numerical noise. The solution of the ST problem without surface tension is similar to the solution for the equation of cellular flames in the case of the combustion of gas mixtures.
This comment analyses the last section of a paper by Piotr Blaszczyk, Mikhail G. Katz, and David Sherry on alleged misconceptions committed by historians of mathematics regarding the history of analysis, published in this journal in the first issue of 2013. Since this section abounds in wrong attributions and denouncing statements regarding my research and a key publication, the comment serves to rectify them and to recall some minimal methodological requirements for historical research.
The main aim of Samuel Hartlib was to provide an advancement of learning directed to the amelioration of the material conditions of men and the pursuit of a religious peace, i.e., the unification of the Protestants. To this aim, inspired by Comenius, he devoted his efforts to gathering knowledge through the creation of a society or office of learned men (in technical fields, philosophy, and theology), and through the establishment of a network of correspondents (the Hartlib Circle). The method of discovery underlying his program of advancement of learning was inspired by Bacon’s Novum Organum and by Jacopo Aconcio’s method of analysis, while the categorization and transmission of knowledge had to be based on commonplace books and artificial languages. His plan of economic improvement, to be fulfilled mainly through the amelioration of husbandry, was motivated by the Puritan Millenarianism to which he adhered.
This work builds on the Volterra series formalism presented in Dreisigmeyer and Young to model nonconservative systems. Here we treat Lagrangians and actions as ‘time dependent’ Volterra series. We present a new family of kernels to be used in these Volterra series that allow us to derive a single retarded equation of motion using a variational principle.
The widespread idea that infinitesimals were “eliminated” by the “great triumvirate” of Cantor, Dedekind, and Weierstrass is refuted by an uninterrupted chain of work on infinitesimal-enriched number systems. The elimination claim is an oversimplification created by triumvirate followers, who tend to view the history of analysis as a pre-ordained march toward the radiant future of Weierstrassian epsilontics. In the present text, we document distortions of the history of analysis stemming from the triumvirate ideology of ontological minimalism, which identified the continuum with a single number system. Such anachronistic distortions characterize the received interpretation of Stevin, Leibniz, d’Alembert, Cauchy, and others.
In this paper, I present a puzzle involving special relativity and the random selection of real numbers. In a manner to be specified, darts thrown later hit reals further into a fixed well-ordering than darts thrown earlier. Special relativity is then invoked to create a puzzle. I consider four ways of responding to this puzzle which, I suggest, fail. I then propose a resolution to the puzzle, which relies on the distinction between the potential infinite and the actual infinite. I suggest that certain structures, such as a well-ordering of the reals, or the natural numbers, are examples of the potential infinite, whereas infinite integers in a nonstandard model of arithmetic are examples of the actual infinite.
We develop a point-free construction of the classical one-dimensional continuum, with an interval structure based on mereology and either a weak set theory or a logic of plural quantification. In some respects this realizes ideas going back to Aristotle, although, unlike Aristotle, we make free use of classical "actual infinity". Also, in contrast to intuitionistic, Bishop, and smooth infinitesimal analysis, we follow classical analysis in allowing partitioning of our "gunky line" into mutually exclusive and jointly exhaustive parts, thereby demonstrating the independence of "indecomposability" from a non-punctiform conception. It is surprising that such simple axioms as ours already imply the Archimedean property and that they determine an isomorphism with the Dedekind-Cantor structure of R as a complete, separable, ordered field. We also present some simple topological models of our system, establishing consistency relative to classical analysis. Finally, after describing how to nominalize our theory, we close with comparisons with earlier efforts related to our own.
In this paper we consider the major development of mathematical analysis during the mid-nineteenth century. On the basis of Jahnke’s (Hist Math 20(3):265–284, 1993) distinction between considering mathematics as an empirical science based on time and space and considering mathematics as a purely conceptual science, we discuss the Swedish nineteenth-century mathematician E.G. Björling’s general view of real- and complex-valued functions. We argue that Björling had a tendency to sometimes consider mathematical objects in a naturalistic way. One example is how Björling interprets Cauchy’s definition of the logarithm function with respect to complex variables, which is investigated in the paper. Furthermore, in view of an article written by Björling (Kongl Vetens Akad Förh Stockholm 166–228, 1852) we consider Cauchy’s theorem on power series expansions of complex-valued functions. We investigate Björling’s, Cauchy’s and the Belgian mathematician Lamarle’s different conditions for expanding a complex function of a complex variable in a power series. We argue that one reason why Cauchy’s theorem was controversial could be the ambiguities of fundamental concepts in analysis that existed during the mid-nineteenth century. This problem is demonstrated with examples from Björling, Cauchy and Lamarle.
Model theorists have been studying analytic functions since the late 1970s. Highlights include the seminal work of Denef and van den Dries on the theory of the p-adics with restricted analytic functions, Wilkie's proof of o-minimality of the theory of the reals with the exponential function, and the formulation of Zilber's conjecture for the complex exponential. My goal in this talk is to survey these main developments and to reflect on today's open problems, in particular for theories of valued fields.
The article deals with Cantor's argument for the non-denumerability of the reals, somewhat in the spirit of Lakatos' logic of mathematical discovery. At the outset Cantor's proof is compared with some other famous proofs, such as Dedekind's recursion theorem, showing that rather than ordinary proofs they are resolutions to do things differently. On this basis I argue that there are "ontologically" safer ways of developing the diagonal argument into a full-fledged theory of the continuum, concluding eventually that the famous semantic paradoxes based on diagonal constructions are caused by a superficial understanding of what a name is.
Recent mathematical results, obtained by the author, in collaboration with Alexander Stokolos, Olof Svensson, and Tomasz Weiss, in the study of harmonic functions, have prompted the following reflections, intertwined with views on some turning points in the history of mathematics and accompanied by an interpretive key that could perhaps shed some light on other aspects of (the development of) mathematics.
Mate Meršić (Merchich, 1850-1928) sees the origin of Zeno’s paradox ‘Achilles’ in the ambiguities of the concept of the infinity. According to him (and to the tradition started by Gregory St. Vincent), those ambiguities are resolved by the concept of convergent geometric series. In this connection, Meršić proposes a general ontological theory with the priority of the finite over the infinite, and, proceeding from Newton’s concept of fluxion, he develops a modal interpretation of differential calculus.
The goal of this paper is to develop a new (more physical and numerical, in comparison with standard and non-standard analysis approaches) point of view on calculus with functions assuming infinite and infinitesimal values. It uses recently introduced infinite and infinitesimal numbers that are in accordance with the principle ‘the part is less than the whole’ observed in the physical world around us. These numbers have a strong practical advantage with respect to traditional approaches: they are representable on a new kind of computer, the Infinity Computer, able to work numerically with all of them. An introduction to the theory of physical and mathematical continuity and differentiation (including subdifferentials) for functions assuming finite, infinite, and infinitesimal values over finite, infinite, and infinitesimal domains is developed in the paper. This theory allows one to work with derivatives that can assume not only finite but also infinite and infinitesimal values. It is emphasized that the newly introduced notion of physical continuity allows one to see the same mathematical object as continuous or discrete, depending on the wish of the researcher, just as happens in the physical world, where the same object can be viewed as continuous or discrete depending on the instrument of observation used by the researcher. Connections between pure mathematical concepts and their computational realizations are emphasized throughout the text. Numerous examples are given.
In the course of ten short sections, we comment on Gödel's seminal Dialectica paper of fifty years ago and its aftermath. We start by suggesting that Gödel's use of functionals of finite type is yet another instance of the realistic attitude of Gödel towards mathematics, in tune with his defense of the postulation of ever increasing higher types in foundational studies. We also make some observations concerning Gödel's recasting of intuitionistic arithmetic via the Dialectica interpretation, discuss the extra principles that the interpretation validates, and comment on extensionality and higher-order equality. The latter sections focus on the role of majorizability considerations within the Dialectica and related interpretations for extracting computational information from ordinary proofs in mathematics.
Some concepts that are now part and parcel of mathematics used to be, at least until the beginning of the twentieth century, a central preoccupation of mathematicians and philosophers. The concept of continuity, or the continuous, is one of them. Nowadays, many philosophers of mathematics take it for granted that mathematicians of the last quarter of the nineteenth century found an adequate conceptual analysis of the continuous in terms of limits and that serious philosophical thinking is no longer required, except perhaps when the question of the continuum is transferred to the arena of set theory where it takes the form of the infamous continuum hypothesis. As Philip Ehrlich has recently shown, this conviction goes back to the early writings of Russell who, in 1903 and then again in later writings, forcefully and eloquently pushed the view that mathematicians had given the final answer to immemorial conundrums arising from the continuous and infinitesimals. This proclamation of victory came with what was announced as the necessary defeat of the notion of the infinitesimal, despite the fact that mathematicians like Thomae, Du Bois-Reymond, Stolz, Bettazi, Veronese, Levi-Civita, and Hahn were investigating mathematical structures containing infinitesimals in a mathematically rigorous and logically consistent manner. In this respect Russell was merely walking in the footsteps of Cantor, and many of Russell's contemporaries were only too keen to keep infinitesimals out of Cantor's paradise. However, although Cantor certainly wanted to send the notion of infinitesimal to Hell, the notion kept a low profile in the mathematical purgatory, making its way in the study of non-Archimedean ordered algebraic systems. Of course, nowadays everyone has heard of Robinson's attempt at resurrecting infinitesimals in analysis in the form of non-standard analysis, but Robinson's work, despite the fact that it had, in ….
The metaphysical concept of continuity is important, not least because physical continua are not known to be impossible. While it is standard to model them with a mathematical continuum based upon set-theoretical intuitions, this essay considers, as a contribution to the debate about the adequacy of those intuitions, the neglected intuition that dividing the length of a line by the length of an individual point should yield the line’s cardinality. The algebraic properties of that cardinal number are derived pre-theoretically from the obvious properties of a line of points, whence it becomes clear that such a number would cohere surprisingly well with our elementary number systems.
This book constitutes the refereed proceedings of the Third International Symposium on Stochastic Algorithms: Foundations and Applications, SAGA 2005, held in Moscow, Russia in October 2005. The 14 revised full papers presented together with 5 invited papers were carefully reviewed and selected for inclusion in the book. The contributed papers included in this volume cover both theoretical as well as applied aspects of stochastic computations, with a special focus on new algorithmic ideas involving stochastic decisions and the design and evaluation of stochastic algorithms within realistic scenarios.
Abstract. This paper is the second in a series of three culminating in an ordinal analysis of Π^1_2-comprehension. Its objective is to present an ordinal analysis for the subsystem of second-order arithmetic with Δ^1_2-comprehension, bar induction, and Π^1_2-comprehension for formulae without set parameters. Couched in terms of Kripke-Platek set theory, KP, the latter system corresponds to KPi augmented by the assertion that there exists a stable ordinal, where KPi is KP with an additional axiom stating that every set is contained in an admissible set.
We discuss the philosophical status of the statement that (9^n – 1) is divisible by 8 for various sizes of the number n. We argue that even this simple problem reveals deep tensions between truth and verification. Using Gillies's empiricist classification of theories into levels, we propose that statements in arithmetic should be classified into three different levels depending on the sizes of the numbers involved. We conclude by discussing the relationship between the real number system and the physical continuum.
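The arithmetical side of the statement (not, of course, its philosophical analysis) can be checked mechanically: since 9 ≡ 1 (mod 8), 9^n − 1 ≡ 0 (mod 8) for every n ≥ 0, and modular exponentiation makes the check feasible even for values of n for which 9^n could never be written out. A minimal sketch:

```python
# Check that 9^n - 1 is divisible by 8: directly for small n (exact big
# integers), and via modular exponentiation (9 ≡ 1 mod 8, hence 9^n ≡ 1
# mod 8) for n so large that 9^n itself is infeasible to compute.

for n in range(0, 200):
    assert (9 ** n - 1) % 8 == 0      # direct verification

for n in (10 ** 6, 10 ** 12, 10 ** 100):
    assert pow(9, n, 8) == 1          # feasible even when 9^n is not
```

The contrast between the two loops is precisely the paper's theme: the first is verification by computation on the numbers themselves, while the second relies on a general congruence argument that no direct computation could replace for sufficiently large n.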
There is no uniquely standard concept of an effectively decidable set of real numbers or real n-tuples. Here we consider three notions: decidability up to measure zero [M.W. Parker, Undecidability in Rn: Riddled basins, the KAM tori, and the stability of the solar system, Phil. Sci. 70(2) (2003) 359–382], which we abbreviate d.m.z.; recursive approximability [or r.a.; K.-I. Ko, Complexity Theory of Real Functions, Birkhäuser, Boston, 1991]; and decidability ignoring boundaries [d.i.b.; W.C. Myrvold, The decision problem for entanglement, in: R.S. Cohen et al. (Eds.), Potentiality, Entanglement, and Passion-at-a-Distance: Quantum Mechanical Studies for Abner Shimony, Vol. 2, Kluwer Academic Publishers, Great Britain, 1997, pp. 177–190]. Unlike some others in the literature, these notions apply not only to certain nice sets, but to general sets in Rn and other appropriate spaces. We consider some motivations for these concepts and the logical relations between them. It has been argued that d.m.z. is especially appropriate for physical applications, and on Rn with the standard measure, it is strictly stronger than r.a. [M.W. Parker, Undecidability in Rn: Riddled basins, the KAM tori, and the stability of the solar system, Phil. Sci. 70(2) (2003) 359–382]. Here we show that this is the only implication that holds among our three decidabilities in that setting. Under arbitrary measures, even this implication fails. Yet for intervals of non-zero length, and more generally, convex sets of non-zero measure, the three concepts are equivalent.
Defining the real numbers by abstraction as ratios of quantities gives prominence to their applications in just the way that Frege thought we should. But if all the reals are to be obtained in this way, it is necessary to presuppose a rich domain of quantities of a kind we cannot reasonably assume to be exemplified by any physical or other empirically measurable quantities. In consequence, an explanation of the applications of the reals, defined in this way, must proceed indirectly. This paper explains the main complications involved and answers the main objections advanced in Batitsky's paper in this issue.
This is a collection of the abstracts of lectures given at the International Conference on Differential Equations, Approximations and Applications, which will be held at the old campus of the Vietnam National University at Hanoi December 10-15, 2001.
Hermann Weyl, one of the twentieth century's greatest mathematicians, was unusual in possessing acute literary and philosophical sensibilities—sensibilities to which he gave full expression in his writings. In this paper I use quotations from these writings to provide a sketch of Weyl's philosophical orientation, following which I attempt to elucidate his views on the mathematical continuum, bringing out the central role he assigned to intuition.
On the neo-Fregean approach to the foundations of mathematics, elementary arithmetic is analytic in the sense that the addition of a principle which may be held to be explanatory of the concept of cardinal number to a suitable second-order logical basis suffices for the derivation of its basic laws. This principle, now commonly called Hume's principle, is an example of a Fregean abstraction principle. In this paper, I assume the correctness of the neo-Fregean position on elementary arithmetic and seek to explain one way in which it may be extended to encompass the theory of real numbers, introducing the reals, by means of suitable further abstraction principles, as ratios of quantities.