A major objection to epistemic infinitism is that it seems to make justification impossible. For if there is an infinite chain of reasons, each receiving its justification from its neighbour, then there is no justification to inherit in the first place. Some have argued that the objection arises from misunderstanding the character of justification. Justification is not something that one reason inherits from another; rather it gradually emerges from the chain as a whole. Nowhere, however, is it made clear what exactly is meant by emergence. The aim of this paper is to fill that lacuna: we describe a detailed procedure for the emergence of justification that enables us to see exactly how justification surfaces from a chain of reasons.
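To make the emergence idea concrete, here is a minimal version of such a procedure (a sketch assuming, for simplicity, uniform conditional probabilities along the chain; the paper's own construction is more general):

```latex
% Chain E_0, E_1, E_2, \dots with \alpha = P(E_n \mid E_{n+1}) and
% \beta = P(E_n \mid \neg E_{n+1}), where 0 \le \beta < \alpha and \alpha - \beta < 1.
% The rule of total probability gives P(E_n) = \beta + (\alpha - \beta)\,P(E_{n+1}),
% and iterating it along the whole chain yields
\[
P(E_0) \;=\; \beta \sum_{k=0}^{\infty} (\alpha - \beta)^k \;=\; \frac{\beta}{1 - \alpha + \beta}.
\]
% No single link supplies a ground, yet the value of P(E_0) is fully determined
% by the chain as a whole; e.g. \alpha = 0.9, \beta = 0.1 gives P(E_0) = 0.5.
```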
A Zenonian supertask involving an infinite number of colliding balls is considered, under the restriction that the total mass of all the balls is finite. Classical mechanics leads to the conclusion that momentum, but not necessarily energy, must be conserved. Relativistic mechanics, on the other hand, implies that energy and momentum conservation are always violated. Quantum mechanics, however, seems to rule out the Zeno configuration as an inconsistent system.
A characteristic of contemporary analytic philosophy is its ample use of thought experiments. We formulate two features that can lead one to suspect that a given thought experiment is a poor one. Although these features are especially in evidence within the philosophy of mind, they can, surprisingly enough, also be discerned in some celebrated scientific thought experiments. Yet in the latter case the consequences appear to be less disastrous. We conclude that the use of thought experiments is more successful in science than in philosophy.
So far no known measure of confirmation of a hypothesis by evidence has satisfied a minimal requirement concerning thresholds of acceptance. In contrast, Shogenji’s new measure of justification (Shogenji, Synthese, this issue, 2009) does the trick. As we show, it is ordinally equivalent to the most general measure which satisfies this requirement. We further demonstrate that this general measure resolves the problem of the irrelevant conjunction. Finally, we spell out some implications of the general measure for the Conjunction Effect; in particular we give an example in which the effect occurs in a larger domain, according to Shogenji’s measure of justification, than Carnap’s measure of confirmation would have led one to expect.
An infinite number of elastically colliding balls is considered in a classical, and then in a relativistic setting. Energy and momentum are not necessarily conserved globally, even though each collision does separately conserve them. This result holds in particular when the total mass of all the balls is finite, and even when the spatial extent and temporal duration of the process are also finite. Further, the process is shown to be indeterministic: there is an arbitrary parameter in the general solution that corresponds to the injection of an arbitrary amount of energy (classically), or energy-momentum (relativistically), into the system at the point of accumulation of the locations of the balls. Specific examples are given that illustrate these counter-intuitive results, including one in which all the balls move with the same velocity after every collision has taken place.
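For reference, the per-collision conservation mentioned in the abstract: in a one-dimensional elastic collision between masses m_1 and m_2 with incoming velocities u_1 and u_2 (a textbook result, not specific to this paper), the outgoing velocities are

```latex
\[
v_1 = \frac{(m_1 - m_2)\,u_1 + 2 m_2 u_2}{m_1 + m_2}, \qquad
v_2 = \frac{(m_2 - m_1)\,u_2 + 2 m_1 u_1}{m_1 + m_2},
\]
% so that m_1 v_1 + m_2 v_2 = m_1 u_1 + m_2 u_2 (momentum) and
% \tfrac12 m_1 v_1^2 + \tfrac12 m_2 v_2^2 = \tfrac12 m_1 u_1^2 + \tfrac12 m_2 u_2^2
% (energy) hold at every single collision; the global failure can arise only
% in the actual-infinity limit, at the accumulation point.
```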
Today it is generally assumed that epistemic justification comes in degrees. The consequences, however, have not been adequately appreciated. In this paper we show that the assumption invalidates some venerable attacks on infinitism: once we accept that epistemic justification is gradual, an infinitist stance makes perfect sense. It is only without the assumption that infinitism runs into difficulties.
Galileo claimed inconsistency in the Aristotelian dogma concerning falling bodies and stated that all bodies must fall at the same rate. However, there is an empirical situation where the speeds of falling bodies are proportional to their weights; and even in vacuo all bodies do not fall at the same rate under terrestrial conditions. The source of the deficiency in Galileo’s reasoning is analyzed, and various physical scenarios are described in which Aristotle’s claim is closer to the truth than is Galileo’s. The purpose is not to reinstate Aristotelian physics at the expense of Galileo and Newton, but rather to provide evidence in support of the verdict that empirical knowledge does not come from prior philosophy. Keywords: Aristotle; Galileo; Thought experiments; Falling bodies.
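One empirical situation of the required kind (my illustration; the paper may have other scenarios in view) is slow fall in a viscous fluid, where drag balances gravity at a terminal speed proportional to weight:

```latex
% Stokes drag on a small sphere of radius r falling slowly through a fluid of
% viscosity \eta: F_d = 6\pi\eta r v. Terminal velocity follows from F_d = mg:
\[
v_{\mathrm{term}} \;=\; \frac{mg}{6\pi\eta r},
\]
% i.e. for bodies of equal size the falling speed is proportional to the
% weight, as Aristotle would have it.
```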
We discuss two objections that foundationalists have raised against infinite chains of probabilistic justification. We demonstrate that neither of the objections can be maintained.
Like many discussions on the pros and cons of epistemic foundationalism, the debate between C. I. Lewis and H. Reichenbach dealt with three concerns: the existence of basic beliefs, their nature, and the way in which beliefs are related. In this paper we concentrate on the third matter, especially on Lewis’s assertion that a probability relation must depend on something that is certain, and Reichenbach’s claim that certainty is never needed. We note that Lewis’s assertion is prima facie ambiguous, but argue that this ambiguity is only apparent if probability theory is viewed within a modal logic. Although there are empirical situations where Reichenbach is right, and others where Lewis’s reasoning seems to be more appropriate, it will become clear that Reichenbach’s stance is the generic one. We conclude that this constitutes a threat to epistemic foundationalism. Keywords: Epistemic foundationalism; Probability; Clarence Irving Lewis; Hans Reichenbach.
From 1929 onwards, C. I. Lewis defended the foundationalist claim that judgements of the form 'x is probable' only make sense if one assumes there to be a ground y that is certain (where x and y may be beliefs, propositions, or events). Without this assumption, Lewis argues, the probability of x could not be anything other than zero. Hans Reichenbach repeatedly contested Lewis's idea, calling it "a remnant of rationalism". The last move in this debate was a challenge by Lewis, defying Reichenbach to produce a regress of probability values that yields a number other than zero. Reichenbach never took up the challenge, but we will meet it on his behalf, as it were. By presenting a series converging to a limit, we demonstrate that x can have a definite and computable probability, even if its justification consists of an infinite number of steps. Next we show the invalidity of a recent riposte of foundationalists that this limit of the series can be the ground of justification. Finally we discuss the question where justification can come from if not from a ground.
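To see numerically how a convergent series can meet Lewis's challenge (a sketch with made-up uniform conditional probabilities, not necessarily the paper's own example):

```python
# Partial sums of the regress P(E_0) = beta * sum_k (alpha - beta)^k,
# with alpha = P(E_n | E_{n+1}) and beta = P(E_n | not-E_{n+1}) held fixed.
alpha, beta = 0.9, 0.1   # assumed values, chosen only for illustration

partial = 0.0
for k in range(40):
    partial += beta * (alpha - beta) ** k
    if k in (0, 4, 9, 39):
        print(f"after {k + 1:2d} steps: {partial:.10f}")
# The sums approach beta / (1 - alpha + beta) = 0.5: a definite, computable,
# nonzero probability, even though the chain of reasons never terminates.
```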
This article presents a generalization of the Condorcet Jury Theorem. All results to date assume a fixed value for the competence of jurors, or alternatively, a fixed probability distribution over the possible competences of jurors. In this article, we develop the idea that we can learn the competence of the jurors from the jury vote. We assume a uniform prior probability assignment over the competence parameter, and we adapt this assignment in the light of the jury vote. We then compute the posterior probability, conditional on the jury vote, of the hypothesis voted on. We thereby retain the central results of Condorcet, but we also show that the posterior probability depends on the size of the jury as well as on the absolute margin of the majority.
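The flavour of the computation can be conveyed by a small numerical sketch (my reconstruction, under the added assumption that juror competence is uniform on (1/2, 1), i.e. jurors are better than chance; the paper's exact prior may differ):

```python
import numpy as np

def posterior_hypothesis(k, n, grid=200_001):
    """Posterior that the hypothesis is true, given k of n votes in its favour,
    assuming a 50/50 prior on the hypothesis and juror competence p uniform
    on (1/2, 1), i.e. jurors assumed better than chance."""
    p = np.linspace(0.5, 1.0, grid)
    like_true = p**k * (1 - p)**(n - k)    # each juror votes correctly with prob p
    like_false = p**(n - k) * (1 - p)**k
    # equal grid spacing cancels in the ratio, so plain sums suffice
    return like_true.sum() / (like_true.sum() + like_false.sum())

# Same absolute margin of 6 votes, different jury sizes: the posteriors differ,
# illustrating the dependence on both the margin and the size of the jury.
print(posterior_hypothesis(8, 10))     # 8 vs 2 in a jury of 10
print(posterior_hypothesis(53, 100))   # 53 vs 47 in a jury of 100
```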
The notion of probabilistic support is beset by well-known problems. In this paper we add a new one to the list: the problem of transitivity. Tomoji Shogenji has shown that positive probabilistic support, or confirmation, is transitive under the condition of screening off. However, under that same condition negative probabilistic support, or disconfirmation, is intransitive. Since there are many situations in which disconfirmation is transitive, this illustrates, but now in a different way, that the screening-off condition is too restrictive. We therefore weaken this condition to what we call ‘partial’ screening off. We show that the domain defined by partial screening off comprises two mutually exclusive subdomains. In one subdomain disconfirmation is indeed transitive, but confirmation is then intransitive. In the other, confirmation is transitive, but here disconfirmation is once more intransitive.
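A concrete check of the transitive half of this picture (toy numbers of my own choosing): in a Markov chain A → B → C the screening-off condition holds by construction, and confirmation passes from A through B to C.

```python
# Joint distribution over (A, B, C) built as a Markov chain, so that
# P(C | A, B) = P(C | B): screening off holds by construction.
pA = 0.5
pB_given_A = {True: 0.8, False: 0.3}   # P(B | A), P(B | not-A)
pC_given_B = {True: 0.7, False: 0.2}   # P(C | B), P(C | not-B)

def joint(a, b, c):
    p = pA if a else 1 - pA
    p *= pB_given_A[a] if b else 1 - pB_given_A[a]
    p *= pC_given_B[b] if c else 1 - pC_given_B[b]
    return p

def prob(pred):
    return sum(joint(a, b, c)
               for a in (False, True) for b in (False, True) for c in (False, True)
               if pred(a, b, c))

pB, pB_A = prob(lambda a, b, c: b), prob(lambda a, b, c: a and b) / prob(lambda a, b, c: a)
pC, pC_B = prob(lambda a, b, c: c), prob(lambda a, b, c: b and c) / prob(lambda a, b, c: b)
pC_A = prob(lambda a, b, c: a and c) / prob(lambda a, b, c: a)
print(pB_A > pB, pC_B > pC, pC_A > pC)   # True True True: confirmation is transitive
```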
Time is often said to play an essentially different role in quantum mechanics from that of position: whereas position is represented by a Hermitian operator, time is represented by a c-number. This discrepancy has been found puzzling and has given rise to a vast literature and many efforts at a solution. In this paper it is argued that the discrepancy is only apparent and that there is nothing in the formalism of quantum mechanics that forces us to treat position and time differently. The apparent problem is caused by the dominant role point particles play in physics and can be traced back to classical mechanics.
Can some evidence confirm a conjunction of two hypotheses more than it confirms either of the hypotheses separately? We show that it can, and moreover under conditions that are the same for ten different measures of confirmation. Further, we demonstrate that it is even possible for the conjunction of two disconfirmed hypotheses to be confirmed by the same evidence.
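A toy illustration of both claims, using the simple difference measure d(H, E) = P(H|E) − P(H) (the example and the numbers are mine, not the paper's):

```python
from fractions import Fraction

outcomes = set(range(1, 7))                  # a fair die
def P(event, given=None):
    space = outcomes if given is None else outcomes & given
    return Fraction(len(event & space), len(space))

def d(H, E):                                 # difference measure: P(H|E) - P(H)
    return P(H, given=E) - P(H)

# (i) The conjunction confirmed more than either conjunct:
H1, H2, E = {1, 2}, {2, 3}, {2}
print(d(H1 & H2, E), d(H1, E), d(H2, E))     # 5/6 versus 2/3 and 2/3

# (ii) Both conjuncts disconfirmed, yet the conjunction confirmed:
H1, H2, E = {1, 3}, {2, 3}, {3, 4, 5, 6}
print(d(H1, E), d(H2, E), d(H1 & H2, E))     # -1/12, -1/12, 1/12
```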
Various arguments have been put forward to show that Zeno-like paradoxes are still with us. A particularly interesting one involves a cube composed of colored slabs that geometrically decrease in thickness. We first point out that this argument has already been nullified by Paul Benacerraf. Then we show that nevertheless a further problem remains, one that withstands Benacerraf’s critique. We explain that the new problem is isomorphic to two other Zeno-like predicaments: a problem described by Alper and Bridger in 1998 and a modified version of the problem that Benardete introduced in 1964. Finally, we present a solution to the three isomorphic problems.
We have earlier shown by construction that a proposition can have a well-defined nonzero probability, even if it is justified by an infinite probabilistic regress. We thought this to be an adequate rebuttal of foundationalist claims that probabilistic regresses must lead either to an indeterminate, or to a determinate but zero probability. In a comment, Frederik Herzberg has argued that our counterexamples are of a special kind, being what he calls ‘solvable’. In the present reaction we investigate what Herzberg means by solvability. We discuss the advantages and disadvantages of making solvability a sine qua non, and we ventilate our misgivings about Herzberg’s suggestion that the notion of solvability might help the foundationalist.
Quantum electrodynamics is a time-symmetric theory that is part of the electroweak interaction, which is invariant under a generalized form of this symmetry, the PCT transformation. The thesis is defended that the arrow of time in electrodynamics is a consequence of the assumption of an initial state of high order, together with the quantum version of the equiprobability postulate.
My theme is thought experiment in natural science, and its relation to real experiment. I shall defend the thesis that thought experiments that do not lead to theorizing and to a real experiment are generally of much less value than those that do. To illustrate this thesis I refer to three examples, from three very different periods, and with three very different kinds of status. The first is the classic thought experiment in which Galileo imagined that he had, by pure thought, demolished Aristotle's dogma that heavier bodies fall more quickly than light ones. I will show that he was mistaken. The second is the Einstein-Podolsky-Rosen paper purporting to show that quantum mechanics must be incomplete in its domain of application. This thought experiment is a very good one, not because its conclusions are correct, but precisely because it was fruitful, leading to theory and, above all, to a real experiment. Finally I discuss the modern string theory of everything, which, while it is regarded as a physical theory by its instigators, shares some properties of the least successful sort of thought experiment.
Some series can go on indefinitely, others cannot, and epistemologists want to know in which class to place epistemic chains. Is it sensible or nonsensical to speak of a proposition or belief that is justified by another proposition or belief, ad infinitum? In large part the answer depends on what we mean by “justification.” Epistemologists have failed to find a definition on which everybody agrees, and some have even advised us to stop looking altogether. In spite of this, the present essay submits a few candidate definitions. It argues that, although not giving the final word, these candidates tell us something about the possibility of infinite epistemic chains. And it shows that they can short-circuit a debate about doxastic justification.
In an earlier paper we have shown that a proposition can have a well-defined probability value, even if its justification consists of an infinite linear chain. In the present paper we demonstrate that the same holds if the justification takes the form of a closed loop. Moreover, in the limit that the size of the loop tends to infinity, the probability value of the justified proposition is always well-defined, whereas this is not always so for the infinite linear chain. This suggests that infinitism sits more comfortably with a coherentist view of justification than with an approach in which justification is portrayed as a linear process.
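The contrast with the linear chain can be made vivid in the uniform special case (a sketch with constant conditional probabilities α = P(E_k | E_{k+1}) and β = P(E_k | ¬E_{k+1}), indices taken modulo the loop size; the paper's treatment is more general):

```latex
% Writing s = \alpha - \beta and going once around a loop of n propositions:
%   v = \beta\,(1 + s + \dots + s^{\,n-1}) + s^{\,n}\, v,
% which, for s^n \ne 1, solves to
\[
v \;=\; \frac{\beta}{1 - s} \;=\; \frac{\beta}{1 - \alpha + \beta},
\]
% the same well-defined value for every loop size n, and hence also in the
% limit n \to \infty; the infinite linear chain, by contrast, requires a
% convergence condition on its (possibly non-uniform) parameters.
```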
It is widely held that the paradox of Achilles and the Tortoise, introduced by Zeno of Elea around 460 B.C., was solved by mathematical advances in the nineteenth century. The techniques of Weierstrass, Dedekind and Cantor made it clear, according to this view, that Achilles’ difficulty in traversing an infinite number of intervals while trying to catch up with the tortoise does not involve a contradiction, let alone a logical absurdity. Yet ever since the nineteenth century there have been dissidents claiming that the apparatus of Weierstrass et al. has not resolved the paradox, and that serious problems remain. It seems that these claims have received unexpected support from recent developments in mathematical physics. This support has however remained largely unnoticed by historians of philosophy, presumably because the relevant debates are cast in mathematical-technical terms that are only accessible to people with the relevant training. That is unfortunate, since the debates in question might well profit from input by philosophers in general and historians of philosophy in particular. Below we will first recall the Achilles paradox, and describe the way in which nineteenth century mathematics supposedly solved it. Then we discuss recent work that contests this solution, reiterating the dissident dogma that no mathematical approach whatsoever can even come close to solving the original Achilles. We shall argue that this dissatisfaction with a mathematical solution is inadequate as it stands, but that it can perhaps be reformulated in the light of new developments in mathematical physics.
Could some evidence confirm a conjunction of two hypotheses more than it confirms either of the hypotheses separately? We show that it might, and moreover under conditions that are the same for ten different measures of confirmation. Further, we demonstrate that it is even possible for the conjunction of two disconfirmed hypotheses to be confirmed by the same evidence.
Various arguments have been put forward to show that Zeno-like paradoxes are still with us. A particularly interesting one involves a cube composed of colored slabs that geometrically decrease in thickness. We first point out that this argument has already been nullified by Paul Benacerraf. Then we show that nevertheless a further problem remains, one that withstands Benacerraf’s critique. We explain that the new problem is isomorphic to two other Zeno-like predicaments: a problem described by Alper and Bridger in 1998 and a modified version of the problem that Benardete introduced in 1964. Finally, we present a solution to the three isomorphic problems.
An actual infinity of colliding balls can be in a configuration in which the laws of mechanics lead to logical inconsistency. It is argued that one should therefore limit the domain of these laws to a finite, or only a potentially infinite number of elements. With this restriction, indeterminism, energy nonconservation and creatio ex nihilo no longer occur. A numerical analysis of finite systems of colliding balls is given, and the asymptotic behaviour that corresponds to the potentially infinite system is inferred.
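A minimal numerical sketch of such a finite system (my own construction for illustration, not necessarily the configuration analysed in the paper): ball 0 strikes a row of resting balls whose masses shrink geometrically, so each ball outruns its striker and no ball is ever hit twice.

```python
r = 0.5                                   # mass ratio between successive balls (assumed)
n = 30                                    # number of initially resting balls
masses = [r**k for k in range(n + 1)]
vels = [0.0] * (n + 1)
vels[0] = 1.0                             # ball 0 comes in with unit speed

p0 = sum(m * v for m, v in zip(masses, vels))
e0 = sum(0.5 * m * v * v for m, v in zip(masses, vels))

for k in range(n):                        # one elastic collision per step
    m1, m2, u = masses[k], masses[k + 1], vels[k]
    vels[k] = (m1 - m2) / (m1 + m2) * u       # striker keeps going, but slower
    vels[k + 1] = 2 * m1 / (m1 + m2) * u      # struck ball flies off at 2u/(1+r)

p = sum(m * v for m, v in zip(masses, vels))
e = sum(0.5 * m * v * v for m, v in zip(masses, vels))
print(f"speed of last ball: {vels[-1]:.3e}")  # grows like (2/(1+r))**n with n
print(f"momentum drift: {p - p0:.2e}, energy drift: {e - e0:.2e}")  # ~0 for finite n
```

For every finite n both conservation laws hold to rounding error, while the top speed diverges as n grows: the asymptotic behaviour of the potentially infinite system.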
As is well known, implication is transitive but probabilistic support is not. Eells and Sober, followed by Shogenji, showed that screening off is a sufficient constraint for the transitivity of probabilistic support. Moreover, this screening off condition can be weakened without sacrificing transitivity, as was demonstrated by Suppes and later by Roche. In this paper we introduce an even weaker sufficient condition for the transitivity of probabilistic support, in fact one that can be made as weak as one wishes. We explain that this condition has an interesting property: it shows that transitivity is retained even though the Simpson paradox reigns. We further show that by adding a certain restriction the condition can be turned into one that is both sufficient and necessary for transitivity.
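For orientation, 'the Simpson paradox reigns' refers to associations that reverse between subpopulations and the total population; the much-cited kidney-stone treatment data show the pattern (a standard illustration, not taken from the paper):

```python
from fractions import Fraction as F

# success / total for treatments A and B, split by stone size (much-cited data)
small = {"A": F(81, 87), "B": F(234, 270)}
large = {"A": F(192, 263), "B": F(55, 80)}
overall = {"A": F(81 + 192, 87 + 263), "B": F(234 + 55, 270 + 80)}

print(small["A"] > small["B"])      # True: A better on small stones
print(large["A"] > large["B"])      # True: A better on large stones
print(overall["A"] > overall["B"])  # False: yet B better overall
```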
A Zenonian supertask involving an infinite number of identical colliding balls is generalized to include balls with different masses. Under the restriction that the total mass of all the balls is finite, classical mechanics leads to velocities that have no upper limit. Relativistic mechanics results in velocities bounded by that of light, but energy and momentum are not conserved, implying indeterminism. The notion that both determinism and the conservation laws might be salvaged via photon creation is shown to be flawed.
It is argued that probability should be defined implicitly by the distributions of possible measurement values characteristic of a theory. These distributions are tested by, but not defined in terms of, relative frequencies of occurrences of events of a specified kind. The adoption of an a priori probability in an empirical investigation constitutes part of the formulation of a theory. In particular, an assumption of equiprobability in a given situation is merely one hypothesis inter alia, which can be tested, like any other assumption. Probability in relation to some theories – for example quantum mechanics – need not satisfy the Kolmogorov axioms. To illustrate how two theories about the same system can generate quite different probability concepts, and not just different probabilistic predictions, a team game for three players is described. If only classical methods are allowed, a 75% success rate at best can be achieved. Nevertheless, a quantum strategy exists that gives a 100% probability of winning.
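The abstract does not spell out the game, but a standard three-player game with exactly this 75%-versus-100% profile is the GHZ game; a brute-force check of the classical bound for that game (my reconstruction) runs as follows. Shared randomness cannot beat the best deterministic strategy, so it suffices to enumerate those.

```python
from itertools import product

# GHZ game: the referee draws bits (r, s, t) from {000, 011, 101, 110};
# the players answer bits (a, b, c) and win iff a XOR b XOR c == r OR s OR t.
questions = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

best = 0
# a deterministic classical strategy is one bit-to-bit function per player,
# i.e. a pair (answer on input 0, answer on input 1) for each of the three
for fa, fb, fc in product(product((0, 1), repeat=2), repeat=3):
    wins = sum((fa[r] ^ fb[s] ^ fc[t]) == (r | s | t)
               for (r, s, t) in questions)
    best = max(best, wins)
print(best / len(questions))   # 0.75: no classical strategy wins all four rounds
```

A quantum strategy using a shared GHZ state wins every round, which is the 100% figure quoted above.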
We have never entirely agreed with Daniel Cohnitz on the status and rôle of thought experiments. Several years ago, enjoying a splendid lunch together in the city of Ghent, we cheerfully agreed to disagree on the matter; and now that Cohnitz has published his considered opinion of our views, we are glad that we have the opportunity to write a rejoinder and to explicate some of our disagreements. We choose not to deal here with all the issues that Cohnitz raises, but rather to restrict ourselves to three specific points.
Can some evidence confirm a conjunction of two hypotheses more than it confirms either of the hypotheses separately? We show that it can, and moreover under conditions that are the same for nine different measures of confirmation. Further, we demonstrate that it is even possible for the conjunction of two disconfirmed hypotheses to be confirmed by the same evidence.
Heisenberg’s uncertainty principle is a milestone of twentieth-century physics. We sketch the history that led to the formulation of the principle, and we recall the objections of Grete Hermann and Niels Bohr. Then we explain that there are in fact two uncertainty principles. One was published by Heisenberg in the Zeitschrift für Physik of March 1927 and subsequently targeted by Bohr and Hermann. The other one was introduced by Earle Kennard in the same journal a couple of months later. While Kennard’s principle remains untarnished, the principle of Heisenberg has recently been criticized in a way that is very different from the objections by Bohr and Hermann: there are reasons to believe that Heisenberg’s formula is not valid. Experimental results seem to support this claim.
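For orientation, the two relations in question, in their standard modern formulations (the paper's notation may differ): Kennard's is a theorem about state preparation, while Heisenberg's original relation concerns measurement error and disturbance.

```latex
% Kennard (1927): for any state, the standard deviations of position and
% momentum obey
\[
\sigma_q\,\sigma_p \;\ge\; \frac{\hbar}{2},
\]
% Heisenberg (1927), informally: the error \epsilon(q) of an approximate
% position measurement and the disturbance \eta(p) it inflicts on the momentum
% satisfy
\[
\epsilon(q)\,\eta(p) \;\sim\; \hbar,
\]
% and it is this second, error-disturbance reading that has recently been
% argued, on theoretical and experimental grounds, to fail in general.
```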
Some philosophers have claimed that it is meaningless or paradoxical to consider the probability of a probability. Others have however argued that second-order probabilities do not pose any particular problem. We side with the latter group. On condition that the relevant distinctions are taken into account, second-order probabilities can be shown to be perfectly consistent. May the same be said of an infinite hierarchy of higher-order probabilities? Is it consistent to speak of a probability of a probability, and of a probability of a probability of a probability, and so on, ad infinitum? We argue that it is, for it can be shown that there exists an infinite system of probabilities that has a model. In particular, we define a regress of higher-order probabilities that leads to a convergent series which determines an infinite-order probability value. We demonstrate the consistency of the regress by constructing a model based on coin-making machines.
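A finite-depth toy in the spirit of coin-making machines (the specific distributions below are my own assumptions, chosen so that the hierarchy visibly converges; they are not the paper's construction):

```python
import random

def sample_bias(depth):
    """Bias handed down through `depth` levels of machines: each level jitters
    the bias it receives from the level above by an amount that shrinks
    geometrically, so deeper hierarchies change the answer less and less."""
    bias = 0.6                    # bias at the top of the (truncated) hierarchy
    for level in range(1, depth + 1):
        jitter = 0.2 ** level     # geometric shrinking keeps the total sum < 0.25
        bias += random.uniform(-jitter, jitter)
    return bias                   # stays inside (0.35, 0.85): no clipping needed

trials = 200_000
heads = sum(random.random() < sample_bias(10) for _ in range(trials))
print(heads / trials)   # ~0.6: the hierarchy pins down a definite probability
```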
The classical electrodynamics of point charges can be made finite by the introduction of effects that temporally precede their causes. The idea of retrocausality is also inherent in the Feynman propagators of quantum electrodynamics. The notion allows a new understanding of the violation of the Bell inequalities, and of the world view revealed by quantum mechanics. Published in The Universe, Visions and Perspectives, edited by N. Dadhich and A. Kembhavi, Kluwer Academic Publishers, 2000, pages 35-50.
This paper is the third and final one in a sequence of three. All three papers emphasize that a proposition can be justified by an infinite regress, on condition that epistemic justification is interpreted probabilistically. The first two papers showed this for one-dimensional chains and for one-dimensional loops of propositions, each proposition being justified probabilistically by its precursor. In the present paper we consider the more complicated case of two-dimensional nets, where each "child" proposition is probabilistically justified by two "parent" propositions. Surprisingly, it turns out that probabilistic justification in two dimensions takes on the form of Mandelbrot's iteration. Like so many patterns in nature, probabilistic reasoning might in the end be fractal in character.
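To see where a quadratic, hence Mandelbrot-like, iteration could come from, consider the uniform special case (my reconstruction, not the paper's general treatment): every node at generation n has probability v_n, and a child is justified by two independent parents with conditional probabilities α (both parents true), β (exactly one), γ (neither). Total probability then gives a quadratic map:

```latex
\[
v_n \;=\; \alpha\,v_{n+1}^2 \;+\; 2\beta\,v_{n+1}\,(1 - v_{n+1}) \;+\; \gamma\,(1 - v_{n+1})^2,
\]
% a second-degree recursion in v_{n+1}; iterated quadratic maps of this kind
% are exactly the territory of Mandelbrot-style dynamics, with convergence or
% divergence depending on the parameters \alpha, \beta, \gamma.
```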
It is argued that while classical probability theory, as it is encapsulated in the axioms of Kolmogorov and in his criterion for the independence of two events, can consistently be employed in quantum mechanics, this can only be accomplished at an exorbitant price. By considering first the classic two-slit experiment, and then the passage of one photon through three polarizers, the applicability of Kolmogorov's last axiom is called into question, but the standard rebuff of the Copenhagen interpretation is shown to be adequate to this challenge. In the EPR experiment of Aspect, and the violation of the Bell inequalities, the matter is more delicate: it is not directly the last axiom, but rather the relevance of Kolmogorovian independence that is at issue. It is explained how two events with space-like separation cannot be independent in Kolmogorov's sense, even in the presence of hidden variables. The escape route of supposing these variables to be nonlocal, with a heavy metaphysical ballast of holism, which however is cosmically censored to prevent superluminal information transfer, has all the trappings of an ad hoc makeshift. The adoption of quantum mechanical probability, which does not obey the rules of Kolmogorov, but does survive empirical testing in terms of relative frequencies of events, is more economical. The solution is simple: correlations obey the rules of quantum mechanics and probability is a theory-laden concept that is tested by, but not defined in terms of, the relative frequency of selected classes of events.
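The two-slit point can be put in one line (standard textbook form): with amplitudes ψ₁ and ψ₂ for passage through slit 1 or slit 2, the probability density on the screen is

```latex
\[
P(x) \;=\; \lvert \psi_1(x) + \psi_2(x) \rvert^2
      \;=\; P_1(x) + P_2(x) + 2\,\mathrm{Re}\!\left[\psi_1^*(x)\,\psi_2(x)\right],
\]
% and the interference term spoils the naive additivity P(x) = P_1(x) + P_2(x)
% that a straightforward Kolmogorovian partition into 'slit 1 or slit 2'
% would suggest.
```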
Suppose q is some proposition, and let

P(q) = v0   (1)

be the proposition that the probability of q is v0. How can one know that (1) is true? One cannot know it for sure, for all that may be asserted is a further probabilistic statement like

P(P(q) = v0) = v1,   (2)

which states that the probability that (1) is true is v1. But the claim (2) is also subject to some further statement of an even higher probability:

P(P(P(q) = v0) = v1) = v2,   (3)

and so on. Thus, an infinite regress emerges of probabilities of probabilities, and the question arises as to whether this regress is vicious or harmless. Radical probabilists would like to claim that it is harmless, but Nicholas Rescher (2010), in his scholarly and very stimulating Infinite Regress: The Theory and History of Varieties of Change, argues that it is vicious. He believes that an infinite hierarchy of probabilities makes it impossible to know anything about the probability of the original proposition q: "unless some claims are going to be categorically validated and not just adjudged probabilistically, the radically probabilistic epistemology envisioned here is going to be beyond the prospect of implementation. . . . If you can indeed be certain of nothing, then how can you be sure of your probability assessments. If all you ever have is a nonterminatingly regressive claim of the format . . . the probability is .9 that (the probability is .9 that (the probability of q is .9)) then in the face of such a regress, you would know effectively nothing about the condition of q. After all, without a categorically established factual basis of some sort, there is no way of assessing probabilities. But if these requisites themselves are never categorical but only probabilistic, then we are propelled into a vitiating regress of presuppositions."
Reichenbach's use of 'posits' to defend his frequentistic theory of probability has been criticized on the grounds that it makes unfalsifiable predictions. The justice of this criticism has blinded many to Reichenbach's second use of a posit, one that can fruitfully be applied to current debates within epistemology. We show first that Reichenbach's alternative type of posit creates a difficulty for epistemic foundationalists, and then that its use is equivalent to a particular kind of Jeffrey conditionalization. We conclude that, under particular circumstances, Reichenbach's approach and that of the Bayesians amount to the same thing, thereby presenting us with a new instance in which chance and credence coincide.
Richard Jeffrey’s radical probabilism (‘probability all the way down’) is augmented by the claim that probability cannot be turned into certainty, except by data that logically exclude all alternatives. Once we start being uncertain, no amount of updating will free us from the treadmill of uncertainty. This claim is cast first in objectivist and then in subjectivist terms.
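The core claim can be seen directly from Jeffrey's update rule (a standard formulation, not a quotation from the paper): if experience shifts the probabilities of a partition {E_i} to new values P'(E_i), then

```latex
\[
P'(A) \;=\; \sum_i P(A \mid E_i)\, P'(E_i),
\]
% a convex combination of the conditional probabilities P(A | E_i). Hence
% P'(A) = 1 is possible only if P(A | E_i) = 1 for every E_i with P'(E_i) > 0,
% i.e. only if the data logically exclude all alternatives to A. Short of
% that, updating keeps us on the treadmill of uncertainty.
```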
Theo Kuipers describes four kinds of research programs and the question is raised here as to whether string theory could be accommodated by one of them, or whether it should be classified in a new, fifth kind of research program.
In A Treatise of Human Nature, David Hume presents an argument according to which all knowledge reduces to probability, and all probability reduces to nothing. Many have criticized this argument, while others find nothing wrong with it. In this paper we explain that the argument is invalid as it stands, but for different reasons than have been hitherto acknowledged. Once the argument is repaired, it becomes clear that there is indeed something that reduces to nothing, but it is something other than what, according to many, Hume had in mind. Thus two views emerge of what exactly it is that reduces. We surmise that Hume failed to distinguish the two, because he lacked the formal means to differentiate between a rendering of his argument that is in accordance with the probability calculus, and one that is not.
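The skeleton of the regress, in the reading on which something does go to zero (a reconstruction for orientation; the paper distinguishes this from a reading that accords with the probability calculus):

```latex
% Each review of our judgement multiplies in a further factor of doubt:
\[
p \cdot p_1 \cdot p_2 \cdots p_n \;\longrightarrow\; 0 \quad (n \to \infty)
\quad\text{whenever } p_i \le c < 1 \text{ for all } i,
\]
% whereas treating each p_i as a conditional probability and applying the rule
% of total probability at every step need not drive the value of interest to
% zero at all, as convergent-series constructions of the kind discussed in
% this collection show.
```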
The original article has been corrected. Erroneously, a comma and a space were added in line 164, so that 500,500 appeared as ‘500, 500’; the authors would like readers to know that it should instead read ‘500,500’.