This book compares various approaches to the interpretation of quantum mechanics, in particular those associated with the key words "the Copenhagen interpretation", "the antirealist view", "quantum logic" and "hidden variable theory". Using the concept of "correlation", carefully analyzed in the context of classical probability and in quantum theory, the author provides a framework to compare these approaches. He also develops an extension of probability theory to construct a local hidden variable theory. The book should be of interest to physicists and philosophers of science interested in the foundations of quantum theory.
We develop and defend the thesis that the Hilbert space formalism of quantum mechanics is a new theory of probability. The theory, like its classical counterpart, consists of an algebra of events and the probability measures defined on it. The construction proceeds in the following steps: (a) Axioms for the algebra of events are introduced following Birkhoff and von Neumann. All axioms, except the one that expresses the uncertainty principle, are shared with the classical event space. The only models for the set of axioms are lattices of subspaces of inner product spaces over a field K. (b) Another axiom, due to Soler, forces K to be the field of real or complex numbers, or the quaternions. We suggest a probabilistic reading of Soler's axiom. (c) Gleason's theorem fully characterizes the probability measures on the algebra of events, so that Born's rule is derived. (d) Gleason's theorem is equivalent to the existence of a certain finite set of rays, with a particular orthogonality graph (Wondergraph). Consequently, all aspects of quantum probability can be derived from rational probability assignments to finite "quantum gambles". (e) All experimental aspects of entanglement, the violation of Bell's inequality in particular, are explained as natural outcomes of the probabilistic structure. (f) We hypothesize that even in the absence of decoherence macroscopic entanglement can very rarely be observed, and provide a precise conjecture to that effect. We also discuss the relation of the present approach to quantum logic, realism and truth, and the measurement problem.
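As a numerical aside (our illustration, not part of the abstract), clause (c) can be checked in miniature: Gleason's theorem says that every probability measure on the projection lattice of a Hilbert space of dimension at least 3 has the Born form p(P) = tr(ρP). The sketch below only verifies that this form indeed yields a probability measure on any orthogonal frame; the state and the frame are arbitrary choices of ours.

```python
import numpy as np

# Minimal sketch: Born-form measures p(P) = tr(rho P) are genuine
# probability measures on every orthogonal frame (maximal measurement).
# The state rho and the frame below are arbitrary illustrative choices.

rng = np.random.default_rng(0)
d = 3

# Random density matrix: positive semi-definite with unit trace.
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = A @ A.conj().T
rho /= np.trace(rho)

# Random orthonormal basis -> a frame of d mutually orthogonal rays.
Q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
projectors = [np.outer(Q[:, i], Q[:, i].conj()) for i in range(d)]

probs = [np.trace(rho @ P).real for P in projectors]
assert all(p >= -1e-12 for p in probs)   # non-negativity
assert abs(sum(probs) - 1) < 1e-12       # additivity on the frame
print(probs)
```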
We argue that the intractable part of the measurement problem -- the 'big' measurement problem -- is a pseudo-problem that depends for its legitimacy on the acceptance of two dogmas. The first dogma is John Bell's assertion that measurement should never be introduced as a primitive process in a fundamental mechanical theory like classical or quantum mechanics, but should always be open to a complete analysis, in principle, of how the individual outcomes come about dynamically. The second dogma is the view that the quantum state has an ontological significance analogous to the significance of the classical state as the 'truthmaker' for propositions about the occurrence and non-occurrence of events, i.e., that the quantum state is a representation of physical reality. We show how both dogmas can be rejected in a realist information-theoretic interpretation of quantum mechanics as an alternative to the Everett interpretation. The Everettian, too, regards the 'big' measurement problem as a pseudo-problem, because the Everettian rejects the assumption that measurements have definite outcomes, in the sense that one particular outcome, as opposed to other possible outcomes, actually occurs in a quantum measurement process. By contrast with the Everettians, we accept that measurements have definite outcomes. By contrast with the Bohmians and the GRW 'collapse' theorists who add structure to the theory and propose dynamical solutions to the 'big' measurement problem, we take the problem to arise from the failure to see the significance of Hilbert space as a new kinematic framework for the physics of an indeterministic universe, in the sense that Hilbert space imposes kinematic objective probabilistic constraints on correlations between events.
In the mid-nineteenth century George Boole formulated his ‘conditions of possible experience’. These are equations and inequalities that the relative frequencies of events must satisfy. Some of Boole's conditions have been rediscovered in more recent years by physicists, including Bell inequalities, Clauser-Horne inequalities, and many others. In this paper, the nature of Boole's conditions and their relation to propositional logic is explained, and the puzzle associated with their violation by quantum frequencies is investigated in relation to a variety of approaches to the interpretation of quantum mechanics. * While preparing this paper for publication I have learnt of the untimely death of Professor J. S. Bell, and I wish to dedicate the paper to his memory. This research was undertaken while I spent a sabbatical leave at Wolfson College and the History and Philosophy of Science Department at the University of Cambridge. I would like to thank Michael Redhead and Jeremy Butterfield for their hospitality and for helpful discussions. A first draft of this paper was distributed among the participants of the conference 'Einstein in Context', which was held in Israel in April 1990. I have benefited from the comments of many colleagues. I would like to thank in particular Arthur Fine, who enlightened me on the prism models, David Albert, Maya Bar-Hillel, Yemima Ben-Menachem, Mara Beller, Simon Saunders, and Mark Steiner. This research is partially supported by the Edelstein Center for the History and Philosophy of Science at the Hebrew University.
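For concreteness (our example; the abstract itself names the rediscoveries), the CHSH inequality is one of Boole's conditions of possible experience for pair frequencies: |E(a1,b1) + E(a1,b2) + E(a2,b1) - E(a2,b2)| <= 2 for frequencies arising from a joint distribution. Quantum singlet-state correlations E(a,b) = -cos(a-b) violate it, reaching 2√2 at the standard optimal angles:

```python
import numpy as np

# Illustrative check (not from the paper): singlet-state correlations
# violate the CHSH condition, one of Boole's rediscovered inequalities.

def E(a, b):                       # quantum correlation in the singlet state
    return -np.cos(a - b)

a1, a2 = 0.0, np.pi / 2            # measurement angles, observer 1
b1, b2 = np.pi / 4, -np.pi / 4     # measurement angles, observer 2

chsh = E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2)
print(chsh, 2 * np.sqrt(2))        # -2.828...: |CHSH| exceeds the bound 2
```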
We develop a systematic approach to quantum probability as a theory of rational betting in quantum gambles. In these games of chance, the agent is betting in advance on the outcomes of several (finitely many) incompatible measurements. One of the measurements is subsequently chosen and performed and the money placed on the other measurements is returned to the agent. We show how the rules of rational betting imply all the interesting features of quantum probability, even in such finite gambles. These include the uncertainty principle and the violation of Bell's inequality among others. Quantum gambles are closely related to quantum logic and provide a new semantics for it. We conclude with a philosophical discussion on the interpretation of quantum mechanics.
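To illustrate the setup (a toy sketch of ours, assuming the non-contextuality constraint that the paper derives from its betting rules): in a quantum gamble the agent posts odds on the outcomes of each of finitely many incompatible measurements; rationality requires the odds within each measurement to sum to 1, and an outcome shared by two incompatible measurements to receive the same odds in both. Born-rule odds satisfy both constraints automatically:

```python
import numpy as np

# Toy illustration (our own, not the paper's formalism): two incompatible
# measurements in R^3 sharing the ray e1. Born-rule odds tr(rho P) sum to 1
# within each measurement and agree on the shared outcome.

e1 = np.array([1.0, 0.0, 0.0])
basis_A = [e1, np.array([0, 1.0, 0]), np.array([0, 0, 1.0])]
c, s = np.cos(0.3), np.sin(0.3)
basis_B = [e1, np.array([0, c, s]), np.array([0, -s, c])]  # rotated in e2-e3 plane

rho = np.diag([0.5, 0.3, 0.2])                 # an arbitrary quantum state

def odds(basis):
    return [float(v @ rho @ v) for v in basis]

pA, pB = odds(basis_A), odds(basis_B)
assert abs(sum(pA) - 1) < 1e-12 and abs(sum(pB) - 1) < 1e-12
assert abs(pA[0] - pB[0]) < 1e-12              # shared ray e1 -> same odds
print(pA, pB)
```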
A deterministic model that accounts for the statistical behavior of random samples of identical particles is presented. The model is based on some nonmeasurable distribution of spin values in all directions. The mathematical existence of such distributions is proved by set-theoretical techniques, and the relation between these distributions and observed frequencies is explored within an appropriate extension of probability theory. The relation between quantum mechanics and the model is specified. The model is shown to be consistent with known polarization phenomena and the existence of macroscopic magnetism. Finally...
We describe a possible physical device that computes a function that cannot be computed by a Turing machine. The device is physical in the sense that it is compatible with General Relativity. We discuss some objections, focusing on those which deny that the device is either a computer or computes a function that is not Turing computable. Finally, we argue that the existence of the device does not refute the Church–Turing thesis, but nevertheless may be a counterexample to Gandy's thesis.
We develop a systematic approach to quantum probability as a theory of rational betting in quantum gambles. In these games of chance the agent is betting in advance on the outcomes of several incompatible measurements. One of the measurements is subsequently chosen and performed and the money placed on the other measurements is returned to the agent. We show how the rules of rational betting imply all the interesting features of quantum probability, even in such finite gambles. These include the uncertainty principle and the violation of Bell's inequality among others. Quantum gambles are closely related to quantum logic and provide a new semantics for it. We conclude with a philosophical discussion on the interpretation of quantum mechanics.
1. The Physical Church-Turing Thesis. Physicists often interpret the Church-Turing Thesis as saying something about the scope and limitations of physical computing machines. Although this was not the intention of Church or Turing, the Physical Church-Turing Thesis is interesting in its own right. Consider, for example, Wolfram's formulation: One can expect in fact that universal computers are as powerful in their computational capabilities as any physically realizable system can be, that they can simulate any physical system . . . No physically implementable procedure could then shortcut a computationally irreducible process. (Wolfram 1985) Wolfram's thesis consists of two parts: (a) Any physical system can be simulated (to any degree of approximation) by a universal Turing machine. (b) Complexity bounds on Turing machine simulations have physical significance. For example, suppose that the computation of the minimum energy of some system of n particles takes at least exponentially (in n) many steps. Then the relaxation time of the actual physical system to its minimum energy state will also take exponential time.
Why do we not see large macroscopic objects in entangled states? There are two ways to approach this question. The first is dynamic: the coupling of a large object to its environment causes any entanglement to decrease considerably. The second approach, which is discussed in this paper, puts the stress on the difficulty of observing large-scale entanglement. As the number of particles n grows we need an ever more precise knowledge of the state and an ever more carefully designed experiment in order to recognize entanglement. To develop this point we consider a family of observables, called witnesses, which are designed to detect entanglement. A witness W distinguishes all the separable (unentangled) states from some entangled states. If we normalize the witness W to satisfy |tr(Wρ)| ≤ 1 for all separable states ρ, then the efficiency of W depends on the size of its maximal eigenvalue in absolute value, that is, its operator norm ‖W‖. It is known that there are witnesses on the space of n qubits for which ‖W‖ is exponential in n. However, we conjecture that for a large majority of n-qubit witnesses ‖W‖ ≤ O((n log n)^(1/2)). Thus, in a nonideal measurement, which includes errors, the largest eigenvalue of a typical witness lies below the threshold of detection. We prove this conjecture for the family of extremal witnesses introduced by Werner and Wolf [Phys. Rev. A 64, 032112 (2001)].
We argue that certain types of many minds (and many worlds) interpretations of quantum mechanics, e.g. Lockwood ([1996a]), Deutsch ([1985]), do not provide a coherent interpretation of the quantum mechanical probabilistic algorithm. By contrast, in Albert and Loewer's ([1988]) version of the many minds interpretation, there is a coherent interpretation of the quantum mechanical probabilities. We consider Albert and Loewer's probability interpretation in the context of Bell-type and GHZ-type states and argue that it implies a certain (weak) form of nonlocality. Contents: 1 Introduction; 2 Albert and Loewer's interpretation; 3 Probabilities in Lockwood's interpretation; 4 Sets of minds and their correlations; 5 Many minds and GHZ.
The Einstein-Podolsky-Rosen argument for the incompleteness of quantum mechanics involves two assumptions: one about locality and the other about when it is legitimate to infer the existence of an element-of-reality. Using one simple thought experiment, we argue that quantum predictions and the relativity of simultaneity require that both these assumptions fail, whether or not quantum mechanics is complete.
Boltzmann’s approach to statistical mechanics is widely believed to be conceptually superior to Gibbs’ formulation. However, the microcanonical distribution often fails to behave as expected: the ergodicity of the motion relative to it can rarely be established for realistic systems; worse, it can often be proved to fail. Also, the approach involves idealizations that have little physical basis. Here we take Khinchin's advice and propose a definition of equilibrium that is more realistic: the definition reflects the fact that the system is made of a great number of particles, and implies that all measurable macroscopic observables have steady values.
The essays in this volume were written by leading researchers on classical mechanics, statistical mechanics, quantum theory, and relativity. They detail central topics in the foundations of physics, including the role of symmetry principles in classical and quantum physics, Einstein's hole argument in general relativity, quantum mechanics and special relativity, quantum correlations, quantum logic, and quantum probability and information.
We consider the set of all matrices of the form p_ij = tr[W(E_i ⊗ F_j)], where E_i, F_j are projections on a Hilbert space H, and W is some state on H ⊗ H. We derive the basic properties of this set, compare it with the classical range of probability, and note how its properties may be related to geometric measures of entanglement.
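A concrete instance (our sketch; the state and projections are arbitrary illustrative choices): for the maximally entangled two-qubit state and one pair of rank-1 projective measurements per side, the matrix p_ij can be computed directly:

```python
import numpy as np

# Sketch: compute one matrix p_ij = tr[W (E_i (x) F_j)] for the maximally
# entangled two-qubit state. Every entry lies in [0, 1]; the set of all
# such matrices is strictly larger than the classical correlation range.

phi = np.array([1, 0, 0, 1]) / np.sqrt(2)     # (|00> + |11>)/sqrt(2)
W = np.outer(phi, phi)                         # the state W on H (x) H

def proj(theta):                               # rank-1 projection in R^2
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

E = [proj(0.0), proj(np.pi / 2)]               # projections E_i, side 1
F = [proj(np.pi / 8), proj(5 * np.pi / 8)]     # projections F_j, side 2

p = np.array([[np.trace(W @ np.kron(Ei, Fj)).real for Fj in F] for Ei in E])
print(p)
```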
If p(x_1,...,x_n) and q(x_1,...,x_n) are two logically equivalent propositions, then p(π(x_1),...,π(x_n)) and q(π(x_1),...,π(x_n)) are also logically equivalent, where π is an arbitrary permutation of the elementary constituents x_1,...,x_n. In Quantum Logic the invariance of logical equivalences breaks down. It is proved that the distribution rules of classical logic are in fact equivalent to the meta-linguistic rule of universal substitution and that the more restrictive structure of the substitution group of Quantum Logic prevents us from defining truth in a classical fashion. These observations lead to a more profound understanding of the Logic of Quantum Mechanics and of the role that symmetry principles play in that theory.
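A standard counterexample (ours, not taken from the paper) shows the failure of the distribution rules in the lattice of subspaces, the structure underlying Quantum Logic:

```latex
% Distributivity fails in the lattice of subspaces of C^2. Take the rays
%   a = span{(1,0)},  b = span{(1,1)},  b' = span{(1,-1)}.
% Then b \vee b' = C^2, while a \wedge b = a \wedge b' = {0}, so
\[
  a \wedge (b \vee b') \;=\; a \wedge \mathbb{C}^2 \;=\; a
  \;\neq\;
  \{0\} \;=\; (a \wedge b) \vee (a \wedge b').
\]
```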
The distribution function associated with a classical gas at equilibrium is considered. We prove that apart from a factorisable multiplier, the distribution function is fully determined by the correlations among local momenta fluctuations. Using this result we discuss the conditions which enable idealised local observers, who are immersed in the gas and form a part of it, to determine the distribution 'from within'. This analysis sheds light on two views on thermodynamic equilibrium, the 'ergodic' and the 'thermodynamic limit' schools, and the relations between them. It also provides an outline for a new definition of equilibrium that is weaker than full ergodicity. Finally, we briefly discuss the possibility that the distribution can be determined by external observers.
We present a general method for obtaining all Bell inequalities for a given experimental setup. Although the algorithm runs slowly, we apply it to two cases. First, the Greenberger-Horne-Zeilinger setup with three observers each performing one of two possible measurements. Second, the case of two observers each performing one of three possible experiments. In both cases we obtain hundreds of inequalities. Since this is the set of all inequalities, the one that is maximally violated in a given quantum state must be among them. We demonstrate this fact with a few examples. We also note the deep connection between the inequalities and classical logic, and their violation with quantum logic.
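The brute-force idea behind such enumerations can be shown in miniature (our illustration, not the paper's algorithm): the local deterministic strategies are the vertices of the classical correlation polytope, and every Bell inequality is a face of that polytope. For two observers with two ±1-valued measurements each:

```python
from itertools import product

# Enumerate all local deterministic assignments ("vertices") for two
# observers, two +/-1-valued measurements each, and check that the CHSH
# expression stays within [-2, 2] on all of them, i.e. that CHSH is a
# valid inequality (in fact a facet) of the classical polytope.

vertices = []
for a1, a2, b1, b2 in product([-1, 1], repeat=4):
    # correlation vector (E11, E12, E21, E22) of a deterministic strategy
    vertices.append((a1 * b1, a1 * b2, a2 * b1, a2 * b2))

chsh_values = [e11 + e12 + e21 - e22 for (e11, e12, e21, e22) in vertices]
print(max(chsh_values), min(chsh_values))   # 2 and -2: the CHSH bounds
```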
The intuition guiding the definition of computation has shifted over time, a process that is reflected in the changing formulations of the Church-Turing thesis. The theory of computation began with logic and gradually moved to the capacity of finite automata. Consequently, modern computer models rely on general physical principles, with quantum computers representing the extreme case. The paper discusses this development, and the challenges to the Church-Turing thesis in its physical form, in particular Kieu's quantum computer and relativistic hyper-computation. Finally, the robustness of the boundary between polynomial and exponential time complexity is considered in connection with quantum computers and quantum information theory.
A generally covariant theory, written in the spirit of Bohm's theory of quantum potentials, which applies to spinless, non-interacting, gravitating systems, is formulated. In this theory the quantum state ψ is coupled to the metric tensor g, and the effect of the "quantum potential" is absorbed in the geometry. At the same time, ψ satisfies a covariant wave equation with respect to the very same g. This provides sufficient constraints to derive 11 coupled equations in the 11 unknowns: ψ and the components of the metric tensor g_μν. The states of stable localized particles are identified, and vacuum-state solutions for both the Euclidean and the Lorentzian case are explicitly presented.
A classical gas at equilibrium satisfies the locality condition if the correlations between local fluctuations at a pair of remote small regions diminish in the thermodynamic limit. The gas satisfies a strong locality condition if the local fluctuations at any number of remote locations have no (pair, triple, quadruple, ...) correlations among them in the thermodynamic limit. We prove that locality is equivalent to a certain factorizability condition on the distribution function. The analogous quantum condition fails in the case of a free Bose gas. Next we prove that strong locality is equivalent to the total factorizability of the distribution function, and thus (given Liouville's theorem) to the Maxwell-Boltzmann distribution for an ideal gas.
Consider the set Q of quantum correlation vectors for two observers, each with two possible binary measurements. Quadric (hyperbolic) inequalities which are satisfied by every q ∈ Q are proved, and equality holds on a two-dimensional manifold consisting of the local boxes, and all...
Kochen and Specker’s theorem can be seen as a consequence of Gleason’s theorem and logical compactness. Similar compactness arguments lead to stronger results about finite sets of rays in Hilbert space, which we also prove by a direct construction. Finally, we demonstrate that Gleason’s theorem itself has a constructive proof, based on a generic, finite, effectively generated set of rays, on which every quantum state can be approximated.
Kuhn's influential book, The Structure of Scientific Revolutions, is often viewed as a revolt against empiricist philosophy of science. However, Friedman has reminded us lately that the book was commissioned by logical positivists, who were delighted with the result. In fact, the book was part of the International Encyclopedia of Unified Science initiated by members of the Vienna Circle, whose first volumes were published in 1938. The project aimed at providing a systematic positivist perspective on all the sciences, from logic and mathematics through linguistics and on to psychology and sociology. The publication of Structure as a volume of the encyclopedia was greeted enthusiastically by the editor, Carnap, as can be learned from the letters he wrote to Kuhn. There are several reasons for this reaction. First, as noted by Friedman, there is a resemblance between Kuhn's notion of changing paradigms and Carnap's philosophical ideas. Logical empiricism is "logical" because of its central tenet that scientific knowledge is the organization of facts within a conceptual structure; the existence of an appropriate conceptual structure is a precondition for the very possibility of scientific inquiry. Unlike Kant, however, the logical empiricists believed that the conceptual...
Contemporary versions of Bell's argument against local hidden variable (LHV) theories are based on the Clauser, Horne, Shimony and Holt (CHSH) inequality and various attempts to generalize it. The amount of violation of these inequalities cannot exceed the bound set by the Grothendieck constants. However, if we go back to the original derivation by Bell, and use the perfect anticorrelation embodied in the singlet spin state, we can go beyond these bounds. In this paper we derive two-particle Bell inequalities for traceless two-outcome observables, whose violation in the singlet spin state goes beyond the Grothendieck constant in both the two- and three-dimensional cases. Moreover, creating a higher dimensional analog of perfect correlations, and applying a recent result of Alon and his associates (Invent. Math. 163, 499 (2006)), we prove that there are two-particle Bell inequalities for traceless two-outcome observables whose violation increases to infinity as the dimension and number of measurements grow. Technically these results are possible because perfect correlations (or anticorrelations) allow us to transport the indices of the inequality from the edges of a bipartite graph to those of the complete graph. Finally, it is shown how to apply these results to mixed Werner states, provided that the noise does not exceed 20%.
More specifically, one notices that [X_1 − X_2, P_1 + P_2] = 0, where X_1, X_2 are the position operators for the first and second particles respectively, and P_1, P_2 their momentum operators. This means that, in principle, one can prepare the pair of particles with simultaneously known values of X_1 − X_2 and P_1 + P_2. Then knowledge of the value of P_2 allows one to infer the value of P_1. (However, performing the experiment with these continuous variables is technically impossible, and it remains a thought experiment.)
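The commutation claim is a one-line check (added here for completeness), using the canonical relations [X_i, P_j] = iℏδ_ij and the fact that operators of different particles commute:

```latex
\[
  [\,X_1 - X_2,\; P_1 + P_2\,]
  \;=\; [X_1, P_1] - [X_2, P_2]
  \;=\; i\hbar - i\hbar \;=\; 0 .
\]
```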
In a fundamental paper [Phys. Rev. Lett. 78, 325 (1997)] Grover showed how a quantum computer can find a single marked object in a database of size N by using only O(√N) queries of the oracle that identifies the object. His result was generalized to the case of finding one object in a subset of marked elements. We consider the following computational problem: a subset of marked elements is given whose number of elements is either M or K, M < K, and our task is to determine which is the case. We show how to solve this problem with a high probability of success using only iterations of Grover's basic step (and no other algorithm). Let m be the required number of iterations; we prove that under certain restrictions on the sizes of M and K the estimation...
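A minimal simulation of Grover's basic step (our sketch; the paper's problem of distinguishing |marked| = M from |marked| = K builds on exactly this iteration): with M marked items out of N, roughly (π/4)√(N/M) iterations concentrate the amplitude on the marked set.

```python
import numpy as np

# Grover's basic step: oracle sign-flip on the marked amplitudes,
# followed by inversion about the mean. N, marked are example choices.

N, marked = 256, {3, 17, 42, 100}              # M = 4 marked items
M = len(marked)

psi = np.full(N, 1 / np.sqrt(N))               # uniform superposition
iterations = int(round(np.pi / 4 * np.sqrt(N / M)))

for _ in range(iterations):
    for i in marked:                           # oracle: flip marked signs
        psi[i] = -psi[i]
    psi = 2 * psi.mean() - psi                 # diffusion: invert about mean

p_marked = sum(psi[i] ** 2 for i in marked)
print(iterations, p_marked)                    # 6 iterations, p close to 1
```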
Let A_1, ..., A_n be n events in a probability space, and suppose that we have only partial information about the distribution: the probabilities of the events themselves, and of their pair intersections. With this partial information we cannot, usually, determine the probability of an event B in the algebra generated by the A_i's, but we can obtain lower and upper bounds. This is done by a linear program related to the correlation polytope c(n), a structure introduced in [3], [4]. In the first part of the paper I demonstrate how laws of large numbers (for sequences of events which are not necessarily independent) can be proved, using only the duality theorem of linear programming. These include the weak law of large numbers (necessary and sufficient condition) and various sufficient conditions for strong laws. The connection between these laws and the facet structure of the correlation polytope is established. In the second part of the paper I consider a more general case. Assume that our information consists of the values of the probabilities of all intersections of the A_i's up to size k, k < n. The techniques of linear programming lead naturally to an application of the theory of polynomial approximation in estimating the size of various events. In particular, I prove an approximate version of the central limit theorem.
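The linear-programming method can be sketched in a toy case (our code; the probability values are hypothetical inputs): with n = 3 events, a distribution over the 2^3 atoms is constrained by the given single and pair probabilities, and P(A_1 ∩ A_2 ∩ A_3) is bounded by minimizing and maximizing over the feasible set:

```python
import numpy as np
from itertools import product
from scipy.optimize import linprog

# Toy version of the LP bounds: given p_i = P(A_i) and p_ij = P(A_i & A_j)
# for n = 3 events, bound P(A1 & A2 & A3) over all consistent distributions
# on the 2^3 = 8 atoms.

atoms = list(product([0, 1], repeat=3))            # truth assignments to A1..A3
p = {1: 0.5, 2: 0.5, 3: 0.5}                       # hypothetical singles
p_pair = {(1, 2): 0.3, (1, 3): 0.3, (2, 3): 0.3}   # hypothetical pairs

A_eq, b_eq = [[1] * len(atoms)], [1]               # normalization
for i in range(3):
    A_eq.append([a[i] for a in atoms]); b_eq.append(p[i + 1])
for (i, j), v in p_pair.items():
    A_eq.append([a[i - 1] * a[j - 1] for a in atoms]); b_eq.append(v)

c = np.array([a[0] * a[1] * a[2] for a in atoms], dtype=float)  # P(A1&A2&A3)
lo = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * len(atoms))
hi = linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * len(atoms))
print(lo.fun, -hi.fun)   # lower and upper bounds (0.1 and 0.3 for these data)
```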
Why don't we see large macroscopic objects in entangled states? Even if the particles composing the object were all entangled and insulated from the environment, we shall still find it almost always impossible to observe the superposition. The reason is that as the number of particles n grows, we need an ever more careful preparation, and an ever more carefully designed experiment, in order to recognize the entangled character of the state of the object. An observable W that distinguishes all the unentangled states from some entangled states is called a witness. We consider witnesses on n quantum bits (qbits), and use the following normalization: a witness W satisfies |tr(Wρ)| ≤ 1 for all separable states ρ, while ‖W‖ > 1, with the norm being the maximum among the absolute values of the eigenvalues of W. Although there are n-qbit witnesses whose norm is exponential in n, we conjecture that for a large majority of such witnesses ‖W‖ ≤ O((n log n)^(1/2)). We prove this conjecture for the family of extremal witnesses introduced by Werner and Wolf (Phys. Rev. A 64, 032112 (2001)). Assuming the conjecture is valid we argue that multiparticle entanglement can be detected only if a system has been carefully prepared in a very special state. Otherwise, multiparticle entanglement lies below the threshold of detection, even if it exists, and even if decoherence has been "turned off".
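For n = 2 the normalization can be seen concretely (our numerical sketch): the CHSH operator divided by its separable bound 2 is a witness of the Werner-Wolf type, with |tr(Wρ)| ≤ 1 on separable states while its operator norm is √2 > 1, attained on a Bell state.

```python
import numpy as np

# Two-qubit witness in the paper's normalization: W = CHSH/2 satisfies
# |tr(W rho)| <= 1 for separable rho (the classical CHSH bound), while
# ||W|| = sqrt(2) > 1, attained on the Bell state (|00> + |11>)/sqrt(2).

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

A1, A2 = Z, X                                   # observables, side 1
B1 = (Z + X) / np.sqrt(2)                       # observables, side 2
B2 = (Z - X) / np.sqrt(2)

W = (np.kron(A1, B1) + np.kron(A1, B2)
     + np.kron(A2, B1) - np.kron(A2, B2)) / 2   # CHSH / separable bound

norm = max(abs(np.linalg.eigvalsh(W)))          # operator norm
phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
print(norm, (phi.conj() @ W @ phi).real)        # sqrt(2), sqrt(2): detected
```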
Can the axioms of probability theory and the classical patterns of statistical inference ever be falsified by observation? Various possible answers to this question are examined in a set theoretical context and in relation to the findings of microphysics.
Introduction. Yemima Ben-Menahem & Itamar Pitowsky - 2001 - Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics 32 (4): 503-510.
A local "resolution" of the Einstein-Podolsky-Rosen Paradox by way of a mechanical analogue (roul ette) is presented together with some notes regarding the consequences of such models for the foundations of mathematics and the theory of probability.
Kochen and Specker's theorem can be seen as a consequence of Gleason's theorem and logical compactness. Similar compactness arguments lead to stronger results about finite sets of rays in Hilbert space, which we also prove by a direct construction. Finally, we demonstrate that Gleason's theorem itself has a constructive proof, based on a generic, finite, effectively generated set of rays, on which every quantum state can be approximated.
Quantum theory has played a significant role in modern philosophy both as a source of metaphysical ideas and as an important example of a 'scientific revolution'. In spite of the sixty or so years that have elapsed since its invention, a long-lasting controversy concerning the interpretation and meaning of quantum theory prevails. Almost all authors, however, seem to agree on one major point, namely, that there could be no interpretation of this theory which is both realistic and local. The purpose of this thesis is to demonstrate that this premiss is false and that a realistic, local and deterministic interpretation of quantum theory does exist, provided that we extend the classical concept of probability. In order to establish this, a 'quasi-classical' probability theory is developed, based on some non-Lebesgue-measurable 'events', which is then applied to account for spin statistics. Finally I note how this model reflects on the problems of physical realism, locality, the status of probability theory and the philosophical foundations of mathematics.
The existence of fields besides gravitation may provide us with a way to decide empirically whether spacetime is really a nonflat Riemannian manifold or a flat Minkowskian manifold that appears curved as a result of gravitational distortions. This idea is explained using a modification of Poincaré's famous 'diskworld'.