This book is devoted to a thorough analysis of the role that models play in the practice of physical theory. The authors, a mathematical physicist and a philosopher of science, appeal to the logicians’ notion of model theory as well as to the concepts of physicists.
Phase transitions are well-understood phenomena in thermodynamics (TD), but it turns out that they are mathematically impossible in finite SM systems. Hence, phase transitions are truly emergent properties. They reappear in the thermodynamic limit (TL), i.e., in infinite systems. However, most, if not all, systems in which they occur are finite, so whence comes the justification for taking TL? The problem is then traced back to the TD characterization of phase transitions, and it turns out that the characterization is the result of serious idealizations which under suitable circumstances approximate actual conditions.
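A minimal sketch of the standard reasoning behind the impossibility claim (Yang-Lee style, stated here only schematically and not as the paper's own derivation): for a finite system of N degrees of freedom,
\[
Z_N(\beta)=\sum_{\text{configurations}} e^{-\beta H_N} > 0, \qquad f_N(\beta) = -\frac{1}{\beta N}\,\ln Z_N(\beta),
\]
where Z_N is a finite sum of strictly positive analytic terms, so the free energy density f_N is analytic in \beta. A phase transition, thermodynamically characterized by a non-analyticity of the free energy, can therefore occur only in the limiting function f(\beta) = \lim_{N\to\infty} f_N(\beta).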
Two alternative accounts of quantum spontaneous symmetry breaking (SSB) are compared and one of them, the decompositional account in the algebraic approach, is argued to be superior for understanding quantum SSB. Two exactly solvable models are given as applications of our account -- the Weiss-Heisenberg model for ferromagnetism and the BCS model for superconductivity. Finally, the decompositional account is shown to be more conducive to the causal explanation of quantum SSB.
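For orientation, the self-consistency condition of the simplest Weiss mean-field treatment of ferromagnetism (a textbook result quoted here for illustration, not a summary of the paper's own derivation): for Ising-like spins \sigma = \pm 1 on a lattice of coordination number z,
\[
m = \tanh\big(\beta\,(zJm + h)\big), \qquad k_B T_c = zJ .
\]
For h = 0 and T < T_c this equation acquires two nonzero solutions \pm m_0; the system settles into one of them, so the spin-flip symmetry of the Hamiltonian is broken by the state, which is the pattern the decompositional account is meant to capture.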
Traditional theories construe approximate truth or truthlikeness as a measure of closeness to facts, singular facts, and idealization as an act of either assuming zero or otherwise very small differences from facts or imagining ideal conditions under which scientific laws are either approximately true or will be so when the conditions are relaxed. I first explain the serious but not insurmountable difficulties for the theories of approximation, and then argue that more serious and perhaps insurmountable difficulties for the theory of idealization force us to sever its close tie to approximation. This leads to an appreciation of lawlikeness as a measure of closeness to laws, which I argue is the real measure of idealization whose main purpose is to carve nature at its joints.
I first give a brief summary of a critique of the traditional theories of approximation and idealization; and after identifying one of the major roles of idealization as detaching component processes or systems from their joints, a detailed analysis is given of idealized laws – which are discoverable and/or applicable – in such processes and systems (i.e., idealized model systems). Then, I argue that dispositional properties should be regarded as admissible properties for laws and that such an inclusion supplies the much needed connection between idealized models and the laws they 'produce' or 'accommodate'. And I then argue that idealized law-statements so produced or accommodated in the models may be either true simpliciter or true approximately, but the latter is not because of the idealizations involved. I argue that the kind of limiting-case idealizations that produce approximate truth is best regarded as approximation; and finally I compare my theory with some existing theories of laws of nature. 'We seem to trace [in King Lear] ... the tendency of imagination to analyse and abstract, to decompose human nature into its constituent factors, and then to construct beings in whom one or more of these factors is absent or atrophied or only incipient.'
This article argues for an anti-deflationist view of scientific representation. Our discussion begins with an analysis of the recent Callender–Cohen deflationary view on scientific representation. We then argue that there are at least two radically different ways in which a thing can be represented: one is purely symbolic, and therefore conventional, and the other is epistemic. The failure to recognize that scientific models are epistemic vehicles rather than symbolic ones has led to the mistaken view that whatever distinguishes scientific models from other representational vehicles must merely be a matter of pragmatics. It is then argued that even though epistemic vehicles also contain conventional elements, they do their job of demonstration in spite of such elements.
In this paper, a criticism of the traditional theories of approximation and idealization is given as a summary of previous works. After identifying the real purpose and measure of idealization in the practice of science, it is argued that the best way to characterize idealization is not to formulate a logical model – something analogous to Hempel's D-N model for explanation – but to study its different guises in the praxis of science. A case study of it is then made in thermostatistical physics. After a brief sketch of the theories for phase transitions and critical phenomena, I examine the various idealizations that go into the making of models at three different levels. The intended result is to induce a deeper appreciation of the complexity and fruitfulness of idealization in the praxis of model-building, not to give an abstract theory of it.
This paper examines the justifications for using infinite systems to 'recover' thermodynamic properties, such as phase transitions (PT), critical phenomena (CP), and irreversibility, from the micro-structure of matter in bulk. Section 2 is a summary of such rigorous methods as taking the thermodynamic limit (TL) to recover PT and using the renormalization (semi-)group approach (RG) to explain the universality of critical exponents. Section 3 examines various possible justifications for taking TL on physically finite systems. Section 4 discusses the legitimacy of applying TL to the problem of irreversibility and assesses the repercussions for its legitimacy on its home turf.
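As a concrete, hypothetical illustration of what 'taking TL' amounts to in practice, the sketch below computes the exact free energy per spin of a finite one-dimensional Ising chain from its transfer matrix and compares it with the infinite-volume value; the model, parameters, and function names are illustrative choices, not the paper's.

```python
import numpy as np

def free_energy_per_spin(N, beta=1.0, J=1.0, h=0.0):
    """Exact free energy per spin of a 1D Ising chain of N spins
    (periodic boundary conditions), computed from its 2x2 transfer matrix."""
    T = np.array([[np.exp(beta * (J + h)), np.exp(-beta * J)],
                  [np.exp(-beta * J),      np.exp(beta * (J - h))]])
    lam_minus, lam_plus = np.sort(np.linalg.eigvalsh(T))  # T is symmetric
    # ln Z = N ln(lam_plus) + ln(1 + (lam_minus/lam_plus)**N), written to avoid overflow
    logZ = N * np.log(lam_plus) + np.log1p((lam_minus / lam_plus) ** N)
    return -logZ / (beta * N)

beta, J = 1.0, 1.0
f_limit = -np.log(2.0 * np.cosh(beta * J)) / beta  # infinite-volume value at h = 0
for N in (4, 16, 64, 256, 1024):
    print(N, free_energy_per_spin(N, beta, J), "limit:", f_limit)
```

For every finite N the result is a smooth function of the temperature; any non-analyticity, and hence any phase transition in the thermodynamic sense, can only emerge in the N → ∞ limit, which is exactly the move whose justification Sections 2 and 3 examine.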
The objective of this paper is to show that, instead of quantum probabilities, wave packets are physically real. First, Cartwright's recent argument for the reality of quantum probabilities is criticized. Then, the notion of ‘physically real’ is precisely defined and the difference between wave functions and quantum probabilities clarified. With these preparations in place, some strong reasons are discussed for considering the wave packet to be physically real. Finding the reasons inconclusive, I explain how the Aharonov-Bohm effect delivers the final punch. I conclude that wave packets are the quantum objects that underlie the indeterministic quantum processes and have the propensity of displaying probabilistic (or indeterministic) behavior.
Physics seems to tell us that there are four fundamental force-fields in nature: the gravitational, the electromagnetic, the weak, and the strong interactions. But it also seems to tell us that gravity cannot possibly be a force-field in the same sense as the other three are. And yet the search for a grand unification of all four force-fields is today one of the hottest pursuits. Is this the result of a simple confusion? This article aims at clarifying this situation by (i) reviewing the gauge-field programme and its conception of unification of force-fields, (ii) examining the various attempts at a gauge theory of gravity, and (iii) articulating the nature of "gauging" and using it to explain the difference between gravity and the other force-fields.
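As a reminder of what 'gauging' standardly amounts to (a textbook illustration, not the article's own analysis): promoting a global phase symmetry to a local one forces the introduction of a connection field that mediates the interaction,
\[
\psi \to e^{iq\alpha(x)}\psi, \qquad \partial_\mu \to D_\mu = \partial_\mu - iqA_\mu, \qquad A_\mu \to A_\mu + \partial_\mu\alpha ,
\]
so that D_\mu\psi transforms exactly like \psi. The electromagnetic potential A_\mu enters as the price of making the symmetry local; the article's question is, roughly, whether and in what sense the gravitational field can be obtained by an analogous move.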
This paper aims at answering the simple question, “What is spontaneous symmetry breaking (SSB) in classical systems?” I attempt to do this by analyzing from a philosophical perspective a simple classical model which exhibits some of the main features of SSB. Related questions include: What does it mean to say that a symmetry is spontaneously broken? Is it broken without any causes, or is the symmetry not broken but merely hidden? Is the principle, “no asymmetry in, no asymmetry out,” violated by SSB? What really distinguishes SSB from the usual types of symmetry breaking?
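A textbook example (not necessarily the model analyzed in the paper) displays the structure at issue. Consider a particle in the double-well potential
\[
V(x) = -\tfrac{1}{2}\,\mu x^{2} + \tfrac{1}{4}\,\lambda x^{4}, \qquad \mu,\lambda > 0 .
\]
V is invariant under x \to -x, but its minima x_\pm = \pm\sqrt{\mu/\lambda} are not; the symmetry maps one ground state onto the other instead of leaving each fixed. Any state the system actually settles into thus breaks a symmetry that the law itself never loses, which is what makes the questions listed above pressing.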
In this paper, we argue that symbols are conventional vehicles whose chief function is denotation, while models are epistemic vehicles whose chief function is to show what their targets are like in the relevant aspects. And we explain why this is incompatible with the deflationary view on scientific modeling. Although the same object may serve both functions, the two vehicles are conceptually distinct and most models employ both elements. With the clarification of this point we offer an alternative account to the deflationary view – the Hybrid Account; and we defend our account in contrast with deflationism.
Understanding and predicting extreme turning points in the financial market, such as financial bubbles and crashes, has attracted much attention in recent years. Experimental observations of the superexponential increase of prices before crashes indicate the predictability of financial extremes. In this study, we aim to forecast extreme events in the stock market using 19-year time-series data of the financial market, covering 12 worldwide stock indices. In addition, we propose an extremes indicator based on a network constructed from the price time series using a weighted visibility graph algorithm. Experimental results on the 12 stock indices show that the proposed indicators can predict financial extremes very well.
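The abstract does not spell out the graph construction, so the sketch below uses the standard natural visibility graph of Lacasa et al. (2008) as a plausible stand-in; the toy series, parameter choices, and the degree-based summary are illustrative assumptions, not the paper's indicator.

```python
import numpy as np

def visibility_graph(y):
    """Natural visibility graph of a time series: nodes are time points;
    i and j are linked when the straight segment joining (i, y[i]) and
    (j, y[j]) passes above every intermediate point (Lacasa et al. 2008)."""
    n = len(y)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            visible = all(
                y[k] < y[j] + (y[i] - y[j]) * (j - k) / (j - i)
                for k in range(i + 1, j)
            )
            if visible:
                edges.append((i, j))
    return edges

# Toy price-like series with a run-up toward the end of the window.
rng = np.random.default_rng(0)
t = np.arange(300)
prices = np.cumsum(rng.normal(0, 1, t.size)) + 0.001 * t**2
edges = visibility_graph(prices)
degree = np.bincount(np.asarray(edges).ravel(), minlength=prices.size)
print("highest-degree node:", int(degree.argmax()), "degree:", int(degree.max()))
```

Degree, clustering, and similar network measures computed on such a graph are the kind of quantities from which an extremes indicator could, in principle, be built.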
The paper discusses the recent literature on abstraction/idealization in connection with the “paradox of infinite idealization.” We use the case of taking the thermodynamic limit in dealing with the phenomena of phase transitions and critical phenomena to broach the subject. We then argue that the method of infinite idealization is widely used in the practice of science, and not all uses of the method are the same. We then confront the compatibility problem of infinite idealization with scientific realism. We propose and defend a contextualist position for realism and argue that the cases for infinite idealization appear to be fully compatible with contextual realism.
Two types of idealization in theory construction are distinguished, and the distinction is used to give a critique of Ron Laymon's account of confirming idealized theories and his argument for scientific realism.
This essay is a philosophical evaluation of some of the findings of Wald and Penrose in which they claim to have supported an arrow (or the irreversibility) of time in quantum gravity. First, the notion of lawlike irreversibility (or anisotropy) of time is spelled out, then the general situation in quantum mechanics is briefly discussed. Finally, the findings in quantum gravity are evaluated against such a background. My conclusion is that the arrow of time found in quantum gravity is at best de facto (nonlawlike).
This essay explores the nature of spontaneous symmetry breaking (SSB) in connection with a cluster of interrelated concepts such as Curie's symmetry principle, ergodicity, and chance and stability in classical systems. First, a clarification of the two existing senses of SSB is provided and an argument is developed for a proposal for SSB in which not only the possibilities but also the actual breakings are referred to. Second, a detailed analysis is given of classical SSB that answers the questions: (i) how we are justified in regarding it as a matter of chance, and (ii) why the breakings in it are equally probable. The answer provides some support for the applicability of ergodicity in special systems (such as ours).
Over forty years after the foundations of the special theory of relativity had been securely laid, a heated debate, beginning in 1965, about the correct formulation of relativistic thermodynamics raged in the physics literature. Prior to 1965, relativistic thermodynamics was considered one of the most secure relativistic theories and one of the simplest and most elegant examples of relativization in physics. It is, as its name apparently suggests, the result of the application of the special theory of relativity to thermodynamics. The basic assumption is that the first and second laws of thermodynamics are Lorentz-invariant, and, as a result, a set of Lorentz transformations is derived for thermodynamic magnitudes, such as heat and temperature.
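For orientation, the rival transformation laws at the center of the 1965 debate (standard history, quoted here for context rather than derived from the abstract): writing \gamma = 1/\sqrt{1 - v^2/c^2},
\[
\text{Planck-Einstein (1907):}\; T' = T/\gamma,\ \delta Q' = \delta Q/\gamma; \qquad \text{Ott (1963):}\; T' = \gamma T,\ \delta Q' = \gamma\,\delta Q,
\]
with a third camp (e.g., Landsberg) holding that temperature should not transform at all, T' = T. Which, if any, of these deserves to be called the relativistic temperature is precisely what the debate failed to settle.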
The paper, as Part I of a two-part series, argues for a hybrid formulation of the semantic view of scientific theories. For stage-setting, it first reviews the elements of the model theory in mathematical logic (on whose foundation the semantic view rests), the syntactic and the semantic view, and the different notions of models used in the practice of science. The paper then argues for an integration of the notions into the semantic view, and thereby offers a hybrid semantic view, which at once secures the view's logical foundations and enhances its applicability. The dilemma of either losing touch with the practice of science or yielding up the benefits of the model theory is thus avoided.
Relativistic thermodynamics (RTD) of equilibrium processes has remained a strange chapter in the history of modern physics. It was established by Planck in 1908 as a simple application of Einstein's special theory of relativity. Einstein himself made substantial contributions, and its final product remained officially unchallenged until 1965. In 1952, however, at the end of his career, Einstein challenged the theory in his correspondence with von Laue. Many of his unpublished suggestions anticipated the major works in the debate of the 1960s. The debate on the theory of RTD started in 1965 and lasted over a decade. In the end, no satisfactory solution was found even though every possible alternative seemed to have been entertained. Most participants contended that the choice among the alternatives was a matter of convention, depending on how one defines the basic quantities in RTD. This dissertation provides a critical study of the history of RTD and a philosophical investigation of its foundations. The first half is a critical study of the origin and the early development of RTD, culminating in a detailed and, to the best of my knowledge, the first thorough discussion of the Einstein-von Laue correspondence. In the second half, after the complexity of the problem is described in chapter 5, a solution is found for the whole controversy based on Anderson's sharp insights on the different meanings of the relativity principles. Unless one can prove that pure thermodynamic quantities are geometrical objects, there is no need to look for the Lorentz transformations for those quantities; and they do not qualify as geometrical objects, for they can only be defined in the rest frame of a system. This study also shows how profound the relativity principles are, how difficult it is to grasp their real meaning, and how physicists were led astray by paying too much attention to the formalism of a theory but too little to the soundness of the basic assumptions from which the theory was derived.
This paper defends an approach to modeling and models in science that is against model fictionalism of a recent stripe (the “new fictionalism” that takes models to be abstract entities that are analogous to works of fiction). It further argues that there is a version of fictionalism on models to which my approach is neutral and which only makes sense if one adopts a special sort of antirealism (e.g. constructive empiricism). Otherwise, my approach strongly suggests that one stays away from fictionalism and embraces realism directly.
I argue that categorical realism, contrary to what most believe today, holds for quantum (and indeed for all) objects and substances. The main argument consists of two steps: (i) the recent experimental verification of the Aharonov-Bohm (AB) effect gives strong empirical evidence for taking quantum potentials as physically real (or substantival), which suggests a change of the data upon which any viable interpretation of quantum theory must rely, and (ii) quantum potentials may be consistently taken as the categorical properties of quantum objects so that categorical realism can be restored.
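For reference, the effect step (i) leans on: in the Aharonov-Bohm setup the interference pattern of charged particles shifts by a phase determined by the potential along paths on which the field strength vanishes,
\[
\Delta\varphi = \frac{q}{\hbar}\oint \mathbf{A}\cdot d\mathbf{l} = \frac{q\,\Phi}{\hbar},
\]
where \Phi is the magnetic flux enclosed by the paths even though \mathbf{B} = 0 everywhere the particles travel. This is the sense in which the potential, rather than the local field, appears to carry the physical burden.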
The concepts in the title refer to properties of physical theories and this paper investigates their nature and relations. The first three concepts, especially gauge invariance and indeterminism, have been widely discussed in connection to spacetime theories and the hole argument. Since the gauge invariance principle is at the crux of the issue, this paper aims at clarifying the nature of gauge invariance. I first explore the following chain of relations: gauge invariance $\Rightarrow$ the conservation laws $\Rightarrow$ the Cauchy problem $\Rightarrow$ indeterminism. Then I discuss gauge invariance in light of our understanding of the above relations and the possibility of spontaneous symmetry breaking.
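A minimal illustration of the first link in the chain, using electromagnetism (textbook material, not the paper's general argument): for
\[
S = \int d^4x\left(-\tfrac{1}{4}F_{\mu\nu}F^{\mu\nu} - j^{\mu}A_{\mu}\right), \qquad A_\mu \to A_\mu + \partial_\mu\alpha ,
\]
the F^2 term is gauge invariant, while the coupling term changes by \int d^4x\,\alpha\,\partial_\mu j^{\mu} after an integration by parts; invariance for arbitrary \alpha therefore requires \partial_\mu j^{\mu} = 0, i.e., charge conservation. Analogous reasoning, run through the field equations, is what connects gauge freedom to the underdetermination behind the Cauchy-problem and indeterminism worries.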
In this paper I argue against a deflationist view that as representational vehicles symbols and models do their jobs in essentially the same way. I argue that symbols are conventional vehicles whose chief function is denotation while models are epistemic vehicles whose chief function is showing what their targets are like in the relevant aspects. It is further pointed out that models usually do not rely on similarity or some such relations to relate to their targets. For that referential relation they rely instead on symbols (names and labels) given to them and their parts. And a Goodmanian view on pictures of fictional characters reveals the distinction between symbolic and model representations.
In this paper, I begin with a discussion of Giere’s recent work arguing against taking models as works of fiction. I then move on to explore a spectrum of scientific models that goes from the obviously fictional to the not so obviously fictional. And then I discuss the modeling of the unobservable and make a case for the idea that despite difficulties of defining them, unobservable systems are modeled in a fundamentally different way than observable systems. While idealization and approximation are key to the making of models for observable systems, they are inoperable, or at least not straightforwardly operable, for models of the unobservable. And because of this point, which has so far been neglected in the literature, I speculate that fictionalism may have a better chance with models for the unobservable.
The present essay aims at broadening the recent discussion on the issue of holism vs. particularism in quantum physics. I begin with a clarification of the relation between the holism/particularism debate and the discussion of the supervenience relation. I then defend particularism in physics (including quantum physics) by considering a new classification of properties of physical systems. With such a classification, the results of the Bell theorem are shown to violate spatial separability but not physical particularism.
This article develops an approach to modelling and models in science—the hybrid view—that is against model fictionalism of a recent stripe. It further argues that there is a version of fictionalism about models to which my approach is neutral and which makes sense only if one adopts a special sort of antirealism. Otherwise, my approach strongly suggests that one stay away from fictionalism and embrace realism directly.
Our discussion in the first five sections shows that little new can be said about compatibilism, that van Inwagen's argument for incompatibilism still stands, and that the view of free agency for a libertarian has little chance unless she believes that agency contains elements that are not within the natural order. Borrowing a suggestion from Russell, we expanded the Nozick-Kane model of libertarian free agency and connected it to the Wignerian interpretation of quantum measurement. As such, free decisions and choices may well violate the Born rule of probability distribution, and yet it is shown how such violations are unlikely to be detected in experiments. This model is probably the only model in which Loewer's van Inwagen-style argument for the incompatibility between free agency and quantum indeterminism does not apply, and it is a model in which free agency is not only compatible but necessary: it is compatible with indeterminism and it is necessary for the determinateness of any measurement outcomes.
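For reference, the rule whose violation is at issue (standard quantum mechanics, not a claim special to the paper): for a system in state |\psi\rangle measured for an observable with eigenstates |a_i\rangle,
\[
\Pr(a_i) = \big|\langle a_i \mid \psi\rangle\big|^{2}.
\]
The model sketched above allows free decisions to deviate from this distribution while holding that such deviations would be unlikely to be detected in experiments.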
This paper, part I of a two-part project, aims at answering the simple question 'what is spontaneous symmetry breaking?' by analyzing from a philosophical perspective a simple classical model. Related questions include: what does it mean to break a symmetry spontaneously? Is the breaking causal, or is the symmetry not broken but merely hidden? Is the meta-principle, 'no asymmetry in, no asymmetry out,' violated? And what is the role in this of random perturbations (or fluctuations)?
In this essay, I explore a metaphor in geometry for the debate between the unity and the disunity of science, namely, the possibility of putting a global coordinate system (or a chart) on a manifold. I explain why the former is a good metaphor that shows what it means (and takes in principle) for science to be unified. I then go through some of the existing literature on the unity/disunity debate and show how the metaphor sheds light on some of the views and arguments.
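A concrete instance of the geometric fact driving the metaphor: the two-sphere admits no single global chart. Stereographic projection from the north pole,
\[
(x, y, z)\ \longmapsto\ \left(\frac{x}{1-z},\ \frac{y}{1-z}\right),
\]
covers S^2 minus the north pole only, and a second chart (say, projection from the south pole) is needed for the rest, with the two smoothly stitched together on their overlap. On the metaphor, a unified science is one for which a single 'chart' suffices; disunity corresponds to an atlas of partially overlapping local charts.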
This paper aims at answering the simple question 'what is spontaneous symmetry breaking (SSB)?' by analyzing from a philosophical perspective a simple classical model which exhibits all the requisite properties of SSB. Related questions include: what does it mean to say that a symmetry is spontaneously broken? Is it broken without any cause, or is the symmetry not broken but merely hidden? Is the meta-principle, 'no asymmetry in, no asymmetry out,' violated by SSB? And what is the role in this of random perturbations (or fluctuations)?
This paper, part II of a two-part project, continues to explore the meaning of spontaneous symmetry breaking (SSB) by applying and expanding the general notion we obtained in part I to some more complex and, from the physics point of view, more important models (in condensed matter physics and in quantum field theories).
This paper contains four variations on Duhem's theme about the contrast between the abstract French mind and the concrete British mind. The first variation brings out the real contrast between two types of methods and their results: the A method or models and the C method or models. The second variation gives a critical discussion of the Callender-Cohen deflationary construal of scientific representation. The third variation discusses Russell's structuralism in connection to the theme. And the fourth variation critically discusses the relationship between models and fiction in connection to the distinction between the A-models and the C-models. A coda maps out, without sufficiently detailed arguments, the author's view on the nature of the C-models and why they, and only they, can be viewed as fully fictional.
The paper first raises the problem concerning the confirmation of idealized theories in science and its relationship to scientific realism. Then a solution by Laymon is discussed. It is then argued that two different types of idealization need to be distinguished and that only one of them produces false theories. But then, such “theories” are really theory-maps, which point to theories at the end of improvements. Finally, Laymon’s account is modified in accordance with the above insight.
In this essay I argue against the idea that modeling in science is analogous to fiction making in literary works by pointing out that a typical move in the former, which is widely acknowledged in the philosophical literature as a signal for fictionalization, is never present in works of fiction. I further argue that the reason for such a disparity is profound and profoundly against conceiving modeling as fictionalization. I then explain the difference between the hypothetical and the fictional, and argue that modeling in science belongs to the former.
In this essay, I explore a metaphor in geometry that helps us better understand the debate over the unity and the disunity of science, namely, the possibility of putting a global (Cartesian) coordinate system on a manifold. I explain the reasons why this is a good metaphor capable of showing what unification means for science. I then go through some of the existing literature on the unity/disunity debate and show how the metaphor sheds light on some of the views and arguments.
In this paper I explore the nature of spontaneous symmetry breaking in connection with a cluster of interrelated concepts such as Curie's symmetry principle, chance, and stability.
This paper is the second of a two-part series on models and theories, the first of which appeared in International Studies in the Philosophy of Science, Vol. 11, No. 2, 1997. It further explores some of the themes of the first paper and examines applications, including: the relations between “similarity” and “isomorphism”, and between “model” and “interpretation”, and the notion of structural explanation.