Models as Mediators discusses the ways in which models function in modern science, particularly in the fields of physics and economics. Models play a variety of roles in the sciences: they are used in the development, exploration and application of theories and in measurement methods. They also provide instruments for using scientific concepts and principles to intervene in the world. The editors provide a framework which covers the construction and function of scientific models, and explore the ways in which they enable us to learn about both theories and the world. The contributors to the volume offer their own individual theoretical perspectives to cover a wide range of examples of modelling, from physics, economics and chemistry. These papers provide ideal case-study material for understanding both the concepts and typical elements of modelling, using analytical approaches from the philosophy and history of science.
The book examines issues related to the way modeling and simulation enable us to reconstruct aspects of the world we are investigating. It also investigates the processes by which we extract concrete knowledge from those reconstructions and how that knowledge is legitimated.
The paper presents an argument for treating certain types of computer simulation as having the same epistemic status as experimental measurement. While this may seem a rather counterintuitive view, it becomes less so when one looks carefully at the role that models play in experimental activity, particularly measurement. I begin by discussing how models function as “measuring instruments” and go on to examine the ways in which simulation can be said to constitute an experimental activity. By focussing on the connections between models and their various functions, simulation and experiment, one can begin to see similarities in the practices associated with each type of activity. Establishing the connections between simulation and particular types of modelling strategies and highlighting the ways in which those strategies are essential features of experimentation allows us to clarify the contexts in which we can legitimately call computer simulation a form of experimental measurement.
This article examines ontological/dynamical aspects of emergence, specifically the micro-macro relation in cases of universal behavior. I discuss superconductivity as an emergent phenomenon, showing why microphysical features such as Cooper pairing are not necessary for deriving characteristic properties such as infinite conductivity. I claim that the difficulties surrounding the thermodynamic limit in explaining phase transitions can be countered by showing how renormalization group techniques facilitate an understanding of the physics behind the mathematics, enabling us to clarify epistemic and ontological aspects of emergence. I close with a discussion of the impact of these issues on questions concerning natural kinds.
This book is about the methods used for unifying different scientific theories under one all-embracing theory. The process has characterized much of the history of science and is prominent in contemporary physics; the search for a 'theory of everything' involves the same attempt at unification. Margaret Morrison argues that, contrary to popular philosophical views, unification and explanation often have little to do with each other. The mechanisms that facilitate unification are not those that enable us to explain how or why phenomena behave as they do. A feature of this book is an account of many case studies of theory unification in nineteenth- and twentieth-century physics and of how evolution by natural selection and Mendelian genetics were unified into what we now term evolutionary genetics.
The paper examines philosophical issues that arise in contexts where one has many different models for treating the same system. I show why in some cases this appears relatively unproblematic (models of turbulence) while others represent genuine difficulties when attempting to interpret the information that models provide (nuclear models). What the examples show is that while complementary models needn’t be a hindrance to knowledge acquisition, the kind of inconsistency present in nuclear cases is, since it is indicative of a lack of genuine theoretical understanding. It is important to note that the differences in modeling do not result directly from the status of our knowledge of turbulent flows as opposed to nuclear dynamics—both face fundamental theoretical problems in the construction and application of models. However, as we shall see, the ‘problem context(s)’ in which the modeling takes place plays a decisive role in evaluating the epistemic merit of the models themselves. Moreover, the theoretical difficulties that give rise to inconsistent as opposed to complementary models (in the cases I discuss) impose epistemic and methodological burdens that cannot be overcome by invoking philosophical strategies like perspectivism, paraconsistency or partial structures.
Morrison and Morgan argue for a view of models as 'mediating instruments' whose role in scientific theorising goes beyond applying theory. Models are partially independent of both theories and the world. This autonomy makes possible a unified account of their role as instruments for exploring both theories and the world.
Although the recent emphasis on models in philosophy of science has been an important development, the consequence has been a shift away from more traditional notions of theory. Because the semantic view defines theories as families of models and because much of the literature on “scientific” modeling has emphasized various degrees of independence from theory, little attention has been paid to the role that theory has in articulating scientific knowledge. This paper is the beginning of what I hope will be a redress of the imbalance. I begin with a discussion of some of the difficulties faced by various formulations of the semantic view not only with respect to their account of models but also with their definition of a theory. From there I go on to articulate reasons why a notion of theory is necessary for capturing the structure of scientific knowledge and how one might go about formulating such a notion in terms of different levels of representation and explanation. The context for my discussion is the BCS account of superconductivity, a 'theory' that was, and still is, sometimes referred to as a 'model'. BCS provides a nice focus for the discussion because it illuminates various features of the theory/model relationship that seem to require a robust notion of theory that is not easily captured by the semantic account.
Despite the close connection between the central limit theorem and renormalization group (RG) methods, the latter should be considered fundamentally distinct from the kind of probabilistic framework associated with statistical mechanics, especially the notion of averaging. The mathematics of RG is grounded in dynamical systems theory rather than probability, which raises important issues with respect to the way RG generates explanations of physical phenomena. I explore these differences and show why RG methods should be considered not just calculational tools but the basis for a physical understanding of complex systems in terms of structural properties and relations.
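As background for readers unfamiliar with the connection the abstract alludes to (standard material, not text from the paper), the central limit theorem for i.i.d. variables $X_i$ with mean $\mu$ and finite variance $\sigma^2$ can be written alongside a schematic RG recursion; the symbols $K$, $\mathcal{R}_b$ and $b$ are generic illustrative notation for couplings, a coarse-graining map and its rescaling factor. The analogy turns on the Gaussian being the fixed point of the sum-and-rescale operation on distributions, just as $K^{*}$ is a fixed point of $\mathcal{R}_b$:

$$
\frac{1}{\sigma\sqrt{n}}\sum_{i=1}^{n}\left(X_i-\mu\right)\;\xrightarrow{\;d\;}\;\mathcal{N}(0,1),
\qquad
K' = \mathcal{R}_b(K), \quad K^{*} = \mathcal{R}_b(K^{*}).
$$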
The debate between the Mendelians and the (largely Darwinian) biometricians has been referred to by R. A. Fisher as ‘one of the most needless controversies in the history of science’ and by David Hull as ‘an explicable embarrassment’. The literature on this topic consists mainly of explanations of why the controversy occurred and what factors prevented it from being resolved. Regrettably, little or no mention is made of the issues that figured in its resolution. This paper deals with the latter topic and in doing so reorients the focus of the debate as one between Karl Pearson and R. A. Fisher rather than between the biometricians and the Mendelians. One reason for this reorientation is that Pearson's own work in 1904 and 1909 suggested that Mendelism and biometry could, to some extent, be made compatible, yet he remained steadfast in his rejection of Mendelism. The interesting question then is why Fisher, who was also a proponent of biometric methods, was able to synthesise the two traditions in a way that Pearson either could not or would not. My answer to this question involves an analysis of the ways in which different kinds of assumptions were used in modelling Mendelian populations. I argue that it is these assumptions, which lay behind the statistical techniques of Pearson and Fisher, that can be isolated as the source of Pearson's rejection of Mendelism and Fisher's success in the synthesis.
Many of the arguments against reductionism and fundamental theory as a method for explaining physical phenomena focus on the role of models as the appropriate vehicle for this task. While models can certainly provide us with a good deal of explanatory detail, problems arise when attempting to derive exact results from approximations. In addition, models typically fail to explain much of the stability and universality associated with critical point phenomena and phase transitions, phenomena sometimes referred to as "emergent." The paper examines the connection between theoretical principles like spontaneous symmetry breaking and emergent phenomena and argues that new ways of thinking about emergence and fundamentalism are required in order to account for the behavior of many phenomena in condensed matter and other areas of physics.
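For orientation (standard textbook notation rather than text from the paper), the universality mentioned here is usually expressed through power laws near the critical temperature $T_c$: the order parameter $M$ and the correlation length $\xi$ scale with exponents $\beta$ and $\nu$ that are shared by microscopically very different systems belonging to the same universality class:

$$
M \sim |t|^{\beta}, \qquad \xi \sim |t|^{-\nu}, \qquad t = \frac{T - T_c}{T_c}.
$$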
The paper begins with a generic discussion of modelling, focusing on some of its practices and problems. I then move on to a philosophical discussion about emergence and multi-scale modelling; more specifically, the reasons why what looks like a promising strategy for dealing with emergence is sometimes incapable of delivering interesting results. This becomes especially evident when we look more closely at turbulence and what I take to be the main ontological feature of emergent behaviour—universality. Finally, I conclude by showing why, despite displaying multi-scale behaviour and some of the characteristics we identify with emergence, turbulence fails to fit neatly into the latter category and is not successfully captured using multi-scale modelling. The complex nature of turbulence illustrates the difficulties in characterizing emergence and why specific criteria are needed in order to prevent every complex behaviour we don’t understand from being classified as emergent.
In this paper I argue for a distinction between the subjective and the value-laden aspects of judgements, showing why equating the former with the latter has the potential to confuse matters when the goal is uncovering the influence of political factors on scientific practice. I will focus on three separate but interrelated issues. The first concerns the issue of ‘verification’ in computational modelling. This is a practice that involves a number of formal techniques but, as I show, even these allegedly objective methods ultimately rely on subjective estimation and evaluation of different types of parameters. This has implications for my second point, which relates to uncertainty quantification—an assessment of the degree of uncertainty present in a particular modelling scenario. I argue that while this practice also involves subjective elements, in no way does that detract from its status as an epistemic exercise. Finally I discuss the relation between accuracy and uncertainty and how each relates to judgements that embody social/ethical/political concerns, in particular those associated with high consequence systems.
In addition to its obvious successes within the kinetic theory, the ideal gas law and the modeling assumptions associated with it have been used to treat phenomena in domains as diverse as economics and biology. One reason for this is that it is useful to model these systems using aggregates and statistical relationships. The issue I deal with here is the way R. A. Fisher used the model of an ideal gas as a methodological device for examining the causal role of selection in producing variation in Mendelian populations. The model enabled him to create the kind of population where one could measure the effects of selection in a way that could not be done empirically. Consequently we are able to see how the model of an ideal gas was transformed into a biological model that functioned as an instrument for both investigating nature and developing a new theory of genetics.
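For reference (standard background rather than part of the abstract), the ideal gas law relates pressure $p$, volume $V$, amount of substance $n$ and absolute temperature $T$ through the gas constant $R$; its kinetic-theory derivation treats a large population of non-interacting particles whose aggregate behaviour is described statistically, which is the feature Fisher's analogy with Mendelian populations exploits:

$$
pV = nRT.
$$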
Ernst Mayr has criticised the methodology of population genetics for being essentialist: interested only in “types” as opposed to individuals. In fact, he goes so far as to claim that “he who does not understand the uniqueness of individuals is unable to understand the working of natural selection” (1982, 47). This is a strong claim indeed, especially since many of those responsible for the development of population genetics (especially Fisher, Haldane, and Wright) were avid Darwinians. In order to unravel this apparent incompatibility I want to examine the possible sources and implications of essentialism in this context and show why the kind of mathematical analysis found in Fisher's work is better seen as responsible for extending the theory of natural selection to a broader context rather than inhibiting its applicability.
The physics of condensed matter, in contrast to quantum physics or cosmology, is not traditionally associated with deep philosophical questions. However, as science - largely thanks to more powerful computers - becomes capable of analysing and modelling ever more complex many-body systems, basic questions of philosophical relevance arise: questions about the emergence of structure, the nature of cooperative behaviour, the implications of the second law, the quantum-classical transition and many other issues. This book is a collection of essays by leading physicists and philosophers. Each investigates one or more of these issues, making use of examples from modern condensed matter research. Physicists and philosophers alike will find surprising and stimulating ideas in these pages.
One of the hallmarks of Kantian philosophy, especially in connection with its characterization of scientific knowledge, is the importance of unity, a theme that is also the driving force behind a good deal of contemporary high energy physics. There are a variety of ways that unity figures in modern science—there is unity of method where the same kinds of mathematical techniques are used in different sciences, like physics and biology; the search for unified theories like the unification of electromagnetism and optics by Maxwell; and, more recently, the project of grand unification or the quest for a theory of everything which involves a reduction of the four fundamental forces under the umbrella of a single theory. In this latter case it is thought that when energies are high enough, the forces, while very different in strength, range and the types of particles on which they act, become one and the same force. The fact that these interactions are known to have many underlying mathematical features in common suggests that they can all be described by a unified field theory. Such a theory describes elementary particles in terms of force fields which further unifies all the interactions by treating particles and interactions in a technically and conceptually similar way. It is this theoretical framework that allows for the prediction that measurements made at a certain energy level will supposedly indicate that there is only one type of force. In other words, not only is there an ontological reduction of the forces themselves but the mathematical framework used to describe the fields associated with these forces facilitates their description in a unified theory. Specific types of symmetries serve an important function in establishing these kinds of unity, not only in the construction of quantum field theories but also in the classification of particles; classifications that can lead to new predictions and new ways of understanding properties like quantum numbers. Hence, in order to address issues about unification and reduction in contemporary physics we must also address the way that symmetries facilitate these processes.
Prandtl's work on the boundary layer theory is an interesting example for illustrating several important issues in philosophy of science such as the relation between theories and models and whether it is possible to distinguish, in a principled way, between pure and applied science. In what follows I discuss several proposals by the symposium participants regarding the interpretation of Prandtl's work and whether it should be characterized as an instance of applied science. My own interpretation of this example (1999) emphasised the degree of autonomy embedded in Prandtl's boundary layer model and the way it became integrated in the larger theoretical context of hydrodynamics. In addition to extending that discussion here I also claim that the characterization of applied science which formed the basis for the symposium does not enable us to successfully distinguish applied science from the general practice of 'applying' basic scientific knowledge in a variety of contexts.
I argued that the frameworks and mechanisms that produce unification do not enable us to explain why the unified phenomena behave as they do. That is, we need to look beyond the unifying process for an explanation of these phenomena. Anya Plutynski ([2005]) has called into question my claim about the relationship between unification and explanation as well as my characterization of it in the context of the early synthesis of Mendelism with Darwinian natural selection. In this paper I argue that her methodological criticisms rest on a misinterpretation of my views on explanation and defend my historical interpretation of the work of Fisher and Wright. Outline: (1) A statement of the problem; (2) Methodological differences: how to characterize explanation; (3) Historical matters: disagreements about details; (4) Explanation revisited: the possible versus the ‘merely actual’.
Some very persuasive arguments have been put forward in recent years in support of the disunity of science. Despite this, one is forced to acknowledge that unification, especially the practice of unifying theories, remains a crucial aspect of scientific practice. I explore specific aspects of this tension by examining the nature of theory unification and how it is achieved in the case of the electroweak theory. I claim that because the process of unifying theories is largely dependent on particular kinds of mathematical structures it is possible to have a theory that displays a degree of unity at the level of theoretical structure without an accompanying ontological unity or reduction. As a result, unity and disunity can coexist not only within science but within the same theory.
One of Nancy Cartwright's arguments for entity realism focuses on the non-redundancy of causal explanation. In How the Laws of Physics Lie she uses an example from laser theory to illustrate how we can have a variety of theoretical treatments governing the same phenomena while allowing just one causal story. In the following I show that in the particular example Cartwright chooses, causal explanation exhibits the same kind of redundancy present in theoretical explanation. In an attempt to salvage Cartwright's example, the causal explanation could be reinterpreted as a capacity claim, as outlined in her recent work Nature's Capacities and Their Measurement. However, I argue that capacities cannot be isolated in the way that Cartwright suggests and consequently these capacity claims also fail to provide a unique causal story. We can, however, make sense of capacities by characterizing them in a relational way and I offer some ideas as to how this approach would retain our intuitions about capacities while denying their ontological priority as dormant powers.
This paper argues for two related theses. The first is that mathematical abstraction can play an important role in shaping the way we think about and hence understand certain phenomena, an enterprise that extends well beyond simply representing those phenomena for the purpose of calculating/predicting their behaviour. The second is that much of our contemporary understanding and interpretation of natural selection has resulted from the way it has been described in the context of statistics and mathematics. I argue for these claims by tracing attempts to understand the basis of natural selection from its early formulation as a statistical theory to its later development by R.A. Fisher, one of the founders of modern population genetics. Not only did these developments put natural selection on a firm theoretical foundation but its mathematization changed the way it was understood as a biological process. Instead of simply clarifying its status, mathematical techniques were responsible for redefining or reconceptualising selection. As a corollary I show how a highly idealised mathematical law that seemingly fails to describe any concrete system can nevertheless contain a great deal of accurate information that can enhance our understanding far beyond simply predictive capabilities.
This paper is intended as an extension of some of the recent discussion in the philosophical literature on the nature of experimental evidence. In particular I examine the role of empirical evidence attained through the use of deductions from phenomena. This approach to theory construction has been widely used throughout the history of science, by Newton and Einstein as well as by Clerk Maxwell. I discuss a particular formulation of Maxwell's electrodynamics, one he claims was deduced from experimental facts. However, the deduction is problematic in that it is not immediately clear that one of the crucial parameters of the theory, the displacement current, can be given an empirical foundation. In outlining Maxwell's argument and his attempts to arrive at an empirically based account of the electromagnetic field equations I draw attention to the philosophical implications of the constraints on theory that arise in this particular case of deduction from phenomena.
In The Foundations of Space-Time Theories Friedman argues for a literal realistic interpretation of theoretical structures that participate in theory unification. His account of the relationship between observational and theoretical structure is characterized as that of model to submodel and involves a reductivist strategy that allows for the conjunction of certain theoretical structures with other structures which, taken together, form a truly unified theory. Friedman criticizes the representational account for its failure to allow for a literal interpretation and conjunction of theoretical structure. I argue that contra Friedman the representationalist account can sanction a literal interpretation and in fact presents a more accurate account of scientific practice than the model-submodel account. The strict reductivism characteristic of the model-submodel approach can in some cases be seen to prevent rather than facilitate a literal account of theoretical structure. Because of the dependence Friedman places on reduction for his account of conjunction, and because the former cannot be sustained, it would appear that Friedman's own account fails to achieve what it was designed to do.
The aim of this paper is to show that the argument put forth by Bell and Hallett against Putnam's thesis regarding the invariance of meaning for quantum logical connectives is insufficient to establish their conclusion. By using an example from the causal theory of time, the paper shows how the condition they specify as relevant in cases of meaning variance in fact fails. As a result, the conclusion that negation undergoes a change of meaning in the quantum logical case is left in doubt. The paper proceeds in three stages. First, a summary of Putnam's argument for the invariance of meaning in the case of quantum logical connectives is provided; this is followed by a review of the criticisms advanced against it by Bell and Hallett. Finally, it is shown how the main claim upon which their major criticism rests is unfounded.