From the beginning of chaos research until today, the unpredictability of chaos has been a central theme. It is widely believed and claimed by philosophers, mathematicians and physicists alike that chaos has a new implication for unpredictability, meaning that chaotic systems are unpredictable in a way that other deterministic systems are not. Hence, one might expect that the question ‘What are the new implications of chaos for unpredictability?’ has already been answered in a satisfactory way. However, this is not the case. I will critically evaluate the existing answers and argue that they do not fit the bill. Then I will approach this question by showing that chaos can be defined via mixing, which has never before been explicitly argued for. Based on this insight, I will propose that the sought-after new implication of chaos for unpredictability is the following: for predicting any event, all sufficiently past events are approximately probabilistically irrelevant.
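To make the closing claim concrete, here is a minimal sketch of the standard definition of (strong) mixing from ergodic theory; the notation for a measure-preserving system (X, Sigma, mu, T) is assumed, and the paper's own formulation may differ in detail:

```latex
% Strong mixing: for all measurable sets A and B,
\lim_{n \to \infty} \mu\!\left( T^{-n}A \cap B \right) \;=\; \mu(A)\,\mu(B).
% Equivalently, \mu(T^{-n}A \mid B) \to \mu(A): for large n, conditioning on
% the sufficiently distant past event B is approximately irrelevant for
% predicting A. This is the unpredictability claim stated in the abstract.
```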
The central question of this paper is: are deterministic and indeterministic descriptions observationally equivalent in the sense that they give the same predictions? I tackle this question for measure-theoretic deterministic systems and stochastic processes, both of which are ubiquitous in science. I first show that for many measure-theoretic deterministic systems there is a stochastic process which is observationally equivalent to the deterministic system. Conversely, I show that for all stochastic processes there is a measure-theoretic deterministic system which is observationally equivalent to the stochastic process. Still, one might guess that the measure-theoretic deterministic systems which are observationally equivalent to stochastic processes used in science do not include any deterministic systems used in science. I argue that this is not so because deterministic systems used in science even give rise to Bernoulli processes. Despite this, one might guess that measure-theoretic deterministic systems used in science cannot give the same predictions at every observation level as stochastic processes used in science. By proving results in ergodic theory, I show that this guess too is misguided: there are several deterministic systems used in science which give the same predictions at every observation level as Markov processes. All these results show that measure-theoretic deterministic systems and stochastic processes are observationally equivalent more often than one might perhaps expect. Furthermore, I criticise the claims about observational equivalence made in the previous philosophy papers by Suppes (1993, 1999), Suppes and de Barros (1996), and Winnie (1998).
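As a concrete illustration of a deterministic system giving rise to a Bernoulli process, here is a hypothetical toy simulation (my example, not one from the paper): the logistic map is measure-theoretically isomorphic to a Bernoulli shift, so observing only which half of the unit interval the state occupies yields, statistically, fair coin flips.

```python
# Toy illustration (an assumption for this sketch, not the paper's case):
# the logistic map x -> 4x(1-x) with the binary observation "is x < 1/2?"
# produces a process statistically indistinguishable from fair coin flips.
import random
from itertools import product

def symbol_series(n, burn_in=1000):
    x = random.random()
    for _ in range(burn_in):          # approach the invariant measure
        x = 4.0 * x * (1.0 - x)
    symbols = []
    for _ in range(n):
        symbols.append(0 if x < 0.5 else 1)
        x = 4.0 * x * (1.0 - x)      # deterministic micro-dynamics
    return symbols

s = symbol_series(200_000)
print("P(1)  ~", sum(s) / len(s))    # ~0.5, like a fair coin
for a, b in product([0, 1], repeat=2):
    pairs = sum(1 for i in range(len(s) - 1) if (s[i], s[i+1]) == (a, b))
    print(f"P({a}{b}) ~", pairs / (len(s) - 1))   # each ~0.25: independence
```

At this observation level the deterministic orbit is statistically indistinguishable from an i.i.d. fair-coin process, which is the gist of the Bernoulli-process claim.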
Boltzmannian statistical mechanics partitions the phase space of a system into macro-regions, and the largest of these is identified with equilibrium. What justifies this identification? Common answers focus on Boltzmann’s combinatorial argument, the Maxwell-Boltzmann distribution, and maximum entropy considerations. We argue that they fail and present a new answer. We characterise equilibrium as the macrostate in which a system spends most of its time and prove a new theorem establishing that equilibrium thus defined corresponds to the largest macro-region. Our derivation is completely general in that it does not rely on assumptions about a system’s dynamics or internal interactions.
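Schematically, the time-fraction characterisation of equilibrium just mentioned can be rendered as follows (my reconstruction in standard notation; the paper's actual definitions are more refined):

```latex
% Long-run fraction of time the orbit through micro-state x spends in
% macro-region M, where \phi_\tau is the flow and 1_M the indicator of M:
L_M(x) \;=\; \lim_{t \to \infty} \frac{1}{t} \int_0^t
  \mathbf{1}_M\!\left(\phi_\tau(x)\right)\, d\tau .
% Equilibrium is (roughly) the macro-state whose region M_{eq} has the
% largest value of L_{M_{eq}}(x) (indeed close to 1) for typical initial
% states x; the theorem then shows that \mu(M_{eq}) is maximal among the
% macro-region measures.
```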
This paper addresses the actual practice of justifying definitions in mathematics. First, I introduce the main account of this issue, namely Lakatos's proof-generated definitions. Based on a case study of definitions of randomness in ergodic theory, I identify three other common ways of justifying definitions: natural-world justification, condition justification, and redundancy justification. Also, I clarify the interrelationships between the different kinds of justification. Finally, I point out how Lakatos's ideas are limited: they fail to show how various kinds of justification can be found and can be reasonable, and they fail to acknowledge the interplay among the different kinds of justification.
In Boltzmannian statistical mechanics macro-states supervene on micro-states. This leads to a partitioning of the state space of a system into regions of macroscopically indistinguishable micro-states. The largest of these regions is singled out as the equilibrium region of the system. What justifies this association? We review currently available answers to this question and find them wanting both for conceptual and for technical reasons. We propose a new conception of equilibrium and prove a mathematical theorem which establishes in full generality -- i.e. without making any assumptions about the system's dynamics or the nature of the interactions between its components -- that the equilibrium macro-region is the largest macro-region. We then turn to the question of the approach to equilibrium, to which no satisfactory general answer has been given so far. In our account, this question is replaced by the question of when an equilibrium state exists. We prove another -- again fully general -- theorem providing necessary and sufficient conditions for the existence of an equilibrium state. This theorem changes the way in which the question of the approach to equilibrium should be discussed: rather than launching a search for a crucial factor, the focus should be on finding triplets of macro-variables, dynamical conditions, and effective state spaces that satisfy the conditions of the theorem.
A popular view in contemporary Boltzmannian statistical mechanics is to interpret the measures as typicality measures. In measure-theoretic dynamical systems theory measures can similarly be interpreted as typicality measures. However, a justification of why these measures are a good choice of typicality measures is missing, and this paper attempts to fill the gap. The paper first argues that Pitowsky's justification of typicality measures does not fit the bill. Then a first proposal of how to justify typicality measures is presented. The main premises are that typicality measures are invariant and are related to the initial probability distribution of interest. The conclusions are two theorems which show that the standard measures of statistical mechanics and dynamical systems are typicality measures. There may be other typicality measures, but they agree on judgements of typicality. Finally, it is proven that if systems are ergodic or epsilon-ergodic, there are uniqueness results about typicality measures.
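For reference, a schematic statement of the invariance premise (standard textbook form, not quoted from the paper):

```latex
% A measure \mu is invariant under the dynamics T iff for all measurable A:
\mu\!\left(T^{-1}A\right) \;=\; \mu(A).
% Typicality, schematically: a property P is typical iff the set of states
% having P has measure 1 (or at least 1 - \varepsilon). Invariance ensures
% that such typicality judgements do not change under the time evolution.
```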
We argue that concerns about double-counting -- using the same evidence both to calibrate or tune climate models and also to confirm or verify that the models are adequate -- deserve more careful scrutiny in climate modelling circles. It is widely held that double-counting is bad and that separate data must be used for calibration and confirmation. We show that this is far from obviously true, and that climate scientists may be confusing their targets. Our analysis turns on a Bayesian/relative-likelihood approach to incremental confirmation. According to this approach, double-counting is entirely proper. We go on to discuss plausible difficulties with calibrating climate models, and we distinguish more and less ambitious notions of confirmation. Strong claims of confirmation may not, in many cases, be warranted, but it would be a mistake to regard double-counting as the culprit.
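The following toy computation (my construction; the models, priors and numbers are illustrative assumptions, not the paper's example) shows the Bayesian/relative-likelihood point: one and the same data set can calibrate a free parameter and confirm the model containing it over a rival.

```python
# A minimal Bayesian sketch of why "double-counting" can be proper:
# the same data both calibrate the free parameter a of model M1 (via
# its posterior) and confirm M1 over M0 (via the Bayes factor).
import math, random

random.seed(1)
xs = [i / 10 for i in range(1, 21)]
ys = [0.8 * x + random.gauss(0, 0.3) for x in xs]   # truth: a = 0.8
sigma = 0.3

def log_lik(a):
    return sum(-0.5 * ((y - a * x) / sigma) ** 2
               - math.log(sigma * math.sqrt(2 * math.pi))
               for x, y in zip(xs, ys))

# M0: a fixed at 0.  M1: a free, prior a ~ Uniform(-2, 2).
prior_draws = [random.uniform(-2, 2) for _ in range(20_000)]
liks = [math.exp(log_lik(a)) for a in prior_draws]
marginal_M1 = sum(liks) / len(liks)                 # Monte Carlo evidence
bayes_factor = marginal_M1 / math.exp(log_lik(0.0))
print("Bayes factor M1 vs M0:", bayes_factor)       # >> 1: confirmation

# Calibration from the *same* data: posterior mean of a.
post_mean = sum(a * w for a, w in zip(prior_draws, liks)) / sum(liks)
print("calibrated a ~", post_mean)                  # close to 0.8
```

Here calibration (the posterior over a) and incremental confirmation (the Bayes factor) draw on the same data without circularity, which is the sense in which double-counting is proper on this approach.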
There are results which show that measure-theoretic deterministic models and stochastic models are observationally equivalent. Thus there is a choice between a deterministic and an indeterministic model, and the question arises: which model is preferable relative to evidence? If the evidence equally supports both models, there is underdetermination. This paper first distinguishes between different kinds of choice and clarifies the possible resulting types of underdetermination. Then a new answer is presented: the focus is on the choice between a Newtonian deterministic model supported by indirect evidence from other Newtonian models which invoke similar additional assumptions about the physical systems and a stochastic model that is not supported by indirect evidence. It is argued that the deterministic model is preferable. The argument against underdetermination is then generalised to a broader class of cases. Finally, the paper criticises the extant philosophical answers to the question of which model is preferable. Winnie’s (1998) argument for the deterministic model is shown to deliver the correct conclusion relative to observations which are possible in principle and where there are no limits, in principle, on observational accuracy (the type of choice Winnie was concerned with). However, in practice the argument fails. A further point made is that Hoefer’s (2008) argument for the deterministic model is untenable.
There are two main theoretical frameworks in statistical mechanics, one associated with Boltzmann and the other with Gibbs. Despite their well-known differences, there is a prevailing view that equilibrium values calculated in both frameworks coincide. We show that this is wrong. There are important cases in which the Boltzmannian and Gibbsian equilibrium concepts yield different outcomes. Furthermore, the conditions under which an equilibrium exists are different for Gibbsian and Boltzmannian statistical mechanics. There are, however, special circumstances under which it is true that the equilibrium values coincide. We prove a new theorem providing sufficient conditions for this to be the case.
This article focuses on three themes concerning determinism and indeterminism. The first theme is observational equivalence between deterministic and indeterministic models. Here I discuss several results about observational equivalence and present an argument on how to choose between deterministic and indeterministic models involving indirect evidence. The second theme is whether Newtonian physics is indeterministic. I argue that the answer depends on what one takes Newtonian mechanics to be, and I highlight how contemporary debates on this issue differ from those in the nineteenth century. The third theme is how the method of arbitrary functions can be used to make sense of deterministic probabilities. I discuss various ways of interpreting the initial probability distributions and argue that they are best understood as physical, biological, etc., quantities characterising the particular situation at hand. I also emphasise that the method of arbitrary functions deserves more attention than it has received so far.
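A toy illustration of the method of arbitrary functions (the "wheel" dynamics and the initial densities below are my assumptions for illustration): for a fast enough spin, the outcome probability is nearly independent of which smooth initial density one starts with.

```python
# Toy method-of-arbitrary-functions example: a wheel spun with speed w
# stops on red iff floor(w * t) is even. For large t, P(red) -> 1/2 for
# (almost) any smooth initial density over the spin speed w.
import random

def p_red(sample_speed, t, trials=200_000):
    red = sum(1 for _ in range(trials) if int(sample_speed() * t) % 2 == 0)
    return red / trials

for name, sampler in [
    ("uniform on [1, 2]",  lambda: random.uniform(1.0, 2.0)),
    ("triangular, peaked", lambda: random.triangular(1.0, 2.0, 1.2)),
    ("beta-shaped",        lambda: 1.0 + random.betavariate(2, 5)),
]:
    print(name, "-> P(red) ~", p_red(sampler, t=10_000))
# All three initial densities give P(red) close to 0.5: the deterministic
# probability is insensitive to the "arbitrary" initial distribution.
```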
This article examines initial-condition dependence and initial-condition uncertainty for climate projections and predictions. The first contribution is to provide a clear conceptual characterization of predictions and projections. Concerning initial-condition dependence, projections are often described as experiments that do not depend on initial conditions. Although prominent, this claim has not been scrutinized much and can be interpreted differently. If interpreted as the claim that projections are not based on estimates of the actual initial conditions of the world or that what makes projections true are conditions in the world, this claim is true. However, it can also be interpreted as the claim that simulations used to obtain projections are independent of initial-condition ensembles. This article argues that evidence does not support this claim. Concerning initial-condition uncertainty, three kinds of initial-condition uncertainty are identified. The first is the uncertainty associated with the spread of the ensemble simulations. The second arises because the theoretical initial ensemble cannot be used in calculations and has to be approximated by finitely many initial states. The third uncertainty arises because it is unclear how long the model should be run to obtain potential initial conditions at pre-industrial times. Overall, the discussion shows that initial-condition dependence and uncertainty in climate science are more complex and important issues than usually acknowledged.
The received wisdom in statistical mechanics is that isolated systems, when left to themselves, approach equilibrium. But under what circumstances does an equilibrium state exist and an approach to equilibrium take place? In this paper we address these questions from the vantage point of the long-run fraction of time definition of Boltzmannian equilibrium that we developed in two recent papers. After a short summary of Boltzmannian statistical mechanics and our definition of equilibrium, we state an existence theorem which provides general criteria for the existence of an equilibrium state. We first show how the theorem works with a toy example, which allows us to illustrate the various elements of the theorem in a simple setting. After a look at the ergodic programme, we discuss equilibria in a number of different gas systems: the ideal gas, the dilute gas, the Kac gas, the stadium gas, the mushroom gas and the multi-mushroom gas. In the conclusion we briefly summarise the main points and highlight open questions.
The aim of the article is to provide a clear and thorough conceptual analysis of the main candidates for a definition of climate and climate change. Five desiderata on a definition of climate are presented: it should be empirically applicable; it should correctly classify different climates; it should not depend on our knowledge; it should be applicable to the past, present, and future; and it should be mathematically well-defined. Then five definitions are discussed: climate as distribution over time for constant external conditions; climate as distribution over time when the external conditions vary as in reality; climate as distribution over time relative to regimes of varying external conditions; climate as the ensemble distribution for constant external conditions; and climate as the ensemble distribution when the external conditions vary as in reality. The third definition is novel and is introduced as a response to problems with existing definitions. The conclusion is that most definitions encounter serious problems and that the third definition is most promising.
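As a toy rendering of the "distribution over time" idea (my construction, not the article's simple climate model), one can take the climate of a variable to be its empirical distribution over a reference period:

```python
# Toy "climate as distribution over time": climate is summarised by the
# empirical distribution of a variable over a reference period; the toy
# "temperature" model below (trend plus noise) is an assumption.
import random
random.seed(0)

def temperature(year):
    return 14.0 + 0.02 * year + random.gauss(0, 0.5)

series = [temperature(y) for y in range(90)]
for start in (0, 30, 60):             # three 30-year reference periods
    window = series[start:start + 30]
    mean = sum(window) / 30
    var = sum((t - mean) ** 2 for t in window) / 30
    print(f"years {start}-{start+29}: mean {mean:.2f}, var {var:.2f}")
# Climate change then shows up as a shift between successive distributions.
```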
Equilibrium is a central concept of statistical mechanics. In previous work we introduced the notions of a Boltzmannian alpha-epsilon-equilibrium and a Boltzmannian gamma-epsilon-equilibrium. This was done in a deterministic context. We now consider systems with a stochastic micro-dynamics and transfer these notions from the deterministic to the stochastic context. We then prove stochastic equivalents of the Dominance Theorem and the Prevalence Theorem. This establishes that equilibrium macro-regions are large in the requisite sense in stochastic systems as well.
Many examples of calibration in climate science raise no alarms regarding model reliability. We examine one example and show that, in employing Classical Hypothesis-testing, it involves calibrating a base model against data that is also used to confirm the model. This is counter to the "intuitive position". We argue, however, that aspects of the intuitive position are upheld by some methods, in particular, the general Cross-validation method. How Cross-validation relates to other prominent Classical methods such as the Akaike Information Criterion and Bayesian Information Criterion is also discussed.
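A schematic comparison of cross-validation with the Akaike Information Criterion on toy data (my example, not the paper's climate case study): cross-validation holds data out at each step, AIC penalises parameters, but both use all the data, unlike a strict calibrate/confirm split.

```python
# Compare leave-one-out cross-validation with AIC for choosing the
# degree of a polynomial model on synthetic data (truth: degree 1).
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30)
y = 1.0 + 2.0 * x + rng.normal(0, 0.2, size=x.size)

for deg in (1, 2, 5):
    # Leave-one-out cross-validation error
    cv = 0.0
    for i in range(x.size):
        mask = np.arange(x.size) != i
        coef = np.polyfit(x[mask], y[mask], deg)
        cv += (np.polyval(coef, x[i]) - y[i]) ** 2
    # AIC with Gaussian errors: n * log(RSS / n) + 2k
    coef = np.polyfit(x, y, deg)
    rss = np.sum((np.polyval(coef, x) - y) ** 2)
    aic = x.size * np.log(rss / x.size) + 2 * (deg + 1)
    print(f"degree {deg}: LOO-CV = {cv:.3f}, AIC = {aic:.1f}")
# Both criteria typically favour the low-degree model here.
```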
This book aims to explain, by appealing to the mathematical method of arbitrary functions (MAF) initiated by Hopf and Poincaré, how the many and various interactions of the parts of a complex system often result in simple probabilistic patterns of behaviour. A complex system is vaguely defined as a system of many parts (called enions) which are somewhat autonomous but strongly interacting (italicized words are Strevens’ jargon). Strevens says that a system shows simple behaviour when it can be described mathematically with a small number of variables. A philosophical treatment of complex systems, the MAF, and the emergence of simple probabilistic patterns is welcome because these important topics have been rather neglected.
There are two theoretical approaches in statistical mechanics, one associated with Boltzmann and the other with Gibbs. The theoretical apparatuses of the two approaches offer distinct descriptions of the same physical system, with no obvious way to translate the concepts of one formalism into those of the other. This raises the question of the status of one approach vis-à-vis the other. We answer this question by arguing that the Boltzmannian approach is a fundamental theory while Gibbsian statistical mechanics (GSM) is an effective theory, and we describe circumstances under which Gibbsian calculations coincide with the Boltzmannian results. We then point out that regarding GSM as an effective theory has important repercussions for a number of projects, in particular attempts to turn GSM into a nonequilibrium theory.
This is a highly welcome book that offers a fresh perspective on the philosophy of biology. It is of interest to both philosophers and biologists and to experienced readers as well as novices. The book is structured into four sections: ‘Science’, ‘Biology’, ‘Microbes’ and ‘Humans’, and consists of a collection of articles written by John Dupré over the past few years.
Entropy is ubiquitous in physics, and it plays important roles in numerous other disciplines ranging from logic and statistics to biology and economics. However, a closer look reveals a complicated picture: entropy is defined differently in different contexts, and even within the same domain different notions of entropy are at work. Some of these are defined in terms of probabilities, others are not. The aim of this chapter is to arrive at an understanding of some of the most important notions of entropy and to clarify the relations between them. After setting the stage by introducing the thermodynamic entropy, we discuss notions of entropy in information theory, statistical mechanics, dynamical systems theory and fractal geometry.
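Two of the probabilistic notions at issue, in their standard textbook forms (the chapter's own notation may differ):

```latex
% Shannon (information-theoretic) entropy of a distribution p:
H(p) \;=\; -\sum_{i} p_i \log p_i ,
% Gibbs (statistical-mechanical) entropy of a density \rho on phase space:
S_G(\rho) \;=\; -k_B \int \rho(x) \ln \rho(x)\, dx .
% The formal analogy is evident; part of the chapter's task is to say
% when it is more than an analogy.
```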
Gases reach equilibrium when left to themselves. Why do they behave in this way? The canonical answer to this question, originally proffered by Boltzmann, is that the systems have to be ergodic. This answer has been criticised on different grounds and is now widely regarded as flawed. In this paper we argue that some of the main arguments against Boltzmann's answer, in particular, arguments based on the KAM-theorem and the Markus-Meyer theorem, are beside the point. We then argue that something close to Boltzmann's original proposal is true for gases: gases behave thermodynamic-like if they are epsilon-ergodic, i.e., ergodic on the entire accessible phase space except for a small region of measure epsilon. This answer is promising because there are good reasons to believe that relevant systems in statistical mechanics are epsilon-ergodic.
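Schematically, the definition of epsilon-ergodicity reads as follows (a sketch in standard notation; the paper's formal definition is more careful):

```latex
% Epsilon-ergodicity: a system (X, \Sigma, \mu, T) is \varepsilon-ergodic
% iff there is a T-invariant subset \hat{X} \subseteq X with
\mu(\hat{X}) \;\geq\; 1 - \varepsilon ,
% such that T restricted to \hat{X} is ergodic (time averages equal phase
% averages for almost all initial conditions in \hat{X}). Ergodicity is
% the special case \varepsilon = 0.
```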
The guiding question of this paper is: how similar are deterministic descriptions and indeterministic descriptions from a predictive viewpoint? The deterministic and indeterministic descriptions of concern in this paper are measure-theoretic deterministic systems and stochastic processes, respectively. I will explain intuitively some mathematical results which show that measure-theoretic deterministic systems and stochastic processes give the same predictions more often than one might perhaps have expected, and hence that from a predictive viewpoint these descriptions are quite similar.
It can be shown that certain kinds of classical deterministic and indeterministic descriptions are observationally equivalent. Then the question arises: which description is preferable relative to evidence? This paper looks at the main argument in the literature for the deterministic description, that of Winnie (The cosmos of science—essays of exploration. Pittsburgh University Press, Pittsburgh, pp 299–324, 1998). It is shown that this argument yields the desired conclusion relative to in-principle possible observations where there are no limits, in principle, on observational accuracy. Yet relative to the currently possible observations (of relevance in practice), relative to the actual observations, or relative to in-principle observations where there are limits, in principle, on observational accuracy, the argument fails. Then Winnie’s analogy between his argument for the deterministic description and his argument against the prevalence of Bernoulli randomness in deterministic descriptions is considered. It is argued that while the arguments are indeed analogous, it is also important to see that they are disanalogous in another sense.
This article focuses on three recent discussions on determinism in the philosophy of science. First, determinism and predictability are discussed. Second, the paper turns to the topic of determinism, indeterminism, observational equivalence and randomness. Third, deterministic probabilities are examined, and the paper ends with a conclusion.
Gibbsian statistical mechanics (GSM) is the most widely used version of statistical mechanics among working physicists. Yet a closer look at GSM reveals that it is unclear what the theory actually says and how it bears on experimental practice. The root cause of the difficulties is the status of the averaging principle, the proposition that what we observe in an experiment is the ensemble average of a phase function. We review different stances toward this principle, and eventually present a coherent interpretation of GSM that provides an account of the status and scope of the principle.
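Schematically, the averaging principle ties observed values to phase averages (a standard form, assuming an ensemble density rho on phase space X; not the paper's exact formulation):

```latex
% Averaging principle: the observed value of the quantity represented
% by the phase function f equals its ensemble average,
\langle f \rangle \;=\; \int_X f(x)\, \rho(x)\, dx ,
% where \rho is the relevant ensemble density (e.g. microcanonical or
% canonical) over the phase space X.
```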
A gas prepared in a non-equilibrium state will approach equilibrium and stay there. An influential contemporary approach to statistical mechanics explains this behaviour in terms of typicality. However, this explanation has been criticised as mysterious as long as no connection with the dynamics of the system is established. We take this criticism as our point of departure. Our central claim is that Hamiltonians of gases which are epsilon-ergodic are typical with respect to the Whitney topology. Because equilibrium states are typical, we argue that the desired conclusion follows: typical initial conditions approach equilibrium and stay there.
Recently some results have been presented which show that certain kinds of deterministic descriptions and indeterministic descriptions are observationally equivalent (Werndl 2009a, 2010). This paper focuses on some philosophical questions prompted by these results. More specifically, first, I will discuss the philosophical comments made by mathematicians about observational equivalence, in particular Ornstein and Weiss (1991). Their comments are vague, and I will argue that, according to a reasonable interpretation, they are misguided. Second, the results on observational equivalence raise the question of whether the deterministic or indeterministic description is preferable relative to all evidence. If none of them is preferable, there is underdetermination. I will criticize Winnie's (1998) argument that, by appealing to different observations, one finds that the deterministic description is preferable. In particular, I will clarify a confusion in this argument. Furthermore, I will argue that if the concern is a strong kind of underdetermination, the argument delivers the desired conclusion but this conclusion is trivial; and for other kinds of underdetermination of interest the argument fails.
Probability and indeterminism have always been core philosophical themes. This paper aims to contribute to understanding probability and indeterminism in biology. To provide the background for the paper, it will first be argued that an omniscient being would not need the probabilities of evolutionary theory to make predictions about biological processes. However, despite this, one can still be a realist about evolutionary theory, and then the probabilities in evolutionary theory refer to real features of the world. This prompts the question of how to interpret biological probabilities which correspond to real features of the world but are in principle dispensable for predictive purposes. This paper will suggest three possible interpretations. The first interpretation is a propensity interpretation of kinds of systems. It will be argued that backward probabilities in biology do not present a problem for this propensity interpretation. The second interpretation is the frequency interpretation. Third, I will suggest Humean chances as a new interpretation of probability in evolutionary theory. Finally, this paper discusses Sansom’s argument that biological processes are indeterministic because probabilities in evolutionary theory refer to real features of the world. It will be argued that Sansom’s argument is not conclusive, and that the question of whether biological processes are deterministic or indeterministic is still with us.
This article argues that two common intuitions are too crude: (a) that ‘use-novel’ data are special for confirmation, and (b) that this specialness implies the ‘no-double-counting rule’, which says that data used in ‘constructing’ (calibrating) a model cannot also play a role in confirming the model’s predictions. The intuitions in question are pertinent in all the sciences, but we appeal to a climate science case study to illustrate what is at stake. Our strategy is to analyse the intuitive claims in light of prominent accounts of confirmation of model predictions. We show that on the Bayesian account of confirmation, and also on the standard classical hypothesis-testing account, claims (a) and (b) are not generally true; but for some select cases, it is possible to distinguish data used for calibration from use-novel data, where only the latter confirm. The more specialized classical model-selection methods, on the other hand, uphold a nuanced version of claim (a), but this comes apart from (b), which must be rejected in favour of a more refined account of the relationship between calibration and confirmation. Thus, depending on the framework of confirmation, either the scope or the simplicity of the intuitive position must be revised.
This paper reviews some major episodes in the history of the spatial isomorphism problem of dynamical systems theory. In particular, by analysing, both systematically and in historical context, a hitherto unpublished letter written in 1941 by John von Neumann to Stanislaw Ulam, this paper clarifies von Neumann's contribution to discovering the relationship between spatial isomorphism and spectral isomorphism. The main message of the paper is that von Neumann's argument described in his letter to Ulam is the very first proof that spatial isomorphism and spectral isomorphism are not equivalent because spectral isomorphism is weaker than spatial isomorphism: von Neumann shows that spectrally isomorphic ergodic dynamical systems with mixed spectra need not be spatially isomorphic.
Consider a gas confined to the left half of a container. Then remove the wall separating the two parts. The gas will start spreading and soon be evenly distributed over the entire available space. The gas has approached equilibrium. Why does the gas behave in this way? The canonical answer to this question, originally proffered by Boltzmann, is that the system has to be ergodic for the approach to equilibrium to take place. This answer has been criticised on different grounds and is now widely regarded as flawed. In this paper we argue that these criticisms have dismissed Boltzmann’s answer too quickly and that something almost like Boltzmann’s answer is true: the approach to equilibrium takes place if the system is epsilon-ergodic, i.e. ergodic on the entire accessible phase space except for a small region of measure epsilon. We introduce epsilon-ergodicity and argue that relevant systems in statistical mechanics are indeed epsilon-ergodic.
This is the first of three parts of an introduction to the philosophy of climate science. In this first part about observing climate change, the topics of definitions of climate and climate change, data sets and data models, detection of climate change, and attribution of climate change will be discussed.
The general theme of this article is the actual practice of how definitions are justified and formulated in mathematics. The theoretical insights of this article are based on a case study of topological definitions of chaos. After introducing this case study, I identify the three kinds of justification which are important for topological definitions of chaos: natural-world-justification, condition-justification and redundancy-justification. To my knowledge, the latter two have not been identified before. I argue that these three kinds of justification are widespread in mathematics. After that, I first discuss the state of the art in the literature about the justification of definitions in the light of actual mathematical practice. I then go on to criticize Lakatos’s account of proof-generated definitions—the main account in the literature on this issue—as being limited and also misguided: as for topological definitions of chaos, in nearly all mathematical fields various kinds of justification are found and are also reasonable.
References to Popper’s concept of three worlds occupy a central position in ontological and human ecological questions in the recent literature on theoretical geography. This article demonstrates that Popper’s ideas and concepts have not been fully understood, causing problems for integrative research. Firstly, we critically review the discussion of Popper’s concept of three worlds in geography. We criticize its popular ontological interpretation, and furthermore we point out that Popper’s evolutionary basis has been consistently neglected. Subsequently we present an interpretation of Popper’s concept of three worlds which seems most plausible. We thereby identify his intentions and emphasize the evolutionary foundation of our interpretation of his theory. Finally, perspectives for human ecological research in geography on the basis of theories of evolution and emergence will be outlined.
This is the second of three parts of an introduction to the philosophy of climate science. In this second part about modelling climate change, the topics of climate modelling, confirmation of climate models, the limits of climate projections, uncertainty and finally model ensembles will be discussed.
In engineering, as in other scientific fields, researchers seek to confirm their models with real-world data. It is common practice to assess models in terms of the distance between the model outputs and the corresponding experimental observations. An important question that arises is whether the model should then be ‘tuned’, in the sense of estimating the values of free parameters to get a better fit with the data, and furthermore whether the tuned model can be confirmed with the same data used to tune it. This dual use of data is often disparagingly referred to as ‘double-counting’. Here, we analyse these issues, with reference to selected research articles in engineering. Our example studies illustrate more and less controversial practices of model tuning and double-counting, both of which, we argue, can be shown to be legitimate within a Bayesian framework. The question nonetheless remains as to whether the implied scientific assumptions in each case are apt from the engineering point of view.
The Philosophy of Climate Science. Climate change is one of the defining challenges of the 21st century. But what is climate change, how do we know about it, and how should we react to it? This article summarizes the main conceptual issues and questions in the foundations of climate science, as well as of the …
If one looks more closely at the use of the words ‘morality’ and ‘reason’, one finds that it is not clear what they denote or how morality and reason are connected. In the book ‘Rationalität in der Angewandten Ethik’ (‘Rationality in Applied Ethics’), in which various authors have set themselves the task of bringing these circumstances into focus, we find questions about how ‘morality’, ‘applied ethics’ and ‘reason’ (also in their application) are to be understood and reconciled.
Charlotte Werndl (Department of Philosophy, Logic and Scientific Method, London School of Economics, Houghton Street, London, WC2A 2AE, UK): ‘On the observational equivalence of continuous-time deterministic and indeterministic descriptions’, European Journal for Philosophy of Science, Volume 1, Number 2, pp. 193–225. DOI: 10.1007/s13194-010-0011-5.
Charlotte Werndl and Roman Frigg discuss the relationship between the Boltzmannian and Gibbsian frameworks of statistical mechanics, addressing, in particular, the question of when equilibrium values calculated in both frameworks agree. This note points out conceptual confusions that could arise from their discussion, concerning, in particular, the authors’ use of “Boltzmann equilibrium.” It also clarifies the status of the Khinchin condition for the equivalence of Boltzmannian and Gibbsian equilibrium predictions and shows that it follows, under the assumptions proposed by Werndl and Frigg, from standard arguments in probability theory.
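Schematically, a Khinchin-style fluctuation condition of the kind at issue can be stated as follows (my reconstruction; the note's precise formulation may differ):

```latex
% Khinchin-style condition: the fluctuations of the phase function f
% around its ensemble average are small, i.e.
\mu\!\left( \{\, x : |f(x) - \langle f \rangle| > \varepsilon \,\} \right) \;\leq\; \delta
% for small \varepsilon, \delta > 0. Then f is approximately constant, at
% value \langle f \rangle, on a region of measure at least 1 - \delta, so
% the Gibbsian average and the Boltzmannian equilibrium value agree to
% within \varepsilon.
```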
Many climate scientists have made claims that may suggest that evidence used in tuning or calibrating a climate model cannot be used to evaluate the model. By contrast, the philosophers Katie Steele and Charlotte Werndl have argued that, at least within the context of Bayesian confirmation theory, tuning is simply an instance of hypothesis testing. In this paper I argue for a weak predictivism and in support of a nuanced reading of climate scientists’ concerns about tuning: there are cases, model-tuning among them, in which predictive successes are more highly confirmatory of a model than accommodation of evidence.