The paper provides a systematic overview of recent debates in epistemology and philosophy of science on the nature of understanding. We explain why philosophers have turned their attention to understanding and discuss conditions for “explanatory” understanding of why something is the case and for “objectual” understanding of a whole subject matter. The most debated conditions for these types of understanding roughly resemble the three traditional conditions for knowledge: truth, justification and belief. We discuss prominent views about how to construe these conditions for understanding, whether understanding indeed requires conditions of all three types and whether additional conditions are needed.
It is often claimed that scientists can obtain new knowledge about nature by running computer simulations. How is this possible? I answer this question by arguing that computer simulations are arguments. This view parallels Norton’s argument view about thought experiments. I show that computer simulations can be reconstructed as arguments that fully capture the epistemic power of the simulations. Assuming the extended mind hypothesis, I furthermore argue that to run a computer simulation is to execute the reconstructing argument. I discuss some objections and reject the view that computer simulations produce knowledge because they are experiments. I conclude by comparing thought experiments and computer simulations, assuming that both are arguments.
Computer simulations and experiments share many important features. One way of explaining the similarities is to say that computer simulations just are experiments. This claim is quite popular in the literature. The aim of this paper is to argue against the claim and to develop an alternative explanation of why computer simulations resemble experiments. To this purpose, an experiment is characterized in terms of an intervention on a system and the observation of the reaction. Thus, if computer simulations are experiments, either the computer hardware or the target system must be intervened on and observed. I argue against the first option using the non-observation argument, among others. The second option is excluded by e.g. the over-control argument, which stresses epistemological differences between experiments and simulations. To account for the similarities between experiments and computer simulations, I propose to say that computer simulations can model possible experiments and in fact often do so.
If cosmology is to obtain knowledge about the whole universe, it faces an underdetermination problem: alternative space-time models are compatible with our evidence. The problem can be avoided, though, if there are good reasons to adopt the Cosmological Principle (CP), because, assuming the principle, one can confine oneself to the small class of homogeneous and isotropic space-time models. The aim of this paper is to ask whether there are good reasons to adopt the Cosmological Principle in order to avoid underdetermination in cosmology. Various strategies to justify the CP are examined: for instance, arguments to the effect that the truth of the CP follows generically from a large set of initial conditions, an inference to the best explanation, and an inductive strategy are assessed. I conclude that a convincing justification of the CP has not yet been established, but this claim is contingent on a number of results that may have to be revised in the future.
Monte Carlo simulations arrive at their results by introducing randomness, sometimes derived from a physical randomizing device. Nonetheless, we argue, they open no new epistemic channel beyond the one already employed by traditional simulations: the inference by ordinary argumentation of conclusions from assumptions built into the simulations. We show that Monte Carlo simulations cannot produce knowledge other than by inference, and that they resemble other computer simulations in the manner in which they derive their conclusions. Simple examples of Monte Carlo simulations are analysed to identify the underlying inferences.
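A minimal sketch of the kind of simple Monte Carlo simulation the abstract alludes to (my own example, not one from the paper): the estimate of pi below follows, by ordinary probabilistic argumentation, from the assumptions built into the code (uniform sampling over the unit square and the area of the quarter circle), rather than from any new epistemic channel.

import random

def estimate_pi(n_samples: int = 1_000_000, seed: int = 42) -> float:
    """Estimate pi by uniform sampling over the unit square.

    The fraction of sampled points that fall inside the quarter circle
    of radius 1 approximates its area, pi/4. The result is thus inferred
    from the assumptions coded into the sampler, not observed in nature.
    """
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n_samples

if __name__ == "__main__":
    print(estimate_pi())  # close to 3.14159 for large n_samples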
This unique volume introduces and discusses the methods of validating computer simulations in scientific research. The core concepts, strategies, and techniques of validation are explained by an international team of pre-eminent authorities, drawing on expertise from various fields ranging from engineering and the physical sciences to the social sciences and history. The work also offers new and original philosophical perspectives on the validation of simulations. Topics and features: introduces the fundamental concepts and principles related to the validation of computer simulations, and examines philosophical frameworks for thinking about validation; provides an overview of the various strategies and techniques available for validating simulations, as well as the preparatory steps that have to be taken prior to validation; describes commonly used reference points and mathematical frameworks applicable to simulation validation; reviews the legal prescriptions, and the administrative and procedural activities related to simulation validation; presents examples of best practice that demonstrate how methods of validation are applied in various disciplines and with different types of simulation models; covers important practical challenges faced by simulation scientists when applying validation methods and techniques; offers a selection of general philosophical reflections that explore the significance of validation from a broader perspective. This truly interdisciplinary handbook will appeal to a broad audience, from professional scientists spanning all natural and social sciences, to young scholars new to research with computer simulations. Philosophers of science, and methodologists seeking to increase their understanding of simulation validation, will also find much to benefit from in the text.
If we are to constrain our place in the world, two principles are often appealed to in science. According to the Copernican Principle, we do not occupy a privileged position within the Universe. The Cosmological Principle, on the other hand, says that our observations would be roughly the same if we were located at any other place in the Universe. In our paper we analyze these principles from a logical and philosophical point of view. We show how they are related, how they can be supported and what use is made of them. Our main results are: 1. There is a logical gap between the two principles insofar as the Cosmological Principle is significantly stronger than the Copernican Principle. 2. A step that is often taken to establish the Cosmological Principle on the basis of the Copernican Principle and observations is not incontestable as it stands, but it can be supplemented with a different argument. 3. The Cosmological Principle might be crucial for cosmology to the extent that it is not supported by empirical evidence.
Computer simulations are often claimed to be opaque and thus to lack transparency. But what exactly is the opacity of simulations? This paper aims to answer that question by proposing an explication of opacity. Such an explication is needed, I argue, because the pioneering definition of opacity by P. Humphreys and a recent elaboration by Durán and Formanek are too narrow. While it is true that simulations are opaque in that they include too many computations and thus cannot be checked by hand, this doesn’t exhaust what we might want to call the opacity of simulations. I thus make a fresh start with the natural idea that the opacity of a method is its disposition to resist knowledge and understanding. I draw on recent work on understanding and elaborate the idea by a systematic investigation into what type of knowledge and what type of understanding are required if opacity is to be avoided and why the required sort of understanding, in particular, is difficult to achieve. My proposal is that a method is opaque to the degree that it’s difficult for humans to know and to understand why its outcomes arise. This proposal allows for a comparison between different methods regarding opacity. It further refers to a kind of epistemic access that is important in scientific work with simulations.
N. Bostrom’s simulation argument and two additional assumptions imply that we likely live in a computer simulation. The argument is based upon the following assumption about the workings of realistic brain simulations: The hardware of a computer on which a brain simulation is run bears a close analogy to the brain itself. To inquire whether this is so, I analyze how computer simulations trace processes in their targets. I describe simulations as fictional, mathematical, pictorial, and material models. Even though the computer hardware does provide a material model of the target, this does not suffice to underwrite the simulation argument because the ways in which parts of the computer hardware interact during simulations do not resemble the ways in which neurons interact in the brain. Further, there are computer simulations of all kinds of systems, and it would be unreasonable to infer that some computers display consciousness just because they simulate brains rather than, say, galaxies.
What is the status of a cat in a virtual reality environment? Is it a real object? Or part of a fiction? Virtual realism, as defended by D. J. Chalmers, takes it to be a virtual object that really exists, that has properties and is involved in real events. His preferred specification of virtual realism identifies the cat with a digital object. The project of this paper is to use a comparison between virtual reality environments and scientific computer simulations to critically engage with Chalmers’s position. I first argue that, if it is sound, his virtual realism should also be applied to objects that figure in scientific computer simulations, e.g. to simulated galaxies. This leads to a slippery slope because it implies an unreasonable proliferation of digital objects. A philosophical analysis of scientific computer simulations suggests an alternative picture: The cat and the galaxies are parts of fictional models for which the computer provides model descriptions. This result motivates a deeper analysis of the way in which Chalmers builds up his realism. I argue that he buys realism too cheaply. For instance, he does not really specify what virtual objects are supposed to be. As a result, rhetoric aside, his virtual realism isn’t far from a sort of fictionalism.
Many results of modern physics—those of quantum mechanics, for instance—come in a probabilistic guise. But what do probabilistic statements in physics mean? Are probabilities matters of objective fact and part of the furniture of the world, as objectivists think? Or do they only express ignorance or belief, as Bayesians suggest? And how are probabilistic hypotheses justified and supported by empirical evidence? Finally, what does the probabilistic nature of physics imply for our understanding of the world? This volume is the first to provide a philosophical appraisal of probabilities in all of physics. Its main aim is to make sense of probabilistic statements as they occur in the various physical theories and models and to provide a plausible epistemology and metaphysics of probabilities. The essays collected here consider statistical physics, probabilistic modelling, and quantum mechanics, and critically assess the merits and disadvantages of objectivist and subjectivist views of probabilities in these fields. In particular, the Bayesian and Humean views of probabilities and the varieties of Boltzmann's typicality approach are examined. The contributions on quantum mechanics discuss the special character of quantum correlations, the justification of the famous Born Rule, and the role of probabilities in a quantum field theoretic framework. Finally, the connections between probabilities and foundational issues in physics are explored. The Reversibility Paradox, the notion of entropy, and the ontology of quantum mechanics are discussed. Other essays consider Humean supervenience and the question whether the physical world is deterministic.
The most promising accounts of ontic probability include the Spielraum conception of probabilities, which can be traced back to J. von Kries and H. Poincaré, and the best system account by D. Lewis. This paper aims at comparing both accounts and at combining them to obtain the best of both worlds. The extensions of Spielraum and best system probabilities do not coincide because the former only apply to systems with a special dynamics. Conversely, Spielraum probabilities may not be part of the best system, e.g. if a broad class of random devices is not often used. Spielraum probabilities have the potential to provide illuminating explanations of frequencies with which outcomes of trials on gambling devices arise. They ultimately fail to account for such frequencies, though, because they are compatible with frequencies that grossly differ from the values of the respective probabilities. I thus follow recent proposals by M. Strevens and M. Abrams and strengthen the definition of Spielraum probabilities by adding a further condition that restricts some actual frequencies. The resulting account remains limited in scope, but it withstands objections raised against it in the recent literature: it is neither circular nor based upon arbitrary choices of a measure or of physical variables. Nor does it lack any explanatory power.
In physics, objects are often assigned vector characteristics such as a specific velocity. How can this be understood from a metaphysical point of view – is assigning a vector characteristic to an object to attribute an intrinsic property to it? As a short review of Newtonian, special relativistic and general relativistic physics shows, if we wish to assign some object a vector characteristic, we have to relate it to something – call it S. If S is to be different from the original object – and I argue for this assumption – then having the specific vector characteristic is not an intrinsic property under different readings of “intrinsic”. For instance, I argue that lonely objects with different orientations do not constitute metaphysically distinct possibilities. There are, however, difficulties in applying the loneliness test, and they concern the role of space-time.
Statistical physicists assume a probability distribution over micro-states to explain thermodynamic behavior. The question of this paper is whether these probabilities are part of a best system and can thus be interpreted as Humean chances. I consider two Boltzmannian accounts of the Second Law, viz. a globalist and a localist one. In both cases, the probabilities fail to be chances because they have rivals that are roughly equally good. I conclude with the diagnosis that well-defined micro-probabilities underestimate the robust character of explanations in statistical physics.
We develop a utilitarian framework to assess different decision rules for the European Council of Ministers. The proposals to be decided on are conceptualized as utility vectors and a probability distribution is assumed over the utilities. We first show what decision rules yield the highest expected utilities for different means of the probability distribution. For proposals with high mean utility, simple benchmark rules (such as majority voting with proportional weights) tend to outperform rules that have been proposed in the political arena. For proposals with low mean utility, it is the other way round. We then compare the expected utilities for smaller and larger countries and look for Pareto-dominance relations. Finally, we provide an extension of the model, discuss its restrictions, and compare our approach with assessments of decision rules that are based on the Penrose measure of voting power.
This chapter clarifies the concept of validation of computer simulations by comparing various definitions that have been proposed for the notion. While the definitions agree in taking validation to be an evaluation, they differ on the following questions: What exactly is evaluated – results from a computer simulation, a model, a computer code? What are the standards of evaluation – truth, accuracy, and credibility, or also something else? What type of verdict does validation lead to – that the simulation is good to such-and-such a degree, or that it passes a test defined by a certain threshold? How strong does the case for the verdict need to be? Does validation necessarily proceed by comparing simulation outputs with measured data? Along with these questions, the chapter explains notions that figure prominently in them, e.g., the concepts of accuracy and credibility. It further discusses natural answers to the questions as well as arguments that speak for and against these answers. The aim is to obtain a better understanding of the options we have for defining validation and of how they are related to each other.
Verification and validation are methods with which computer simulations are tested. While many practitioners draw a clear line between verification and validation and demand that the former precede the latter, some philosophers have suggested that the distinction has been exaggerated. This chapter clarifies the relationship between verification and validation. Regarding the latter, validation of the conceptual model and validation of the computational model are distinguished. I argue that, as a method, verification is clearly different from validation of either of the models. However, the methods are related to each other as follows: If we allow that the validation of the computational model need not include the comparison between simulation output and measured data, then the computational model may be validated by validating the conceptual model independently and by verifying the simulation with respect to it. This is often not realistic, however, because, in most cases, the conceptual model cannot be validated independently of the simulation. In such cases, the computational model is verified with the aim of using it as an appropriate substitute for the conceptual model. Then simulation output is compared to measured data to validate both the computational and the conceptual model. I analyze the underlying inferences and argue that they require some prior confidence in the conceptual model and in verification. This suggests that verification should precede validation that proceeds via a comparison between simulation output and measured data. Recent arguments to the effect that the distinction between verification and validation is not clear-cut do not refute these results, or so I argue against the philosopher E. Winsberg.
We consider a decision board with representatives who vote on proposals on behalf of their constituencies. We look for decision rules that realize utilitarian and egalitarian ideals. We set up a simple model and obtain roughly the following results. If the interests of people from the same constituency are uncorrelated, then a weighted rule with square-root weights does best in terms of both ideals. If there are perfect correlations, then the utilitarian ideal requires proportional weights, whereas the egalitarian ideal requires equal weights. We investigate correlations that lie in between these extremes and provide analytic arguments to connect our results to Barberà and Jackson (2006) and to Banzhaf voting power.
We provide welfarist evaluations of decision rules for federations of states and consider models under which the interests of people from different states are stochastically dependent. We concentrate on two welfarist standards; they require that the expected utility for the federation be maximized or that the expected utilities for people from different states be equal. We discuss an analytic result that characterizes the decision rule with maximum expected utility, set up a class of models that display interstate dependencies and run simulations for different dependency scenarios in the European Union. We find that, under positive correlations, the welfare distribution tends to be less sensitive to the choice of the decision rule, whereas it can be important under negative correlations. The results that Beisbart and Bovens (SCW 29, p. 581, 2007) have found for two types of models without interstate dependencies are relatively stable. There are exceptions, though, under which the way the welfare distribution is shaped by a decision rule is significantly affected by dependencies.
Cosmological questions (e.g., how far the world extends and how it all began) have occupied humans for ages and given rise to numerous conjectures, both within and outside philosophy. To put to rest fruitless speculation, Kant argued that these questions move beyond the limits of human knowledge. This article begins with Kant’s doubts about cosmology and shows that his arguments presuppose unreasonably high standards on knowledge and unwarranted assumptions about space-time. As an analysis of the foundations of twentieth-century cosmology reveals, other worries about the discipline can be avoided too if the universe is modeled using Einstein’s general theory of relativity. There is now strong observational support for one particular model. However, due to underdetermination problems, the big cosmological questions cannot be fully answered using this model either. This opens the space for more speculative proposals again (e.g., that the universe is only part of a huge multiverse).
This paper details the content and structure of the folk concept of life, and discusses its relevance for scientific research on life. In four empirical studies, we investigate which features of life are considered salient, universal, central, and necessary. Functionings, such as nutrition and reproduction, but not material composition, turn out to be salient features commonly associated with living beings (Study 1). By contrast, being made of cells is considered a universal feature of living species (Study 2), a central aspect of life (Study 3), and our best candidate for being necessary for life (Study 4). These results are best explained by the hypothesis that people take life to be a natural kind subject to scientific scrutiny.
The voting power of a voter—the extent to which she can affect the outcome of a collective decision—is often quantified in terms of the probability that she is critical. This measure is extended to a series of power measures of different ranks. The measures quantify the extent to which a voter can be part of a group that can jointly make a difference as to whether a bill passes or not. It is argued that this series of measures allows for a more appropriate assessment of voting power, particularly of a posteriori voting power in case the votes are stochastically dependent. Also, the new measures discriminate between voting games that cannot be distinguished in terms of the probability of criticality alone.
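For orientation, a minimal sketch (mine, not the paper's) of the baseline notion this abstract starts from: the probability that an individual voter is critical in a weighted voting game, computed by enumeration under the assumption of independent, equiprobable votes. The paper's higher-rank measures for groups and its treatment of stochastically dependent votes are not reproduced here.

from itertools import product

def criticality_probability(weights, quota):
    """Probability that each voter is critical, i.e. that reversing her
    vote would flip the outcome, assuming independent yes/no votes with
    probability 1/2 each (Banzhaf-style enumeration)."""
    n = len(weights)
    critical = [0] * n
    for profile in product([0, 1], repeat=n):  # 0 = no, 1 = yes
        yes_weight = sum(w for w, v in zip(weights, profile) if v)
        passes = yes_weight >= quota
        for i, w in enumerate(weights):
            flipped = yes_weight + (w if profile[i] == 0 else -w)
            if (flipped >= quota) != passes:
                critical[i] += 1
    return [c / 2 ** n for c in critical]

# Example: three voters with weights 3, 2 and 1, quota 4.
print(criticality_probability([3, 2, 1], quota=4))  # [0.75, 0.25, 0.25]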
What do we mean to say when we call some person a good business manager? And where do the criteria by which we judge people to be good business managers come from? I answer these questions by drawing on von Wright's distinction between several varieties of goodness. We can then discriminate between instrumental, technical and moral senses of the expression “to be a good business manager”. The first two senses presume that business managers have a characteristic task or that they engage in typical activities. It is natural to specify this task and these activities by appealing to goods and to the good of a business. These goods may then be identified as goods for people. Insofar as the welfare of people is of moral value, we obtain a morally loaded account of business management. I argue against this account and plead for a more sober understanding of economic goods and of the good of a business. The latter can be understood in analogy to the good of an individual person, but it is largely determined by the goals of the founders and owners. Economic goods are things that are wanted by some people and thus exchanged on markets. If this is so, then there is no direct conceptual path from instrumentally or technically good business managers to morality, even though there are certain affinities. To conclude, I trace the consequences for our understanding of business ethics.
Ever since the work of Paul Feyerabend, Russell Hanson and Thomas Kuhn in the 1960s, the thesis of the theory-ladenness of scientific observation has attracted much attention both in the philosophy and in the sociology of science. The main concern has always been epistemic. It was argued, or feared, that if scientific observations depend on prevalent theories, an objective empirical test of theories and hypotheses by independent observation and experience is impossible. Theories might then appear to be well confirmed by observation without it being likely that they are largely true or empirically adequate. While some philosophers like Ian Hacking have argued that serious theory-dependence is less common than often assumed, sociologists such as David Bloor, Stephen Shapin, Karin Knorr-Cetina and Harry Collins have based their constructivist programs for the sociology of science on strong claims of theory-ladenness.
We construct a new measure of voting power that yields reasonable measurements even if the individual votes are not cast independently. Our measure hinges on probabilities of counterfactuals, such as the probability that the outcome of a collective decision would have been yes, had a voter voted yes rather than no as she did in the real world. The probabilities of such counterfactuals are calculated on the basis of causal information, following the approach by Balke and Pearl. Opinion leaders whose votes have causal influence on other voters' votes can have significantly more voting power under our measure. But the new measure of voting power is also sensitive to the voting rule. We show that our measure can be regarded as an average treatment effect, we provide examples in which it yields intuitively plausible results and we prove that it reduces to Banzhaf voting power in the limiting case of independent and equiprobable votes.
Many questions about the fundamentals of some area take the form “What is …?” It does not come as a surprise, then, that, at the dawn of Western philosophy, Socrates asked what piety, courage, and justice are. Nor is it a wonder that the philosophical preoccupation with computer simulations has centered, among other things, on the question of what computer simulations are. Very often, this question has been answered by stating that computer simulation is a species of a well-known method, e.g., experimentation. Other answers claim at least a close relationship between computer simulation and another method. In any case, correct answers to the question of what a computer simulation is should help us to better understand what validation of simulations is. The aim of this chapter is to discuss the most important proposals to understand computer simulation in terms of another method and to trace the consequences for validation. Although it has sometimes been claimed that computer simulations are experiments, there are strong reasons to reject this view. A more appropriate proposal is to say that computer simulations often model experiments. This implies that simulation scientists should to some extent imitate the validation of an experiment. But the validation of computer simulations turns out to be more comprehensive. Computer simulations have also been conceptualized as thought experiments or close cousins of the latter. This seems true, but it is not very telling, since thought experiments are not a standard method and since it is controversial how they contribute to our acquisition of knowledge. I thus consider a specific view on thought experiments to make some progress on understanding simulations and their validation. There is finally a close connection between computer simulation and modeling, and it can be shown that the validation of a computer simulation is the validation of a specific model, which may be thought of either as mathematical or as fictional.
The mean majority deficit in a two-tier voting system is a function of the partition of the population. We derive a new square-root rule: For odd-numbered population sizes and equipopulous units the mean majority deficit is maximal when the member size of the units in the partition is close to the square root of the population size. Furthermore, within the partitions into roughly equipopulous units, partitions with small even numbers of units or small even-sized units yield high mean majority deficits. We discuss the implications for the winner-takes-all system in the US Electoral College.
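A rough Monte Carlo sketch of the quantity at issue, under the standard assumptions I take the abstract to presuppose (independent, equiprobable votes, equipopulous units, simple majority at both tiers); it is my own illustration, not the paper's analytic derivation, but it lets one compare partitions of a fixed population numerically.

import random

def mean_majority_deficit(n_units, unit_size, trials=20_000, seed=1):
    """Monte Carlo estimate of the mean majority deficit of a two-tier
    system: each citizen votes yes/no with probability 1/2, each unit
    casts a block vote according to its internal majority, and the
    federation decides by majority of block votes. The deficit of a
    profile is the size of the popular majority camp minus the number
    of citizens who agree with the two-tier outcome."""
    rng = random.Random(seed)
    population = n_units * unit_size
    total = 0.0
    for _ in range(trials):
        unit_yes = [sum(rng.random() < 0.5 for _ in range(unit_size))
                    for _ in range(n_units)]
        yes_total = sum(unit_yes)
        outcome_is_yes = sum(y > unit_size / 2 for y in unit_yes) > n_units / 2
        agreeing = yes_total if outcome_is_yes else population - yes_total
        total += max(yes_total, population - yes_total) - agreeing
    return total / trials

# Partitions of 225 citizens; the deficit peaks near unit size sqrt(225) = 15.
for units, size in [(9, 25), (15, 15), (25, 9)]:
    print(units, "units of size", size, "->",
          round(mean_majority_deficit(units, size), 2))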
What is it to judge something to be a natural end? And what objects may properly be judged natural ends? These questions pose a challenge because the predicates “natural” and “end” seemingly cannot be instantiated at the same time – at least given some Kantian assumptions. My paper defends the thesis that Kant’s “Critique of Teleological Judgment” nevertheless provides a sensible account of judging something a natural end. On this account, a person judges an object O a natural end if she thinks that the parts of O cause O and if she is committed to approaching O in a top-down manner, as if the parts were produced in view of the whole. The account is non-realist because it involves a commitment. With the account comes a characterization that provides necessary and sufficient conditions on objects that may properly be judged natural ends. My paper reconstructs the argument in CTJ, §§64-65, where the account and the characterization are derived.
The choice of a social decision rule for a federal assembly affects the welfare distribution within the federation. But which decision rules can be recommended on welfarist grounds? In this paper, we focus on two welfarist desiderata, viz. (i) maximizing the expected utility of the whole federation and (ii) equalizing the expected utilities of people from different states in the federation. We consider the European Union as an example, set up a probabilistic model of decision making and explore how different decision rules fare with regard to the desiderata. We start with a default model, where the interests, and therefore the votes, of the different states are not correlated. This default model is then abandoned in favor of models with correlations. We perform computer simulations and find that decision rules with a low acceptance threshold generally do better in terms of desideratum (i), whereas the rules presented in the Accession Treaty and in the (still unratified) Constitution of the European Union tend to do better in terms of desideratum (ii). The ranking obtained regarding desideratum (i) is fairly stable across different correlation patterns.
Kant famously thought that mathematics contains synthetic a priori truths. In his book, Wille defends a version of the Kantian thesis on not-so-Kantian grounds. Wille calls his account neo-Kantian because it makes sense of Kantian tenets by using a methodology that takes the linguistic and pragmatic turns seriously. Wille's work forms part of a larger project in which the statuses of mathematics and proof theory are investigated. The official purpose of the present book is to answer the question of what mathematics is. Wille sets himself the task of finding a definition that enables him to distinguish between mathematics and proof theory. His solution reads roughly as follows: mathematics is about how to generate synthetic a priori knowledge by acting within some calculus. This definition does not seem to be a promising starting point, because in this way a very controversial claim becomes true by definition. However, Wille's strategy is to make sense of his definition during the course of his argument and to show that the definition is appropriate to research that is commonly taken to be mathematics. The second part of the book explains what ‘acting within some calculus’ amounts to. In Wille's view, mathematics should be characterized in ….
Bayesian epistemology offers a powerful framework for characterizing scientific inference. Its basic idea is that rational belief comes in degrees that can be measured in terms of probabilities. The axioms of the probability calculus and a rule for updating emerge as constraints on the formation of rational belief. Bayesian epistemology has led to useful explications of notions such as confirmation. It thus is natural to ask whether Bayesian epistemology offers a useful framework for thinking about the inferences implicit in the validation of computer simulations. The aim of this chapter is to answer this question. Bayesian epistemology is briefly summarized and then applied to validation. Updating is shown to form a viable method for data-driven validation. Bayesians can also express how a simulation obtains prior credibility because the underlying conceptual model is credible. But the impact of this prior credibility is indirect since simulations at best provide partial and approximate solutions to the conceptual model. Fortunately, this gap between the simulations and the conceptual model can be addressed using what we call Bayesian verification. The final part of the chapter systematically evaluates the use of Bayesian epistemology in validation, e.g., by comparing it to a falsificationist approach. It is argued that Bayesian epistemology goes beyond mere calibration and that it can provide the foundations for a sound evaluation of computer simulations.
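As a toy illustration of the data-driven updating described in this abstract (my own numbers, not the chapter's): Bayes' rule applied to the hypothesis that a simulation is valid, i.e. accurate enough for its intended use, with the prior credibility and the likelihoods of a successful comparison with measured data chosen arbitrarily for the example.

def bayes_update(prior: float, likelihood_if_valid: float,
                 likelihood_if_invalid: float) -> float:
    """Posterior probability that the simulation is valid, after observing
    that its output agrees with measured data within the accepted tolerance."""
    evidence = (likelihood_if_valid * prior
                + likelihood_if_invalid * (1.0 - prior))
    return likelihood_if_valid * prior / evidence

# Start from a prior credence derived from the conceptual model, then
# update sequentially on three successful data comparisons.
credence = 0.5
for _ in range(3):
    credence = bayes_update(credence, likelihood_if_valid=0.9,
                            likelihood_if_invalid=0.3)
    print(round(credence, 3))  # 0.75, 0.9, 0.964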
To provide an introduction to this book, we explain the motivation to publish this volume, state its main goal, characterize its intended readership, and give an overview of its content. To this purpose, we briefly summarize each chapter and put it in the context of the whole volume. We also take the opportunity to stress connections between the chapters. We conclude with a brief outlook. The main motivation to publish this volume was the diagnosis that the validation of computer simulations needs more attention in practice and in theory. The aim of this volume is to improve our understanding of validation. To this purpose, computer scientists, mathematicians, working scientists from various fields, as well as philosophers of science join efforts. They explain basic notions and principles of validation, embed validation in philosophical frameworks such as Bayesian epistemology, detail the steps needed during validation, provide best-practice examples, reflect upon challenges to validation, and put validation in a broader perspective. As we suggest in our outlook, the validation of computer simulations will remain an important research topic that needs cross- and interdisciplinary efforts. A key issue is whether, and if so how, very rigorous approaches to validation that have proven useful in, e.g., engineering can be extended to other disciplines.
Consider the following two seemingly unrelated questions. First, why does Rousseau (1993 [1762]) believe that the formation of factions or partial associations is not conducive to the general will in Du Contrat Social, II, 3? Second, why do federal assemblies typically strive for some form of degressive proportionality, i.e. a balance between equal and proportional representation, for the countries in the federation? We will show that there is a surprising connection between these questions. We turn to our first question. It is often thought that Rousseau’s opposition to factions can be interpreted in reference to the Condorcet Jury Theorem. The Condorcet Jury Theorem states that if voters are more likely to be right on some issue than not and cast their votes independently, then the chance that the majority is correct converges to one as the number of voters goes to infinity. This interpretation is tempting, but it is inconsistent with some crucial textual evidence. We turn to our second question. Consider a federation of countries with a decision-making assembly in which each country casts a block vote. On equal representation, the vote of each country has the same weight. On proportional representation, the weights of the votes are proportional to the countries’ population sizes. In between these extremes, we can let the weights increase as a function of population sizes, but smaller countries receive greater weights and larger countries receive lesser weights than proportionality would warrant. Such weightings are called degressively proportional weightings. How can degressive proportionality be justified? And does this justification provide us any guidance as to how these weights should be set? A correct understanding of Rousseau’s misgivings about the formation of factions will hold the key to these questions.
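A small numerical illustration of the Condorcet Jury Theorem as it is stated in this abstract (my own, purely for orientation): with independent voters who are each correct with probability greater than 1/2, the probability that the majority verdict is correct grows towards one as the number of voters increases.

from math import comb

def majority_correct(n_voters: int, p_correct: float) -> float:
    """Probability that a simple majority of n (odd) independent voters,
    each correct with probability p_correct, reaches the right verdict."""
    threshold = n_voters // 2 + 1
    return sum(comb(n_voters, k)
               * p_correct ** k * (1 - p_correct) ** (n_voters - k)
               for k in range(threshold, n_voters + 1))

for n in (1, 11, 51, 201):
    print(n, round(majority_correct(n, 0.6), 4))  # tends towards 1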
In recent years, the European Union (EU) has repeatedly tried to reform its institutions. When the draft of a European Constitution and later the Treaty of Lisbon were negotiated, one of the most hotly debated points of contention concerned the question of which decision rule the EU Council of Ministers should use when voting. This is a genuinely normative question, so political philosophers and ethicists should be able to contribute something to it. In what follows, we take up this challenge and evaluate alternative decision rules for the EU Council of Ministers. The methods of probabilistic modelling and of the simulation of social processes prove indispensable for this task. This makes clear how simulations can also be employed as a method within applied political philosophy.
This volume offers an integrated understanding of how the theory of general relativity gained momentum after Einstein had formulated it in 1915. Chapters focus on the early reception of the theory in physics and philosophy and on the systematic questions that emerged shortly after Einstein's momentous discovery. They are written by physicists, historians of science, and philosophers, and were originally presented at the conference titled Thinking About Space and Time: 100 Years of Applying and Interpreting General Relativity, held at the University of Bern from September 12-14, 2017. By establishing the historical context first, and then moving into more philosophical chapters, this volume will provide readers with a more complete understanding of early applications of general relativity and of related philosophical issues. Because the chapters are often cross-disciplinary, they cover a wide variety of topics related to the general theory of relativity. These include: heuristics used in the discovery of general relativity; Mach's Principle; the structure of Einstein's theory; cosmology and the Einstein world; the stability of cosmological models; the metaphysical nature of spacetime; the relationship between spacetime and dynamics; the Geodesic Principle; and symmetries. Thinking About Space and Time will be a valuable resource for historians of science and philosophers who seek a deeper knowledge of the uses of general relativity, as well as for physicists and mathematicians interested in exploring the wider historical and philosophical context of Einstein's theory.
Philosophy can help us deal more reasonably with the problems that challenge our society and its self-understanding. To do so, however, philosophy must intervene in public debate and play a greater role in education; this is the position defended by the present volume. The contributions by Anne Burkard, Rainer Hegselmann, Romy Jaster and Markus Wild show, on the one hand, what role philosophy can and should play in public debates. On the other hand, they analyse what contribution philosophy makes to education in schools and universities.