Modeling is an important scientific practice, yet it raises significant philosophical puzzles. Models are typically idealized, and they are often explored via imaginative engagement and at a certain “distance” from empirical reality. These features raise questions such as what models are and how they relate to the world. Recent years have seen a growing discussion of these issues, including a number of views that treat modeling in terms of indirect representation and analysis. Indirect views treat the model as a bona fide object, specified by the modeler and used to represent and reason about some portion of the concrete empirical world. On some indirect views, model systems are abstract entities, such as mathematical structures, while on other views they are concrete hypothetical things. Here I assess these views and offer a novel account of models. I argue that regarding models as abstracta results in some significant tensions with the practice of modeling, especially in areas where non-mathematical models are common. Furthermore, viewing models as concrete hypotheticals raises difficult questions about model-world relations. The view I argue for treats models as direct, albeit simplified, representations of targets in the world. I close by suggesting a treatment of model-world relations that draws on recent work by Stephen Yablo concerning the notion of partial truth.
Computational modeling has long been one of the traditional pillars of cognitive science. Unfortunately, the computer models of cognition being developed today have not kept up with the enormous changes that have taken place in computer technology and, especially, in human-computer interfaces. For all intents and purposes, modeling is still done today as it was 25, or even 35, years ago. Everyone still programs in his or her own favorite programming language, source code is rarely made available, accessibility of models to non-programming researchers is essentially non-existent, and even for other modelers, the profusion of source code in a multitude of programming languages, written without programming guidelines, makes models almost impossible to access, check, explore, re-use, or develop further. It is high time to change this situation, especially since the tools are now readily available to do so. We propose that the modeling community adopt three simple guidelines that would ensure that computational models would be accessible to the broad range of researchers in cognitive science. We further emphasize the pivotal role that journal editors must play in making computational models accessible to readers of their journals.
The goal of this article is to address the problem of inconsistent models and the challenge it poses for perspectivism. I analyze the argument, draw attention to some hidden premises behind it, and deflate them. Then I introduce the notion of perspectival models as a distinctive class of modeling practices whose primary function is exploratory. I illustrate perspectival modeling with two examples taken from contemporary high-energy physics at the Large Hadron Collider at the European Organization for Nuclear Research, which are designed to show how a plurality of seemingly incompatible models is methodologically crucial to advance the realist quest in cutting-edge areas of scientific inquiry.
Experimental modeling in biology involves the use of living organisms (not necessarily so-called "model organisms") in order to model or simulate biological processes. I argue here that experimental modeling is a bona fide form of scientific modeling that plays an epistemic role that is distinct from that of ordinary biological experiments. What distinguishes them from ordinary experiments is that they use what I call "in vivo representations" where one kind of causal process is used to stand in for a physically different kind of process. I discuss the advantages of this approach in the context of evolutionary biology.
This paper applies Causal Modeling Semantics (CMS; e.g., Galles and Pearl 1998; Pearl 2000; Halpern 2000) to the evaluation of the probability of counterfactuals with disjunctive antecedents. Standard CMS is limited to evaluating (the probability of) counterfactuals whose antecedent is a conjunction of atomic formulas. We extend this framework to disjunctive antecedents, and more generally, to any Boolean combination of atomic formulas. Our main idea is to assign a probability to a counterfactual (A ∨ B) > C at a causal model M by looking at the probability of C in those submodels that truthmake A ∨ B (Briggs 2012; Fine 2016, 2017). The probability p((A ∨ B) > C) is then calculated as the average of the probability of C in the truthmaking submodels, weighted by the inverse distance to the original model M. The latter is calculated on the basis of a proposal by Eva et al. (2019). Apart from solving a major problem in the research on counterfactuals, our paper shows how work in semantics, causal inference, and formal epistemology can be fruitfully combined.
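Spelling out the recipe just described (a schematic reconstruction; the notation $\mathcal{T}(A \lor B)$ for the set of submodels of $M$ truthmaking $A \lor B$ is mine, not the authors'):

$$
p\big((A \lor B) > C\big) \;=\; \sum_{M_i \in \mathcal{T}(A \lor B)} w_i \, P_{M_i}(C),
\qquad
w_i \;=\; \frac{d(M, M_i)^{-1}}{\sum_{j} d(M, M_j)^{-1}},
$$

where $d(M, M_i)$ is the distance between $M$ and the truthmaking submodel $M_i$, computed along the lines of Eva et al. (2019); normalizing the inverse distances ensures that submodels closer to the original model count for more.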
The fate of optimality modeling is typically linked to that of adaptationism: the two are thought to stand or fall together (Gould and Lewontin, Proc R Soc Lond B 205:581–598, 1979; Orzack and Sober, Am Nat 143(3):361–380, 1994). I argue here that this is mistaken. The debate over adaptationism has tended to focus on one particular use of optimality models, which I refer to here as their strong use. The strong use of an optimality model involves the claim that selection is the only important influence on the evolutionary outcome in question and is thus linked to adaptationism. However, biologists seldom intend this strong use of optimality models. One common alternative that I term the weak use simply involves the claim that an optimality model accurately represents the role of selection in bringing about the outcome. This and other weaker uses of optimality models insulate the optimality approach from criticisms of adaptationism, and they account for the prominence of optimality modeling (broadly construed) in population biology. The centrality of these uses of optimality models ensures a continuing role for the optimality approach, regardless of the fate of adaptationism.
Intellectualists about knowledge how argue that knowing how to do something is knowing the content of a proposition (i.e., a fact). An important component of this view is the idea that propositional knowledge is translated into behavior when it is presented to the mind in a peculiarly practical way. Until recently, however, intellectualists have not said much about what it means for propositional knowledge to be entertained under thought's practical guise. Carlotta Pavese fills this gap in the intellectualist view by modeling practical modes of thought after Fregean senses. In this paper, I take up her model and the presuppositions it is built upon, arguing that her view of practical thought is not positioned to account for much of what human agents are able to do.
Inquiries into the nature of scientific modeling have tended to focus their attention on mathematical models and, relatedly, to think of nonconcrete models as mathematical structures. The arguments of this article are arguments for rethinking both tendencies. Nonmathematical models play an important role in the sciences, and our account of scientific modeling must accommodate that fact. One key to making such accommodations, moreover, is to recognize that one kind of thing we use the term ‘model’ to refer to is a collection of propositions.
The Lotka–Volterra predator–prey model is a widely known example of model-based science. Here we reexamine Vito Volterra’s and Umberto D’Ancona’s original publications on the model, and in particular their methodological reflections. On this basis we develop several ideas pertaining to the philosophical debate on the scientific practice of modeling. First, we show that Volterra and D’Ancona chose modeling because the problem at hand could not be approached by more direct methods such as causal inference. This suggests a philosophically insightful motivation for choosing the strategy of modeling. Second, we show that the development of the model follows a trajectory from a “how possibly” to a “how actually” model. We discuss how and to what extent Volterra and D’Ancona were able to advance their model along that trajectory. It turns out they were unable to establish that their model was fully applicable to any system. Third, we consider another instance of model-based science: Darwin’s model of the origin and distribution of coral atolls in the Pacific Ocean. Darwin argued more successfully that his model faithfully represents the causal structure of the target system, and hence that it is a “how actually” model.
Conscious experiences are characterized by mental qualities, such as those involved in seeing red, feeling pain, or smelling cinnamon. The standard framework for modeling mental qualities represents them via points in geometrical spaces, where distances between points inversely correspond to degrees of phenomenal similarity. This paper argues that the standard framework is structurally inadequate and develops a new framework that is more powerful and flexible. The core problem for the standard framework is that it cannot capture precision structure: for example, consider the phenomenal contrast between seeing an object as crimson in foveal vision versus merely as red in peripheral vision. The solution I favor is to model mental qualities using regions, rather than points. I explain how this seemingly simple formal innovation not only provides a natural way of modeling precision, but also yields a variety of further theoretical fruits: it enables us to formulate novel hypotheses about the space and structures of mental qualities, formally differentiate two dimensions of phenomenal similarity, generate a quantitative model of the phenomenal sorites, and define a measure of discriminatory grain. A noteworthy consequence is that the structure of the mental qualities of conscious experiences is fundamentally different from the structure of the perceptible qualities of external objects.
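As a purely illustrative rendering of the region proposal (the class, coordinates, and measures below are my assumptions, not the paper's formalism), a quality can be modeled as a ball in quality space, with similarity read off the distance between centers and precision read off region size:

```python
import math

class QualityRegion:
    """A mental quality modeled as a ball in quality space:
    a center (the quality's location) plus a radius (its imprecision)."""
    def __init__(self, center, radius):
        self.center = center
        self.radius = radius

    def similarity_to(self, other):
        # Degrees of phenomenal similarity inversely track the distance
        # between region centers, as in the point-based framework.
        d = math.dist(self.center, other.center)
        return 1.0 / (1.0 + d)

    def precision(self):
        # Smaller regions mean more precise qualities: foveal crimson
        # gets a tighter region than peripheral "red".
        return 1.0 / (1.0 + self.radius)

# Foveal crimson: a small region around a specific hue coordinate.
foveal_crimson = QualityRegion(center=(0.85, 0.20, 0.20), radius=0.02)
# Peripheral red: the same neighborhood, but a much larger region.
peripheral_red = QualityRegion(center=(0.80, 0.25, 0.25), radius=0.30)

print(foveal_crimson.similarity_to(peripheral_red))           # high similarity
print(foveal_crimson.precision() > peripheral_red.precision())  # True
```

The point-based framework drops out as the special case where every radius is zero, which is one way to see why regions are strictly more expressive.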
The optimality approach to modeling natural selection has been criticized by many biologists and philosophers of biology. For instance, Lewontin (1979) argues that the optimality approach is a shortcut that will be replaced by models incorporating genetic information, if and when such models become available. In contrast, I think that optimality models have a permanent role in evolutionary study. I base my argument for this claim on what I think it takes to best explain an event. In certain contexts, optimality and game-theoretic models best explain some central types of evolutionary phenomena.
Optimization models have often been useful in attempting to understand the adaptive significance of behavioral traits. Originally such models were applied to isolated aspects of behavior, such as foraging, mating, or parental behavior. In reality, organisms live in complex, ever-changing environments, and are simultaneously concerned with many behavioral choices and their consequences. This target article describes a dynamic modeling technique that can be used to analyze behavior in a unified way. The technique has been widely used in behavioral studies of insects, fish, birds, mammals, and other organisms. The models use biologically meaningful parameters and variables, and lead to testable predictions. Limitations arise because nature's complexity always exceeds our modeling capacity.
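For a flavor of the technique, here is a minimal backward-induction sketch of a dynamic state variable model (my toy parameterization in the Mangel and Clark tradition, not an example from the article): an animal chooses at each time step between a safe and a risky foraging option given its energy reserves.

```python
# Minimal dynamic state variable model: backward induction over
# energy reserves x and time t. All parameters are illustrative.
T = 10          # time horizon
X_MAX = 20      # maximum energy reserves
CRITICAL = 3    # reserves at or below this level mean death

# Foraging options: (probability of finding food, energy gain,
# predation risk). The risky patch yields more but is more dangerous.
OPTIONS = {
    "safe":  (0.4, 2, 0.00),
    "risky": (0.6, 4, 0.02),
}
COST = 1  # metabolic cost per period

# F[t][x]: probability of surviving to time T with reserves x at time t.
F = [[0.0] * (X_MAX + 1) for _ in range(T + 1)]
policy = [[None] * (X_MAX + 1) for _ in range(T)]

# Terminal condition: alive at T iff reserves exceed the critical level.
for x in range(X_MAX + 1):
    F[T][x] = 1.0 if x > CRITICAL else 0.0

for t in range(T - 1, -1, -1):
    for x in range(X_MAX + 1):
        if x <= CRITICAL:
            continue  # dead states keep F = 0.0
        best, best_opt = -1.0, None
        for name, (p_food, gain, risk) in OPTIONS.items():
            x_fed = min(x - COST + gain, X_MAX)
            x_unfed = x - COST
            v = (1 - risk) * (p_food * F[t + 1][x_fed]
                              + (1 - p_food) * F[t + 1][x_unfed])
            if v > best:
                best, best_opt = v, name
        F[t][x] = best
        policy[t][x] = best_opt

# With these toy numbers the optimal policy is state-dependent:
# hungry animals tend to accept the risky patch, sated ones play safe.
print(policy[0][5], policy[0][18])
```

The unification the article describes comes from the state variable: foraging, predation risk, and time pressure all enter one optimization instead of being modeled in isolation.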
Since the introduction of mathematical population genetics, its machinery has shaped our fundamental understanding of natural selection. Selection is taken to occur when differential fitnesses produce differential rates of reproductive success, where fitnesses are understood as parameters in a population genetics model. To understand selection is to understand what these parameter values measure and how differences in them lead to frequency changes. I argue that this traditional view is mistaken. The descriptions of natural selection rendered by population genetics models are in general neither predictive nor explanatory and introduce avoidable conceptual confusions. I conclude that a correct understanding of natural selection requires explicitly causal models of reproductive success.
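The traditional picture under discussion can be stated in one line (the standard one-locus haploid recursion of population genetics, included here for reference rather than drawn from the paper): with fitness parameters $w_1, w_2$ and frequency $p_t$ of type 1,

$$
p_{t+1} \;=\; \frac{w_1\, p_t}{w_1\, p_t + w_2\,(1 - p_t)},
$$

so "understanding selection" on this view means understanding what $w_1$ and $w_2$ measure and how their ratio drives the change in $p_t$.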
Naturalistic theories of representation seek to specify the conditions that must be met for an entity to represent another entity. Although these approaches have been relatively successful in certain areas, such as communication theory or genetics, many doubt that they can be employed to naturalize complex cognitive representations. In this essay I identify some of the difficulties for developing a teleosemantic theory of cognitive representations and provide a strategy for accommodating them: to look into models of signaling in evolutionary game theory. I show how these models can be used to formulate teleosemantics and expand it in new directions.
Many in philosophy understand truth in terms of precise semantic values, true propositions. Following Braun and Sider, I say that in this sense almost nothing we say is, literally, true. I take the stand that this account of truth nonetheless constitutes a vitally useful idealization in understanding many features of the structure of language. The Fregean problem discussed by Braun and Sider concerns issues about application of language to the world. In understanding these issues I propose an alternative modeling tool summarized in the idea that inaccuracy of statements can be accommodated by their imprecision. This yields a pragmatist account of truth, but one not subject to the usual counterexamples. The account can also be viewed as an elaborated error theory. The paper addresses some prima facie objections and concludes with implications for how we address certain problems in philosophy.
Unlike any other field, the science of morality has drawn attention from an extraordinarily diverse set of disciplines. An interdisciplinary research program has formed in which economists, biologists, neuroscientists, psychologists, and even philosophers have been eager to provide answers to puzzling questions raised by the existence of human morality. Models and simulations, for a variety of reasons, have played various important roles in this endeavor. Their use, however, has sometimes been deemed useless, trivial, or inadequate. The role of models in the science of morality has been vastly underappreciated. This omission is remedied here by offering a much more positive picture of the contributions modelers have made to our understanding of morality.
It is largely acknowledged that natural languages emerge not just from human brains but also from rich communities of interacting human brains (Senghas). Yet the precise role of such communities and such interaction in the emergence of core properties of language has largely gone uninvestigated in naturally emerging systems, leaving the few existing computational investigations of this issue in an artificial setting. Here, we take a step toward investigating the precise role of community structure in the emergence of linguistic conventions with both naturalistic empirical data and computational modeling. We first show conventionalization of lexicons in two different classes of naturally emerging signed systems: (a) protolinguistic “homesigns” invented by linguistically isolated Deaf individuals, and (b) a natural sign language emerging in a recently formed rich Deaf community. We find that the latter conventionalized faster than the former. Second, we model conventionalization as a population of interacting individuals who adjust their probability of sign use in response to other individuals' actual sign use, following an independently motivated model of language learning (Yang). Simulations suggest that a richer social network, like that of natural (signed) languages, conventionalizes faster than a sparser social network, like that of homesign systems. We discuss our behavioral and computational results in light of other work on language emergence and other work on behavior in complex networks.
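A minimal sketch of this kind of simulation, assuming a linear reward-style update in the spirit of Yang's learner (the network shapes, parameters, and convergence index are my assumptions; this scaffolds the dense-versus-sparse comparison rather than reproducing the paper's results):

```python
import random

def simulate(edges, n_agents, steps, gamma=0.1, seed=0):
    """Each agent holds a probability of using sign variant A (vs. B).
    On each interaction, the listener nudges its probability toward the
    variant the speaker actually produced (linear reward update)."""
    rng = random.Random(seed)
    p = [rng.random() for _ in range(n_agents)]
    for _ in range(steps):
        speaker, listener = rng.choice(edges)
        if rng.random() < p[speaker]:          # speaker used variant A
            p[listener] += gamma * (1 - p[listener])
        else:                                   # speaker used variant B
            p[listener] -= gamma * p[listener]
    # Crude conventionalization index: distance from a 50/50 split.
    mean = sum(p) / n_agents
    return abs(mean - 0.5) * 2

n = 10
# Dense network (every ordered pair interacts): a rich Deaf community.
dense = [(i, j) for i in range(n) for j in range(n) if i != j]
# Sparse star network: a homesigner interacting only with hearing family.
sparse = [(0, j) for j in range(1, n)] + [(j, 0) for j in range(1, n)]

print("dense: ", simulate(dense, n, 5000))
print("sparse:", simulate(sparse, n, 5000))
```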
Modeling Mechanisms. Stuart Glennan - 2005 - Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences 36 (2): 443–464.
Philosophers of science increasingly believe that much of science is concerned with understanding the mechanisms responsible for the production of natural phenomena. An adequate understanding of scientific research requires an account of how scientists develop and test models of mechanisms. This paper offers a general account of the nature of mechanical models, discussing the representational relationship that holds between mechanisms and their models as well as the techniques that can be used to test and refine such models. The analysis is supported by a study of two competing models of a mechanism of speech perception.
I consider the application of possibility semantics to the modeling of the indeterminacy of the future. I argue that interesting problems arise in connection with the addition of an object-language determinacy operator. I show that adding a two-dimensional layer to possibility semantics can help solve these problems.
How can mathematical models which represent the causal structure of the world incompletely or incorrectly have any scientific value? I argue that this apparent puzzle is an artifact of a realist emphasis on representation in the philosophy of modeling. I offer an alternative, pragmatic methodology of modeling, inspired by classic papers by modelers themselves. The crux of the view is that models developed for purposes other than explanation may be justified without reference to their representational properties.
Game theory has proved a useful tool in the study of simple economic models. However, numerous foundational issues remain unresolved. The situation is particularly confusing in respect of the non-cooperative analysis of games with some dynamic structure in which the choice of one move or another during the play of the game may convey valuable information to the other players. Without pausing for breath, it is easy to name at least 10 rival equilibrium notions for which a serious case can be made that here is the “right” solution concept for such games.
Experimental activity is traditionally identified with testing the empirical implications or numerical simulations of models against data. In critical reaction to the ‘tribunal view’ on experiments, this essay will show the constructive contribution of experimental activity to the processes of modeling and simulating. Based on the analysis of a case in fluid mechanics, it will focus specifically on two aspects. The first is the controversial specification of the conditions in which the data are to be obtained. The second is conceptual clarification, with a redefinition of concepts central to the understanding of the phenomenon and the conditions of its occurrence.
Formal models of cultural evolution analyze how cognitive processes combine with social interaction to generate the distributions and dynamics of ‘representations.’ Recently, cognitive anthropologists have criticized such models. They make three points: mental representations are non-discrete, cultural transmission is highly inaccurate, and mental representations are not replicated, but rather are ‘reconstructed’ through an inferential process that is strongly affected by cognitive ‘attractors.’ They argue that it follows from these three claims that: 1) models that assume replication or replicators are inappropriate, 2) selective cultural learning cannot account for stable traditions, and 3) selective cultural learning cannot generate cumulative adaptation. Here we use three formal models to show that even if the premises of this critique are correct, the deductions that have been drawn from them are false. In the first model, we assume continuously varying representations under the influence of weak selective transmission and strong attractors. We show that if the attractors are sufficiently strong relative to selective forces, the continuous representation model reduces to the standard…
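A minimal sketch of the first model's setup (my parameterization, not the authors'; selective transmission is compressed here into a deterministic pull toward a favored value, a simplification of biased model choice): each individual's continuous representation is pulled strongly toward a cognitive attractor and weakly toward the selectively favored value.

```python
import random

def step(pop, attractor, a, s, target, rng):
    """One generation of cultural transmission with attractors.
    a: strength of the cognitive attractor (reconstruction bias).
    s: strength of selective transmission toward a favored value."""
    new_pop = []
    for _ in pop:
        x = rng.choice(pop)          # learn from a random cultural model
        x += a * (attractor - x)     # inferential attraction
        x += s * (target - x)        # weak selective transmission
        x += rng.gauss(0, 0.01)      # transmission noise
        new_pop.append(x)
    return new_pop

rng = random.Random(1)
pop = [rng.random() for _ in range(200)]
for _ in range(100):
    pop = step(pop, attractor=0.2, a=0.5, s=0.05, target=0.9, rng=rng)

# With a strong attractor and weak selection, the population settles
# near the attractor, but selection still shifts the equilibrium toward
# the favored value; analytically the mean approaches
# (a * attractor + s * target) / (a + s), roughly 0.26 here.
print(sum(pop) / len(pop))
```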
The Tarskian notion of truth-in-a-model is the paradigm formal capture of our pre-theoretical notion of truth for semantic purposes. But what exactly makes Tarski’s construction so well suited for semantics is seldom discussed. In my Semantics, Metasemantics, Aboutness (OUP 2017) I articulate a certain requirement on the successful formal modeling of truth for semantics – “locality-per-reference” – against a background discussion of metasemantics and its relation to truth-conditional semantics. It is a requirement on any formal capture of sentential truth vis-à-vis the interpretation of singular terms and it is clearly met by the Tarskian notion. In this paper another such requirement is articulated – “locality-per-application” – which is an additional requirement on the formal capture of sentential truth, this time vis-à-vis the interpretation of predicates. This second requirement is also clearly met by the Tarskian notion. The two requirements taken together offer a fuller answer than has been hitherto available to the question what makes Tarski's notion of truth-in-a-model especially well suited for semantics.
Real-world economies are open-ended dynamic systems consisting of heterogeneous interacting participants. Human participants are decision-makers who strategically take into account the past actions and potential future actions of other participants. All participants are forced to be locally constructive, meaning their actions at any given time must be based on their local states; and participant actions at any given time affect future local states. Taken together, these essential properties imply real-world economies are locally constructive sequential games. This paper discusses a modeling approach, Agent-based Computational Economics (ACE), that permits researchers to study economic systems from this point of view. ACE modeling principles and objectives are first concisely presented and explained. The remainder of the paper then highlights challenging issues and edgier explorations that ACE researchers are currently pursuing.
Two strategies for using a model as “null” are distinguished. Null modeling evaluates whether a process is causally responsible for a pattern by testing it against a null model. Baseline modeling measures the relative significance of various processes responsible for a pattern by detecting deviations from a baseline model. When these strategies are conflated, models are illegitimately privileged as accepted until rejected. I illustrate this using the neutral theory of ecology and draw general lessons from this case. First, scientists cannot draw certain conclusions using null modeling. Second, these conclusions follow using baseline modeling, but doing so requires more evidence.
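To make the two strategies concrete, here is an illustrative toy (mine, not the paper's ecological example): the same chance process serves once as a null model issuing a reject/retain verdict, and once as a baseline whose deviations are measured.

```python
import random

rng = random.Random(0)

# Toy data: presence/absence of a species pair co-occurring at 50 sites.
observed = [rng.random() < 0.4 for _ in range(50)]
obs_stat = sum(observed)

# Null modeling: simulate the pattern under a chance-only process and
# ask whether the observed statistic is extreme (reject/retain the null).
null_stats = [sum(rng.random() < 0.25 for _ in range(50))
              for _ in range(10_000)]
p_value = sum(s >= obs_stat for s in null_stats) / len(null_stats)

# Baseline modeling: treat the same chance process as a baseline and
# measure the size and direction of the deviation, rather than issuing
# a binary verdict on a model privileged as "accepted until rejected".
baseline_expectation = 0.25 * 50
deviation = obs_stat - baseline_expectation

print(f"null test p-value: {p_value:.4f}")
print(f"deviation from baseline: {deviation:+.1f} sites")
```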
In the last few decades the role played by models and modeling activities has become a central topic in the scientific enterprise. In particular, it has been highlighted both that the development of models constitutes a crucial step for understanding the world and that the developed models operate as mediators between theories and the world. Such a perspective is exploited here to cope with the issue of whether error-based and uncertainty-based modeling of measurement are incompatible, and thus alternatives to one another, as sometimes claimed nowadays. The crucial problem is whether assuming this standpoint implies definitively renouncing any role for truth and related concepts, particularly accuracy, in measurement. It is argued here that the well-known objections against true values in measurement, which would lead to rejecting the concept of accuracy as non-operational, or to maintaining it as only qualitative, derive from an unclear distinction between three distinct processes: the metrological characterization of measuring systems, their calibration, and finally measurement. Under the hypotheses that (1) the concept of true value is related to the model of a measurement process, (2) the concept of uncertainty is related to the connection between such a model and the world, and (3) accuracy is a property of measuring systems (and not of measurement results) and uncertainty is a property of measurement results (and not of measuring systems), not only the compatibility but actually the conjoint need of error-based and uncertainty-based modeling emerges.
Schurz proposed a justification of creative abduction on the basis of the Reichenbachian principle of the common cause. In this paper we take up the idea of combining creative abduction with causal principles and model instances of successful creative abduction within a Bayes net framework. We identify necessary conditions for such inferences and investigate their unificatory power. We also sketch several interesting applications of modeling creative abduction Bayesian style. In particular, we discuss use-novel predictions, confirmation, and the problem of underdetermination in the context of abductive inferences.
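The causal principle in the background can be put compactly (a standard statement of common cause screening off, not a formula from the paper): positing a latent common cause $C$ for correlated effects $E_1, E_2$ unifies their joint distribution via the Markov factorization of the Bayes net $E_1 \leftarrow C \rightarrow E_2$,

$$
P(E_1, E_2) \;=\; \sum_{c} P(c)\, P(E_1 \mid c)\, P(E_2 \mid c),
\qquad
P(E_1, E_2 \mid c) \;=\; P(E_1 \mid c)\, P(E_2 \mid c),
$$

so that, on this reconstruction, creative abduction is the inference to such a $C$.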
Research in experimental philosophy has increasingly been turning to corpus methods to produce evidence for empirical claims, as they open up new possibilities for testing linguistic claims or studying concepts across time and cultures. The present article reviews the quasi-experimental studies that have been done using textual data from corpora in philosophy, with an eye for the modeling and experimental design that enable statistical inference. I find that most studies forgo comparisons that could control for confounds, and that only a little less than half employ statistical testing methods to control for chance results. Furthermore, at least some researchers make modeling decisions that either do not take into account the nature of corpora and of the word-concept relationship, or undermine the experiment's capacity to answer research questions. I suggest that corpus methods could both provide more powerful evidence and gain more mainstream acceptance by improving their modeling practices.
Recently, Bechtel and Abrahamsen have argued that mathematical models study the dynamics of mechanisms by recomposing the components and their operations into an appropriately organized system. We will study this claim through the practice of combinational modeling in circadian clock research. In combinational modeling, experiments on model organisms and mathematical/computational models are combined with a new type of model—a synthetic model. We argue that the strategy of recomposition is more complicated than what Bechtel and Abrahamsen indicate. Moreover, synthetic modeling as a kind of material recomposition strategy also points beyond the mechanistic paradigm.
Knowledge requires truth, and truth, we suppose, involves unflawed representation. Science does not provide knowledge in this sense but rather provides models, representations that are limited in their accuracy, precision, or, most often, both. Truth as we usually think of it is an idealization, one that serves wonderfully in most ordinary applications, but one that can terribly mislead for certain issues in philosophy. This article sketches how this happens for five important issues, thereby showing how philosophical method must take into account the idealized nature of our familiar conception of truth.
This is the second part of a two-part paper. It can be read independently of the first part provided that the reader is prepared to go along with the unorthodox views on game theory which were advanced in Part I and are summarized below. The body of the paper is an attempt to study some of the positive implications of such a viewpoint. This requires an exploration of what is involved in modeling “rational players” as computing machines.
Predictive modeling in education draws on data from past courses to forecast the effectiveness of future courses. The present effort sought to identify such a model of instructional effectiveness in scientific ethics. Drawing on data from 235 courses in the responsible conduct of research, structural equation modeling techniques were used to test a predictive model of RCR course effectiveness. Fit statistics indicated the model fit the data well, with the instructional characteristics included in the model explaining approximately 85% of the variance in RCR instructional effectiveness. Implications for using the model to develop and improve future RCR courses are discussed.
According to pancomputationalism, everything is a computing system. In this paper, I distinguish between different varieties of pancomputationalism. I find that although some varieties are more plausible than others, only the strongest variety is relevant to the philosophy of mind, but only the most trivial varieties are true. As a side effect of this exercise, I offer a clarified distinction between computational modelling and computational explanation.
Philosophy can shed light on mathematical modeling and the juxtaposition of modeling and empirical data. This paper explores three philosophical traditions of the structure of scientific theory—Syntactic, Semantic, and Pragmatic—to show that each illuminates mathematical modeling. The Pragmatic View identifies four critical functions of mathematical modeling: (1) unification of both models and data, (2) model fitting to data, (3) mechanism identification accounting for observation, and (4) prediction of future observations. Such facets are explored using a recent exchange between two groups of mathematical modelers in plant biology. Scientific debate can arise from different modeling philosophies.
Two widely accepted assumptions within cognitive science are that (1) the goal is to understand the mechanisms responsible for cognitive performances and (2) computational modeling is a major tool for understanding these mechanisms. The particular approaches to computational modeling adopted in cognitive science, moreover, have significantly affected the way in which cognitive mechanisms are understood. Unable to employ some of the more common methods for conducting research on mechanisms, cognitive scientists’ guiding ideas about mechanism have developed in conjunction with their styles of modeling. In particular, mental operations often are conceptualized as comparable to the processes employed in classical symbolic AI or neural network models. These models, in turn, have been interpreted by some as themselves intelligent systems since they employ the same type of operations as does the mind. For this paper, what is significant about these approaches to modeling is that they are constructed specifically to account for behavior and are evaluated by how well they do so—not by independent evidence that they describe actual operations in mental mechanisms.
What structure of scientific communication and cooperation, between what kinds of investigators, is best positioned to lead us to the truth? Against an outline of standard philosophical characteristics and a recent turn to social epistemology, this paper surveys highlights within two strands of computational philosophy of science that attempt to work toward an answer to this question. Both strands emerge from abstract rational choice theory and the analytic tradition in philosophy of science rather than postmodern sociology of science. The first strand of computational research models the effect of communicative networks within groups, with conclusions regarding the potential benefit of limited communication. The second strand models the potential benefits of cognitive diversity within groups. Examples from each strand of research are used in analyzing what makes modeling of this sort both promising and distinctly philosophical, but are also used to emphasize possibilities for failure and inherent limitations as well.
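Models in the first strand are often implemented as agents playing a two-armed bandit on a communication network and sharing results with neighbors (a Zollman-style setup; the greedy choice rule, priors, and payoff probabilities below are illustrative assumptions, not details from the paper):

```python
import random

def run(neighbors, n_rounds=500, p_good=0.55, p_bad=0.5, seed=0):
    """Agents test two methods; each round an agent uses the method it
    currently believes better, then shares the result with neighbors."""
    rng = random.Random(seed)
    n = len(neighbors)
    # (successes, trials) per arm, per agent; flat starting counts.
    counts = [[[1, 2], [1, 2]] for _ in range(n)]
    for _ in range(n_rounds):
        results = []
        for i in range(n):
            est = [counts[i][a][0] / counts[i][a][1] for a in (0, 1)]
            arm = 0 if est[0] >= est[1] else 1
            p = p_bad if arm == 0 else p_good
            results.append((i, arm, rng.random() < p))
        for i, arm, success in results:
            for j in [i] + neighbors[i]:   # share with self and neighbors
                counts[j][arm][0] += int(success)
                counts[j][arm][1] += 1
    # Fraction of agents who end up favoring the objectively better arm 1.
    correct = sum(
        counts[i][1][0] / counts[i][1][1] > counts[i][0][0] / counts[i][0][1]
        for i in range(n)
    )
    return correct / n

n = 8
complete = [[j for j in range(n) if j != i] for i in range(n)]  # full communication
cycle = [[(i - 1) % n, (i + 1) % n] for i in range(n)]          # limited communication

print("complete:", run(complete))
print("cycle:   ", run(cycle))
```

Running the comparison across seeds is how such models probe the surveyed conclusion that limited communication can sometimes serve a group better than full communication.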
In this study we use a computational model of language learning called model of syntax acquisition in children (MOSAIC) to investigate the extent to which the optional infinitive (OI) phenomenon in Dutch and English can be explained in terms of a resource-limited distributional analysis of Dutch and English child-directed speech. The results show that the same version of MOSAIC is able to simulate changes in the pattern of finiteness marking in 2 children learning Dutch and 2 children learning English as the average length of their utterances increases. These results suggest that it is possible to explain the key features of the OI phenomenon in both Dutch and English in terms of the interaction between an utterance-final bias in learning and the distributional characteristics of child-directed speech in the 2 languages. They also show how computational modeling techniques can be used to investigate the extent to which cross-linguistic similarities in the developmental data can be explained in terms of common processing constraints as opposed to innate knowledge of universal grammar.
At least since Kuhn’s Structure, philosophers have studied the influence of social factors in science’s pursuit of truth and knowledge. More recently, formal models and computer simulations have allowed philosophers of science and social epistemologists to dig deeper into the detailed dynamics of scientific research and experimentation, and to develop seemingly very realistic models of the social organization of science. These models purport to be predictive of the optimal allocations of factors, such as diversity of methods used in science, size of groups, and communication channels among researchers. In this paper we argue that the current research faces an empirical challenge. The challenge is to connect simulation models with data. We present possible scenarios about how the challenge may unfold.
My aim in this paper is to articulate an account of scientific modeling that reconciles pluralism about modeling with a modest form of scientific realism. The central claim of this approach is that the models of a given physical phenomenon can present different aspects of the phenomenon. This allows us, in certain special circumstances, to be confident that we are capturing genuine features of the world, even when our modeling occurs independently of a wholly theoretical motivation. This framework is illustrated using a recent debate from meteorology.
Model organisms are central to contemporary biology and studies of embryogenesis in particular. Biologists utilize only a small number of species to experimentally elucidate the phenomena and mechanisms of development. Critics have questioned whether these experimental models are good representatives of their targets because of the inherent biases involved in their selection (e.g., rapid development and short generation time). A standard response is that the manipulative molecular techniques available for experimental analysis mitigate, if not counterbalance, this concern. But the most powerful investigative techniques and molecular methods are applicable to single-celled organisms (‘microbes’). Why not use unicellular rather than multicellular model organisms, which are the standard for developmental biology? To claim that microbes are not good representatives takes us back to the original criticism leveled against model organisms. Using empirical case studies of microbes modeling ontogeny, we break out of this circle of reasoning by showing: (a) that the criterion of representation is more complex than earlier discussions have emphasized; and, (b) that different aspects of manipulability are comparable in importance to representation when deciding if a model organism is a good model. These aspects of manipulability harbor the prospect of enhancing representation. The result is a better understanding of how developmental biologists conceptualize research using experimental models and suggestions for underappreciated avenues of inquiry using microbes. More generally, it demonstrates how the practical aspects of experimental biology must be scrutinized in order to understand the associated scientific reasoning.