Proponents of mechanistic explanation all acknowledge the importance of organization. But they have also tended to emphasize specificity with respect to parts and operations in mechanisms. We argue that in understanding one important mode of organization—patterns of causal connectivity—a successful explanatory strategy abstracts from the specifics of the mechanism and invokes tools such as those of graph theory to explain how mechanisms with a particular mode of connectivity will behave. We discuss the connection between organization, abstraction, and mechanistic explanation and illustrate our claims by looking at an example from recent research on so-called network motifs.
Evolutionary debunking arguments appeal to selective etiologies of human morality in an attempt to undermine moral realism. But is morality actually the product of evolution by natural selection? Although debunking arguments have attracted considerable attention in recent years, little of it has been devoted to whether the underlying evolutionary assumptions are credible. In this paper, we take a closer look at the evolutionary hypotheses put forward by two leading debunkers, namely Sharon Street and Richard Joyce. We raise a battery of considerations, both empirical and theoretical, that combine to cast doubt on the plausibility of both hypotheses. We also suggest that it is unlikely that there is in the vicinity a plausible alternative hypothesis suitable for the debunker’s cause.
I distinguish three theses associated with the new mechanistic philosophy – concerning causation, explanation and scientific methodology. Advocates of each thesis are identified and relationships among them are outlined. I then look at some recent work on natural selection and mechanisms. There, attention to different kinds of New Mechanism significantly affects what is at stake.
Idealization and abstraction are central concepts in the philosophy of science and in science itself. My goal in this paper is to suggest an account of these concepts, building on and refining an existing view due to Jones (Idealization XII: Correcting the Model. Idealization and Abstraction in the Sciences, vol 86. Rodopi, Amsterdam, pp 173–217, 2005) and Godfrey-Smith (Mapping the Future of Biology: Evolving Concepts and Theories. Springer, Berlin, 2009). On this line of thought, abstraction—which I call, for reasons to be explained, abstractness—involves the omission of detail, whereas idealization consists in a deliberate mismatch between a description and the world. I will suggest that while the core idea underlying these authors’ view is correct, they make several assumptions and stipulations that are best avoided. For one thing, they tie abstractness too closely to truth. For another, they do not allow sufficient room for the difference between idealization and error. Taking these points into account leads to a refined account of the distinction, in which abstractness is seen in terms of relative richness of detail, and idealization is seen as closely connected with the knowledge and intentions of idealizers. I lay out these accounts in turn, and then discuss the relationship between the two concepts, and several other upshots of the present way of construing the distinction.
Modeling is an important scientific practice, yet it raises significant philosophical puzzles. Models are typically idealized, and they are often explored via imaginative engagement and at a certain “distance” from empirical reality. These features raise questions such as what models are and how they relate to the world. Recent years have seen a growing discussion of these issues, including a number of views that treat modeling in terms of indirect representation and analysis. Indirect views treat the model as a bona fide object, specified by the modeler and used to represent and reason about some portion of the concrete empirical world. On some indirect views, model systems are abstract entities, such as mathematical structures, while on other views they are concrete hypothetical things. Here I assess these views and offer a novel account of models. I argue that regarding models as abstracta results in some significant tensions with the practice of modeling, especially in areas where non-mathematical models are common. Furthermore, viewing models as concrete hypotheticals raises difficult questions about model-world relations. The view I argue for treats models as direct, albeit simplified, representations of targets in the world. I close by suggesting a treatment of model-world relations that draws on recent work by Stephen Yablo concerning the notion of partial truth.
Many biological investigations are organized around a small group of species, often referred to as ‘model organisms’, such as the fruit fly Drosophila melanogaster. The terms ‘model’ and ‘modelling’ also occur in biology in association with mathematical and mechanistic theorizing, as in the Lotka–Volterra model of predator-prey dynamics. What is the relation between theoretical models and model organisms? Are these models in the same sense? We offer an account on which the two practices are shown to have different epistemic characters. Theoretical modelling is grounded in explicit and known analogies between model and target. By contrast, inferences from model organisms are empirical extrapolations. Often such extrapolation is based on shared ancestry, sometimes in conjunction with other empirical information. One implication is that such inferences are unique to biology, whereas theoretical models are common across many disciplines. We close by discussing the diversity of uses to which model organisms are put, suggesting how these relate to our overall account.
1 Introduction
2 Volterra and Theoretical Modelling
3 Drosophila as a Model Organism
4 Generalizing from Work on Model Organisms
5 Phylogenetic Inference and Model Organisms
6 Further Roles of Model Organisms
6.1 Preparative experimentation
6.2 Model organisms as paradigms
6.3 Model organisms as theoretical models
6.4 Inspiration for engineers
6.5 Anchoring a research community
7 Conclusion
The Hodgkin–Huxley (HH) model of the action potential is a theoretical pillar of modern neurobiology. In a number of recent publications, Carl Craver has argued that the model is explanatorily deficient because it does not reveal enough about underlying molecular mechanisms. I offer an alternative picture of the HH model, according to which it deliberately abstracts from molecular specifics. By doing so, the model explains whole-cell behaviour as the product of a mass of underlying low-level events. The issue goes beyond cellular neurobiology, for the strategy of abstraction exhibited in the HH case is found in a range of biological contexts. I discuss why it has been largely neglected by advocates of the mechanist approach to explanation.
1 Introduction
2 A Primer on the HH Model
2.1 The basic qualitative picture
2.2 The quantitative model
3 Interlude: What Did Hodgkin and Huxley Think?
4 Craver’s View
4.1 Mechanistic explanation
4.2 Sketches
4.3 Craver’s view: The HH model as a mechanism sketch
5 An Alternative View of the HH Model
5.1 Another look at the equations
5.2 The discrete-gating picture
5.3 The road paved by Hodgkin and Huxley
5.4 Summary and comparison to Craver
6 Conclusion: The HH Model and Mechanistic Explanation
6.1 Sketches and abstractions
6.2 Why has aggregative abstraction been overlooked?
This book looks at the role of the imagination in science, from both philosophical and psychological perspectives. These contributions combine to provide a comprehensive and exciting picture of this under-explored subject.
Some philosophers of science – the present author included – appeal to fiction as an interpretation of the practice of modeling. This raises the specter of an incompatibility with realism, since fiction-making is essentially non-truth-regulated. I argue that the prima facie conflict can be resolved in two ways, each involving a distinct notion of fiction and a corresponding formulation of realism. The main goal of the paper is to describe these two packages. Toward the end I comment on how to choose among them.
Experimentation is traditionally considered a privileged means of confirmation. However, why and how experiments form a better confirmatory source relative to other strategies is unclear, and recent discussions have identified experiments with various modeling strategies on the one hand, and with ‘natural’ experiments on the other hand. We argue that experiments aiming to test theories are best understood as controlled investigations of specimens. ‘Control’ involves repeated, fine-grained causal manipulation of focal properties. This capacity generates rich knowledge of the object investigated. ‘Specimenhood’ involves possessing relevant properties given the investigative target and the hypothesis in question. Specimens are thus representative members of a class of systems, to which a hypothesis refers. It is in virtue of both control and specimenhood that experiments provide powerful confirmatory evidence. This explains the distinctive power of experiments: although modelers exert extensive control, they do not exert this control over specimens; although natural experiments utilize specimens, control is diminished.
Design thinking in general, and optimality modeling in particular, have traditionally been associated with adaptationism—a research agenda that gives pride of place to natural selection in shaping biological characters. Our goal is to evaluate the role of design thinking in non-evolutionary analyses. Specifically, we focus on research into abstract design principles that underpin the functional organization of extant organisms. Drawing on case studies from engineering-inspired approaches in biology we show how optimality analysis, and other design-related methods, play a specific methodological role that is tangential to the study of adaptation. To account for the role of these reasoning strategies in contemporary biology, we therefore suggest a reevaluation of the connection between design thinking and adaptationism.
Analogies to machines are commonplace in the life sciences, especially in cellular and molecular biology — they shape conceptions of phenomena and expectations about how they are to be explained. This paper offers a framework for thinking about such analogies. The guiding idea is that machine-like systems are especially amenable to decompositional explanation, i.e., to analyses that tease apart underlying components and attend to their structural features and interrelations. I argue that for decomposition to succeed a system must exhibit causal orderliness, which I explicate in terms of differentiation among parts and the significance of local relations. I also discuss what makes a model depict its target as machine-like, suggesting that a key issue is the degree of detail with respect to the target’s parts and their interrelations.
Recently, various philosophers have argued that we can obtain knowledge via the imagination. In particular, it has been suggested that we can come to know concrete, empirical matters of everyday significance by appropriately imagining relevant scenarios. Arguments for this thesis come in two main varieties: black box reliability arguments and constraints-based arguments. We suggest that both strategies are unsuccessful. Against black-box arguments, we point to evidence from empirical psychology, question a central case-study, and raise concerns about a (claimed) evolutionary rationale for the imagination’s reliability. Against the constraints-based account, we argue that to the extent that it works, this does not give rise to knowledge that is distinctively from the imagination. We conclude by suggesting that the imagination’s role in raising possibilities, traditionally seen as part of the context of discovery, can in fact play a role in justification, including as a bulwark against certain sorts of skepticism.
Many have expected that understanding the evolution of norms should, in some way, bear on our first-order normative outlook: How norms evolve should shape which norms we accept. But recent philosophy has not done much to shore up this expectation. Most existing discussions of evolution and norms either jump headlong into the is/ought gap or else target meta-ethical issues, such as the objectivity of norms. My aim in this paper is to sketch a different way in which evolutionary considerations can feed into normative thinking—focusing on stability. I will discuss two forms of argument that utilize information about social stability drawn from evolutionary models, and employ it to assess claims in political philosophy. One such argument treats stability as a feature of social states that may be taken into account alongside other features. The other uses stability as a constraint on the realization of social ideals, via a version of the ought-implies-can maxim. These forms of argument are not new; indeed they have a history going back at least to early modern philosophy. But their marriage with evolutionary information is relatively recent, has a significantly novel character, and has received little attention in recent moral and political philosophy.
Accounts of mechanistic explanation, especially as applied to biology and sometimes going under the heading of “new mechanism,” provided an attractive alternative to nomological accounts that preceded them. These accounts were motivated by selected examples, drawn primarily from cell and molecular biology and neuroscience. However, the range of examples that scientists take to be mechanistic explanations is far broader. We focus on examples that differ from those traditionally recruited by Mechanists. Our contention is that attention to additional examples will lead to a richer conception of mechanistic explanation, prompting a shift from what we refer to as Mechanism 1.0 to Mechanism 2.0.
This paper offers a novel view of unity in neuroscience. I set out by discussing problems with the classical account of unity-by-reduction, due to Oppenheim and Putnam. That view relies on a strong notion of levels, which has substantial problems. A more recent alternative, the mechanistic “mosaic” view due to Craver, does not have such problems. But I argue that the mosaic ideal of unity is too minimal, and we should, if possible, aspire to more. Relying on a number of recent works in theoretical neuroscience—network motifs, canonical neural computations and design principles—I then present my alternative: a “flat” view of unity, i.e. one that is not based on levels. Instead, it treats unity as attained via the identification of recurrent explanatory patterns, under which a range of neuroscientific phenomena are subsumed. I develop this view by recourse to a causal conception of explanation, and distinguish it from Kitcher’s view of explanatory unification and related ideas. Such a view of unity is suitably ambitious, I suggest, and has empirical plausibility. It is fit to serve as an appropriate working hypothesis for 21st century neuroscience.
Biologists frequently draw on ideas and terminology from engineering. Evolutionary systems biology—with its circuits, switches, and signal processing—is no exception. In parallel with the frequent links drawn between biology and engineering, there is ongoing criticism against this cross-fertilization, using the argument that over-simplistic metaphors from engineering are likely to mislead us as engineering is fundamentally different from biology. In this article, we clarify and reconfigure the link between biology and engineering, presenting it in a more favorable light. We do so by, first, arguing that critics operate with a narrow and incorrect notion of how engineering actually works, and of what the reliance on ideas from engineering entails. Second, we diagnose and defuse one significant source of concern about appeals to engineering, namely that they are inherently and problematically metaphorical. We suggest that there is plenty of fertile ground left for a continued, healthy relationship between engineering and biology.
“According to [Bayesian] models” in cognitive neuroscience, says a recent textbook, “the human mind behaves like a capable data scientist”. Do they? That is to say, do such models show we are rational? I argue that Bayesian models of cognition, perhaps surprisingly, do not, and indeed cannot, show that we are Bayesian-rational. The key reason is that such models appeal to approximations, a fact that carries significant implications. After outlining the argument, I critique two responses, seen in recent cognitive neuroscience. One says that the mind can be seen as approximately Bayes-rational, while the other reconceives norms of rationality.
Michael Strevens has produced an ambitious and comprehensive new account of scientific explanation. This review discusses its main themes, focusing on regularity explanation and a number of methodological concerns.
The study of biological altruism is a cornerstone of modern evolutionary biology. Associated with foundational issues about natural selection, it is often supposed that explaining altruism is key to understanding social behavior more generally. Typically, biological altruism is defined in purely effects-based, behavioral terms – as an interaction in which one organism contributes fitness to another, at its own expense. Crucially, such a definition isn’t meant to rest on psychological or intentional assumptions. We show that, appearances and official definitions notwithstanding, the notion of biological altruism carries a vestige of the psychological, intentional concept familiar to us from the human domain. In particular, definitions of altruism from Hamilton onwards presuppose an actor/recipient distinction – a distinction, so we argue, that has questionable biological grounding. We arrive at this conclusion step-by-step, first looking at several simple, “austere” definitions and their problems, and then critiquing the actor/recipient distinction directly. If successful, our arguments suggest that the category of biological altruism requires a significant rethink.
The reception of Byzantine Christianity by the Catholic Church presents a kind of anomaly. Invoking the authority of Thomas Aquinas, Western theologians generally reject the idea of a real distinction between the divine essence and energies, as well as the notion of uncreated grace, which plays an essential role in the vision of Gregory Palamas. On the other hand, many of these same theologians have, in recent times, rediscovered the thought of Maximus the Confessor, seeing in him a brilliant precursor of Thomas Aquinas. Yet what would remain of the doctrine of Gregory Palamas without the patronage of Maximus the Confessor? How can one disregard the one and recognize the other in the name of the same Thomas Aquinas? What comes to light here, through a study of the contexts and the doctrinal stakes, is the hitherto unsuspected coexistence of two distinct representations of the relation between the created and the uncreated. The Latin West and the Byzantine East never cease to understand differently a faith that is nonetheless indisputably common to them.
Carl Craver’s recent book offers an account of the explanatory and theoretical structure of neuroscience. It depicts the field as centered around the idea of achieving mechanistic understanding, i.e., obtaining knowledge of how a set of underlying components interacts to produce a given function of the brain. Its core account of mechanistic explanation and relevance is causal-manipulationist in spirit, and offers substantial insight into causal explanation in brain science and the associated notion of levels of explanation. However, the focus on mechanistic explanation leaves some open questions regarding the role of computation and cognition.
This paper derives from a broader project dealing with the notion of causal order. I use this term to signify two kinds of part–whole dependence: Orderly systems have rich, decomposable, internal structure; specifically, parts play differential roles, and interactions are primarily local. Disorderly systems, in contrast, have a homogeneous internal structure, such that differences among parts and organizational features are less important. Orderliness, I suggest, marks one key difference between individuals and collectives. My focus here will be the connection between order and robustness, i.e. functional resilience in the face of internal or environmental perturbations. I distinguish three varieties of robustness. Ordered robustness is grounded in the system’s specific organizational pattern. In contrast, disorderly robustness stems from the aggregate outcome of many similar parts. In between, we find semi-ordered robustness, wherein a messy ensemble of elements is subjected to a selection or stabilization mechanism. I give brief characterizations of each category, discuss examples and remark on the connection between the order/disorder axis and the notions of individual versus collective.