Explanations are very important to us in many contexts: in science, mathematics, philosophy, and also in everyday and juridical contexts. But what is an explanation? In the philosophical study of explanation, there is a long-standing, influential tradition that links explanation intimately to causation: we often explain by providing accurate information about the causes of the phenomenon to be explained. Such causal accounts have been the received view of the nature of explanation, particularly in philosophy of science, since the 1980s. However, philosophers have recently begun to break with this causal tradition by shifting their focus to kinds of explanation that do not turn on causal information. The increasing recognition of the importance of such non-causal explanations in the sciences and elsewhere raises pressing questions for philosophers of explanation. What is the nature of non-causal explanations, and which theory best captures it? How do non-causal explanations relate to causal ones? How are non-causal explanations in the sciences related to those in mathematics and metaphysics? This volume of new essays explores answers to these and other questions at the heart of contemporary philosophy of explanation. The essays address these questions from a variety of perspectives, including general accounts of non-causal and causal explanations, as well as a wide range of detailed case studies of non-causal explanations from the sciences, mathematics and metaphysics.
The goal of this paper is to develop a counterfactual theory of explanation (CTE). The CTE provides a monist framework for causal and non-causal explanations, according to which both causal and non-causal explanations are explanatory by virtue of revealing counterfactual dependencies between the explanandum and the explanans. I argue that the CTE is applicable to two paradigmatic examples of non-causal explanations: Euler's explanation and renormalization group explanations of universality.
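To fix ideas, the core commitment of the CTE can be glossed schematically as follows; the notation is an illustrative rendering of the dependency claim above, not the paper's own formalism:

```latex
% Schematic gloss of the CTE's core condition (illustrative notation,
% not the paper's formalism): an explanans S explains an explanandum E
% only if E counterfactually depends on S.
\[
S \text{ explains } E \;\Longrightarrow\;
\big(\, S^{*}\ \square\!\rightarrow\ E^{*} \,\big)
\qquad \text{for suitable } S^{*} \neq S,\ E^{*} \neq E,
\]
\[
\text{where } \square\!\rightarrow \text{ reads: had } S^{*}
\text{ obtained instead of } S, \text{ then } E^{*}
\text{ would have obtained instead of } E.
\]
```

On this gloss, the framework is monist because the dependence relation itself is neutral between causal and non-causal ways in which the explanans can make a difference to the explanandum.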
Laws of nature take center stage in philosophy of science. Laws are usually believed to stand in a tight conceptual relation to many important key concepts such as causation, explanation, confirmation, determinism, counterfactuals, etc. Traditionally, philosophers of science have focused on physical laws, which were taken to be at least true, universal statements that support counterfactual claims. But, although this claim about laws might be true with respect to physics, laws in the special sciences (such as biology, psychology, economics, etc.) appear to have, perhaps not surprisingly, different features from the laws of physics. Special science laws (for instance, the economic law "Under the condition of perfect competition, an increase of demand for a commodity leads to an increase in price, given that the quantity of the supplied commodity remains constant" and, in biology, Mendel's laws) are usually taken to "have exceptions", to be "non-universal" or "to be ceteris paribus laws". How and whether the laws of physics and the laws of the special sciences differ is one of the crucial questions motivating the debate on ceteris paribus laws. Another major, controversial question concerns the determination of the precise meaning of "ceteris paribus". Philosophers have attempted to explicate the meaning of ceteris paribus clauses in different ways. The question of meaning is connected to the problem of empirical content, i.e., the question whether ceteris paribus laws have non-trivial and empirically testable content. Since many philosophers have argued that ceteris paribus laws lack empirically testable content, this problem constitutes a major challenge to a theory of ceteris paribus laws.
Toy models are highly idealized and extremely simple models. Although they are omnipresent across scientific disciplines, toy models are a surprisingly under-appreciated subject in the philosophy of science. The main philosophical puzzle regarding toy models concerns what the epistemic goal of toy modelling is. One promising proposal for answering this question is the claim that the epistemic goal of toy models is to provide individual scientists with understanding. The aim of this article is to precisely articulate and to defend this claim. In particular, we will distinguish between autonomous and embedded toy models, and then argue that important examples of autonomous toy models are sometimes best interpreted to provide how-possibly understanding, while embedded toy models yield how-actually understanding, if certain conditions are satisfied.
Toy models are highly idealized and extremely simple models. Although they are omnipresent across scientific disciplines, toy models are a surprisingly under-appreciated subject in the philosophy of science. The main philosophical puzzle regarding toy models is the unsettled question of what the epistemic goal of toy modeling is. One promising proposal for answering this question is the claim that the epistemic goal of toy models is to provide individual scientists with understanding. The aim of this paper is to precisely articulate and to defend this claim. In particular, we will distinguish between autonomous and embedded toy models, and then argue that important examples of autonomous toy models are sometimes best interpreted to provide how-possibly understanding, while embedded toy models yield how-actually understanding, if certain conditions are satisfied.
In this paper, I aim to provide an accessible entry point to the current debate on non-causal explanations in philosophy of science. I will first present examples of non-causal explanations in the sciences. Then, I will outline three alternative approaches to non-causal explanations (causal reductionism, pluralism, and monism) and, corresponding to these three approaches, different strategies for distinguishing between causal and non-causal explanations. Finally, I will raise questions for future research on non-causal explanations.
Bertrand Russell famously argued that causation is not part of the fundamental physical description of the world, describing the notion of cause as "a relic of a bygone age". This paper assesses one of Russell's arguments for this conclusion: the 'Directionality Argument', which holds that the time symmetry of fundamental physics is inconsistent with the time asymmetry of causation. We claim that the coherence and success of the Directionality Argument crucially depend on the proper interpretation of the 'time symmetry' of fundamental physics as it appears in the argument, and offer two alternative interpretations. We argue that: if 'time symmetry' is understood as the time-reversal invariance of physical theories, then the crucial premise of the Directionality Argument should be rejected; and if 'time symmetry' is understood as the temporally bidirectional nomic dependence relations of physical laws, then the crucial premise of the Directionality Argument is far more plausible. We defend the second reading as continuous with Russell's writings, and consider the consequences of the bidirectionality of nomic dependence relations in physics for the metaphysics of causation.
We explore the prospects of a monist account of explanation for both non-causal explanations in science and pure mathematics. Our starting point is the counterfactual theory of explanation (CTE) for explanations in science, as advocated in the recent literature on explanation. We argue that, despite the obvious differences between mathematical and scientific explanation, the CTE can be extended to cover both non-causal explanations in science and mathematical explanations. In particular, a successful application of the CTE to mathematical explanations requires us to rely on counterpossibles. We conclude that the CTE is a promising candidate for a monist account of explanation in both science and mathematics.
Renormalization group (RG) methods are an established strategy to explain how it is possible that microscopically different systems exhibit virtually the same macro-behavior when undergoing phase transitions. I argue, in agreement with Robert Batterman, that RG explanations are non-causal explanations. However, Batterman misidentifies the reason why RG explanations are non-causal: it is not the case that an explanation is non-causal simply because it ignores causal details. I propose an alternative argument, according to which RG explanations are non-causal explanations because their explanatory power is due to the application of mathematical operations, which do not serve the purpose of representing causal relations.
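For readers unfamiliar with RG methods, the following sketch illustrates the basic idea of an RG flow in miniature. It uses the textbook decimation recursion for the zero-field one-dimensional Ising chain, which is not one of the paper's case studies; the code is purely illustrative:

```python
import math

def decimate(K: float) -> float:
    """One decimation step for the zero-field 1D Ising chain: summing out
    every other spin renormalizes the nearest-neighbor coupling K to K'."""
    return 0.5 * math.log(math.cosh(2.0 * K))

# Two microscopically different systems, i.e. different initial couplings.
for K0 in (0.8, 1.5):
    K, trajectory = K0, [K0]
    for _ in range(8):
        K = decimate(K)
        trajectory.append(K)
    print(f"K0 = {K0}: " + " -> ".join(f"{k:.4f}" for k in trajectory))
# Both trajectories approach the same fixed point K* = 0: after repeated
# coarse-graining, the initial micro-level difference no longer matters.
```

Both initial couplings flow to the same fixed point, which is the formal sense in which repeated application of a mathematical coarse-graining operation renders microscopic differences irrelevant to macro-behavior.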
In the recent philosophy of explanation, a growing attention to and discussion of non-causal explanations has emerged, as there seem to be compelling examples of non-causal explanations in the sciences, in pure mathematics, and in metaphysics. I defend the claim that the counterfactual theory of explanation (CTE) captures the explanatory character of both non-causal scientific and metaphysical explanations. According to the CTE, scientific and metaphysical explanations are explanatory by virtue of revealing counterfactual dependencies between the explanandum and the explanans. I support this claim by illustrating that the CTE is applicable to Euler's explanation (an example of a non-causal scientific explanation) and Loewer's explanation (an example of a non-causal metaphysical explanation).
In the recent philosophy of explanation, a growing attention to and discussion of non-causal explanations has emerged, as there seem to be compelling examples of non-causal explanations in the sciences, in pure mathematics, and in metaphysics. I defend the claim that the counterfactual theory of explanation (CTE) captures the explanatory character of both non-causal scientific and metaphysical explanations. According to the CTE, scientific and metaphysical explanations are explanatory by virtue of revealing counterfactual dependencies between the explanandum and the explanans. I support this claim by illustrating that the CTE is applicable to Euler's explanation and Loewer's explanation.
Getting rid of interventions. Alexander Reutlinger - 2012 - Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences 43 (4): 787-795.
According to James Woodward's influential interventionist account of causation, X is a cause of Y iff, roughly, there is a possible intervention on X that changes Y. Woodward requires that interventions be merely logically possible. I will argue for two claims against this modal character of interventions. First, merely logically possible interventions are dispensable for the semantic project of providing an account of the meaning of causal statements. If interventions are indeed dispensable, the interventionist theory collapses into a counterfactual theory of causation. Thus, the interventionist theory is not tenable as a theory of causation in its own right. Second, if one maintains that merely logically possible interventions are indispensable, then interventions with this modal character lead to the fatal result that interventionist counterfactuals are evaluated inadequately. Consequently, interventionists offer an inadequate theory of causation. I suggest that if we are concerned with explicating causal concepts and stating the truth-conditions of causal claims, we had best get rid of Woodwardian interventions.
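Written out, the rough schema stated above looks like this (a schematic rendering for orientation, not Woodward's official definition):

```latex
% Schematic interventionist condition (illustrative rendering):
\[
X \text{ causes } Y \;\iff\;
\exists I \,\big(\, I \text{ is a possible intervention on } X
\ \wedge\ \text{under } I, \text{ the value of } Y \text{ changes} \,\big).
\]
```

The paper's two objections both target the modal force of "possible" in the existential clause: read as mere logical possibility, it either renders the intervention apparatus dispensable or misevaluates the associated counterfactuals.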
Econophysics is a new and exciting cross-disciplinary research field that applies models and modelling techniques from statistical physics to economic systems. It is not, however, without its critics: prominent figures in more mainstream economic theory have criticized some elements of the methodology of econophysics. One of the main lines of criticism concerns the nature of the modelling assumptions and idealizations involved, and a particular target is the class of 'kinetic exchange' approaches used to model the emergence of inequality within the distribution of individual monetary income. This article will consider such models in detail, and assess the warrant of the criticisms by drawing upon the philosophical literature on modelling and idealization. Our aim is to provide the first steps towards informed mediation of this important and interesting interdisciplinary debate, and our hope is to offer guidance with regard to both the practice of modelling inequality and the inequality of modelling practice.
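To make the target of this debate concrete, here is a minimal sketch of a kinetic exchange model of the kind at issue; the specific update rule (a random split of pooled money, with an optional savings propensity) and all parameter names are illustrative choices, not necessarily the exact models the article examines:

```python
import random

def kinetic_exchange(n_agents=1000, n_steps=200_000, m0=1.0, lam=0.0, seed=0):
    """Minimal kinetic exchange model of income distribution (illustrative).

    Each step picks two agents at random (a binary interaction), pools the
    non-saved fraction of their money, and splits the pool randomly (the
    exchange dynamics). Total money is conserved throughout (a conservation
    principle). lam is a savings propensity: with lam = 0 the stationary
    distribution is exponential (Boltzmann-Gibbs-like).
    """
    rng = random.Random(seed)
    money = [m0] * n_agents  # everyone starts with the same amount
    for _ in range(n_steps):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        if i == j:
            continue
        pool = (1.0 - lam) * (money[i] + money[j])  # amount up for exchange
        eps = rng.random()                          # random split fraction
        money[i] = lam * money[i] + eps * pool
        money[j] = lam * money[j] + (1.0 - eps) * pool
    return money

money = kinetic_exchange()
print(f"total money (conserved): {sum(money):.1f}")  # stays at n_agents * m0
print(f"richest vs. poorest agent: {max(money):.3f} vs. {min(money):.3f}")
```

Three commonly discussed idealizations of such models are visible directly in the rule: interactions are strictly binary, total money is conserved by construction, and the dynamics consists of nothing but repeated random exchanges.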
This paper analyses the anti-reductionist argument from renormalisation group explanations of universality, and shows how it can be rebutted if one assumes that the explanation in question is captured by the counterfactual dependence account of explanation.
Biased research occurs frequently in the sciences. In this paper, I will focus on one particular kind of biased research: research that is subject to sponsorship bias. I will address the following epistemological question: what precisely is epistemically wrong with biased research of this kind? I will defend the evidential account of epistemic wrongness: that is, research affected by sponsorship bias is epistemically wrong if and only if the researchers in question make false claims about the evidential support of some hypothesis H by data E. I will argue that the evidential account captures the epistemic wrongness of three paradigmatic types of sponsorship bias.
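Stated schematically, the evidential account is a biconditional of roughly the following form; supp is a placeholder for whatever measure of evidential support one favors, not notation from the paper:

```latex
% Schematic gloss of the evidential account (illustrative notation):
\[
R \text{ is epistemically wrong} \;\iff\;
\text{the researchers in } R \text{ make a false claim about } \mathrm{supp}(H, E),
\]
\[
\text{where } \mathrm{supp}(H, E) \text{ is the degree to which data } E
\text{ evidentially support hypothesis } H.
\]
```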
Laws in the special sciences are usually regarded as non-universal. A theory of laws in the special sciences faces two challenges: (I) according to Lange's dilemma, laws in the special sciences are either false or trivially true; (II) they have to meet the 'requirement of relevance', which is a way to require the non-accidentality of special science laws. I argue that both challenges can be met if one distinguishes four dimensions of (non-)universality. The upshot is that I argue for the following explication of special science laws: L is a special science law just if (1) L is a system law, (2) L is quasi-Newtonian, and (3) L is minimally invariant.
Renormalization group explanations account for the astonishing phenomenon that microscopically very different physical systems display the same macro-behavior when undergoing phase transitions. Among philosophers, this explanandum phenomenon is often described as the occurrence of a particular kind of multiply realized macro-behavior. In several recent publications, Robert Batterman denies that RG explanations account for this explanandum phenomenon by following the commonality strategy, i.e. by identifying properties that microscopically very different physical systems have in common. Arguing against Batterman's claim, I defend the view that RG explanations are in accord with the commonality strategy.
In their influential paper "Ceteris Paribus, There is No Problem of Provisos", Earman and Roberts (Synthese 118:439–478, 1999) propose to interpret the non-strict generalizations of the special sciences as statistical generalizations about correlations. I call this view the "statistical account". Earman and Roberts claim that statistical generalizations are not qualified by "non-lazy" ceteris paribus conditions. The statistical account is an attractive view, since it looks exactly like what everybody wants: it is a simple and intelligible theory of special science laws without the need for mysterious ceteris paribus conditions. I present two challenges to the statistical account. According to the first challenge, the statistical account does not get rid of so-called "non-lazy" ceteris paribus conditions. This result undermines one of the alleged and central advantages of the statistical account. The second challenge is that the statistical account, qua general theory of special science laws, is weakened by the fact that idealized law statements resist a purely statistical interpretation.
In their Every Thing Must Go, Ladyman and Ross defend a novel version of a neo-Russellian metaphysics of causation, which comprises three claims: (1) there are no fundamental physical causal facts (the orthodox Russellian claim), (2) there are higher-level causal facts of the special sciences, and (3) higher-level causal facts are explanatorily emergent. While accepting claims (1) and (2), I attack claim (3). Ladyman and Ross argue that higher-level causal facts are explanatorily emergent because (a) certain aspects of these higher-level facts (their universality) can be captured by renormalization group (RG) explanations, and (b) RG explanations are not reductive explanations. However, I argue that RG explanations should be understood as reductive explanations. This result undermines Ladyman and Ross's RG-based argument for the explanatory emergence of higher-level causal facts.
This is an introduction to the volume "Explanation Beyond Causation: Philosophical Perspectives on Non-Causal Explanations", edited by A. Reutlinger and J. Saatsi (OUP, forthcoming in 2017). Explanations are very important to us in many contexts: in science, mathematics, philosophy, and also in everyday and juridical contexts. But what is an explanation? In the philosophical study of explanation, there is a long-standing, influential tradition that links explanation intimately to causation: we often explain by providing accurate information about the causes of the phenomenon to be explained. Such causal accounts have been the received view of the nature of explanation, particularly in philosophy of science, since the 1980s. However, philosophers have recently begun to break with this causal tradition by shifting their focus to kinds of explanation that do not turn on causal information. The increasing recognition of the importance of such non-causal explanations in the sciences and elsewhere raises pressing questions for philosophers of explanation. What is the nature of non-causal explanations, and which theory best captures it? How do non-causal explanations relate to causal ones? How are non-causal explanations in the sciences related to those in mathematics and metaphysics? This volume of new essays explores answers to these and other questions at the heart of contemporary philosophy of explanation. The essays address these questions from a variety of perspectives, including general accounts of non-causal and causal explanations, as well as a wide range of detailed case studies of non-causal explanations from the sciences, mathematics and metaphysics.
Building on Nozick's invariantism about objectivity, I propose to define scientific objectivity in terms of counterfactual independence. I will argue that such a counterfactual independence account is (a) able to overcome the decisive shortcomings of Nozick's original invariantism and (b) applicable to three paradigmatic kinds of scientific objectivity (that is, objectivity as replication, objectivity as robustness, and objectivity as Mertonian universalism).
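As a first pass, the proposal can be rendered schematically as follows; the choice of varied factors F is a placeholder of ours, not taken from the paper:

```latex
% Schematic gloss of objectivity as counterfactual independence
% (illustrative; the relevant factors F are a placeholder):
\[
r \text{ is objective with respect to } F \;\iff\;
\text{had the factors in } F \text{ been different, }
r \text{ would still have been obtained}.
\]
```

Each of the three paradigmatic kinds mentioned above can then plausibly be read as fixing a different choice of F: other experimenters (replication), other methods or models (robustness), or the social identity of the researcher (Mertonian universalism).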
Craig Callender, Jonathan Cohen and Markus Schrenk have recently argued for an amended version of the best system account of laws – the better best system account (BBSA). This account of lawhood is supposed to account for laws in the special sciences, among other desiderata. Unlike David Lewis's original best system account of laws, the BBSA does not rely on a privileged class of natural predicates, in terms of which the best system is formulated. According to the BBSA, a contingently true generalization is a law of a special science S iff the generalization is an axiom (or a theorem) of the best system relative to the set of predicates used by special science S. We argue that the BBSA is, at best, an incomplete theory of special science laws, as it does not account for typical features of special science laws, such as attached ceteris paribus conditions and the idealized character of law statements in these disciplines.
John Earman and John T. Roberts advocate a challenging and radical claim regarding the semantics of laws in the special sciences: the statistical account. According to this account, a typical special science law “asserts a certain precisely defined statistical relation among well-defined variables” and this statistical relation does not require being hedged by ceteris paribus conditions. In this paper, we raise two objections against the attempt to cash out the content of special science generalizations in statistical terms.
Several proponents of the interventionist theory of causation have recently argued for a neo-Russellian account of causation. The paper discusses two strategies for interventionists to be neo-Russellians. Firstly, I argue that the open systems argument – the main argument for a neo-Russellian account advocated by interventionists – fails. Secondly, I explore and discuss an alternative for interventionists who wish to be neo-Russellians: the statistical mechanical account. Although the latter account is an attractive alternative, I argue that interventionists are not able to adopt it straightforwardly. Hence, being neo-Russellians remains a challenge for interventionists.
What exactly do social scientists and biologists say when they make causal claims? This question is one of the central puzzles in philosophy of science. Alexander Reutlinger sets out to answer this question. He aims to provide a theory of causation in the special sciences (that is, a theory of causation in the social sciences, the biological sciences and other higher-level sciences). According to one recent prominent view, causation is intimately tied to manipulability and the possibility of intervention. Reutlinger's main negative target is to argue that the interventionist account of causation is not adequate. Where do interventionist accounts go wrong? Reutlinger argues that the central concept of the interventionist theories, that is, the very concept of an intervention, is tremendously problematic. Reutlinger's main positive claim consists in replacing the interventionist approach with an alternative explication of causation in the special sciences, the comparative variability theory of causation. This alternative preserves many insights of the interventionist account without a commitment to the claim that causation and interventions are intimately tied together.
What are ceteris paribus laws? Which disciplines appeal to cp laws, and which semantics, metaphysical underpinnings, and epistemological dimensions do cp law statements have? Firstly, we give a short overview of the recent discussion on cp laws, which addresses these questions. Secondly, we suggest that, given the rich and diverse literature on cp laws, a broad conception of cp laws should be endorsed which takes into account the different ways in which laws can be non-universal. Finally, we provide an overview of the special issue on that basis and describe the individual contributions to the special issue according to the issues they address: the range of applications of cp laws as well as the semantics, metaphysics, and epistemology pertaining to cp law statements.
Solving the “new demarcation problem” requires a distinction between epistemically legitimate and illegitimate roles for non-epistemic values in science. This paper addresses one ‘half’ (i.e. a sub-problem) of the new demarcation problem articulated by the Gretchenfrage: What makes the role of a non-epistemic value in science epistemically illegitimate? I will argue for the Explaining Epistemic Errors (EEE) account, according to which the epistemically illegitimate role of a non-epistemic value is defined via an explanatory claim: the fact that an epistemic agent is motivated by a non-epistemic value explains why the epistemic agent commits a particular epistemic error. The EEE account is inspired by Douglas’ and Steel’s “functionalist” or “epistemic constraint” accounts of epistemic illegitimacy. I will suggest that the EEE account is able to meet two challenges that these two accounts face, while preserving the key intuition underlying both accounts. If my arguments succeed, then the EEE account provides a solution to one half of the new demarcation problem (by providing a definition of what makes the role of a non-epistemic value epistemically illegitimate) and it opens up new ways for addressing the other half (i.e. characterizing an epistemically legitimate role of non-epistemic values).
Solving the “new demarcation problem” requires a distinction between epistemically legitimate and illegitimate roles for non-epistemic values in science. This paper addresses one ‘half’ (i.e. a sub-problem) of the new demarcation problem articulated by the Gretchenfrage: What makes the role of a non-epistemic value in science epistemically illegitimate? I will argue for the Explaining Epistemic Errors (EEE) account, according to which the epistemically illegitimate role of a non-epistemic value is defined via an explanatory claim: the fact that an epistemic agent is motivated by a non-epistemic value explains why the epistemic agent commits a particular epistemic error. The EEE account is inspired by Douglas’ and Steel’s “functionalist” or “epistemic constraint” accounts of epistemic illegitimacy. I will suggest that the EEE account is able to meet two challenges that these two accounts face, while preserving the key intuition underlying both accounts. If my arguments succeed, then the EEE account provides a solution to one half of the new demarcation problem (by providing a definition of what makes the role of a non-epistemic value epistemically illegitimate) and it opens up new ways for addressing the other half (i.e. characterizing an epistemically legitimate role for non-epistemic values).
How can philosophers respond to the threat to academic freedom posed by right-wing populist movements (in Germany, Europe, and worldwide) and by authoritarian states (such as Turkey and Hungary)? This question was at the center of the panel discussion "Bedrohtes Denken" ("Threatened Thinking"), which took place during the DGPhil congress in Berlin on the day of the 2017 German federal election. It was a discussion whose end was overshadowed by the distressing news that the far-right AfD would become the third-strongest party in the new Bundestag. In light of this deeply unsettling election result, we believe it is important to continue this discussion. This commentary is intended to provide an impetus for doing so.
Several philosophers of biology have argued for the claim that the generalizations of biology are historical and contingent [1–5]. This claim divides into the following sub-claims, each of which I will contest. First, biological generalizations are restricted to a particular space-time region. I argue that biological generalizations are universal with respect to space and time. Secondly, biological generalizations are restricted to specific kinds of entities, i.e. these generalizations do not quantify over an unrestricted domain. I will challenge this second claim by providing an interpretation of biological generalizations that do quantify over an unrestricted domain of objects. Thirdly, biological generalizations are contingent in the sense that their truth depends on special initial and background conditions. I will argue that the contingent character of biological generalizations does not diminish their explanatory power, nor is it the case that this sort of contingency is exclusively characteristic of biological generalizations.