In recent years, the value-freeness of science has come under extensive critique. Early objectors to the notion of value-free science can be found in Rudner and Churchman, later objections occur in Leach and Gaa, and more recent critics are Kitcher, Douglas, and Elliott. The goal of this paper is to examine and critique two arguments opposed to the notion of a value-free science. The first argument, the uncertainty argument, cites the endemic uncertainty of science and concludes that values are needed to give direction to scientific investigation. The second, or moral argument, cites the fact that scientists have moral obligations just like everyone else, and …
In Seeing Things, Robert Hudson assesses a common way of arguing about observation reports called "robustness reasoning." Robustness reasoning claims that an observation report is more likely to be true if the report is produced by multiple, independent sources. Seeing Things argues that robustness reasoning lacks the special value it is often claimed to have. Hudson exposes key flaws in various popular philosophical defenses of robustness reasoning. This philosophical critique of robustness is extended by recounting five episodes in the history of science (from experimental microbiology, atomic theory, astrophysics, and astronomy) where robustness reasoning is — or could be claimed to have been — used. Hudson goes on to show that none of these episodes do in fact exhibit robustness reasoning. In this way, the significance of robustness reasoning is rebutted on both philosophical and historical grounds. But the book does more than critique robustness reasoning. It also develops a better defense of the informative value of observation reports. The book concludes by relating insights into the failure of robustness reasoning to a popular approach to scientific realism called "(theoretical) preservationism." Hudson argues that those who defend this approach to realism commit similar errors to those who advocate robustness reasoning. In turn, a new form of realism is formulated and defended. Called "methodological preservationism," it recognizes the fundamental value of naked-eye observation to scientists — and the rest of us.
Recently, many scientists have become concerned about an excessive number of failures to reproduce statistically significant effects. The situation has become dire enough that it has been named the ‘reproducibility crisis’. After reviewing the relevant literature to confirm the observation that scientists do indeed view replication as currently problematic, I explain in philosophical terms why the replication of empirical phenomena, such as statistically significant effects, is important for scientific progress. Following that explanation, I examine various diagnoses of the reproducibility crisis, and argue that for the majority of scientists the crisis is due, at least in part, to a form of publication bias. This conclusion sets the stage for an assessment of the view that evidential relations in science are inherently value-laden, a view championed by Heather Douglas and Kevin Elliott. I argue, in response to Douglas and Elliott, and as motivated by the meta-scientific resistance scientists harbour to publication bias, that if we advocate the value-ladenness of science the result would be a deepening of the reproducibility crisis.
Culp (1994) provides a defense of a form of experimental reasoning called 'robustness'. Her strategy is to examine a recent episode in experimental microbiology — the case of the mistaken discovery of a bacterial organelle called a 'mesosome' — with an eye to showing how experimenters effectively used robust experimental reasoning (or could have used robust reasoning) to refute the existence of the mesosome. My plan is to criticize Culp's assessment of the mesosome episode and to cast doubt on the epistemic significance of robustness. In turn, I present a different account of the experimental reasoning microbiologists used in arriving at the conclusion that mesosomes are artifacts. I call this form of reasoning 'reliable process reasoning', and close the paper with a brief discussion of how experimental microbiologists justify the claim that an experimental process is reliable.
Jean Perrin’s proof in the early twentieth century of the reality of atoms and molecules is often taken as an exemplary form of robustness reasoning, where an empirical result receives validation if it is generated using multiple experimental approaches. In this article, I describe in detail Perrin’s style of reasoning, and locate both qualitative and quantitative forms of argumentation. Particularly, I argue that his quantitative style of reasoning has mistakenly been viewed as a form of robustness reasoning, whereas I believe it is something different, what I call ‘calibration’. From this perspective, I re-evaluate recent interpretations of Perrin provided by Stathis Psillos, Peter Achinstein, Alan Chalmers, and Bas van Fraassen, all of whom read Perrin as a robustness reasoner, though not necessarily in the same sort of way. I then argue that by viewing Perrin as a ‘calibration’ reasoner we gain a better understanding of why he believes himself to have established the reality of atoms and molecules. To conclude, I provide an alternative and more productive understanding of the basis of the dispute between realists and anti-realists. Contents: 1. Introduction; 2. Perrin’s Reasoning: The Qualitative Argument; 3. Perrin’s Reasoning: The Quantitative Argument; 4. Perrin’s Realism; 5. Psillos, Achinstein, Chalmers, and van Fraassen on Understanding Perrin; 6. Conclusion.
According to the methodological principle called ‘robustness’, empirical evidence is more reliable when it is generated using multiple, independent (experimental) routes that converge on the same result. As it happens, robustness as a methodological strategy is quite popular amongst philosophers. However, despite its popularity, my goal here is to criticize the value of this principle on historical grounds. My historical reasons take into consideration some recent history of astroparticle physics concerning the search for WIMPs (weakly interacting massive particles), one of the main candidates for cosmic dark matter. On the basis of these reasons, I assert that robustness, at least in the historical case under consideration, has less value than philosophers usually assume.
In this paper I distinguish two kinds of predictivism, ‘timeless’ and ‘historicized’. The former is the conventional understanding of predictivism. However, I argue that its defense in the works of John Worrall (Scerri and Worrall 2001, Studies in History and Philosophy of Science 32, 407–452; Worrall 2002, In the Scope of Logic, Methodology and Philosophy of Science, 1, 191–209) and Patrick Maher (Maher 1988, PSA 1988, 1, pp. 273) is wanting. Alternatively, I promote an historicized predictivism, and briefly defend such a predictivism at the end of the paper.
Thomas Kuhn (in The Structure of Scientific Revolutions) and Alan Musgrave argue that it is impossible to precisely date discovery events and precisely identify discoverers. They defend this claim mainly on the grounds that so-called discoverers have in many cases misconceived the objects of discovery. In this paper, I argue that Kuhn and Musgrave arrive at their view because they lack a substantive account of how well discoverers must be able to conceptualize discovered objects. I remedy this deficiency by providing just such an account, and with this account I delineate how one can secure precision regarding the identity of discoverers and the times of discoveries. Near the end of my paper I bring my target of criticism up-to-date; it turns out that Steve Woolgar adopts an approach to discovery kindred to those of Kuhn and Musgrave, and I close the paper by discussing what is at stake in rebutting him.
Virtue epistemology is faced with the challenge of establishing the degree to which a knower’s cognitive success is attributable to her cognitive ability. As Duncan Pritchard notes, in some cases one is inclined to a strong version of virtue epistemology, one that requires cognitive success to be because of the exercise of the relevant cognitive abilities. In other cases, a weak version of virtue epistemology seems preferable, where cognitive success need only be the product of cognitive ability. Pritchard’s preference, with his anti-luck virtue epistemology, is for the latter. But as Christoph Kelp has recently argued, this preference is not without controversy. Notably, Kelp argues that Pritchard, on the basis of his anti-luck virtue epistemology, is impelled to cast the wrong judgment in a case that Pritchard himself discusses many times in his writings, the so-called ‘Temp case’. Though Pritchard argues that Temp lacks knowledge because his cognitive success is not a result of his cognitive ability, I concur with Kelp that Pritchard’s epistemology should in fact attribute knowledge to Temp, and show this by locating weaknesses in three distinct arguments Pritchard uses to show that Temp lacks knowledge. I subsequently argue that if Pritchard wishes to persist in denying knowledge to Temp, he should endorse what I call the ‘true description’ requirement. I close the paper by providing an argument for this requirement, controversial though it is.
In 1912, Henri Poincaré published an argument which apparently shows that the hypothesis of quanta is both necessary and sufficient for the truth of Planck's experimentally corroborated law describing the spectral distribution of radiant energy in a black body. In a recent paper, John Norton has reaffirmed the authority of Poincaré's argument, setting it up as a paradigm case in which empirical data can be used to definitively rule out theoretical competitors to a given theoretical hypothesis. My goal is to dispute Norton's claim that there is no theoretical underdetermination problem arising between classical physics and early quantum theory. The strategy I use in defending my view is to adopt a suggestion made by Jarrett Leplin and Larry Laudan on how to assess the relative merits of competing theoretical alternatives, where each alternative has an equal capacity to save the phenomena. In the course of the paper, I distinguish between two branches of classical physics: classical mechanics and classical electromagnetism. The former is claimed by Norton and Poincaré to be determinately ruled out by the black body evidence; and it is the former that I argue is compatible with this evidence.
It has become more common recently for epistemologists to advocate the pragmatic encroachment on knowledge, the claim that the appropriateness of knowledge ascriptions is dependent on the relevant practical circumstances. Advocacy of practicalism in epistemology has come at the expense of contextualism, the view that knowledge ascriptions are independent of pragmatic factors and depend alternatively on distinctively epistemological, semantic factors, with the result that knowledge ascriptions express different knowledge properties on different occasions of use. Overall, my goal here is to defend a particular version of contextualism drawn from work by Peter Ludlow, called ‘standards contextualism’. My strategy will be to elaborate on this form of contextualism by defending it from various objections raised by the practicalists Jason Stanley, Jeremy Fantl and Matthew McGrath. In showing how standards contextualism can effectively repel these criticisms I hope to establish that standards contextualism is a viable alternative to practicalism.
The goal of this paper is to defend the claim that there is such a thing as direct perception, where by 'direct perception' I mean perception unmediated by theorizing or concepts. The basis for my defense is a general philosophic perspective which I call 'empiricist philosophy'. In brief, empiricist philosophy (as I have defined it) is untenable without the occurrence of direct perception. It is untenable without direct perception because, otherwise, one can't escape the hermeneutic circle, as this phrase is used in van Fraassen (1980). The bulk of the paper is devoted to defending my belief in direct perception against various objections that can be posed against it. I discuss various anticipations of my view found in the literature, eventually focusing on Ian Hacking's related conception of 'entity realism' (Hacking 1983). Hacking has been criticized by a number of philosophers and my plan is to respond to these criticisms on behalf of entity realism (or more precisely on behalf of the claim that direct perception is a reality) and to then respond to other possible criticisms that can be launched against direct perception.
This book is... a survey history of medicine from the earliest times, centered thematically on how changing concepts of disease have affected its management.... One finds a gratifying mastery of recent as well as classic scholarship in medical history and a careful sidestepping of positivistic excesses.... Disease and Its Control is a fresh and welcome synthesis of historical scholarship that will be accessible to interested laymen. (Annals of Internal Medicine).
My task in this paper is to defend the legitimacy of historicist philosophy of science, defined as the philosophic study of science that takes seriously case studies drawn from the practice of science. Historicist philosophy of science suffers from what I call the ‘evidence problem’. The worry is that case studies cannot qualify as rigorous evidence for the adjudication of philosophic theories. I explore the reasons why one might deny to historical cases a probative value, then reply to these reasons on behalf of historicism. The main proponents of the view I am criticizing are Pitt (2001) and Rasmussen (2001).
In “Should We Strive to Make Science Bias‑Free? A Philosophical Assessment of the Reproducibility Crisis”, I argue that the problem of bias in science, a key factor in the current reproducibility crisis, is worsened if we follow Heather Douglas and Kevin C. Elliott’s advice and introduce non-epistemic values into the evidential assessment of scientific hypotheses. In their response to my paper, Douglas and Elliott complain that I misrepresent their views and fall victim to various confusions. In this rebuttal I argue, by means of an examination of their published views, that my initial interpretation of their work is accurate and that, in their hands, science is generally prone to deviations from truth.
James Robert Brown’s Who Rules in Science? is an engaging, candid discussion of various postmodern and sociological challenges that have recently been launched at the orthodoxy of science. Interspersed throughout the book are various, largely introductory discussions of issues pertaining to the history of philosophy of science, issues such as realism, unification, instrumentalism, novel predictions, objectivity, and so forth. At the end of the book Brown takes up topics relevant to the politics of science. Altogether it is a pleasant book to read, written with the same warm flair that is characteristic of Brown’s accessible lecturing style.
It is commonly thought that Jean Perrin argued for the reality of atoms in the early twentieth century by using what Wesley Salmon calls a “common cause” argument, also known as robustness reasoning. After citing some concerns with this interpretation of Perrin, I offer a different interpretation of Perrin’s work that more closely depicts the details of Perrin’s reasoning in his relevant published writings. I then offer a historical argument that supports this interpretation and discuss the philosophical merits of Perrin’s style of reasoning as I have presented it. I close by showing how my interpretation is distinct from previous interpretations of Perrin’s work.
Kurt Gödel criticizes Rudolf Carnap's conventionalism on the grounds that it relies on an empiricist admissibility condition, which, if applied, runs afoul of his second incompleteness theorem. Thomas Ricketts and Michael Friedman respond to Gödel's critique by denying that Carnap is committed to Gödel's admissibility criterion; in effect, they are denying that Carnap is committed to any empirical constraint in the application of his principle of tolerance. I argue in response that Carnap is indeed committed to an empirical requirement vis‐à‐vis tolerance, a fact that becomes clear upon closer scrutiny of Carnap's relevant writings.
In an important 2006 paper, Nishi Shah defends ‘evidentialism’, the position that only evidence for a proposition’s truth constitutes a reason to believe this proposition. In opposition to Shah, Anthony Robert Booth, Andrew Reisner and Asbjørn Steglich-Petersen argue that things other than evidence of truth, so-called non-evidential or ‘pragmatic’ reasons, constitute reasons to believe a proposition. I argue that we can effectively respond to Shah’s pragmatist critics if, following Shah, we are careful to distinguish the evaluation of the reasons for a belief from the process of actually forming a belief and allowing it to influence action. Drawing this distinction is assisted if we utilize Rudolf Carnap’s probabilistic interpretation of what it means to be disposed to believe a claim.
What does it mean to replicate an experiment? A distinction is often drawn between ‘exact’ (or ‘direct’) and ‘conceptual’ replication. However, in recent work, Uljana Feest argues that the notion of replication in itself, whether exact or conceptual, is flawed due to the problem of systematic error, and Edouard Machery argues that, although the notion of replication is not flawed, we should nevertheless dispense with the distinction between exact and conceptual replication. My plan in this paper is to defend the value of replication, along with the distinction between exact and conceptual replication, from the critiques of Feest and Machery. To that end, I provide an explication of conceptual replication, and distinguish it from what I call ‘experimental’ replication. On the basis, then, of a tripartite distinction between exact, experimental and conceptual replication, I argue in response to Feest that replication is still informative despite the prospect of systematic error. I also rebut Machery’s claim that conceptual replication is fundamentally confused and wrongly conflates replication and extension, and in turn raise some objections to his own Resampling Account of replication.
This collection of essays aims to investigate the complex issues surrounding contemporary cultural discourses on land and identity – their production, construction, and reconstruction across a range of different texts and materials. The chapters offer disciplinary and trans-disciplinary approaches opening up discussion and new routes for research in a number of interrelated areas such as Countryside vs. City, Diaspora, Landscapes of Memory and Trauma, Migrational Spaces, and Ecology. They represent a number of innovative contemporary responses to how concepts of land intersect and dialogue with notions of identity across and between regions, nations, races, and cultures. Through employing interdisciplinary methods and theories drawn from diverse sources, such as cultural studies, spatial theory, philosophy and literary theory, the chapters chart varied and complex themes of identity formation in relation to spatiality.
I believe observation is valued by scientists because it is an objective source of information. Objective here can mean two things. First, observation could be objective in that it is an assured source of truths about the world, truths whose meaning is the same for everyone regardless of their personal theoretical vantage points. I criticize this construal of observational objectivity in chapter one. The guilty doctrine, which I entitle 'empiricistic epistemological foundationalism', is shown to be untenable on, in part, historical grounds. The historical episode I deploy for this task is the early stages of quantum theory, an episode I return to at various times throughout the thesis in illustration of my philosophical points. The sense of objective I favour views empirical data as the locus of extensive interpersonal agreement. Observation, from this perspective, plays the role of a communal judge in arbitrating our theoretical disputes. Returning to early quantum theory, I show how observation could not have had the adjudicative effect it exhibited unless it were objective in this sense. A crucial philosophical problem I take up is to provide the best definition of observation suited to accomplish the normative goals set out above for empirical data. Two major contenders are presented in chapter two, the semantic and pragmatic theories of observation, and other pragmatically-based proposals are discussed at the end of chapter four. In the end, I show that the pragmatic theory of observation triumphs as the philosophically most responsible account of the value of scientific observations. The final chapter turns to an examination of the bearing of evidence on theory. From my particular pragmatic viewpoint, I describe first how the rationality of induction can be established, and following this I reassess the value of Hempel's classic list of confirmation principles.
My final task is to use a new, overtly pragmatic definition of confirmation to re-evaluate the experimental confirmation by Rubens and Kurlbaum of Planck's quantum hypothesis.
The WIMP (weakly interacting massive particle) is currently the leading candidate for dark matter, the cosmological material claimed to make up almost 99% of the matter of the universe and which is indiscernible by means of electromagnetic radiation. There are many research groups dedicated to experimentally isolating WIMPs, and in this paper we describe the work of three of these groups, the Saclay group, DAMA and UKDM. This exploration into the recent history of astroparticle physics serves to illuminate two philosophical issues. First, is confirmatory evidence more compelling if it coordinates results gleaned from independent experimental investigations? And secondly, in justifying experimental conclusions, how strong must this justification be? Are the high standards set by philosophers, in the spirit of Descartes, relevant to experimental research?
In her 1996 book, Error and the Growth of Experimental Knowledge, Deborah Mayo argues that use- (or heuristic) novelty is not a criterion we need to consider in assessing the evidential value of observations. Using the notion of a “severe” test, Mayo claims that such novelty is valuable only when it leads to severity, and never otherwise. To illustrate her view, she examines the historical case involving the famous 1919 British eclipse expeditions that generated observations supporting Einstein's theory of gravitation over Newton's. My plan here is to defend use-novelty as a valuable methodological principle. I begin by exposing a weakness in Mayo's criticism of use-novelty. Remedying this weakness re-establishes the worth of use-novelty under specific conditions; in particular, heuristically novel data are to be preferred, as I will say, “prima facie”. Armed with this revised version of use-novelty, I re-examine the history of the eclipse experiments and offer an interpretation of this episode that to an extent—and contrary to Mayo—restores the mildly heretical, Earman/Glymour evaluation of this episode offered in their (1980). I conclude by responding to criticism of my assessment of Mayo's work.
Experimental data are often acclaimed on the grounds that they can be consistently generated. They are, it is said, reproducible. In this paper I describe how this feature of experimental data (their pragmatic reliability) leads to their epistemic worth (their epistemic reliability). An important part of my description is the supposition that experimental procedures are to a certain extent fixed and stable. Various illustrations from the actual practice of science are introduced, the most important coming at the end of the paper with a discussion of Ray Davis' 1967 solar-neutrino detection experiment (as it is portrayed in Pinch, 1980).
Recent scholarship (mainly by Michael Friedman, but also by Thomas Uebel) on the philosophy of Rudolf Carnap covering the period from the publication of Carnap's 1928 book Der Logische Aufbau der Welt through to the mid-to-late 1930s has tended to view Carnap as espousing a form of conventionalism (epitomized by his adoption of the principle of tolerance) and not a form of empirical foundationalism. On this view, it follows that Carnap's 1934 The Logical Syntax of Language is the pinnacle of his work during this era, this book having developed in its most complete form the conventionalist approach to dissolving the pseudoproblems that often attend philosophical investigation. My task in this paper, in opposition to this trend, is to resuscitate the empiricist interpretation of Carnap's work during this time period. The crux of my argument is that Carnap's 1934 book, by eschewing for the most part the empiricism he espouses in the Aufbau and in his 1932 The Unity of Science, is led to a form of conventionalism that faces the serious hazard of collapsing into epistemological relativism. My speculation is that Carnap came to recognize this deficiency in his 1934 book, and in subsequent work (“Testability and Meaning”, published in 1936/37) felt the need to re-instate his empiricist agenda. This subsequent work provides a much improved empiricist epistemology from Carnap's previous efforts and, as history informs us, sets the standard for future research in the theory of confirmation.
Broadly speaking, there are two different ways in which one might defend skepticism – an a priori way and an empirical way. My first task in this paper is to defend the view that the preferred way to defend skepticism is empirical. My second task is to explain why this approach actually makes sense. I accomplish this latter task by responding to various criticisms one might advance against the possibility of empirically defending skepticism. In service of this response, I distinguish between two different kinds of hallucination, ‘metaphysical’ and ‘ordinary’, and seek to clarify the notion of a ‘presupposition’.
It is often claimed that anti-realists are compelled to reject the inference of the knowability paradox, that there are no unknown truths. I call those anti-realists who feel so compelled ‘faint-hearted’, and argue in turn that anti-realists should affirm this inference, if their position is to be consistent. A major part of my strategy in defending anti-realism is to formulate an anti-realist definition of truth according to which a statement is true only if it is verified by someone, at some time. I also liberalize what is meant by a verification to allow for indirect forms of verification. From this vantage point, I examine a key objection to anti-realism, that it is committed to the necessary existence of minds, and reject a response to this problem set forth by Michael Hand. In turn I provide a more successful anti-realist response to the necessary minds problem that incorporates what I call an ‘agential’ view of verification. I conclude by considering what intellectual cost there is to being an anti-realist in the sense I am advocating.
In this paper, I raise some questions about Pritchard’s internalist argument for scepticism. I argue that his internalism begs the question in support of scepticism. Correlatively, I advance what I take to be a better internalist argument for scepticism, one that leaves open the possibility of empirically adjudicating sceptical hypotheses. I close by discussing what it means to be an internalist.
My goal in this paper is to consider two separate but connected topics, one historical, the other philosophical. The first topic concerns the forms of reasoning contemporary experimental astrophysicists use to investigate the existence of WIMPs (weakly interacting massive particles). This reasoning takes two forms, one model-dependent and the other model-independent, and I examine the arguments one WIMP research group (DAMA) uses to support the latter. The second topic concerns recent support Kent Staley has offered for a form of scientific reasoning called ‘robustness’, and I argue that the model-independent strategy propounded by DAMA improves on robustness.
This chapter contains sections titled: Emmanuel Levinas; Michel Henry; Jacques Derrida; Jean‐Luc Marion; Jacques Derrida; The Saturated Phenomenon According to Jean‐Luc Marion; The Face According to Levinas; The Auto‐Revelation of the Figure of Christ in Michel Henry.