Many philosophers have argued that a hypothesis is better confirmed by some data if the hypothesis was not specifically designed to fit the data. ‘Prediction’, they argue, is superior to ‘accommodation’. Others deny that there is any epistemic advantage to prediction, and conclude that prediction and accommodation are epistemically on a par. This paper argues that there is a respect in which accommodation is superior to prediction. Specifically, the information that the data was accommodated rather than predicted suggests that the data is less likely to have been manipulated or fabricated, which in turn increases the likelihood that the hypothesis is correct in light of the data. In some cases, this epistemic advantage of accommodation may even outweigh whatever epistemic advantage there might be to prediction, making accommodation epistemically superior to prediction all things considered.
It is widely presumed that intuitions about thought experiments can help overturn philosophical theories. It is also widely presumed, albeit implicitly, that if thought experiments play any epistemic role in overturning philosophical theories, it is via intuition. In this paper, I argue for a different, neglected epistemic role of philosophical thought experiments, that of improving some reasoner’s appreciation both of what a theory’s predictions consist in and of how those predictions tie to elements of the theory. I call this role theory clarification. I show that theory clarification does not proceed via intuition, and I argue that it is only in conjunction with theory clarification that intuitions about thought experiments can help overturn philosophical theories. I close by sketching how a more radical view might be true, on which thought experiments help justify the rejection of philosophical theories exclusively by clarifying theories, not by any intuitions those thought experiments might generate.
Peter Achinstein has argued at length and on many occasions that the view according to which evidential support is defined in terms of probability-raising faces serious counterexamples and, hence, should be abandoned. Proponents of the positive probabilistic relevance view have remained unconvinced. The debate seems to be in a deadlock. This paper is an attempt to move the debate forward and revisit some of the central claims within this debate. My conclusion here will be that while Achinstein may be right that his counterexamples undermine probabilistic relevance views of what it is for e to be evidence that h, there is still room for a defence of a related probabilistic view about an increase in being supported, according to which, if P(h|e) > P(h), then h is more supported given e than it is without e. My argument relies crucially on an insight from recent work on the linguistics of gradable adjectives.
Evidential support is often equated with confirmation, where evidence supports hypothesis H if and only if it increases the probability of H. This article argues against this received view. As the author shows, support is a comparative notion in the sense that increase-in-probability is not. A piece of evidence can confirm H, but it can confirm alternatives to H to the same or greater degree; and in such cases, it is at best misleading to conclude that the evidence supports H. The author puts forward an alternative view that defines support in terms of measures of degree of confirmation. The proposed view is both sufficiently comparative and able to accommodate the increase-in-probability aspect of support. The author concludes that the proposed measure-theoretic approach to support provides a superior alternative to the standard confirmatory approach.
According to the probabilistic relevance account of confirmation, E confirms H relative to background knowledge K just in case P(H/K&E) > P(H/K). This requires an inequality between the rational degree of belief in H determined relative to two bodies of total knowledge which are such that one (K&E) includes the other (K) as a proper part. In this paper, I argue that it is quite plausible that there are no two possible bodies of total knowledge for ideally rational agents meeting this requirement. Hence, the positive relevance account may have to be rejected.
Building on Nozick's invariantism about objectivity, I propose to define scientific objectivity in terms of counterfactual independence. I will argue that such a counterfactual independence account is (a) able to overcome the decisive shortcomings of Nozick's original invariantism and (b) applicable to three paradigmatic kinds of scientific objectivity (that is, objectivity as replication, objectivity as robustness, and objectivity as Mertonian universalism).
The French physicist Jean Baptiste Perrin is widely credited with providing the conclusive argument for atomism. The most well-known part of Perrin’s argument is his description of thirteen different procedures for determining Avogadro’s number (N)–the number of atoms, ions, and molecules contained in a gram-atom, gram-ion, and gram-mole of a substance, respectively. Because of its success in ending the atomism debates, Perrin’s argument has been the focus of much philosophical interest. These philosophers, however, have reached different conclusions, not only about the argument’s general rationale but also about the role that the multiple determination of N played in it. This paper emphasizes the historical development of Perrin’s experimental work in order to understand the role that the multiple determination of molecular magnitudes played in his argument for molecular reality. It claims that Perrin used the multiple determination strategy to put forward an exceptionally strong no-coincidence argument to argue for both the correctness of the values for the molecular magnitudes determined and the validity of the auxiliary assumptions upon which the different determinations were based. The historicist approach also allows the identification of the elements responsible for the epistemic strength of Perrin’s no-coincidence argument.
In the Paradox of the Ravens, a set of otherwise intuitive claims about evidence seems to be inconsistent. Most attempts at answering the paradox involve rejecting a member of the set, which seems to require a conflict either with commonsense intuitions or with some of our best confirmation theories. In contrast, I argue that the appearance of an inconsistency is misleading: ‘confirms’ and cognate terms feature a significant ambiguity when applied to universal generalisations. In particular, the claim that some evidence confirms a universal generalisation ordinarily suggests, in part, that the evidence confirms the reliability of predicting that something which satisfies the antecedent will also satisfy the consequent. I distinguish between the familiar relation of confirmation simpliciter and what I shall call ‘predictive confirmation’. I use them to formulate my answer, illustrate it in a very simple probabilistic model, and defend it against objections. I conclude that, once our evidential concepts are sufficiently clarified, there is no sense in which the initial claims are both plausible and inconsistent.
Cosmological models that invoke a multiverse - a collection of unobservable regions of space where conditions are very different from the region around us - are controversial, on the grounds that unobservable phenomena shouldn't play a crucial role in legitimate scientific theories. I argue that the way we evaluate multiverse models is precisely the same as the way we evaluate any other models, on the basis of abduction, Bayesian inference, and empirical success. There is no scientifically respectable way to do cosmology without taking into account different possibilities for what the universe might be like outside our horizon. Multiverse theories are utterly conventionally scientific, even if evaluating them can be difficult in practice.
Philosophers such as Goodman, Scheffler and Glymour aim to answer the Paradox of the Ravens by distinguishing between confirmation simpliciter and selective confirmation, where in the latter the evidence both supports a hypothesis and undermines one of its "rivals". In this article, I argue that while selective confirmation does seem to be an important scientific notion, no attempt to formalise it thus far has managed to solve the Paradox of the Ravens.
Bayesian confirmation theory is rife with confirmation measures. Many of them differ from each other in important respects. It turns out, though, that all the standard confirmation measures in the literature run counter to the so-called “Reverse Matthew Effect” (“RME” for short). Suppose, to illustrate, that H1 and H2 are equally successful in predicting E in that p(E | H1)/p(E) = p(E | H2)/p(E) > 1. Suppose, further, that initially H1 is less probable than H2 in that p(H1) < p(H2). Then by RME it follows that the degree to which E confirms H1 is greater than the degree to which it confirms H2. But by all the standard confirmation measures in the literature, in contrast, it follows that the degree to which E confirms H1 is less than or equal to the degree to which it confirms H2. It might seem, then, that RME should be rejected as implausible. Festa (2012), however, argues that there are scientific contexts in which RME holds. If Festa’s argument is sound, it follows that there are scientific contexts in which none of the standard confirmation measures in the literature is adequate. Festa’s argument is thus interesting, important, and deserving of careful examination. I consider five distinct respects in which E can be related to H, use them to construct five distinct ways of understanding confirmation measures, which I call “Increase in Probability”, “Partial Dependence”, “Partial Entailment”, “Partial Discrimination”, and “Popper Corroboration”, and argue that each such way runs counter to RME. The result is that it is not at all clear that there is a place in Bayesian confirmation theory for RME.
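For concreteness, the situation RME concerns can be checked numerically. The following minimal sketch (with hypothetical numbers of my choosing, not the abstract's) sets p(E | H1)/p(E) = p(E | H2)/p(E) = 1.5 with p(H1) < p(H2), and computes three standard confirmation measures; all of them rank E's confirmation of H1 at or below its confirmation of H2, contrary to RME.

```python
# Hypothetical numbers: H1 and H2 predict E equally well, but H1 has the lower prior.
p_E = 0.5                                            # marginal probability of the evidence
hypotheses = [("H1", 0.2, 0.75), ("H2", 0.4, 0.75)]  # (name, p(H), p(E|H))

for name, p_H, p_E_given_H in hypotheses:
    p_H_given_E = p_E_given_H * p_H / p_E                     # Bayes' theorem
    p_E_given_not_H = (p_E - p_E_given_H * p_H) / (1 - p_H)   # law of total probability
    d = p_H_given_E - p_H                                     # probability difference measure
    r = p_H_given_E / p_H                                     # probability ratio measure
    lr = p_E_given_H / p_E_given_not_H                        # likelihood ratio measure
    print(f"{name}: d={d:.3f}  r={r:.3f}  lr={lr:.3f}")

# H1: d=0.100  r=1.500  lr=1.714
# H2: d=0.200  r=1.500  lr=2.250
# d and lr are smaller for H1, and r is equal: none of these measures obeys RME.
```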
The twin goals of this essay are: to investigate a family of cases in which the goal of guaranteed convergence to the truth is beyond our reach; and to argue that each of three strands prominent in contemporary epistemological thought has undesirable consequences when confronted with the existence of such problems. Approaches that follow Reichenbach in taking guaranteed convergence to the truth to be the characteristic virtue of good methods face a vicious closure problem. Approaches on which there is a unique rational doxastic response to any given body of evidence can avoid incoherence only by rendering epistemology a curiously limited enterprise. Bayesian approaches rule out humility about one’s prospects of success in certain situations in which failure is typical.
Peter Brössel and Franz Huber in 2015 argued that the Bayesian concept of confirmation had no use. I will argue that it has both the uses they discussed—it can be used for making claims about how worthy of belief various hypotheses are, and it can be used to measure the epistemic value of experiments. Furthermore, it can be useful in explanations. More generally, I will argue that more coarse-grained concepts can be useful, even when we have more fine-grained concepts available.
We explore the grammar of Bayesian confirmation by focusing on some likelihood principles, including the Weak Law of Likelihood. We show that none of the likelihood principles proposed so far is satisfied by all incremental measures of confirmation, and we argue that some of these measures indeed obey new, prima facie strange, antilikelihood principles. To prove this, we introduce a new measure that violates the Weak Law of Likelihood while satisfying a strong antilikelihood condition. We conclude by hinting at some relevant links between the likelihood principles considered here and other properties of Bayesian confirmation recently explored in the literature.
Bayesian confirmation theory is rife with confirmation measures. Zalabardo focuses on the probability difference measure, the probability ratio measure, the likelihood difference measure, and the likelihood ratio measure. He argues that the likelihood ratio measure is adequate, but each of the other three measures is not. He argues for this by setting out three adequacy conditions on confirmation measures and arguing in effect that all of them are met by the likelihood ratio measure but not by any of the other three measures. Glass and McCartney, hereafter “G&M,” accept the conclusion of Zalabardo’s argument along with each of the premises in it. They nonetheless try to improve on Zalabardo’s argument by replacing his third adequacy condition with a weaker condition. They do this because of a worry to the effect that Zalabardo’s third adequacy condition runs counter to the idea behind his first adequacy condition. G&M have in mind confirmation in the sense of increase in probability: the degree to which E confirms H is a matter of the degree to which E increases H’s probability. I call this sense of confirmation “IP.” I set out four ways of precisifying IP, which I call “IP1,” “IP2,” “IP3,” and “IP4.” Each of them is based on the assumption that the degree to which E increases H’s probability is a matter of the distance between p(H | E) and a certain other probability involving H. I then evaluate G&M’s argument in light of them.
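For reference, the four measures Zalabardo compares are standardly formulated as follows (one common parameterization; some authors add logarithms or normalizations):

```latex
\begin{align*}
  d(H,E)  &= p(H \mid E) - p(H)               &&\text{(probability difference)}\\
  r(H,E)  &= p(H \mid E)\,/\,p(H)             &&\text{(probability ratio)}\\
  ld(H,E) &= p(E \mid H) - p(E \mid \neg H)   &&\text{(likelihood difference)}\\
  lr(H,E) &= p(E \mid H)\,/\,p(E \mid \neg H) &&\text{(likelihood ratio)}
\end{align*}
```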
The ‘death of evidence’ issue in Canada raises the spectre of politicized science, and thus the question of what role social values may have in science and how this meshes with objectivity and evidence. I first criticize philosophical accounts that have to separate different steps of research to restrict the influence of social and other non-epistemic values. A prominent account on which social values may play a role even in the context of theory acceptance is the argument from inductive risk. It maintains that the more severe the social consequences of erroneously accepting a theory would be, the more evidence is needed before the theory may be accepted. However, an implication of this position is that increasing evidence makes the impact of social values converge to zero; and I argue for a stronger role for social values. On this position, social values may determine a theory’s conditions of adequacy, which among other things can include co…
I propose a distinct type of robustness, which I suggest can support a confirmatory role in scientific reasoning, contrary to the usual philosophical claims. In model robustness, repeated production of the empirically successful model prediction or retrodiction against a background of independently supported and varying model constructions, within a group of models containing a shared causal factor, may suggest how confident we can be in the causal factor and predictions/retrodictions, especially once supported by a variety-of-evidence framework. I present climate models of greenhouse gas global warming of the 20th Century as an example, and emphasize climate scientists’ discussions of robust models and causal aspects. The account is intended to be applicable to a broad array of sciences that use complex modeling techniques.
Assessment of error and uncertainty is a vital component of both natural and social science. This edited volume presents case studies of research practices across a wide spectrum of scientific fields. It compares methodologies and presents the ingredients needed for an overarching framework applicable to all.
Several candidate measures have been proposed for the degree to which evidence incrementally confirms a hypothesis. This paper provides an argument for one of them: the log-likelihood ratio measure. To this end, I suggest a plausible requirement that I call the Requirement of Collaboration, and then show that, of the various candidates, only the log-likelihood ratio measure l satisfies this requirement. Using this result, Jeffrey conditionalization is reformulated so as to disclose explicitly what determines new credences after experience.
This paper surveys and critically assesses existing theories of evidence with respect to four desiderata. A good theory of evidence should be both a theory of evidential support (i.e., be informative about what kinds of facts speak in favour of a hypothesis) and a theory of warrant (i.e., be informative about how strongly a given set of facts speaks in favour of the hypothesis); it should apply to the non-ideal cases in which scientists typically find themselves; and it should be ‘descriptively adequate’, i.e., able to adequately represent typical episodes of evidentiary reasoning. The theories surveyed here—Bayesianism, hypothetico-deductivism, satisfaction theories, error statistics, as well as Achinstein’s and Cartwright’s theories—are all found wanting in important respects. I finally argue that a deficiency all these theories have in common is a neglect or underplaying of the epistemic context in which the episode of evidentiary reasoning takes place.
This paper reviews all major theories of evidence, such as the Bayesian theory, hypothetico-deductivism, satisfaction theories, error statistics, Achinstein's explanationist theory, and Cartwright's argument theory. All these theories fail to take adequate account of the context in which a hypothesis is established and used. It is argued that the context of an inquiry determines important facts about what evidence is, and how much and what kind has to be collected to establish a hypothesis for a given purpose.
Over the past few decades, the probabilistic model of rational belief has enjoyed increasing interest from researchers in epistemology and the philosophy of science. Of course, such probabilistic models were used for much longer in economics, in game theory, and in other disciplines concerned with decision making. Moreover, Carnap and co-workers used probability theory to explicate philosophical notions of confirmation and induction, thereby targeting epistemic rather than decision-theoretic aspects of rationality. However, following Carnap’s early applications, philosophy has more recently seen an increased popularity of probabilistic models in other areas concerned with the philosophical analysis of belief: there are models targeting coherence, informativeness, simplicity, and so on. In brief, the probabilistic model of belief comprises a language, detailing the propositions about which an agent is supposed to have beliefs, and a function over the language that expresses beliefs…
In Indonesia, the issue of common method variance and bias has not yet gained much attention; even the terminology is little known outside psychometric enthusiasts and experts. In fact, the potential for common method variance and bias infiltrating research results is very high, especially in studies that use a single method, a single source, and a concurrent design, which are highly favored by psychology lecturers and researchers in Indonesia. This paper is a critical review, exposing the debate over and serious impact of common method variance and bias, as well as procedures for detecting, addressing, and correcting its effects. The author hopes this paper helps fill the gap in the literature, especially in Indonesian-language psychology research methodology textbooks, so that psychological research in Indonesia continues to improve in quality and earns a better place in international publications.
I examine the warrants we have in light of the empirical successes of a kind of model I call ‘hybrid models’, a kind that includes climate models among its members. I argue that these warrants’ strengths depend on inferential virtues that are not just explanatory virtues, contrary to what would be the case if inference to the best explanation provided the warrants. I also argue that the warrants in question, unlike those IBE provides, guide inferences only to model implications about which there is real uncertainty. My conclusion provides criteria of adequacy for epistemologies of climate and other hybrid models.
This paper is a supplement to, and provides a proof of principle of, Kuhn vs. Popper on Criticism and Dogmatism in Science: A Resolution at the Group Level. It illustrates how calculations may be performed in order to determine how the balance between different functions in science—such as imaginative, critical, and dogmatic—should be struck, with respect to confirmation (or corroboration) functions and rules of scientific method.
It is generally accepted that Popper‘s degree of corroboration, though “inductivist” in a very general and weak sense, is not inductivist in a strong sense, i.e. when by ‘inductivism’ we mean the thesis that the right measure of evidential support has a probabilistic character. The aim of this paper is to challenge this common view by arguing that Popper can be regarded as an inductivist, not only in the weak broad sense but also in a narrower, probabilistic sense. In section 2, I begin by briefly characterizing the relevant notion of inductivism that is at stake here; I then present and discuss the main Popperian argument against it and show that in the only reading in which the argument is formally valid it is restricted to cases of predicted evidence, and that even when restricted in this way the argument, though formally valid, is materially unsound. In section 3, I analyze the desiderata that, according to Popper, any acceptable measure of evidential support must satisfy, clean away their ad hoc components, and show that all the remaining desiderata are satisfied by inductivist-in-the-strict-sense measures. In section 4, I demonstrate that two of these desiderata, accepted by Popper, imply that in cases of predicted evidence any measure that satisfies them is qualitatively indistinguishable from conditional probability. Finally, I argue that this amounts to a kind of strong inductivism that conflicts with Popper’s anti-inductivist arguments and declarations, and that this conflict does not depend on the incremental versus non-incremental distinction for evidential-support measures, making Popper’s position inconsistent on any reading. Keywords: Popper; inductivism; confirmation; corroboration.
Hypothetico-deductive (H-D) confirmation builds on the idea that confirming evidence consists of successful predictions that deductively follow from the hypothesis under test. This article reviews the scope, history, and recent development of the venerable H-D account: First, we motivate the approach and clarify its relationship to Bayesian confirmation theory. Second, we explain and discuss the tacking paradoxes, which exploit the fact that H-D confirmation gives no account of evidential relevance. Third, we review several recent proposals that aim at a sounder and more comprehensive formulation of H-D confirmation. Finally, we conclude that the reputation of hypothetico-deductive confirmation as outdated and hopeless is undeserved: not only can the technical problems be addressed satisfactorily, the hypothetico-deductive method is also highly relevant for scientific practice.
Three related intuitions are explicated in this paper. The first is the idea that there must be some kind of probabilistic version of the HD-method, a ‘Hypothetico-Probabilistic (HP-) method’, in terms of something like probabilistic consequences, instead of deductive consequences. According to the second intuition, the comparative application of this method should also be functional for some probabilistic kind of empirical progress, and according to the third intuition this should be functional for something like probabilistic truth approximation. In all three cases, the guiding idea is to explicate these intuitions by explicating the crucial notions as appropriate ‘concretizations’ of their deductive analogs, being ‘idealizations’. It turns out that the comparative version of the proposed HP-method amounts to the likelihood comparison (LC-) method applied to the cumulated evidence. This method turns out to be functional not only for probabilistic empirical progress but also for probabilistic truth approximation. The latter is based on a probabilistic threshold theorem, constituting for this reason the analog of the deductive success theorem.
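As a rough schematic (my paraphrase, not the author's own notation), the LC-method applied to cumulated evidence compares rival theories by their likelihoods on the total accumulated evidence:

```latex
\[
  H_1 \text{ is (probabilistically) more successful than } H_2
  \quad\text{iff}\quad
  p(e_1 \wedge \dots \wedge e_n \mid H_1) \;>\; p(e_1 \wedge \dots \wedge e_n \mid H_2),
\]
% where e_1, ..., e_n is the cumulated evidence gathered so far.
```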
Medical diagnosis has been traditionally recognized as a privileged field of application for so-called probabilistic induction. Consequently, Bayes’ theorem, which mathematically formalizes this form of inference, has been seen as the most adequate tool for quantifying the uncertainty surrounding the diagnosis by providing probabilities of different diagnostic hypotheses, given symptomatic or laboratory data. On the other hand, it has also been remarked that differential diagnosis rather works by exclusion, e.g. by modus tollens, i.e. deductively. By drawing on a case history, this paper aims at clarifying some points on the issue. Namely: 1) Medical diagnosis does not represent, strictly speaking, a form of induction, but a type of what in Peircean terms should be called ‘abduction’ (identifying a case as the token of a specific type); 2) in performing the single diagnostic steps, however, different inferential methods of both inductive and deductive natures are used: modus tollens, the hypothetical-deductive method, abduction; 3) Bayes’ theorem is a probabilized form of abduction which uses mathematics in order to justify the degree of confidence which can be entertained in a hypothesis given the available evidence; 4) although theoretically irreconcilable, in practice both the hypothetical-deductive method and the Bayesian one are used in the same diagnosis with no serious compromise to its correctness; 5) Medical diagnosis, especially differential diagnosis, also uses a kind of “probabilistic modus tollens”, in that signs (symptoms or laboratory data) are taken as strong evidence that a given hypothesis is not true: the focus is not on hypothesis confirmation, but instead on its refutation [Pr(¬H | E1, E2, …, En)]. Especially at the beginning of a complicated case, odds are between the hypothesis that is potentially being excluded and a vague “other”. This procedure has the advantage of providing a clue as to what evidence to look for and of eventually reducing the set of candidate hypotheses if conclusive negative evidence is found. 6) Bayes’ theorem in its hypothesis-confirmation form can more faithfully, although idealistically, represent medical diagnosis when the diagnostic itinerary has come to a reduced set of plausible hypotheses after a process of progressive elimination of candidate hypotheses; 7) Bayes’ theorem is, however, indispensable in the case of litigation, in order to assess a doctor’s responsibility for medical error by taking into account the weight of the evidence at his disposal.
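A minimal numerical sketch (with hypothetical rates of my choosing, not the paper's case history) of the two uses of Bayes' theorem distinguished above: confirmation of a diagnostic hypothesis H by a sign E, and the "probabilistic modus tollens" focus on Pr(¬H | E):

```python
# Hypothetical diagnostic test: low-prevalence disease, good but imperfect sign.
prevalence  = 0.01   # prior Pr(H): base rate of the disease
sensitivity = 0.95   # Pr(E | H): probability of the sign given the disease
specificity = 0.90   # Pr(~E | ~H): probability of no sign given no disease

# Law of total probability, then Bayes' theorem.
p_E = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
p_H_given_E = sensitivity * prevalence / p_E   # hypothesis-confirmation form
p_notH_given_E = 1 - p_H_given_E               # probabilistic modus tollens

print(f"Pr(H|E)  = {p_H_given_E:.3f}")    # ~0.088: one positive sign confirms only weakly
print(f"Pr(~H|E) = {p_notH_given_E:.3f}") # ~0.912: strong ground for looking elsewhere
```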
This article shows that a slight variation of the argument in Milne 1996 yields the log‐likelihood ratio measure l rather than the log‐ratio measure r as “the one true measure of confirmation”.
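The two measures at issue, in the logarithmic forms standard in this literature:

```latex
\begin{align*}
  r(H,E) &= \log \frac{p(H \mid E)}{p(H)}              &&\text{(log-ratio measure)}\\
  l(H,E) &= \log \frac{p(E \mid H)}{p(E \mid \neg H)}  &&\text{(log-likelihood ratio measure)}
\end{align*}
```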
This book brings together important essays by one of the leading philosophers of science at work today. Elisabeth A. Lloyd examines several of the central topics in philosophy of biology, including the structure of evolutionary theory, units of selection, and evolutionary psychology, as well as the Science Wars, feminism and science, and sexuality and objectivity. Lloyd challenges the current evolutionary accounts of the female orgasm and analyses them for bias. She also offers an innovative analysis of the concept of objectivity. Lloyd analyses the structure of evolutionary theory and unpacks the units of selection debates into four distinct aspects, illuminating several mysteries in the biology literature. Central to all essays in this book is the author's abiding concern for evidence and empirical data.
Carnap's inductive logic (or confirmation) project is revisited from an "increase in firmness" (or probabilistic relevance) point of view. It is argued that Carnap's main desiderata can be satisfied in this setting, without the need for a theory of "logical probability." The emphasis here will be on explaining how Carnap's epistemological desiderata for inductive logic will need to be modified in this new setting. The key move is to abandon Carnap's goal of bridging confirmation and credence, in favor of bridging confirmation and evidential support.
After some general remarks about the interrelation between philosophical and statistical thinking, the discussion centres largely on significance tests. These are defined as the calculation of p-values rather than as formal procedures for ‘acceptance’ and ‘rejection’. A number of types of null hypothesis are described, and a principle for evidential interpretation is set out governing the implications of p-values in the specific circumstances of each application, as contrasted with a long-run interpretation. A number of more complicated situations are discussed in which modification of the simple p-value may be essential.
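To illustrate the "p-value rather than accept/reject" reading, here is a minimal sketch (with hypothetical data and a function name of my own) of a significance test reported as a p-value:

```python
# One-sample two-sided z-test of the null hypothesis mu = mu0 with known sigma.
import math

def z_test_p_value(sample_mean, mu0, sigma, n):
    """Two-sided p-value: the probability, under the null hypothesis,
    of a sample mean at least this far from mu0."""
    z = (sample_mean - mu0) / (sigma / math.sqrt(n))
    return math.erfc(abs(z) / math.sqrt(2))   # erfc(|z|/sqrt(2)) = 2*(1 - Phi(|z|))

# Hypothetical data: observed mean 103 over n=25 observations, sigma=10, null mu0=100.
print(z_test_p_value(103, 100, 10, 25))   # ~0.134: weak evidence against the null
```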
The growing availability of computer power and statistical software has greatly increased the ease with which practitioners apply statistical methods, but this has not been accompanied by attention to checking the assumptions on which these methods are based. At the same time, disagreements about inferences based on statistical research frequently revolve around whether the assumptions are actually met in the studies available, e.g., in psychology, ecology, biology, risk assessment. Philosophical scrutiny can help disentangle 'practical' problems of model validation, and conversely, a methodology of statistical model validation can shed light on a number of issues of interest to philosophers of science.
Experimental research is commonly held up as the paradigm of "good" science. Although experiment plays many roles in science, its classical role is testing hypotheses in controlled laboratory settings. Historical science is sometimes held to be inferior on the grounds that its hypotheses cannot be tested by controlled laboratory experiments. Using contemporary examples from diverse scientific disciplines, this paper explores differences in practice between historical and experimental research vis-à-vis the testing of hypotheses. It rejects the claim that historical research is epistemically inferior. For, as I argue, scientists engage in two very different patterns of evidential reasoning and, although there is overlap, one pattern predominates in historical research and the other pattern predominates in classical experimental research. I show that these different patterns of reasoning are grounded in an objective and remarkably pervasive time asymmetry of nature.
In this paper I discuss how descriptive studies of science, increasingly emphasised by philosophers of science, can be used to test normative theories of science. I claim that we can use cases of scientific practice as counterexamples: if the practice of a given scientist can be shown to be justified and it diverges from the prescriptions of a normative theory of science, then the theory should be rejected. This approach differs from those offered by previous philosophers of science and at the same time brings the philosophy of science more into line with other areas of philosophy.
That some propositions are testable, while others are not, was a fundamental idea in the philosophical program known as logical empiricism. That program is now widely thought to be defunct. Quine’s (1953) “Two Dogmas of Empiricism” and Hempel’s (1950) “Problems and Changes in the Empiricist Criterion of Meaning” are among its most notable epitaphs. Yet, as we know from Mark Twain’s comment on an obituary that he once had the pleasure of reading about himself, the report of a death can be an exaggeration. The research program that began in Vienna and Berlin continues, even though many of the specific formulations that came out of those circles are flawed and need to be replaced.
Any precise version of H-D needs to handle various problems, most notably the problem of selective confirmation: precise formulations of H-D should not have the consequence that where S confirms T, S confirms T&T' for any T'. It is the perceived failure of H-D to solve such problems that has led John Earman to recently conclude that H-D is "very nearly a dead horse". This suggests the following state of play: H-D is an intuitively plausible idea that breaks down in the attempt to give it a precise formulation. Indeed, I think that fairly captures the view among specialists in the field of confirmation theory. Here I argue that the truth about H-D is largely the reverse: H-D can be given a precise formulation that avoids the longstanding technical problems; however, it relies on a fundamentally unsound philosophical intuition. The bulk of this paper involves reviewing the problems affecting previous attempts at giving precise formulations of H-D and displaying some recent versions that can handle these problems. It then briefly explains why the basic intuition behind H-D is itself unsound, namely, because H-D involves a tacit assumption of inductive scepticism. Finally, the historical relation between H-D and the positivists' quest for a criterion of empirical significance is reconsidered, with the surprising result that, having glossed H-D as fundamentally unsound, a sound version of the criterion of empirical significance is now available. The demarcation criterion, the positivists' philosopher's stone that separates claims with empirical significance from claims lacking it, having finally been found, it is argued that we should regard empirical significance as just one among a variety of virtues and not follow the positivists in taking it to be a sine qua non for all meaningful statements.
Once upon a time, logic was the philosopher’s tool for analyzing scientific reasoning. Nowadays, probability and statistics have largely replaced logic, and their most popular application—Bayesianism—has replaced the qualitative deductive relationship between a hypothesis h and evidence e with a quantitative measure of h’s probability in light of e.
This investigation is an attempt to spell out a formal semantic theory for inductive logic. The logic is probabilistic. It roughly resembles the logic of confirmation functions developed by Rudolf Carnap. Carnap's logic specifies an object language (the language of monadic predicate logic) and defines meta-linguistic probability functions on sentences of the object language. These probability functions express a semantic relationship between sentences, just as logical consequence is a semantic relationship between sentences in deductive logic. The semantic conditional probability functions express the degree of partial entailment or degree of confirmation that premises afford a conclusion. The system I develop is roughly of this kind, but on a stronger object language, one representing all of first-order logic and set theory. It avoids the main problems with Carnap's approach; e.g., universally quantified sentences need not get probability 0 for infinite domains of objects. Also, unlike Carnap's system, the system developed here will not require that the logical structure of sentences be the sole determinant of all semantic probability relations. Only the semantic probabilities Carnap calls direct inferences (i.e., the conditional probability of the outcomes of instances, given a general statistical statement and the membership of the instances in an appropriate reference class) will depend on logical form alone. But in general the semantic probability of one sentence given another is not solely a matter of logical form. The system provides a rigorous formal account of the direct inferences as logical partial entailments. Logical entailments and logical partial entailments are to be the purely logical foundation on which a Bayesian logicist account of theory confirmation, or inverse inference, is constructed. The account is Bayesian in that the degree of confirmation of a hypothesis on evidence is related to direct inferences and to the prior plausibility of the hypothesis by way of Bayes' Theorem. The account is logicist in that conditional probabilities are developed as semantic relationships between sentences, and the semantics specifies certain purely logical relationships between sentences, the logical partial entailments.
The degree of corroboration of a scientific hypothesis is an issue that has been repeatedly discussed in the modern theory of science. In a preceding paper it was shown that the formulae advanced by Popper to calculate the degree of corroboration C are not very satisfactory, because the probability values required in the computation of C are not as a rule available. Another equation to measure the degree of corroboration B was proposed, whereby only the number n of unsuccessful efforts to falsify a scientific hypothesis by means of adequate experiments or observations needs to be known. Shortly after the publication of these ideas I discovered that Nicolaus Cusanus in his book "De docta ignorantia" had proposed a model of scientific "verisimilitude" which leads to a quite similar relationship between B and the number n of independent proofs or observations. The "polygonal" model of verisimilitude mentioned by Cusanus is presumably the first quantitative estimate proposed for this problem in the philosophical literature.
Reconstructing the Past seeks to clarify and help resolve the vexing methodological issues that arise when biologists try to answer such questions as whether human beings are more closely related to chimps than they are to gorillas. It explores the case for considering the philosophical idea of simplicity/parsimony as a useful principle for evaluating taxonomic theories of evolutionary relationships. For the past two decades, evolutionists have been vigorously debating the appropriate methods that should be used in systematics, the field that aims at reconstructing phylogenetic relationships among species. This debate over phylogenetic inference, Elliott Sober observes, raises broader questions of hypothesis testing and theory evaluation that run head-on into long-standing issues concerning simplicity/parsimony in the philosophy of science. Sober treats the problem of phylogenetic inference as a detailed case study in which the philosophical idea of simplicity/parsimony can be tested as a principle of theory evaluation. Bringing together philosophy and biology, as well as statistics, Sober builds a general framework for understanding the circumstances in which parsimony makes sense as a tool of phylogenetic inference. Along the way he provides a detailed critique of parsimony in the biological literature, exploring the strengths and limitations of both statistical and nonstatistical cladistic arguments.
New computer systems of discovery create a research program for logic and philosophy of science. These systems consist of inference rules and control knowledge that guide the discovery process. Their paths of discovery are influenced by the available data and the discovery steps coincide with the justification of results. The discovery process can be described in terms of fundamental concepts of artificial intelligence such as heuristic search, and can also be interpreted in terms of logic. The traditional distinction that places studies of scientific discovery outside the philosophy of science, in psychology, sociology, or history, is no longer valid in view of the existence of computer systems of discovery. It becomes both reasonable and attractive to study the schemes of discovery in the same way as the criteria of justification were studied: empirically as facts, and logically as norms.