In the latter half of the twentieth century, philosophers of science argued (implicitly and explicitly) that epistemically rational individuals might compose epistemically irrational groups and that, conversely, epistemically rational groups might be composed of epistemically irrational individuals. We call the conjunction of these two claims the Independence Thesis, as they together imply that methodological prescriptions for scientific communities and those for individual scientists might be logically independent of one another. We develop a formal model of scientific inquiry, define four criteria for individual and group epistemic rationality, and then prove that the four definitions diverge, in the sense that individuals will be judged rational when groups are not and vice versa. We conclude by explaining the implications of the Independence Thesis for (i) descriptive history and sociology of science and (ii) normative prescriptions for scientific communities.
Jim Joyce argues for two amendments to probabilism. The first is the doctrine that credences are rational, or not, in virtue of their accuracy or “closeness to the truth” (1998). The second is a shift from a numerically precise model of belief to an imprecise model represented by a set of probability functions (2010). We argue that the two amendments cannot both be satisfied simultaneously. To do so, we employ a slightly generalized version of an impossibility theorem of Seidenfeld, Schervish, and Kadane (2012), who show that there is no strictly proper scoring rule for imprecise probabilities. The question, then, is what should give way. Joyce, who is well aware of this no-go result, thinks that a quantifiability constraint on epistemic accuracy should be relaxed to accommodate imprecision. We argue instead that another Joycean assumption, called strict immodesty, should be rejected, and we prove a representation theorem that characterizes all “mildly” immodest measures of inaccuracy.
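For readers unfamiliar with strict propriety, here is a minimal sketch (not from the paper, and restricted to precise credences over a single binary event). A scoring rule is strictly proper when an agent's expected penalty is uniquely minimized by reporting her true credence; the Brier score is the standard example. The function name and grid below are purely illustrative.

```python
def expected_brier(p_true, p_report):
    """Expected Brier penalty for a binary event whose true chance is p_true,
    when the agent reports credence p_report."""
    return p_true * (1 - p_report) ** 2 + (1 - p_true) * p_report ** 2

# Search a grid of possible reports for the one minimizing expected penalty
# when the true chance is 0.3.
p = 0.3
reports = [i / 100 for i in range(101)]
best = min(reports, key=lambda q: expected_brier(p, q))
print(best)  # honesty (reporting 0.3) uniquely minimizes the expected penalty
```

The Seidenfeld–Schervish–Kadane result the abstract invokes shows that no analogue of this honesty-rewarding property survives when credences are sets of probability functions rather than single numbers.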
Modeling and computer simulations, we claim, should be considered core philosophical methods. More precisely, we will defend two theses. First, philosophers should use simulations for many of the same reasons we currently use thought experiments. In fact, simulations are superior to thought experiments in achieving some philosophical goals. Second, devising and coding computational models instill good philosophical habits of mind. Throughout the paper, we respond to the often implicit objection that computer modeling is “not philosophical.”
Current scientific research almost always requires collaboration among several (if not several hundred) specialized researchers. When scientists co-author a journal article, who deserves credit for discoveries or blame for errors? How should scientific institutions promote fruitful collaborations among scientists? In this book, leading philosophers of science address these critical questions.
We evaluate the asymptotic performance of boundedly-rational strategies in multi-armed bandit problems, where performance is measured in terms of the tendency (in the limit) to play optimal actions in either (i) isolation or (ii) networks of other learners. We show that, for many strategies commonly employed in economics, psychology, and machine learning, performance in isolation and performance in networks are essentially unrelated. Our results suggest that the appropriateness of various, common boundedly-rational strategies depends crucially upon the social context (if any) in which such strategies are to be employed.
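As a concrete illustration of one commonly studied boundedly rational strategy in isolation (this is not the paper's model; the arm means, exploration rate, and horizon below are invented for the example), epsilon-greedy with a fixed exploration rate converges to playing the optimal arm most, but not all, of the time:

```python
import random

def epsilon_greedy(means, epsilon=0.1, steps=10_000, seed=0):
    """Run epsilon-greedy on a Bernoulli bandit with the given arm means;
    return the fraction of pulls spent on the optimal arm."""
    rng = random.Random(seed)
    counts = [0] * len(means)
    values = [0.0] * len(means)   # running estimates of each arm's mean
    best = max(range(len(means)), key=lambda i: means[i])
    optimal_pulls = 0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(len(means))                          # explore
        else:
            arm = max(range(len(means)), key=lambda i: values[i])    # exploit
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
        optimal_pulls += (arm == best)
    return optimal_pulls / steps

print(epsilon_greedy([0.3, 0.7]))
```

Because the exploration rate never decays, the limiting frequency of optimal play stays bounded away from 1; strategies of this kind are among those whose isolated and networked performance the paper compares.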
Epistemic decision theory (EDT) employs the mathematical tools of rational choice theory to justify epistemic norms, including probabilism, conditionalization, and the Principal Principle, among others. Practitioners of EDT endorse two theses: (1) epistemic value is distinct from subjective preference, and (2) belief and epistemic value can be numerically quantified. We argue that the first thesis, which we call epistemic puritanism, undermines the second.
A dynamical system is called chaotic if small changes to its initial conditions can create large changes in its behavior. By analogy, we call a dynamical system structurally chaotic if small changes to the equations describing the evolution of the system produce large changes in its behavior. Although there are many definitions of “chaos,” there are few mathematically precise candidate definitions of “structural chaos.” I propose a definition, and I explain two new theorems that show that a set of models is structurally chaotic if it contains a chaotic function. I conclude by discussing the relationship between structural chaos and structural stability.
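A standard concrete example of the sensitivity described above (illustrative only: the paper concerns perturbations of the governing equations, while this sketch shows the familiar initial-condition version) is the logistic map at parameter r = 4, where trajectories from nearly identical starting points diverge rapidly:

```python
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map x -> r*x*(1-x) from x0; return the trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-9)  # perturb the initial condition slightly
gap = max(abs(x - y) for x, y in zip(a, b))
print(gap)  # the trajectories separate despite the tiny perturbation
```

Structural chaos, by contrast, would perturb the map itself (e.g., the value of r or the functional form), rather than the starting point x0.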
Several current debates in the epistemology of testimony are implicitly motivated by concerns about the reliability of rules for changing one’s beliefs in light of others’ claims. Call such rules testimonial norms (tns). To date, epistemologists have neither (i) characterized those features of communities that influence the reliability of tns, nor (ii) evaluated the reliability of tns as those features vary. These are the aims of this paper. I focus on scientific communities, where the transmission of highly specialized information is both ubiquitous and critically important. Employing a formal model of scientific inquiry, I argue that miscommunication and the “communicative structure” of science strongly influence the reliability of tns, where reliability is made precise in three ways.
In medicine and the social sciences, researchers must frequently integrate the findings of many observational studies, which measure overlapping collections of variables. For instance, learning how to prevent obesity requires combining studies that investigate obesity and diet with others that investigate obesity and exercise. Recently developed causal discovery algorithms provide techniques for integrating many studies, but little is known about what can be learned from such algorithms. This article argues that there are causal facts that one could learn by conducting a large study but which could not be learned by combining many smaller studies. Moreover, I characterize the frequency with which combining many studies increases underdetermination and exactly how much information is lost.
It is common to assume that the problem of induction arises only because of small sample sizes or unreliable data. In this paper, I argue that the piecemeal collection of data can also lead to underdetermination of theories by evidence, even if arbitrarily large amounts of completely reliable experimental and observational data are collected. Specifically, I focus on the construction of causal theories from the results of many studies (perhaps hundreds), including randomized controlled trials and observational studies, where the studies focus on overlapping, but not identical, sets of variables. Two theorems reveal that, for any collection of variables V, there exist fundamentally different causal theories over V that cannot be distinguished unless all variables are simultaneously measured. Underdetermination can result from piecemeal measurement, regardless of the quantity and quality of the data. Moreover, I generalize these results to show that, a priori, it is impossible to choose a series of small (in terms of number of variables) observational studies that will be most informative with respect to the causal theory describing the variables under investigation. This final result suggests that scientific institutions may need to play a larger role in coordinating differing research programs during inquiry.
According to Quine, Charles Parsons, Mark Steiner, and others, Russell’s logicist project is important because, if successful, it would show that mathematical theorems possess desirable epistemic properties often attributed to logical theorems, such as aprioricity, necessity, and certainty. Unfortunately, Russell never attributed such importance to logicism, and such a thesis contradicts Russell’s explicitly stated views on the relationship between logic and mathematics. This raises the question: what did Russell understand to be the philosophical importance of logicism? Building on recent work by Andrew Irvine and Martin Godwyn, I argue that Russell thought a systematic reduction of mathematics increases the certainty of known mathematical theorems (even basic arithmetical facts) by showing mathematical knowledge to be coherently organized. The paper outlines Russell’s theory of coherence, and discusses its relevance to logicism and the certainty attributed to mathematics.
Over the past two decades, several consistent procedures have been designed to infer causal conclusions from observational data. We prove that if the true causal network might be an arbitrary, linear Gaussian network or a discrete Bayes network, then every unambiguous causal conclusion produced by a consistent method from non-experimental data is subject to reversal as the sample size increases any finite number of times. That result, called the causal flipping theorem, extends prior results to the effect that causal discovery cannot be reliable on a given sample size. We argue that since repeated flipping of causal conclusions is unavoidable in principle for consistent methods, the best possible discovery methods are consistent methods that retract their earlier conclusions no more than necessary. A series of simulations of various methods across a wide range of sample sizes illustrates concretely both the theorem and the principle of comparing methods in terms of retractions.
Ockham’s razor is the principle that, all other things being equal, scientists ought to prefer simpler theories. In recent years, philosophers have argued that simpler theories make better predictions, possess theoretical virtues like explanatory power, and have other pragmatic virtues like computational tractability. However, such arguments fail to explain how and why a preference for simplicity can help one find true theories in scientific inquiry, unless one already assumes that the truth is simple. One new solution to that problem is the Ockham efficiency theorem, which states that scientists who heed Ockham’s razor retract their opinions less often and sooner than do their non-Ockham competitors. The theorem neglects, however, to consider competitors following random strategies, and in many applications random strategies are known to achieve better worst-case loss than deterministic strategies. In this paper, we describe two ways to extend the result to a very general class of random, empirical strategies. The first extension concerns expected retractions, retraction times, and errors; the second concerns retractions in chance, times of retractions in chance, and chances of errors.
In medicine and the social sciences, researchers often measure only a handful of variables simultaneously. The underlying assumption behind this methodology is that combining the results of dozens of smaller studies can, in principle, yield as much information as one large study in which dozens of variables are measured simultaneously. Mayo-Wilson (2011, pp. 864–874; 2013, Br J Philos Sci 65:213–249, https://doi.org/10.1093/bjps/axs030) shows that assumption is false when causal theories are inferred from observational data. This paper extends Mayo-Wilson’s results to cases in which experimental data is available. I prove several new theorems showing that, as the number of variables under investigation grows, experiments do not improve, in the worst case, one’s ability to identify the true causal model if one can measure only a few variables at a time. However, stronger statistical assumptions significantly aid causal discovery in piecemeal inquiry, even if such assumptions are unhelpful when all variables can be measured simultaneously.
What justifies the use of Bayesian statistics in science? The traditional answer is that Bayesian statistics is simply an instance of orthodox expected utility theory. Thus, Bayesian statistical methods, like principles of utility theory, are justified by norms of individual rationality. In particular, most Bayesians argue that a scientist's credences must satisfy the probability axioms if she adheres to norms of practical and epistemic rationality. We argue that, to justify Bayesian statistics as a tool for science, it is necessary that a scientist's public credences obey the probability axioms. We claim that norms of collective science help justify this restricted view, termed public probabilism.