This book explores a question central to philosophy--namely, what does it take for a belief to be justified or rational? According to a widespread view, whether one has justification for believing a proposition is determined by how probable that proposition is, given one's evidence. In this book this view is rejected and replaced with another: in order for one to have justification for believing a proposition, one's evidence must normically support it--roughly, one's evidence must make the falsity of that proposition abnormal in the sense of calling for special, independent explanation. This conception of justification bears upon a range of topics in epistemology and beyond. Ultimately, this way of looking at justification guides us to a new, unfamiliar picture of how we should respond to our evidence and manage our own fallibility. This picture is developed here.
There is something puzzling about statistical evidence. One place this manifests is in the law, where courts are reluctant to base affirmative verdicts on evidence that is purely statistical, in spite of the fact that it is perfectly capable of meeting the standards of proof enshrined in legal doctrine. After surveying some proposed explanations for this, I shall outline a new approach – one that makes use of a notion of normalcy that is distinct from the idea of statistical frequency. The puzzle is not, however, merely a legal one. Our unwillingness to base beliefs on statistical evidence is by no means limited to the courtroom, and is at odds with almost every general principle that epistemologists have proposed as to how we ought to manage our beliefs.
According to a captivating picture, epistemic justification is essentially a matter of epistemic or evidential likelihood. While certain problems for this view are well known, it is motivated by a very natural thought—if justification can fall short of epistemic certainty, then what else could it possibly be? In this paper I shall develop an alternative way of thinking about epistemic justification. On this conception, the difference between justification and likelihood turns out to be akin to the more widely recognised difference between ceteris paribus laws and brute statistical generalisations. I go on to discuss, in light of this suggestion, issues such as classical and lottery-driven scepticism as well as the lottery and preface paradoxes.
Is it right to convict a person of a crime on the basis of purely statistical evidence? Many who have considered this question agree that it is not, posing a direct challenge to legal probabilism – the claim that the criminal standard of proof should be understood in terms of a high probability threshold. Some defenders of legal probabilism have, however, held their ground: Schoeman (1987) argues that there are no clear epistemic or moral problems with convictions based on purely statistical evidence, and speculates that our aversion to such convictions may be nothing more than an irrational bias. More recently, Hedden and Colyvan (2019, section VI) describe our reluctance to convict on the basis of purely statistical evidence as an ‘intuition’, but suggest that there may be no ‘in principle’ problem with such convictions (see also Papineau, forthcoming, section 6). In this paper, I argue that there is, in some cases, an in principle problem with a conviction based upon statistical evidence alone – namely, it commits us to a precedent which, if consistently followed through, could lead to the deliberate conviction of an innocent person. I conclude with some reflections on the idea that the criminal justice system should strive to maximise the accuracy of its verdicts – and the related idea that we should each strive to maximise the accuracy of our beliefs.
Martin Ferguson Smith's work on Lucretius is both well known and highly regarded. However, his 1969 translation of _De Rerum Natura_--long out of print--is virtually unknown. Readers will share our excitement in the discovery of this accurate and fluent prose rendering. For this edition, Professor Smith provides a revised translation, new Introduction, headnotes and bibliography.
One of the most intriguing claims in Sven Rosenkranz’s Justification as Ignorance is that Timothy Williamson’s celebrated anti-luminosity argument can be resisted when it comes to the condition ~ K ~ KP—the condition that one is in no position to know that one is in no position to know P. In this paper, I critically assess this claim.
A ‘lottery belief’ is a belief that a particular ticket has lost a large, fair lottery, based on nothing more than the odds against it winning. The lottery paradox brings out a tension between the idea that lottery beliefs are justified and the idea that one can always justifiably believe the deductive consequences of things that one justifiably believes – what is sometimes called the principle of closure. Many philosophers have treated the lottery paradox as an argument against the second idea – but I make a case here that it is the first idea that should be given up. As I shall show, there are a number of independent arguments for denying that lottery beliefs are justified.
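The arithmetic behind this tension is worth making explicit. The sketch below uses a hypothetical 1,000-ticket lottery with exactly one winner (the figures are illustrative assumptions, not drawn from the paper): each individual ‘ticket i loses’ belief clears any reasonable probability threshold, while their conjunction is certain to be false.

```python
# Illustrative lottery paradox arithmetic (assumed 1,000-ticket fair lottery
# with exactly one winning ticket; all numbers here are assumptions).

n = 1000                                 # number of tickets
p_ticket_i_loses = (n - 1) / n           # 0.999 for each individual ticket
p_every_ticket_loses = 0.0               # exactly one ticket must win

threshold = 0.95                         # a sample "high probability" bar
assert p_ticket_i_loses > threshold      # each lottery belief clears the bar
assert p_every_ticket_loses < threshold  # their conjunction does not
print(p_ticket_i_loses)                  # 0.999
```

So a purely probabilistic standard for justification, together with closure under conjunction, licenses belief in a proposition known to be false – which is the paradox in miniature.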
Theories of epistemic justification are commonly assessed by exploring their predictions about particular hypothetical cases – predictions as to whether justification is present or absent in this or that case. With a few exceptions, it is much less common for theories of epistemic justification to be assessed by exploring their predictions about logical principles. The exceptions are a handful of ‘closure’ principles, which have received a lot of attention, and which certain theories of justification are well known to invalidate. But these closure principles are only a small sample of the logical principles that we might consider. In this paper, I will outline four further logical principles that plausibly hold for justification and two which plausibly do not. While my primary aim is just to put these principles forward, I will use them to evaluate some different approaches to justification and (tentatively) conclude that a ‘normic’ theory of justification best captures its logic.
Although clinical ethics consultation is a high-stakes endeavor with an increasing prominence in health care systems, progress in developing standards for quality is challenging. In this article, we describe the results of a pilot project utilizing portfolios as an evaluation tool. We found that this approach is feasible and resulted in a reasonably wide distribution of scores among the 23 submitted portfolios that we evaluated. We discuss limitations and implications of these results, and suggest that this is a significant step on the pathway to an eventual certification process for clinical ethics consultants.
In this paper, we present the results of two surveys that investigate subjects’ judgments about what can be known or justifiably believed about lottery outcomes on the basis of statistical evidence, testimonial evidence, and “mixed” evidence, while considering possible anchoring and priming effects. We discuss these results in light of seven distinct hypotheses that capture various claims made by philosophers about lay people’s lottery judgments. We conclude by summarizing the main findings, pointing to future research, and comparing our findings to recent studies by Turri and Friedman.
According to the principle of Conjunction Closure, if one has justification for believing each of a set of propositions, one has justification for believing their conjunction. The lottery and preface paradoxes can both be seen as posing challenges for Closure, but leave open familiar strategies for preserving the principle. While this is all relatively well-trodden ground, a new Closure-challenging paradox has recently emerged, in two somewhat different forms, due to Marvin Backes (2019a) and Francesco Praolini (2019). This paradox synthesises elements of the lottery and the preface and is designed to close off the familiar Closure-preserving strategies. By appealing to a normic theory of justification, I will defend Closure in the face of this new paradox. Along the way I will draw more general conclusions about justification, normalcy and defeat, which bear upon what Backes (2019b) has dubbed the ‘easy defeat’ problem for the normic theory.
Our understanding of subjunctive conditionals has been greatly enhanced through the use of possible world semantics and, more precisely, by the idea that they involve variably strict quantification over possible worlds. I propose to extend this treatment to ceteris paribus conditionals – that is, conditionals that incorporate a ceteris paribus or ‘other things being equal’ clause. Although such conditionals are commonly invoked in scientific theorising, they traditionally arouse suspicion and apprehensiveness amongst philosophers. By treating ceteris paribus conditionals as a species of variably strict conditional I hope to shed new light upon their content and their logic.
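The variably strict treatment can be sketched computationally. The toy model below is my own illustration, not one from the paper: worlds are integers, propositions are sets of worlds, and a system of nested spheres is assumed around the world of evaluation. Following Lewis, a conditional A □→ C is true just in case either no sphere contains an A-world, or the smallest sphere containing an A-world contains only A-worlds that are C-worlds.

```python
# Minimal sketch of a Lewis-style variably strict conditional. Assumptions:
# finitely many worlds, propositions as sets of worlds, and spheres given as
# a list ordered from smallest (closest) to largest.

def variably_strict(antecedent, consequent, spheres):
    """A []-> C: true iff no sphere contains an A-world (vacuous truth), or
    every A-world in the smallest A-permitting sphere is a C-world."""
    for sphere in spheres:            # scan from the innermost sphere outward
        a_worlds = antecedent & sphere
        if a_worlds:                  # smallest sphere with an A-world found
            return a_worlds <= consequent
    return True                       # A is impossible throughout the spheres

# Toy model: worlds 0-3, spheres nested around the actual world (world 0).
spheres = [{0}, {0, 1}, {0, 1, 2, 3}]
print(variably_strict({1, 2}, {1}, spheres))  # True: closest A-world is a C-world
print(variably_strict({1, 2}, {2}, spheres))  # False: closest A-world is not
```

The variability is visible in the scan: how strictly the conditional quantifies depends on where the first antecedent-world shows up, which is exactly the feature a ceteris paribus reading can exploit.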
The notion of risk plays a central role in economics, finance, health, psychology, law and elsewhere, and is prevalent in managing challenges and resources in day-to-day life. In recent work, Duncan Pritchard (2015, 2016) has argued against the orthodox probabilistic conception of risk on which the risk of a hypothetical scenario is determined by how probable it is, and in favour of a modal conception on which the risk of a hypothetical scenario is determined by how modally close it is. In this article, we use Pritchard’s discussion as a springboard for a more wide-ranging discussion of the notion of risk. We introduce three different conceptions of risk: the standard probabilistic conception, Pritchard’s modal conception, and a normalcy conception that is new (though it has some precursors in the psychological literature on risk perception). Ultimately, we argue that the modal conception is ill-suited to the roles that a notion of risk is required to play and explore the prospects for a form of pluralism about risk, embracing both the probabilistic and the normalcy conceptions.
In this paper I draw attention to a peculiar epistemic feature exhibited by certain deductively valid inferences. Certain deductively valid inferences are unable to enhance the reliability of one's belief that the conclusion is true—in a sense that will be fully explained. As I shall show, this feature is demonstrably present in certain philosophically significant inferences—such as G. E. Moore's notorious 'proof' of the existence of the external world. I suggest that this peculiar epistemic feature might be correlated with the much-discussed phenomenon that Crispin Wright and Martin Davies have called 'transmission failure'—the apparent failure, on the part of some deductively valid inferences, to transmit one's justification for believing the premises.
In this paper I respond to Marcello Di Bello’s criticisms of the ‘normic account’ of the criminal standard of proof. In so doing, I further elaborate on what the normic account predicts about certain significant legal categories of evidence, including DNA and fingerprint evidence and eyewitness identifications.
Many epistemologists have responded to the lottery paradox by proposing formal rules according to which high probability defeasibly warrants acceptance. Douven and Williamson present an ingenious argument purporting to show that such rules invariably trivialise, in that they reduce to the claim that a probability of 1 warrants acceptance. Douven and Williamson’s argument does, however, rest upon significant assumptions – amongst them a relatively strong structural assumption to the effect that the underlying probability space is both finite and uniform. In this paper, I will show that something very like Douven and Williamson’s argument can in fact survive with much weaker structural assumptions – and, in particular, can apply to infinite probability spaces.
In ‘The normative role of knowledge’ (2012), Declan Smithies defends a ‘JK-rule’ for belief: One has justification to believe that P iff one has justification to believe that one is in a position to know that P. Similar claims have been defended by others (Huemer, 2007, Reynolds, forthcoming). In this paper, I shall argue that the JK-rule is false. The standard and familiar way of arguing against putative rules for belief or assertion is, of course, to describe putative counterexamples. My argument, though, won’t be like this – indeed I doubt that there are any intuitively compelling counterexamples to the JK-rule. Nevertheless, the claim that there are counterexamples to the JK-rule can, I think, be given something approaching a formal proof. My primary aim here is to sketch this proof. I will briefly consider some broader implications for how we ought to think about the epistemic standards governing belief and assertion.
Say that two goals are normatively coincident just in case one cannot aim for one goal without automatically aiming for the other. While knowledge and justification are distinct epistemic goals, with distinct achievement conditions, this paper begins from the suggestion that they are nevertheless normatively coincident—aiming for knowledge and aiming for justification are one and the same activity. A number of surprising consequences follow from this—both specific consequences about how we can ascribe knowledge and justification in lottery cases and more general consequences about the nature of justification and the relationship between justification and evidential probability. Many of these consequences turn out to be at variance with conventional, prevailing views.
Tom Stoppard’s “Rosencrantz and Guildenstern Are Dead” opens with a puzzling scene in which the title characters are betting on coin throws and observe a seemingly astonishing run of 92 heads in a row. Guildenstern grows uneasy and proposes a number of unsettling explanations for what is occurring. Then, in a sudden change of heart, he appears to suggest that there is nothing surprising about what they are witnessing, and nothing that needs any explanation. He says ‘…each individual coin spun individually is as likely to come down heads as tails and therefore should cause no surprise each time it does.’ In this article I argue that Guildenstern is right – there is nothing surprising about throwing 92 heads in a row. I go on to consider the relationship between surprise, probability and belief.
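Guildenstern's observation can be put as a two-line calculation: 92 heads in a row has exactly the same probability as any other specific sequence of 92 fair throws, so if improbability alone were grounds for surprise, every possible run of 92 throws would be equally astonishing.

```python
# Guildenstern's point in arithmetic: on a fair coin, 92 heads in a row is
# exactly as probable as any other specific sequence of 92 throws.

p_all_heads = 0.5 ** 92           # HHHH...H
p_some_other_sequence = 0.5 ** 92  # e.g. HTHT...T: the same 92-fold product

assert p_all_heads == p_some_other_sequence
print(p_all_heads)                # on the order of 2e-28
```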
The standard of proof applied in civil trials is the preponderance of evidence, often said to be met when a proposition is shown to be more than 50% likely to be true. A number of theorists have argued that this 50%+ standard is too weak – there are circumstances in which a court should find that the defendant is not liable, even though the evidence presented makes it more than 50% likely that the plaintiff’s claim is true. In this paper, I will recapitulate the familiar arguments for this thesis, before defending a more radical one: The 50%+ standard is also too strong – there are circumstances in which a court should find that a defendant is liable, even though the evidence presented makes it less than 50% likely that the plaintiff’s claim is true. I will argue that the latter thesis follows naturally from the former once we accept that the parties in a civil trial are to be treated equally. I will conclude by sketching an alternative interpretation of the civil standard of proof.
My concern in this paper is with the claim that knowledge is a mental state – a claim that Williamson places front and centre in Knowledge and Its Limits. While I am not by any means convinced that the claim is false, I do think it carries certain costs that have not been widely appreciated. One source of resistance to this claim derives from internalism about the mental – the view, roughly speaking, that one’s mental states are determined by one’s internal physical state. In order to know that something is the case it is not, in general, enough for one’s internal physical state to be a certain way – the wider world must also be a certain way. If we accept that knowledge is a mental state, we must give up internalism. One might think that this is no cost, since much recent work in the philosophy of mind has, in any case, converged on the view that internalism is false. This thought, though, is too quick. As I will argue here, the claim that knowledge is a mental state would take us to a view much further from internalism than anything philosophers of mind have converged upon.
The _Principle of Indifference_ was once regarded as a linchpin of probabilistic reasoning, but has now fallen into disrepute as a result of the so-called _problem of multiple partitions_. In ‘Evidential symmetry and mushy credence’ Roger White suggests that we have been too quick to jettison this principle and argues that the problem of multiple partitions rests on a mistake. In this paper I will criticise White’s attempt to revive POI. In so doing, I will argue that what underlies the problem of multiple partitions is a fundamental tension between POI and the very idea of _evidential incomparability_.
The epistemology of religion is the branch of epistemology concerned with the rationality, the justificatory status and the knowledge status of religious beliefs – most often the belief in the existence of an omnipotent, omniscient and loving God as conceived by the major monotheistic religions. While other sorts of religious beliefs – such as belief in an afterlife or in disembodied spirits or in the occurrence of miracles – have also been the focus of considerable attention from epistemologists, I shall concentrate here on belief in God. There were a number of significant works in the epistemology of religion written during the early and mid Twentieth Century. The late Twentieth Century, however, saw a surge of interest in this area, fuelled by the work of philosophers such as William Alston, Alvin Plantinga and Linda Zagzebski amongst others. Alston, Plantinga and Zagzebski succeeded in importing, into the epistemology of religion, various new ideas from mainstream epistemology – in particular, externalist approaches to justification, such as reliabilism, and virtue theoretic approaches to knowledge (see, for instance, Alston, 1986, 1991, Plantinga, 1988, 2000, Zagzebski, 1993a, 1993b). This laid fertile ground for new research – questions about the justificatory and knowledge status of belief in God begin to look very different when viewed through the lens of theories such as these. I will begin by surveying some of this groundbreaking work in the present article, before moving on to work from the last five years – a period in which the epistemology of religion has again received impetus from a number of ideas from mainstream epistemology; ideas such as pragmatic encroachment, phenomenal conservatism and externalist theories of evidence.
In this paper I defend the claim that justification is closed under conjunction, and confront its most alarming consequence – that one can have justification for believing propositions that are unlikely to be true, given one’s evidence.
Entitlement is defined as a sort of epistemic justification that one can possess by default – a sort of epistemic justification that does not need to be earned or acquired. Epistemologists who accept the existence of entitlement generally have a certain anti-sceptical role in mind for it – entitlement is intended to help us resist what would otherwise be compelling radical sceptical arguments. But this role leaves various details unspecified and, thus, leaves scope for a number of different potential conceptions of entitlement. At one extreme there are conceptions that portray entitlement as a weak, attenuated epistemic status and, at the other, we have conceptions that portray entitlement as something potent and strong. Certain intermediate conceptions are also possible. In this paper, I shall argue that the weak and intermediate conceptions of entitlement do not survive careful scrutiny, and the stronger conceptions – while they do, in a way, strain credulity – are the only conceptions that are ultimately viable.
In this paper I will compare two competing accounts of assertion: the knowledge account and the justified belief account. When it comes to the evidence that is typically used to assess accounts of assertion – including the evidence from lottery propositions, the evidence from Moore’s paradoxical propositions and the evidence from conversational patterns – I will argue that the justified belief account has at least as much explanatory power as its rival. I will argue, finally, that a close look at the ways in which assertions can be challenged and retracted reveals a certain advantage for the justified belief account. The paper will touch upon a number of further topics along the way, including the logical interaction between knowledge and justified belief, the nature of defeat, and the hypothesis that knowledge and justified belief are normatively coincident goals.
Entitlement is conceived as a kind of positive epistemic status, attaching to certain propositions, that involves no cognitive or intellectual accomplishment on the part of the beneficiary — a status that is in place by default. In this paper I will argue that the notion of entitlement — or something very like it — falls out of an idea that may at first blush seem rather disparate: that the evidential support relation can be understood as a kind of variably strict conditional (in the sense of Lewis 1973). Lewis provided a general recipe for deriving what he termed inner modalities from any variably strict conditional governed by a logic meeting certain constraints. On my proposal, entitlement need be nothing more exotic than the inner necessity associated with evidential support. Understanding entitlement in this way helps to answer some common concerns — in particular, the concern that entitlement could only be a pragmatic, and not genuinely epistemic, status.
It is generally accepted that appropriate documentation of activities and recommendations of ethics consultants in patients’ medical records is critical. Despite this acceptance, the bioethics literature is largely devoid of guidance on key elements of an ethics chart note, the degree of specificity that it should contain, and its stylistic tenor. We aim to provide guidance for a variety of persons engaged in clinical ethics consultation: new and seasoned ethics committee members who are new to ethics consultation, students and trainees in clinical ethics, and those who have significant experience with ethics consultation so that they can reflect on their practice. Toward the goal of promoting quality charting practices in ethics consultations, we propose recommendations on a broad array of questions concerning clinical ethics consultation chart notes, including whether and when to write a chart note, and practical considerations for the tenor, purpose, and content of a chart note. Our broader aim is to promote discussion about good charting practices in clinical ethics, with the hope of contributing to clear standards of excellence in clinical ethics consultation.
In ‘Single premise deduction and risk’ (2008) Maria Lasonen-Aarnio argues that there is a kind of epistemically threatening risk that can accumulate over the course of drawing single premise deductive inferences. As a result, we have a new reason for denying that knowledge is closed under single premise deduction—one that mirrors a familiar reason for denying that knowledge is closed under multiple premise deduction. This sentiment has more recently been echoed by others (see Schechter 2011). In this paper, I will argue that, although there is a kind of risk that can accumulate over the course of drawing single premise deductive inferences, it is importantly different to the kind of risk that multiple premise deductive inferences can introduce. Having distinguished these two kinds of risk, I shall offer some reasons for thinking that the kind associated with single premise deductions is, in fact, epistemically benign—it poses no threat, in and of itself, to the knowledge status of a belief. If this is right, then Lasonen-Aarnio’s argument against single premise closure is unsuccessful.
All standard epistemic logics legitimate something akin to the principle of closure, according to which knowledge is closed under competent deductive inference. And yet the principle of closure, particularly in its multiple premise guise, has a somewhat ambivalent status within epistemology. One might think that serious concerns about closure point us away from epistemic logic altogether—away from the very idea that the knowledge relation could be fruitfully treated as a kind of modal operator. This, however, need not be so. The abandonment of closure may yet leave in place plenty of formal structure amenable to systematic logical treatment. In this paper we describe a family of weak epistemic logics in which closure fails, and describe two alternative semantic frameworks in which these logics can be modelled. One of these—which we term plurality semantics—is relatively unfamiliar. We explore under what conditions plurality frames validate certain much-discussed principles of epistemic logic. It turns out that plurality frames can be interpreted in a very natural way in light of one motivation for rejecting closure, adding to the significance of our technical work. The second framework that we employ—neighbourhood semantics—is much better known. But we show that it too can be interpreted in a way that comports with a certain motivation for rejecting closure.
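How neighbourhood semantics makes room for closure failure can be shown with a toy model (my own illustration, not one of the models from the paper): the box of a proposition holds at a world just in case that proposition's truth-set belongs to the world's neighbourhood, and nothing in the semantics forces neighbourhoods to be closed under intersection.

```python
# Toy neighbourhood model in which closure under conjunction fails.
# Box(P) holds at world w iff the truth-set of P belongs to N(w); the
# semantics does not require N(w) to be closed under intersection.

P = frozenset({0, 1})          # truth-set of P
Q = frozenset({0, 2})          # truth-set of Q
P_and_Q = P & Q                # truth-set of the conjunction: {0}

N = {0: {P, Q}}                # neighbourhood of world 0 contains P and Q,
                               # but not their intersection

def box(prop, w):
    """Box(prop) is true at w iff prop's truth-set is in w's neighbourhood."""
    return prop in N[w]

assert box(P, 0) and box(Q, 0)  # both conjuncts hold under Box at world 0
assert not box(P_and_Q, 0)      # ...but their conjunction does not
```

Contrast relational (Kripke) semantics, where the box quantifies over accessible worlds and closure under conjunction falls out automatically; here it has to be imposed as an extra condition on the neighbourhood function.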
In this paper, I offer reasons for thinking that two prominent sceptical arguments in the literature – the underdetermination-based sceptical argument and the closure-based sceptical argument – are less philosophically interesting than is commonly supposed. The underdetermination-based argument begs the question against a non-sceptic and can be dismissed with little fanfare. The closure-based argument, though perhaps not question-begging per se, does rest upon contentious assumptions that a non-sceptic is under no pressure to accept.
According to the JUSTIFIED FAIR COINS principle, if I know that a coin is fair, and I lack justification for believing that it won’t be flipped, then I lack justification for believing that it won’t land tails. What this principle says, in effect, is that the only way to have justification for believing that a fair coin won’t land tails, is by having justification for believing that it won’t be flipped at all. Although this seems a plausible and innocuous principle, in a recent paper Dorr, Goodman and Hawthorne use it in devising an intriguing puzzle which places all justified beliefs about the future in jeopardy. They point out, further, that one very widespread theory of justification predicts that JUSTIFIED FAIR COINS is false, giving us additional reason to reject it. In this paper, I will attempt to turn this dialectic around. I will argue that JUSTIFIED FAIR COINS does not inevitably lead to scepticism about the future, and the fact that it is incompatible with a widespread theory of justification...
While valuable work has been done addressing clinical ethics within established healthcare systems, we anticipate that the projected growth in acquisitions of community hospitals and facilities by large tertiary hospitals will impact the field of clinical ethics and the day-to-day responsibilities of clinical ethicists in ways that have yet to be explored. Toward the goal of providing clinical ethicists guidance on a range of issues that they may encounter in the systematization process, we discuss key considerations and potential challenges in implementing system-wide ethics consultation services. Specifically, we identify four models for organizing, developing, and enhancing ethics consultation activities within a system created through acquisitions: train-the-trainer, local capacity-building, circuit-riding, and consolidated accountability. We note each model’s benefits and challenges. To our knowledge, this is the first paper to consider the broader landscape of issues affected by consolidation. We anticipate that clinical ethicists, volunteer consultants, and hospital administrators will benefit from our recommendations.
There are a number of apparent parallels between belief in God and belief in the existence of an external world beyond our experiences. Both beliefs would seem to condition one's overall view of reality and one's place within it – and yet it is difficult to see how either can be defended. Neither belief is likely to receive a purely a priori defence and any empirical evidence that one cites either in favour of the existence of God or the existence of the external world would seem to blatantly beg the question against a doubter. I will explore just how far this parallel can be pushed by examining some strategies for resisting external world scepticism.
NOTE: This paper is a reworking of some aspects of an earlier paper – ‘What else justification could be’ and also an early draft of chapter 2 of Between Probability and Certainty. I'm leaving it online as it has a couple of citations and there is some material here which didn't make it into the book (and which I may yet try to develop elsewhere). My concern in this paper is with a certain, pervasive picture of epistemic justification. On this picture, acquiring justification for believing something is essentially a matter of minimising one’s risk of error – so one is justified in believing something just in case it is sufficiently likely, given one’s evidence, to be true. This view is motivated by an admittedly natural thought: If we want to be fallibilists about justification then we shouldn’t demand that something be certain – that we completely eliminate error risk – before we can be justified in believing it. But if justification does not require the complete elimination of error risk, then what could it possibly require if not its minimisation? If justification does not require epistemic certainty then what could it possibly require if not epistemic likelihood? When all is said and done, I’m not sure that I can offer satisfactory answers to these questions – but I will attempt to trace out some possible answers here. The alternative picture that I’ll outline makes use of a notion of normalcy that I take to be irreducible to notions of statistical frequency or predominance.
Mistakes and errors happen in most spheres of human life and activity, including in medicine. A mistake can be as simple and benign as the collection of an extra and unnecessary urine sample. Or a mistake can cause serious but reversible harm, such as an overdose of insulin in a patient with diabetes, resulting in hypoglycemia, seizures, and coma. Or a mistake can result in serious and permanent damage for the patient, such as the failure to consider epiglottitis in an initial differential diagnosis, resulting in a chronic vegetative state for a seven-year-old boy. Or a mistake can be an error in judgment that leads to a patient's death.