We are often uncertain how to behave morally in complex situations. In this controversial study, Ted Lockhart contends that moral philosophy has failed to address how we make such moral decisions. Adapting decision theory to the task of decision-making under moral uncertainty, he proposes that we should not always act how we feel we ought to act, and that sometimes we should act against what we feel to be morally right. Lockhart also discusses abortion extensively and proposes new ways to deal with the ethical and moral issues which surround it.
How is it that thoroughly physical material beings such as ourselves can think, dream, feel, create and understand ideas, theories and concepts? How does mere matter give rise to all these non-material mental states, including consciousness itself? An answer to this central question of our existence is emerging at the busy intersection of neuroscience, psychology, artificial intelligence, and robotics. In this groundbreaking work, philosopher and cognitive scientist Andy Clark explores exciting new theories from these fields that reveal minds like ours to be prediction machines - devices that have evolved to anticipate the incoming streams of sensory stimulation before they arrive. These predictions then initiate actions that structure our worlds and alter the very things we need to engage and predict. Clark takes us on a journey of discovery through the circular causal flows and the self-structuring of the environment that define "the predictive brain." What emerges is a bold, cutting-edge vision that reveals the brain as our driving force in the daily surf through the waves of sensory stimulation.
How should we make decisions when we're uncertain about what we ought, morally, to do? Decision-making in the face of fundamental moral uncertainty is underexplored terrain: MacAskill, Bykvist, and Ord argue that there are distinctive norms by which it is governed, and which depend on the nature of one's moral beliefs.
The main aim of this book is to introduce the topic of limited awareness, and changes in awareness, to those interested in the philosophy of decision-making and uncertain reasoning. (This is for the series Elements of Decision Theory published by Cambridge University Press and edited by Martin Peterson).
Scientific knowledge is the most solid and robust kind of knowledge that humans have because of its inherent self-correcting character. Nevertheless, anti-evolutionists, climate denialists, and anti-vaxxers, among others, question some of the best-established scientific findings, making claims unsupported by empirical evidence. A common aspect of these claims is reference to the uncertainties of science concerning evolution, climate change, vaccination, and so on. This is misleading: whereas the broad picture is clear, there will always exist uncertainties about the details of the respective phenomena. This book shows that uncertainty is an inherent feature of science that does not devalue it. On the contrary, uncertainty advances science because it motivates further research. This is the first book on this topic that draws on philosophy of science to explain what uncertainty in science is and how it makes science advance. It contrasts evolution, climate change, and vaccination, where the uncertainties are exaggerated, with genetic testing and forensic science, where the uncertainties are usually overlooked. The goal is to discuss the scientific, psychological, and philosophical aspects of uncertainty in order to explain what it really is, what kinds of problems it actually poses, and why in the end it makes science advance. Contrary to public representations of scientific findings and conclusions that produce an intuitive but distorted view of science as certain, people need to understand and learn to live with uncertainty in science. This book is intended for anyone who wants to get a clear view of the nature of science.
In some severely uncertain situations, exemplified by climate change and novel pandemics, policymakers lack a reasoned basis for assigning probabilities to the possible outcomes of the policies they must choose between. I outline and defend an uncertainty averse, egalitarian approach to policy evaluation in these contexts. The upshot is a theory of distributive justice which offers especially strong reasons to guard against individual and collective misfortune.
Luc Bovens has recently advanced a novel argument for affirmative action, grounded in the plausible idea that it is hard for an employer to evaluate the qualifications of candidates from underrepresented groups. Bovens claims that this provides a profit-maximizing employer with reason to shortlist prima facie less-qualified candidates from underrepresented groups. In this paper, I identify three flaws in Bovens’s argument. First, it suffers from model error: a rational employer does not incur costs to scrutinize candidates when it knows their qualifications with perfect certainty, nor does it refuse to hire better-qualified candidates just because they did not require extra scrutiny. Second, Bovens’s core premise--that there is greater variance in the evaluation of underrepresented candidates than there is in the evaluation of other candidates--hurts underrepresented candidates rather than helps them. Third, candidates who are not shortlisted for the reasons Bovens gives have a plausible complaint about unfairness in the hiring process.
In this essay, we explore an issue of moral uncertainty: what we are permitted to do when we are unsure about which moral principles are correct. We develop a novel approach to this issue that incorporates important insights from previous work on moral uncertainty, while avoiding some of the difficulties that beset existing alternative approaches. Our approach is based on evaluating and choosing between option sets rather than particular conduct options. We show how our approach is particularly well-suited to address this issue of moral uncertainty with respect to agents that have credence in moral theories that are not fully consequentialist.
While the foundations of climate science and ethics are well established, fine-grained climate predictions, as well as policy decisions, are beset with uncertainties. This chapter maps climate uncertainties and classifies them as to their ground, extent and location. A typology of uncertainty is presented, centered along the axes of scientific and moral uncertainty. This typology is illustrated with paradigmatic examples of uncertainty in climate science, climate ethics and climate economics. Subsequently, the chapter discusses the IPCC’s preferred way of representing uncertainties and evaluates its strengths and weaknesses from a risk management perspective. Three general strategies for decision-makers to cope with climate uncertainty are outlined, the usefulness of which largely depends on whether or not decision-makers find themselves in a context of deep uncertainty. The chapter concludes by offering two recommendations to ease the work of policymakers faced with the various uncertainties ingrained in climate discourse.
Defenders of deontological constraints in normative ethics face a challenge: how should an agent decide what to do when she is uncertain whether some course of action would violate a constraint? The most common response to this challenge has been to defend a threshold principle on which it is subjectively permissible to act iff the agent's credence that her action would be constraint-violating is below some threshold t. But the threshold approach seems arbitrary and unmotivated: what would possibly determine where the threshold should be set, and why should there be any precise threshold at all? Threshold views also seem to violate ought agglomeration, since a pair of actions each of which is below the threshold for acceptable moral risk can, in combination, exceed that threshold. In this paper, I argue that stochastic dominance reasoning can vindicate and lend rigor to the threshold approach: given characteristically deontological assumptions about the moral value of acts, it turns out that morally safe options will stochastically dominate morally risky alternatives when and only when the likelihood that the risky option violates a moral constraint is greater than some precisely definable threshold (in the simplest case, .5). I also show how, in combination with the observation that deontological moral evaluation is relativized to particular choice situations, this approach can overcome the agglomeration problem. This allows the deontologist to give a precise and well-motivated response to the problem of uncertainty.
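The dominance comparison invoked in this abstract can be sketched abstractly. Below is a toy first-order stochastic dominance check over ordinally ranked outcomes (not the paper's full derivation; the function name, outcomes, and probabilities are illustrative assumptions): one act dominates another iff it never puts more cumulative probability on worse outcomes, and puts strictly less somewhere. In the simplest two-outcome case, where one act violates a constraint with probability p and the alternative violates one with probability 1 - p, dominance obtains exactly when p falls below .5.

```python
from itertools import accumulate

def first_order_dominates(p_a, p_b, outcomes):
    """True iff lottery A first-order stochastically dominates lottery B.

    `outcomes` lists outcomes from worst to best; `p_a` and `p_b` map
    each outcome to its probability.  A dominates B iff A's cumulative
    probability up to each outcome never exceeds B's, and is strictly
    lower for at least one outcome.
    """
    cdf_a = list(accumulate(p_a.get(o, 0.0) for o in outcomes))
    cdf_b = list(accumulate(p_b.get(o, 0.0) for o in outcomes))
    return (all(a <= b + 1e-12 for a, b in zip(cdf_a, cdf_b))
            and any(a < b - 1e-12 for a, b in zip(cdf_a, cdf_b)))

# Two acts risking a constraint violation with complementary odds:
ranked = ["violation", "no_violation"]           # worst outcome first
act_a = {"violation": 0.4, "no_violation": 0.6}  # violates with p = .4
act_b = {"violation": 0.6, "no_violation": 0.4}  # violates with p = .6
```

With complementary violation probabilities, the safer act dominates exactly when its violation probability is below .5, mirroring the simplest threshold mentioned in the abstract.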
This book is an extensive survey and critical examination of the literature on the use of expert opinion in scientific inquiry and policy making. The elicitation, representation, and use of expert opinion is increasingly important for two reasons: advancing technology leads to more and more complex decision problems, and technologists are turning in greater numbers to "expert systems" and other similar artifacts of artificial intelligence. Cooke here considers how expert opinion is being used today, how an expert's uncertainty is or should be represented, how people do or should reason with uncertainty, how the quality and usefulness of expert opinion can be assessed, and how the views of several experts might be combined. He argues for the importance of developing practical models with a transparent mathematical foundation for the use of expert opinion in science, and presents three tested models, termed "classical," "Bayesian," and "psychological scaling." Detailed case studies illustrate how they can be applied to a diversity of real problems in engineering and planning.
The new paradigm in the psychology of reasoning adopts a Bayesian, or probabilistic, model for studying human reasoning. Contrary to the traditional binary approach based on truth functional logic, with its binary values of truth and falsity, a third value that represents uncertainty can be introduced in the new paradigm. A variety of three-valued truth table systems are available in the formal literature, including one proposed by de Finetti. We examine the descriptive adequacy of these systems for natural language indicative conditionals and bets on conditionals. Within our framework the so-called “defective” truth table, in which participants choose a third value when the antecedent of the indicative conditional is false, becomes a coherent response. We show that only de Finetti’s system has a good descriptive fit when uncertainty is the third value.
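The "defective" pattern the abstract describes can be captured in a few lines. A minimal sketch (the names `VOID` and `defective_table` are our own, and this covers only the two-valued-input fragment of de Finetti's full three-valued table): the conditional takes its consequent's truth value when the antecedent is true, and the third value when the antecedent is false.

```python
VOID = "uncertain"  # the third value, de Finetti's "void"

def defective_table(antecedent, consequent):
    """'If A then C' on the so-called defective truth table:
    returns C's truth value when A is true, and the third
    value when A is false."""
    return consequent if antecedent else VOID
```

On this table a false antecedent yields the third value rather than "true", which is why participants' seemingly "defective" responses come out as coherent under the probabilistic reading.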
Jennifer Rose Carr’s (2020) article “Normative Uncertainty Without Theories” proposes a method to maximize expected value under normative uncertainty without Intertheoretic Value Comparison (hereafter IVC). Carr argues that this method avoids IVC because it avoids theories: the agent’s credence is distributed among normative hypotheses of a particular type, which don’t constitute theories. However, I argue that Carr’s method doesn’t avoid or help to solve what I consider the justificatory problem of IVC, which isn’t specific to comparing theories as such. This threatens the implementability of Carr’s method. Fortunately, I also show how Carr’s method can nevertheless be implemented. I identify a type of epistemic state in which the justificatory problem of IVC is not a necessary obstacle to maximizing expected value. In such states, the uncertainty stems from indecisive normative intuitions, and the agent justifiably constructs each normative hypothesis on the basis of a consistent subset of her intuitions by reference to the same unit of value. This part of my argument complements not only Carr’s (2020) argument, but also some moderate defenses of IVC. The combination of Carr’s paper and mine helps to illuminate the conditions for maximizing expected value under normative uncertainty without unjustified value comparison.
Economists have always recognised that human endeavours are constrained by our limited and uncertain knowledge, but only recently has an accepted theory of uncertainty and information evolved. This theory has turned out to have surprisingly practical applications: for example in analysing stock market returns, in evaluating accident prevention measures, and in assessing patent and copyright laws. This book presents these intellectual advances in readable form for the first time. It unifies many important but partial results into a satisfying single picture, making it clear how the economics of uncertainty and information generalises and extends standard economic analysis. Part One of the volume covers the economics of uncertainty: how each person adapts to a given fixed state of knowledge by making an optimal choice among the immediate 'terminal' actions available. These choices in turn determine the overall market equilibrium reflecting the social distribution of risk bearing. In Part Two, covering the economics of information, the state of knowledge is no longer held fixed. Instead, individuals can to a greater or lesser extent overcome their ignorance by 'informational' actions. The text also addresses at appropriate points many specific topics such as insurance, the Capital Asset Pricing model, auctions, deterrence of entry, and research and invention.
In this paper, I enter the debate between those who hold that our normative uncertainty matters for what we ought to do, and those who hold that only our descriptive uncertainty matters. I argue that existing views in both camps have unacceptable implications in cases where our descriptive beliefs depend on our normative beliefs. I go on to propose a fix which is available only to those who hold that normative uncertainty matters, ultimately leaving the challenge as a threat to recent skepticism about such views.
We are often unsure about what we ought to do. This can be because we lack empirical knowledge, such as the extent to which future generations will be harmed by climate change. It can also be because we lack normative knowledge, such as the relative moral importance of the interests of present people and the interests of future people. However, though the question of how one ought to act under empirical uncertainty has been addressed extensively by both economists and philosophers (with expected utility theory providing the standard formal framework), the question of how one ought to act under normative uncertainty is comparatively neglected. My thesis attempts to address this gap.

In my thesis I develop a view that I call metanormativism: that there are second-order norms that govern action that are relative to a decision-maker's uncertainty about first-order norms.

In the first part of the thesis, I defend one specific metanormative view: that under normative uncertainty decision-makers should maximise expected choice-worthiness, treating normative uncertainty analogously with empirical uncertainty. Drawing on the analogy between decision-making under normative uncertainty and social choice theory, I defend this view at length in response to the problem of merely ordinal theories and the problem of intertheoretic value comparisons.

In the second part of the thesis, I explore the implications of metanormativism for other philosophical issues. I argue that it has important consequences regarding the theory of rational action in the face of incomparable values, the causal/evidential debate in decision theory, and our assessment of the value of moral philosophy.
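The decision rule at the centre of such views, maximising expected choice-worthiness, can be sketched in miniature. This is a toy implementation in which the theories, options, and choice-worthiness numbers are invented, and intertheoretic comparability of the numbers is simply assumed (which is precisely the assumption such accounts must defend):

```python
def max_expected_choiceworthiness(credences, choiceworthiness):
    """Pick the option with highest credence-weighted choice-worthiness.

    credences:        {theory: credence}, credences summing to 1
    choiceworthiness: {theory: {option: numeric choice-worthiness}}
    Assumes choice-worthiness numbers are comparable across theories.
    """
    options = next(iter(choiceworthiness.values()))
    return max(options, key=lambda opt: sum(
        credences[t] * choiceworthiness[t][opt] for t in credences))

# Invented example: a mostly-consequentialist agent with some credence
# in a theory on which lying is seriously wrong.
credences = {"consequentialism": 0.7, "deontology": 0.3}
cw = {"consequentialism": {"lie": 10, "tell_truth": 0},
      "deontology":       {"lie": -100, "tell_truth": 0}}
```

Even with only 0.3 credence in deontology, the large downside of lying on that theory (expected value 0.7·10 + 0.3·(-100) = -23) swamps the calculation, so the rule recommends truth-telling - the hedging behaviour the view predicts.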
What should we do when we are not certain about what we morally should do? There is a long history of theorizing about decision-making under empirical uncertainty, but surprisingly little has been written about the moral uncertainty expressed by this question. Only very recently have philosophers started to systematically address the nature of such uncertainty and its impacts on decision-making. This paper addresses the main problems raised by moral uncertainty and critically examines some proposed solutions.
Sometimes it’s not certain which of several mutually exclusive moral views is correct. Like almost everyone, I think that there’s some sense in which what one should do depends on which of these theories is correct, plus the way the world is non-morally. But I also think there’s an important sense in which what one should do depends upon the probabilities of each of these views being correct. Call this second claim “moral uncertaintism”. In this paper, I want to address an argument against moral uncertaintism offered in the pages of this journal by Brian Weatherson, and seconded elsewhere by Brian Hedden, the crucial premises of which are: (1) that acting on moral uncertaintist norms necessarily involves motivation by reasons or rightness as such, and (2) that such motivation is bad. I will argue that (1) and (2) are false, and that at any rate, the quality of an agent’s motivation is not pertinent to the truth or falsity of moral uncertaintism in the way that Weatherson’s and Hedden’s arguments require.
Some philosophers have recently argued that decision-makers ought to take normative uncertainty into account in their decision-making. These philosophers argue that, just as it is plausible that we should maximize expected value under empirical uncertainty, it is plausible that we should maximize expected choice-worthiness under normative uncertainty. However, such an approach faces two serious problems: how to deal with merely ordinal theories, which do not give sense to the idea of magnitudes of choice-worthiness; and how, even when theories do give sense to magnitudes of choice-worthiness, to compare magnitudes of choice-worthiness across different theories. Some critics have suggested that these problems are fatal to the project of developing a normative account of decision-making under normative uncertainty. The primary purpose of this article is to show that this is not the case. To this end, I develop an analogy between decision-making under normative uncertainty and the problem of social choice, and then argue that the Borda Rule provides the best way of making decisions in the face of merely ordinal theories and intertheoretic incomparability.
This article described three heuristics that are employed in making judgements under uncertainty: representativeness, which is usually employed when people are asked to judge the probability that an object or event A belongs to class or process B; availability of instances or scenarios, which is often employed when people are asked to assess the frequency of a class or the plausibility of a particular development; and adjustment from an anchor, which is usually employed in numerical prediction when a relevant value is available. These heuristics are highly economical and usually effective, but they lead to systematic and predictable errors. A better understanding of these heuristics and of the biases to which they lead could improve judgements and decisions in situations of uncertainty.
How should deontological theories that prohibit actions of type K — such as intentionally killing an innocent person — deal with cases of uncertainty as to whether a particular action is of type K? Frank Jackson and Michael Smith, who raise this problem in their paper "Absolutist Moral Theories and Uncertainty" (2006), focus on a case where a skier is about to cause the death of ten innocent people — we don’t know for sure whether on purpose or not — by causing an avalanche; and we can only save the people by shooting the skier. One possible deontological attitude towards such uncertainty is what Jackson and Smith call the threshold view, according to which whether or not the deontological constraint applies depends on whether our degree of (justified) certainty meets a given threshold. Jackson and Smith argue against the threshold view that it leads to implausible paradoxical moral dilemmas in a special kind of case. In this response, we show that the threshold view can avoid these implausible moral dilemmas, as long as the relevant deontological constraint is grounded in individualistic patient-based considerations, such as what an individual person is entitled to object to.
In this article, I present a new interpretation of the pro-life view on the status of early human embryos. In my understanding, this position is based not on presumptions about the ontological status of embryos and their developmental capabilities but on the specific criteria of rational decisions under uncertainty and on a cautious response to the ambiguous status of embryos. This view, which uses the decision theory model of moral reasoning, promises to reconcile the uncertainty about the ontological status of embryos with the certainty about normative obligations. I will demonstrate that my interpretation of the pro-life view, although seeming to be stronger than the standard one, has limited scope and cannot be used to limit destructive research on human embryos.
There is now considerable evidence that human sentence processing is expectation based: As people read a sentence, they use their statistical experience with their language to generate predictions about upcoming syntactic structure. This study examines how sentence processing is affected by readers' uncertainty about those expectations. In a self-paced reading study, we use lexical subcategorization distributions to factorially manipulate both the strength of expectations and the uncertainty about them. We compare two types of uncertainty: uncertainty about the verb's complement, reflecting the next prediction step; and uncertainty about the full sentence, reflecting an unbounded number of prediction steps. We find that uncertainty about the full structure, but not about the next step, was a significant predictor of processing difficulty: Greater reduction in uncertainty was correlated with increased reading times. We additionally replicated previously observed effects of expectation violation, orthogonal to the effect of uncertainty. This suggests that both surprisal and uncertainty affect human reading times. We discuss the consequences for theories of sentence comprehension.
In their insightful article, Brent Kious and Margaret Battin (2019) correctly identify an inconsistency between an involuntary psychiatric commitment for suicide prevention and physician aid in dying (PAD). They declare that it may be possible to resolve the problem by articulating “objective standards for evaluating the severity of others’ suffering,” but ultimately they admit that this task is beyond the scope of their article since the solution depends on “a deep and difficult” question about which of two possible scenarios is worse: letting someone die (who could have been helped) or not letting someone die (whose suffering could only be alleviated by death). In our commentary, we argue that creating such standards is more difficult than the authors assume because of the many types of deep uncertainties we have to deal with: (1) diagnostic, (2) motivational, and (3) existential.
This paper explores the role of moral uncertainty in explaining the morally disruptive character of new technologies. We argue that existing accounts of technomoral change do not fully explain its disruptiveness. This explanatory gap can be bridged by examining the epistemic dimensions of technomoral change, focusing on moral uncertainty and inquiry. To develop this account, we examine three historical cases: the introduction of the early pregnancy test, the contraception pill, and brain death. The resulting account highlights what we call “differential disruption” and provides a resource for fields such as technology assessment, ethics of technology, and responsible innovation.
For decades, cigarette companies helped to promote the impression that there was no scientific consensus concerning the safety of their product. The appearance of controversy, however, was misleading, designed to confuse the public and to protect industry interests. Created scientific controversies emerge when expert communities are in broad agreement but the public perception is one of profound scientific uncertainty and doubt. In the first book-length analysis of the concept of a created scientific controversy, David Harker explores issues including climate change, Creation science, the anti-vaccine movement and genetically modified crops. Drawing on work in cognitive psychology, social epistemology, critical thinking and philosophy of science, he shows readers how to better understand, evaluate, and respond to the appearance of scientific controversy. His book will be a valuable resource for students of philosophy of science, environmental and health sciences, and social and natural sciences.
Given the deep disagreement surrounding population axiology, one should remain uncertain about which theory is best. However, this uncertainty need not leave one neutral about which acts are better or worse. We show that as the number of lives at stake grows, the Expected Moral Value approach to axiological uncertainty systematically pushes one towards choosing the option preferred by the Total and Critical Level views, even if one’s credence in those theories is low.
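The scaling effect this abstract describes is easy to exhibit with made-up numbers. In the toy model below (all values invented), the Total View scores adding n lives at welfare w as n·w, while a rival view assigns a fixed disvalue regardless of n; even with only 0.1 credence in the Total View, the expected moral value turns positive once enough lives are at stake:

```python
def expected_moral_value(n_lives, credence_total=0.1,
                         welfare=1.0, rival_disvalue=5.0):
    """Credence-weighted value of adding n_lives at positive welfare.

    The Total View scores the act n_lives * welfare; the rival view
    scores it -rival_disvalue regardless of n_lives.  All numbers
    are illustrative, not drawn from the paper.
    """
    return (credence_total * n_lives * welfare
            + (1 - credence_total) * -rival_disvalue)
```

At 10 lives the rival view wins (expected value 0.1·10 - 0.9·5 = -3.5); at 1,000 lives the Total View's linear term dominates (0.1·1000 - 4.5 = 95.5) despite the low credence, which is the systematic push the authors identify.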
Uncertainty in the field of child psychiatry may at times lead to groundless assumptions about the aetiology and pathology of psychiatric disorders of childhood. Treatment based on non-validated assumptions may be ineffective and may cause more harm than good. The case presented is that of infantile autism, which clinicians at first attributed to a specific negative effect of parents on their children. Research-based evidence subsequently refuted the assumption implicating the parents in the aetiology of this disorder. An explanatory assumption can become evidence if it is tested and found valid. To avoid serious errors in the understanding and treatment of child psychiatric disorders, the clinician should always critically examine assumptions and opinions offered in lieu of evidence.
In ‘Normative Uncertainty as a Voting Problem’, William MacAskill argues that positive credence in ordinal-structured or intertheoretically incomparable normative theories does not prevent an agent from rationally accounting for her normative uncertainties in practical deliberation. Rather, such an agent can aggregate the theories in which she has positive credence by methods borrowed from voting theory—specifically, MacAskill suggests, by a kind of weighted Borda count. The appeal to voting methods opens up a promising new avenue for theories of rational choice under normative uncertainty. The Borda rule, however, is open to at least two serious objections. First, it seems implicitly to ‘cardinalize’ ordinal theories, and so does not fully face up to the problem of merely ordinal theories. Second, the Borda rule faces a problem of option individuation. MacAskill attempts to solve this problem by invoking a measure on the set of practical options. But it is unclear that there is any natural way of defining such a measure that will not make the output of the Borda rule implausibly sensitive to irrelevant empirical features of decision-situations. After developing these objections, I suggest an alternative: the McKelvey uncovered set, a Condorcet method that selects all and only the maximal options under a strong pairwise defeat relation. This decision rule has several advantages over Borda and mostly avoids the force of MacAskill’s objection to Condorcet methods in general.
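For concreteness, the kind of credence-weighted Borda count under discussion can be sketched as follows. This is a simplified toy (names and numbers invented): it ignores ties and the option-individuation problem the paper presses.

```python
def weighted_borda(credences, rankings):
    """credences: {theory: credence}; rankings: {theory: [options, best first]}.

    Each theory awards an option n-1 points down to 0 along its ranking;
    points are weighted by the theory's credence and summed across theories.
    Returns the option with the highest total score.
    """
    scores = {}
    for theory, ranking in rankings.items():
        top = len(ranking) - 1
        for position, option in enumerate(ranking):
            scores[option] = (scores.get(option, 0.0)
                              + credences[theory] * (top - position))
    return max(scores, key=scores.get)

credences = {"theory_1": 0.6, "theory_2": 0.4}   # invented credences
rankings = {"theory_1": ["a", "b", "c"],
            "theory_2": ["c", "b", "a"]}
```

Note how the rule converts purely ordinal ranks into numerical scores before weighting; this step is exactly what the paper's first objection, implicit 'cardinalization', targets.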
Moral dilemmas can arise from uncertainty, including uncertainty about the real values involved. One interesting example is that of experimentation on human embryos and foetuses. If these have a moral status similar to that of human persons, then there will be severe constraints on what may be done to them. If embryos have a moral status similar to that of other small clusters of cells, then constraints will be motivated largely by consideration for the persons into whom the embryos may develop. If the truth lies somewhere between these two extremes, the embryo having neither the full moral weight of persons nor a completely negligible moral weight, then different kinds of constraints will be appropriate. On the face of it, in order to know what kinds of experiments, if any, we are morally justified in performing on embryos, we have to know what the moral weight of the embryo is. But then an impasse threatens, for it seems implausible that we can settle with certainty the exact moral status of the human embryo. It is the purpose of this paper to show that moral uncertainty need not make rational moral justification impossible. I develop a framework which distinguishes between what is morally right/wrong and what is morally justified/unjustified, and applies standard decision-theoretic tools to the case of moral uncertainties. (This was the first published account of what has subsequently become known as Expected Moral Value Theory. An earlier version of the paper, "A decision theoretic argument against human embryo experimentation", was published in M. Fricke (ed.), Essays in honor of Bob Durrant. University of Otago Press, 1986, 111-27.)
This paper concerns how extant theorists of predictive coding conceptualize and explain possible instances of cognitive penetration. §I offers a brief clarification of the predictive coding framework and relevant mechanisms, and a brief characterization of cognitive penetration and some challenges that come with defining it. §II develops more precise ways that the predictive coding framework can explain, and of course thereby allow for, genuine top-down causal effects on perceptual experience, of the kind discussed in the context of cognitive penetration. §III develops these insights further with an eye towards tracking one extant criterion for cognitive penetration, namely, that the relevant cognitive effects on perception must be sufficiently direct. Throughout these discussions, we extend the analyses of the predictive coding models as we know them. So one open question that surfaces is how much of the extended analysis is genuinely part of the predictive coding models, and how much must be added to them in order to generate these additional explanatory benefits. In §IV, we analyze and criticize a claim made by some theorists of predictive coding, namely, that (interesting) instances of cognitive penetration tend to occur in perceptual circumstances involving substantial noise or uncertainty. It is here that our analysis is most critical. We argue that, when applied, the claim fails to explain (or perhaps even be consistent with) a large range of important and uncontroversially interesting possible cases of cognitive penetration. We conclude with a general speculation about how the recent work on the predictive mind may influence the current dialectic concerning top-down effects on perception.
Suppose you believe you’re morally required to φ but that it’s not a big deal; and yet you think it might be deeply morally wrong to φ. You are in a state of moral uncertainty, holding high credence in one moral view of your situation, while having a small credence in a radically opposing moral view. A natural thought is that in such a case you should not φ, because φing would be too morally risky. The author argues that this natural thought is misguided. If φing is in fact morally required, then you should φ, and this is so even taking into account your moral uncertainty. The author argues that if the natural thought were correct, then being caught in the grip of a false moral view would be exculpatory: people who do morally wrong things thinking they are acting morally rightly would be blameless. But being caught in the grip of a false moral view is not exculpatory. So the natural thought is false. The author develops the claim that you should act as morality actually requires as a candidate answer to the question “how should one act in the face of moral uncertainty?” This answer has been dismissed in discussion up to this point. The author argues that not only is this answer a serious contender; it is the correct answer.
The financial crisis of 2008 was unforeseen partly because the academic theories that underpin policy making do not sufficiently account for uncertainty and complexity, or for learned and evolved human capabilities for managing them. Mainstream theories of decision making tend to be strongly normative and based on wishfully unrealistic “idealized” modeling. In order to develop theories of actual decision making under uncertainty, we need new methodologies that account for how human actors often manage uncertain situations “well enough.” Some possibly helpful methodologies, drawing on digital science, focus on the role of emotions in determining people's choices; others examine how people construct narratives that enable them to act; still others combine qualitative with quantitative data.
This article argues that the decision problem in the original position should be characterized as a decision problem under uncertainty even when it is assumed that the denizens of the original position know that they have an equal chance of ending up in any given individual’s place. It supports this claim by arguing that (a) the continuity axiom of decision theory does not hold between all of the outcomes the denizens of the original position face, and that (b) neither we nor the denizens of the original position can know the exact point at which discontinuity sets in, because the language we employ in comparing different outcomes is ineradicably vague. It is also argued that the account underlying (b) can help proponents of superiority in value theory defend their view against arguments offered by Norcross and Griffin.
Given the deep disagreement surrounding population axiology, one should remain uncertain about which theory is best. However, this uncertainty need not leave one neutral about which acts are better or worse. We show that, as the number of lives at stake grows, the Expected Moral Value approach to axiological uncertainty systematically pushes one toward choosing the option preferred by the Total View and critical-level views, even if one’s credence in those theories is low.
The aim of this paper is to examine whether it would be advantageous to introduce knowledge norms instead of the currently assumed rational credence norms into the debate about decision making under normative uncertainty. There is reason to think that this could help us better accommodate cases in which agents are rationally highly confident in false moral views. I show how Moss’ view of probabilistic knowledge can be fruitfully employed to develop a decision theory that delivers plausible verdicts in these cases. I also argue that, for this new view to be better than existing alternatives, it must adopt a particular solution to the new evil demon problem, which asks whether agents and their BIV-counterparts are equally justified. In order to get an attractive decision theory for cases of moral uncertainty, we must reject the claim that agents and their BIV-counterparts are equally justified. Moreover, the resulting view must be supplemented with a moral epistemology that explains how it is possible to be rationally morally uncertain. This is especially challenging if we assume that moral truths are knowable a priori.
This volume collects Gigerenzer's recent articles on the psychology of rationality. Like the earlier volumes, it should appeal to a broad mixture of cognitive psychologists, philosophers, economists, and others who study decision making.
Normative judgments involve two gradable features. First, the judgments themselves can come in degrees; second, the strength of reasons represented in the judgments can come in degrees. Michael Smith has argued that non-cognitivism cannot accommodate both of these gradable dimensions. The degrees of a non-cognitive state can stand in for degrees of judgment, or for degrees of reason strength represented in judgment, but not both. I argue that (a) there are brands of non-cognitivism that can surmount Smith’s challenge, and (b) any brand of non-cognitivism that has even a chance of solving the Frege–Geach Problem and some related problems involving probabilistic consistency can also thereby solve Smith’s problem. Because only versions of non-cognitivism that can solve the Frege–Geach Problem are otherwise plausible, all otherwise plausible versions of non-cognitivism can meet Smith’s challenge.
Everettian accounts of quantum mechanics entail that people branch; every possible result of a measurement actually occurs, and I have one successor for each result. Is there room for probability in such an account? The prima facie answer is no; there are no ontic chances here, and no ignorance about what will happen. But since any adequate quantum mechanical theory must make probabilistic predictions, much recent philosophical labor has gone into trying to construct an account of probability for branching selves. One popular strategy involves arguing that branching selves introduce a new kind of subjective uncertainty. I argue here that the variants of this strategy in the literature all fail, either because the uncertainty is spurious, or because it is in the wrong place to yield probabilistic predictions. I conclude that uncertainty cannot be the ground for probability in Everettian quantum mechanics.
This book offers a philosophically based, yet clinically oriented perspective on current medical reasoning, aiming at (1) identifying important forms of uncertainty permeating current clinical reasoning and practice; (2) promoting the application of an abductive methodology in the health context in order to deal with those clinical uncertainties; and (3) bridging the gap between biomedical knowledge, clinical practice, research, and values in both the clinical and philosophical literature. With a clear philosophical emphasis, the book investigates themes lying at the border between several disciplines, such as medicine, nursing, logic, epistemology, and philosophy of science, but also ethics, epidemiology, and statistics. At the same time, it critically discusses and compares several professional approaches to clinical practice, such as those of medical doctors, nurses, and other clinical practitioners, showing the need for developing a unified framework of reasoning that merges methods and resources from many different clinical and non-clinical disciplines. In particular, this book shows how to leverage nursing knowledge and practice, which has been considerably neglected so far, to further shape the interdisciplinary nature of clinical reasoning. Furthermore, a thorough philosophical investigation into the values involved in health care is provided, based on both the clinical and philosophical literature. The book concludes by proposing an integrative approach to health and disease that goes beyond the so-called “classical biomedical model of care”.