This paper shows that strict evidentialism about normative reasons for belief is inconsistent with taking truth to be the source of normative reasons for belief. It does so by identifying circumstances in which one can know what truth requires one to believe, yet still lack evidence for the content of that belief.
Philosophers interested in the fitting attitude analysis of final value have devoted a great deal of attention to the wrong kind of reasons problem. This paper offers an example of the reverse difficulty, the wrong kind of value problem. This problem creates deeper challenges for the fitting attitude analysis and provides independent grounds for rejecting it, or at least for seriously doubting its correctness.
In this paper I introduce an objection to normative evidentialism about reasons for belief. The objection arises from difficulties that evidentialism has with explaining our reasons for belief in unstable belief contexts with a single fixed point. I consider what other kinds of reasons for belief are relevant in such cases.
In this paper it is argued that the buck-passing analysis (BPA) of final value is not a plausible analysis of value and should be abandoned. While considering the influential wrong kind of reason problem and other more recent technical objections, this paper contends that there are broader reasons for giving up on buck-passing. It is argued that the BPA, even if it can respond to the various technical objections, is not an attractive analysis of final value. It is not attractive for two reasons: the first being that the BPA lacks the features typical of successful conceptual analyses and the second being that it is unable to deliver on the advantages that its proponents claim for it. While not offering a knock-down technical refutation of the BPA, this paper aims to show that there is little reason to think that the BPA is correct, and that it should therefore be given up as an analysis of final value.
In this paper I argue against the stronger of the two views concerning the right and wrong kind of reasons for belief, i.e. the view that the only genuine normative reasons for belief are evidential. The project in this paper is primarily negative, but with an ultimately positive aim. That aim is to leave room for the possibility that there are genuine pragmatic reasons for belief. Work is required to make room for this view, because evidentialism of a strict variety remains the default view in much of the debate concerning normative reasons for belief. Strict versions of evidentialism are inconsistent with the view that there are genuine pragmatic reasons for belief.
This is chapter 5 of the book project _The true and the good: a new theory of theoretical reason_, in which I explore the claim that both alethic and pragmatic reasons for belief are basic, but that they share a pragmatic foundation in a pluralist theory of wellbeing in which being in a positive epistemic state is a non-derivative component of wellbeing. This chapter argues that all three of fittingness first, reasons first, and value first views are false. It does so by showing that fittingness and reasons both have unalike variance conditions with respect to value, i.e. that sometimes the value of something can switch from good to bad or bad to good without there being any change in whether it is fitting to favour it or whether there are reasons to favour it. This is a form of an argument from under-generation following on earlier work by Krister Bykvist and Andrew Reisner, respectively. It is also argued, more tentatively, that reasons cannot be analysed in terms of correctness. Because the arguments in this paper concern the extensional adequacy of the various bi-conditionals linking fittingness, reasons, and value, they suffice for rejecting even modest versions of '-first' views that do not purport to provide analyses, but rather only sets of correctness conditions for, e.g., reasons and value in terms of fittingness, or any two in terms of the third. (Updated 10 May 2022).
This chapter sets out a theory of how to weigh alethic and pragmatic (non-alethic) reasons for belief, or more precisely, to say how alethic and non-alethic considerations jointly determine what one ought to believe. It replaces my earlier (2008) weighing account. It is part of _The true and the good: a new theory of theoretical reason_, which develops a view, welfarist pluralism, which comprises two central theses. One is that there are both irreducibly alethic or epistemic reasons for belief and irreducibly pragmatic (and non-alethic) reasons for belief. The other is that despite this, the source of all normativity is pragmatic in a particular way, i.e. that all reasons are reasons in virtue of their being conducive to wellbeing. The pluralist theory of reasons emerges from the irreducibly plural nature of the components of wellbeing, one of which is being in a positively-valenced epistemic state. This view also offers some insight into outstanding problems concerning the scope, chronicity, and normativity of the requirements of theoretical rationality. (Updated 10 May 2022).
This entry discusses the notion of a unit of normativity. This notion may be understood in two distinct ways. One way to understand a unit of normativity is as some particular type of assignment of normative status, e.g., a requirement, an ought, a reason, or a permission. A second way to understand a unit of normativity is as a measure of a quantity of normativity, perhaps associated with the numerical assignment given to the strength of reasons. This entry outlines some basic differences among units of normativity in the first sense, noting that they vary slightly depending on whether one is talking about normativity in a more general or more robust sense. This entry also discusses in more detail the question of whether there might be a unit of normativity in the second sense. It discusses the relevant metaphysical questions. It also provides an explanation of why reasons can be assigned numerical strengths, even if there are no units of normativity in the sense of measurements of quantities of normativity.
People’s concept of free will is often assumed to be incompatible with the deterministic, scientific model of the universe. Indeed, many scholars treat the folk concept of free will as assuming a special form of nondeterministic causation, possibly the notion of uncaused causes. However, little work to date has directly probed individuals’ beliefs about what it means to have free will. The present studies sought to reconstruct this folk concept of free will by asking people to define the concept (Study 1) and by confronting them with a neuroscientific claim that free will is an illusion (Study 2), which invited them to either reconcile or contrast free will with determinism. The results suggest that the core of people’s concept of free will is a choice that fulfills one’s desires and is free from internal or external constraints. No evidence was found for metaphysical assumptions about dualism or indeterminism.
According to previous research, threatening people’s belief in free will may undermine moral judgments and behavior. Four studies tested this claim. Study 1 used a Velten technique to threaten people’s belief in free will and found no effects on moral behavior, judgments of blame, and punishment decisions. Study 2 used six different threats to free will and failed to find effects on judgments of blame and wrongness. Study 3 found no effects on moral judgment when manipulating general free will beliefs but found strong effects when manipulating the perceived choice capacity of the judged agent. Study 4 used pretested narratives that varied agents’ apparent free will and found that perceived choice capacity mediated the relationship between free will and blame. These results suggest that people’s general beliefs about whether free will exists have no impact on moral judgments but specific judgments about the agent’s choice capacity do.
The strong weak truth table (sw) reducibility was suggested by Downey, Hirschfeldt, and LaForte as a measure of relative randomness, alternative to the Solovay reducibility. It also occurs naturally in proofs in classical computability theory as well as in the recent work of Soare, Nabutovsky, and Weinberger on applications of computability to differential geometry. We study the sw-degrees of c.e. reals and construct a c.e. real which has no random c.e. real (i.e., Ω number) sw-above it.
Architectural Philosophy is the first book to outline a philosophical account of architecture and to establish the singularity of architectural practice and ...
Moral judgments about an agent's behavior are enmeshed with inferences about the agent's mind. Folk psychology—the system that enables such inferences—therefore lies at the heart of moral judgment. We examine three related folk-psychological concepts that together shape people's judgments of blame: intentionality, choice, and free will. We discuss people's understanding and use of these concepts, address recent findings that challenge the autonomous role of these concepts in moral judgment, and conclude that choice is the fundamental concept of the three, defining the core of folk psychology in moral judgment.
We show that in the c.e. weak truth table degrees if b < c then there is an a which contains no hypersimple set and b < a < c. We also show that for every w < c in the c.e. wtt degrees such that w is hypersimple, there is a hypersimple a such that w < a < c. On the other hand, we know that there are intervals which contain no hypersimple set.
Hershberger is the winner of a 2015 Insight Award from the Society for Photographic Education for his work on this book and for his overall contributions to the field! Photographic Theory: An Historical Anthology presents a compendium of readings spanning ancient times to the digital age that are related to the history, nature, and current status of debates in photographic theory. It offers an authoritative and academically up-to-date compendium of the history of photographic theory; represents the only collection to include ancient, Renaissance, and 19th-, 20th-, and 21st-century writings related to the subject; stresses the drama of historical and contemporary debates within theoretical circles; features comprehensive coverage of recent trends in digital photography; and fills a much-needed gap in the existing literature.
Recent philosophy of psychology has seen the rise of so-called "dual-component" and "two-dimensional" theories of mental content as what I call a "Middle Way" between internalism (the view that contents of states like belief are "narrow") and externalism (the view that by and large, such contents are "wide"). On these Middle Way views, mental states are supposed to have two kinds of content: the "folk-psychological" kind, which we ordinarily talk about and which is wide; and some non-folk-psychological kind which is narrow. Jerry Fodor is responsible for one of the most influential arguments that we need to believe in some such non-folk-psychological kind of content. In this paper I argue that the ideas behind Fodor's premises are mutually inconsistent - so it would be irrational to believe in a Middle Way theory of mental content no matter how many of Fodor's premises you find plausible. Common opinion notwithstanding, we have to choose between internalism and externalism, full-stop.
Internalism about mental content holds that microphysical duplicates must be mental duplicates full-stop. Anyone particle-for-particle indiscernible from someone who believes that Aristotle was wise, for instance, must share that same belief. Externalism instead contends that many perfectly ordinary propositional attitudes can be had only in certain sorts of physical, sociolinguistic, or historical context. To have a belief about Aristotle, for instance, a person must have been causally impacted in the right way by Aristotle himself (e.g., by hearing about him, or reading some of his works). An interesting third view, which I call ...
Walter Benjamin's Politics of 'Bad Taste' (Michael Mac); Modernity as an Unfinished Project: Benjamin and Political Romanticism (Robert Sinnerbrink); Violence, ...
We say that A ≤LR B if every B-random set is A-random with respect to Martin–Löf randomness. We study this relation and its interactions with Turing reducibility, Π⁰₁ classes, hyperimmunity and other recursion theoretic notions.
We say that A ≤LR B if every B-random number is A-random. Intuitively this means that if oracle A can identify some patterns on some real γ, then oracle B can also find patterns on γ. In other words, B is at least as good as A for this purpose. We study the structure of the LR degrees globally and locally (i.e., restricted to the computably enumerable degrees) and their relationship with the Turing degrees. Among other results we show that whenever α is not GL₂ the LR degree of α bounds $2^{\aleph _{0}}$ degrees (so that, in particular, there exist LR degrees with uncountably many predecessors), and we give sample results which demonstrate how various techniques from the theory of the c.e. degrees can be used to prove results about the c.e. LR degrees.
Let us say that any degree d > 0 satisfies the minimal complementation property (MCP) if for every degree 0 < a < d there exists a minimal degree b < d such that a ∨ b = d. We show that every degree d ≥ 0′ satisfies MCP.
We prove a number of results in effective randomness, using methods in which Π⁰₁ classes play an essential role. The results proved include the fact that every PA Turing degree is the join of two random Turing degrees, and the existence of a minimal pair of LR degrees below the LR degree of the halting problem.
Misleading information pervades marketing communications, and is a long-standing issue in business ethics. Regulators place a heavy burden on consumers to detect misleading information, and a number of studies have shown training can improve their ability to do so. However, the possible side effects have largely gone unexamined. We provide evidence for one such side effect, whereby training consumers to detect a specific tactic leaves them more vulnerable to a second tactic included in the same ad, relative to untrained controls. We update standard notions of persuasion knowledge using a goal systems approach that allows for multiple vigilance goals to explain such side effects in terms of goal shielding, which is a generally adaptive process by which activation and/or fulfillment of a low-level goal inhibits alternative detection goals. Furthermore, the same goal systems logic is used to develop a more general form of training that activates a higher-level goal. This more general training improved detection of a broader set of tactics without the negative goal shielding side effect.
Ethics, Capitalism, and Multinationals. E. F. Andrews - forthcoming - Ethics and the Multinational Enterprise: Proceedings of the Sixth National Conference on Business Ethics. University Press of America, Lanham, Md.