How much does ethics demand of us? On what authority does it demand it? How does what ethics demands relate to other requirements, such as those of prudence, law, and social convention? Does ethics really demand anything at all? Questions of this sort lie at the heart of the work of the Danish philosopher and theologian K. E. Løgstrup, and in particular his key text The Ethical Demand. In The Radical Demand in Løgstrup's Ethics, Robert Stern offers a full account of that text, and situates Løgstrup's distinctive position in relation to Kant, Kierkegaard, Levinas, Darwall, and Luther. For Løgstrup, the ethical situation is primarily one in which the fate of the other person is placed in your hands, where it is then your responsibility to do what is best for them. The demand therefore does not come from the other person as such, as what they ask you to do may differ from what you should do. Nor is it laid down by social rules, by God, or by any formal principle of practical reason, such as Kant's principle of universalizability. Rather, it comes from what is required to care for the other, and the directive power of their needs in the situation. Løgstrup therefore rejects accounts of ethical obligation based on the commands of God, on abstract principles governing practical reason, or on social norms; instead he develops a different picture, at the basis of which is our interdependence, which he argues gives his ethics a grounding in the nature of life itself.
Robert Stern investigates how scepticism can be countered by using transcendental arguments concerning the necessary conditions for the possibility of experience, language, or thought. He shows that the most damaging sceptical questions concern neither the certainty of our beliefs nor the reliability of our belief-forming methods, but rather how we can justify our beliefs.
The aim of this article is twofold. First, it is argued that while the principle of ‘ought implies can’ is certainly plausible in some form, it is tempting to misconstrue it, and that this has happened in the way it has been taken up in some of the current literature. Second, Kant's understanding of the principle is considered. Here it is argued that these problematic conceptions put the principle to work in a way that Kant does not, so that there is an important divergence here which can easily be overlooked.
In many histories of modern ethics, Kant is supposed to have ushered in an anti-realist or constructivist turn by holding that unless we ourselves 'author' or lay down moral norms and values for ourselves, our autonomy as agents will be threatened. In this book, Robert Stern challenges the cogency of this 'argument from autonomy', and claims that Kant never subscribed to it. Rather, it is not value realism but the apparent obligatoriness of morality that really poses a challenge to our autonomy: how can this be accounted for without taking away our freedom? The debate the book focuses on therefore concerns whether this obligatoriness should be located in ourselves, in others or in God. Stern traces the historical dialectic that drove the development of these respective theories, and clearly and sympathetically considers their merits and disadvantages; he concludes by arguing that the choice between them remains open.
Fourteen new essays by a distinguished team of authors offer a broad and stimulating re-examination of transcendental arguments. This is the philosophical method of arguing that what is doubted or denied by the opponent must be the case, as a condition for the possibility of experience, language, or thought. The line-up of contributors features leading figures in the field from both sides of the Atlantic; they discuss the nature of transcendental arguments, and consider their role and value. In particular, they consider how successful such arguments are as a response to sceptical problems. The editor's introduction provides historical context and philosophical orientation for the discussions. This is the first major appraisal of transcendental arguments since the 1970s; they have continued to play a significant role in philosophy, and recent developments in epistemology and metaphysics have raised new questions and challenges for them. Transcendental Arguments will be essential reading for anyone interested in this area of philosophy, and the starting-point for future work.
Recent approaches to causal modelling rely upon the causal Markov condition, which specifies which probability distributions are compatible with a directed acyclic graph. Further principles are required in order to choose among the large number of DAGs compatible with a given probability distribution. Here we present a principle that we call frugality. This principle tells one to choose the DAG with the fewest causal arrows. We argue that frugality has several desirable properties compared to the other principles that have been suggested, including the well-known causal faithfulness condition.
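As a toy illustration of the selection rule just described (the candidate DAGs and variable names below are hypothetical, not the authors' examples), frugality can be read as a simple minimisation over candidate edge sets:

```python
# Frugality sketch: among candidate DAGs (each represented as an edge set)
# assumed compatible with a given probability distribution, prefer the
# one(s) with the fewest causal arrows. The candidates are hypothetical.

def frugal_dags(candidates):
    """Return the candidate DAG(s) with the fewest edges."""
    fewest = min(len(edges) for edges in candidates)
    return [edges for edges in candidates if len(edges) == fewest]

# Three DAGs over variables X, Y, Z, all assumed compatible with the data:
chain    = {("X", "Y"), ("Y", "Z")}               # X -> Y -> Z
collider = {("X", "Y"), ("Z", "Y")}               # X -> Y <- Z
dense    = {("X", "Y"), ("Y", "Z"), ("X", "Z")}   # fully connected

print(frugal_dags([chain, collider, dense]))  # only the two 2-edge DAGs survive
```

As the example shows, frugality may leave a tie among equally sparse candidates; adjudicating such ties is part of what the comparison with other selection principles is about.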
Schupbach and Sprenger introduce a novel probabilistic approach to measuring the explanatory power that a given explanans exerts over a corresponding explanandum. Though we are sympathetic to their general approach, we argue that it does not adequately capture the way in which the causal explanatory power that c exerts on e varies with background knowledge. We then amend their approach so that it does capture this variance. Though our account of explanatory power is less ambitious than Schupbach and Sprenger’s in the sense that it is limited to causal explanatory power, it is also more ambitious because we do not limit its domain to cases where c genuinely explains e. Instead, we claim that c causally explains e if and only if our account says that c explains e with some positive amount of causal explanatory power.
Jim Joyce has argued that David Lewis’s formulation of causal decision theory is inadequate because it fails to apply to the “small world” decisions that people face in real life. Meanwhile, several authors have argued that causal decision theory should be developed such that it integrates the interventionist approach to causal modeling because of the expressive power afforded by the language of causal models, but, as of now, there has been little work towards this end. In this paper, I propose a variant of Lewis’s causal decision theory that is intended to meet both of these demands. Specifically, I argue that Lewis’s causal decision theory can be rendered applicable to small world decisions if one analyzes his dependency hypotheses as causal hypotheses that depend on the interventionist causal modeling framework for their semantics. I then argue that this interventionist variant of Lewis’s causal decision theory is preferable to interventionist causal decision theories that purportedly generalize Lewis’s through the use of conditional probabilities. This is because Lewisian interventionist decision theory captures the causal decision theorist’s conviction that any correlation between what the agent does and cannot cause should be irrelevant to the agent’s choice, while purported generalizations do not.
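The conviction mentioned at the end, that a correlation between an act and states the act cannot cause should not sway choice, can be seen in a Newcomb-style toy calculation; all probabilities and payoffs below are my own illustrative numbers, not drawn from the paper:

```python
# Sketch of the causal vs. evidential contrast in a Newcomb-style problem.
# All numbers are illustrative. State S = "the predictor foresaw one-boxing".
# The act does not cause S, but it is evidentially correlated with it.

p_s = 0.5                   # prior probability of S
p_s_given_onebox = 0.99     # evidential correlation of S with each act
p_s_given_twobox = 0.01

utility = {("onebox", True): 1_000_000, ("onebox", False): 0,
           ("twobox", True): 1_001_000, ("twobox", False): 1_000}

def evidential_eu(act, p_s_given_act):
    # Weights states by their probability conditional on the act.
    return (p_s_given_act * utility[(act, True)]
            + (1 - p_s_given_act) * utility[(act, False)])

def causal_eu(act):
    # Intervening on the act leaves P(S) untouched: the act cannot cause S.
    return p_s * utility[(act, True)] + (1 - p_s) * utility[(act, False)]

# Evidential reasoning favors one-boxing; causal reasoning favors two-boxing.
print(evidential_eu("onebox", p_s_given_onebox),
      evidential_eu("twobox", p_s_given_twobox))   # 990000.0 11000.0
print(causal_eu("onebox"), causal_eu("twobox"))    # 500000.0 501000.0
```

The intervention probability here simply reuses the prior P(S), which is the core of the causal decision theorist's recommendation: the act screens off nothing it cannot cause.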
Kim’s causal exclusion argument purports to demonstrate that the non-reductive physicalist must treat mental properties (and macro-level properties in general) as causally inert. A number of authors have attempted to resist Kim’s conclusion by utilizing the conceptual resources of Woodward’s (2005) interventionist conception of causation. The viability of these responses has been challenged by Gebharter (2017a), who argues that the causal exclusion argument is vindicated by the theory of causal Bayesian networks (CBNs). Since the interventionist conception of causation relies crucially on CBNs for its foundations, Gebharter’s argument appears to cast significant doubt on interventionism’s antireductionist credentials. In the present article, we both (1) demonstrate that Gebharter’s CBN-theoretic formulation of the exclusion argument relies on some unmotivated and philosophically significant assumptions (especially regarding the relationship between CBNs and the metaphysics of causal relevance), and (2) use Bayesian networks to develop a general theory of causal inference for multi-level systems that can serve as the foundation for an antireductionist interventionist account of causation.
[INTRODUCTION] Like the terms 'dialectic', 'Aufhebung' (or 'sublation'), and 'Geist', the term 'concrete universal' has a distinctively Hegelian ring to it. But unlike these others, it is particularly associated with the British strand in Hegel's reception history, as having been brought to prominence by some of the central British Idealists. It is therefore perhaps inevitable that, as their star has waned, so too has any use of the term, while an appreciation of the problematic that lay behind it has seemingly vanished: if the British Idealists get any sort of mention in a contemporary metaphysics book (which is rarely), it will be Bradley's view of relations or truth that is discussed, not their theory of universals, so that the term has a rather antique air, buried in the dusty volumes of Mind from the turn of the nineteenth century. This is not surprising: the episode known as British Idealism can appear to be a period that is lost to us, in its language, points of historical reference (Lotze, Sigwart, Jevons), and central preoccupations (the Absolute). Even while interest in Hegel continues to grow, interest in his Logic has grown more slowly than in the rest of his work, with Book III of the Logic remaining as the daunting peak of that challenging text - while it is here that the British Idealists focussed their attention and claimed to have uncovered that 'exotic' but 'vanished specimen', the concrete universal. Finally, as the trend of reading Hegel pushes ever further in a non-metaphysical direction, it might be thought that the future of the concrete universal is hardly likely to be brighter than its recent past - for it may seem hard to imagine how a conception championed by the British Idealists, who were apparently shameless in their metaphysical commitments, can find favour in these more austere and responsible times.
In this paper, however, I want to make a case for holding that there is something enlightening to be found in how some of the British Idealists approached the 'concrete universal', both interpretatively and philosophically. At the interpretative level, I will argue that while not everything these Idealists are taken to mean by the term is properly to be found in Hegel, their work nonetheless relates to a crucial and genuine strand in Hegel's position, so that their discussion of this issue is an important moment in the reception history of his thought. At a philosophical level, I think that the question that concerned Hegel and these British Idealists retains much of its interest, as does their shared approach to it: namely, how far does our thought involve a mere abstraction from reality, and what are the metaphysical and epistemological implications if it turns out it does not? As such, I will suggest, taking seriously what these British Idealists have to say about the concrete universal can help us both in our understanding of Hegel, and in our appreciation of the contribution Hegel's position can make to our thinking on the issues that surround this topic.
Meek and Glymour use the graphical approach to causal modeling to argue that one and the same norm of rational choice can be used to deliver both causal-decision-theoretic verdicts and evidential-decision-theoretic verdicts. Specifically, they argue that if an agent maximizes conditional expected utility, then the agent will follow the causal decision theorist’s advice when she represents herself as intervening, and will follow the evidential decision theorist’s advice when she represents herself as not intervening. Since Meek and Glymour take no stand on whether agents should represent themselves as intervening, they provide more general advice than standard causal decision theorists and evidential decision theorists. But I argue here that even Meek and Glymour’s advice is not sufficiently general. This is because their advice is not sensitive to the distinct ways in which agents can fail to intervene, and there are decision-making contexts in which agents can reasonably have non-extreme confidence that they are intervening. I then show that the most natural extension of Meek and Glymour’s framework fails, but offer a generalization of my “Interventionist Decision Theory” that does not suffer from the same problems.
Gordon Belot argues that Bayesian theory is epistemologically immodest. In response, we show that the topological conditions that underpin his criticisms of asymptotic Bayesian conditioning are self-defeating. They require extreme a priori credences regarding, for example, the limiting behavior of observed relative frequencies. We offer a different explication of Bayesian modesty using a goal of consensus: rival scientific opinions should be responsive to new facts as a way to resolve their disputes. We also address Adam Elga’s rebuttal to Belot’s analysis, which focuses attention on the role that the assumption of countable additivity plays in Belot’s criticisms.
A modest transcendental argument is one that sets out merely to establish how things need to appear to us or how we need to believe them to be, rather than how things are. Stroud's claim to have established that all transcendental arguments must be modest in this way is criticised and rejected. However, a different case for why we should abandon ambitious transcendental arguments is presented: namely, that when it comes to establishing claims about how things are, there is no reason to prefer transcendental arguments to arguments that rely on the evidence of the senses, making the former redundant in a way that modest transcendental arguments, which have a different kind of sceptical target, are not.
There are cases of ineffable learning — i.e., cases where an agent learns something, but becomes certain of nothing that she can express — where it is rational to update by Jeffrey conditionalization. But there are likewise cases of ineffable learning where updating by Jeffrey conditionalization is irrational. In this paper, we first characterize a novel class of cases where it is irrational to update by Jeffrey conditionalization. Then we use the d-separation criterion to develop a causal understanding of when and when not to Jeffrey conditionalize that bars updating by Jeffrey conditionalization in these cases. Finally, we reflect on how the possibility of so-called “unfaithful” causal systems bears on the normative force of the causal updating norm that we advocate.
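For readers unfamiliar with it, Jeffrey conditionalization redistributes credence over a partition {E_i} while holding credences conditional on each cell fixed: P_new(A) = Σ_i P_old(A | E_i) · q_i. A minimal sketch with made-up numbers (Jeffrey's cloth-by-candlelight case is the inspiration, but the values are mine):

```python
# Jeffrey conditionalization sketch: experience shifts credence across a
# partition {E_i} to new weights q_i without making any E_i certain, and
# credences conditional on each cell stay fixed:
#     P_new(A) = sum_i P_old(A | E_i) * q_i

def jeffrey_update(p_a_given, q):
    """p_a_given[i] = P_old(A | E_i); q[i] = new credence in E_i."""
    assert abs(sum(q) - 1.0) < 1e-9, "new weights must sum to 1"
    return sum(pa * qi for pa, qi in zip(p_a_given, q))

# Partition: {cloth is green, cloth is blue}; A = "the cloth will sell".
p_sell_given_colour = [0.4, 0.9]   # P(sell | green), P(sell | blue)
dim_light_weights   = [0.7, 0.3]   # a glimpse in dim light shifts colour credence

print(jeffrey_update(p_sell_given_colour, dim_light_weights))  # ≈ 0.55
```

When some q_i = 1 this reduces to ordinary conditionalization, which is why Jeffrey's rule is the natural candidate norm for the ineffable-learning cases discussed above.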
In their 2010 book, Biology’s First Law, D. McShea and R. Brandon present a principle that they call ‘‘ZFEL,’’ the zero force evolutionary law. ZFEL says (roughly) that when there are no evolutionary forces acting on a population, the population’s complexity (i.e., how diverse its member organisms are) will increase. Here we develop criticisms of ZFEL and describe a different law of evolution; it says that diversity and complexity do not change when there are no evolutionary causes.
Although logical consistency is desirable in scientific research, standard statistical hypothesis tests are typically logically inconsistent. To address this issue, previous work introduced agnostic hypothesis tests and proved that they can be logically consistent while retaining statistical optimality properties. This article characterizes the credal modalities in agnostic hypothesis tests and uses the hexagon of oppositions to explain the logical relations between these modalities. Geometric solids that are composed of hexagons of oppositions illustrate the conditions for these modalities to be logically consistent. Prisms composed of hexagons of oppositions show how the credal modalities obtained from two agnostic tests vary according to their threshold values. Nested hexagons of oppositions summarize logical relations between the credal modalities in these tests and prove new relations.
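The three-valued verdict structure that distinguishes agnostic tests can be sketched as follows; the p-value framing, function name, and threshold values are my simplification for illustration, not the construction used in the article:

```python
# Agnostic hypothesis test sketch (illustrative simplification): alongside
# "accept" and "reject", a third verdict remains available, and it is this
# extra option that makes room for a logically consistent battery of tests.

def agnostic_verdict(p_value, reject_below=0.05, accept_above=0.95):
    """Three-valued verdict on a hypothesis given a p-value."""
    if p_value < reject_below:
        return "reject"
    if p_value > accept_above:
        return "accept"
    return "agnostic"

print([agnostic_verdict(p) for p in (0.01, 0.5, 0.99)])
# ['reject', 'agnostic', 'accept']
```

The two thresholds are what the abstract's "prisms composed of hexagons of oppositions" vary: moving them changes which credal modality each verdict expresses.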
This volume presents a selection of Robert Stern's work on the theme of Kantian ethics. It begins by focusing on the relation between Kant's account of obligation and his view of autonomy, arguing that this leaves room for Kant to be a realist about value. Stern then considers where this places Kant in relation to the question of moral scepticism, and in relation to the principle of 'ought implies can', and examines this principle in its own right. The papers then move beyond Kant himself to his wider influence and to critics of his work, and the volume concludes with a consideration of a broadly Kantian critique of divine command ethics offered by Stephen Darwall. General themes considered in this volume include value, perfectionism, agency, autonomy, moral motivation, moral scepticism, and obligation, as well as the historical place of Kant's ethics and its influence on thinkers up to the present day.
This paper considers the prospects for the current revival of interest in Hegel, and the direction it might take. Looking back to Richard J. Bernstein's paper from 1977, on ‘Why Hegel Now?’, it contrasts his optimistic assessment of a rapprochement between Hegel and analytic philosophy with Sebastian Gardner's more pessimistic view, where Gardner argues that Hegel's idealist account of value makes any such rapprochement impossible. The paper explores Hegel's account of value further, arguing for a middle way between these extremes of optimism and pessimism, proposing an Aristotelian reading which is more metaphysical than Bernstein recognizes, but not as at odds with thinking in current analytic philosophy as Gardner suggests, as it finds a counterpart in the work of Philippa Foot, Michael Thompson, Rosalind Hursthouse and others.
The Sleeping Beauty problem has spawned a debate between “thirders” and “halfers” who draw conflicting conclusions about Sleeping Beauty's credence that a coin lands heads. Our analysis is based on a probability model for what Sleeping Beauty knows at each time during the experiment. We show that conflicting conclusions result from different modeling assumptions that each group makes. Our analysis uses a standard “Bayesian” account of rational belief with conditioning. No special handling is used for self-locating beliefs or centered propositions. We also explore what fair prices Sleeping Beauty computes for gambles that she might be offered during the experiment.
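The dependence on modeling assumptions can be seen in miniature with some toy bookkeeping (my illustration, not the authors' probability model): heads yields one awakening, tails two, and the credence one computes depends on whether one counts per experiment or per awakening:

```python
# Toy arithmetic behind the thirder/halfer split. A fair coin is tossed:
# heads yields one awakening, tails yields two. The numbers are idealized
# (exactly half the hypothetical runs land heads).

experiments = 1000
heads_runs = tails_runs = experiments // 2

heads_awakenings = heads_runs * 1
tails_awakenings = tails_runs * 2

# Per-experiment bookkeeping: half the runs are heads (the "halfer" answer).
per_experiment = heads_runs / experiments
# Per-awakening bookkeeping: a third of awakenings follow heads ("thirder").
per_awakening = heads_awakenings / (heads_awakenings + tails_awakenings)

print(per_experiment, per_awakening)  # 0.5 and ≈ 0.333
```

Both numbers are correct answers to different questions; the paper's point is that the rival camps are, in effect, building different models of what Beauty knows when.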
This is an introduction to a special issue of the British Journal for the History of Philosophy, on the relation between idealism and pragmatism. It sets out the way in which the two traditions can be related, and then outlines the papers contained in the special issue.
It is a consequence of the theory of imprecise credences that there exist situations in which rational agents inevitably become less opinionated toward some propositions as they gather more evidence. The fact that an agent's imprecise credal state can dilate in this way is often treated as a strike against the imprecise approach to inductive inference. Here, we show that dilation is not a mere artifact of this approach by demonstrating that opinion loss is countenanced as rational by a substantially broader class of normative theories than has been previously recognised. Specifically, we show that dilation-like phenomena arise even when one abandons the basic assumption that agents have (precise or imprecise) credences of any kind, and follows directly from bedrock norms for rational comparative confidence judgements of the form 'I am at least as confident in p as I am in q'. We then use the comparative confidence framework to develop a novel understanding of what exactly gives rise to dilation-like phenomena. By considering opinion loss in this more general setting, we are able to provide a novel assessment of the prospects for an account of inductive inference that is not saddled with the inevitability of rational opinion loss.
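Dilation itself can be exhibited with a standard toy credal set (the textbook example, not the constructions in this paper): every prior in the set agrees that P(H) = 1/2, yet conditioning on evidence E spreads the posterior over the whole unit interval:

```python
# Dilation sketch. Each prior in the credal set fixes P(H) = 1/2 but takes
# a different view of how H correlates with the evidence E. Learning E
# "dilates" the credence in H from the point 1/2 to the interval [0, 1].

def make_prior(a):
    """Joint over (H, E) with P(H) = 1/2 and P(H & E) = a, 0 <= a <= 1/2."""
    return {(True, True): a, (True, False): 0.5 - a,
            (False, True): 0.5 - a, (False, False): a}

def p_h(joint):
    return joint[(True, True)] + joint[(True, False)]

def p_h_given_e(joint):
    p_e = joint[(True, True)] + joint[(False, True)]   # always 1/2 here
    return joint[(True, True)] / p_e

credal_set = [make_prior(a / 10) for a in range(6)]    # a = 0.0, 0.1, ..., 0.5

priors     = {p_h(j) for j in credal_set}              # {0.5}: fully precise
posteriors = [p_h_given_e(j) for j in credal_set]      # spreads over [0, 1]
print(priors, min(posteriors), max(posteriors))
```

Before learning E the agent's opinion about H is maximally sharp; after learning E it is maximally spread, which is exactly the opinion loss the abstract describes.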
In this article, I want to argue that scepticism for Kant must be seen in ancient and not just modern terms, and that if we take this into account we will need to take a different view of Kant's response to Hume from the one that is standardly presented in the literature. This standard view has been put forward recently by Paul Guyer, and it is therefore his view that I want to look at in some detail, and to try to correct.
Though common sense says that causes must temporally precede their effects, the hugely influential interventionist account of causation makes no reference to temporal precedence. Does common sense lead us astray? In this paper, I evaluate the power of the commonsense assumption from within the interventionist approach to causal modeling. I first argue that if causes temporally precede their effects, then one need not consider the outcomes of interventions in order to infer causal relevance, and that one can instead use temporal and probabilistic information to infer exactly when X is causally relevant to Y in each of the senses captured by Woodward’s interventionist treatment. Then, I consider the upshot of these findings for causal decision theory, and argue that the commonsense assumption is especially powerful when an agent seeks to determine whether so-called “dominance reasoning” is applicable.
_The Phenomenology of Spirit_ is Hegel's most important and famous work. It is essential to understanding Hegel's philosophical system and why he remains a major figure in Western philosophy. This _GuideBook_ introduces and assesses:
* Hegel's life and the background to the _Phenomenology of Spirit_
* the ideas and the text of the _Phenomenology of Spirit_
* the continuing importance of Hegel's work to philosophy.
Bayesians standardly claim that there is rational pressure for agents’ credences to cohere across time because they face bad (epistemic or practical) consequences if they fail to diachronically cohere. But as David Christensen has pointed out, groups of individual agents also face bad consequences if they fail to interpersonally cohere, and there is no general rational pressure for one agent's credences to cohere with another’s. So it seems that standard Bayesian arguments may prove too much. Here, we agree with Christensen that there is no general rational pressure to diachronically cohere, but we argue that there are particular cases in which there is rational pressure to diachronically cohere, as well as particular cases in which interpersonal probabilistic coherence is rationally required. More generally, we suggest that Bayesian arguments for coherence apply whenever a collection (of agents or time slices) has a shared dimension of value and an ability to coordinate their actions in a range of cases relevant to that value. Typically, this shared value and ability to coordinate is very strong across the time slices of one human being, and very weak across different human beings, but there are special cases where these can switch—i.e., some groups of humans will have as much reason for their beliefs to cohere across a particular range of cases as the time slices of one human usually do, but some time slices of a human will have as much freedom to differ in their beliefs from the others as the members of a group usually do.
Since Barry Stroud's classic paper in 1968, the general discussion on transcendental arguments tends to focus on examples from theoretical philosophy. It also tends to be pessimistic, or at least extremely reluctant, about the potential of arguments of this kind. Nevertheless, transcendental reasoning continues to play a prominent role in some recent approaches to moral philosophy. Moreover, some authors argue that transcendental arguments may be more promising in moral philosophy than they are in theoretical contexts. Against this background, the current volume focuses on transcendental arguments in practical philosophy. Experts from different countries and branches of philosophy share their views about whether there are actually differences between "theoretical" and "practical" uses of transcendental arguments. They examine and compare different versions of transcendental arguments in moral philosophy, explain their structure, and assess their respective problems and promises. This book offers all those interested in ethics, meta-ethics, or epistemology a more comprehensive understanding of transcendental arguments. It also provides them with new insights into uses of transcendental reasoning in moral philosophy.
This collection of essays by leading international philosophers considers central themes in the ethics of Danish philosopher Knud Ejler Løgstrup (1905–1981). Løgstrup was a Lutheran theologian much influenced by phenomenology and by strong currents in Danish culture, to which he himself made important contributions. The essays in What Is Ethically Demanded? K. E. Løgstrup’s Philosophy of Moral Life are divided into four sections. The first section deals predominantly with Løgstrup’s relation to Kant and, through Kant, the system of morality in general. The second section focuses on how Løgstrup stands in connection with Kierkegaard, Heidegger, and Levinas. The third section considers issues in the development of Løgstrup’s ethics and how it relates to other aspects of his thought. The final section covers certain central themes in Løgstrup’s position, particularly his claims about trust and the unfulfillability of the ethical demand. The volume includes a previously untranslated early essay by Løgstrup, “The Anthropology of Kant’s Ethics,” which defines some of his basic ethical ideas in opposition to Kant’s. The book will appeal to philosophers and theologians with an interest in ethics and the history of philosophy.
This paper introduces pragmatic hypotheses and relates this concept to the spiral of scientific evolution. Previous works determined a characterization of logically consistent statistical hypothesis tests and showed that the modal operators obtained from these tests can be represented in the hexagon of oppositions. However, despite the importance of precise hypotheses in science, they cannot be accepted by logically consistent tests. Here, we show that this dilemma can be overcome by the use of pragmatic versions of precise hypotheses. These pragmatic versions allow a level of imprecision in the hypothesis that is small relative to other experimental conditions. The introduction of pragmatic hypotheses allows the evolution of scientific theories based on statistical hypothesis testing to be interpreted using the narratological structure of hexagonal spirals, as defined by Pierre Gallais.
One argument put forward by Christine Korsgaard in favour of her constructivist appeal to the nature of agency is that it does better than moral realism in answering moral scepticism. However, realists have replied by pressing on her the worry raised by H. A. Prichard, that any attempt to answer the moral sceptic only succeeds in basing moral actions in non-moral ends, and so is self-defeating. I spell out these issues in more detail, and suggest that both sides can learn something by seeing how the sceptical problematic arises in Kant. Doing so, I argue, shows how Korsgaard might raise the issue of scepticism against the realist whilst avoiding the Prichardian response.
My aim in this paper is to consider a particular line of criticism that has been used by constructivists to argue against moral realism, which is to claim that if moral realism were true, this would then threaten or undermine our autonomy as agents. I call this the argument from autonomy. I argue that the best way to understand the argument from autonomy is to relate it to the issue of obligatoriness; but that there are a variety of strategies to be explored concerning obligation before it is clear that the right response to this issue is a constructivist one, or that the realist is hereby compelled to surrender their position.
This paper examines college athletes’ perceived support for concussion reporting from coaches and teammates and its variation by year-in-school, finding significant differences in perceived coach support. It also examines the effects of perceived coach support on concussion reporting behaviors, finding that greater perceived coach support is associated with fewer undiagnosed concussions and returning to play while symptomatic less frequently in the two weeks preceding the survey. Coaches play a critical role in athlete concussion reporting.
Concussion is a form of traumatic brain injury that has been defined as a “trauma-induced alteration in mental status that may or may not involve loss of consciousness.” Terms such as getting a “ding” or getting your “bell rung” are sometimes used as colloquialisms for concussion, but inappropriately downplay the seriousness of the injury. It is estimated that between 1.6 and 3.8 million concussions occur annually in the United States as a result of participation in sports or recreational activities. To date, there are no objective, biological markers for concussion; rather, the current diagnosis of concussion is dependent upon symptom reporting by the athlete. In the acute phase, concussions can result in a broad spectrum of symptoms that can be transient or last for days, weeks, or even months. Symptom prolongation is generally referred to as post-concussion syndrome.