The all-affected principle, by which all those affected by the policies of the state ought to be included in the demos governing it, is often considered prima facie attractive but, upon closer examination, implausible. The main alternative, according to which all those and only those affected by possible consequences of possible decisions ought to be included in the demos, is equally implausible. I suggest a reformulated principle: the demos includes all those affected by foreseeable consequences of decisions that the state has legal authority and capacity to take. This avoids the problems of the standard version and the main alternative.
According to neo-republicans, democracy is morally justified because it is among the prerequisites for freedom as non-domination. The claim that democracy secures freedom as non-domination needs to explain why democratic procedures contribute to non-domination and for whom democracy secures non-domination. This requires an account of why domination is countered by democratic procedures and an account of to whom domination is countered by access to democratic procedures. Neo-republican theory of democracy is based on a detailed discussion of the former but a scant discussion of the latter. We address this lacuna by interpreting the two most influential principles of inclusion, the all-subjected principle and the all-affected principle, in light of neo-republican commitments. The preliminary conclusion is that both principles are able to capture relations of domination between the democratic state and the people controlled by it in the relevant sense. Yet, the state has virtually unlimited powers to control residents, but only limited powers to interfere in the lives of non-residents. Republican aspirations are therefore more in tune with the all-subjected principle according to which only residents in the territory of the state should be granted rights to political participation.
The “demos paradox” is the idea that the composition of a demos could never secure democratic legitimacy because the composition of a demos cannot itself be democratically decided. Those who view this problem as unsolvable argue that this insight allows them to adopt a critical perspective towards common ideas about who has legitimate standing to participate in democratic decision-making. We argue that the opposite is true and that endorsing the demos paradox actually undermines our ability to critically engage with common ideas about legitimate standing. We challenge the conception of legitimacy that lurks behind the demos paradox and argue that the real impossibility is to endorse democracy without also being committed to significant procedure-independent standards for the legitimate composition of the demos. We show that trying to solve the problem of the demos by appeal to some normative conception of democratic legitimacy is a worthwhile project that is not undermined by paradox.
Machine learning algorithms are increasingly used to support decision-making in the exercise of public authority. Here, we argue that an important consideration has been overlooked in previous discussions: whether the use of ML undermines the democratic legitimacy of public institutions. From the perspective of democratic legitimacy, it is not enough that ML contributes to efficiency and accuracy in the exercise of public authority, which has so far been the focus in the scholarly literature engaging with these developments. According to one influential theory, exercises of administrative and judicial authority are democratically legitimate if and only if administrative and judicial decisions serve the ends of the democratic law maker, are based on reasons that align with these ends, and are accessible to the public. These requirements are not satisfied by decisions determined through ML, since such decisions are determined by statistical operations that are opaque in several respects. However, not all ML-based decision support systems pose the same risk, and we argue that a considered judgment on the democratic legitimacy of ML in exercises of public authority needs to take the complexity of the issue into account. This paper outlines considerations that help guide the assessment of whether ML undermines democratic legitimacy when used to support public decisions. We argue that two main considerations are pertinent to such normative assessment. The first is the extent to which ML is practiced as intended and the extent to which it replaces decisions that were previously accessible and based on reasons. The second is that uses of ML in exercises of public authority should be embedded in an institutional infrastructure that secures reason giving and accessibility.
Should artificial intelligences ever be included as co-authors of democratic decisions? According to the conventional view in democratic theory, the answer depends on the relationship between the political unit and the entity that is either affected or subjected to its decisions. The relational conditions for inclusion as stipulated by the all-affected and all-subjected principles determine the spatial extension of democratic inclusion. Thus, AI qualifies for democratic inclusion if and only if AI is either affected or subjected to decisions by the political unit in relevant ways. This paper argues that the conventional view is too simple; that it neglects democratic reasons to recognize only agents and/or moral patients as participants in decision-making. The claim defended is that AAP and ASP implicitly affirm requirements for agency and patiency. In ASP, the entity included must be an agent, understood either in terms of legal status, capacity to comply with the law, or ability to recognize legitimate authority. In AAP, the entity included must be a patient, understood either in terms of capacity for sentience or consciousness. Thus, the idea here is to explore the potential democratic inclusion of artificial intelligences by an updated account of the relevant conditions of agency and patiency that are implicit in democratic theory. Although it is conceivable that AI is or will be either affected or subjected in relevant ways to decisions made by political units, it is far less clear that AI will ever be an agent or a patient in the sense required for democratic inclusion.
Jonas Olson presents a critical survey of moral error theory, the view that there are no moral facts and so all moral claims are false. Part I explores the historical context of the debate; Part II assesses J. L. Mackie's famous arguments; Part III defends error theory against challenges and considers its implications for our moral thinking.
Rosenberg applies current thinking in philosophy of science to neoclassical economics in order to assess its claims to scientific standing. Although philosophers have used history and psychology as paradigms for the examination of social science, there is good reason to believe that economics is a more appropriate subject for analysis: it is the most systematized and quantified of the social sciences; its practitioners have reached a measure of consensus on important aspects of their subject; and it encompasses a large number of apparently law-like propositions.
Douglas W. Hands's “What Economics Is Not: An Economist's Response to Rosenberg” is an unsympathetic criticism of the explanatory hypotheses of “If Economics Isn't Science, What Is It?”. Before replying to his objection, I summarize the claims of that paper.
Recent debates in psychopathy studies have articulated concerns about false-positives in assessment and research sampling. These are pressing concerns for research progress, since scientific quality depends on sample quality: if we wish to study psychopathy, we must be certain that the individuals we study are, in fact, psychopaths. Thus, if conventional assessment tools yield substantial false-positives, this would explain why central research is laden with discrepancies and nonreplicable findings. This paper draws on moral psychology in order to develop tentative theory-driven exclusion criteria applicable in research sampling. Implementing standardized procedures to discriminate between research participants has the potential to yield more homogeneous and discrete samples, a vital prerequisite for research progress in etiology, epidemiology, and treatment strategies.
Moral particularism is commonly presented as an alternative to ‘principle- or rule-based’ approaches to ethics, such as consequentialism or Kantianism. This paper argues that particularists' aversions to consequentialism stem not from a structural feature of consequentialism per se, but from substantial and structural axiological views traditionally associated with consequentialism. Given a particular approach to value, there need be no conflict between moral particularism and consequentialism. We consider and reject a number of challenges holding that there is after all such a conflict. We end by suggesting that our proposed position appears quite appealing since it preserves attractive elements from particularism as well as consequentialism.
Recognition and Freedom offers up-to-date discussions of Axel Honneth’s political thought by ten experts in the field. It also includes an interview with Honneth and an essay by him on education and democracy, previously unpublished in English.
From ancient Greece to Renaissance Italy to the Modern period, the classical ideal, with its elusive goal of perfecting nature, has held a tenacious grip on Western culture. Nowhere has its hold on the artistic imagination been more pervasive than in France between the seventeenth and nineteenth centuries. The art and life of Raphael formed the bedrock of the classical tradition in French art, yet no comprehensive study of Raphael's impact on the art theory, criticism, and practice of classicism exists. This book fills that gap. Transcending limited notions of artistic influence, the book demonstrates that Raphael had as much impact as a symbol as he did as a paradigm of the classical tradition. Focusing on French art and theory from the classical to the Romantic era, _Raphael and France_ is part of the ongoing revision of views of that period which has been taking place for the last twenty years. The book demonstrates that the shifts from classical to Rococo to neoclassical aesthetics were not as abrupt or as all-encompassing as has been assumed. By tracing the continuity and transformation of the classical ideal, with Raphael's art and image as central paradigms, Rosenberg achieves a broader, more accurate, and comprehensive view of French artistic developments during this period. Rosenberg draws on careful readings of primary sources, including the correspondence and lectures of the French Academy, some of which are unpublished; most of the major theoretical treatises by French and foreign authors; and contemporary criticism and works of art. In the process, he strikes a methodological balance between traditional art-historical approaches and insights provided by more contemporary approaches, such as semiotics and post-structuralism. As the notion of isolated genius as the prime force in art has given way to a broader, more contextual view of art and history, interest in past traditions once regarded as outmoded or dead has grown tremendously.
This book makes a timely contribution to this widening area of inquiry.
Jonas Olson writes that "a plausible moral error theory must be an error theory about all irreducible normativity". I agree. But unlike Olson, I think we cannot believe this error theory. I first argue that Olson should say that reasons for belief are irreducibly normative. I then argue that if reasons for belief are irreducibly normative, we cannot believe an error theory about all irreducible normativity. I then explain why I think Olson's objections to this argument fail. I end by showing that Olson cannot defend his view as a partly revisionary alternative to an error theory about all irreducible normativity.
Social and behavioral scientists — that is, students of human nature — nowadays hardly ever use the term ‘human nature’. This reticence reflects both a becoming modesty about the aims of their disciplines and a healthy skepticism about whether there is any one thing really worthy of the label ‘human nature’. For some feature of humankind to be identified as accounting for our ‘nature’, it would have to reflect some property both distinctive of our species and systematically influential enough to explain some very important aspect of our behavior. Compare: molecular structure gives the essence or the nature of water just because it explains most of its salient properties. Few students of the human sciences currently hold that there is just one or a small number of such features that can explain our actions and/or our institutions. And even among those who do, there is reluctance to label their theories as claims about ‘human nature’. Among anthropologists and sociologists, the label seems too universal and indiscriminate to be useful. The idea that there is a single underlying character that might explain similarities threatens the differences among people and cultures that these social scientists seek to uncover. Even economists, who have explicitly attempted to parlay rational choice theory into an account of all human behavior, do not claim that the maximization of transitive preferences is ‘human nature’. I think part of the reason that social scientists are reluctant to use ‘human nature’ is that the term has traditionally labeled a theory with normative implications as well as descriptive ones.
The murder of six million Jewish men, women, and children during World War II was an act of such barbarity as to constitute one of the central events of our time; yet a list of the major concerns of professional philosophers since 1945 would exclude the Holocaust. This collection of twenty-three essays, most of which were written expressly for this volume, is the first book to focus comprehensively on the profound issues and philosophical significance of the Holocaust. The essays, written for general as well as professional readers, convey an extraordinary range of factual information and philosophical reflection in seeking to identify the haunting meanings of the Holocaust. Among the questions addressed are: How should philosophy approach the Holocaust? What part did the philosophical climate play in allowing Hitlerism its temporary triumph? What is the philosophical climate today and what are its probable cultural effects? Can philosophy help our culture to become a bulwark against future agents of evil? The multiple dimensions of the Holocaust (historical, sociological, psychological, religious, moral, and literary) are collected here for concentrated philosophical interpretations. Author note: Alan Rosenberg is a Lecturer in the Philosophy Department at Queens College of the City University of New York. Gerald E. Myers is Professor of Philosophy at Queens College and CUNY Graduate Center.
This book explores the ways in which humor can enhance the learning environment. Drawing upon empirical research and brain-based concepts, Jonas presents a theoretical model of humor, along with practical examples for enhancing learning in schools and classrooms.
Since the 19th century, we have come to think of disease in terms of specific entities—entities defined and legitimated in terms of characteristic somatic mechanisms. Since the last third of that century, we have expanded would-be disease categories to include an ever-broader variety of emotional pain, idiosyncrasy, and culturally unsettling behaviors. Psychiatry has been the residuary legatee of these developments, developments that have always been contested at the ever-shifting boundary between disease and deviance, feeling and symptom, the random and the determined, the stigmatized and the value-free. Even in our era of reductionist hopes, psychopharmaceutical practice, and corporate strategies, the legitimacy of many putative disease categories will remain contested. The use of the specific disease entity model will always be a reductionist means to achieve necessarily holistic ends, both in terms of cultural norms and the needs of suffering individuals. Bureaucratic rigidities and stakeholder conflicts structure and intensify such boundary conflicts, as do the interests and activism of an interested lay public.
Experiences—visual, emotional, or otherwise—play a role in providing us with justification to believe claims about the world. Some accounts of how experiences provide justification emphasize the role of the experiences’ distinctive phenomenology, i.e. ‘what it is like’ to have the experience. Other accounts emphasize the justificatory role of the experiences’ etiology. A number of authors have used cases of cognitively penetrated visual experience to raise an epistemic challenge for theories of perceptual justification that emphasize the justificatory role of phenomenology rather than etiology. Proponents of the challenge argue that cognitively penetrated visual experiences can fail to provide the usual justification because they have improper etiologies. However, extant arguments for the challenge’s key claims are subject to formidable objections. In this paper, I present the challenge’s key claims, raise objections to previous attempts to establish them, and then offer a novel argument in support of the challenge. My argument relies on an analogy between cognitively penetrated visual and emotional experiences. I argue that some emotional experiences fail to provide the relevant justification because of their improper etiologies and conclude that analogous cognitively penetrated visual experiences fail to provide the relevant justification because of their etiologies, as well.
This paper questions the adequacy of the explicit cancellability test for conversational implicature as it is commonly understood. The standard way of understanding this test relies on two assumptions: first, that one can test whether a certain content is conversationally implicated by checking whether that content is cancellable, and second, that a cancellation is successful only if it results in a felicitous utterance. While I accept the first of these assumptions, I reject the second one. I argue that a cancellation can succeed even if it results in an infelicitous utterance, and that unless we take this possibility into account we run the risk of misdiagnosing philosophically significant cases.
Trevor Teitel has recently argued that combining the assumption that modality reduces to essence with the assumption that possibly some objects contingently exist leads to problems if one wishes to uphold that the logic of metaphysical modality is S5. In this paper I will argue that there is a way for the essentialist to evade the problem described by Teitel. The proposed solution crucially involves the assumption that some propositions possibly fail to exist. I will show how this assumption affords a motivated contingentist response to Teitel’s argument.
We argue that a fashionable interpretation of the theory of natural selection as a claim exclusively about populations is mistaken. The interpretation rests on adopting an analysis of fitness as a probabilistic propensity which cannot be substantiated, draws parallels with thermodynamics which are without foundations, and fails to do justice to the fundamental distinction between drift and selection. This distinction requires a notion of fitness as a pairwise comparison between individuals taken two at a time, and so vitiates the interpretation of the theory as one about populations exclusively.
The first decade of event-related potential (ERP) research had established that the most consistent correlates of the onset of visual consciousness are the early visual awareness negativity (VAN), a posterior negative component in the N2 time range, and the late positivity (LP), an anterior positive component in the P3 time range. Two earlier extensive reviews ten years ago had concluded that VAN is the earliest and most reliable correlate of visual phenomenal consciousness, whereas LP probably reflects later processes associated with reflective/access consciousness. This article provides an update to those earlier reviews. ERP and MEG studies that have appeared since 2010 and directly compared ERPs between aware and unaware conditions are reviewed, and important new developments in the field are discussed. The result corroborates VAN as the earliest and most consistent signature of visual phenomenal consciousness, and casts further doubt on LP as an ERP correlate of phenomenal consciousness.
Rosenberg’s general argumentative strategy in favour of panpsychism is an extension of a traditional pattern. Although his argument is complex and intricate, I think a model that is historically significant and fundamentally similar to the position Rosenberg advances might help us understand the case for panpsychism. Thus I want to begin by considering a Leibnizian argument for panpsychism.
Is a government required or permitted to redistribute the gains and losses that differences in biological endowments generate? In particular, does the fact that individuals possess different biological endowments lead to unfair advantages within a market economy? These are questions on which some people are apt to have strong intuitions and ready arguments. Egalitarians may say yes and argue that as unearned, undeserved advantages and disadvantages, biological endowments are never fair, and that the market simply exacerbates these inequities. Libertarians may say no, holding that the possession of such endowments deprives no one of an entitlement and that any system but a market would deprive agents of the rights to their endowments. Biological endowments may well lead to advantages or disadvantages on their view, but not to unfair ones. I do not have strong intuitions about answers to these questions, in part because I believe that they are questions of great difficulty. To begin, alternative answers rest on substantial assumptions in moral philosophy that seem insufficiently grounded. Moreover, the questions involve several problematical assumptions about the nature of biological endowments. Finally, I find the questions to be academic, in the pejorative sense of this term. For aside from a number of highly debilitating endowments, the overall moral significance of differences between people seems so small, so interdependent, and so hard to measure that these differences really will not enter into practical redistributive calculations, even if it is theoretically permissible that they do so. Before turning to a detailed discussion of biological endowments and their moral significance, I sketch my doubts about the fundamental moral theories that dictate either the impermissibility of or the obligation to compensate for different biological endowments.
Reality is hierarchically structured, or so proponents of the metaphysical posit of grounding argue. The less fundamental facts obtain in virtue of, or are grounded in, the more fundamental facts. But what exactly is it for one fact to be more fundamental than another? The aim of this paper is to provide a measure of relative fundamentality. I develop and defend an account of the metaphysical hierarchy that assigns to each fact a set of ordinals representing the levels on which it occurs. The account allows one to compare any two facts with respect to their fundamentality and it uses immediate grounding as its sole primitive. In the first section, I will set the stage and point to some shortcomings of a rival account proposed by Karen Bennett. The second section will present my own proposal and the third section will discuss how it can be extended to non-foundationalist settings. The fourth section discusses potential objections.
In the Museum of Science and Technology in San Jose, California, there is a display dedicated to advances in biotechnology. Most prominent in the display is a double helix of telephone books stacked in two staggered spirals from the floor to the ceiling twenty-five feet above. The books are said to represent the current state of our knowledge of the eukaryotic genome: the primary sequences of DNA polynucleotides for the gene products which have been discovered so far in the twenty years since cloning and sequencing the genome became possible.
Unlike the scattered works, anthologies, and essays that are currently available, Hans Jonas: The Integrity of Thinking provides a much-needed single, coherent overview of the various fields to which Jonas's attention was drawn, bringing ...
Let intentionalism be the view that what proposition is expressed in context by a sentence containing indexicals depends on the speaker’s intentions. It has recently been argued that intentionalism makes communicative success mysterious and that there are counterexamples to the intentionalist view in the form of cases of mismatch between the intended interpretation and the intuitively correct interpretation. In this paper, I argue that these objections can be met, once we acknowledge that we may distinguish what determines the correct interpretation from the evidence that is available to the audience, as well as from the standards by which we judge whether or not a given interpretation is reasonable. With these distinctions in place, we see that intentionalism does not render communicative success mysterious, and that cases of mismatch between the intended interpretation and the intuitively correct one can easily be accommodated. The distinction is also useful in treating the Humpty Dumpty problem for intentionalism, since it turns out that this can be treated as an extreme special case of mismatch.
The debate on the ethical aspects of moral bioenhancement focuses on the desirability of using biomedical as opposed to traditional means to achieve moral betterment. The aim of this paper is to systematically review the ethical reasons presented in the literature for and against moral bioenhancement.
Perception purports to help you gain knowledge of the world even if the world is not the way you expected it to be. Perception also purports to be an independent tribunal against which you can test your beliefs. It is natural to think that in order to serve these and other central functions, perceptual representations must not causally depend on your prior beliefs and expectations. In this paper, I clarify and then argue against the natural thought above. All perceptual systems must solve an under-determination problem: the sensory data they receive could be caused by indefinitely many arrangements of distal objects and properties. Using a Bayesian approach to perceptual processing, I argue that in order to solve the under-determination problem, perceptual capacities must rely on prior beliefs or expectations of some kind. I then argue that perceptual states or processes can help ground knowledge of the world whether the ‘beliefs’ necessary for perceptual processing are encoded as sub-personal states within a perceptual system or cognitive states, such as person-level beliefs. My argument has two main parts. First, I give a preliminary argument that cognitive influence on perception can be appropriate, and I respond to three lines of objection. Second, I argue that cognitively influenced perceptual states can be instances of seeing that p, which makes the relevant states well suited to help ground knowledge that p. I conclude that a cognitively penetrated perceptual state or process can help ground knowledge under some circumstances.
This paper concerns how extant theorists of predictive coding conceptualize and explain possible instances of cognitive penetration. §I offers brief clarification of the predictive coding framework and relevant mechanisms, and a brief characterization of cognitive penetration and some challenges that come with defining it. §II develops more precise ways that the predictive coding framework can explain, and of course thereby allow for, genuine top-down causal effects on perceptual experience, of the kind discussed in the context of cognitive penetration. §III develops these insights further with an eye towards tracking one extant criterion for cognitive penetration, namely, that the relevant cognitive effects on perception must be sufficiently direct. Throughout these discussions, we extend the analyses of the predictive coding models, as we know them. So one open question that surfaces is how much of the extended analyses are genuinely just part of the predictive coding models, or something that must be added to them in order to generate these additional explanatory benefits. In §IV, we analyze and criticize a claim made by some theorists of predictive coding, namely, that (interesting) instances of cognitive penetration tend to occur in perceptual circumstances involving substantial noise or uncertainty. It is here that our analysis is most critical. We argue that, when applied, the claim fails to explain (or perhaps even be consistent with) a large range of important and uncontroversially interesting possible cases of cognitive penetration. We conclude with a general speculation about how the recent work on the predictive mind may influence the current dialectic concerning top-down effects on perception.
According to the communication desideratum (CD), a notion of semantic content must be adequately related to communication. In the recent debate on indexical reference, (CD) has been invoked in arguments against the view that intentions determine the semantic content of indexicals and demonstratives (intentionalism). In this paper, I argue that the interpretations of (CD) that these arguments rely on are questionable, and suggest an alternative interpretation, which is compatible with (strong) intentionalism. Moreover, I suggest an approach that combines elements of intentionalism with other subjectivist approaches, and discuss the role of intuitions in developing and evaluating theories of indexical reference.
It is widely acknowledged that some truths or facts don’t have a minimal full ground [see e.g. Fine]. Every full ground of them contains a smaller full ground. In this paper I’ll propose a minimality constraint on immediate grounding and I’ll show that it doesn’t fall prey to the arguments that tell against an unqualified minimality constraint. Furthermore, the assumption that all cases of grounding can be understood in terms of immediate grounding will be defended. This assumption guarantees that the proposed minimality constraint is significant for all cases of grounding. With its help one can get a clear grip on the relevance of grounding, a feature that will be put to use in the penultimate section.