Settler-colonialism is founded in environmental racism, and environmental justice is foundational to all forms of decolonization. Native American groups located in the Gulf Coast region of the United States are particularly vulnerable to environmental justice issues such as climate change and oil spills due to their geographic location and reliance on the coastal region for economic and social resources. This study used the framework of historical oppression, resilience, and transcendence to explore the historic and contemporary forms of environmental injustice experienced by a Native American tribe in the Gulf Coast region of the United States. This critical ethnography analyzed a series of individual, family, and focus group semi-structured qualitative interviews with a total of 208 participants. Following the critical ethnographic method, data were interpreted through reconstructive analysis using NVivo. Findings of this study reveal the continuing impact of the BP oil spill and difficulty accessing resources following the spill, complicated by the tribe’s lack of federal recognition. Additional themes include the continuing impact of coastal erosion, historical and contemporary land loss, geographic marginalization, and concerns about a loss of tribal identity when tribal members are forced to relocate. Lack of federal tribal recognition has exacerbated all of these issues for this tribe. This study supports national findings that Native American groups experience extensive historic and contemporary environmental injustices and contextualizes these findings for a Native American tribe in the Gulf Coast region of the United States. Recognizing Native American sovereignty is key to addressing the environmental justice issues described.
William Lillie. G. E. Moore: Principia Ethica. Sir David Ross: The Right and the Good. The Foundations of Ethics. C. L. Stevenson: Ethics and Language. R. M. Hare: The Language of Morals. In the following list, which makes no claim to ...
This paper examines Descartes's view of man and the understanding involved in the notion of the mind-body union. The aim is to spell out the implications of Descartes's distinction between different and incomparable primitive notions and the related kinds of knowledge, implications which, due to the misleading but influential Rylean version of Descartes's mind-body dualism, have remained largely unnoticed in the contemporary Anglo-American debate.
This paper identifies a puzzle that emerges when recent work on the suspension of judgement is integrated with evidentialist solutions to the wrong kind of reasons problem: it looks like there is no such thing as a reason to suspend judgement. Two possible responses to this puzzle are considered: one recharacterizes the suspension of judgement as a mental action, and the other recharacterizes it as a second-order attitude. It is argued that these responses sidestep the puzzle only with unacceptable compromise to the view of suspension of judgement.
This collection of 30 essays covers living a moral and ethical life as a lawyer and Christian, following the example of J. Reuben Clark, Jr. The mission and history of the BYU Law School is also addressed.
Can original philosophy be done while simultaneously engaging in the history of philosophy? Such a possibility is questioned by analytic philosophers who contend that history contaminates good philosophy, and by historians of philosophy who insist that theoretical predecessors cannot be ignored. Believing that both camps are misguided, the contributors to this book present a case for historical philosophy as a valuable enterprise. The contributors include: Todd L. Adams, Lilli Alanen, José Benardete, Jonathan Bennett, John I. Biro, Phillip Cummins, Georges Dicker, Daniel A. Dombrowski, Daniel Garber, Josiah Gould, Jorge J. E. Gracia, Daniel Graham, Charles Griswold, James Lawler, Rudolf Luthe, Edward H. Madden, George Mavrodes, Gerald E. Myers, Jonathan Rée, Frithjof Rodi, Kenneth L. Schmitz, Vladimir Shtinov, David G. Stern, Robert Turnbull, James Van Cleve, Frederick Van De Pitte, Henry Beatch, and Richard Watson.
Four essays of interest to the philosopher of science. The collection includes three short essays by L. P. Coonen, D. M. Lilly and C. DeKoninck. In the major essay, "Evolution: Scientific and Philosophical Dimensions," R. J. Nogar first presents a detailed analysis of the current status of the concept of evolution, showing that its meaning varies greatly from discipline to discipline. He argues that in view of the great stability of organic species, the consideration of evolutionary processes exclusively as space-time distributions is inadequate, and needs supplementing by a concept of "nature," to explicate the relation between generator and generated, and the facts of heredity.--K. P. F.
I explore some of the ways that assumptions about the nature of substance shape metaphysical debates about the structure of Reality. Assumptions about the priority of substance play a role in an argument for monism, are embedded in certain pluralist metaphysical treatments of laws of nature, and are central to discussions of substantivalism and relationalism. I will then argue that we should reject such assumptions and collapse the categorical distinction between substance and property.
The ever-increasing application of algorithms to decision-making in a range of social contexts has prompted demands for algorithmic accountability. Accountable decision-makers must provide their decision-subjects with justifications for their automated system’s outputs, but what kinds of broader principles should we expect such justifications to appeal to? Drawing from political philosophy, I present an account of algorithmic accountability in terms of the democratic ideal of ‘public reason’. I argue that situating demands for algorithmic accountability within this justificatory framework enables us to better articulate their purpose and assess the adequacy of efforts toward them.
Platonism is the most pervasive philosophy of mathematics. Indeed, it can be argued that an inarticulate, half-conscious Platonism is nearly universal among mathematicians. The basic idea is that mathematical entities exist outside space and time, outside thought and matter, in an abstract realm. In the more eloquent words of Edward Everett, a distinguished nineteenth-century American scholar, "in pure mathematics we contemplate absolute truths which existed in the divine mind before the morning stars sang together, and which will continue to exist there when the last of their radiant host shall have fallen from heaven." In What is Mathematics, Really?, renowned mathematician Reuben Hersh takes these eloquent words and this pervasive philosophy to task, in a subversive attack on traditional philosophies of mathematics, most notably, Platonism and formalism. Virtually all philosophers of mathematics treat it as isolated, timeless, ahistorical, inhuman. Hersh argues the contrary, that mathematics must be understood as a human activity, a social phenomenon, part of human culture, historically evolved, and intelligible only in a social context. Mathematical objects are created by humans, not arbitrarily, but from activity with existing mathematical objects, and from the needs of science and daily life. Hersh pulls the screen back to reveal mathematics as seen by professionals, debunking many mathematical myths, and demonstrating how the "humanist" idea of the nature of mathematics more closely resembles how mathematicians actually work. At the heart of the book is a fascinating historical account of the mainstream of philosophy--ranging from Pythagoras, Plato, Descartes, Spinoza, and Kant, to Bertrand Russell, David Hilbert, Rudolf Carnap, and Willard V.O. Quine--followed by the mavericks who saw mathematics as a human artifact, including Aristotle, Locke, Hume, Mill, Peirce, Dewey, and Lakatos.
In his epilogue, Hersh reveals that this is no mere armchair debate, of little consequence to the outside world. He contends that Platonism and elitism fit well together, that Platonism in fact is used to justify the claim that "some people just can't learn math." The humanist philosophy, on the other hand, links mathematics with people, with society, and with history. It fits with liberal anti-elitism and its historical striving for universal literacy, universal higher education, and universal access to knowledge and culture. Thus Hersh's argument has educational and political ramifications. Written by the co-author of The Mathematical Experience, which won the American Book Award in 1983, this volume reflects an insider's view of mathematical life, based on twenty years of doing research on advanced mathematical problems, thirty-five years of teaching graduates and undergraduates, and many long hours of listening, talking to, and reading philosophers. A clearly written and highly iconoclastic book, it is sure to be hotly debated by anyone with a passionate interest in mathematics or the philosophy of science.
Decisions based on algorithmic, machine learning models can be unfair, reproducing biases in historical data used to train them. While computational techniques are emerging to address aspects of these concerns through communities such as discrimination-aware data mining and fairness, accountability and transparency in machine learning, their practical implementation faces real-world challenges. For legal, institutional or commercial reasons, organisations might not hold the data on sensitive attributes such as gender, ethnicity, sexuality or disability needed to diagnose and mitigate emergent indirect discrimination-by-proxy, such as redlining. Such organisations might also lack the knowledge and capacity to identify and manage fairness issues that are emergent properties of complex sociotechnical systems. This paper presents and discusses three potential approaches to deal with such knowledge and information deficits in the context of fairer machine learning. Trusted third parties could selectively store data necessary for performing discrimination discovery and incorporating fairness constraints into model-building in a privacy-preserving manner. Collaborative online platforms would allow diverse organisations to record, share and access contextual and experiential knowledge to promote fairness in machine learning systems. Finally, unsupervised learning and pedagogically interpretable algorithms might allow fairness hypotheses to be built for further selective testing and exploration. Real-world fairness challenges in machine learning are not abstract, constrained optimisation problems, but are institutionally and contextually grounded. Computational fairness tools are useful, but must be researched and developed in and with the messy contexts that will shape their deployment, rather than just for imagined situations. Not doing so risks real, near-term algorithmic harm.
Kim’s causal exclusion argument purports to demonstrate that the non-reductive physicalist must treat mental properties (and macro-level properties in general) as causally inert. A number of authors have attempted to resist Kim’s conclusion by utilizing the conceptual resources of Woodward’s (2005) interventionist conception of causation. The viability of these responses has been challenged by Gebharter (2017a), who argues that the causal exclusion argument is vindicated by the theory of causal Bayesian networks (CBNs). Since the interventionist conception of causation relies crucially on CBNs for its foundations, Gebharter’s argument appears to cast significant doubt on interventionism’s antireductionist credentials. In the present article, we both (1) demonstrate that Gebharter’s CBN-theoretic formulation of the exclusion argument relies on some unmotivated and philosophically significant assumptions (especially regarding the relationship between CBNs and the metaphysics of causal relevance), and (2) use Bayesian networks to develop a general theory of causal inference for multi-level systems that can serve as the foundation for an antireductionist interventionist account of causation.
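The dispute above turns on how causal Bayesian networks model interventions. As a minimal illustration of that machinery only (a sketch, not Gebharter's formulation or the authors' multi-level theory), the following contrasts ordinary conditioning with a Pearl-style intervention in a hypothetical three-variable network with a confounder; every probability value is invented for illustration.

```python
# Hypothetical causal Bayesian network with a confounder:
#   Z -> X, Z -> Y, and X -> Y.
# Conditioning on X = 1 mixes X's causal influence on Y with the
# back-door association through Z; the do-intervention cuts Z -> X
# (truncated factorization) and isolates the causal contribution.
from itertools import product

P_Z1 = 0.5                                   # P(Z = 1), invented
P_X1_given_Z = {0: 0.2, 1: 0.8}              # P(X = 1 | Z), invented
P_Y1_given_ZX = {(0, 0): 0.1, (0, 1): 0.4,   # P(Y = 1 | Z, X), invented
                 (1, 0): 0.5, (1, 1): 0.9}

def joint(z, x, y):
    pz = P_Z1 if z else 1 - P_Z1
    px = P_X1_given_Z[z] if x else 1 - P_X1_given_Z[z]
    py = P_Y1_given_ZX[(z, x)] if y else 1 - P_Y1_given_ZX[(z, x)]
    return pz * px * py

def p_y1_given_x1():
    # Ordinary conditioning: P(Y = 1 | X = 1).
    num = sum(joint(z, 1, 1) for z in (0, 1))
    den = sum(joint(z, 1, y) for z, y in product((0, 1), repeat=2))
    return num / den

def p_y1_do_x1():
    # Intervention: P(Y = 1 | do(X = 1)); drop X's own CPT and keep
    # the rest of the factorization.
    return sum((P_Z1 if z else 1 - P_Z1) * P_Y1_given_ZX[(z, 1)]
               for z in (0, 1))
```

On these invented numbers, conditioning yields P(Y = 1 | X = 1) = 0.8 while intervening yields P(Y = 1 | do(X = 1)) = 0.65: the observational and interventional quantities come apart exactly because of the confounding path through Z.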
Jim Joyce has argued that David Lewis’s formulation of causal decision theory is inadequate because it fails to apply to the “small world” decisions that people face in real life. Meanwhile, several authors have argued that causal decision theory should be developed such that it integrates the interventionist approach to causal modeling because of the expressive power afforded by the language of causal models, but, as of now, there has been little work towards this end. In this paper, I propose a variant of Lewis’s causal decision theory that is intended to meet both of these demands. Specifically, I argue that Lewis’s causal decision theory can be rendered applicable to small world decisions if one analyzes his dependency hypotheses as causal hypotheses that depend on the interventionist causal modeling framework for their semantics. I then argue that this interventionist variant of Lewis’s causal decision theory is preferable to interventionist causal decision theories that purportedly generalize Lewis’s through the use of conditional probabilities. This is because Lewisian interventionist decision theory captures the causal decision theorist’s conviction that any correlation between what the agent does and what she cannot cause should be irrelevant to the agent’s choice, while purported generalizations do not.
Descartes's conception of matter recast the account of physical nature in terms of extension and related quantitative terms. Plants and animals were turned into species of machines, whose natural functions can be explained mechanistically. This article reflects on the consequences of this transformation for the psychology of the human soul. Insofar as the soul is rational it lacks extension, yet it is also united with the body and affected by it, and so it is able to act on extended matter. The article examines Descartes's concept of scientia and his different uses of nature, and argues that there is much more continuity between Aristotelian and Cartesian psychology than is usually recognized when it comes to an explanation of the functions of the embodied human soul. If this makes psychology unfit for inclusion in the new science of nature, its object is still a natural phenomenon and has an important place within scientia as Descartes conceived of it.
Although excess blood collection has characterized U.S. national disasters, most dramatically in the case of September 11, periodic shortages of blood have recurred for decades. In response, I propose a new model of medical philanthropy, one that specifically uses charitable contributions to health care as blood donation incentives. I explain how the surge in blood donations following 9/11 was both transient and disaster-specific, failing to foster a greater continuing commitment to donate blood. This underscores the importance of considering blood donation incentives. I defend charitable incentives as an alternative to financial incentives, which I contend would further extend neoliberal market values into health care. I explain my model's potential appeal to private foundations or public–private partnerships as a means for expanding both the pool of blood donors and the prosocial benefit of each act of blood donation. Finally, I link my analysis to the empirical literature on blood donation incentives.
At the bottom of all human activities are “values,” the conviction that some things “ought to be” and others not. Science, however, with its immense interest in mere facts seems to lack all understanding of such‘requiredness.’… A science … which would seriously admit nothing but indifferent facts … could not fail to destroy itself.
G.A. Cohen’s value conservatism entails that we ought to preserve some existing sources of value in lieu of more valuable replacements, thereby repudiating maximizing consequentialism. Cohen motivates value conservatism through illustrative cases. The consequentialist, however, can explain many Cohen-style cases by taking extrinsic properties, such as historical significance, to be sources of final value. Nevertheless, it may be intuitive that there’s stronger reason to preserve than to promote certain sources of value, especially historically significant things. This motivates an argument that the weights of our reasons to preserve such things are especially strong relative to the amounts of value they bear. The value conservative can then explain these intuitions in non-consequentialist terms. There may be reason to preserve historically significant things as a matter of recognition respect for a cultural and historical heritage, or because it is virtuous to cultivate the right kind of connection with such a heritage.
This paper defends an ontology of weak entity realism for homeostatic property cluster (HPC) theories of natural kinds, adapted from Bird’s (Synthese 195(4):1397–1426, 2018) taxonomy of such theories. Weak entity realism about HPC kinds accepts the existence of natural kinds. Weak entity realism denies two theses: that (1) HPC kinds have mind-independent essences, and that (2) HPC kinds reduce to entities, such as complex universals, posited only by metaphysical theories. Strong entity realism accepts (1) and (2), whereas moderate entity realism accepts only (1). Given its commitment to (2), strong entity realism is more theoretically complex than weak entity realism, with little explanatory payoff. Given their commitment to (1), moderate and strong entity realisms cannot explain how the identity conditions of HPC kinds are to be straightforwardly knowable. I argue that weak entity realism avoids such epistemic difficulties. I further rebut two plausible criticisms of weak entity realism, namely that weak entity realism cannot account for quantification over kinds, and that weak entity realism cannot provide identity conditions for HPC kinds which are both scientifically useful and objective. Given the theoretical costs of strong and moderate entity realism, and weak entity realism’s adequate response to its most plausible challenges, weak entity realism about HPCs is to be preferred, especially for biological and chemical kinds.
Meek and Glymour use the graphical approach to causal modeling to argue that one and the same norm of rational choice can be used to deliver both causal-decision-theoretic verdicts and evidential-decision-theoretic verdicts. Specifically, they argue that if an agent maximizes conditional expected utility, then the agent will follow the causal decision theorist’s advice when she represents herself as intervening, and will follow the evidential decision theorist’s advice when she represents herself as not intervening. Since Meek and Glymour take no stand on whether agents should represent themselves as intervening, they provide more general advice than standard causal decision theorists and evidential decision theorists. But I argue here that even Meek and Glymour’s advice is not sufficiently general. This is because their advice is not sensitive to the distinct ways in which agents can fail to intervene, and there are decision-making contexts in which agents can reasonably have non-extreme confidence that they are intervening. I then show that the most natural extension of Meek and Glymour’s framework fails, but offer a generalization of my “Interventionist Decision Theory” that does not suffer from the same problems.
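The Meek–Glymour claim that one norm (maximize conditional expected utility) yields CDT verdicts under an intervention representation and EDT verdicts under a non-intervention representation can be made concrete with a toy Newcomb calculation. All numbers below (predictor accuracy, payoffs) are invented for illustration; this is a sketch of the standard textbook case, not of the paper's framework.

```python
# Toy Newcomb problem (all numbers invented): a predictor with accuracy
# 0.99 put $1,000,000 in the opaque box iff it predicted one-boxing;
# the transparent box always holds $1,000.
ACC = 0.99
M, K = 1_000_000, 1_000

def conditional_eu(act):
    # Act represented as ordinary evidence (no intervention): choosing
    # shifts the probability that the opaque box is full.
    p_full = ACC if act == "one-box" else 1 - ACC
    return p_full * M + (0 if act == "one-box" else K)

def interventional_eu(act, p_full):
    # Act represented as an intervention: the box's contents are causally
    # upstream of the act and stay fixed at the agent's prior p_full.
    return p_full * M + (0 if act == "one-box" else K)

# Represented as conditioning, the norm favours one-boxing (EDT verdict):
assert conditional_eu("one-box") > conditional_eu("two-box")
# Represented as intervening, it favours two-boxing whatever the prior
# probability that the box is full (CDT verdict):
for p in (0.0, 0.5, 0.99):
    assert interventional_eu("two-box", p) > interventional_eu("one-box", p)
```

The same maximization rule is applied in both functions; only the representation of the act changes, which is exactly the point of the Meek–Glymour reconciliation, and it is the intermediate cases (non-extreme confidence that one is intervening) that the abstract argues this dichotomy cannot handle.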
It is explained that, in the sense of the sociologist Erving Goffman, mathematics has a front and a back. Four pervasive myths about mathematics are stated. Acceptance of these myths is related to whether one is located in the front or the back.
McGee argues that it is sometimes reasonable to accept both x and x → (y → z) without accepting y → z, and that modus ponens is therefore invalid for natural language indicative conditionals. Here, we examine McGee's counterexamples from a Bayesian perspective. We argue that the counterexamples are genuine insofar as the joint acceptance of x and x → (y → z) at time t does not generally imply constraints on the acceptability of y → z at t, but we use the distance-based approach to Bayesian learning to show that applications of modus ponens are nevertheless guaranteed to be successful in an important sense. Roughly, if an agent becomes convinced of the premises of a modus ponens argument, then she should likewise become convinced of the argument's conclusion. Thus we take McGee's counterexamples to disentangle and reveal two distinct ways in which arguments can convince. Any general theory of argumentation must take stock of both.
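The "guaranteed success" idea has a simple probabilistic analogue for the basic form of modus ponens (premises x and x → y, conclusion y), if one reads the conditional premise as the conditional probability P(y | x). This is an illustrative simplification, not the distance-based machinery the abstract refers to: for every joint distribution, P(y) ≥ P(x ∧ y) = P(x)·P(y | x), so high confidence in both premises forces correspondingly high confidence in the conclusion.

```python
# Verify the bound P(y) >= P(x) * P(y | x) over random joint
# distributions on two binary variables.  In particular, if
# P(x) >= 1 - e and P(y | x) >= 1 - e, then P(y) >= (1 - e)^2.
from itertools import product
import random

def respects_bound(p_joint):
    # p_joint maps (x, y) in {0, 1}^2 to a probability.
    p_x = p_joint[(1, 0)] + p_joint[(1, 1)]
    if p_x == 0:
        return True  # conditional premise undefined; nothing to check
    p_y = p_joint[(0, 1)] + p_joint[(1, 1)]
    p_y_given_x = p_joint[(1, 1)] / p_x
    return p_y + 1e-12 >= p_x * p_y_given_x

random.seed(0)
for _ in range(1000):
    weights = [random.random() for _ in range(4)]
    total = sum(weights)
    dist = {cell: w / total
            for cell, w in zip(product((0, 1), repeat=2), weights)}
    assert respects_bound(dist)  # holds for every sampled distribution
```

The bound is trivial (P(y) ≥ P(x ∧ y)), which is the sense in which becoming convinced of both premises cannot leave one unconvinced of the conclusion; McGee's nested-conditional cases evade it precisely because the problematic premise is x → (y → z) rather than a simple conditional.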
Alan Millar's paper (2011) involves two parts, which I address in order, first taking up the issues concerning the goal of inquiry, and then the issues surrounding the appeal to reflective knowledge. I argue that the upshot of the considerations Millar raises counts in favour of a more important role in value-driven epistemology for the notion of understanding and for the notion of epistemic justification, rather than for the notions of knowledge and reflective knowledge.
In this paper, I use interventionist causal models to identify some novel Newcomb problems, and subsequently use these problems to refine existing interventionist treatments of causal decision theory. The new Newcomb problems that make trouble for existing interventionist treatments involve so-called ‘exotic choice’—that is, decision-making contexts where the agent has evidence about the outcome of her choice. I argue that when choice is exotic, the interventionist can adequately capture causal decision-theoretic reasoning by introducing a new interventionist approach to updating on exotic evidence. But I also argue that this new updating procedure is principled only if the interventionist trades in the typical interventionist conception of choice for an alternative Ramseyan conception. I end by arguing that the guide to exotic choice developed here may, despite its name, be useful in some everyday contexts.
Policy often focuses on reducing health care disparities through interventions at the patient and provider level. While unquestionably important, system-wide reforms to reduce uninsurance, improve geographic availability of services, increase workforce diversity, and promote clinical best practices are essential for progress in reducing disparities.
There are two main strands in the afterlife of Descartes’s famous redefinition of mind in terms of thinking likely to color one’s reading of his notion of mind or self. The one stressed most by his posterity and developed from early on in the empiricist tradition sees consciousness as its main characteristic. The other focuses on reason and rationality. This paper discusses the textual support for the first reading, promoted by Ryle and his followers, and aligns itself with the second, arguing that it is the exercise of its rational, cognitive capacities that is essential to the Cartesian mind and not consciousness, which is merely a presupposition for its rational activity. It examines the interrelation and respective roles of awareness on the one hand, and reason on the other in Descartes’s account of mind or self. But it also suggests that the role given by Descartes to the will in judgment and his separation of will and intellect into two distinct powers may be seen as contributing to a transformation of the very notion of reason and of self as a cognitive and moral agent.
Though common sense says that causes must temporally precede their effects, the hugely influential interventionist account of causation makes no reference to temporal precedence. Does common sense lead us astray? In this paper, I evaluate the power of the commonsense assumption from within the interventionist approach to causal modeling. I first argue that if causes temporally precede their effects, then one need not consider the outcomes of interventions in order to infer causal relevance, and that one can instead use temporal and probabilistic information to infer exactly when X is causally relevant to Y in each of the senses captured by Woodward’s interventionist treatment. Then, I consider the upshot of these findings for causal decision theory, and argue that the commonsense assumption is especially powerful when an agent seeks to determine whether so-called “dominance reasoning” is applicable.