In the present paper I wish to argue that psychological egoism may well have a basis in the empirical facts of human psychology. Certain contemporary learning theorists, e.g., Hull and Skinner, have put forward behavioristic theories of the origin and functioning of human motives which posit a certain number of basically "selfish," unlearned primary drives or motives (like hunger, thirst, sleep, elimination, and sex), explain all other, higher-order, drives or motives as derived genetically from the primary ones via certain "laws of reinforcement," and, further, deny the "functional autonomy" of those higher-order drives or motives. Now it is a hotly debated issue in contemporary Learning Theory whether any theory such as we have described briefly above could adequately explain adult human behavior. I shall, however, argue only that a theory of the above kind may well be true, and that from such a theory, fortified only by one additional psychological premise, the truth of egoism (non-altruism) logically follows. I hope to show, thereby, that the question of psychological egoism is still an open empirical issue, however fallacious be the philosophical arguments for it.
Against its prominent compatibilist and libertarian opponents, I defend Galen Strawson’s Basic Argument for the impossibility of moral responsibility. Against John Martin Fischer, I argue that the Basic Argument does not rely on the premise that an agent can be responsible for an action only if he is responsible for every factor contributing to that action. Against Alfred Mele and Randolph Clarke, I argue that it is absurd to believe that an agent can be responsible for an action when no factor contributing to that action is up to that agent. Against Derk Pereboom and Clarke, I argue that the versions of agent-causal libertarianism they claim can immunize the agent against the Basic Argument actually fail to do so. Against Robert Kane, I argue that the Basic Argument does not rely on the premise that the mere presence of indeterministic factors in the process of bringing an action about is itself what rules out the agent’s chance of being responsible for that action.
In this paper I attempt to show, against certain versions of trope theory, that properties with analyzable particularity cannot be merely exactly similar: such properties are either particularized properties (tropes) that are dissimilar to every other trope, or else universalized properties (universals). I argue that each of the most viable standard and nonstandard particularizers that can be employed to secure the numerical difference between exactly similar properties can only succeed in grounding the particularity of properties, that is, in having properties be tropes, at the expense of ruling out the possibility of their exact similarity. Here are the four nonstandard particularizers that I examine: the genealogy of a property, the history of a property, the causal effects of a property, and the duration of a property. And here are the two standard particularizers that I examine: the bearer of a property, by which I mean either a bare particular or a spatiotemporal location, and the property itself, by which I mean that the property is self-particularized. In my concluding remarks, I explain that the only remaining hope for preserving the possibility of exactly similar tropes is regarding properties as primitively particular, and that this must mean not that properties are self-particularized but that they are particularized due to nothing. I close by arguing that this may not help trope theory after all.
Integrating cosmological and ontological lines of reasoning, I argue that there is a self-necessary being that (a) serves as the sufficient condition for everything, that (b) has the most perfect collection of whatever attributes of perfection there might be, and that (c) is an independent, eternal, unique, simple, indivisible, immutable, all-actual, all-free, all-present, all-powerful, all-knowing, all-good, personal creator of every expression of itself that everything is. My cosmo-ontological case for such a being, an everything-maker with the core features ascribed to the God of classical theism, addresses the standard worries plaguing these lines of reasoning: (1) the richness required of such a being dissolves it into many beings; (2) the metaphysical possibility of such a being is assumed on insufficient grounds; (3) the features we ascribe to such a being are mere human-all-too-human projections.
This study reflects on war and its nature within the context of Georg Wilhelm Friedrich Hegel's philosophical system, establishing as its ontological foundation the category of being-for-itself, or self-affirmation: that need of every self-consciousness that aspires to be free, which is attained through the dialectical process of recognition. This conception is contrasted with the classical notion of just war advanced by Thomas Aquinas, examining the place that the negativity of evil occupies in both metaphysical systems, in order to determine whether it is possible to understand the ethical character of both kinds of war in the same sense.
This paper engages the controversy as to whether there is a link between Berkeley’s refutation of abstraction and his refutation of materialism. I argue that there is a strong link. In the opening paragraph I show that the truth of materialism both requires and is required by the possibility of abstraction, and that the obviousness of this fact suggests that the real controversy is whether there is a link between Berkeley’s refutation of materialism and his refutation of the possibility of framing abstract incomplete ideas and abstract general ideas. Although Berkeley can still defeat materialism without relying on his arguments that directly refute the possibility of framing abstract incomplete ideas and abstract general ideas, I contend that there is still a strong link between his refutation of materialism and his refutation of the possibility of framing these ideas. First, I show that the truth of the canonic version of materialism, according to which primary qualities are mind-independent and inhere in material substances, requires the possibility of the mind framing both of these ideas. Second, I show that there is a sense in which the truth of materialism is required by the possibility of either of these ideas.
My general aim is to clarify the foundational difference between Stephen Jay Gould and Richard Dawkins concerning what biological entities are the units of selection in the process of evolution by natural selection. First, I recapitulate Gould’s central objection to Dawkins’s view that genes are the exclusive units of selection. According to Gould, it is absurd for Dawkins to think that genes are the exclusive units of selection when, after all, genes are not the exclusive interactors: those agents directly engaged with, directly impacted by, environmental pressures. Second, I argue that Gould’s objection still goes through even when we take into consideration Sterelny and Kitcher’s defense of gene selectionism in their admirable paper “The Return of the Gene.” Third, I propose a strategy for defending Dawkins that I believe obviates Gould’s objection. Drawing upon Elisabeth Lloyd’s careful taxonomy of the various understandings of the unit of selection at play in the philosophy of biology literature, my proposal involves realizing that Dawkins endorses a different understanding of the unit of selection than Gould holds him to, an understanding that does not require genes to be the exclusive interactors.
Philosophers frequently treat certainty as some sort of absolute, while ordinary men typically do not. According to the Theory of Important Criteria, on which the present paper is based, this difference is not to be explained in terms of ambiguity or vagueness in the word 'certain', but rather in terms of disagreement between ordinary men and philosophers as to the importance of one of the criteria of the ordinary sense of 'certain'. I argue that there is reason to think that certainty is some sort of absolute, and thus that no empirical statement is certain. And in any case, the problem of empirical certainty is not a pseudo-problem, as metaphilosophers like Wittgenstein and Wisdom have thought.
This paper is intended primarily as a reference tool for participants in the debate between realism and nominalism concerning universals. It provides an exhaustive catalogue of the basic analyses of an entity being charactered that nominalists can employ in both a constituent and nonconstituent ontology.
EXCERPT.--With the exception of early essays by George von Glahn and Mark Sanders, serious critical scholarship on the writings of Ted Kooser began after the 1980 release of the now-classic Sure Signs, Kooser’s fifth major collection of poems. Looking back over the thirty-plus years since then, only about a dozen or so significant studies, none of them book-length, currently boulder out against the relative flatscape of secondary materials constituted mostly by quick and dirty reviews. Aside from the essays by Wes Mantooth, Allan Benn, and Mary K. Stillwell in this special issue of Midwestern Miscellany, the following works particularly stand out and, in my view, must be consulted by the Kooser scholar: David Baker’s “Ted’s Box”; William Barillas’s Chapter 7 of The Midwestern Pastoral; Victor Contoski’s “Words and Raincoats”; Dana Gioia’s “The Anonymity of the Regional Poet”; Jeff Gundy’s “Among the Erratics”; Jonathan Holden’s “The Chekov of American Poetry”; Denise Low’s “Sight in Motion”; David Mason’s “Introducing Ted Kooser”; and both Mary K. Stillwell’s “The ‘In Between’” and her “When a Walk is a Poem.”
My aim is to figure out whether Aristotle’s response to the argument for fatalism in De Interpretatione 9 is successful. By “response” here I mean not simply the reasons he offers to highlight why fatalism does not accord with how we conduct our lives, but also the solution he devises to block the argument he provides for it. Achieving my aim hence demands that I figure out what exactly is the argument for fatalism he voices, what exactly is his solution, whether his solution is coherent, and whether it does indeed succeed. I find that the argument is essentially bivalence plus that the truth of a proposition stating that an event will happen in the future entails that this event will necessarily happen, that Aristotle’s solution is to restrict bivalence when it comes to propositions about contingent future events, that this solution is coherent, and that while it does not rule out the possibility of fatalism, it does succeed in blocking the argument for fatalism offered within chapter 9.
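As a gloss on the abstract above, the fatalist argument it describes can be set out schematically; the modal notation is an illustrative reconstruction of mine, not the author's or Aristotle's:

\[
\begin{aligned}
&1.\ Fp \lor \lnot Fp &&\text{(bivalence applied to the future-tense proposition } Fp\text{)}\\
&2.\ Fp \rightarrow \Box Fp &&\text{(the truth that the event will happen entails that it will happen necessarily)}\\
&3.\ \lnot Fp \rightarrow \Box \lnot Fp &&\text{(likewise for its falsity)}\\
&4.\ \therefore\ \Box Fp \lor \Box \lnot Fp &&\text{(fatalism: either way, the outcome is necessary)}
\end{aligned}
\]

On the reading summarized above, Aristotle's restriction of bivalence for future contingents amounts to denying premise 1 for propositions like Fp.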
Several articles have recently appeared arguing that there really are no viable alternatives to mechanistic explanation in the biological sciences (Kaplan and Bechtel; Kaplan and Craver). We argue that mechanistic explanation is defined by localization and decomposition. We argue further that systems neuroscience contains explanations that violate both localization and decomposition. We conclude that the mechanistic model of explanation needs either to stretch to include explanations in which localization or decomposition fail or to acknowledge that there are counterexamples to mechanistic explanation in the biological sciences.
The complex systems approach to cognitive science invites a new understanding of extended cognitive systems. According to this understanding, extended cognitive systems are heterogeneous, composed of brain, body, and niche, non-linearly coupled to one another. This view of cognitive systems, as non-linearly coupled brain–body–niche systems, promises conceptual and methodological advances. In this article we focus on two of these. First, the fundamental interdependence among brain, body, and niche makes it possible to explain extended cognition without invoking representations or computation. Second, cognition and conscious experience can be understood as a single phenomenon, eliminating fruitless philosophical discussion of qualia and the so-called hard problem of consciousness. What we call “extended phenomenological-cognitive systems” are relational and dynamical entities, with interactions among heterogeneous parts at multiple spatial and temporal scales.
"These essays make a splendid book. Ignatieff's lectures are engaging and vigorous; they also combine some rather striking ideas with savvy perceptions about actual domestic and international politics.
To accept that cognition is embodied is to question many of the beliefs traditionally held by cognitive scientists. One key question regards the localization of cognitive faculties. Here we argue that for cognition to be embodied and sometimes embedded means that the cognitive faculty cannot be localized in a brain area alone. We review recent research on neural reuse, the 1/f structure of human activity, tool use, group cognition, and social coordination dynamics that we believe demonstrates how the boundaries between different areas of the brain, between brain and body, and between body and environment are not only blurred but indeterminate. In turn, we propose that cognition is supported by a nested structure of task-specific synergies, which are softly assembled from a variety of neural, bodily, and environmental components (including other individuals), and exhibit interaction-dominant dynamics.
What makes us conscious? Many theories that attempt to answer this question have appeared recently in the context of widespread interest about consciousness in the cognitive neurosciences. Most of these proposals are formulated in terms of the information processing conducted by the brain. In this overview, we survey and contrast these models. We first delineate several notions of consciousness, addressing what it is that the various models are attempting to explain. Next, we describe a conceptual landscape that addresses how the theories attempt to explain consciousness. We then situate each of several representative models in this landscape and indicate which aspect of consciousness they try to explain. We conclude that the search for the neural correlates of consciousness should be usefully complemented by a search for the computational correlates of consciousness.
Using hypersets as an analytic tool, we compare traditionally Gibsonian (Chemero 2003; Turvey 1992) and representationalist (Sahin et al. this issue) understandings of the notion ‘affordance’. We show that representationalist understandings are incompatible with direct perception and erect barriers between animal and environment. They are, therefore, scarcely recognizable as understandings of ‘affordance’. In contrast, Gibsonian understandings are shown to treat animal-environment systems as unified complex systems and to be compatible with direct perception. We discuss the fruitful connections between Gibsonian affordances and dynamical systems explanation in the behavioral sciences and point to prior successful applications of Gibsonian affordances in robotics. We conclude that it is unnecessary to re-imagine affordances as representations in order to make them useful for researchers in robotics.
Prominent evolutionary psychologists have argued that our innate psychological endowment consists of numerous domain-specific cognitive resources, rather than a few domain-general ones. In the light of some conceptual clarification, we examine the central in-principle arguments that evolutionary psychologists mount against domain-general cognition. We conclude (a) that the fundamental logic of Darwinism, as advanced within evolutionary psychology, does not entail that the innate mind consists exclusively, or even massively, of domain-specific features, and (b) that a mixed innate cognitive economy of domain-specific and domain-general resources remains a genuine conceptual possibility. However, an examination of evolutionary psychology's 'grain problem' reveals that there is no way of establishing a principled and robust distinction between domain-specific and domain-general features. Nevertheless, we show that evolutionary psychologists can and do live with this grain problem without their whole enterprise being undermined.
The rapid development in healthcare technologies in recent years has resulted in the need for health services, whether publicly funded or insurance based, to identify means to maximise the benefits and provide equitable distribution of limited resources. This has resulted in the need for rationing decisions, and there has been considerable debate regarding the substantive and procedural ethical principles that promote distributive justice when making such decisions. In this paper, I argue that while the scientifically rigorous approaches of evidence-based healthcare are claimed as aspects of procedural justice that legitimise such guidance, there are biases and distortions in all aspects of the process that may lead to epistemic injustices. Regardless of adherence to principles of distributive justice in the decision-making process, evidential failings may undermine the fairness and legitimacy of such decisions. In particular, I identify epistemic exclusion that denies certain patient and professional groups the opportunity to contribute to the epistemic endeavour. This occurs at all stages of the process, from the generation, analysis and reporting of the underlying evidence, through the interpretation of such evidence, to the decision-making that determines access to healthcare resources. I further argue that this is compounded by processes which confer unwarranted epistemic privilege on experts in relation to explicit or implicit value judgements, which are not within their remit. I suggest a number of areas in which changes to the processes for developing, regulating, reporting and evaluating evidence may improve the legitimacy of such processes.
A new edition of the highly acclaimed book Multiculturalism and "The Politics of Recognition," this paperback brings together an even wider range of leading philosophers and social scientists to probe the political controversy surrounding ...
Cognitive science has always included multiple methodologies and theoretical commitments. The philosophy of cognitive science should embrace, or at least acknowledge, this diversity. Bechtel’s (2009a) proposed philosophy of cognitive science, however, applies only to representationalist and mechanist cognitive science, ignoring the substantial minority of dynamically oriented cognitive scientists. As an example of nonrepresentational, dynamical cognitive science, we describe strong anticipation as a model for circadian systems (Stepp & Turvey, 2009). We then propose a philosophy of science appropriate to nonrepresentational, dynamical cognitive science.
Many healthcare agencies are producing evidence-based guidance and policy that may determine the availability of particular healthcare products and procedures, effectively rationing aspects of healthcare. They claim legitimacy for their decisions through reference to evidence-based scientific method and the implementation of just decision-making procedures, often citing the criteria of ‘accountability for reasonableness’: publicity, relevance, challenge and revision, and regulation. Central to most decision methods are estimates of gains in quality-adjusted life-years (QALYs), a measure that combines the length and quality of survival. However, all agree that the QALY alone is not a sufficient measure of all relevant aspects of potential healthcare benefits, and a number of value assessment frameworks have been suggested. I argue that the practical implementation of these procedures has the potential to lead to a distorted assessment of value. Undue weight may be ascribed to certain attributes, particularly those that favour commercial or political interests, while other attributes that are highly valued by society, particularly those related to care processes, may be omitted or undervalued. This may be compounded by a lack of transparency to relevant stakeholders, resulting in an inability for them to participate in, or challenge, the decisions. This makes it likely that costly new technologies, for which inflated prices can be justified by the current value frameworks, are displacing aspects of healthcare that are highly valued by society.
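For concreteness about the measure mentioned above: a quality-adjusted life-year weights survival time by a quality-of-life index between 0 (death) and 1 (full health). The formula and figures below are a standard textbook illustration, not data from the paper:

\[
\text{QALYs} = \sum_i t_i \, q_i, \qquad \text{e.g. } 4 \text{ years at } q = 0.75 \text{ yields } 4 \times 0.75 = 3 \text{ QALYs}.
\]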
This paper has two main purposes. First, it will provide an introductory discussion of hyperset theory, and show that it is useful for modeling complex systems. Second, it will use hyperset theory to analyze Robert Rosen’s metabolism-repair systems and his claim that living things are closed to efficient cause. It will also briefly compare closure to efficient cause with two other understandings of autonomy, operational closure and catalytic closure.
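As a minimal illustration of the circular structure hyperset theory accommodates (an example of mine, assuming Aczel-style anti-foundation, rather than anything specific to the paper):

\[
\Omega = \{\Omega\}
\]

A set whose only member is itself is forbidden by the foundation axiom of standard set theory but is well-defined for hypersets; it is this tolerance of circular membership that suits hypersets to modeling self-referential organization such as Rosen's closure to efficient cause.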
In the UK, current policies and services for people with mental disorders, including those with intellectual disabilities (ID), presume that these men and women can, do, and should make decisions for themselves. The new Mental Capacity Act (England and Wales) 2005 (MCA) sets this presumption into statute, and codifies how decisions relating to health and welfare should be made for those adults judged unable to make one or more such decisions autonomously. The MCA uses a procedural checklist to guide this process of substitute decision-making. The personal experiences of providing direct support to seven men and women with ID living in residential care, however, showed that substitute decision-making took two forms, depending on the type of decision to be made. The first process, ‘strategic substitute decision-making’, paralleled the MCA’s legal and ethical framework, whilst the second process, ‘relational substitute decision-making’, was markedly different from these statutory procedures. In this setting, ‘relational substitute decision-making’ underpinned everyday personal and social interventions connected with residents’ daily living, and was situated within a framework of interpersonal and interdependent care relationships. The implications of these findings for residential services and the implementation of the MCA are discussed.
This paper describes the interventions by the International Committee of the Red Cross to support a hospital in Afghanistan during the mid-1990s. We present elements of the interventions introduced in Ghazni, Afghanistan, and consider a number of ethical issues stimulated by this analysis. Ethical challenges arise whenever humanitarian interventions to deal with complex political emergencies are undertaken: among those related to the case study presented are questions concerning: a) whether humanitarian support runs the risk of propping up repressive and irresponsible governments; b) whether humanitarian relief activities can legitimately focus on a narrow range of interventions, or need to broaden to address the range of challenges facing the health system; and c) whether sustainability and quality of care should be routinely considered in such settings. The paper concludes by highlighting the value of case studies, suggesting mechanisms for extending transparency and accountability in humanitarian health interventions, and emphasising the need to contextualise humanitarian work if the interventions are to be successful.
In what constitutes the only English-language collection of essays ever dedicated to the analysis of Montesquieu's contributions to political science, the contributors review some of the most vexing controversies that have arisen in the interpretation of Montesquieu's thought. By paying careful attention to the historical, political, and philosophical contexts of Montesquieu's ideas, the contributors provide fresh readings of The Spirit of Laws, clarify the goals and ambitions of its author, and point out the pertinence of his thinking to the problems of our world today.
In this author-meets-critics discussion of Howard Thurman’s Philosophical Mysticism, Anthony Sean Neal argues that Thurman’s work requires systematic recognition of how he was rooted firmly within the Modern Era of the African American Freedom Struggle. Michael Barber suggests that Thurman may be understood in contrast to Levinas on two counts. Whereas Thurman develops the duty to love from within the one who must love, Levinas grasps the origin of love’s duty in the command of the one who is to be loved. And while Thurman’s mysticism yearns for oneness, Levinas warns that oneness is ethically problematic. Eddie O'Byrn challenges the symbolic validity of calling love a weapon, and asks why the book has not treated Thurman’s relations to Gandhi or King. Neal defends a provisional usage of the term weapon in relation to love and offers some preliminary considerations of Thurman’s relation to Gandhi and King, especially in the symbolic significance of "the dream."
This paper has two primary aims. The first is to provide an introductory discussion of hyperset theory and its usefulness for modeling complex systems. The second aim is to provide a hyperset analysis of Robert Rosen’s metabolism-repair systems and his claim that living things are closed to efficient cause. Consequences of the hyperset models for Rosen’s claims concerning computability and life are discussed.
In this essay we respond to some criticisms, offered by Tom Roberts, of the guidance theory of representation. We argue that although Roberts’ criticisms miss their mark, he raises the important issue of the relationship between affordances and the action-oriented representations proposed by the guidance theory. Affordances play a prominent role in the anti-representationalist accounts offered by theorists of embodied cognition and ecological psychology, and the guidance theory is motivated in part by a desire to respond to the critiques of representationalism offered in such accounts, without giving up entirely on the idea that representations are an important part of the cognitive economy of many animals. Thus, explorations of whether and how such accounts can in fact be related and reconciled potentially offer to shed some light on this ongoing controversy. Although the current essay hardly settles the larger debate, it does suggest that there may be more possibility for agreement than is often supposed.
Michael Strevens’ Depth: An Account of Scientific Explanation is an impressive recent contribution to the philosophical literature on explanation. While clearly influenced by several of the leading theories of the later twentieth century, Strevens’ account of explanation is firmly rooted in the causal tradition. His most notable intellectual debts in this regard are to David Lewis, Wesley Salmon, and James Woodward. Still, Strevens sees the work of these theorists as flawed in important respects, and his “kairetic account” of explanation is meant to provide answers to problems his predecessors left unresolved (or poorly resolved, as the case may be). Before examining Strevens’ account in detail we should identify the more significant of these problems and briefly survey the contexts in which they arose.
According to Darwinian thinking, organisms are designed by natural selection, and so are integrated collections of adaptations, where an adaptation is a phenotypic trait that is a specialized response to a particular selection pressure. For animals that make their living in the Arctic, one adaptive problem is how to maintain body temperature above a certain minimum level necessary for survival. Polar bears' thick coats are a response to that selection pressure. A thick coat makes a positive difference to a polar bear's fitness, since polar bears with very thin coats left fewer offspring than those with thicker coats. The foundational idea of evolutionary psychology is that brains are no different from any other organ with an evolutionary function, insofar as brains too are systems shaped by natural selection to solve adaptive problems. Thus brains have a particular functional organization because their behavioural effects tend, or once tended, to help maintain or increase the fitness of organisms with those brains. Prominent evolutionary psychologists have endorsed the view that the last time any significant modifications were made by natural selection to the human brain's functional architecture, we were hunter-gatherers, inhabiting a world quite different from that which we now inhabit. That world was the Pleistocene epoch, between about 2 million years ago and 10 thousand years ago. On this view, then, the Pleistocene constitutes what evolutionary psychologists often call our environment of evolutionary adaptedness, and the information-processing structure and organization of our present-day cognitive architecture is no different from that of our recent hunter-gatherer ancestors.