In Physics I.8, Aristotle outlines and responds to an Eleatic argument against the reality of change. I defend a new reading according to which the argument assumes Predicational Monism, the claim that each being can possess only one property. In Phys. I.2, Aristotle responds to Predicational Monism, which he attributes to the Eleatics; I argue that he uses this response to distinguish coincidental from non-coincidental becoming, a distinction he employs in Phys. I.8 to resolve the argument against the reality of change. The Eleatics’ acceptance of Predicational Monism, I argue, explains why this distinction is unavailable to them.
In 'Physics' I.7, Aristotle claims that plants and animals are generated from sperma. Since most understood sperma to be an ovum, this claim threatens to undermine the standard view that, for Aristotle, the matter natural beings are generated from persists through their generation. By focusing on Aristotle’s discussion of sperma in the first book of the 'Generation of Animals', I show that, for Aristotle, sperma in the female is surplus blood collected in the uterus and not an ovum. I subsequently argue that, for Aristotle, this blood does persist through the production of the fetus.
Aristotle uses 'body' to describe the matter of animals, the elements and what they compose, as well as magnitudes extended in three dimensions. These last bodies belong to the category of quantity, alongside surfaces and lines. It is this notion of body that interests Christian Pfeiffer, who presents Aristotle's various discussions of it as one exhaustive theory of body. According to this theory, magnitudes are form-matter composites, where boundaries are forms and extensions are matter. The boundary of a body is its particular shape and its extension is its volume. It follows that Socrates destroys his body and gains another by standing up (since the shape of the sitting and standing bodies, and thus the bodies...
The work argues that the koans of Zen Buddhism have several intriguing non-accidental parallels with the short stories of Catholic author Flannery O'Connor. Both typically portray characters in a state of non-enlightenment in which they are egoistically obsessed with something which prevents them from perceiving and properly responding to the real world around them. Both present the characters with some opportunity for enlightenment, which they may or may not take up. Both come in a variety of forms, in order to portray and address a rich variety of ways in which such obsessions become obstacles to human self-understanding and fulfillment. And both are ultimately presented for the sake of the listener or reader, who may recognize in the stories parallels to their own state, and perhaps take a lesson therefrom on their own journey towards what Zen calls enlightenment, or what Christians call grace.
De-Signing Design: Cartographies of Theory and Practice throws new light on the terrain between theory and practice in transdisciplinary discourses of design and art. The collection brings together a selection of essays on spatiality, difference, cultural aesthetics, and identity in the expanded field of place-making and being.
The objective probability of every physical event is fixed by prior physical events and laws alone. (This thesis is sometimes expressed in terms of explanation: In tracing the causal history of any physical event, one need not advert to any non-physical events or laws. To the extent that there is any explanation available for a physical event, there is a complete explanation available couched entirely in physical vocabulary. We prefer the probability formulation, as it should be acceptable to any physicalist, though some reject the explanation formulation.) (3) Causal Exclusion.
This article uses sim-max games to model perceptual categorization with the goal of answering the following question: To what degree should we expect the perceptual categories of biological actors to track properties of the world around them? I argue that an analysis of these games suggests that the relationship between real-world structure and evolved perceptual categories is mediated by successful action in the sense that organisms evolve to categorize together states of nature for which similar actions lead to similar results. This conclusion indicates that both strongly realist and strongly antirealist views about perceptual categories are too simple.
Anselm presented his ontological argument in three main forms. In Proslogion II he argued that the very concept of God implies his actual existence. In Reply to Gaunilo—the argument from aseity—he argued that the conception of God as an eternal existent rules out his conception as a merely possible existent. In Proslogion III he argued that the concept of God implies his actual existence as logically necessary. Each of these arguments has its traditional refutation. Against Proslogion II it is argued that the analytic use of ‘exists’ conceptually and descriptively is logically distinct from its synthetic use as an empirical judgement. Against the argument from aseity the same point is made about ‘exists eternally’, and against the detail of his argument it is said that the second premise is not a proposition with a single implication, but a disjunction. Against Proslogion III it is argued that ‘logically necessary existence’ is a meaningless notion. This paper is designed to show that Anselm's arguments may be refuted without recourse to these traditional criticisms; that each of his arguments contains at least one further error, of equal if not more importance, which has passed unnoticed. If this appears to be bringing yet further coals to Newcastle, the revival of the argument by Hartshorne and Malcolm, and the supposed ‘ontological disproof’ by Findlay, may indicate our need of further fuel.
Bruner shows that in cultural interactions, members of minority groups will learn to interact with members of majority groups more quickly—minorities tend to meet majorities more often as a brute fact of their respective numbers—and, as a result, may come to be disadvantaged in situations where they divide resources. In this paper, we discuss the implications of this effect for epistemic communities. We use evolutionary game theoretic methods to show that minority groups can end up disadvantaged in academic interactions like bargaining and collaboration as a result of this effect. These outcomes are more likely, in our models, the smaller the minority group. They occur despite assumptions that majority and minority groups do not differ with respect to skill level, personality, preference, or competence of any sort. Furthermore, as we will argue, these disadvantaged outcomes for minority groups may negatively impact the progress of epistemic communities.
Twentieth-century analytic philosophy was dominated by positivist antimetaphysics and neo-Humean deflationary metaphysics, and the nature of explanation was reconceived in order to fit these agendas. Unsurprisingly, the explanatory value of theism was widely discredited. I argue that the long-overdue revival of moralized, broadly neo-Aristotelian metaphysics and an improved perspective on modal knowledge dramatically changes the landscape. In this enriched context, there is no sharp divide between physics and metaphysics, and the natural end of the theoretician’s quest for a unified explanation of the universe is God, an absolutely necessary, transcendent, and personal source of all contingent reality.
As good a definition as any of a _philosophical_ conundrum is a problem all of whose possible solutions are unsatisfactory. The problem of understanding the springs of action for morally responsible agents is commonly recognized to be such a problem. The origin, nature, and explanation of freely-willed actions puzzle us today as they did the ancient Greeks, and for much the same reasons. However, one can carry this ‘perennial-puzzle’ sentiment too far. The unsatisfactory nature of philosophical theories is a more-or-less matter, and some of them have admitted of improvement over time. This, at any rate, is what we self-selecting metaphysicians tend to suppose, and I will pursue that high calling by suggesting a few improvements to a theory of metaphysical freedom, or freedom of the will.
Using evolutionary game theory, I consider how guilt can provide individual fitness benefits to actors both before and after bad behavior. This supplements recent work by philosophers on the evolution of guilt with a more complete picture of the relevant selection pressures.
Vague predicates, those that exhibit borderline cases, pose a persistent problem for philosophers and logicians. Although they are ubiquitous in natural language, when used in a logical context, vague predicates lead to contradiction. This paper will address a question that is intimately related to this problem. Given their inherent imprecision, why do vague predicates arise in the first place? I discuss a variation of the signaling game where the state space is treated as contiguous, i.e., endowed with a metric that captures a similarity relation over states. This added structure is manifested in payoffs that reward approximate coordination between sender and receiver as well as perfect coordination. I evolve these games using a variation of Herrnstein reinforcement learning that better reflects the generalizing learning strategies real-world actors use in situations where states of the world are similar. In these simulations, signaling can develop very quickly, and the signals are vague in much the way ordinary language predicates are vague—they each exclusively apply to certain items, but for some transition period both signals apply to varying degrees. Moreover, I show that under certain parameter values, in particular when state spaces are large and time is limited, learning generalization of this sort yields strategies with higher payoffs than standard Herrnstein reinforcement learning. These models may then help explain why the phenomenon of vagueness arises in natural language: the learning strategies that allow actors to quickly and effectively develop signaling conventions in contiguous state spaces make it unavoidable.
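The kind of model the abstract describes can be sketched in a few lines. The following is a minimal illustration of ours, not the paper's code: two signals, ten linearly ordered states, a linear payoff drop-off with distance, and a simple distance-discounted generalization rule standing in for the modified Herrnstein dynamics.

```python
import random

# Sim-max signaling game: states sit in a metric space, and payoff rewards
# approximate coordination, not only exact matching.
N_STATES = 10
SIGNALS = (0, 1)

def payoff(state, act):
    # Linear drop-off with the distance between true state and receiver's act.
    return max(0.0, 1.0 - abs(state - act) / N_STATES)

def choose(weights):
    # Draw an option with probability proportional to its urn weight.
    r = random.uniform(0, sum(weights))
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(weights) - 1

def generalize(weights, choice, reward, spread=2):
    # Generalized reinforcement: options near the chosen one are also
    # reinforced, discounted by distance (our stand-in for the paper's
    # generalizing learning rule).
    for i in range(len(weights)):
        d = abs(i - choice)
        if d <= spread:
            weights[i] += reward / (1 + d)

random.seed(0)
sender = [[1.0] * len(SIGNALS) for _ in range(N_STATES)]
receiver = [[1.0] * N_STATES for _ in SIGNALS]
recent = []
for t in range(20000):
    state = random.randrange(N_STATES)
    sig = choose(sender[state])
    act = choose(receiver[sig])
    p = payoff(state, act)
    sender[state][sig] += p            # ordinary reinforcement over two signals
    generalize(receiver[sig], act, p)  # generalized reinforcement over acts
    if t >= 18000:
        recent.append(p)

avg_payoff = sum(recent) / len(recent)
print(round(avg_payoff, 2))
```

Inspecting `sender` after a run of this sort typically shows each signal claiming a region of the state space, with borderline states carrying substantial weight under both signals, which is the vagueness phenomenon at issue.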
This paper focuses on three features of Freyenhagen's Aristotelian version of Adorno. (a) It challenges the strict negativism Freyenhagen finds in Adorno. If we have morally relevant interests in ourselves, it is implicit that we have a standard by which to understand what is both good and bad for us (our interests). Because strict negativism operates without reference to what is good, it seems to be detached from real interests too. Torture, it is argued, is, among other things, a violation of those interests. (b) Freyenhagen identifies the “impulse” in Adorno as an untutored yet moral reaction to morally demanding situations. The plausibility of this primitivism and its compatibility with Adorno's general worries about immediacy are considered. (c) The disruptive character of Adorno's version of the categorical imperative, its willingness to complicate action through wholesale reflection on the norms of what we are committing ourselves to, is set in contrast with Freyenhagen's Aristotelian claim that certain notions, such as “humanity,” cannot be intelligibly questioned.
In a recent article, Carlos Santana shows that in common interest signaling games when signals are costly and when receivers can observe contextual environmental cues, ambiguous signaling strategies outperform precise ones and can, as a result, evolve. I show that if one assumes a realistic structure on the state space of a common interest signaling game, ambiguous strategies can be explained without appeal to contextual cues. I conclude by arguing that there are multiple types of cases of payoff-beneficial ambiguity, some of which are better explained by Santana’s models and some of which are better explained by models presented here.
One familiar affirmative answer to this question holds that these facts suffice to entail that Descartes' picture of the human mind must be mistaken. On Descartes' view, our mind or soul (the only essential part of ourselves) has no spatial location. Yet it directly interacts with but one physical object, the brain of that body with which it is, 'as it were, intermingled,' so as to 'form one unit.' The radical disparity posited between a nonspatial mind, whose intentional and conscious properties are had by no physical object, and a spatial body, all of whose properties are had by no mind, has prompted some to conclude that, pace Descartes, causal interaction between the two is impossible. Jaegwon Kim has recently given a new twist to this old line of thought. In the present essay, I will use Kim's argument as a springboard for motivating my own favored picture of the metaphysics of mind and body and then discussing how an often vilified account of freedom of the will may be realized within it.
This provocative book refurbishes the traditional account of freedom of will as reasons-guided "agent" causation, situating its account within a general metaphysics. O'Connor's discussion of the general concept of causation and of ontological reductionism v. emergence will specially interest metaphysicians and philosophers of mind.
This paper critically evaluates what it identifies as ‘the institutional theory of freedom’ developed within recent neo-Hegelian philosophy. While acknowledging the gains made against the Kantian theory of autonomy as detachment, it is argued that the institutional theory ultimately undermines the very meaning of practical agency. By tying agency to institutionally sustained recognition it effectively excludes the exercise of practical reason geared toward emancipation from a settled normative order. Adorno's notion of autonomy as resistance is enlisted to develop an account of practical reason that is neither institutionally constrained nor without appropriate consideration of the historical location of the practical agent.
Collaboration is increasingly popular across academia. Collaborative work raises certain ethical questions, however. How will the fruits of collaboration be divided? How will the work for the collaborative project be split? In this paper, we consider the following question in particular. Are there ways in which these divisions systematically disadvantage certain groups? We use evolutionary game theoretic models to address this question. First, we discuss results from O'Connor and Bruner (unpublished), where we show that underrepresented groups in academia can be disadvantaged in such situations by dint of their small numbers. Second, we present novel results exploring how the hierarchical structure of academia can lead to bargaining disadvantage. We investigate models where one actor has a higher baseline of academic success, less to lose if collaboration goes south, or greater rewards for non-collaborative work. We show that in these situations, the less powerful partner is disadvantaged in bargaining over collaboration.
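The bargaining mechanism described here can be seen in a bare-bones Nash demand game. The three-demand discretization and the particular baseline values below are illustrative assumptions of ours, not the paper's parameters.

```python
# Nash demand ("divide-the-ten") game: each collaborator demands a share;
# compatible demands are honored, incompatible demands leave each party
# with a disagreement payoff (their baseline of non-collaborative success).
DEMANDS = (4, 5, 6)
PIE = 10

def payoffs(d1, d2, base1=0.0, base2=0.0):
    if d1 + d2 <= PIE:
        return d1, d2
    return base1, base2

def is_nash(d1, d2, base1=0.0, base2=0.0):
    # Neither player can profit by unilaterally changing their demand.
    u1, u2 = payoffs(d1, d2, base1, base2)
    return (all(payoffs(a, d2, base1, base2)[0] <= u1 for a in DEMANDS)
            and all(payoffs(d1, a, base1, base2)[1] <= u2 for a in DEMANDS))

# With symmetric baselines, the fair split and both unfair splits are
# all equilibria of the game:
print([pair for pair in [(4, 6), (5, 5), (6, 4)] if is_nash(*pair)])

# Give player 1 a high fallback (less to lose if collaboration fails).
# Player 1 accepting the short end is no longer an equilibrium, since
# holding out now beats conceding; player 1 taking the long end still is:
print(is_nash(4, 6, base1=4.5), is_nash(6, 4, base1=4.5))
```

This is the sense in which a higher baseline translates into bargaining power: it prunes away exactly those equilibria in which the advantaged party takes the smaller share.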
Over the last several years, a number of philosophers have advanced formal versions of certain traditional arguments for the incompatibility of human freedom with causal determinism and for the incompatibility of human freedom with infallible divine foreknowledge. Common to all of these is some form of a principle governing the transfer of a species of alethic necessity (TNP). More recently, a few clear and compelling counterexamples to TNP (and a variant of it) have begun to surface in the literature. These attacks on TNP are developed along somewhat different lines (and were apparently worked out independently of each other). I will show that despite the differences in presentation, however, all of the compelling counterexamples that have been offered turn on a common (and overlooked) basic feature. Once this feature is recognized, I suggest, one is naturally led to restrict the principle in a way that renders it immune to these counterexamples. (I further argue that the restriction I suggest has independent justification.) I then go on to consider two further attempts to show the invalidity of TNP for power necessity, ones that would not be forestalled by my restriction on TNP, and I argue that they are unsuccessful. In a final section, I compare my modified version of TNP for power necessity with a principle suggested in Ginet (1990).
It is a commonplace of philosophy that the notion of free will is a hard nut to crack. A simple, compelling argument can be made to show that behavior for which an agent is morally responsible cannot be the outcome of prior determining causal factors. Yet the smug satisfaction with which we incompatibilists are prone to trot out this argument has a tendency to turn to embarrassment when we're asked to explain just how it is that morally responsible action might obtain under the assumption of indeterminism. Despair over the prospect of giving a satisfactory answer to this question has led some contemporary philosophers to a position rarely, if ever, held in the history of philosophy: free, responsible action is an incoherent concept.
"Why should we care about having true beliefs? And why do demonstrably false beliefs persist and spread despite consequences for the people who hold them? Philosophers of science Cailin O’Connor and James Weatherall argue that social factors, rather than individual psychology, are what’s essential to understanding the spread and persistence of false belief. It might seem that there’s an obvious reason that true beliefs matter: false beliefs will hurt you. But if that’s right, then why is it irrelevant to many people whether they believe true things or not? In an age riven by "fake news," "alternative facts," and disputes over the validity of everything from climate change to the size of inauguration crowds, the authors argue that social factors, not individual psychology, are what’s essential to understanding the persistence of false belief and that we must know how those social forces work in order to fight misinformation effectively."–Publisher’s description.
Social media, meaning digital technologies and platforms such as blogs, wikis, forums, content aggregators, sharing sites, and social networks like Facebook and Twitter, have profoundly changed the way that information can be shared online. Now, almost anyone with a broadband internet connection or a smart phone can share ideas, data, and opinions with just about anyone else on the planet. This change has serious implications for the way in which human subjects research can be conducted and, concomitantly, for the ways in which such research may be regulated. This article explores some of these issues.
I distinguish restrictive and permissive multiverse solutions to the problems of evil and no best world. Restrictive multiverses do not admit a single instance of gratuitous evil and they are not improvable. I show that restrictive multiverses unacceptably entail that all modal distinctions collapse. I consider Timothy O’Connor’s permissive multiverse. I show that a perfect creator minimizes aggregative suffering in permissive multiverses only if the actual universe is not included in any actualizable multiverse. I conclude that permissive multiverses do not offer a credible solution to the problems of evil and no best world.
Advocacy has become an accepted and integral attribute of nursing practice. Despite this adoption of advocacy, confusion remains about the precise nature of the concept and how it should be enacted in practice. The aim of this study was to investigate general nurses’ perceptions of being patient advocates in Ireland and how they enact this role. These perceptions were compared with existing theory and research on advocacy in order to contribute to the knowledge base on the subject. An inductive, qualitative approach was used for this study. Three focus group interviews with a total of 20 practising nurses were conducted with a sample representing different grades in a general hospital setting. Data analysis was carried out using elements of Strauss and Corbin’s approach to concept development. The findings indicate that the principal role of the nurse advocate is to act as an intermediary between the patient and the health care environment. The results highlight that advocacy did, however, result in nurses becoming involved in conflict and confrontation with others and that it could be detrimental to nurses both professionally and personally. It was also clear that when enacting advocacy, nurses distinguished between ‘clinical advocacy’ and ‘organizational advocacy’.
Reid takes it to be part of our commonsense view of ourselves that we, qua enduring substances, not merely qua subjects of efficacious mental states, are often the immediate causes of our own volitions. Only if this conviction is veridical, Reid thinks, may we be properly held to be responsible for our actions (indeed, may we truly be said to "act" at all). This paper offers an interpretation of Reid's account of such agency (taking account of Rowe's recent commentary), with particular attention to the issue of the causation of and responsibility for an agent's "causing" of his volition.
According to many philosophical theologians, God is metaphysically simple: there is no real distinction among His attributes or even between attribute and existence itself. Here, I consider only one argument against the simplicity thesis. Its proponents claim that simplicity is incompatible with God’s having created another world, since simplicity entails that God is unchanging across possible worlds. For, they argue, different acts of creation involve different willings, which are distinct intrinsic states. I show that this is mistaken, by sketching an adequate account of reasons-guided activity that does not require distinct intrinsic states of willing corresponding to each possible act of creation.
We show that previous results from epistemic network models showing the benefits of decreased connectivity in epistemic networks are not robust across changes in parameter values. Our findings motivate discussion about whether and how such models can inform real-world epistemic communities. As we argue, only robust results from epistemic network models should be used to generate advice for the real-world, and, in particular, decreasing connectivity is a robustly poor recommendation.
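A toy version of the kind of epistemic network model at issue can make the connectivity comparison concrete. The following Bala-Goyal-style bandit sketch is our own construction, not the authors' code, and the specific rates, network sizes, and trial counts are illustrative assumptions; which network does better varies with such parameters, which is precisely the robustness worry the paper presses.

```python
import random

# Agents test one of two treatments: the worse arm pays off with
# probability 0.5, the better with 0.6. Each round, every agent pulls
# the arm it currently rates higher, then updates its frequency
# estimates on its own results plus those of its network neighbors.
def run(network, rounds=200, trials=10, seed=3):
    rng = random.Random(seed)
    n = len(network)
    succ = [[rng.uniform(0, 1), rng.uniform(0, 1)] for _ in range(n)]
    cnt = [[1.0, 1.0] for _ in range(n)]
    for _ in range(rounds):
        results = []
        for i in range(n):
            arm = 0 if succ[i][0] / cnt[i][0] > succ[i][1] / cnt[i][1] else 1
            p = (0.5, 0.6)[arm]
            results.append((arm, sum(rng.random() < p for _ in range(trials))))
        for i in range(n):
            for j in [i] + network[i]:
                arm, s = results[j]
                succ[i][arm] += s
                cnt[i][arm] += trials
    # Fraction of agents who end up rating the better arm (arm 1) higher.
    return sum(succ[i][1] / cnt[i][1] > succ[i][0] / cnt[i][0]
               for i in range(n)) / n

cycle = [[(i - 1) % 6, (i + 1) % 6] for i in range(6)]          # sparse
complete = [[j for j in range(6) if j != i] for i in range(6)]  # dense
print(run(cycle), run(complete))
```

Sweeping parameters such as the gap between the arms, the trial count, and the network size in a model like this is what reveals whether a claimed advantage for sparse networks is robust or an artifact of one parameter regime.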
This essay will canvass recent philosophical discussion of accounts of human (free) agency that deploy a notion of agent causation. Historically, many accounts have only hinted at the nature of agent causation by way of contrast with the causality exhibited by impersonal physical systems. Likewise, the numerous criticisms of agent causal theories have tended to be highly general, often amounting to no more than the bare assertion that the idea of agent causation is obscure or mysterious. But in the past decade, detailed accounts of agent causation have been offered (chiefly by Randolph Clarke and Timothy O’Connor), and they have occasioned more specific objections in turn. These recent accounts and objections to them will be my primary focus in what follows. But first I will identify two distinct motivations that have been advanced for adopting an agent causal approach to human agency and the ontological and metaphysical commitments common to any version of this approach.
In this new book, Alessandra Tanesini demonstrates that feminist thought has a lot to offer to the study of Wittgenstein's philosophical work and that, at the same time, that work can inspire feminist reflection in new directions. In Wittgenstein, Tanesini offers a highly original interpretation of several themes in Wittgenstein's philosophy. She argues that when we look at his work through feminist eyes we discover that he is not primarily concerned with providing solutions to technical problems in the philosophy of mind, mathematics, and language. Instead, his remarks on these topics are intended to offer insights about human finitude, the loneliness of the modern autonomous self, and our relations to other human beings. Thus, the modern conception of the individual emerges as the critical target of Wittgenstein's philosophical work, both early and late. This conception has also been one of the dominant concerns of contemporary feminist philosophy. In this book, Wittgenstein's insights are deployed to further feminist debates on issues such as identity, difference, and the masculine character of the modern self.
In their recent book, Oreskes and Conway describe the ‘tobacco strategy’, which was used by the tobacco industry to influence policymakers regarding the health risks of tobacco products. The strategy involved two parts: promoting and sharing independent research supporting the industry’s preferred position, and funding additional research but selectively publishing the results. We introduce a model of the tobacco strategy, and use it to argue that both prongs of the strategy can be extremely effective—even when policymakers rationally update on all evidence available to them. As we elaborate, this model helps illustrate the conditions under which the tobacco strategy is particularly successful. In addition, we show how journalists engaged in ‘fair’ reporting can inadvertently mimic the effects of industry on public belief.
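The selective-sharing prong can be illustrated with a small sketch in the spirit of the model described here; the particular numbers (true success rate 0.6, null of 0.5, 10-subject studies) are illustrative choices of ours. A propagandist runs many studies of an effective treatment but shares only those whose results happen to favor the null, and a policymaker who updates rationally on everything shared is still pulled toward the null.

```python
import random
from math import comb

def binom_pmf(k, n, p):
    # Probability of k successes in n Bernoulli trials with rate p.
    return comb(n, k) * p**k * (1 - p)**(n - k)

def posterior(shared, n, prior=0.5):
    # A rational updater: likelihood-ratio updating on every shared
    # study, comparing H1 (rate 0.6) against H0 (rate 0.5).
    odds = prior / (1 - prior)
    for k in shared:
        odds *= binom_pmf(k, n, 0.6) / binom_pmf(k, n, 0.5)
    return odds / (1 + odds)

random.seed(2)
n = 10
# 200 honest studies of a treatment whose true success rate is 0.6.
studies = [sum(random.random() < 0.6 for _ in range(n)) for _ in range(200)]

honest = posterior(studies, n)                       # everything shared
spun = posterior([k for k in studies if k <= 5], n)  # null-friendly only
print(round(honest, 3), round(spun, 3))
```

Every withheld study carries a likelihood ratio above one for the true hypothesis, so the selectively informed posterior always sits below the honest one even though no individual shared study is fabricated and the updating itself is fully rational.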
In this paper we use an experimental approach to investigate how linguistic conventions can emerge in a society without explicit agreement. As a starting point we consider the signaling game introduced by Lewis. We find that in experimental settings, small groups can quickly develop conventions of signal meaning in these games. We also investigate versions of the game where the theoretical literature indicates that meaning will be less likely to arise—when there are more than two states for actors to transfer meaning about and when some states are more likely than others. In these cases, we find that actors are less likely to arrive at strategies where signals have clear conventional meaning. We conclude with a proposal for extending the use of the methodology of experimental economics in experimental philosophy.
The hypothetical scenarios generally known as trolley problems have become widespread in recent moral philosophy. They invariably require an agent to choose one of a strictly limited number of options, all of them bad. Although they don’t always involve trolleys/trams, and are used to make a wide variety of points, what makes it justified to speak of a distinctive “trolley method” is the characteristic assumption that the intuitive reactions that all these artificial situations elicit constitute an appropriate guide to real-life moral reasoning. I dispute this assumption by arguing that trolley cases inevitably constrain the supposed rescuers into behaving in ways that clearly deviate from psychologically healthy, and morally defensible, human behavior. Through this focus on a generally overlooked aspect of trolley theorizing—namely, the highly impoverished role invariably allotted to the would-be rescuer in these scenarios—I aim to challenge the complacent twin assumptions of advocates of the trolley method that this approach to moral reasoning has practical value, and is in any case innocuous. Neither assumption is true.
This article is an attempt to situate imagination within consciousness complete with its own pre-cognitive, cognitive, and meta-cognitive domains. In the first sections we briefly review traditional philosophical and psychological conceptions of the imagination. The majority have viewed perception and imagination as separate faculties, performing distinct functions. A return to a phenomenological account of the imagination suggests that divisions between perception and imagination are transcended by pre-cognitive factors of sense of reality and non-reality where perception and imagination play an indivisible role. In fact, both imagination and perception define sense of reality jointly according to what is possible and not possible. Absorption in a possible world depends on the strengths of alternative possibilities, and the relationship between core and marginal consciousness. The model may offer a parsimonious account of different states and levels of imaginal consciousness, and of how “believed-in imaginings” develop and become under some circumstances “lived-in experiences.”
“Free Will” is a philosophical term of art for a particular sort of capacity of rational agents to choose a course of action from among various alternatives. Which sort is the free will sort is what all the fuss is about. (And what a fuss it has been: philosophers have debated this question for over two millennia, and just about every major philosopher has had something to say about it.) Most philosophers suppose that the concept of free will is very closely connected to the concept of moral responsibility. To act with free will, on such views, is just to satisfy the metaphysical requirement on being responsible for one's action. (Clearly, there will also be epistemic conditions on responsibility, such as being aware—or, failing that, being culpably unaware—of relevant alternatives to one's action and of the alternatives' moral significance.) But the significance of free will is not exhausted by its connection to moral responsibility. Free will also appears to be a condition on desert for one's accomplishments (why sustained effort and creative work are praiseworthy); on the autonomy and dignity of persons; and on the value we accord to love and friendship. (See Kane 1996, 81ff. and Clarke 2003, Ch. 1.)
In almost every human society some people get more and others get less. Why is inequity the rule in human societies? Philosopher Cailin O'Connor reveals how cultural evolution works on social categories such as race and gender to generate unfairness.