This paper puts forward two claims about funding carbon capture and storage. The first claim is that there are moral justifications supporting strategic investment in CO2 storage from global and regional perspectives. One argument draws on the empirical evidence which suggests carbon capture and storage would play a significant role in a portfolio of global solutions to climate change; the other draws on Rawls' notion of legitimate expectations and Moellendorf's Anti-Poverty principle. The second claim is that where to pursue this strategic investment poses a morally non-trivial problem, with considerations like near-term global distributive justice and undermining legitimate expectations favouring investing in developing regions, especially in Asia, and considerations like long-term climate impacts and best uses of resources favouring investing in the relatively wealthy regions that have the best prospects for successful storage development.
This volume offers a selection of papers from the 2014 conference of the International Association for Computing and Philosophy (IACAP), a conference tradition of 28 years.
Table of Contents
0 Vincent C. Müller: Editorial
1) Philosophy of computing
1 Cem Bozşahin: What is a computational constraint?
2 Joe Dewhurst: Computing Mechanisms and Autopoietic Systems
3 Vincenzo Fano, Pierluigi Graziani, Roberto Macrelli and Gino Tarozzi: Are Gandy Machines really local?
4 Doukas Kapantais: A refutation of the Church-Turing thesis according to some interpretation of what the thesis says
5 Paul Schweizer: In What Sense Does the Brain Compute?
2) Philosophy of computer science & discovery
6 Mark Addis, Peter Sozou, Peter C. R. Lane and Fernand Gobet: Computational Scientific Discovery and Cognitive Science Theories
7 Nicola Angius and Petros Stefaneas: Discovering Empirical Theories of Modular Software Systems. An Algebraic Approach
8 Selmer Bringsjord, John Licato, Daniel Arista, Naveen Sundar Govindarajulu and Paul Bello: Introducing the Doxastically Centered Approach to Formalizing Relevance Bonds in Conditionals
9 Orly Stettiner: From Silico to Vitro: Computational Models of Complex Biological Systems Reveal Real-world Emergent Phenomena
3) Philosophy of cognition & intelligence
10 Douglas Campbell: Why We Shouldn't Reason Classically, and the Implications for Artificial Intelligence
11 Stefano Franchi: Cognition as Higher Order Regulation
12 Marcello Guarini: Eliminativisms, Languages of Thought, & the Philosophy of Computational Cognitive Modeling
13 Marcin Miłkowski: A Mechanistic Account of Computational Explanation in Cognitive Science and Computational Neuroscience
14 Alex Tillas: Internal supervision & clustering: A new lesson from 'old' findings?
4) Computing & society
15 Vasileios Galanos: Floridi/Flusser: Parallel Lives in Hyper/Posthistory
16 Paul Bello: Machine Ethics and Modal Psychology
17 Marty J. Wolf and Nir Fresco: My Liver Is Broken, Can You Print Me a New One?
18 Marty J. Wolf, Frances Grodzinsky and Keith W. Miller: Robots, Ethics and Software – FOSS vs. Proprietary Licenses
There are many cases in which, by making some great sacrifice, you could bring about either a good outcome or a very good outcome. In some of these cases, it seems wrong for you to bring about the good outcome, since you could bring about the very good outcome with no additional sacrifice. It also seems permissible for you not to make the sacrifice, and bring about neither outcome. But together, these claims seem to imply that you ought to bring about neither outcome rather than the good outcome. And that seems very counterintuitive. In this paper, I develop this problem, propose a solution, and then draw out some implications both for how we should understand supererogation and for how we should approach charitable giving.
Philosophical and scientific investigations of the proprietary aspects of self—mineness or mental ownership—often presuppose that searching for unique constituents is a productive strategy. But there seem not to be any unique constituents. Here, it is argued that the “self-specificity” paradigm, which emphasizes subjective perspective, fails. Previously, it was argued that mode of access also fails to explain mineness. Fortunately, these failures, when leavened by other findings (those that exhibit varieties and vagaries of mineness), intimate an approach better suited to searching for an explanation. Having an alternative in hand, one that shows promise of achieving explanatory adequacy, provides an additional reason to suspend the search for unique constituents. In short, a negative and a positive thesis are developed: we should cease looking for unique constituents and should seek to explain mineness in accord with the model developed here. This model rejects attempts to explain the phenomenon in terms of either a narrative or a minimal sense of self; it seeks to explain at a “molecular” level, one that appeals to multiple, interacting dimensions. The molecular-level model allows for the possibility that subjective perspective is distinct from a stark perspective (one that does not imply mineness). It proposes that the confounding of tacit expectations plays an important role in explaining mental ownership and its complement, disownership. But the confounding of tacit expectations is not sufficient. Because we are able to be aware of the existence of mental states that do not belong to self, we require a mechanism for determining degree of self-relatedness. One such mechanism is proposed here, and it is shown how this mechanism can be integrated into a general model of mental ownership.
In the spirit of suggesting how this model might be able to help resolve outstanding problems, the question as to whether inserted thoughts belong to the patient who reports them is also considered.
No other English-language translation comes close to the standard of accuracy and readability set here by Reeve. This volume provides the reader with more of the resources needed to understand Aristotle's argument than any other edition. An introductory essay by Reeve situates _Politics_ in Aristotle's overall thought and offers an engaging critical introduction to its central argument. A detailed glossary, footnotes, bibliography, and indexes provide historical background, analytical assistance with particular passages, and a guide both to Aristotle's philosophy and to scholarship on it.
Whilst much has been said about the implications of predictive processing for our scientific understanding of cognition, there has been comparatively little discussion of how this new paradigm fits with our everyday understanding of the mind, i.e. folk psychology. This paper aims to assess the relationship between folk psychology and predictive processing, which will first require making a distinction between two ways of understanding folk psychology: as propositional attitude psychology and as a broader folk psychological discourse. It will be argued that folk psychology in this broader sense is compatible with predictive processing, despite the fact that there is an apparent incompatibility between predictive processing and a literalist interpretation of propositional attitude psychology. The distinction between these two kinds of folk psychology allows us to accept that our scientific usage of folk concepts requires revision, whilst rejecting the suggestion that we should eliminate folk psychology entirely.
Shagrir and Sprevak explore the apparent necessity of representation for the individuation of digits in computational systems. I will first offer a response to Sprevak's argument that does not mention Shagrir's original formulation, which was more complex. I then extend my initial response to cover Shagrir's argument, thus demonstrating that it is possible to individuate digits in non-representational computing mechanisms. I also consider the implications that the non-representational individuation of digits would have for the broader theory of computing mechanisms.
1 The Received View: No Computation without Representation
2 Computing Mechanisms and Functional Individuation
3 Against Computational Externalism
4 Implications for the Mechanistic Account
Many of us believe that exploitation is wrong, and that it is wrong even when, because the exploited would otherwise suffer, they consent to the exploitation. Does it follow that we should leave people to suffer rather than exploit them? This conclusion might seem difficult to accept, but avoiding it seems to require accepting a counterintuitively demanding view about our obligations to vulnerable people. In this paper, I offer a new solution to this problem.
Religion is an important element of end-of-life care on the paediatric intensive care unit with religious belief providing support for many families and for some staff. However, religious claims used by families to challenge cessation of aggressive therapies considered futile and burdensome by a wide range of medical and lay people can cause considerable problems and be very difficult to resolve. While it is vital to support families in such difficult times, we are increasingly concerned that deeply held belief in religion can lead to children being potentially subjected to burdensome care in expectation of ‘miraculous’ intervention. We reviewed cases involving end-of-life decisions over a 3-year period. In 186 of 203 cases in which withdrawal or limitation of invasive therapy was recommended, agreement was achieved. However, in the 17 remaining cases extended discussions with medical teams and local support mechanisms did not lead to resolution. Of these cases, 11 (65%) involved explicit religious claims that intensive care should not be stopped due to expectation of divine intervention and complete cure together with conviction that overly pessimistic medical predictions were wrong. The distribution of the religions included Protestant, Muslim, Jewish and Roman Catholic groups. Five of the 11 cases were resolved after meeting religious community leaders; one child had intensive care withdrawn following a High Court order, and in the remaining five, all Christian, no resolution was possible due to expressed expectations that a ‘miracle’ would happen.
One of the central issues dividing proponents of metaphysical interpretations of transcendental idealism concerns Kant’s views on the distinctness of things in themselves and appearances. Proponents of metaphysical one-object interpretations claim that things in themselves and appearances are related by some kind of one-object grounding relation, through which the grounding and grounded relata are different aspects of the same object. Proponents of metaphysical two-object interpretations, by contrast, claim that things in themselves and appearances are related by some kind of two-object grounding relation, through which the grounding and grounded relata involve distinct objects. By way of investigating Kant’s overarching account of grounding, I will argue that the most plausible metaphysical interpretation of transcendental idealism is one on which we can know that there are things in themselves grounding appearances, but not which specific kind of one- or two-object grounding relation obtains between them. Our ignorance of things in themselves therefore extends to their distinctness from appearances — pace both metaphysical one-object interpretations and metaphysical two-object interpretations.
Kant views every human action as either entirely determined by natural necessity or entirely free. In viewing human action this way, it is unclear how he can account for degrees of responsibility. In this article, I consider three recent attempts to accommodate degrees of responsibility within Kant's framework, but argue that none of them are satisfying. In the end, I claim that transcendental idealism constrains Kant such that he cannot provide an adequate account of degrees of responsibility.
Is there any number of people you should save from paralysis rather than saving one person from death? Is there any number of people you should save from a headache rather than saving one person from death? Many people answer ‘yes’ and ‘no’, respectively. They therefore accept a partially aggregative moral view. Patrick Tomlin has recently argued that the most promising partially aggregative views in the literature have implausible implications in certain cases in which there are additions or subtractions to the groups of people that we can save. Several philosophers have begun responding to this argument by developing partially aggregative views that avoid the relevant implications. In this paper, I extend Tomlin’s argument to create a dilemma that no partially aggregative view can avoid. I conclude that we should accept a fully aggregative moral view.
This is an explication and defense of P. F. Strawson’s naturalist theory of free will and moral responsibility. I respond to a set of criticisms of the view by free will skeptics, compatibilists, and libertarians who adopt the _core assumption_: Strawson thinks that our reactive attitudes provide the basis for a rational justification of our blaming and praising practices. My primary aim is to explain and defend Strawson’s naturalism in light of criticisms based on the core assumption. Strawson’s critiques of incompatibilism and free will skepticism are not intended to provide rational justifications for either compatibilism or the claim that some persons have free will. Hence, the charge that Strawson’s “arguments” are faulty is misplaced. The core assumption resting behind such critiques is mistaken.
Several philosophers have defended versions of Minimax Complaint, or MC. According to MC, other things equal, we should act in the way that minimises the strongest individual complaint. In this paper, I argue that MC must be rejected because it has implausible implications in certain cases involving risk. In these cases, we can apply MC either ex ante, by focusing on the complaints that could be made based on the prospects that an act gives to people, or ex post, by focusing on the complaints that could be made based on the actual results that an act has for people. I argue that MC has implausible implications either way. I then defend a view on which, other things equal, we should act in the way that minimises the sum of complaints.
The aim of this paper is to begin developing a version of Gualtiero Piccinini’s mechanistic account of computation that does not need to appeal to any notion of proper functions. The motivation for doing so is a general concern about the role played by proper functions in Piccinini’s account, which will be evaluated in the first part of the paper. I will then propose a potential alternative approach, where computing mechanisms are understood in terms of Carl Craver’s perspectival account of mechanistic functions. According to this approach, the mechanistic function of ‘performing a computation’ can only be attributed relative to an explanatory perspective, but such attributions are nonetheless constrained by the underlying physical structure of the system in question, thus avoiding unlimited pancomputationalism. If successful, this approach would carry with it fewer controversial assumptions than Piccinini’s original account, which requires a robust understanding of proper functions. Insofar as there are outstanding concerns about the status of proper functions, this approach would therefore be more generally acceptable.
Corporate philanthropy occurs when a corporation voluntarily donates a portion of its resources to a societal cause. Although the thought of philanthropy invokes feelings of altruism, there are many objectives for corporate giving beyond altruism. Meeting strategic corporate objectives can be an important, if not primary, goal of philanthropy. The purpose of this paper is to share insights from a strategic corporate philanthropic initiative aimed at increasing the pool of frontline customer-contact employees who are performance-ready, while supporting curriculum development and infrastructure improvement for selected university business programs, creating a win-win situation for the company and the universities. This paper will address three objectives. First, we will examine the evolution of strategic philanthropy from the traditional view to its current position as a strategic option. Second, we will address the recruitment of frontline talent (customer-facing jobs in sales, customer service, and marketing) based on the profit maximization model of strategic philanthropy. Finally, we will offer conclusions and issues for future research.
Policymakers who seek to make scientifically informed decisions are constantly confronted by scientific uncertainty and expert disagreement. This thesis asks: how can policymakers rationally respond to expert disagreement and scientific uncertainty? This is a work of non-ideal theory, which applies formal philosophical tools developed by ideal theorists to more realistic cases of policymaking under scientific uncertainty. I start with Bayesian approaches to expert testimony and the problem of expert disagreement, arguing that two popular approaches, supra-Bayesianism and the standard model of expert deference, are insufficient. I develop a novel model of expert deference and show how it deals with many of the problems raised for those approaches. I then turn to opinion pooling, a popular method for dealing with disagreement. I show that various theoretical motivations for pooling functions are irrelevant to realistic policymaking cases. This leads to a cautious recommendation of linear pooling. However, I then show that any pooling method relies on value judgements that are hidden in the selection of the scoring rule. My focus then narrows to a more specific case of scientific uncertainty: multiple models of the same system. I introduce a particular case study involving hurricane models developed to support insurance decision-making. I recapitulate my analysis of opinion pooling in the context of model ensembles, confirming that my hesitations apply. This motivates a shift of perspective, to viewing the problem as a decision-theoretic one. I rework a recently developed ambiguity theory, called the confidence approach, to take input from model ensembles. I show how it facilitates the resolution of the policymaker’s problem in a way that avoids the issues encountered in previous chapters. This concludes my main study of the problem of expert disagreement. In the final chapter, I turn to methodological reflection. I argue that philosophers who employ the mathematical methods of the prior chapters are modelling. Employing results from the philosophy of scientific models, I develop the theory of normative modelling. I argue that it has important methodological conclusions for the practice of formal epistemology, ruling out popular moves such as searching for counterexamples.
While public health organizations can detect disease spread, few can monitor and respond to real-time misinformation. Misinformation risks the public’s health, the credibility of institutions, and the safety of experts and front-line workers. Big Data, and specifically publicly available media data, can play a significant role in understanding and responding to misinformation. The Public Good Projects uses supervised machine learning to aggregate and code millions of conversations relating to vaccines and the COVID-19 pandemic broadly, in real-time. Public health researchers supervise this process daily, and provide insights to practitioners across a range of disciplines. Through this work, we have gleaned three lessons for addressing misinformation. First, sources of vaccine misinformation are known; there is a need to operationalize learnings and engage the pro-vaccination majority in debunking vaccine-related misinformation. Second, existing systems can identify and track threats against health experts and institutions, which have been subject to unprecedented harassment; this supports their safety and helps prevent the further erosion of trust in public institutions. Third, responses to misinformation should draw from cross-sector crisis management best practices and address coordination gaps. Real-time monitoring and addressing misinformation should be a core function of public health, and public health should be a core use case for data scientists developing monitoring tools. The tools to accomplish these tasks are available; it remains up to us to prioritize them.
Is there any number of people you should save from paralysis rather than saving one person from death? Is there any number of people you should save from a migraine rather than saving one person from death? Many people answer ‘yes’ and ‘no’, respectively. The aim of partially aggregative moral views is to capture and justify combinations of intuitions like these. These views contrast with fully aggregative moral views, which imply that the answer to both questions is ‘yes’, and with non-aggregative moral views, which imply that the answer to both questions is ‘no’. In this paper, I review the most natural and influential ways of developing partially aggregative views and explain the main problems they face.
Currently, one of the most influential theories of consciousness is Rosenthal's version of higher-order-thought (HOT). We argue that the HOT theory allows for two distinct interpretations: a one-component and a two-component view. We further argue that the two-component view is more consistent with his effort to promote HOT as an explanatory theory suitable for application to the empirical sciences. Unfortunately, the two-component view seems incapable of handling a group of counterexamples that we refer to as cases of radical confabulation. We begin by introducing the HOT theory and by indicating why we believe it is open to distinct interpretations. We then proceed to show that it is incapable of handling cases of radical confabulation. Finally, in the course of considering various possible responses to our position, we show that adoption of a disjunctive strategy, one that would countenance both one-component and two-component versions, would fail to provide any empirical or explanatory advantage.
This paper addresses arguments that “separability” is an assumption of Bell’s theorem, and that abandoning this assumption in our interpretation of quantum mechanics (a position sometimes referred to as “holism”) will allow us to restore a satisfying locality principle. Separability here means that all events associated to the union of some set of disjoint regions are combinations of events associated to each region taken separately. In this article, it is shown that: (a) localised events can be consistently defined without implying separability; (b) the definition of Bell’s locality condition does not rely on separability in any way; (c) the proof of Bell’s theorem does not use separability as an assumption. If, inspired by considerations of non-separability, the assumptions of Bell’s theorem are weakened, what remains no longer embodies the locality principle. Teller’s argument for “relational holism” and Howard’s arguments concerning separability are criticised in the light of these results. Howard’s claim that Einstein grounded his arguments on the incompleteness of QM with a separability assumption is also challenged. Instead, Einstein is better interpreted as referring merely to the existence of localised events. Finally, it is argued that Bell rejected the idea that separability is an assumption of his theorem.
_Teachers as Researchers_ urges teachers - as both producers and consumers of knowledge - to engage in the debate about educational research by undertaking meaningful research themselves. Teachers are being encouraged to carry out research in order to improve their effectiveness in the classroom, but this book suggests that they also reflect on and challenge the reductionist and technicist methods that promote a 'top down' system of education. It argues that only by engaging in complex, critical research will teachers rediscover their professional status, empower their practice in the classroom and improve the quality of education for their pupils. Now re-released to introduce this classic guide for teachers, the new edition of _Teachers as Researchers_ also includes an introductory chapter by Shirley R. Steinberg that sets the book within the context of both the subject and the historical perspective. In addition, she provides information on some key writing that extends the bibliography of this influential book, thereby bringing the material fully up to date with current research. Postgraduate students of education and experienced teachers will find much to inspire and encourage them in this definitive book.
Is there any number of people you should save from paralysis rather than saving one person from death? Is there any number of people you should save from a migraine rather than saving one person from death? Many people answer “yes” and “no,” respectively. The aim of partially aggregative moral views is to capture and justify combinations of intuitions like these. In this article, I develop a risk-based reductio argument that shows that there can be no adequate partially aggregative view. I then argue that the only plausible response to this reductio is to accept a fully aggregative view.
This paper offers directions for the continuing dialogue between business ethicists and environmental philosophers. I argue that a theory of corporate social responsibility must be consistent with, if not derived from, a model of sustainable economics rather than the prevailing neoclassical model of market economics. I use environmental examples to critique both classical and neoclassical models of corporate social responsibility and sketch the alternative model of sustainable development. After describing some implications of this model at the level of individual firms and industries, I offer an ethical justification of the sustainability alternative that is derived from the same values that underlie traditional market economics.
The indispensability argument is a method for showing that abstract mathematical objects exist. Various versions of this argument have been proposed. Lately, commentators seem to have agreed that a holistic indispensability argument will not work, and that an explanatory indispensability argument is the best candidate. In this paper I argue that the dominant reasons for rejecting the holistic indispensability argument are mistaken. This is largely due to an overestimation of the consequences that follow from evidential holism. Nevertheless, the holistic indispensability argument should be rejected, but for a different reason: for an indispensability argument relying on holism to work, it must invoke an unmotivated version of evidential holism. Such an argument will be unsound. Correcting the argument with a proper construal of evidential holism means that it can no longer deliver mathematical Platonism as a conclusion: such an argument for Platonism will be invalid. I then show how the reasons for rejecting the holistic indispensability argument importantly constrain what kind of account of explanation will be permissible in explanatory versions.
Internalism in epistemology is the view that all the factors relevant to the justification of a belief are importantly internal to the believer, while externalism is the view that at least some of those factors are external. This extremely modest first approximation cries out for refinement (which we undertake below), but is enough to orient us in the right direction, namely that the debate between internalism and externalism is bound up with the controversy over the correct account of the distinction between justified beliefs and unjustified beliefs. Understanding that distinction has occasionally been obscured by attention to the analysis of knowledge and to the Gettier problem, but our view is that these problems, while interesting, should not completely seduce philosophers away from central questions about epistemic justification. A plausible starting point in the discussion of justification is that the distinction between justified beliefs and unjustified beliefs is not the same as the distinction between true beliefs and false beliefs. This follows from the mundane observation that it is possible to rationally believe…
This article addresses the question of whether we should conceive of mechanisms as productive of change in a regular way. I argue that, if mechanisms are characterized as fully regular, on the one hand, then not enough processes will count as mechanisms for them to be interesting or useful. If no appeal to regularity is made at all in their characterization, on the other hand, then mechanisms can no longer be useful for grounding prediction and supporting intervention strategies. I conclude that, if the New Mechanistic Philosophy is to be successful, a stochastic characterization of mechanisms must be adopted.
In what follows, I suggest that it makes good sense to think of the truth of the probabilistic generalizations made in the life sciences as metaphysically grounded in stochastic mechanisms in the world. To further understand these stochastic mechanisms, I take the general characterization of mechanism offered by MDC (2000: 1–25) and explore how it fits with several of the going philosophical accounts of chance: subjectivism, frequentism, Lewisian best-systems, and propensity. I argue that neither subjectivism, frequentism, nor a best-system-style interpretation of chance will give us what we need from an account of stochastic mechanism, but some version of propensity theory can. I then draw a few important lessons from recent propensity interpretations of fitness in order to present a novel propensity interpretation of stochastic mechanism according to which stochastic mechanisms are thought to have probabilistic propensities to produce certain outcomes over others. This understanding of stochastic mechanism, once fully fleshed out, provides the benefits of allowing the stochasticity of a particular mechanism to be an objective property in the world, a property investigable by science, a way of quantifying the stochasticity of a particular mechanism, and a way to avoid a problematic commitment to the causal efficacy of propensities.
Aristotle's doctrine of the mean is expressed in quantitative terms, but this has been hard for some people to take literally, its more elaborate versions sometimes being described as “extremely silly.” However, roughly two books of the Nicomachean Ethics are permeated with talk of character traits that are either deficient or excessive, and the aim of this paper is to examine how the doctrine might meet the objections of its critics.
This paper is an examination of evidential holism, a prominent position in epistemology and the philosophy of science which claims that experiments only ever confirm or refute entire theories. The position is historically associated with W.V. Quine, and it is at once both popular and notorious, as well as being largely under-described. But even though there is no univocal statement of what holism is or what it does, philosophers have nevertheless made substantial assumptions about its content and its truth. Moreover, they have drawn controversial and important conclusions from these assumptions. In this paper I distinguish three types of evidential holism and argue that the most oft-cited and controversial thesis is entirely unmotivated. The other two theses are much overlooked, but are well motivated and free from controversial implications.
The article considers structuralism as a philosophy of mathematics, as based on the commonly accepted explicit mathematical concept of a structure. Such a structure consists of a set with specified functions and relations satisfying specified axioms, which describe the type of the structure. Examples of such structures, such as groups and spaces, are described. The viewpoint is now dominant in organizing much of mathematics, but it does not cover all mathematics, in particular most applications. It does not explain why certain structures are dominant, nor why the same mathematical structure can have so many different and protean realizations. 'Structure' is just one part of the full situation, which must somehow connect the ideal structures with their varied examples.
Many of us experience the activities which fill our everyday lives as meaningful, and to do so we must (and do) hold them to be important. However, reflection undercuts this confidence: our activities are aimed at ends which are arbitrary, in that we have reason to regard our taking them so seriously as lacking justification; they are comparatively insignificant; and they leave little of any real permanence. Even though we take our activities seriously, and our everyday lives to be important, on reflection they seem less meaningful than we suppose. Thomas Nagel claims that this discrepancy is inevitable, and thus that our lives are absurd and to be approached with irony. The aim of this paper is to explore whether it is inevitable, and in particular to examine recent formulations (of Peter Singer, Robert Nozick, and others) of the old idea that we can transcend this absurdity by forming attachments less susceptible to being undercut.
I propose we abandon the unit concept of "the evolutionary synthesis". There was much more to evolutionary studies in the 1920s and 1930s than is suggested in our commonplace narratives of this object in history. Instead, four organising threads capture much of evolutionary studies at this time. First, the nature of species and the process of speciation were dominating, unifying subjects. Second, research into these subjects developed along four main lines, or problem complexes: variation, divergence, isolation, and selection. Some calls for 'synthesis' focused on these problem complexes (sometimes on one of these; other times, all). In these calls, comprehensive and pluralist compendia of plausibly relevant elements were preferred over reaching consensus about the value of particular formulae. Third, increasing confidence in the study of common problems coincided with methodological and epistemic changes associated with experimental taxonomy. Finally, the surge of interest in species problems and speciation in the 1930s is intimately tied to larger trends, especially a shifting balance in the life sciences towards process-based biologies and away from object-based naturalist disciplines. Advocates of synthesis in evolution supported, and were adapting to, these larger trends.
The theory of we-mode cognition seeks to expand our understanding of the cognition involved in joint action, and therein claims to explain how we can have non-theoretical and non-simulative access to the minds of others. A basic tenet of this theory is that each individual jointly intends to accomplish some outcome together, requiring the adoption of a "first-person plural perspective" that is neither strictly individualistic – in the sense that a we-mode state is enabled by the joint involvement of others – nor strictly pluralistic – in the sense that the involved individuals, rather than a 'group', are the bearers of the relevant joint intention. Whilst I concur with the idea that, in certain circumstances, we cognise from an irreducible 'first-person plural perspective', Gallotti & Frith's existing proposal of we-mode cognition is in need of theoretical clarification. In this paper, I deliver such clarification so that the theory of we-mode cognition is re-defined as: (a) sensitive to the phenomenological transformation that is induced by the embodied co-presence of others, and (b) limited to cases in which one intentionally attends to the capacities of one's co-participant in joint action.
Mensch and Barge, in their interpretation of Alasdair MacIntyre's critique of genealogical ethics as a basis of ethical weakness in the emerging field of "leadership-as-practice" (L-A-P), suggest that L-A-P lacks ethical grounding, especially because of its relativist philosophy. I address this valid ethical concern with L-A-P theory by arguing that there is a form of realism in Nietzschean axiology, and that the dialogic potentialities in material-social interactions may offer a greater capacity for ethical reflexivity than a reliance on rules.
Physical Computation is the summation of Piccinini’s work on computation and mechanistic explanation over the past decade. It draws together material from papers published during that time, but also provides additional clarifications and restructuring that make this the definitive presentation of his mechanistic account of physical computation. This review will first give a brief summary of the account that Piccinini defends, followed by a chapter-by-chapter overview of the book, before finally discussing one aspect of the account in more critical detail.
Saunders Mac Lane has drawn attention many times, particularly in his book Mathematics: Form and Function, to the system of set theory whose axioms are Extensionality, Null Set, Pairing, Union, Infinity, Power Set, Restricted Separation, Foundation, and Choice, afforced by the principle of Transitive Containment; we shall refer to this as Mac Lane's system. His system is naturally related to systems derived from topos-theoretic notions concerning the category of sets, and is, as Mac Lane emphasises, one that is adequate for much of mathematics. In this paper we show that the consistency strength of Mac Lane's system is not increased by adding the axioms of Kripke–Platek set theory, and even the Axiom of Constructibility, to Mac Lane's axioms; our method requires a close study of Axiom H, which was proposed by Mitchell; we digress to apply these methods to subsystems of Zermelo set theory, and obtain an apparently new proof that Zermelo set theory is not finitely axiomatisable; we study Friedman's strengthening of Kripke–Platek set theory, and the Forster–Kaye subsystem of Mac Lane's system, and use forcing over ill-founded models to establish independence results concerning these systems; we show, again using ill-founded models, that one of these systems proves the consistency of another; turning to systems that are type-theoretic in spirit or in fact, we show by arguments of Coret and Boffa that a weak form of Stratified Collection is provable, and that one system is a conservative extension of another for stratified sentences, from which we deduce that a strong stratified version also holds; we analyse the known equiconsistency of Mac Lane's system with the simple theory of types and give Lake's proof that an instance of Mathematical Induction is unprovable in Mac Lane's system; we study a simple set-theoretic assertion, namely that there exists an infinite set of infinite sets, no two of which have the same cardinal, and use it to establish the failure of the full schema of Stratified Collection; and we determine the point of failure of various other schemata.
The paper closes with some philosophical remarks.