The new millennium has opened with a perfectly splendid decade of scholarship relating to the ‘Species Problem’. So, at least we now have a clear idea of what this is, but still no clear solution that will suit both biologists and philosophers. Richards has recently attempted to capture this story and to fill the void with two projects in one book. The first project is a descriptive and analytical history of the problem, which provides links to other recent works and thereby allows one to reconstruct the literature fully. The second is prescriptive and presents Richards’s solution via a ‘division of labour in a conceptual framework’, followed by recapitulation and conclusions. My assessment is that the first project will appeal more to biologists and the second to philosophers. There is much of value in Richards’s approach, including an excellent evaluation of the essentialism story in the descriptive project and a clear exposition of several key issues, such as the ‘species-as-individuals’ versus ‘species-as-categories’ debate, covered in the second project. Interesting and informative as these arguments undoubtedly are, something still seems to be missing. In this essay I suggest that this perception arises from Richards’s failure to embrace ideas about the importance of relativity and contingency in species definitions, and further that his new conceptual framework lacks one hierarchical level to link overarching lineage concepts of species as evolutionary units with practical definitions for their recognition. In my view, the missing link is reproductive isolation, and I conclude my review by presenting a prescriptive project for biologists to balance the one that Richards has delivered to philosophers.
A symposium was held at the Centre for Research in the Arts, Social Sciences and Humanities at the University of Cambridge on June 12th 2019, ‘Rethinking Repetition in a Digital Age’, at which Geoff Stead, a leading mobile tech designer, was a keynote speaker. The focus of the Cambridge UK event was on how the potentials of digital technologies—whose harms have received widespread attention—could be redirected for the social good. For Stead, this is precisely what Babbel are doing in their approach to commercial digital language learning. Stead spoke to the idea of reversing our personal relationships to mechanical affordances, and finding empowerment in understanding their designed logics. The transcript of the interview below, made in October 2021, revisits some of the main points he raised at that event.
Recent conversation has blurred two very different social epistemic phenomena: echo chambers and epistemic bubbles. Members of epistemic bubbles merely lack exposure to relevant information and arguments. Members of echo chambers, on the other hand, have been brought to systematically distrust all outside sources. In epistemic bubbles, other voices are not heard; in echo chambers, other voices are actively undermined. It is crucial to keep these phenomena distinct. First, echo chambers can explain the post-truth phenomena in a way that epistemic bubbles cannot. Second, each type of structure requires a distinct intervention. Mere exposure to evidence can shatter an epistemic bubble, but may actually reinforce an echo chamber. Finally, echo chambers are much harder to escape. Once in their grip, an agent may act with epistemic virtue, but social context will pervert those actions. Escape from an echo chamber may require a radical rebooting of one's belief system.
The ‘Problem of Evil’ has been the focus of a number of articles in Think. Here, Timothy Chambers offers an unusual perspective on this seemingly intractable difficulty facing theists. ‘Did not I weep for him whose day was hard? Was not my soul grieved for the poor? But when I looked for good, evil came; and when I waited for light, darkness came.’
Following Cass Sunstein's popular treatment of the concept, echo chambers are often defined as environments which exclude contrary opinions through omission. C. Thi Nguyen contests the popular usage and defines echo chambers in terms of in-group trust and out-group distrust. In this paper, I argue for a more comprehensive treatment. While both exclusion by omission and out-group distrust help sustain echo chambers, neither defines the phenomenon. I develop a social network model of echo chambers which focuses on the role of belief-reinforcing echoes. First, I argue that the model allows us to incorporate Nguyen's main point about distrust without construing other commentators as deeply mistaken about the nature of echo chambers. Second, I use the model to develop an account of collaborative resistance and use it to clarify the role echo chambers play in spreading misinformation.
My aim in this paper is to engage with C. Thi Nguyen’s characterization of the echo chamber and to propose two things. First, I argue that a proper reading of his concept of echo chamber should make use of the notion of ignorance in the form of a structural epistemic insensitivity. My main contention is that ignorance as a substantive structural practice accounts for the epistemically deleterious effects of echo chambers. Second, I propose that from the talk of ignorance we should be able to see echo chambers in terms of their more harmful impacts in our daily lives. To do that, I argue that we should think of echo chambers as tools to promote hermeneutical domination. If my representation of Nguyen’s concept is accurate, I believe we can see some important theoretical consequences stemming from the way Nguyen understands it.
In Reasonable Democracy, Simone Chambers describes, explains, and defends a discursive politics inspired by the work of Jürgen Habermas. In addition to comparing Habermas's ideas with other non-Kantian liberal theories in clear and accessible prose, Chambers develops her own views regarding the role of discourse and its importance within liberal democracies. Beginning with a deceptively simple question—"Why is talking better than fighting?"—Chambers explains how the idea of talking provides a rich and compelling view of morality, rationality, and political stability. She considers talking as a way for people to respect each other as moral agents, as a way to reach reasonable and legitimate solutions to disputes, and as a way to reproduce and strengthen shared understandings. In the course of this argument, she defends modern universalist ethics, communicative rationality, and what she calls a "discursive political culture," a concept that locates the political power of discourse and deliberation not so much in institutions of democratic decision-making as in the type of conversations that go on around these institutions. While discourse and deliberation cannot replace voting, bargaining, or compromise, Chambers argues, it is important to maintain a background moral conversation in which to anchor other activities. As an extended case study, Chambers examines the conversation about language rights that has been taking place for more than twenty years in Quebec. A culture of dialogue, she shows, has proved a positive and powerful force in resolving some of the disagreements between the two linguistic communities there.
This paper argues against the view that trolley cases are of little or no relevance to the ethics of automated vehicles. Four arguments for this view are outlined and rejected: the Not Going to Happen Argument, the Moral Difference Argument, the Impossible Deliberation Argument and the Wrong Question Argument. In making clear where these arguments go wrong, a positive account is developed of how trolley cases can inform the ethics of automated vehicles.
Derided and disregarded by many of his contemporaries, Michel Foucault is now regarded as probably the most influential thinker of the twentieth century; his work is studied across the humanities and social sciences. Reading Foucault, however, can be a challenge, as can writing about him, but in Understanding Foucault, the authors offer an entertaining and informative introduction to his thinking. They cover all the issues Foucault dealt with, including power, knowledge, subjectivity and sexuality, and discuss the development of his analysis throughout his work.
1. Introduction (Geoff Cockfield, Ann Firth and John Laurent)
2. The Role of Thumos in Adam Smith’s System (Lisa Hill)
3. Adam Smith’s Treatment of the Greeks in The Theory of Moral Sentiments: The Case of Aristotle (Richard Temple-Smith)
4. Adam Smith, Religion and the Scottish Enlightenment (Pete Clarke)
5. The ‘New View’ of Adam Smith and the Development of his Views Over Time (James E. Alvey)
6. The Moon Before the Dawn: A Seventeenth-Century Precursor of Smith’s The Theory of Moral Sentiments (Jack Barbalet)
7. Adam Smith’s Moral Philosophy as Ethical Self-formation (Ann Firth)
8. Science and its Applications in The Theory of Moral Sentiments (David Thorpe)
9. Adam Smith, Charles Darwin and the Moral Sense (John Laurent and Geoff Cockfield)
Traditional approaches tend to regard figuration (and by extension, deference in general) as an essentially marked or playful use of language, which is associated with a pronounced stylistic effect. For linguistic purposes, however, there is no reason for assigning a special place to deferred uses that are stylistically notable — the sorts of usages that people sometimes qualify with a phrase like "figuratively speaking." There is no important linguistic difference between using redcoat to refer to a British soldier and using suit to refer to a corporate executive (as in "A couple of suits stopped by to talk about the new products"). What creates the stylistic effect of the latter is not the mechanism that generates it, but the marked background assumptions that license it — here, the playful presupposition that…
The phenomenon of systematic polysemy offers a fruitful domain for examining the theoretical differences between lexicological and lexicographic approaches to description. We consider here the process that provides for systematic conversion of count to mass nouns in English (a chicken → chicken, an oak → oak, etc.). From the point of view of lexicology, we argue, standard syntactic and pragmatic tests suggest the phenomenon should be described by means of a single unindividuated transfer function that does not distinguish between interpretations (rabbit = "meat" vs. "fur"). From the point of view of lexicography, however, these pragmatically determined "sense precisions" are made part of explicit description via the inclusion of semantic "licenses," a mechanism distinct from lexical rules.
Contextualism in epistemology has traditionally been understood as the view that “know” functions semantically like an indexical term, encoding different contents in contexts with different epistemic standards. But the indexical hypothesis about “know” faces a range of objections. This article explores an alternative version of contextualism on which “know” is a semantically stable term, and the truth-conditional variability in knowledge claims is a matter of pragmatic enrichment. The central idea is that in contexts with stringent epistemic standards, knowledge claims are narrowed: “know” is used in such contexts to make assertions about particularly demanding types of knowledge. The resulting picture captures all of the intuitive data that motivate contextualism while sidestepping the controversial linguistic thesis at its heart. After developing the view, the article shows in detail how it avoids one influential linguistic objection to traditional contextualism concerning indirect speech reports, and then answers an objection concerning the unavailability of certain types of clarification speeches.
Alvin Plantinga has argued that evolutionary naturalism (the idea that God does not tinker with evolution) undermines its own rationality. Natural selection is concerned with survival and reproduction, and false beliefs conjoined with complementary motivational drives could serve the same aims as true beliefs. Thus, argues Plantinga, if we believe we evolved naturally, we should not think our beliefs are, on average, likely to be true, including our beliefs in evolution and naturalism. I argue herein that our cognitive faculties are less reliable than we often take them to be, that it is theism which has difficulty explaining the nature of our cognition, that much of our knowledge is not passed through biological evolution but learned and transferred through culture, and that the unreliability of our cognition helps explain the usefulness of science.
Allshorn, Geoff: I believe the year in which I was born to be a very important year, perhaps not surprisingly, but particularly because of other events which would ultimately become significant in my own life.
Teaching Secondary Science: Theory and Practice provides a dynamic approach to preparing preservice science teachers for practice. Divided into two parts - theory and practice - the text allows students to first become confident in the theory of teaching science before showing how this theory can be applied to practice through ideas for implementation, such as sample lesson plans. These examples span a variety of age levels and subject areas, allowing preservice teachers to adapt each exercise to suit their needs when they enter the classroom. Each chapter is supported by pedagogical features, including learning objectives, reflections, scenarios, key terms, questions, research topics and further readings. Written by leading science education researchers from universities across Australia, Teaching Secondary Science is a practical resource that will continue to inspire preservice teachers as they move from study into the classroom. This book includes a single-use twelve-month subscription to Cambridge Dynamic Science.
Suppose a driverless car encounters a scenario where harm to at least one person is unavoidable and a choice about how to distribute harms between different persons is required. How should the driverless car be programmed to behave in this situation? I call this the moral design problem. Santoni de Sio defends a legal-philosophical approach to this problem, which aims to bring us to a consensus on the moral design problem despite our disagreements about which moral principles provide the correct account of justified harm. He then articulates an answer to the moral design problem based on the legal doctrine of necessity. In this paper, I argue that Santoni de Sio’s answer to the moral design problem does not achieve the aim of the legal-philosophical approach. This is because his answer relies on moral principles which, at least, utilitarians have reason to reject. I then articulate an alternative reading of the doctrine of necessity, and construct a partial answer to the moral design problem based on this. I argue that utilitarians, contractualists and deontologists can agree on this partial answer, even if they disagree about which moral principles offer the correct account of justified harm.
In his excellent essay, ‘Nudges in a post-truth world’, Neil Levy argues that ‘nudges to reason’, or nudges which aim to make us more receptive to evidence, are morally permissible. A strong argument against the moral permissibility of nudging is that nudges fail to respect the autonomy of the individuals affected by them. Levy argues that nudges to reason do respect individual autonomy, such that the standard autonomy objection fails against nudges to reason. In this paper, I argue that Levy fails to show that nudges to reason respect individual autonomy.
This book will be of interest to any person, whether an interested party, student, or scholar of the Roman Empire. It highlights the way in which we should consider ancient figures—be they good or bad.
The product/process distinction with regard to “argument” has a longstanding history and foundational role in argumentation theory. I shall argue that, regardless of one’s chosen ontology of arguments, arguments are not the product of some process of arguing. Hence, appeal to the distinction is distorting the very organizational foundations of argumentation theory and should be abandoned.
A horseshoe is regarded as a lucky, perhaps even romantic, symbol of our industrial heritage. Why is it, then, that much of English literature, from Mandeville's ‘Grumbling Hive’ on, portrays business in a murky light? The paper begins with an analysis of this phenomenon and concludes that it is the institutionalisation and legitimisation of avarice and its consequential effects that give rise to such a portrayal. A horseshoe has also been used as a convenient means of conceptualising an answer to the question this conclusion raises: ‘Who should control the corporation and for what ends?’, and of discussing recent developments in corporate social responsibility. Drawing on research evidence, the paper demonstrates how corporations are simultaneously under pressure from society and responding to its concerns. The paper concludes that these current developments can at best ameliorate the situation, and that what is necessary is to rediscover the notion of corporate virtue, instead of putting virtue at the service of vice.
Transitional justice scholars are increasingly concerned with measuring the impact of transitional justice (TJ) initiatives. Scholars often assume that TJ mechanisms must be properly designed and ordered to achieve lasting effect, but the impact of TJ timing and sequencing has attracted relatively little theoretical or empirical attention. Focusing on Latin America, this article explores variation within the region as to when TJ occurs and the order in which mechanisms are implemented. We utilize qualitative comparative analysis to assess the impact of TJ timing and sequencing on democratic development. We find little evidence for path dependency owing to the chronological order of mechanisms. We do find, however, that amnesties and trials approach a sufficient condition for democratic consolidation in Latin America; trials, however, come closest to being a necessary condition for successful democratic consolidation.
Even if our justified beliefs are closed under known entailment, there may still be instances of transmission failure. Transmission failure occurs when P entails Q, but a subject cannot acquire a justified belief that Q by deducing it from P. Paradigm cases of transmission failure involve inferences from mundane beliefs (e.g., that the wall in front of you is red) to the denials of skeptical hypotheses relative to those beliefs (e.g., that the wall in front of you is not white and lit by red lights). According to the Bayesian explanation, transmission failure occurs when (i) the subject’s belief that P is based on E, and (ii) P(Q|E) ≤ P(Q). There are, however, cases of transmission failure in which this condition does not hold. No modifications of the Bayesian explanation are capable of accommodating such cases, so the explanation must be rejected as inadequate. Alternative explanations employing simple subjunctive conditionals are fully capable of capturing all of the paradigm cases, as well as those missed by the Bayesian explanation.
Collective intelligence is much talked about but remains very underdeveloped as a field. There are small pockets in computer science and psychology and fragments in other fields, ranging from economics to biology. New networks and social media also provide a rich source of emerging evidence. However, there are surprisingly few useable theories, and many of the fashionable claims have not stood up to scrutiny. The field of analysis should be how intelligence is organised at large scale—in organisations, cities, nations and networks. The paper sets out some of the potential theoretical building blocks, suggests an experimental and research agenda, shows how it could be analysed within an organisation or business sector and points to the possible intellectual barriers to progress.
Arendt famously pointed out that only citizenship actually confers rights in the modern world. To be a citizen is to be one who has the ‘right to have rights’. Arendt’s analysis emerges out of her recognition that there is a contradiction between this way of conferring rights as tied to the nation-state system and the more philosophical and ethical conceptions of the ‘rights of man’ and notions of ‘human rights’ like those championed by thinkers such as Immanuel Kant, who understands rights as belonging universally to all humans as a result of facts about what it means to be human. Étienne Balibar, in his recent work, adds to this by pointing out that there is a contradictory movement between this universalizing tendency in philosophical thought and the production of the citizen-subject out of the exclusionary acts of law and force. In this article, I put Balibar’s work in dialogue with the contemporary moment where we are witnessing the re-emergence of a nativist right populism. I use Balibar to help distinguish between three modes of political existence that we find today. Two of these three are more or less well understood. They are the non-citizen, who has no – or almost no – rights in a given nation-state, and the citizen, who enjoys the full benefit of the rights a given nation-state has to give. The third category is what I term the ‘nominal citizen’. This last category is somewhere in between full citizenship and non-citizenship. Individuals in this last category have rights in name but are largely unable to exercise them. Understanding this last category can, among other things, help us at least partially make sense of the return of right populism and also help us see the ways in which the modern category of citizenship, with its contradictions as elaborated by Balibar, can provide a means for resistance.
Following the American indie cinema boom of the 1990s and the creation of "specialty" divisions by several Hollywood studios, many predicted an end to both the indie sector's viability and the making of films with ambitions beyond the commercial mainstream. Yet, as Geoff King demonstrates, plenty of distinct indie productions continue to thrive, even in the face of difficult economic circumstances. Recasting the term "indie" to denote a particular form of independent feature production that has risen to prominence in the twenty-first century, King identifies and discusses the new opportunities available to indie filmmakers. These new options and techniques include low-cost digital video and a range of Internet and social-media ventures providing funding, distribution, promotion, and sales. He also covers the ultra-low-budget "mumblecore" movement; the social realism of such filmmakers as Kelly Reichardt and Ramin Bahrani; the "digital desktop" aesthetics of Jonathan Caouette's Tarnation and Arin Crumley and Susan Buice's Four Eyed Monsters; and the effect of certain dominant discourses, such as the articulation of notions of "true" indie film and its opposition to what some see as the quirky contrivances of crossover hits such as Little Miss Sunshine and Juno. King ultimately locates a strong vein of continuity in indie practice, both industrially and in the textual qualities that define individual features.
The use of machine learning systems for decision-support in healthcare may exacerbate health inequalities. However, recent work suggests that algorithms trained on sufficiently diverse datasets could in principle combat health inequalities. One concern about these algorithms is that their performance for patients in traditionally disadvantaged groups exceeds their performance for patients in traditionally advantaged groups. This renders the algorithmic decisions unfair relative to the standard fairness metrics in machine learning. In this paper, we defend the permissible use of affirmative algorithms; that is, algorithms trained on diverse datasets that perform better for traditionally disadvantaged groups. Whilst such algorithmic decisions may be unfair, the fairness of algorithmic decisions is not the appropriate locus of moral evaluation. What matters is the fairness of final decisions, such as diagnoses, resulting from collaboration between clinicians and algorithms. We argue that affirmative algorithms can permissibly be deployed provided the resultant final decisions are fair.
The Thousands of Problems for Theorem Provers (TPTP) World is a well-established infrastructure that supports research, development and deployment of automated theorem proving systems. This paper provides an overview of the logic languages of the TPTP World, from classical first-order form (FOF), through typed FOF, up to typed higher-order form, and beyond to non-classical forms. The logic languages are described in a non-technical way and are illustrated with examples using the TPTP language.
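As a minimal illustration of the kind of example the paper describes, the classical first-order form (FOF) layer annotates each formula with a name and a role (axiom, conjecture, etc.); the problem below is a standard toy syllogism, not one drawn from the paper itself:

```
% TPTP FOF syntax: '!' is the universal quantifier, '=>' is implication.
fof(humans_mortal, axiom, ! [X] : (human(X) => mortal(X))).
fof(socrates_human, axiom, human(socrates)).
fof(socrates_mortal, conjecture, mortal(socrates)).
```

The typed (TFF) and higher-order (THF) languages mentioned in the overview keep this same annotated-formula shape, adding type declarations via formulas with the `type` role.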
Whether it seems that you know something depends in part upon practical factors. When the stakes are low, it can seem to you that you know that p, but when the stakes go up it'll seem to you that you don't. The apparent sensitivity of knowledge to stakes presents a serious challenge to epistemologists who endorse a stable semantics for knowledge attributions and reject the idea that whether you know something depends on how much is at stake. After arguing that previous attempts to meet this challenge fall short, I offer a new solution: the unassertability account. The account starts with the observation that high stakes subjects aren't in an epistemic position to assert. We generally presuppose that knowing is sufficient for epistemically proper assertion, but this presupposition only stands up to scrutiny if we draw a distinction between two notions of epistemic propriety, and we shouldn't expect ordinary speakers to draw it. A subject in a high stakes situation who fails to draw the distinction will be led by the sufficiency claim to treat anything she isn't in a position to assert as something she isn't in a position to know. The sensitivity of epistemically proper assertion to practical factors explains the merely apparent sensitivity of knowledge to stakes.
Suppose that an autonomous vehicle encounters a situation where (i) imposing a risk of harm on at least one person is unavoidable; and (ii) a choice about how to allocate risks of harm between different persons is required. What does morality require in these cases? Derek Leben defends a Rawlsian answer to this question. I argue that we have reason to reject Leben’s answer.
This book provides an integrated and philosophically grounded framework which enables a coherent approach to organizations and organizational ethics from the perspective of practitioners in the workplace, from the perspective of managers in organizations, and from the perspective of organizations themselves.
Some expressions of English, like the demonstratives ‘this’ and ‘that’, are referentially promiscuous: distinct free occurrences of them in the same sentence can differ in content relative to the same context. One lesson of referentially promiscuous expressions is that basic logical properties like validity and logical truth obtain or fail to obtain only relative to a context. This approach to logic can be developed in just as rigorous a manner as David Kaplan’s classic logic of demonstratives. The result is a logic that applies to arguments in English containing multiple occurrences of referentially promiscuous expressions.
Demonstratives and Indexicals. In the philosophy of language, an indexical is any expression whose content varies from one context of use to another. The standard list of indexicals includes pronouns such as “I”, “you”, “he”, “she”, “it”, “this”, “that”, plus adverbs such as “now”, “then”, “today”, “yesterday”, “here”, and “actually”. Other candidates include the tenses …