The Protestant theologian Karl Girgensohn came to public attention in 1903 with his early work on the nature of religion, which expresses a strong religious-philosophical standpoint. The core consideration here is a cognitive theory of the religious, in which the idea of God is central. Taking Girgensohn’s biography into account, the present contribution addresses this early study on the nature of religion and outlines the author’s transition from a philosophical to an experimental-introspective approach to research on religiosity, which then became the foundation for the Dorpat School of the psychology of religion. Based on Girgensohn’s early work, implications for contemporary empirical theology are finally proposed.
Seven decades after his death, German Jewish writer, philosopher, and literary critic Walter Benjamin continues to fascinate and influence. Here Uwe Steiner offers a comprehensive and sophisticated introduction to the oeuvre of this intriguing theorist. Acknowledged only by a small circle of intellectuals during his lifetime, Benjamin is now a major figure whose work is essential to an understanding of modernity. Steiner traces the development of Benjamin’s thought chronologically through his writings on philosophy, literature, history, politics, the media, art, photography, cinema, technology, and theology. Walter Benjamin reveals the essential coherence of its subject’s thinking while also analyzing the controversial or puzzling facets of Benjamin’s work. That coherence, Steiner contends, can best be appreciated by placing Benjamin in his proper context as a member of the German philosophical tradition and a participant in contemporary intellectual debates. As Benjamin’s writing attracts more and more readers in the English-speaking world, Walter Benjamin will be a valuable guide to this fascinating body of work.
Many scientists routinely generalize from study samples to larger populations. It is commonly assumed that this cognitive process of scientific induction is a voluntary inference in which researchers assess the generalizability of their data and then draw conclusions accordingly. Here we challenge this view and argue for a novel account. The account describes scientific induction as involving by default a generalization bias that operates automatically and frequently leads researchers to unintentionally generalize their findings without sufficient evidence. The result is unwarranted, overgeneralized conclusions. We support this account of scientific induction by integrating a range of disparate findings from across the cognitive sciences that have until now not been connected to research on the nature of scientific induction. The view that scientific induction involves by default a generalization bias calls for a revision of our current thinking about scientific induction and highlights an overlooked cause of the replication crisis in the sciences. Commonly proposed interventions to tackle scientific overgeneralizations that may feed into this crisis need to be supplemented with cognitive debiasing strategies to most effectively improve science.
A detailed, clear, and comprehensive overview of the current philosophical debate on the ethics of torture. The question of when, and under what circumstances, the practice of torture might be justified has received a great deal of attention in the last decade in both academia and the popular media. Many of these discussions are, however, one-sided, with other perspectives either ignored or quickly dismissed with minimal argument. In On the Ethics of Torture, Uwe Steinhoff provides a complete account of the philosophical debate surrounding this highly contentious subject. Steinhoff’s position is that torture is sometimes, under certain narrowly circumscribed conditions, justified, basing his argument on the right to self-defense. His position differs from that of other authors who, using other philosophical justifications, would permit torture under a wider set of conditions. After having given the reader a thorough account of the main arguments for permitting torture under certain circumstances, Steinhoff explains and addresses the many objections that have been raised to employing torture under any circumstances. This is an indispensable work for anyone interested in one of the most controversial subjects of our times.
Members of the field of philosophy, like other people, have political convictions or, as psychologists call them, ideologies. How are different ideologies distributed and perceived in the field? Using the familiar distinction between the political left and right, we surveyed an international sample of 794 subjects in philosophy. We found that survey participants clearly leaned left (75%), while right-leaning individuals (14%) and moderates (11%) were underrepresented. Moreover, and strikingly, across the political spectrum, from very left-leaning individuals and moderates to very right-leaning individuals, participants reported experiencing ideological hostility in the field, occasionally even from those on their own side of the political spectrum. Finally, while about half of the subjects believed that discrimination against left- or right-leaning individuals in the field is not justified, a significant minority displayed an explicit willingness to discriminate against colleagues with the opposite ideology. Our findings are both surprising and important, because a commitment to tolerance and equality is widespread in philosophy, and there is reason to think that ideological similarity, hostility, and discrimination undermine reliable belief formation in many areas of the discipline.
McGowan argues “that ordinary utterances routinely enact norms without the speaker having or exercising any special authority” and thereby not “merely cause” but “constitute” harm if harm results from adherence to the enacted norms. The discovery of this “previously overlooked mechanism,” she claims, provides a potential justification for “further speech regulation.” Her argument is unsuccessful. She merely redefines concepts like “harm constitution” and “norm enactment” and fails to explain why speech that “constitutes” harm is legally or morally problematic and thus an initially more plausible target for speech regulation than speech that “merely causes” harm. Even if she could explain that, however, her account would still be incapable of identifying cases where utterances “constitute harm.” This is so for two reasons. First, she provides neither analytical nor empirical criteria for deciding which so-called “s-norms” have been enacted by an “ordinary utterance.” Second, even if such criteria could be provided, there is no epistemically available means to distinguish whether harm has ensued due to adherence to the enacted s-norms or through other mechanisms. Given this lack of criteria and practical applicability, there is no way that this account could serve as a principled basis for speech regulation – it could only serve as a pretext for arbitrary censorship.
In present-day political and moral philosophy the idea that all persons are in some way moral equals is an almost universal premise, with its defenders often claiming that philosophical positions that reject the principle of equal respect and concern do not deserve to be taken seriously. This has led to relatively few attempts to clarify, or indeed justify, 'basic equality' and the principle of equal respect and concern. Such clarification and justification, however, are urgently needed. After all, the ideas, for instance, that Adolf Hitler and Nelson Mandela have equal moral worth, or that a rape victim owes equal respect and concern to both her rapist and her own caring brother, seem utterly implausible. Thus, if someone insists on the truth of such ideas, he or she owes his or her audience an explanation. The authors in this volume - which breaks new ground by engaging egalitarians and anti-egalitarians in a genuine dialogue - attempt to shed light into the dark. They try to clarify the concepts of "basic equality", "equal moral worth", "equal respect and concern", "dignity", etc., and they try to justify, or to refute, the resulting clarified doctrines. The volume thus demonstrates that the claim that all persons have equal moral worth, are owed equal concern and respect, or have the same rights is anything but obvious. This finding has not only significant philosophical but also political implications.
In the philosophy of science, it is a common proposal that values are illegitimate in science and should be counteracted whenever they drive inquiry to the confirmation of predetermined conclusions. Drawing on recent cognitive scientific research on human reasoning and confirmation bias, I argue that this view should be rejected. Advocates of it have overlooked that values that drive inquiry to the confirmation of predetermined conclusions can contribute to the reliability of scientific inquiry at the group level even when they negatively affect an individual’s cognition. This casts doubt on the proposal that such values should always be illegitimate in science. It also suggests that advocates of that proposal assume a narrow, individualistic account of science that threatens to undermine their own project of ensuring reliable belief formation in science.
It has been argued that implicit biases are operative in philosophy and lead to significant epistemic costs in the field. Philosophers working on this issue have focussed mainly on implicit gender and race biases. They have overlooked ideological bias, which targets political orientations. Psychologists have found ideological bias in their field and have argued that it has negative epistemic effects on scientific research. I relate this debate to the field of philosophy and argue that if, as some studies suggest, the same bias also exists in philosophy, then it will lead to hitherto unrecognised epistemic hazards in the field. Furthermore, the bias is epistemically different from the more familiar biases in respects that are important for epistemology, ethics, and metaphilosophy.
The phenomenological approach to the philosophy of mind, as worked out by Husserl, has been severely criticized by philosophers within the Wittgensteinian tradition and, implicitly, by Wittgenstein himself. This book examines this criticism in detail, looking at the writings of Wittgenstein, Ryle, Hacker, Dennett, and others. In defending Husserl against his critics, it offers a comprehensive fresh view of phenomenology as a philosophy of mind.
Some artificial intelligence systems can display algorithmic bias, i.e. they may produce outputs that unfairly discriminate against people based on their social identity. Much research on this topic focuses on algorithmic bias that disadvantages people based on their gender or racial identity. The related ethical problems are significant and well known. Algorithmic bias against other aspects of people’s social identity, for instance, their political orientation, remains largely unexplored. This paper argues that algorithmic bias against people’s political orientation can arise in some of the same ways in which algorithmic gender and racial biases emerge. However, it differs importantly from them because there are strong social norms against gender and racial biases. This does not hold to the same extent for political biases. Political biases can thus influence people more powerfully, which increases the chances that these biases become embedded in algorithms and makes algorithmic political biases harder to detect and eradicate than gender and racial biases, even though all of them can produce similar harm. Since some algorithms can now also easily identify people’s political orientations against their will, these problems are exacerbated. Algorithmic political bias thus raises substantial and distinctive risks that the AI community should be aware of and examine.
Confirmation bias is one of the most widely discussed epistemically problematic cognitions, challenging reliable belief formation and the correction of inaccurate views. Given its problematic nature, it remains unclear why the bias evolved and is still with us today. To offer an explanation, several philosophers and scientists have argued that the bias is in fact adaptive. I critically discuss three recent proposals of this kind before developing a novel alternative, what I call the ‘reality-matching account’. According to the account, confirmation bias evolved because it helps us influence people and social structures so that they come to match our beliefs about them. This can result in significant developmental and epistemic benefits for us and other people, ensuring that over time we don’t become epistemically disconnected from social reality but can navigate it more easily. While that might not be the only evolved function of confirmation bias, it is an important one that has so far been neglected in the theorizing on the bias.
Certain instances of contraction are provable in Zardini’s system $\mathbf{IK}^\omega$, which causes triviality once a truth predicate and suitable fixed points are available.
What one is ultimately interested in with regard to ‘just cause’ is whether a specific war, actual or potential, is justified. I call this ‘the applied question’. Answering this question requires knowing the empirical facts on the ground. However, an answer to the applied question regarding a specific war requires a prior answer to some more general questions, both descriptive and normative. These questions are: What kind of thing is a ‘just cause’ for war (an aim, an injury or wrong suffered, or something different altogether)? I call this ‘the formal question’. Then there is what I call ‘the general substantive question’. Depending on the previous answer to the formal question, the general substantive question can be formulated as: ‘Which causes are just?’ or as ‘Under what conditions is there a just cause?’ A final question, which has recently elicited increased interest, is what I call ‘the question of timing’: does the ‘just cause’ criterion only apply to the initiation of a war or also to the continuation of a war, that is, can a war that had a just cause at the beginning lose it at some point in its course (and vice versa)? I argue that a just cause is a state of affairs. Moreover, the criterion of just cause is not independent of proportionality and other valid jus ad bellum criteria. One cannot know whether there is a just cause without knowing whether the other (valid) criteria (apart from ‘right intention’) are satisfied; and this account has certain theoretical and practical advantages. As regards the general substantive question, I argue that all kinds of aims can, in principle, be legitimately pursued by means of war, even aims that might sound dubious at first, like vengeance or the search for glory. Thus, the pursuit of such aims does not make the war disproportionate or deprive it of just cause. As regards the question of timing, I argue that the criteria of jus ad bellum apply throughout the war, not only at the point of its initiation. While starting a war at time t1 might be justified, continuing it at time t2 might be unjustified (and vice versa), and this insight does not require an addition to jus ad bellum but is already contained in it.
Why do we engage in folk psychology, that is, why do we think about and ascribe propositional attitudes such as beliefs, desires, intentions etc. to people? On the standard view, folk psychology is primarily for mindreading, for detecting mental states and explaining and/or predicting people’s behaviour in terms of them. In contrast, McGeer (1996, 2007, 2015) and Zawidzki (2008, 2013) maintain that folk psychology is not primarily for mindreading but for mindshaping, that is, for moulding people’s behaviour and minds (e.g., via the imposition of social norms) so that coordination becomes easier. Mindreading is derived from, and only as effective as it is because of, mindshaping, not vice versa. I critically assess McGeer’s and Zawidzki’s proposal and contend that three common motivations for the mindshaping view do not provide sufficient support for their particular version of it. I argue furthermore that their proposal underestimates the role that epistemic processing plays for mindshaping. And I provide reasons for favouring an alternative according to which, in social cognition involving ascriptions of propositional attitudes, neither mindshaping nor mindreading is primary but both are complementary, in that effective mindshaping depends as much on mindreading as effective mindreading depends on mindshaping.
It is well known that on the Internet, computer algorithms track our website browsing, clicks, and search history to infer our preferences, interests, and goals. The nature of this algorithmic tracking remains unclear, however. Does it involve what many cognitive scientists and philosophers call ‘mindreading’, i.e., an epistemic capacity to attribute mental states to people to predict, explain, or influence their actions? Here I argue that it does. This is because humans are in a particular way embedded in the process of algorithmic tracking. Specifically, if we endorse common conditions for extended cognition, then human mindreading (by website operators and users) is often literally extended into, that is, partly realized by, not merely causally coupled to, computer systems performing algorithmic tracking. The view that human mindreading extends outside the body into computers in this way has significant ethical advantages. It points to new conceptual ways to reclaim our autonomy and privacy in the face of increasing risks of computational control and online manipulation. These benefits speak in favor of endorsing the notion of extended mindreading.
In this book Uwe Steinhoff describes and explains the basic tenets of just war theory and gives a precise, succinct and highly critical account of its present status and of the most important and controversial current debates surrounding it. Rejecting certain, in effect medieval, assumptions of traditional just war theory and advancing a liberal outlook, Steinhoff argues that every single individual is a legitimate authority and has under certain circumstances the right to declare war on others or the state. He also argues that the just cause cannot be established independently of the other criteria of jus ad bellum (the justification of entering a war), except for right intention, which he interprets more leniently than the tradition does. Turning to jus in bello (which governs the conduct of a war), he criticizes the Doctrine of Double Effect and concludes that insofar as wars kill innocents, even if only as "collateral damage", they cannot be just but at best justified as the lesser evil. Steinhoff gives particular attention to the question why soldiers, allegedly, are legitimate targets and civilians not. Discussing four approaches to the explanation of the difference, he argues that the four principles underlying them all need to be taken into account and outlines how their weighing can proceed if applied to concrete cases. The resulting approach does not square the distinction between legitimate and illegitimate targets with the distinction between soldiers and civilians, which has extremely important consequences for the conduct of war. Finally, Steinhoff analyses the concept of terrorism and argues that some forms of "terrorism" are actually not terrorism at all and that even terrorism proper can under certain circumstances be justified.
Preface This book is about semantics and logic. More specifically, it is about the semantics and logic of natural language; and, even more specifically than that, it is about a particular way of dealing with those subjects, known as Discourse Representation Theory, or DRT. DRT is an approach towards natural language semantics which, some thirteen years ago, arose out of attempts to deal with two distinct problems. The first of those was the semantic puzzle that had been brought to contemporary attention by Geach's notorious "donkey sentences" - sentences like If Pedro owns some donkey, he beats it, in which the anaphoric connection we perceive between the indefinite noun phrase some donkey and the pronoun it may seem to conflict with the existential meaning of the word some. The second problem had to do with tense and aspect. Some languages, for instance French and the other Romance languages, have two morphologically distinct past tenses, a simple past and a continuous past. To articulate precisely what the difference between these tenses is has turned out to be surprisingly difficult.
In this paper we develop a theory of language meaning that represents scope ambiguities by underspecified structures. The set of possible meanings of a sentence or text is determined by a set of meta-level constraints that restricts the class of semantic representations appropriately. Thus the way ambiguities are represented does not correspond to any of the usual concepts of formalizing ambiguities by means of disjunctions (of completely specified structures). A sound and complete proof theory is provided that relates these structures directly, without considering cases.
It has recently been suggested that politically motivated cognition leads progressive individuals to form beliefs that underestimate real differences between social groups and to process information selectively to support these beliefs and an egalitarian outlook. I contend that this tendency, which I shall call ‘egalitarian confirmation bias’, is often ‘Mandevillian’ in nature. That is, while it is epistemically problematic in one’s own cognition, it often has effects that significantly improve other people’s truth tracking, especially that of stigmatized individuals in academia. Due to its Mandevillian character, egalitarian confirmation bias isn’t only epistemically but also ethically beneficial, as it helps decrease social injustice. Moreover, since egalitarian confirmation bias has Mandevillian effects especially in academia, and since progressives are particularly likely to display the bias, there is an epistemic reason for maintaining the often-noted political majority of progressives in academia. That is, while many researchers hold that diversity in academia is epistemically beneficial because it helps reduce bias, I argue that precisely because political diversity would help reduce egalitarian confirmation bias, it would in fact in one important sense be epistemically costly.
Demographic diversity might often be present in a group without group members noticing it. What are the epistemic effects if they do? Several philosophers and social scientists have recently argued that when individuals detect demographic diversity in their group, this can result in epistemic benefits even if that diversity doesn’t involve cognitive differences. Here I critically discuss research advocating this proposal, introduce a distinction between two types of detection of demographic diversity, and apply this distinction to the theorizing on diversity in science. Focusing on ‘invisible’ diversity, I argue that in one common kind of group in science, if group members have full insight into their group’s diversity, this is likely to create epistemic costs. These costs can be avoided and epistemic benefits gained if group members only partly detect their group’s diversity. There is thus an epistemic reason for context-dependent limitations on scientists’ insight into the diversity of their group.
Analyzing the Arabic translation of Aristotle's Rhetoric and situating it in its historical and intellectual context, this book offers a fresh interpretation of the early Greek-Arabic translation movement and its impact in Islamic culture and beyond.
Language- and music-readiness are shown to be related within comparative neuroprimatology by elaborating three hypotheses concerning music-readiness: the rhythm-first hypothesis (MR-1), the combinatoriality hypothesis (MR-2), and the socio-affect-cohesion hypothesis (MR-3). MR-1 states that rhythm evolutionarily precedes melody and tonality. MR-2 states that complex imitation and fractionation within the expanding spiral of the mirror system/complex imitation hypothesis (MS/CIH) lead to the combinatorial capacities of rhythm necessary for building up a musical lexicon and complex structures; and rhythm, in connection with repetition and variation, scaffolds both musical form and content. MR-3 states that music’s main evolutionary function is to self-induce affective states in individuals to cope with distress; rhythm, in particular isochrony, provides a temporal framework to support movement synchronization, inducing shared affective states in group members, which in turn enhances group cohesion. This document reviews current behavioural and neurocognitive research relevant to the comparative neuroprimatology of music-readiness. It further proposes to extend MS/CIH through the evolution of the relationship of the language- and music-ready brain, by comparatively approaching the language- and music-emotion link in neuroprimatology.
This book offers a philosophical analysis of the moral and legal justifications for the use of force. While the book focuses on the ethics of self-defense, it also explores its relation to lesser evil justifications, public authority, the justification of punishment, and the ethics of war. Steinhoff’s account of the moral use of force covers a wide range of topics, including the nature of justification in general, the precise elements of different justifications, the logic of claim- and liberty-rights and of rights forfeiture, the value of human life and its limits, and the principles of reciprocity and precaution. While the author’s analysis is primarily philosophical, it is informed by a metaethical stance that also places heavy emphasis on existing law and legal scholarship. In doing so, the book appeals to widely shared moral intuitions, precepts, and concepts grounded in criminal law. Self-Defense, Necessity, and Punishment offers the most comprehensive and systematic account of the ethics of self-defense. It will be of interest to scholars and graduate students working in applied ethics and moral philosophy, philosophy of law, and political philosophy.
Bringing together writings on united Germany, this volume addresses the consequences of German history, the challenges and perils of the post-Wall era, and Germany's place in contemporary Europe. The author argues that 1945 - not 1989 - was the crucial turning point in German history.
When scientists or science reporters communicate research results to the public, this often involves ethical and epistemic risks. One such risk arises when scientific claims cause cognitive or behavioral changes in the audience that contribute to the self-fulfillment of these claims. Focusing on such effects, I argue that the ethical and epistemic problem that they pose is likely to be much broader than hitherto appreciated. Moreover, it is often due to a psychological phenomenon that has been neglected in the research on science communication, namely that many people tend to conform to descriptive norms, that is, norms capturing (perceptions of) what others commonly do, think, or feel. Because of this tendency, science communication can produce significant social harm. I contend that scientists have a responsibility to assess the risk of this potential harm and consider adopting strategies to mitigate it. I introduce one such strategy and argue that its implementation is independently well motivated by the fact that it helps improve scientific accuracy.
On the one hand, the absence of contraction is a safeguard against the logical (property theoretic) paradoxes; but on the other hand, it also disables inductive and recursive definitions, in its most basic form the definition of the series of natural numbers, for instance. The reason for this is simply that the effectiveness of a recursion clause depends on its being available after application, something that is usually assured by contraction. This paper presents a way of overcoming this problem within the framework of a logic based on inclusion and unrestricted abstraction, without any form of extensionality.
Teleosemantics explains mental representation in terms of biological function and selection history. One of the main objections to the account is the so-called ‘Swampman argument’ (Davidson 1987), which holds that there could be a creature with mental representation even though it lacks a selection history. A number of teleosemanticists reject the argument by emphasising that it depends on assuming a creature that is fictitious and hence irrelevant for teleosemantics because the theory is only concerned with representations in real-world organisms (Millikan 1996, Neander 1996, 2006, Papineau 2001, 2006). I contend that this strategy doesn’t succeed. I offer an argument that captures the spirit of the original Swampman objection but relies only on organisms found in the actual world. The argument undermines the just mentioned response to the Swampman objection, and furthermore leads to a particular challenge to strong representationalist theories of consciousness that endorse teleosemantics such as, e.g., Dretske’s (1995) and Tye’s (1995, 2000) accounts. On these theories, the causal efficacy of consciousness in actual creatures will be undermined.
Answer-set programming (ASP) has emerged as a declarative programming paradigm where problems are encoded as logic programs, such that the so-called answer sets of these programs represent the solutions of the encoded problem. The efficiency of the latest ASP solvers has reached a state that makes them applicable to problems of practical importance. Consequently, problems from many different areas, including diagnosis, data integration, and graph theory, have been successfully tackled via ASP. In this work, we present such ASP encodings for problems associated with abstract argumentation frameworks (AFs) and generalisations thereof. Our encodings are formulated as fixed queries, such that the input is the only part depending on the actual AF to process. We illustrate in detail the functioning of this approach, which underlies a new argumentation system called ASPARTIX, and show its adequacy in terms of computational complexity.
Can young children such as 3-year-olds represent the world objectively? Some prominent developmental psychologists (Perner, Tomasello) assume so. I argue that this view is susceptible to a prima facie powerful objection: to represent objectively, one must be able to represent not only features of the entities represented but also features of objectification itself, which 3-year-olds can’t do yet. Drawing on Tyler Burge’s work on perceptual constancy, I provide a response to this objection and motivate a distinction between three different kinds of objectivity. This distinction helps advance current research on both objectivity and teleological action explanations in young children.
According to the dominant position in the just war tradition from Augustine to Anscombe and beyond, there is no "moral equality of combatants." That is, on the traditional view the combatants participating in a justified war may kill their enemy combatants participating in an unjustified war - but not vice versa (barring certain qualifications). I shall argue here, however, that in the large number of wars (and in practically all modern wars) where the combatants on the justified side violate the rights of innocent people ("collateral damage"), these combatants are in fact liable to attack by the combatants on the unjustified side. I will support this view with a rights-based account of liability to attack and then defend it against a number of objections raised in particular by Jeff McMahan. The result is that the thesis of the moral equality of combatants holds good for a large range of armed conflicts while the opposing thesis is of very limited practical relevance.
It has recently been argued that to tackle social injustice, implicit biases and unjust social structures should be targeted equally because they sustain and ontologically overlap with each other. Here I develop this thought further by relating it to the hypothesis of extended cognition. I argue that if we accept common conditions for extended cognition then people’s implicit biases are often partly realized by and so extended into unjust social structures. This supports the view that we should counteract psychological and social contributors to injustice equally. But it also has a significant downside. If unjust social structures are part of people’s minds then dismantling these structures becomes more difficult than it currently is, as this will then require us to overcome widely accepted ethical and legal barriers protecting people’s bodily and personal integrity. Thus, while there are good grounds to believe that people’s biases and unjust social structures ontologically overlap, there are also strong ethical reasons to reject this view. Metaphysical and ethical intuitions about implicit bias hence collide in an important way.
Can torture be morally justified? I shall criticise arguments that have been adduced against torture and demonstrate that torture can be justified more easily than most philosophers dealing with the question are prepared to admit. It can be justified not only in ticking nuclear bomb cases but also in less spectacular ticking bomb cases and even in the so‐called Dirty Harry cases. There is no morally relevant difference between self‐defensive killing of a culpable aggressor and torturing someone who is culpable of a deadly threat that can be averted only by torturing him. Nevertheless, I shall argue that torture should not be institutionalised, for example by torture warrants.
This paper proposes a method for computing the temporal aspects of the interpretations of a variety of German sentences. The method is strictly modular in the sense that it allows each meaning-bearing sentence constituent to make its own, separate contribution to the semantic representation of any sentence containing it. The semantic representation of a sentence is reached in several stages. First, an ‘initial semantic representation’ is constructed, using a syntactic analysis of the sentence as input. This initial representation is then transformed into the definitive representation by a series of transformations which reflect the ways in which the contributions from different constituents of the sentence interact. Since the different constituents which make their respective contributions to the meaning of the sentence are in most instances ambiguous, the initial representations typically exhibit a high degree of underspecification.
Recently, philosophers have appealed to empirical studies to argue that whenever we think that p, we automatically believe that p (Millikan 2004; Mandelbaum 2014; Levy and Mandelbaum 2014). Levy and Mandelbaum (2014) have gone further and claimed that the automaticity of believing has implications for the ethics of belief in that it creates epistemic obligations for those who know about their automatic belief acquisition. I use theoretical considerations and psychological findings to raise doubts about the empirical case for the view that we automatically believe what we think. Furthermore, I contend that even if we set these doubts aside, Levy and Mandelbaum’s argument to the effect that the automaticity of believing creates epistemic obligations is not fully convincing.
In empirically informed research on action explanation, philosophers and developmental psychologists have recently proposed a teleological account of the way in which we make sense of people’s intentional behavior. It holds that we typically don’t explain an agent’s action by appealing to her mental states but by referring to the objective, publicly accessible facts of the world that count in favor of performing the action so as to achieve a certain goal. Advocates of the teleological account claim that this strategy is our main way of understanding people’s actions. I argue that common motivations mentioned to support the teleological account are insufficient to sustain its generalization from children to adults. Moreover, social psychological studies, combined with theoretical considerations, suggest that we do not explain actions mainly by invoking publicly accessible, reason-giving facts alone but by ascribing mental states to the agent.
Revised and reprinted; originally in Dov Gabbay & Franz Guenthner (eds.), Handbook of Philosophical Logic, Volume IV, Kluwer, 133-251. -- Two sorts of property theory are distinguished: those dealing with intensional contexts, namely property abstracts (infinitive and gerundive phrases) and proposition abstracts (‘that’-clauses), and those dealing with predication (or instantiation) relations. The first is deemed to be epistemologically more primary, for “the argument from intensional logic” is perhaps the best argument for the existence of properties. This argument is presented in the course of discussing generality, quantifying-in, learnability, referential semantics, nominalism, conceptualism, realism, type-freedom, the first-order/higher-order controversy, names, indexicals, descriptions, Mates’ puzzle, and the paradox of analysis. Two first-order intensional logics are then formulated. Finally, fixed-point type-free theories of predication are discussed, especially their relation to the question whether properties may be identified with propositional functions.
Revised and reprinted in Handbook of Philosophical Logic, Volume 10, Dov Gabbay and Frans Guenthner (eds.), Dordrecht: Kluwer (2003). -- Two sorts of property theory are distinguished: those dealing with intensional contexts, namely property abstracts (infinitive and gerundive phrases) and proposition abstracts (‘that’-clauses), and those dealing with predication (or instantiation) relations. The first is deemed to be epistemologically more primary, for “the argument from intensional logic” is perhaps the best argument for the existence of properties. This argument is presented in the course of discussing generality, quantifying-in, learnability, referential semantics, nominalism, conceptualism, realism, type-freedom, the first-order/higher-order controversy, names, indexicals, descriptions, Mates’ puzzle, and the paradox of analysis. Two first-order intensional logics are then formulated. Finally, fixed-point type-free theories of predication are discussed, especially their relation to the question whether properties may be identified with propositional functions.