Members of the field of philosophy have, like other people, political convictions or, as psychologists call them, ideologies. How are different ideologies distributed and perceived in the field? Using the familiar distinction between the political left and right, we surveyed an international sample of 794 subjects in philosophy. We found that survey participants clearly leaned left (75%), while right-leaning individuals (14%) and moderates (11%) were underrepresented. Moreover, and strikingly, across the political spectrum, from very left-leaning individuals and moderates to very right-leaning individuals, participants reported experiencing ideological hostility in the field, occasionally even from those on their own side of the political spectrum. Finally, while about half of the subjects believed that discrimination against left- or right-leaning individuals in the field is not justified, a significant minority displayed an explicit willingness to discriminate against colleagues with the opposite ideology. Our findings are both surprising and important, because a commitment to tolerance and equality is widespread in philosophy, and there is reason to think that ideological similarity, hostility, and discrimination undermine reliable belief formation in many areas of the discipline.
In the philosophy of science, it is a common proposal that values are illegitimate in science and should be counteracted whenever they drive inquiry to the confirmation of predetermined conclusions. Drawing on recent cognitive scientific research on human reasoning and confirmation bias, I argue that this view should be rejected. Its advocates have overlooked that values that drive inquiry to the confirmation of predetermined conclusions can contribute to the reliability of scientific inquiry at the group level even when they negatively affect an individual's cognition. This casts doubt on the proposal that such values should always be illegitimate in science. It also suggests that advocates of that proposal assume a narrow, individualistic account of science that threatens to undermine their own project of ensuring reliable belief formation in science.
It has been argued that implicit biases are operative in philosophy and lead to significant epistemic costs in the field. Philosophers working on this issue have focussed mainly on implicit gender and race biases. They have overlooked ideological bias, which targets political orientations. Psychologists have found ideological bias in their field and have argued that it has negative epistemic effects on scientific research. I relate this debate to the field of philosophy and argue that if, as some studies suggest, the same bias also exists in philosophy, then it will lead to hitherto unrecognised epistemic hazards in the field. Furthermore, the bias is epistemically different from the more familiar biases in respects that are important for epistemology, ethics, and metaphilosophy.
It is well known that on the Internet, computer algorithms track our website browsing, clicks, and search history to infer our preferences, interests, and goals. The nature of this algorithmic tracking remains unclear, however. Does it involve what many cognitive scientists and philosophers call 'mindreading', i.e., an epistemic capacity to attribute mental states to people to predict, explain, or influence their actions? Here I argue that it does. This is because humans are embedded in the process of algorithmic tracking in a particular way. Specifically, if we endorse common conditions for extended cognition, then human mindreading (by website operators and users) is often literally extended into, that is, partly realized by, not merely causally coupled to, computer systems performing algorithmic tracking. The view that human mindreading extends outside the body into computers in this way has significant ethical advantages. It points to new conceptual ways to reclaim our autonomy and privacy in the face of increasing risks of computational control and online manipulation. These benefits speak in favor of endorsing the notion of extended mindreading.
Confirmation bias is one of the most widely discussed epistemically problematic cognitions, challenging reliable belief formation and the correction of inaccurate views. Given its problematic nature, it remains unclear why the bias evolved and is still with us today. To offer an explanation, several philosophers and scientists have argued that the bias is in fact adaptive. I critically discuss three recent proposals of this kind before developing a novel alternative, which I call the 'reality-matching account'. According to the account, confirmation bias evolved because it helps us influence people and social structures so that they come to match our beliefs about them. This can result in significant developmental and epistemic benefits for us and other people, ensuring that over time we don't become epistemically disconnected from social reality but can navigate it more easily. While that might not be the only evolved function of confirmation bias, it is an important one that has so far been neglected in the theorizing on the bias.
Why do we engage in folk psychology, that is, why do we think about and ascribe propositional attitudes such as beliefs, desires, and intentions to people? On the standard view, folk psychology is primarily for mindreading, for detecting mental states and explaining and/or predicting people's behaviour in terms of them. In contrast, McGeer (1996, 2007, 2015) and Zawidzki (2008, 2013) maintain that folk psychology is not primarily for mindreading but for mindshaping, that is, for moulding people's behaviour and minds (e.g., via the imposition of social norms) so that coordination becomes easier. On this view, mindreading is derived from mindshaping and is only as effective as it is because of it, not vice versa. I critically assess McGeer's and Zawidzki's proposal and contend that three common motivations for the mindshaping view do not provide sufficient support for their particular version of it. I argue furthermore that their proposal underestimates the role that epistemic processing plays in mindshaping. And I provide reasons for favouring an alternative according to which, in social cognition involving ascriptions of propositional attitudes, neither mindshaping nor mindreading is primary; rather, the two are complementary in that effective mindshaping depends as much on mindreading as effective mindreading depends on mindshaping.
Demographic diversity might often be present in a group without group members noticing it. What are the epistemic effects if they do? Several philosophers and social scientists have recently argued that when individuals detect demographic diversity in their group, this can result in epistemic benefits even if that diversity doesn't involve cognitive differences. Here I critically discuss research advocating this proposal, introduce a distinction between two types of detection of demographic diversity, and apply this distinction to the theorizing on diversity in science. Focusing on 'invisible' diversity, I argue that in one common kind of group in science, if group members have full insight into their group's diversity, this is likely to create epistemic costs. These costs can be avoided and epistemic benefits gained if group members only partly detect their group's diversity. There is thus an epistemic reason for context-dependent limitations on scientists' insight into the diversity of their group.
It has recently been suggested that politically motivated cognition leads progressive individuals to form beliefs that underestimate real differences between social groups and to process information selectively to support these beliefs and an egalitarian outlook. I contend that this tendency, which I shall call 'egalitarian confirmation bias', is often 'Mandevillian' in nature. That is, while it is epistemically problematic in one's own cognition, it often has effects that significantly improve other people's truth tracking, especially that of stigmatized individuals in academia. Due to its Mandevillian character, egalitarian confirmation bias isn't only epistemically but also ethically beneficial, as it helps decrease social injustice. Moreover, since egalitarian confirmation bias has Mandevillian effects especially in academia, and since progressives are particularly likely to display the bias, there is an epistemic reason for maintaining the often-noted political majority of progressives in academia. That is, while many researchers hold that diversity in academia is epistemically beneficial because it helps reduce bias, I argue that precisely because political diversity would help reduce egalitarian confirmation bias, it would in fact in one important sense be epistemically costly.
When scientists or science reporters communicate research results to the public, this often involves ethical and epistemic risks. One such risk arises when scientific claims cause cognitive or behavioral changes in the audience that contribute to the self-fulfillment of these claims. Focusing on such effects, I argue that the ethical and epistemic problem that they pose is likely to be much broader than hitherto appreciated. Moreover, it is often due to a psychological phenomenon that has been neglected in the research on science communication, namely that many people tend to conform to descriptive norms, that is, norms capturing (perceptions of) what others commonly do, think, or feel. Because of this tendency, science communication can produce significant social harm. I contend that scientists have a responsibility to assess the risk of this potential harm and consider adopting strategies to mitigate it. I introduce one such strategy and argue that its implementation is independently well motivated by the fact that it helps improve scientific accuracy.
Can young children such as 3-year-olds represent the world objectively? Some prominent developmental psychologists (Perner, Tomasello) assume so. I argue that this view is susceptible to a prima facie powerful objection: to represent objectively, one must be able to represent not only features of the entities represented but also features of objectification itself, which 3-year-olds can't do yet. Drawing on Tyler Burge's work on perceptual constancy, I provide a response to this objection and motivate a distinction between three different kinds of objectivity. This distinction helps advance current research on both objectivity and teleological action explanations in young children.
It has recently been argued that to tackle social injustice, implicit biases and unjust social structures should be targeted equally because they sustain and ontologically overlap with each other. Here I develop this thought further by relating it to the hypothesis of extended cognition. I argue that if we accept common conditions for extended cognition, then people's implicit biases are often partly realized by and so extended into unjust social structures. This supports the view that we should counteract psychological and social contributors to injustice equally. But it also has a significant downside. If unjust social structures are part of people's minds, then dismantling these structures becomes more difficult than it currently is, as doing so will require us to overcome widely accepted ethical and legal barriers protecting people's bodily and personal integrity. Thus, while there are good grounds to believe that people's biases and unjust social structures ontologically overlap, there are also strong ethical reasons to reject this view. Metaphysical and ethical intuitions about implicit bias hence collide in an important way.
Teleosemantics explains mental representation in terms of biological function and selection history. One of the main objections to the account is the so-called 'Swampman argument' (Davidson 1987), which holds that there could be a creature with mental representation even though it lacks a selection history. A number of teleosemanticists reject the argument by emphasising that it depends on assuming a creature that is fictitious and hence irrelevant for teleosemantics because the theory is only concerned with representations in real-world organisms (Millikan 1996, Neander 1996, 2006, Papineau 2001, 2006). I contend that this strategy doesn't succeed. I offer an argument that captures the spirit of the original Swampman objection but relies only on organisms found in the actual world. The argument undermines the just-mentioned response to the Swampman objection and furthermore leads to a particular challenge to strong representationalist theories of consciousness that endorse teleosemantics, such as Dretske's (1995) and Tye's (1995, 2000) accounts. On these theories, the causal efficacy of consciousness in actual creatures will be undermined.
Some artificial intelligence systems can display algorithmic bias, i.e., they may produce outputs that unfairly discriminate against people based on their social identity. Much research on this topic focuses on algorithmic bias that disadvantages people based on their gender or racial identity. The related ethical problems are significant and well known. Algorithmic bias against other aspects of people's social identity, for instance their political orientation, remains largely unexplored. This paper argues that algorithmic bias against people's political orientation can arise in some of the same ways in which algorithmic gender and racial biases emerge. However, it differs importantly from them because there are strong social norms against gender and racial biases. This does not hold to the same extent for political biases. Political biases can thus more powerfully influence people, which increases the chances that these biases become embedded in algorithms and makes algorithmic political biases harder to detect and eradicate than gender and racial biases, even though they can all produce similar harm. Since some algorithms can now also easily identify people's political orientations against their will, these problems are exacerbated. Algorithmic political bias thus raises substantial and distinctive risks that the AI community should be aware of and examine.
Recently, philosophers have appealed to empirical studies to argue that whenever we think that p, we automatically believe that p (Millikan 2004; Mandelbaum 2014; Levy and Mandelbaum 2014). Levy and Mandelbaum (2014) have gone further and claimed that the automaticity of believing has implications for the ethics of belief in that it creates epistemic obligations for those who know about their automatic belief acquisition. I use theoretical considerations and psychological findings to raise doubts about the empirical case for the view that we automatically believe what we think. Furthermore, I contend that even if we set these doubts aside, Levy and Mandelbaum's argument to the effect that the automaticity of believing creates epistemic obligations is not fully convincing.
In empirically informed research on action explanation, philosophers and developmental psychologists have recently proposed a teleological account of the way in which we make sense of people's intentional behavior. It holds that we typically don't explain an agent's action by appealing to her mental states but by referring to the objective, publicly accessible facts of the world that count in favor of performing the action so as to achieve a certain goal. Advocates of the teleological account claim that this strategy is our main way of understanding people's actions. I argue that common motivations mentioned to support the teleological account are insufficient to sustain its generalization from children to adults. Moreover, social psychological studies, combined with theoretical considerations, suggest that we do not explain actions mainly by invoking publicly accessible, reason-giving facts alone but by ascribing mental states to the agent.
The paper briefly summarises and critiques Tomasello’s A Natural History of Human Thinking. After offering an overview of the book, the paper focusses on one particular part of Tomasello’s proposal on the evolution of uniquely human thinking and raises two points of criticism against it. One of them concerns his notion of thinking. The other pertains to empirical findings on egocentric biases in communication.
This paper explores the nature of self-knowledge of beliefs by investigating the relationship between self-knowledge of beliefs and one's knowledge of other people's beliefs. It introduces and defends a new account of self-knowledge of beliefs according to which this type of knowledge is developmentally interconnected with and dependent on resources already used for acquiring knowledge of other people's beliefs, which is inferential in nature. But when these resources are applied to oneself, one attains, and subsequently frequently uses, a method for acquiring knowledge of beliefs that is non-inferential in nature. The paper argues that this account is preferable to some of the most common empirically motivated theories of self-knowledge of beliefs and explains the origin of the widely discussed phenomenon that our own beliefs are often transparent to us in that we can determine whether we believe that p simply by settling whether p is the case.
Recently, researchers and reporters have made a wide range of claims about the distribution, nature, and societal impact of political polarization. Here I offer reasons to believe that, even when they are correct and prima facie merely descriptive, many of these claims have the highly negative side effect of increasing political polarization. This is because of the interplay of two factors that have so far been neglected in the work on political polarization, namely that (1) people have a tendency to conform to descriptive norms (i.e., norms capturing (perceptions of) what others commonly do, think, or feel), and (2) claims about political polarization often convey such norms. Many of these claims thus incline people to behave, cognize, and be affectively disposed in ways that contribute to social division. But there is a silver lining. People's tendency to conform to descriptive norms also provides the basis for developing new, experimentally testable strategies for counteracting political polarization. I outline three.
‘No-platforming’—the practice of denying someone the opportunity to express their opinion at certain venues because of the perceived abhorrent or misguided nature of their view—is a hot topic. Several philosophers have advanced epistemic reasons for using the policy in certain cases. Here we introduce epistemic considerations against no-platforming that are relevant for reflection on the cases at issue. We then contend that three recent epistemic arguments in favor of no-platforming fail to factor these considerations in and, as a result, offer neither a conclusive justification nor strong epistemic support for no-platforming in any of the relevant cases. Moreover, we argue that, taken together, our epistemic considerations against no-platforming and the three arguments for the policy suggest that no-platforming poses an epistemic dilemma. While advocates and opponents of no-platforming alike have so far overlooked this dilemma, it should be addressed not only to prevent actual no-platforming decisions from creating more epistemic harm than good, but also to put us in a better position to justify the policy when it is indeed warranted.
It is typically assumed that while we know other people's mental states by observing and interpreting their behavior, we know our own mental states by introspection, i.e., without interpreting ourselves. In his latest book, The Opacity of Mind: An Integrative Theory of Self-Knowledge, Peter Carruthers (2011) argues against this assumption. He holds that findings from across the cognitive sciences strongly suggest that self-knowledge of conscious propositional attitudes such as intentions, judgments, and decisions involves a swift and unconscious process of self-interpretation that utilizes the same sensory channels that we employ when working out other people's mental states. I provide an overview of Carruthers' book before discussing a pathological case that challenges his account of self-knowledge and mentioning empirical evidence that undermines his use of a particular kind of data in his case against introspection of conscious attitudes.
Social and medical scientists frequently produce empirical generalizations that involve concepts partly defined by value judgments. These generalizations, which have been called ‘mixed claims’, raise interesting questions. Does their presence in science imply that science is value-laden? Is the value-ladenness of mixed claims special compared to other kinds of value-ladenness of science? Do we lose epistemically if we reformulate these claims as conditional statements? And if we want to allow mixed claims in science, do we need a new account of how to reconcile values with objectivity? Alexandrova (2017, 2018) offers affirmative answers to these questions. In responding to Alexandrova's arguments, this discussion note motivates negative ones and in doing so casts new light on mixed claims.
By drawing on empirical evidence, Matt King and Peter Carruthers have recently argued that there are no conscious propositional attitudes, such as decisions, and that this undermines moral responsibility. Neil Levy responds to King and Carruthers and claims that their considerations needn't worry theorists of moral responsibility. I argue that Levy's response to King and Carruthers' challenge to moral responsibility is unsatisfactory. After that, I propose what I take to be a preferable way of dealing with their challenge. I offer an account of moral responsibility that ties responsibility to consciously deciding to do X, as opposed to a conscious decision to do X. On this account, even if there are no conscious decisions, moral responsibility won't be undermined.
Does evolutionary theory have the potential to undermine morality? In his book The Evolution of Morality, Richard Joyce (2006) argues for a positive answer. He contends that an evolutionary account of morality would undermine moral judgements and lend support to moral scepticism. I offer a critique of Joyce's argument. As it turns out, his case can be read in two different ways. It could be construed as an argument to establish a general scepticism about the justification of moral judgements. Or it could be read as an argument that targets only a particular meta-ethical position, namely moral realism. My claim is that it fails on both interpretations. There is no reason to believe that evolutionary considerations undermine morality.
Suppose we know our own attitudes, e.g., judgments and decisions, only by unconsciously interpreting ourselves. Would this undermine the assumption that there are conscious attitudes? Carruthers has argued that if the mentioned view of self-knowledge is combined with either of the two most common approaches to consciousness, i.e., the higher-order state account or the global workspace theory, then the conjunction of these theories implies that there are no conscious attitudes. I shall show that Carruthers' argument against the existence of conscious attitudes doesn't succeed, and mention studies on autism and logical reasoning under cognitive load that suggest that there are conscious attitudes.