What sets the practice of rigorously tested, sound science apart from pseudoscience? In this volume, the contributors seek to answer this question, known to philosophers of science as “the demarcation problem.” This issue has a long history in philosophy, stretching as far back as the early twentieth century and the work of Karl Popper. But by the late 1980s, scholars in the field began to treat the demarcation problem as impossible to solve and futile to ponder. However, the essays that Massimo Pigliucci and Maarten Boudry have assembled in this volume make a rousing case for the unequivocal importance of reflecting on the separation between pseudoscience and sound science.
What makes beliefs thrive? In this paper, we model the dissemination of bona fide science versus pseudoscience, making use of Dan Sperber's epidemiological model of representations. Drawing on cognitive research on the roots of irrational beliefs and the institutional arrangement of science, we explain the dissemination of beliefs in terms of their salience to human cognition and their ability to adapt to specific cultural ecologies. By contrasting the cultural development of science and pseudoscience along a number of dimensions, we gain a better understanding of their underlying epistemic differences. Pseudoscience can achieve widespread acceptance by tapping into evolved cognitive mechanisms, thus sacrificing intellectual integrity for intuitive appeal. Science, by contrast, defies those deeply held intuitions precisely because it is institutionally arranged to track objective patterns in the world, and the world does not care much about our intuitions. In light of these differences, we discuss the degree of openness or resilience to conceptual change (evidence and reason), and the divergent ways in which science and pseudoscience can achieve cultural “success”.
Philosophers of science have given up on the quest for a silver bullet to put an end to all pseudoscience, as such a neat formal criterion to separate good science from its contenders has proven elusive. In the literature on critical thinking and in some philosophical quarters, however, this search for silver bullets lives on in the taxonomies of fallacies. The attractive idea is to have a handy list of abstract definitions or argumentation schemes, on the basis of which one can identify bad or invalid types of reasoning, abstracting away from the specific content and dialectical context. Such shortcuts for debunking arguments are tempting, but alas, the promise is rarely, if ever, fulfilled. Different strands of research on the pragmatics of argumentation, probabilistic reasoning and ecological rationality have shown that almost every known type of fallacy is a close neighbor to sound inferences or acceptable moves in a debate. Nonetheless, the kernel idea of a fallacy as an erroneous type of argument is still retained by most authors. We outline a destructive dilemma we refer to as the Fallacy Fork: on the one hand, if fallacies are construed as demonstrably invalid forms of reasoning, then they have very limited applicability in real life. On the other hand, if our definitions of fallacies are sophisticated enough to capture real-life complexities, they can no longer be held up as an effective tool for discriminating between good and bad forms of reasoning. As we bring our schematic “fallacies” in touch with reality, we seem to lose our grip on normative questions. Even approaches that do not rely on argumentation schemes to identify fallacies fail to escape the Fallacy Fork, and run up against their own version of it.
The concept of burden of proof is used in a wide range of discourses, from philosophy to law, science, skepticism, and even in everyday reasoning. This paper provides an analysis of the proper deployment of burden of proof, focusing in particular on skeptical discussions of pseudoscience and the paranormal, where burden of proof assignments are most poignant and relatively clear-cut. We argue that burden of proof is often misapplied or used as a mere rhetorical gambit, with little appreciation of the underlying principles. The paper elaborates on an important distinction between evidential and prudential varieties of burdens of proof, which is cashed out in terms of Bayesian probabilities and error management theory. Finally, we explore the relationship between burden of proof and several (alleged) informal logical fallacies. This allows us to get a firmer grip on the concept and its applications in different domains, and also to clear up some confusions with regard to when exactly some fallacies (ad hominem, ad ignorantiam, and petitio principii) may or may not occur.
In recent controversies about Intelligent Design Creationism (IDC), the principle of methodological naturalism (MN) has played an important role. In this paper, an often neglected distinction is made between two different conceptions of MN, each with its respective rationale and with a different view on the proper role of MN in science. According to one popular conception, MN is a self-imposed or intrinsic limitation of science, which means that science is simply not equipped to deal with claims of the supernatural (Intrinsic MN or IMN). Alternatively, we will defend MN as a provisory and empirically grounded attitude of scientists, which is justified in virtue of the consistent success of naturalistic explanations and the lack of success of supernatural explanations in the history of science (Provisory MN or PMN). Science does have a bearing on supernatural hypotheses, and its verdict is uniformly negative. We will discuss five arguments that have been proposed in support of IMN: the argument from the definition of science, the argument from lawful regularity, the science stopper argument, the argument from procedural necessity, and the testability argument. We conclude that IMN, because of its philosophical flaws, proves to be an ill-advised strategy to counter the claims of IDC. Evolutionary scientists are on firmer ground if they discard supernatural explanations on purely evidential grounds, instead of ruling them out by philosophical fiat.
Religious people seem to believe things that range from the somewhat peculiar to the utterly bizarre. Or do they? According to a new paper by Neil Van Leeuwen, religious “credence” is nothing like mundane factual belief. It has, he claims, more in common with fictional imaginings. Religious folk do not really “believe”—in the ordinary sense of the word—what they profess to believe. Like fictional imaginings, but unlike factual beliefs, religious credences are activated only within specific settings. We argue that Van Leeuwen’s thesis contradicts a wealth of data on religiously motivated behavior. By and large, the faithful genuinely believe what they profess to believe. Although many religions openly embrace a sense of mystery, in general this does not prevent the attribution of beliefs to religious people. Many of the features of religious belief that Van Leeuwen alludes to, like invulnerability to refutation and incoherence, are characteristic of irrational beliefs in general and actually betray...
Genes are often described by biologists using metaphors derived from computational science: they are thought of as carriers of information, as being the equivalent of ‘‘blueprints’’ for the construction of organisms. Likewise, cells are often characterized as ‘‘factories’’ and organisms themselves become analogous to machines. Accordingly, when the human genome project was initially announced, the promise was that we would soon know how a human being is made, just as we know how to make airplanes and buildings. Importantly, modern proponents of Intelligent Design, the latest version of creationism, have exploited biologists’ use of the language of information and blueprints to make their spurious case, based on pseudoscientific concepts such as ‘‘irreducible complexity’’ and on flawed analogies between living cells and mechanical factories. However, the living organism = machine analogy was already criticized by David Hume in his Dialogues Concerning Natural Religion. In line with Hume’s criticism, over the past several years a more nuanced and accurate understanding of what genes are and how they operate has emerged, ironically in part from the work of computational scientists who take biology, and in particular developmental biology, more seriously than some biologists seem to do. In this article we connect Hume’s original criticism of the living organism = machine analogy with the modern ID movement, and illustrate how the use of misleading and outdated metaphors in science can play into the hands of pseudoscientists. Thus, we argue that dropping the blueprint and similar metaphors will improve both the science of biology and its understanding by the general public.
True beliefs are better guides to the world than false ones. This is the common-sense assumption that undergirds theorizing in evolutionary epistemology. According to Alvin Plantinga, however, evolution by natural selection does not care about truth: it cares only about fitness. If our cognitive faculties are the products of blind evolution, we have no reason to trust them, anytime or anywhere. Evolutionary naturalism, consequently, is a self-defeating position. Following up on earlier objections, we uncover three additional flaws in Plantinga's latest formulation of his argument: a failure to appreciate adaptive path dependency, an incoherent conception of content ascription, and a conflation of common-sense and scientific beliefs, which we diagnose as the ‘foundationalist fallacy’. More fundamentally, Plantinga's reductive formalism with respect to the issue of cognitive reliability is inadequate to deal with relevant empirical details.
The scientific study of living organisms is permeated by machine and design metaphors. Genes are thought of as the ‘‘blueprint’’ of an organism, organisms are ‘‘reverse engineered’’ to discover their functionality, and living cells are compared to biochemical factories, complete with assembly lines, transport systems, messenger circuits, etc. Although the notion of design is indispensable to think about adaptations, and engineering analogies have considerable heuristic value (e.g., optimality assumptions), we argue they are limited in several important respects. In particular, the analogy with human-made machines falters when we move down to the level of molecular biology and genetics. Living organisms are far more messy and less transparent than human-made machines. Notoriously, evolution is an opportunistic tinkerer, blindly stumbling on ‘‘designs’’ that no sensible engineer would come up with. Despite impressive technological innovation, the prospect of artificially designing new life forms from scratch has proven more difficult than the superficial analogy with ‘‘programming’’ the right ‘‘software’’ would suggest. The idea of applying straightforward engineering approaches to living systems and their genomes—isolating functional components, designing new parts from scratch, recombining and assembling them into novel life forms—pushes the analogy with human artifacts beyond its limits. In the absence of a one-to-one correspondence between genotype and phenotype, there is no straightforward way to implement novel biological functions and design new life forms. Both the developmental complexity of gene expression and the multifarious interactions of genes and environments are serious obstacles for ‘‘engineering’’ a particular phenotype. The problem of reverse-engineering a desired phenotype to its genetic ‘‘instructions’’ is probably intractable for any but the most simple phenotypes. Recent developments in the field of bio-engineering and synthetic biology reflect these limitations. Instead of genetically engineering a desired trait from scratch, as the machine/engineering metaphor promises, researchers are making greater strides by co-opting natural selection to ‘‘search’’ for a suitable genotype, or by borrowing and recombining genetic material from extant life forms.
This paper offers an epistemological discussion of self-validating belief systems and the recurrence of “epistemic defense mechanisms” and “immunizing strategies” across widely different domains of knowledge. We challenge the idea that typical “weird” belief systems are inherently fragile, and we argue that, instead, they exhibit a surprising degree of resilience in the face of adverse evidence and criticism. Borrowing from the psychological research on belief perseverance, rationalization and motivated reasoning, we argue that the human mind is particularly susceptible to belief systems that are structurally self-validating. On this cognitive-psychological basis, we construct an epidemiology of beliefs, arguing that the apparent convenience of escape clauses and other defensive “tactics” used by believers may well derive not from conscious deliberation on their part, but from more subtle mechanisms of cultural selection.
What are the consequences of evolutionary theory for the epistemic standing of our beliefs? Evolutionary considerations can be used to either justify or debunk a variety of beliefs. This paper argues that evolutionary approaches to human cognition must at least allow for approximately reliable cognitive capacities. Approaches that portray human cognition as so deeply biased and deficient that no knowledge is possible are internally incoherent and self-defeating. As evolutionary theory offers the current best hope for a naturalistic epistemology, evolutionary approaches to epistemic justification seem to be committed to the view that our sensory systems and belief-formation processes are at least approximately accurate. However, for that reason they are vulnerable to the charge of circularity, and their success seems to be limited to commonsense beliefs. This paper offers an extension of evolutionary arguments by considering the use of external media in human cognitive processes: we suggest that the way humans supplement their evolved cognitive capacities with external tools may provide an effective way to increase the reliability of their beliefs and to counter evolved cognitive biases.
An immunizing strategy is an argument brought forward in support of a belief system, though independent from that belief system, which makes it more or less invulnerable to rational argumentation and/or empirical evidence. By contrast, an epistemic defense mechanism is defined as a structural feature of a belief system which has the same effect of deflecting arguments and evidence. We discuss the remarkable recurrence of certain patterns of immunizing strategies and defense mechanisms in pseudoscience and other belief systems. Five different types will be distinguished and analyzed, with examples drawn from widely different domains. The difference between immunizing strategies and defense mechanisms is analyzed, and their epistemological status is discussed. Our classification sheds new light on the various ways in which belief systems may achieve invulnerability against empirical evidence and rational criticism, and we propose our analysis as part of an explanation of these belief systems’ enduring appeal and tenacity.
This paper discusses the ecological case for epistemic innocence: does biased cognition have evolutionary benefits, and if so, does that exculpate human reasoners from irrationality? Proponents of ‘ecological rationality’ have challenged the bleak view of human reasoning emerging from research on biases and fallacies. If we approach the human mind as an adaptive toolbox, tailored to the structure of the environment, many alleged biases and fallacies turn out to be artefacts of narrow norms and artificial set-ups. However, we argue that putative demonstrations of ecological rationality involve subtle locus shifts in attributions of rationality, conflating the adaptive rationale of heuristics with our own epistemic credentials. By contrast, other cases also involve an ecological reframing of human reason, but do not involve such problematic locus shifts. We discuss the difference between these cases, bringing clarity to the rationality debate.
Why do irrational beliefs adopt the trappings of science, to become what is known as “pseudoscience”? Here, we develop and extend an epidemiological framework to map the factors that explain the form and the popularity of irrational beliefs in scientific garb. These factors include the exploitation of epistemic vigilance, the misunderstanding of the authority of science, the use of the honorific title of “science” as an explicit argument for belief, and the phenomenon of epistemic negligence. We conclude by integrating the various factors in an epidemiological framework and thus provide a comprehensive cultural evolutionary account of science mimicry.
What is wrong with ad hoc hypotheses? Ever since Popper’s falsificationist account of adhocness, there has been a lively philosophical discussion about what constitutes adhocness in scientific explanation, and what, if anything, distinguishes legitimate auxiliary hypotheses from illicit ad hoc ones. This paper draws upon distinct examples from pseudoscience to provide us with a clearer view as to what is troubling about ad hoc hypotheses. In contrast with other philosophical proposals, our approach retains the colloquial, derogative meaning of adhocness, and calls attention to the way in which the context of a theoretical move bears on the charge of adhocness. We also discuss the role of motivations implicit in the concept of adhocness, and the way ad hoc moves draw on theory-internal rationalizations.
Social constructivist approaches to science have often been dismissed as inaccurate accounts of scientific knowledge. In this article, we take the claims of robust social constructivism (SC) seriously and attempt to find a theory which does instantiate the epistemic predicament as described by SC. We argue that Freudian psychoanalysis, in virtue of some of its well-known epistemic complications and conceptual confusions, provides a perfect illustration of what SC claims is actually going on in science. In other words, the features SC mistakenly ascribes to science in general correctly characterize the epistemic status of Freudian psychoanalysis. This sheds some light on the internal disputes in the field of psychoanalysis, on the sociology of the psychoanalytic movement, and on the “war” that has been waged with his critics over Freud's legacy. In addition, our analysis offers an indirect and independent argument against SC as an account of bona fide science, by illustrating what science would look like if it were to function as SC claims it does.
Sober has reconstructed the biological design argument in the framework of likelihoodism, purporting to demonstrate that it is defective for intrinsic reasons. We argue that Sober’s restriction on the introduction of auxiliary hypotheses is too stringent, as it commits him to rejecting types of everyday reasoning that are clearly valid. Our account shows that the design argument fails, not because it is intrinsically untestable but because it clashes with the empirical evidence and fails to satisfy certain theoretical desiderata (in particular, unification). Likewise, Sober’s critique of the arguments from imperfections and from evil against design is off the mark.
Ever since Socrates, philosophers have been in the business of asking questions of the type “What is X?” The point has not always been to actually find out what X is, but rather to explore how we think about X, to bring up to the surface wrong ways of thinking about it, and hopefully in the process to achieve an increasingly better understanding of the matter at hand. In the early part of the twentieth century one of the most ambitious philosophers of science, Karl Popper, asked that very question in the specific case in which X = science. Popper termed this the “demarcation problem,” the quest for what distinguishes science from nonscience and pseudoscience (and, presumably, also the latter two from each other).
After contrasting obscurantism with bullshit, we explore some ways in which obscurantism is typically justified by investigating a notorious test-case: defences of Lacanian psychoanalysis. Obscurantism abuses the reader's natural sense of curiosity and interpretive charity with the promise of deep and profound insights about a designated subject matter that is often vague or elusive. When the attempt to understand what the speaker means requires excessive hermeneutic efforts, interpreters are reluctant to halt their quest for meaning. We diagnose this as a case of psychological loss aversion, in particular, the aversion to acknowledging that there was no hidden meaning after all, or that whatever meaning was found had been projected onto the text by the reader herself.
Pseudoscience spreads through communicative and inferential processes that make people vulnerable to weird beliefs. However, the fact that pseudoscientific beliefs are unsubstantiated and have no basis in reality does not mean that the people who hold them have no reasons for doing so. We propose that reasons play a central role in the diffusion of pseudoscience. On the basis of cultural epidemiology and the interactionist theory of reasoning, we will here analyse the structure and the function of reasons in the propagation of pseudoscience. We conclude by discussing the implications of our approach for the understanding of human irrationality.
We respond to Van Leeuwen's critique of our paper. We clarify why our account is not committed to a unitary view of "belief", and we argue that Van Leeuwen's dichotomy between "fakers" and "fanatics" is a false dilemma, based on an equivocation in the use of the term "fanaticism". Once we pay attention to crucial content differences in religious belief, to which Van Leeuwen is largely oblivious, we can explain all the phenomena that he alludes to. Finally, we discuss some peculiar features of religion, such as the unfalsifiability of many doctrines and the importance of mystery, but we insist that such features do not rest on a difference in cognitive attitude. By and large, religious folks factually believe what they profess to believe.
What, if any, are the limits of human understanding? Epistemic pessimists, sobered by our humble evolutionary origins, have argued that some parts of the universe will forever remain beyond our ken. But what exactly does it mean to say that humans are ‘cognitively closed’ to some parts of the world, or that some problems will forever remain ‘mysteries’? In this paper we develop a richer conceptual toolbox for thinking about different forms and varieties of cognitive limitation, which are often conflated by the so-called ‘new mysterians’. We distinguish between representational access and imaginative understanding, as well as between different modalities of cognitive limitation. Next, we look at tried-and-tested strategies for overcoming our innate cognitive limitations, drawing from the literature on distributed cognition and ‘cognitive scaffolding’. This allows us to distinguish between the limits of bare brains vs. scaffolded brains. Most importantly, we argue that this panoply of mind-extension devices is combinatorial and open-ended. In the end, this allows us to turn the tables on the mysterians: for every alleged ‘mystery’, they should demonstrate that no possible combination of mind extension devices will bring us any closer to a solution.
Does cultural evolution happen by a process of copying or replication? And how exactly does cultural transmission compare with that paradigmatic case of replication, the copying of DNA in living cells? Theorists of cultural evolution are divided on these issues. The most important objection to the replication model has been leveled by Dan Sperber and his colleagues. Cultural transmission, they argue, is almost always reconstructive and transformative, while strict ‘replication’ can be seen as a rare limiting case at most. By means of some thought experiments and intuition pumps, I clear up some confusion about what qualifies as ‘replication’. I propose a distinction between evocation and extraction of cultural information, applying these concepts at different levels of resolution. I defend a purely abstract and information-theoretical definition of replication, while rejecting more material conceptions. In the end, even after taking Sperber’s valuable and important points on board, the notion of cultural replication remains a valid and useful one. This is fortunate, because we need it for certain explanatory projects.
The demarcation between science and pseudoscience is a long-standing problem in philosophy of science. Although philosophers have been hesitant to engage in this project since Larry Laudan announce...
According to some philosophers, we are “cognitively closed” to the answers to certain problems. McGinn has taken the next step and offered a list of examples: the mind/body problem, the problem of the self and the problem of free will. There are naturalistic, scientific answers to these problems, he argues, but we cannot reach them because of our cognitive limitations. In this paper, we take issue with McGinn's thesis as the most well-developed and systematic one among the so-called “new mysterians”. McGinn aims to establish a strong, representational notion of cognitive closure: a principled inaccessibility of a true theory of certain properties of the world, but he offers arguments that only bear on difficulties with psychologically grasping the correct answers. The latter we label psychological closure. We argue that representational closure does not follow from psychological closure, and that McGinn's case therefore falters. We could very well be able to represent the correct answer to some question, even without being able to grasp that answer psychologically. McGinn's mistake in deriving representational closure from psychological closure rests on a fallacy of equivocation relating to the concept of ‘understanding’. By making this distinction explicit, we hope to improve our thinking about the limits of science in particular and human knowledge in general.
For a long time, philosophers of science have expressed little interest in the so-called demarcation project that occupied the pioneers of their field, and most now concur that terms like “pseudoscience” cannot be defined in any meaningful way. However, recent years have witnessed a revival of philosophical interest in demarcation. In this paper, I argue that, though the demarcation problem of old leads to a dead-end, the concept of pseudoscience is not going away anytime soon, and deserves a fresh look. My approach proposes to naturalize and down-size the concept, anchoring it in real-life doctrines and fields of inquiry. First, I argue against the definite article “the” in “the demarcation problem”, distinguishing between territorial and normative demarcation, and between different failures and shortcomings in science apart from pseudoscience. Next, I argue that pseudosciences can be fruitfully regarded as simulacra of science, doctrines that are not epistemically warranted but whose proponents try to create the impression that they are. In this element of imitation or mimicry, I argue, lies the clue to their common identity. Despite the huge variety of doctrines and beliefs gathered under the rubric of “pseudoscience”, and the wide range of defects from which they suffer, pseudosciences all engage in similar strategies to create an impression of epistemic warrant. The indirect, symptomatic approach defended here leads to a general characterization of pseudosciences in all domains of inquiry, and to a useful diagnostic tool.
Certain enterprises at the fringes of science, such as intelligent design creationism, claim to identify phenomena that go beyond not just our present physics but any possible physical explanation. Asking what it would take for such a claim to succeed, we introduce a version of physicalism that formulates the proposition that all available data sets are best explained by combinations of “chance and necessity”—algorithmic rules and randomness. Physicalism would then be violated by the existence of oracles that produce certain kinds of noncomputable functions. Examining how a candidate for such an oracle would be evaluated leads to questions that do not admit an easy resolution. Since we lack any plausible candidate for any such oracle, however, chance-and-necessity physicalism appears very likely to be correct.
Alvin Plantinga's evolutionary argument against naturalism states that evolution cannot produce warranted beliefs. In contrast, according to Plantinga, Christian theism provides properly functioning cognitive faculties (I) in an appropriate cognitive environment (II), in accordance with a design plan aimed at producing true beliefs (III). But does theism fulfill criteria I–III? Judging from the Bible, God employs deceit in his relations with humanity, rendering our cognitive functions unreliable. Moreover, there is no reason to suppose that God's purpose would be to produce true beliefs in humans. Finally, from the theistic/religious perspective, it is impossible to tell whether observations have natural or supernatural causes, which undermines an appropriate cognitive environment. Reliable identification of deceit or miracles could alleviate these problems, but the theistic community has failed to resolve this issue. Dismissal of parts of the Bible, or attempts to find alternative interpretations, would collapse into skepticism or deism. Thus, Plantinga's problem of epistemic warrant backfires on theism.
In recent years, there has been an intense public debate about whether and, if so, to what extent investments in nuclear energy should be part of strategies to mitigate climate change. Here, we address this question from an ethical perspective, evaluating different strategies of energy system development in terms of three ethical criteria, which will differentially appeal to proponents of different normative ethical frameworks. Starting from a standard analysis of climate change as arising from an intergenerational collective action problem, we evaluate whether contributions from nuclear energy will, on expectation, increase the likelihood of successfully phasing out fossil fuels in time to avert dangerous global warming. For many socio-economic and geographic contexts, our review of the energy system modeling literature suggests the answer to this question is “yes.” We conclude that, from the point of view of climate change mitigation, investments in nuclear energy as part of a broader energy portfolio will be ethically required to minimize the risks of decarbonization failure, and thus the tail risks of catastrophic global warming. Finally, using a sensitivity analysis, we consider which other aspects of nuclear energy deployment, apart from climate change, have the potential to overturn the ultimate ethical verdict on investments in nuclear energy. Out of several potential considerations, we suggest that its potential interplay — whether beneficial or adverse — with the proliferation of nuclear weapons is the most plausible candidate.
Some philosophers have argued that, owing to our humble evolutionary origins, some mysteries of the universe will forever remain beyond our ken. But what exactly does it mean to say that humans are ‘cognitively closed’ to some parts of the universe, or that some problems will forever remain ‘mysteries’? First, we distinguish between representational access and imaginative understanding, as well as between different modalities of cognitive limitation. Next, we look at tried-and-tested strategies for overcoming our innate cognitive limitations. In particular, we consider how metaphors and analogies can extend the reach of the human mind, by allowing us to make sense of bizarre and counterintuitive things in terms of more familiar things. Finally, we argue that this collection of mind-extension devices is combinatorial and open-ended, and that therefore pronouncements about cognitive closure and about the limits of human inquiry are premature.
In a previous issue of Tijdschrift voor Filosofie, Filip Buekens argues that evolutionary psychology (EP), or some interpretations thereof, have a corrosive impact on our ‘manifest self-image’. Buekens wants to defend and protect the global adequacy of this manifest self-image in the face of what he calls evolutionary revisionism. Although we largely agree with Buekens’ central argument, we criticize his analysis on several counts, making some constructive proposals to strengthen his case. First, Buekens’ argument fails to target EP, because his notion of the ‘constitutive conditions’ of our attitudes is too wide and too extensive. Second, his defense of the global adequacy of our attitudes does not allow for sufficient differentiation to analyze the problem of potential self-refutation with respect to EP. Third, his account of knowledge about constitutive conditions, and its impact on our self-image, is problematic. We provide a more detailed explanation for the pervasiveness of evolutionary revisionism and other misconceptions about EP. Finally, we consider in what sense EP may legitimately affect our self-image, and whether it can truly inspire corrections of our view of human nature.
In the space of all possible beliefs, conspiracy theories stand out with a special and possibly unique feature: they are the only beliefs that predict an absence of evidence in their favor, and even the discovery of counterevidence. In the traditional, narrow sense of the term, a ‘conspiracy theory’ refers to an alternative explanation of a historical event in terms of a small group of actors working together to achieve some nefarious goal. In a broader sense, however, any theory that involves a form of invisible intentional agency can adopt the contours of a conspiracy theory. In this paper, I adopt a broader and more abstract definition of conspiracy theories, based on a conceptual core that unifies all such theories. By drawing comparisons between conspiracy theories in a range of different domains, we gain more insight into their central epistemological defects, as well as their cultural dynamics. Some belief systems are inherently conspiratorial, in that they posit some form of intelligent agency that deliberately wants to escape detection, while others merely resort to conspiratorial reasoning when threatened with counterevidence. This paper builds on earlier research into the cultural evolution of belief systems.
In this commentary on Daniel Dennett's 'From Bacteria to Bach and Back', I make some suggestions to strengthen the meme concept, in particular the hypothesis of cultural parasitism. This is a notion that has both caused excitement among enthusiasts and raised the hackles of critics. Is the “meme” meme itself an annoying piece of malware, which has infected and corrupted the mind of an otherwise serious philosopher? Or is it an indispensable theoretical tool, as Dennett believes, which deserves to be spread far and wide?
What, if anything, is wrong with conspiracy theories? A conspiracy refers to a group of people acting in secret to achieve some nefarious goal. But given that the pages of history are full of such plots, why are CTs regarded with suspicion? Just like with the traditional demarcation problem, philosophers disagree about whether there are general ways to distinguish legitimate hypotheses about conspiracies from unfounded ‘conspiracy theories’. According to particularism, the currently dominant view among philosophers, there is no such demarcation line to be drawn. Each CT should be evaluated on its own merits, and the bad reputation of CTs as a class is undeserved. In this paper, I present a new defense of generalism, the view that there is indeed something prima facie suspicious about CTs. To demarcate legitimate theorizing about real-life conspiracies from “mere conspiracy theories”, I draw on the principle of asymmetry between causes and effects, and show how it sheds light on classical problems of missing evidence and adhocness. Because of their extreme resilience to counterevidence, CTs can be seen as the epistemological equivalent of black holes, into which unwary truth-seekers are drawn, never to escape again. Finally, by presenting a general ‘recipe’ for generating novel CTs around any given event, regardless of the circumstances and the available evidence, I rescue the intuition behind colloquial phrases like “That’s just a conspiracy theory”.
What, if anything, is wrong with conspiracy theories? A conspiracy refers to a group of people acting in secret to achieve some nefarious goal. But given that the pages of history are full of such plots, why are CTs regarded with suspicion? Just like with the traditional demarcation problem, philosophers disagree about where to draw the line between legitimate hypotheses about conspiracies and unfounded ‘conspiracy theories’. Some believe that there is no such demarcation line to be drawn, that each CT should be evaluated on its own merits, and that the bad reputation of CTs is wholly undeserved. In this paper, I intend to rescue the intuition that there is indeed something prima facie suspicious about CTs. First, I demarcate legitimate theorizing about real-life conspiracies from “mere conspiracy theories”. Along the way, my analysis will clarify some epistemological issues surrounding falsifiability, asymmetries between causes and effects, and hypotheses involving intentional agents. Because of their extreme resilience to external criticism and counterevidence, I argue, CTs are the epistemological equivalent of a ‘black hole’, into which unwary truth-seekers are drawn, never to escape again. But this strong attraction of CTs comes at a steep price: their theoretical parameters are essentially arbitrary, making them vulnerable to internal disruption. In essence, because it is so easy to construct a novel CT, it is equally easy to construct many different ones about the same historical event. And that is what justifies our suspicion of CTs.
Are there any such things as mind viruses? By analogy with biological parasites, such cultural items are supposed to subvert or harm the interests of their host. Most popularly, this notion has been associated with Richard Dawkins’ concept of the “selfish meme”. To unpack this claim, we first clear some conceptual ground around the notions of cultural adaptation and units of culture. We then formulate Millikan’s challenge: how can cultural items develop novel purposes of their own, cross-cutting or subverting human purposes? If this central challenge is not met, talk of cultural ‘parasites’ or ‘selfish memes’ will be vacuous or superfluous. First, we discuss why other attempts to answer Millikan’s challenge have failed. In particular, we put to rest the claims of panmemetics, a somewhat sinister worldview according to which human culture is nothing more than a swarm of selfish agents, plotting and scheming behind the scenes. Next, we reject a more reasonable, but still overly permissive approach to mind parasites, which equates them with biologically maladaptive culture. Finally, we present our own answer to Millikan’s challenge: certain systems of misbelief can be fruitfully treated as selfish agents developing novel purposes of their own. In fact, we venture that this is the only way to properly understand them. Systems of misbelief are designed to spread in a viral-like manner, without any regard to the interests of their human hosts, and with possibly harmful consequences. As a proof of concept, we discuss witchcraft beliefs in early modern Europe. In this particular case, treating cultural representations as “parasites” – i.e. adopting the meme’s eye view – promises to shed new light on a mystery that historians and social scientists have been wrestling with for decades.
The human brain is the only object in the universe, as far as we know, that has discovered its own origins. But what, if any, are the limits of our understanding? Epistemic pessimists, sobered by our humble evolutionary origins, have argued that some truths about the universe are perennial mysteries and will forever remain beyond our ken. Others have brushed this off as premature, a form of epistemic defeatism. In this paper we develop a conceptual toolbox for parsing different forms of cognitive limitation that are often conflated in the literature. We distinguish between representational access and intuitive understanding. We also distinguish different modalities of cognitive limitation. If the scientific endeavor ever comes to a halt, will this feel like slamming into a brick wall, or rather like slowly getting bogged down in a swamp? By distinguishing different types and modalities of human cognitive limitation, we soften up the hypothesis of ‘cognitive closure’ and ultimate ‘mysteries’. Next, we propose specific mechanisms and strategies for overcoming our innate cognitive limitations. For a start, it is uninformative to think of the limits of a single, bare, unassisted brain. One of the central features of human intelligence is the capacity for mind extension and distributed cognition. We have developed various technologies for extending the reach of our naked brains and for pooling their cognitive resources, as witnessed by the history of science. We then discuss different cognitive mechanisms for overcoming the limits to our intuitive understanding, and argue that these are combinatorial and open-ended. In light of all these possibilities for extending the limits of our understanding, we conclude that there is no good reason to suspect the existence of an outer wall of human comprehension.
The leading Intelligent Design theorist William Dembski (Rowman & Littlefield, Lanham MD, 2002) argued that the first No Free Lunch theorem, first formulated by Wolpert and Macready (IEEE Trans Evol Comput 1: 67–82, 1997), renders Darwinian evolution impossible. In response, Dembski’s critics pointed out that the theorem is irrelevant to biological evolution. Meester (Biol Phil 24: 461–472, 2009) agrees with this conclusion, but still thinks that the theorem does apply to simulations of evolutionary processes. According to Meester, the theorem shows that simulations of Darwinian evolution, as these are typically set in advance by the programmer, are teleological and therefore non-Darwinian. Therefore, Meester argues, they are useless in showing how complex adaptations arise in the universe. Meester uses the term ‘teleological’ inconsistently, however, and we argue that, no matter how we interpret the term, a Darwinian algorithm does not become non-Darwinian by simulation. We show that the NFL theorem is entirely irrelevant to this argument, and conclude that it does not pose a threat to the relevance of simulations of biological evolution.