In the present interview, Jacob Rogozinski elucidates the main concepts and theses developed in his latest book, which is dedicated to the issue of modern jihadism. On this occasion, he explains his disagreements with other philosophical (Badiou, Baudrillard, Žižek) and anthropological (Girard) accounts of Islamic terrorism. Rogozinski also explains that although jihadism betrays Islam, it nonetheless has everything to do with Islam. Finally, he describes his own philosophical journey, which led him from a phenomenological study of the ego and the flesh to the study of past (witch-hunts, the French Reign of Terror) and contemporary (jihadism) terror apparatuses.
Throughout an illustrious career of teaching and writing that spans five decades, the philosopher Jacob Needleman has always tackled the "big questions" of life. In this collection of six feature interviews that began in the 1980s, Miller and Needleman discuss "Making Sense of Mysticism," "The Secrets of Time and Love," "The Meanings of Money," "Searching for the Soul of America," "Meeting God without Religion," and "The Need for Philosophy."
Medical nihilism is the view that we should have little confidence in the effectiveness of medical interventions. Jacob Stegenga argues persuasively that this is how we should see modern medicine, and suggests that medical research must be modified, clinical practice should be less aggressive, and regulatory standards should be enhanced.
Robustness is a common platitude: hypotheses are better supported with evidence generated by multiple techniques that rely on different background assumptions. Robustness has been put to numerous epistemic tasks, including the demarcation of artifacts from real entities, countering the “experimenter’s regress,” and resolving evidential discordance. Despite the frequency of appeals to robustness, the notion itself has received scant critique. Arguments based on robustness can give incorrect conclusions. More worrying is that although robustness may be valuable in ideal evidential circumstances (i.e., when evidence is concordant), often when a variety of evidence is available from multiple techniques, the evidence is discordant.
Philosophers have committed sins while studying science, it is said – philosophy of science focused on physics to the detriment of biology, reconstructed idealizations of scientific episodes rather than attending to historical details, and focused on theories and concepts to the detriment of experiments. Recent generations of philosophers of science have tried to atone for these sins, and by the 1980s the exculpation was in full swing. Marcel Weber’s Philosophy of Experimental Biology is a zenith mea culpa for philosophy of science: it carefully describes several historical examples from twentieth-century biology to address both ‘old’ philosophical topics, like reductionism, inference, and realism, and ‘new’ topics, like discovery, models, and norms. Biology, experiments, history – at last, philosophy of science, free of sin.
Robustness arguments hold that hypotheses are more likely to be true when they are confirmed by diverse kinds of evidence. Robustness arguments require the confirming evidence to be independent. We identify two kinds of independence appealed to in robustness arguments: ontic independence (OI)—when the multiple lines of evidence depend on different materials, assumptions, or theories—and probabilistic independence. Many assume that OI is sufficient for a robustness argument to be warranted. However, we argue that, as typically construed, OI is not a sufficient independence condition for warranting robustness arguments. We show that OI evidence can collectively confirm a hypothesis to a lower degree than individual lines of evidence do, contrary to the standard assumption undergirding robustness arguments. We employ Bayesian networks to represent the ideal empirical scenario for a robustness argument, as well as a variety of ways in which empirical scenarios can fall short of this ideal.
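A minimal numerical sketch of this point, with all probabilities invented for illustration (none are from the paper): two lines of evidence produced by techniques that share no materials or auxiliary theories can nonetheless share a source of error, and a positive result from both techniques can then confirm the hypothesis less than a positive result from either alone.

```python
# Toy Bayesian network (all numbers hypothetical). Hypothesis H has a
# 0.5 prior. Two techniques are ontically independent (no shared
# materials or theories), but when H is false a sample artifact
# (probability 0.3) makes BOTH report a positive result, so their
# false positives are probabilistically dependent.
prior_h = 0.5

def likelihood(e1, e2, h):
    """P(E1=e1, E2=e2 | H=h) for the toy network."""
    if h:  # true positives occur independently across techniques
        p = 0.6
        return (p if e1 else 1 - p) * (p if e2 else 1 - p)
    artifact, fp = 0.3, 0.02   # shared artifact vs. independent noise
    both_pos = 1.0 if (e1 and e2) else 0.0
    indep = (fp if e1 else 1 - fp) * (fp if e2 else 1 - fp)
    return artifact * both_pos + (1 - artifact) * indep

def posterior_joint(e1, e2):
    num = prior_h * likelihood(e1, e2, True)
    return num / (num + (1 - prior_h) * likelihood(e1, e2, False))

def posterior_single(e1):
    lik_h = sum(likelihood(e1, e2, True) for e2 in (0, 1))
    lik_not_h = sum(likelihood(e1, e2, False) for e2 in (0, 1))
    num = prior_h * lik_h
    return num / (num + (1 - prior_h) * lik_not_h)

print(f"P(H | E1+)      = {posterior_single(1):.3f}")    # ~0.657
print(f"P(H | E1+, E2+) = {posterior_joint(1, 1):.3f}")  # ~0.545: lower!
```

In this scenario the two positive results agree a little too well: agreement is itself more expected under the shared artifact than under the hypothesis, so the joint evidence confirms less than a single line does.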
An astonishing volume and diversity of evidence is available for many hypotheses in the biomedical and social sciences. Some of this evidence—usually from randomized controlled trials (RCTs)—is amalgamated by meta-analysis. Despite the ongoing debate regarding whether or not RCTs are the ‘gold standard’ of evidence, it is usually meta-analysis that is considered the best source of evidence: meta-analysis is thought by many to be the platinum standard of evidence. However, I argue that meta-analysis falls far short of that standard. Different meta-analyses of the same evidence can reach contradictory conclusions. Meta-analysis fails to provide objective grounds for intersubjective assessments of hypotheses because numerous decisions must be made when performing a meta-analysis, which allow wide latitude for subjective idiosyncrasies to influence its outcome. I end by suggesting that an older tradition of evidence in medicine—the plurality of reasoning strategies appealed to by the epidemiologist Sir Bradford Hill—is a superior strategy for assessing a large volume and diversity of evidence.
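To make the worry about analytic latitude concrete, here is a hedged toy example (study data invented, not drawn from any real meta-analysis): the same four studies yield very different pooled estimates depending on whether one chooses fixed-effect or DerSimonian-Laird random-effects pooling, one of the many decisions a meta-analyst must make.

```python
# Toy meta-analysis (hypothetical data): one large null study and
# three small positive studies. y holds effect sizes (log odds
# ratios), se their standard errors.
y  = [0.02, 0.50, 0.60, 0.45]
se = [0.08, 0.30, 0.35, 0.40]

w = [1 / s**2 for s in se]                        # inverse-variance weights
fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

# DerSimonian-Laird estimate of between-study variance tau^2
q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, y))
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(y) - 1)) / c)

w_re = [1 / (s**2 + tau2) for s in se]            # random-effects weights
pooled_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)

print(f"fixed effect:   {fixed:.2f}")      # ~0.09 -> essentially no effect
print(f"random effects: {pooled_re:.2f}")  # ~0.28 -> a modest benefit
```

Neither choice is forced by the evidence itself, which is the sense in which such decisions leave room for subjective idiosyncrasies.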
This highly original interpretation of Paul by the Jewish philosopher of religion Jacob Taubes was presented in a number of lectures held in Heidelberg toward the end of his life, and was regarded by him as his “spiritual testament.” ...
Empirical discussions of mental representation appeal to a wide variety of representational kinds. Some of these kinds, such as the sentential representations underlying language use and the pictorial representations of visual imagery, are thoroughly familiar to philosophers. Others have received almost no philosophical attention at all. Included in this latter category are analogue magnitude representations, which enable a wide range of organisms to primitively represent spatial, temporal, numerical, and related magnitudes. This article aims to introduce analogue magnitude representations to a philosophical audience by rehearsing empirical evidence for their existence and analysing their format, their content, and the computations they support.

1 Background
  1.1 Evidence of analogue magnitude representations
  1.2 Weber’s law
  1.3 Scepticism about analogue magnitude representations
2 Format
  2.1 Carey’s analogy
  2.2 Neural realization
  2.3 Analogue representation
  2.4 Analogue magnitude representation components
3 Content
  3.1 Do analogue magnitude representations have representational content?
  3.2 What do analogue magnitude representations represent?
  3.3 What content types do analogue magnitude representations have?
4 Computations
  4.1 Arithmetic computation
  4.2 Practical deliberation
5 Conclusion
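Weber's law (section 1.2 above) admits a compact illustration. The following sketch uses the scalar-variability model common in this literature, with a made-up Weber fraction: a magnitude n is encoded as a noisy value whose spread grows with n, so discriminability tracks the ratio of two magnitudes rather than their absolute difference.

```python
# Simulated analogue magnitude comparison (parameter values invented).
import random

W = 0.15           # hypothetical Weber fraction
TRIALS = 100_000

def encode(n):
    """Noisy analogue encoding: Normal(n, W * n)."""
    return random.gauss(n, W * n)

def accuracy(a, b):
    """Proportion of trials on which b (the larger) is judged larger."""
    return sum(encode(b) > encode(a) for _ in range(TRIALS)) / TRIALS

# Same 4:5 ratio at different absolute sizes -> similar accuracy
print(f"8 vs 10:  {accuracy(8, 10):.3f}")   # ~0.85
print(f"16 vs 20: {accuracy(16, 20):.3f}")  # ~0.85
# A larger ratio -> higher accuracy, despite the same smaller operand
print(f"8 vs 16:  {accuracy(8, 16):.3f}")   # ~0.999
```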
Enhanced indispensability arguments (EIA) claim that Scientific Realists are committed to the existence of mathematical entities due to their reliance on inference to the best explanation. Our central question concerns this purported parity of reasoning: do people who defend the EIA make an appropriate use of the resources of Scientific Realism to achieve platonism? We argue that the fact that Scientific Realists can employ a variety of different inferential strategies does not mean that ontological conclusions concerning which things we should be Scientific Realists about are arrived at by any inferential route that eschews causes; nor is there any direct pressure for Scientific Realists to change their inferential methods. We suggest that in order to maintain inferential parity with Scientific Realism, proponents of the EIA need to give details about how and in what way the presence of mathematical entities directly contributes to explanations.
This afterword extends and refines the arguments presented in Cohen and Jacobs. The main point made by the authors is that the antidepressant randomized controlled trial world is a make-believe world in which researchers act as if a bona fide medical experiment is being conducted. From the assumed existence of the “disorder” and the assumed homogeneity of the treatment groups, through the validity of rating scales and the meaning of their scores, to the presentations of researchers’ ratings as the genuine outcome of interest—all aspects of such trials are make-believe. The continued acceptance of randomized controlled trials as appropriate mechanisms to ascertain the actual effects of psychoactive drugs on human beings in distress confirms that researchers are inextricably dependent on large-scale organizational and financial interests that require the sustained production of make-believe results about psychoactive drugs.
Formal principles governing best practices in classification and definition have for too long been neglected in the construction of biomedical ontologies, in ways that have important negative consequences for data integration and ontology alignment. We argue that the use of such principles in ontology construction can serve as a valuable tool in error detection and also in supporting reliable manual curation. We argue also that such principles are a prerequisite for the successful application of advanced data integration techniques such as ontology-based multi-database querying, automated ontology alignment, and ontology-based text-mining. These theses are illustrated by means of a case study of the Gene Ontology, a project of increasing importance within the field of biomedical data integration.
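As a hedged sketch of what principle-based error detection can look like in practice (the term identifiers and structure below are invented, not taken from the Gene Ontology): two checks that follow from common formal principles are that the is_a hierarchy must be acyclic and that every non-root term must have at least one is_a parent.

```python
# Toy ontology fragment: term -> set of is_a parents (IDs invented).
IS_A = {
    "T:0001": set(),             # root
    "T:0002": {"T:0001"},
    "T:0003": {"T:0002"},
    "T:0004": {"T:0003", "T:0002"},
    "T:0005": {"T:0006"},        # these two form an is_a cycle
    "T:0006": {"T:0005"},
}
ROOT = "T:0001"

def find_orphans(is_a):
    """Non-root terms with no is_a parent violate a completeness principle."""
    return [t for t, parents in is_a.items() if not parents and t != ROOT]

def has_cycle(is_a):
    """Depth-first search for cycles in the is_a graph."""
    WHITE, GREY, BLACK = 0, 1, 2
    color = {t: WHITE for t in is_a}
    def visit(t):
        color[t] = GREY
        for p in is_a.get(t, ()):
            if color[p] == GREY or (color[p] == WHITE and visit(p)):
                return True
        color[t] = BLACK
        return False
    return any(color[t] == WHITE and visit(t) for t in is_a)

print("orphan terms:    ", find_orphans(IS_A))  # []
print("is_a cycle found:", has_cycle(IS_A))     # True (T:0005 <-> T:0006)
```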
Measuring the effectiveness of medical interventions faces three epistemological challenges: the choice of good measuring instruments, the use of appropriate analytic measures, and the use of a reliable method of extrapolating measures from an experimental context to a more general context. In practice each of these challenges contributes to overestimating the effectiveness of medical interventions. These challenges suggest the need for corrective normative principles. The instruments employed in clinical research should measure patient-relevant and disease-specific parameters, and should not be sensitive to parameters that are only indirectly relevant. Effectiveness should always be measured and reported in absolute terms (using measures such as 'absolute risk reduction'), and only sometimes should effectiveness also be measured and reported in relative terms (using measures such as 'relative risk reduction'); employment of relative measures promotes an informal fallacy akin to the base-rate fallacy, which can be exploited to exaggerate claims of effectiveness. Finally, extrapolating from research settings to clinical settings should more rigorously take into account possible ways in which the intervention in question can fail to be effective in a target population.
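The contrast between absolute and relative measures is easy to see with invented numbers: suppose the outcome occurs in 2% of the control group and 1% of the treatment group.

```python
# Hypothetical trial results (numbers invented for illustration).
control_risk = 0.02     # 2% of controls experience the outcome
treatment_risk = 0.01   # 1% of treated patients do

arr = control_risk - treatment_risk   # absolute risk reduction
rrr = arr / control_risk              # relative risk reduction
nnt = 1 / arr                         # number needed to treat

print(f"ARR: {arr:.1%}")   # 1.0% -> 1 extra patient in 100 benefits
print(f"RRR: {rrr:.0%}")   # 50%  -> sounds dramatic in isolation
print(f"NNT: {nnt:.0f}")   # 100 patients treated per outcome avoided
```

Reporting only the 50% relative reduction invites exactly the base-rate-style exaggeration described above; the absolute reduction of one percentage point tells a more modest story.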
The Meno, one of the most widely read of the Platonic dialogues, is seen afresh in this original interpretation that explores the dialogue as a theatrical presentation. Just as Socrates's listeners would have questioned and examined their own thinking in response to the presentation, so, Klein shows, should modern readers become involved in the drama of the dialogue. Klein offers a line-by-line commentary on the text of the Meno itself that animates the characters and conversation and carefully probes each significant turn of the argument. "A major addition to the literature on the Meno and necessary reading for every student of the dialogue."--Alexander Seasonske, Philosophical Review. "There exists no other commentary on Meno which is so thorough, sound, and enlightening."--Choice. Jacob Klein was a student of Martin Heidegger and a tutor at St. John's College from 1937 until his death. His other works include Plato's Trilogy: Theaetetus, the Sophist, and the Statesman, also published by the University of Chicago Press.
To be effective, a medical intervention must improve one's health by targeting a disease. The concept of disease, though, is controversial. Among the leading accounts of disease (naturalism, normativism, hybridism, and eliminativism), I defend a version of hybridism. A hybrid account of disease holds that for a state to be a disease, that state must both (i) have a constitutive causal basis and (ii) cause harm. The dual requirement of hybridism entails that a medical intervention, to be deemed effective, must target either the constitutive causal basis of a disease or the harms caused by the disease (or ideally both). This provides a theoretical underpinning to the two principal aims of medical treatment: care and cure.
Are evolution and creation irreconcilably opposed? Is 'intelligent design' theory an unhappy compromise? Is there another way of approaching the present-day divide between religious and so-called secular views of the origins of life? Jacob Klapwijk offers a philosophical analysis of the relation of evolutionary biology to religion, and addresses the question of whether the evolution of life is exclusively a matter of chance or is better understood as including the notion of purpose. Writing from a Christian point of view, he criticizes creationism and intelligent design theory, while also opposing reductive naturalism. He offers an alternative to both and an attempt to bridge the gap between them, via the idea of 'emergent evolution'. In this theory the process of evolution has an emergent or innovative character, resulting in a living world of ingenious, multifaceted complexity.
A platitude that took hold with Kuhn is that there can be several equally good ways of balancing theoretical virtues for theory choice. Okasha recently modelled theory choice using technical apparatus from the domain of social choice: famously, Arrow showed that no method of social choice can jointly satisfy four desiderata, and each of the desiderata in social choice has an analogue in theory choice. Okasha suggested that one can avoid the Arrow analogue for theory choice by employing a strategy used by Sen in social choice, namely, to enhance the information made available to the choice algorithms. I argue here that, despite Okasha’s claims to the contrary, the information-enhancing strategy is not compelling in the domain of theory choice.
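The Arrow analogue can be made vivid with a toy aggregation (the rankings are invented): if three theoretical virtues each rank three theories, pairwise majority voting over virtues can produce a Condorcet-style cycle, leaving no coherent overall best theory.

```python
# Toy theory choice as social choice (rankings hypothetical).
from itertools import combinations

# each "voter" is a theoretical virtue ranking theories best-to-worst
rankings = {
    "simplicity": ["T1", "T2", "T3"],
    "accuracy":   ["T2", "T3", "T1"],
    "scope":      ["T3", "T1", "T2"],
}

def majority_prefers(a, b):
    """True if a majority of virtues rank theory a above theory b."""
    votes = sum(r.index(a) < r.index(b) for r in rankings.values())
    return votes > len(rankings) / 2

for a, b in combinations(["T1", "T2", "T3"], 2):
    winner, loser = (a, b) if majority_prefers(a, b) else (b, a)
    print(f"{winner} beats {loser} (2 virtues to 1)")
# prints T1 > T2, T3 > T1, T2 > T3: the majorities form a cycle
```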
Science and technology are so intertwined that 'technoscience' has been argued to more accurately reflect the progress of science and its impact on society, and most socioscientific issues require technoscientific reasoning. Education policy documents have long noted that the general public lacks sufficient understanding of science and technology necessary for informed decision-making regarding socioscientific/technological issues. The science–technology–society movement and scholarship addressing socioscientific issues in science education reflect efforts in the science education community to promote more informed decision-making regarding such issues. Now Science, Technology, Engineering, and Mathematics (STEM) education has emerged as a major reform movement impacting science education. STEM education efforts emphasize literacy across the disciplines of science, technology, engineering, and mathematics, but with rare exceptions, treat issues of technology superficially and uncritically. Informed decision-making regarding many personal and societal issues requires technological literacy beyond merely becoming an enthusiastic designer or skilled user of technology, but the science education community has given little attention to what such literacy entails. Here, we present results of an extensive review of the literature regarding the nature of technology (NOT) in order to identify key issues among scholars who study technology. We then provide predominant perspectives among those scholars and suggest which identified NOT issues are most essential to address as part of STEM education efforts that seek to promote informed personal and societal decision-making.
The idea that perceptual and cognitive systems must incorporate knowledge about the structure of the environment has become a central dogma of cognitive theory. In a Bayesian context, this idea is often realized in terms of “tuning the prior”—widely assumed to mean adjusting prior probabilities so that they match the frequencies of events in the world. This kind of “ecological” tuning has often been held up as an ideal of inference, in fact defining an “ideal observer.” But widespread as this viewpoint is, it directly contradicts Bayesian philosophy of probability, which views probabilities as degrees of belief rather than relative frequencies, and explicitly denies that they are objective characteristics of the world. Moreover, tuning the prior to observed environmental frequencies is subject to overfitting, meaning in this context overtuning to the environment, which leads (ironically) to poor performance in future encounters with the same environment. Whenever there is uncertainty about the environment—which there almost always is—an agent's prior should be biased away from ecological relative frequencies and toward simpler and more entropic priors.
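A minimal sketch of the overtuning point (setup and numbers invented): an agent that sets its prior to exactly match small-sample frequencies assigns zero probability to a rare-but-possible event and is then scored catastrophically on future data, while a smoothed, higher-entropy prior fares better.

```python
# Overtuning a prior to observed frequencies vs. smoothing toward
# uniform (all values hypothetical).
import math, random

random.seed(1)
TRUE = [0.55, 0.30, 0.10, 0.05]   # true environmental frequencies
K = len(TRUE)

# a hypothetical small training sample in which event 3 never occurred
counts = [12, 6, 2, 0]
n = sum(counts)

tuned = [c / n for c in counts]               # match frequencies exactly
smooth = [(c + 1) / (n + K) for c in counts]  # Laplace smoothing: higher entropy

def avg_log_score(prior, data):
    """Average log-probability the prior assigns to new observations."""
    return sum(math.log(prior[e]) if prior[e] > 0 else -math.inf
               for e in data) / len(data)

test = random.choices(range(K), weights=TRUE, k=10_000)
print("tuned prior:   ", avg_log_score(tuned, test))   # -inf: event 3 occurs
print("smoothed prior:", avg_log_score(smooth, test))  # finite and better
```

The zero-count event is exactly where matching the sample to the environment backfires; biasing toward the uniform (maximum-entropy) prior hedges against it.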
Evidence hierarchies are widely used to assess evidence in systematic reviews of medical studies. I give several arguments against the use of evidence hierarchies. The problems with evidence hierarchies are numerous, and include methodological shortcomings, philosophical problems, and formal constraints. I argue that medical science should not employ evidence hierarchies, including even the latest and most sophisticated of such hierarchies.
Medicalisation is a social phenomenon in which conditions that were once under legal, religious, personal or other jurisdictions are brought into the domain of medical authority. Low sexual desire in females has been medicalised, pathologised as a disease, and intervened upon with a range of pharmaceuticals. There are two polarised positions on the medicalisation of low female sexual desire: I call these the mainstream view and the critical view. I assess the central arguments for both positions. Dividing the two positions are opposing models of the aetiology of low female sexual desire. I conclude by suggesting that the balance of arguments supports a modest defence of the critical view regarding the medicalisation of low female sexual desire.
Personal authenticity is out of fashion amongst analytic philosophers. Yet Kierkegaard, Nietzsche, Heidegger, Sartre, and Camus were clearly preoccupied by its theoretical and practical viability. In this study, Jacob Golomb illuminates the writings of these philosophers in an attempt to explain their particular ethical stances on the subject. This book will prove invaluable reading for students and teachers of philosophy, literature, and education, and indeed for anyone who has ever empathized with Camus's Meursault, Sartre's Mathieu, or Nietzsche's Zarathustra.
I defend a radical interpretation of biological populations—what I call population pluralism—which holds that there are many ways that a particular grouping of individuals can be related such that the grouping satisfies the conditions necessary for those individuals to evolve together. More constraining accounts of biological populations face empirical counter-examples and conceptual difficulties. One of the most intuitive and frequently employed conditions, causal connectivity—itself beset with numerous difficulties—is best construed by considering the relevant causal relations as ‘thick’ causal concepts. I argue that the fine-grained causal relations that could constitute membership in a biological population are huge in number and many are manifested by degree, and thus we can construe population membership as being defined by massively multidimensional constructs, the differences between which are largely arbitrary. I end by showing that positions in two recent debates in theoretical biology depend on a view of biological populations at odds with the pluralism defended here.