In this paper we argue that normative reasons are hyperintensional and put forward a formal account of this thesis. That reasons are hyperintensional means that being a reason for a proposition does not imply being a reason for a logically equivalent proposition. In the first part we consider three arguments for the hyperintensionality of reasons: an argument from the nature of reasons, an argument from substitutivity and an argument from explanatory power. In the second part we describe a hyperintensional logic of reasons based on justification logics. Finally, we discuss the philosophical import of this proposal and highlight some limitations and possible developments.
In this paper, we provide a semantic analysis of the well-known knowability paradox stemming from the Church–Fitch observation that the meaningful knowability principle 'all truths are knowable', when expressed as a bi-modal principle F → ♢KF, yields an unacceptable omniscience property 'all truths are known'. We offer an alternative semantic proof of this fact independent of the Church–Fitch argument. This shows that the knowability paradox is not intrinsically related to the Church–Fitch proof, nor to the Moore sentence upon which it relies, but rather to the knowability principle itself. Further, we show that, from a verifiability perspective, the knowability principle fails in the classical logic setting because it is missing the explicit incorporation of a hidden assumption of 'stability': 'the proposition in question does not change from true to false in the process of discovery.' Once stability is taken into account, the resulting 'stable knowability principle' and its nuanced versions more accurately represent verification-based knowability and do not yield omniscience.
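For readers unfamiliar with the paradox, the standard Church–Fitch argument can be sketched as follows. This is a reconstruction of the textbook derivation, not the alternative semantic proof the paper offers; the knowability principle is written in its standard form F → ♢KF, with K read as 'it is known that' and ♢ as possibility.

```latex
% Standard Church--Fitch derivation of omniscience from knowability.
\begin{align*}
&\text{(KP)} && F \rightarrow \Diamond K F
  && \text{knowability principle}\\
&\text{(1)}  && (p \land \lnot K p) \rightarrow \Diamond K(p \land \lnot K p)
  && \text{KP instantiated with the Moore sentence}\\
&\text{(2)}  && K(p \land \lnot K p) \rightarrow (K p \land \lnot K p)
  && K \text{ distributes over } \land \text{; factivity of } K\\
&\text{(3)}  && \lnot \Diamond K(p \land \lnot K p)
  && \text{the consequent of (2) is contradictory}\\
&\text{(4)}  && \lnot (p \land \lnot K p)
  && \text{from (1) and (3)}\\
&\text{(5)}  && p \rightarrow K p
  && \text{omniscience, from (4) classically}
\end{align*}
```

The paper's point is that step (1), the Moore-sentence instantiation, is dispensable: the omniscience conclusion can be reached semantically from the knowability principle alone.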
The new mechanistic philosophy is divided into two largely disconnected projects. One deals with a metaphysical inquiry into how mechanisms relate to issues such as causation, capacities and levels of organization, while the other deals with epistemic issues related to the discovery of mechanisms and the intelligibility of mechanistic representations. Tudor Baetu explores and explains these projects, and shows how the gap between them can be bridged. His proposed account is compatible both with the assumptions and practices of experimental design in biological research, and with scientifically accepted interpretations of experimental results.
Artemov and Protopopescu introduced a Brouwer–Heyting–Kolmogorov interpretation of the knowledge operator to define the intuitionistic epistemic logic IEL, where the axiom A ⊃ KA is accepted but the axiom KA ⊃ A is rejected. This paper studies the notion of distributed knowledge in an expansion of the multi-agent variant of IEL. We provide a BHK interpretation of the distributed knowledge operator to define the intuitionistic epistemic logic with distributed knowledge DIEL. We construct a Hilbert system and a cut-free sequent calculus for DIEL and show that they are sound and complete for the intended Kripke semantics.
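As background, the axiomatization of IEL can be sketched as follows. This is a reconstruction from the literature on Artemov and Protopopescu's system; the multi-agent and distributed-knowledge extension DIEL is the paper's own contribution and is not reproduced here.

```latex
% Axioms of IEL as standardly presented (reconstruction; may differ in
% inessential details from the paper's formulation):
\begin{align*}
&\text{(IPC)} && \text{axioms of intuitionistic propositional logic}\\
&\text{(K)}   && K(A \supset B) \supset (K A \supset K B)\\
&\text{(co-reflection)} && A \supset K A\\
&\text{(intuitionistic reflection)} && K A \supset \lnot\lnot A
  \quad \text{in place of the classical factivity axiom } K A \supset A
\end{align*}
```

Under the BHK reading, co-reflection says a proof of A yields verification-based knowledge of A, while full factivity is too strong intuitionistically, since knowing A need not yield a proof of A.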
Both clinical research and basic science rely on the epistemic practice of extrapolation from surrogate models, to the point that explanatory accounts presented in review papers and biology textbooks are in fact composite pictures reconstituted from data gathered in a variety of distinct experimental setups. This raises two new challenges to previously proposed mechanistic-similarity solutions to the problem of extrapolation: one pertaining to the absence of mechanistic knowledge in the early stages of research and the second to the large number of extrapolations underpinning explanatory accounts. An analysis of the strategies deployed in experimental research supports the conclusion that while results from validated surrogate models are treated as a legitimate line of evidence supporting claims about target systems, the overall structure of research projects also demonstrates that extrapolative inferences are not considered definitive or sufficient evidence, but only partially justified hypotheses subjected to further testing.
1 Introduction
2 Surrogate Models
2.1 What exactly is a surrogate model?
2.2 Why use surrogate models?
3 Prior Validation of Surrogate Models
3.1 The validation and ranking of surrogate models in the early stages of basic research
3.2 The validation of surrogate models in later stages of basic research and in clinical research
4 ‘Big Picture’ Accounts and the Extrapolations Underpinning Them
4.1 The mosaic nature of mechanistic descriptions in basic science
4.2 Challenges for mechanistic-similarity-based validation protocols
5 Retrospective Testing of Extrapolated Knowledge
5.1 Holistic confirmation
5.2 Fallback strategies
6 Conclusions
The article analyses in detail the Meselson–Stahl experiment, identifying two novel difficulties for the crucial experiment account, namely, the fragility of the experimental results and the fact that the hypotheses under scrutiny were not mutually exclusive. The crucial experiment account is rejected in favour of an experimental-mechanistic account of the historical significance of the experiment, emphasizing that the experiment generated data about the biochemistry of DNA replication that is independent of the testing of the semi-conservative, conservative, and dispersive hypotheses.
1 Introduction
2 The Meselson–Stahl Experiment
3 Some Difficulties for the Crucial Experiment Account
3.1 Additional experiments were required
3.2 Simplicity considerations are unconvincing
3.3 The fragility of the experimental results
3.4 The problematic interpretation of falsifying results
3.5 Not all possible hypotheses considered or tested
3.6 The tested hypotheses were not mutually exclusive
4 The Historical and Scientific Significance of the Meselson–Stahl Experiment
4.1 The challenge for the crucial experiment account
4.2 A mechanistic perspective on the experiment
4.3 The experimental design and its independence from theoretical speculations
4.4 The advantages of a ‘bottom-up’ mechanistic account
5 Conclusion
This article critically examines the rationales for the well-settled principle in sentencing law that an offender’s remorse is to be treated as a mitigating factor. Four basic types of rationale are examined: remorse makes punishment redundant; offering mitigation can induce remorse; remorse should be rewarded with mitigation; and remorse should be recognised by mitigation. The first three rationales each suffer from certain weaknesses or limitations, and are argued to be not as persuasive as the fourth. The article then considers, and rejects, two arguments against remorse as a mitigating factor in sentencing: that the crime, not the offender, is the focus of punishment; and that the truly remorseful offender would not ask for mitigation. The article concludes with a brief consideration of whether a lack of remorse should be an aggravating factor.
A survey of models in immunology is conducted and distinct kinds of models are characterized based on whether models are material or conceptual, the distinctiveness of their epistemic purpose, and the criteria for evaluating the goodness of a model relative to its intended purpose. I argue that the diversity of models in interdisciplinary fields such as immunology reflects the fact that information about the phenomena of interest is gathered from different sources using multiple methods of investigation. To each model is attached a description specifying how information about a phenomenon of interest has been acquired, highlighting points of commonality and difference between the methodological and epistemic histories of the information encapsulated in different models. These points of commonality and difference allow investigators to integrate findings from different models into more comprehensive explanatory accounts, as well as to troubleshoot anomalies and faulty accounts by going back to the original building blocks.
The paper discusses methodological guidelines for evaluating mechanistic explanations. According to current accounts, a satisfactory mechanistic explanation should include all of the relevant features of the mechanism, its component entities and activities, and their properties and organization, as well as exhibit productive continuity. It is not specified, however, how this kind of mechanistic completeness can be demonstrated. I argue that parameter sufficiency inferences based on mathematical model simulations provide a way of determining whether a mechanism capable of producing the phenomenon of interest can be constructed from mechanistic components organized, acting, and having the properties described in the mechanistic explanation.
An important strategy in the discovery of biological mechanisms involves the piecing together of experimental results from interventions. However, if mechanisms are investigated by means of ideal interventions, as defined by James Woodward and others, then the kind of information revealed is insufficient to discriminate between modular and non-modular causal contributions. Ideal interventions suffice for constructing webs of causal dependencies that can be used to make some predictions about experimental outcomes, but tell us little about how causally relevant factors are organized together and how they interact with each other in order to produce a phenomenon. I argue that lab research relies on more elaborate types of interventions, targeting multiple variables at the same time in a controlled fashion, in order to probe the temporal organization of causally relevant factors along distinct causal pathways and to test for non-modular interaction effects, thus providing crucial spatio-temporal constraints guiding the formulation of more detailed mechanistic explanations.
This paper provides an account of the experimental conditions required for establishing whether correlating or causally relevant factors are constitutive components of a mechanism connecting input (start) and output (finish) conditions. I argue that two-variable experiments, where both the initial conditions and a component postulated by the mechanism are simultaneously manipulated on an independent basis, are usually required in order to differentiate between correlating or causally relevant factors and constitutively relevant ones. Based on a typical research project in molecular biology, a flowchart model detailing typical stages in the formulation and testing of hypotheses about mechanistic components is also developed.
Current debates surrounding the virtues and shortcomings of randomization are symptomatic of a lack of appreciation of the fact that causation can be inferred by two distinct inference methods, each requiring its own, specific experimental design. There is a non-statistical type of inference associated with controlled experiments in basic biomedical research; and a statistical variety associated with randomized controlled trials in clinical research. I argue that the main difference between the two hinges on the satisfaction of the comparability requirement, which is in turn dictated by the nature of the objects of study, namely homogeneous or heterogeneous populations of biological systems. Among other things, this entails that the objection according to which randomized experiments fail to provide better evidence for causation because randomization cannot guarantee comparability is mistaken.
Examples from the sciences showing that mechanisms do not always succeed in producing the phenomena for which they are responsible have led some authors to conclude that the regularity requirement can be eliminated from characterizations of mechanisms. In this article, I challenge this conclusion and argue that a minimal form of regularity is inextricably embedded in examples of elucidated mechanisms that have been shown to be causally responsible for phenomena. Examples of mechanistic explanations from the sciences involve mechanisms that have been shown to produce phenomena with a reproducible rate of success. By contrast, if phenomena are infrequent to the point that they amount to irreproducible observations and experimental results, they are indistinguishable from the background noise of accidental happenings. The inability to detect or measure the phenomenon of interest against the background noise of accidental correlations makes it impossible to elucidate a mechanism by experimental means, to demonstrate that a proposed mechanism actually produces the phenomenon, and ultimately to justify why a hypothetical scenario involving an irregular mechanism should be preferred over attributing irreproducible happenings to chance.
The Soul is considered, both by religions and by philosophy, to be the immaterial aspect or essence of a human being, conferring individuality and humanity, and is often held to be synonymous with the mind or the self. For most theologies, the Soul is further defined as that part of the individual which partakes of divinity and transcends the body, according to different explanations. But regardless of the philosophical background in which a specific theology grounds the transcendence of the soul as the source of its everlasting essence, often considered to survive the death of the body, the soul is always appraised as a higher existence for which all should fight. In this regard, all religious beliefs assert that there are many unseen battles aiming to take hold of the human soul, either between divinity and evil, or between worlds, or even between the body and the soul itself. These unseen battles over the human soul, raging in the whole world, have made it the central item of the entire universe, both for the visible and the unseen worlds, an item whose possessor will also become the ruler of the universe. Through this philosophy, the value of the soul became unfathomable, incommensurable, and without parallel. The point of making such a broad overview of the soul in religious beliefs is the question of whether we can build an interfaith discourse based on religions’ most debated and valuable issue, the soul. Regardless of the variety of religious beliefs about what the soul is, there is always a residual consideration in them that makes the soul more important than the body. This universal impression is due to another belief, or rather a need to believe, that above and beyond this seen, palpable, finite life and world there should exist another one, infinite, transcendent, and available all the same after this one.
This variety stretches from the minimal impact the soul has on the body, as the superior essence that inhabits and enlivens matter, to the highest impact, in which the soul has nothing to do with matter[1] and is only ephemerally linked to it, its existence not at all limited, defined or dependent on matter [2], or even, at the extreme, as the very life of matter, so that this seen universe is merely a thought in the soul/mind [3]. In this extensive variety of overviews of the soul, the emphasis on the soul’s importance gives an inverse significance to the body/matter, from being everything that matters to a thin, dwindling item that has no existence at all outside consciousness.
Examining the writings of Katherine Parr both from the standpoint of metaphysical issues of her time and her status as a writer of the Tudor era, it is concluded that Queen Katherine had a developed humanist ontology, and one that coincided with a great deal of the new learning of the Henrician period, whether stridently Protestant or not. Analyses from James, Dubrow, and McConica are alluded to, and a comparison is made to some of the currents at work in English intellectual life at that time.
Kenneth Waters and Marcel Weber argue that the joint use of distinct gene concepts and the transfer of knowledge between classical and molecular analyses in contemporary scientific practice is possible because classical and molecular concepts of the gene refer to overlapping chromosomal segments and the DNA sequences associated with these segments. However, while pointing in the direction of coreference, both authors also agree that there is a considerable divergence between the actual sequences that count as genes in classical genetics and molecular biology. The thesis advanced in this paper is that the referents of classical and molecular gene concepts are coextensive to a higher degree than admitted by Waters and Weber, and therefore coreference can provide a satisfactory account of the high level of integration between classical genetics and molecular biology. In particular, I argue that the functional units/cistrons identified by classical techniques overlap with functional elements entering the composition of molecular transcription units, and that the precision of this overlap can be improved by conducting further experimentation.
Background: Many journals prohibit the use of declarative titles that state study findings, yet a few journals encourage or even require them. We compared the effects of a declarative versus a descriptive title on readers’ perceptions about the strength of evidence in a research abstract describing a randomized trial.
Methods: Study participants read two abstracts describing studies of a fictitious treatment for a fictitious condition. The first abstract (A1) described an uncontrolled, 10-patient case series, and the second (A2) described a randomized, placebo-controlled trial involving 48 patients. All participants rated the identical abstract A1 to provide baseline ratings and thus reduce the effects of inter-individual variability. Participants were randomized so that half rated a version of A2 with a descriptive title and half a version with a declarative title. For each abstract, participants indicated their agreement with the statement “Anticox is an effective treatment for pain in Green’s syndrome” using 100 mm visual analogue scales (VAS) ranging from “disagree completely” to “agree completely.” VAS scores were measured by an investigator who was unaware of group allocation.
Results: One hundred forty-four participants from four centres completed the study. There was no significant difference between the declarative and the descriptive title groups’ confidence in the study conclusions as expressed on VAS scales; in fact, the mean difference between A1 and A2 was smaller for the declarative title group than for the descriptive title group.
Conclusions: We found no evidence that the use of a declarative title affected readers’ perceptions about study conclusions. This suggests that editors’ fears that declarative titles might unduly influence readers’ judgements about study conclusions may be unfounded, at least in relation to reports of randomized trials. However, our study design had several limitations, and our findings may not be generalizable to other situations.
The Fukushima nuclear accident of 2011 provided an occasion for the public display of radiation maps generated using decision-support systems for nuclear emergency management. Such systems rely on computer models for simulating the atmospheric dispersion of radioactive materials and estimating potential doses in the event of a radioactive release from a nuclear reactor. In Germany, as in Japan, such systems are part of the national emergency response apparatus and, in case of accidents, they can be used by emergency task forces for planning radioprotection countermeasures. In this context, the paper addresses the epistemology of dose projections by critically analyzing some of the sources of epistemic opacity and non-knowledge affecting them, and the different methods and practices used by German radioprotection experts to improve their trustworthiness and reliability. It will be argued that dose projections are part of an entire radioprotection regime or assemblage built around the belief that the effects of nuclear accidents can be effectively mitigated thanks to the simulation technologies underlying different protocols and practices of nuclear preparedness. And, as the Fukushima experience showed, some of these expectations will not be met in real emergencies due to the inherent uncertainties entailed by the use of dose projections when planning protective countermeasures.
In the literary history of Tudor England, I venture to propose two names as standing out and claiming comparison with each other as witnesses to the ideal and reality of Christendom – those of Thomas More in the reign of Henry VIII and William Shakespeare in the reign of Elizabeth I. In the case of More, little needs to be said, it is so obvious that he bore witness to the ideal and the reality, even to the shedding of his blood as a canonized martyr. But in that of Shakespeare, much more has to be said in view of the seemingly overwhelming evidence to the contrary. For this purpose it is necessary to take account not just of the dramatist’s indebtedness to More’s Life of Richard III in his history play of that title, nor just of his contribution to the MS Book of Sir Thomas More, nor of the one explicit mention of More in the play of Henry VIII, which is commonly attributed to John Fletcher, but of the whole corpus of Shakespeare’s plays in their chronological order as bearing witness in their totality to what More called in his last speech at his trial in Westminster Hall “the whole corps of Christendom”.
Steven Tudor defends the mitigation of criminal sentences in cases in which offenders are genuinely remorseful for their crimes. More than this, he takes the principle that such remorse-based sentence reductions are appropriate to be a ‘well-settled legal principle’: so well settled, in fact, that ‘it is among those deep-seated commitments which can serve to test general theories as much as they are tested by them’. However, his account of why remorse should reduce punishment is strongly philosophical in character. He sets to one side the many practical difficulties in implementing such reductions in the real world of criminal justice institutions so that he can focus on the question of whether a plausible account of sentencing can show that remorse should mitigate punishment. I contend that Tudor’s defense of such reductions is unpersuasive in certain respects. Yet even if it can be made more persuasive, I argue that the conditions that would have to be satisfied for remorse-based sentence reductions to be justifiably implemented are so many and various that they would likely exceed our abilities to responsibly grant them in real world legal contexts. I therefore claim that Tudor has failed to provide a defense of the ‘remorse principle’ that serves to explain or justify existing legal practices.
The Bene Israel is a Jewish community in western India whose origins are unknown from conventional sources. This paper discusses a genetic ancestry study that mapped Bene Israel genealogies and the impact of the study on the Bene Israel.
In this article, I argue that genomic programs are not substitutes for multi-causal molecular mechanistic explanations of inheritance, but abstract representations of the same sort as mechanism schemas already described in the philosophical literature. On this account, the program analogy is not reductionistic and does not ignore or underestimate the active contribution of epigenetic elements to phenotypes and development. Rather, genomic program representations specifically highlight the genomic determinants of inheritance and their organizational features at work in the wider context of the mechanisms of genome expression.
Emergent antireductionism in biological sciences states that even though all living cells and organisms are composed of molecules, molecular wholes are characterized by emergent properties that can only be understood from the perspective of cellular and organismal levels of composition. Thus, an emergence claim (molecular wholes are characterized by emergent properties) is thought to support a form of antireductionism (properties of higher-level molecular wholes can only be understood by taking into account concepts, theories and explanations dealing with higher-level entities). I argue that this argument is flawed: even if molecular wholes are characterized by emergent properties and even if many successful explanations in biology are not molecular, there is no entailment between the two claims.
It has been argued that supervenience generates unavoidable confounding problems for interventionist accounts of causation, to the point that we must choose between interventionism and supervenience. According to one solution, the dilemma can be defused by excluding non-causal determinants of an outcome as potential confounders. I argue that this solution undermines the methodological validity of causal tests. Moreover, we don’t have to choose between interventionism and supervenience in the first place. Some confounding problems are effectively circumvented by experimental designs routinely employed in science. The remaining confounding issues concern the physical interpretation of variables and cannot be solved by choosing between interventionism and supervenience.
This article replies to some of Richard Lippke’s criticisms of my earlier article on the issue of whether remorse should mitigate sentence. I query whether remorse-based mitigation must always wait for signs of moral reform, and re-affirm that remorse is worthy of recognition in itself and not just for the moral reform it may bring. I also argue that, where delayed mitigation is appropriate, the task of ascertaining moral reform is not as dubious, practically or in principle, as Lippke maintains. I then confirm that my defence of the principle that remorse should mitigate sentence is not necessarily a defence of current practice.
Multidisciplinary models aggregating ‘lower-level’ biological and ‘higher-level’ psychological and social determinants of a phenomenon raise a puzzle. How is the interaction between the physical, the psychological and the social conceptualized and explained? Using biopsychosocial models of pain as an illustration, I argue that these models are in fact level-neutral compilations of empirical findings about correlated and causally relevant factors, and as such they neither assume, nor entail a conceptual or ontological stratification into levels of description, explanation or reality. If inter-level causation is deemed problematic or if debates about the superiority of a particular level of description or explanation arise, these issues are fueled by considerations other than empirical findings.
Looking from a critical race perspective at Wittig’s lesbian, in this article, I draw two conclusions. First, I suggest that it is actually trans-exclusionary lesbians’ own transphobia that makes them cis-gendered. And second, it becomes clear that the politicisation of choosing and refusing gender needs to acknowledge racism’s shaping role in the construction of gender. My approach not only intervenes in transphobic feminisms that are obsessed with simplistic understandings of sexual violence, but also questions rigid cis/trans binaries and rejects accounts of trans/gender that ignore the role of racialisation for the emergence of gender. The main question that I address is: how to conceptualise the complex im/possibilities of refusing and choosing in relation to gender? It is my aim to connect seemingly disparate knowledge productions on genderqueer, trans and other ‘impossible’ genders and sexualities. I am particularly interested in a phenomenon that can be described as ‘lesbian haunting’: the ambivalences one will find in tracing lesbian theory in relation to transing gender. With this, it is my attempt to rethink lesbian, queer and trans feminist approaches on violence, and to investigate the role of sexual violence within broader concepts of violence. More specifically, in order to understand both ‘gender’ and ‘transing gender’ as always already racialised, my approach builds on theories that identify ‘ungendering’ as an effect of normative racial violence.
Different concepts define species at the level of the pattern-based grouping of organisms into discrete clusters, at the level of the processes operating within and between populations that lead to the formation and maintenance of these clusters, or at the level of the inner-organismic genetic and molecular mechanisms that contribute to species cohesion or promote speciation. I argue that, unlike single-level approaches, a multi-level framework takes into account the complex sequences of cause-effect reinforcements leading to the formation and maintenance of various patterns, and allows for revisions and refinements of pattern-based characterizations in light of the gradual elucidation of the causes and mechanisms contributing to pattern formation and maintenance.
The purpose of this article is to update and defend syntax-based gene concepts. I show how syntax-based concepts can and have been extended to accommodate complex cases of processing and gene expression regulation. In response to difficult cases and causal parity objections, I argue that a syntax-based approach fleshes out a deflationary concept defining genes as genomic sequences and organizational features of the genome contributing to a phenotype. These organizational features are an important part of accepted molecular explanations, provide the theoretical basis for a large number of experimental techniques and practical applications, and play a crucial role in annotating the genome, deriving predictions and constructing bioinformatics models.
Savage et al. make a compelling case, Mehr et al. less so, for social bonding and credible signalling, respectively, as the main adaptive function of human musicality. We express general advocacy for the former thesis, highlighting the overlap between the two; direct versus derived biological functions; and aspects of music embedded in cultural evolution, for example, departures from tonality.