In the world of philosophy of science, the dominant theory of confirmation is Bayesian. In the wider philosophical world, the idea of inference to the best explanation exerts a considerable influence. Here we place the two worlds in collision, using Bayesian confirmation theory to argue that explanatoriness is evidentially irrelevant.
It is well known that the probabilistic relation of confirmation is not transitive in that even if E confirms H1 and H1 confirms H2, E may not confirm H2. In this paper we distinguish four senses of confirmation and examine additional conditions under which confirmation in different senses becomes transitive. We conduct this examination both in the general case where H1 confirms H2 and in the special case where H1 also logically entails H2. Based on these analyses, we argue that the Screening-Off Condition is the most important condition for transitivity in confirmation because of its generality and ease of application. We illustrate our point with the example of Moore’s “proof” of the existence of a material world, where H1 logically entails H2, the Screening-Off Condition holds, and confirmation in all four senses turns out to be transitive.
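A minimal numeric sketch of this sort of transitivity failure (our illustration, not an example from the paper), using a uniform draw from a 52-card deck. It covers even the special case, since here H1 logically entails H2:

```python
from fractions import Fraction

# Uniform draw from a standard 52-card deck.
ranks = range(13)                                  # 0 = ace, ..., 12 = king
suits = ["spades", "clubs", "hearts", "diamonds"]
deck = [(r, s) for r in ranks for s in suits]

def pr(prop, given=lambda c: True):
    """Pr(prop | given) under the uniform distribution on the deck."""
    pool = [c for c in deck if given(c)]
    return Fraction(sum(1 for c in pool if prop(c)), len(pool))

E  = lambda c: c[1] in ("spades", "clubs")               # the card is black
H1 = lambda c: c == (0, "spades")                        # it is the ace of spades
H2 = lambda c: c[1] in ("hearts", "diamonds") or H1(c)   # it is red, or the ace of spades

assert pr(H1, E) > pr(H1)    # E confirms H1: 1/26 > 1/52
assert pr(H2, H1) > pr(H2)   # H1 confirms (indeed entails) H2: 1 > 27/52
assert pr(H2, E) < pr(H2)    # yet E disconfirms H2: 1/26 < 27/52
```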
An important question in the current debate on the epistemic significance of peer disagreement is whether evidence of evidence is evidence. Fitelson argues that, at least on some renderings of the thesis that evidence of evidence is evidence, there are cases where evidence of evidence is not evidence. I introduce a condition and show that under this condition evidence of evidence is evidence.
Probabilistic support is not transitive. There are cases in which x probabilistically supports y, i.e., Pr(y | x) > Pr(y), y, in turn, probabilistically supports z, and yet it is not the case that x probabilistically supports z. Tomoji Shogenji, though, establishes a condition for transitivity in probabilistic support, that is, a condition such that, for any x, y, and z, if Pr(y | x) > Pr(y), Pr(z | y) > Pr(z), and the condition in question is satisfied, then Pr(z | x) > Pr(z). I argue for a second and weaker condition for transitivity in probabilistic support. This condition, or the principle involving it, makes it easier (than does the condition Shogenji provides) to establish claims of probabilistic support, and has the potential to play an important role in at least some areas of philosophy.
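For reference, here is the standard one-step argument (our reconstruction, not quoted from either paper) for why a screening-off condition of the kind Shogenji employs suffices for transitivity, assuming 0 < Pr(y) < 1:

```latex
\begin{aligned}
\Pr(z \mid x) &= \Pr(z \mid y \wedge x)\,\Pr(y \mid x) + \Pr(z \mid \neg y \wedge x)\,\Pr(\neg y \mid x) \\
              &= \Pr(z \mid y)\,\Pr(y \mid x) + \Pr(z \mid \neg y)\,\Pr(\neg y \mid x) && \text{(screening-off)} \\
              &> \Pr(z \mid y)\,\Pr(y) + \Pr(z \mid \neg y)\,\Pr(\neg y) && \text{(since } \Pr(y \mid x) > \Pr(y)\text{)} \\
              &= \Pr(z).
\end{aligned}
```

The strict inequality in the third line uses the fact that Pr(z | y) > Pr(z) forces Pr(z | y) > Pr(z | ¬y), so shifting weight from ¬y to y raises the weighted average.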
We show that as a chain of confirmation becomes longer, confirmation dwindles under screening-off. For example, if E confirms H1, H1 confirms H2, and H1 screens off E from H2, then the degree to which E confirms H2 is less than the degree to which E confirms H1. Although there are many measures of confirmation, our result holds on any measure that satisfies the Weak Law of Likelihood. We apply our result to testimony cases, relate it to the Data-Processing Inequality in information theory, and extend it in two respects so that it covers a broader range of cases.
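A toy model (our numbers, not the paper’s) showing the dwindling effect on the likelihood-ratio measure, which satisfies the Weak Law of Likelihood:

```python
from itertools import product

# Toy chain in which H1 screens off E from H2: E and H2 are conditionally
# independent given H1. All numbers are hypothetical.
p_h1 = 0.5
p_e_given  = {True: 0.8, False: 0.2}    # Pr(E | H1), Pr(E | ~H1)
p_h2_given = {True: 0.9, False: 0.3}    # Pr(H2 | H1), Pr(H2 | ~H1)

joint = {
    (h1, e, h2):
        (p_h1 if h1 else 1 - p_h1)
        * (p_e_given[h1] if e else 1 - p_e_given[h1])
        * (p_h2_given[h1] if h2 else 1 - p_h2_given[h1])
    for h1, e, h2 in product([True, False], repeat=3)
}

def pr(pred):
    return sum(p for w, p in joint.items() if pred(*w))

def likelihood_ratio(hyp):
    """LR(H, E) = Pr(E | H) / Pr(E | ~H)."""
    p_h = pr(lambda h1, e, h2: hyp(h1, h2))
    p_e_and_h = pr(lambda h1, e, h2: e and hyp(h1, h2))
    p_e_and_not_h = pr(lambda h1, e, h2: e and not hyp(h1, h2))
    return (p_e_and_h / p_h) / (p_e_and_not_h / (1 - p_h))

lr_h1 = likelihood_ratio(lambda h1, h2: h1)   # 4.0
lr_h2 = likelihood_ratio(lambda h1, h2: h2)   # ~2.36
assert 1 < lr_h2 < lr_h1   # E confirms both, but its confirmation of H2 is weaker
```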
Igor Douven establishes several new intransitivity results concerning evidential support. I add to Douven’s very instructive discussion by establishing two further intransitivity results and a transitivity result.
We argue elsewhere that explanatoriness is evidentially irrelevant. Let H be some hypothesis, O some observation, and E the proposition that H would explain O if H and O were true. Then O screens off E from H: Pr(H | O & E) = Pr(H | O). This thesis, hereafter “SOT”, is defended by appeal to a representative case concerning smoking and lung cancer. McCain and Poston grant that SOT holds in cases, like our case concerning smoking and lung cancer, that involve frequency data. However, McCain and Poston contend that there is a wider sense of evidential relevance—wider than the sense at play in SOT—on which explanatoriness is evidentially relevant even in cases involving frequency data. This is their main point, but they also contend that SOT does not hold in certain cases not involving frequency data. We reply to each of these points and conclude with some general remarks on screening-off as a test of evidential relevance.
Is there some general reason to expect organisms that have beliefs to have false beliefs? And after you observe that an organism occasionally occupies a given neural state that you think encodes a perceptual belief, how do you evaluate hypotheses about the semantic content that that state has, where some of those hypotheses attribute beliefs that are sometimes false while others attribute beliefs that are always true? To address the first of these questions, we discuss evolution by natural selection and show how organisms that are risk-prone in the beliefs they form can be fitter than organisms that are risk-free. To address the second question, we discuss a problem that is widely recognized in statistics – the problem of over-fitting – and one influential device for addressing that problem, the Akaike Information Criterion (AIC). We then use AIC to solve epistemological versions of the disjunction and distality problems, which are two key problems concerning what it is for a belief state to have one semantic content rather than another.
There is a plethora of confirmation measures in the literature. Zalabardo considers four such measures: PD, PR, LD, and LR. He argues for LR and against each of PD, PR, and LD. First, he argues that PR is the better of the two probability measures. Next, he argues that LR is the better of the two likelihood measures. Finally, he argues that LR is superior to PR. I set aside LD and focus on the trio of PD, PR, and LR. The question I address is whether Zalabardo succeeds in showing that LR is superior to each of PD and PR. I argue that the answer is negative. I also argue, though, that measures such as PD and PR, on one hand, and measures such as LR, on the other hand, are naturally understood as explications of distinct senses of confirmation.
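For orientation, the four measures under what we take to be their standard definitions (the numbers are hypothetical):

```python
# PD and PR compare the posterior with the prior; LD and LR compare the two
# likelihoods. Toy values for a single hypothesis H and evidence E.
p_h = 0.3
p_e_h, p_e_not_h = 0.9, 0.2                    # Pr(E | H), Pr(E | ~H)

p_e = p_e_h * p_h + p_e_not_h * (1 - p_h)      # 0.41
p_h_e = p_e_h * p_h / p_e                      # ~0.659

PD = p_h_e - p_h          # probability difference:  ~0.359
PR = p_h_e / p_h          # probability ratio:       ~2.195
LD = p_e_h - p_e_not_h    # likelihood difference:    0.7
LR = p_e_h / p_e_not_h    # likelihood ratio:         4.5
```

Note that PD and PR are sensitive to the prior p(H) while LD and LR are not, which is one way of motivating the closing suggestion that the two families explicate distinct senses of confirmation.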
We argued that explanatoriness is evidentially irrelevant in the following sense: Let H be a hypothesis, O an observation, and E the proposition that H would explain O if H and O were true. Then our claim is that Pr(H | O & E) = Pr(H | O). We defended this screening-off thesis (“SOT”) by discussing an example concerning smoking and cancer. Climenhaga argues that SOT is mistaken because it delivers the wrong verdict about a slightly different smoking-and-cancer case. He also considers a variant of SOT, called “SOT*”, and contends that it too gives the wrong result. We here reply to Climenhaga’s arguments and suggest that SOT provides a criticism of the widely held theory of inference called “inference to the best explanation”.
Is evidential support transitive? The answer is negative when evidential support is understood as confirmation so that X evidentially supports Y if and only if p(Y | X) > p(Y). I call evidential support so understood “support” (for short) and set out three alternative ways of understanding evidential support: support-t (support plus a sufficiently high probability), support-t* (support plus a substantial degree of support), and support-tt* (support plus both a sufficiently high probability and a substantial degree of support). I also set out two screening-off conditions (under which support is transitive): SOC1 and SOC2. It has already been shown that support-t is non-transitive in the general case (where it is not required that SOC1 holds and it is not required that SOC2 holds), in the special case where SOC1 holds, and in the special case where SOC2 holds. I introduce two rather weak adequacy conditions on support measures and argue that on any support measure meeting those conditions it follows that neither support-t* nor support-tt* is transitive in the general case, in the special case where SOC1 holds, or in the special case where SOC2 holds. I then relate some of the results to Douven’s evidential support theory of conditionals along with a few rival theories.
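A small probability model (ours, not from the paper) exhibiting the known failure of transitivity for support-t in the general case, with threshold t = 0.85:

```python
from fractions import Fraction

# Four possible worlds; each fixes the truth values of (x, y, z).
# The probabilities are hypothetical.
worlds = {
    (True,  True,  False): Fraction(9, 100),
    (True,  False, False): Fraction(1, 100),
    (False, True,  True):  Fraction(76, 100),
    (False, False, False): Fraction(14, 100),
}
t = Fraction(85, 100)

def pr(i, given=None):
    """Pr(variable i is true | variable `given` is true); 0 = x, 1 = y, 2 = z."""
    pool = {w: p for w, p in worlds.items() if given is None or w[given]}
    return sum(p for w, p in pool.items() if w[i]) / sum(pool.values())

# x supports y, and with a sufficiently high probability; likewise y and z:
assert pr(1, given=0) > pr(1) and pr(1, given=0) >= t   # 0.9 > 0.85
assert pr(2, given=1) > pr(2) and pr(2, given=1) >= t   # ~0.894 > 0.76
# Yet x does not support z at all:
assert pr(2, given=0) < pr(2)                           # 0 < 0.76
```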
The need for ethical leadership in navigating today’s complex, global and competitive organisations has been established. While research has confirmed the importance of ethical leaders in promoting positive organisational and employee outcomes, scant research has examined the antecedents of ethical leadership. Furthermore, there has been a call for further examination of leadership models, particularly indigenous leadership models. Responding to these issues, this study suggests Māori leaders’ values add insights into enhancing ethical leadership. Three studies confirm the role of Māori values and ethical leadership. Study one, based on kaupapa Māori research methods, is an exploratory 22-interview study of Māori leaders and identifies five values as common to successful indigenous leaders. In study two, 249 employees rate their leaders on these five dimensions in relation to their ethical leadership and exchange relationships. Structural equation modelling shows strong support for the distinct nature of the five values and their positive influence on ethical leadership perceptions and quality exchange relationships. Study three, on 122 employees, reinforces the findings of study two, and demonstrates that LMX (leader-member exchange) predicts job outcomes both indirectly and directly, with humility and collectivism also directly predicting outcomes. Our findings suggest that indigenous leaders’ values enhance perceptions and outcomes of ethical leadership for employees.
Data fraud and selective reporting both present serious threats to the credibility of science. However, there remains considerable disagreement among scientists about how best to sanction data fraud, and about the ethicality of selective reporting. The public is arguably the largest stakeholder in the reproducibility of science; research is primarily paid for with public funds, and flawed science threatens the public’s welfare. Members of the public are able to make meaningful judgments about the morality of different behaviors using moral intuitions. Legal scholars emphasize that to maintain legitimacy, social control policies must be developed with some consideration given to the public’s moral intuitions. Although there is a large literature on popular attitudes toward science, there is no existing evidence about public opinion on data fraud or selective reporting. We conducted two studies—a survey experiment with a nationwide convenience sample, and a follow-up survey with a representative sample of US adults—to explore community members’ judgments about the morality of data fraud and selective reporting in science. The findings show that community members make a moral distinction between data fraud and selective reporting, but overwhelmingly judge both behaviors to be immoral and deserving of punishment. Community members believe that scientists who commit data fraud or selective reporting should be fired and banned from receiving funding. For data fraud, most Americans support criminal penalties. Results from an ordered logistic regression analysis reveal few demographic and no significant partisan differences in punitiveness toward data fraud.
It is widely thought in philosophy and elsewhere that parsimony is a theoretical virtue in that if T1 is more parsimonious than T2, then T1 is preferable to T2, other things being equal. This thesis admits of many distinct precisifications. I focus on a relatively weak precisification on which preferability is a matter of probability, and argue that it is false. This is problematic for various alternative precisifications, and even for Inference to the Best Explanation as standardly understood.
Jaegwon Kim’s influential exclusion argument attempts to demonstrate the inconsistency of nonreductive materialism in the philosophy of mind. Kim’s argument begins by showing that the three main theses of nonreductive materialism, plus two additional considerations, lead to a specific and familiar picture of mental causation. The exclusion argument can succeed only if, as Kim claims, this picture is not one of genuine causal overdetermination. Accordingly, one can resist Kim’s conclusion by denying this claim, maintaining instead that the effects of the mental are always causally overdetermined. I call this strategy the ‘overdetermination challenge’. One of the main aims of this paper is to show that the overdetermination challenge is the most appropriate response to Kim’s exclusion argument, at least in its latest form. I argue that Kim fails to adequately respond to the overdetermination challenge, thus failing to prevent his opponents from reasonably maintaining that the effects of the mental are always causally overdetermined. Interestingly, this discussion reveals a curious dialectical feature of Kim’s latest response to the overdetermination challenge: if it succeeds, then a new, simpler and more compact version of the exclusion argument is available. While I argue against the consequent of this conditional, thereby also rejecting the antecedent, this dialectical feature should be of interest to philosophers on either side of this debate.
Andrew Cling presents a new version of the epistemic regress problem, and argues that intuitionist foundationalism, social contextualism, holistic coherentism, and infinitism fail to solve it. Cling’s discussion is quite instructive, and deserving of careful consideration. But, I argue, Cling’s discussion is not in all respects decisive. I argue that Cling’s dilemma argument against holistic coherentism fails.
Coherentists on epistemic justification claim that all justification is inferential, and that beliefs, when justified, get their justification together (not in isolation) as members of a coherent belief system. Some recent work in formal epistemology shows that “individual credibility” is needed for “witness agreement” to increase the probability of truth and generate a high probability of truth. It can seem that, from this result in formal epistemology, it follows that coherentist justification is not truth-conducive, that it is not the case that, under the requisite conditions, coherentist justification increases the probability of truth and generates a high probability of truth. I argue that this does not follow.
Some recent work in formal epistemology shows that “witness agreement” by itself implies neither an increase in the probability of truth nor a high probability of truth—the witnesses need to have some “individual credibility.” It can seem that, from this formal epistemological result, it follows that coherentist justification (i.e., doxastic coherence) is not truth-conducive. I argue that this does not follow. Central to my argument is the thesis that, though coherentists deny that there can be noninferential justification, coherentists do not deny that there can be individual credibility.
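A minimal formal illustration of the appealed-to result (our model): two witnesses whose reports carry no individual credibility, so that their agreement leaves the probability of H untouched.

```python
from itertools import product

# Hypothetical numbers. No individual credibility: each report R_i is
# probabilistically independent of whether H is true.
p_h = 0.3
p_report = 0.6   # Pr(R_i | H) = Pr(R_i | ~H)

joint = {}
for h, r1, r2 in product([True, False], repeat=3):
    p = p_h if h else 1 - p_h
    for r in (r1, r2):   # reports independent of H and of each other
        p *= p_report if r else 1 - p_report
    joint[(h, r1, r2)] = p

p_agree = sum(p for (h, r1, r2), p in joint.items() if r1 and r2)
p_h_agree = sum(p for (h, r1, r2), p in joint.items() if h and r1 and r2) / p_agree

assert abs(p_h_agree - p_h) < 1e-12   # agreement confirms nothing
```

Raise Pr(R_i | H) above Pr(R_i | ~H), even slightly, and the same computation makes agreement increase the probability of H.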
On one reading of Kant’s account of our original representations of space and time, they are, in part, products of the understanding or imagination. On another, they are brute, sensible givens, entirely independent of the understanding. In this article, while I agree with the latter interpretation, I argue for a version of it that does more justice to the insights of the former than others currently available. I claim that Kant’s Transcendental Deduction turns on the representations of space and time as determinate, enduring particulars, whose unity is both given and a product of synthesis.
This paper proposes a new interpretation of mutual information (MI). We examine three extant interpretations of MI by reduction in doubt, by reduction in uncertainty, and by divergence. We argue that the first two are inconsistent with the epistemic value of information (EVI) assumed in many applications of MI: the greater the amount of information we acquire, the better our epistemic position, other things being equal. The third interpretation is consistent with EVI, but it is faced with the problem of measure sensitivity and fails to justify the use of MI in giving definitive answers to questions of information. We propose a fourth interpretation of MI by reduction in expected inaccuracy, where inaccuracy is measured by a strictly proper monotonic scoring rule. It is shown that the answers to questions of information given by MI are definitive whenever this interpretation is appropriate, and that it is appropriate in a wide range of applications with epistemic implications.
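As a fixed point for the discussion, the standard computation of MI itself (toy joint distribution; the numbers are ours):

```python
import math

# MI of a toy 2x2 joint distribution over X and Y.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
p_x = {x: sum(p for (a, b), p in joint.items() if a == x) for x in (0, 1)}
p_y = {y: sum(p for (a, b), p in joint.items() if b == y) for y in (0, 1)}

# I(X; Y) = sum over (x, y) of Pr(x, y) * log2( Pr(x, y) / (Pr(x) Pr(y)) )
mi = sum(p * math.log2(p / (p_x[x] * p_y[y])) for (x, y), p in joint.items())
print(mi)   # ~0.278 bits
```

The interpretive dispute is not over how to compute this quantity but over what epistemic significance to attach to it.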
I argue that coherence is truth-conducive in that coherence implies an increase in the probability of truth. Central to my argument is a certain principle for transitivity in probabilistic support. I then address a question concerning the truth-conduciveness of coherence as it relates to (something else I argue for) the truth-conduciveness of consistency, and consider how the truth-conduciveness of coherence bears on coherentist theories of justification.
The paper explores the main competing interpretations of Aristotle's view of the relation between happiness and external goods in the Nicomachean Ethics. On the basis of a careful analysis of what Aristotle says in the Nicomachean Ethics (and other works such as the Eudemian Ethics, Politics, Rhetoric, etc.) it is argued that it is likely that Aristotle takes at least some external goods to be actual constituents of happiness provided that (1) they are accompanied by virtuous activity and (2) the agent enjoying and using those external goods does so in ways compatible with the continuous aim of acting virtuously.
Bayesian confirmation theory is rife with confirmation measures. Many of them differ from each other in important respects. It turns out, though, that all the standard confirmation measures in the literature run counter to the so-called “Reverse Matthew Effect” (“RME” for short). Suppose, to illustrate, that H1 and H2 are equally successful in predicting E in that p(E | H1)/p(E) = p(E | H2)/p(E) > 1. Suppose, further, that initially H1 is less probable than H2 in that p(H1) < p(H2). Then by RME it follows that the degree to which E confirms H1 is greater than the degree to which it confirms H2. But by all the standard confirmation measures in the literature, in contrast, it follows that the degree to which E confirms H1 is less than or equal to the degree to which it confirms H2. It might seem, then, that RME should be rejected as implausible. Festa (2012), however, argues that there are scientific contexts in which RME holds. If Festa’s argument is sound, it follows that there are scientific contexts in which none of the standard confirmation measures in the literature is adequate. Festa’s argument is thus interesting, important, and deserving of careful examination. I consider five distinct respects in which E can be related to H, use them to construct five distinct ways of understanding confirmation measures, which I call “Increase in Probability”, “Partial Dependence”, “Partial Entailment”, “Partial Discrimination”, and “Popper Corroboration”, and argue that each such way runs counter to RME. The result is that it is not at all clear that there is a place in Bayesian confirmation theory for RME.
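To make RME concrete (our numbers, not Festa’s): give the hypotheses equal predictive success but unequal priors. RME demands more confirmation for the initially less probable H1; the ratio measure ties the two, and the difference measure gives H1 less.

```python
from fractions import Fraction as F

# Hypothetical, jointly coherent numbers (H1 and H2 disjoint).
p_h1, p_h2, p_e = F(1, 10), F(3, 10), F(1, 2)
p_e_given_h = F(4, 5)                  # Pr(E | H1) = Pr(E | H2): equal success

p_h1_e = p_e_given_h * p_h1 / p_e      # 4/25
p_h2_e = p_e_given_h * p_h2 / p_e      # 12/25

assert p_h1_e / p_h1 == p_h2_e / p_h2   # ratio measure: ties at 8/5
assert p_h1_e - p_h1 < p_h2_e - p_h2    # difference measure: 3/50 < 9/50
```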
Only 27 percent of Americans in a 1995 Harris poll said they had read or heard “quite a lot” about genetic tests. Nonetheless, 68 percent said they would be either “very likely” or “somewhat likely” to undergo genetic testing even for diseases “for which there is presently no cure or treatment.” Perhaps most astonishing, 56 percent found it either “very” or “somewhat acceptable” to develop a government computerized DNA bank with samples taken from all newborns, and their names attached to the samples. This does not necessarily mean the public is unconcerned about genetic privacy. More likely it means that the public is still uninformed about the risks associated with genetic testing, and has not thought at all about the risks involved in storing identifiable DNA samples. A central question presented by genetic screening and testing is whether the genetic information so obtained is different in kind from other medical information, and, if so, whether this means that it should receive special legal protection.
This book looks at two ‘revolutions’ in philosophy – phenomenology and conceptual analysis – which have been influential in sociology and psychology. It discusses humanistic psychiatry and sociological approaches to the specific area of mental illness, which counter the ultimately reductionist implications of Freudian psycho-analytic theory. The book, originally published in 1973, concludes by stating the broad underlying themes of the two forms of humanistic philosophy and indicating how they relate to the problems of theory and method in sociology.
The disjunction problem and the distality problem each presents a challenge that any theory of mental content must address. Here we consider their bearing on purely probabilistic causal (ppc) theories. In addition to considering these problems separately, we consider a third challenge—that a theory must solve both. We call this “the hard problem.” We consider 8 basic ppc theories along with 240 hybrids of them, and show that some can handle the disjunction problem and some can handle the distality problem, but none can handle the hard problem. This is our main result. We then discuss three possible responses to that result, and argue that though the first two fail, the third has some promise.
Recently there have been several attempts in formal epistemology to develop an adequate probabilistic measure of coherence. There is much to recommend probabilistic measures of coherence. They are quantitative and render formally precise a notion—coherence—notorious for its elusiveness. Further, some of them do very well, intuitively, on a variety of test cases. Siebel, however, argues that there can be no adequate probabilistic measure of coherence. Take some set of propositions A, some probabilistic measure of coherence, and a probability distribution such that all the probabilities on which A’s degree of coherence depends (according to the measure in question) are defined. Then, the argument goes, the degree to which A is coherent depends solely on the details of the distribution in question and not at all on the explanatory relations, if any, standing between the propositions in A. This is problematic, the argument continues, because, first, explanation matters for coherence, and, second, explanation cannot be adequately captured solely in terms of probability. We argue that Siebel’s argument falls short.
Lucy Allais argues that we can better understand Kant's transcendental idealism by taking seriously the analogy of appearances to secondary qualities that Kant offers in the Prolegomena. A proper appreciation of this analogy, Allais claims, yields a reading of transcendental idealism according to which all properties that can appear to us in experience are mind-dependent relational properties that inhere in mind-independent objects. In section 1 of my paper, I articulate Allais's position and its benefits, not least of which is its elegant explanation of how the features of objects that appear to us are transcendentally ideal while still being ‘empirically’ real. In section 2, I contend that there are elements of Allais's account that are problematic, yet also inessential, to what I view to be the core contribution of her analysis. These elements are the views that the properties that appear to human beings are not really distinct from properties that things have ‘in themselves’ and that Kant embraced a relational account of perception. In section 3, I return to the core of Allais's reading and argue that, despite its multiple virtues, it cannot make sense of key features of Kant's idealism.
This is an excellent collection of essays on introspection and consciousness. There are fifteen essays in total (all new except for Sydney Shoemaker’s essay). There is also an introduction where the editors explain the impetus for the collection and provide a helpful overview. The essays contain a wealth of new and challenging material sure to excite specialists and shape future research. Below we extract a skeptical argument from Fred Dretske’s essay and relate the remaining essays to that argument. Due to space limitations we focus in detail on just a few of the essays. We regret that we cannot give them all the attention they merit.
Prevalent narratives of agricultural innovation predict that we are once again on the cusp of a global agricultural revolution. According to these narratives, this so-called fourth agricultural revolution, or agriculture 4.0, is set to transform current agricultural practices around the world at a quick pace, making use of new sophisticated precision technologies. Often used as a rhetorical device, this narrative has a material effect on the trajectories of an inherently political and normative agricultural transition, with funding, other policy instruments, and research attention focusing on the design and development of new precision technologies. A growing critical social science literature interrogates the promises of revolution. Engagement with new technology is likely to be uneven, with benefits potentially favouring the already powerful and the costs falling hardest on the least powerful. If grand narratives of change remain unchallenged, we risk pursuing innovation trajectories that are exclusionary, failing to achieve responsible innovation. This study utilises a range of methodologies to explore everyday encounters between farmers and technology, with the aim of inspiring further work to compile the microhistories that can help to challenge robust grand narratives of change. We explore how farmers are engaging with technology in practice and show how these interactions problematise a simple, linear notion of innovation adoption and use. In doing so, we reflect upon the contribution that the study of everyday encounters can make in setting more inclusionary, responsible pathways towards sustainable agriculture.
Farm to School (FTS) programs are increasingly popular as methods to teach students about food, nutrition, and agriculture by connecting students with the sources of the food that they eat. They may also provide opportunity for farmers seeking to diversify market channels. Food service buyers in FTS programs often choose to procure food for school meals directly from farmers. The distribution practices required for such direct procurement often bring significant transaction costs for both school food service professionals and farmers. Analysis of data from a survey of Vermont farmers who sell directly to school food services explores farmers’ motivations and distribution practices in these partnerships. A two-step cluster analysis procedure characterizes farmers’ motivations along a continuum between market-based and socially embedded values. Further bivariate analysis shows that farmers who are motivated most by market-based values are significantly associated with distribution practices that facilitate sales to school food services. Implications for technical assistance to facilitate these sales are discussed.
This survey traces the history of the astronomer's cross staff on the Continent from Levi ben Gerson to Gemma Frisius, in England from John Dee to John Greaves, and again on the Continent from Tycho Brahe to Adrian Metius. The emphasis throughout is on sources and influences, on distinguishing the various kinds of cross staff, and on clarifying terminology.
Citizenship rights have become vital to our sense of personal identity and social membership in modern society. Roche argues that today we have to shift from the conventional postwar politics of social rights to a new politics of social obligations and personal responsibility.
Dretske’s theory of self-knowledge is interesting but peculiar and can seem implausible. He denies that we can know by introspection that we have thoughts, feelings, and experiences. But he allows that we can know by introspection what we think, feel, and experience. We consider two puzzles. The first puzzle, PUZZLE 1, is interpretive. Is there a way of understanding Dretske’s theory on which the knowledge affirmed by its positive side is different than the knowledge denied by its negative side? The second puzzle, PUZZLE 2, is substantive. Each of the following theses has some prima facie plausibility: (1) there is introspective knowledge of thoughts, (2) knowledge requires evidence, and (3) there are no experiences of thoughts. It is unclear, though, that these claims form a consistent set. These puzzles are not unrelated. Dretske’s theory of self-knowledge is a potential solution to PUZZLE 2 in that it is meant to show how (1), (2), and (3) can all be true. We provide a solution to PUZZLE 1 by appeal to Dretske’s early work in the philosophy of language on contrastive focus. We then distinguish between “Closure” and “Transmissibility”, and raise and answer a worry to the effect that Dretske’s theory of self-knowledge runs counter to Transmissibility. These results help to secure Dretske’s theory as a viable solution to PUZZLE 2.
All extant purely probabilistic measures of explanatory power satisfy the following technical condition: if Pr(E | H1) > Pr(E | H2) and Pr(E | ~H1) < Pr(E | ~H2), then H1’s explanatory power with respect to E is greater than H2’s explanatory power with respect to E. We argue that any measure satisfying this condition faces three serious problems – the Problem of Temporal Shallowness, the Problem of Negative Causal Interactions, and the Problem of Non-Explanations. We further argue that many such measures face a fourth problem – the Problem of Explanatory Irrelevance.
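For illustration, here is the condition checked against one extant measure, the Schupbach–Sprenger measure, under the definition as we recall it (the numbers are hypothetical):

```python
def posterior(p_h, p_e_h, p_e_not_h, e_true=True):
    """Pr(H | E) if e_true else Pr(H | ~E), by Bayes' theorem."""
    like_h = p_e_h if e_true else 1 - p_e_h
    like_not_h = p_e_not_h if e_true else 1 - p_e_not_h
    return like_h * p_h / (like_h * p_h + like_not_h * (1 - p_h))

def power(p_h, p_e_h, p_e_not_h):
    """(Pr(H | E) - Pr(H | ~E)) / (Pr(H | E) + Pr(H | ~E))."""
    hi = posterior(p_h, p_e_h, p_e_not_h)
    lo = posterior(p_h, p_e_h, p_e_not_h, e_true=False)
    return (hi - lo) / (hi + lo)

# Pr(E | H1) = 0.9 > 0.6 = Pr(E | H2) and Pr(E | ~H1) = 0.1 < 0.4 = Pr(E | ~H2),
# so the condition requires H1 to receive the greater explanatory power:
assert power(0.5, 0.9, 0.1) > power(0.5, 0.6, 0.4)   # 0.8 > 0.2
```

On the argument of the paper, any measure passing this kind of check thereby inherits the problems just listed.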
There are many scientific and everyday cases where each of Pr(H1 | E) and Pr(H2 | H1) is high and it seems that Pr(H2 | E) is high. But high probability is not transitive and so it might be in such cases that each of Pr(H1 | E) and Pr(H2 | H1) is high and in fact Pr(H2 | E) is not high. There is no issue in the special case where the following condition, which I call “C1”, holds: H1 entails H2. This condition is sufficient for transitivity in high probability. But many of the scientific and everyday cases referred to above are cases where it is not the case that H1 entails H2. I consider whether there are additional conditions sufficient for transitivity in high probability. I consider three candidate conditions. I call them “C2”, “C3”, and “C2&3”. I argue that C2&3, but neither C2 nor C3, is sufficient for transitivity in high probability. I then set out some further results and relate the discussion to the Bayesian requirement of coherence.
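The reason C1 suffices is a one-line monotonicity fact (our gloss, not quoted from the paper):

```latex
H_1 \models H_2 \;\Longrightarrow\; \Pr(H_2 \mid E) \ge \Pr(H_1 \mid E),
```

so whenever Pr(H1 | E) ≥ t we get Pr(H2 | E) ≥ t, and Pr(H2 | H1) = 1 ≥ t trivially.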
Background: Payment of research participants helps to increase recruitment for research studies, but can pose ethical dilemmas. Research ethics committees (RECs) have a centrally important role in guiding this practice, but standardisation of the ethical approval process in Ireland is lacking. Aim: Our aim was to examine REC policies, experiences and concerns with respect to the payment of participants in research projects in Ireland. Method: Postal survey of all RECs in Ireland. Results: Response rate was 62.5% (n=50). 80% of RECs reported not to have any established policy on the payment of research subjects, while 20% had refused ethics approval to studies because the investigators proposed to pay research participants. The most commonly cited concerns were the potential for inducement and undermining of voluntary consent. Conclusions: There is considerable variability among RECs on the payment of research participants and a lack of clear consensus guidelines on the subject. The development of standardised guidelines on the payment of research subjects may enhance recruitment of research participants.
Physicalism is the view, roughly, that everything is physical. This thesis is often characterized in terms of a particular supervenience thesis. Central to this thesis is the idea of physical duplication. I argue that the standard way of understanding physical duplication leads—along with other claims—to a sub-optimal consequence for the physicalist. I block this consequence by shifting to an alternative sense of physical duplication. I then argue that physicalism is best characterized by a supervenience thesis that employs both the new sense of physical duplication and a new class of possible worlds.