In _Reliable Reasoning_, Gilbert Harman and Sanjeev Kulkarni -- a philosopher and an engineer -- argue that philosophy and cognitive science can benefit from statistical learning theory (SLT), the theory that lies behind recent advances in machine learning. The philosophical problem of induction, for example, is in part about the reliability of inductive reasoning, where the reliability of a method is measured by its statistically expected percentage of errors -- a central topic in SLT. After discussing philosophical attempts to evade the problem of induction, Harman and Kulkarni provide an admirably clear account of the basic framework of SLT and its implications for inductive reasoning. They explain the Vapnik-Chervonenkis dimension of a set of hypotheses and distinguish two kinds of inductive reasoning. The authors discuss various topics in machine learning, including nearest-neighbor methods, neural networks, and support vector machines. Finally, they describe transductive reasoning and consider new models of human reasoning suggested by developments in SLT.
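The notion of shattering behind the Vapnik-Chervonenkis dimension can be made concrete with a toy computation. The sketch below (an illustration of the general concept, not an example from the book) checks that one-dimensional threshold classifiers h_t(x) = [x >= t] can realize every labeling of any single point but not of any two-point set, which is what it means for their VC dimension to be 1:

```python
from itertools import product

def threshold_labels(points, t):
    """Labels assigned by the threshold classifier h_t(x) = [x >= t]."""
    return tuple(1 if x >= t else 0 for x in points)

def shatters(points):
    """Can threshold classifiers realize every 0/1 labeling of these points?"""
    # Distinct labelings only change as t crosses a point, so it suffices
    # to try thresholds below, at, and above the sorted points.
    xs = sorted(points)
    candidates = [xs[0] - 1] + xs + [xs[-1] + 1]
    achievable = {threshold_labels(points, t) for t in candidates}
    return achievable == set(product([0, 1], repeat=len(points)))

print(shatters([3.0]))       # True: any one point is shattered
print(shatters([1.0, 2.0]))  # False: the labeling (1, 0) is unreachable
```

Because a threshold classifier's labels are monotone in x, it can never label a smaller point 1 and a larger point 0, so no two-point set is shattered and the VC dimension is 1.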
It is a live possibility that certain of our experiences reliably misrepresent the world around us. I argue that tracking theories of mental representation have difficulty allowing for this possibility, and that this is a major consideration against them.
This book considers how observations about the past influence future behaviour, as expressed in language. Focusing on information gathered from speech and other evidence sources, the author offers a model of how judgements about reliability can be made, and how such judgements factor into how people treat information they acquire via those sources.
Is perception cognitively penetrable, and what are the epistemological consequences if it is? I address the latter of these two questions, partly by reference to recent work by Athanassios Raftopoulos and Susanna Siegel. Against the usual circularity readings of cognitive penetrability, I argue that cognitive penetration can be epistemically virtuous, when, and only when, it increases the reliability of perception.
According to process reliabilism, a belief produced by a reliable belief-forming process is justified. I introduce problems for this theory on any account of reliability. Does the performance of a process in some domain of worlds settle its reliability? The theories that answer “Yes” typically fail to state the temporal parameters of this performance. I argue that any theory paired with any plausible parameters has implausible implications. The theories that answer “No,” I argue, thereby lack essential support and exacerbate familiar problems. There are new reasons to avoid any reliability conditions on justification.
Advanced medical imaging, such as CT, fMRI and PET, has undergone enormous progress in recent years, both in accuracy and utilization. Such techniques often bring with them an illusion of immediacy, the idea that the body and its diseases can be directly inspected. In this paper we target this illusion and address the issue of the reliability of advanced imaging tests as knowledge procedures, taking positron emission tomography in oncology as paradigmatic case study. After individuating a suitable notion of reliability, we argue that PET is a highly theory-laden and non-immediate knowledge procedure, in spite of the photographic-like quality of the images it delivers; that the diagnostic conclusions based on the interpretation of PET images are population-dependent; and that PET images require interpretation, which is inherently observer-dependent and therefore variable. We conclude with a three-step methodological proposal for enhancing the reliability of advanced medical imaging.
Reliabilists hold that a belief is doxastically justified if and only if it is caused by a reliable process. But since such a process is one that tends to produce a high ratio of true to false beliefs, reliabilism is on the face of it applicable to binary beliefs, but not to degrees of confidence or credences. For while beliefs admit of truth or falsity, the same cannot be said of credences in general. A natural question now arises: Can reliability theories of justified belief be extended or modified to account for justified credence? In this paper, I address this question. I begin by showing that, as it stands, reliabilism cannot account for justified credence. I then consider three ways in which the reliabilist may try to do so by extending or modifying her theory, but I argue that such attempts face certain problems. After that, I turn to a version of reliabilism that incorporates evidentialist elements and argue that it allows us to avoid the problems that the other theories face. If I am right, this gives reliabilists a reason, aside from those given recently by Comesaña and Goldman, to move towards such a kind of hybrid theory.
A Benacerraf–Field challenge is an argument intended to show that common realist theories of a given domain are untenable: such theories make it impossible to explain how we’ve arrived at the truth in that domain, and insofar as a theory makes our reliability in a domain inexplicable, we must either reject that theory or give up the relevant beliefs. But there’s no consensus about what would count here as a satisfactory explanation of our reliability. It’s sometimes suggested that giving such an explanation would involve showing that our beliefs meet some modal condition, but realists have claimed that this sort of modal interpretation of the challenge deprives it of any force: since the facts in question are metaphysically necessary and so obtain in all possible worlds, it’s trivially easy, even given realism, to show that our beliefs have the relevant modal features. Here I show that this claim is mistaken—what motivates a modal interpretation of the challenge in the first place also motivates an understanding of the relevant features in terms of epistemic possibilities rather than metaphysical possibilities, and there are indeed epistemically possible worlds where the facts in question don’t obtain.
We often evaluate belief-forming processes, agents, or entire belief states for reliability. This is normally done with the assumption that beliefs are all-or-nothing. How does such evaluation go when we’re considering beliefs that come in degrees? I consider a natural answer to this question that focuses on the degree of truth-possession had by a set of beliefs. I argue that this natural proposal is inadequate, but for an interesting reason. When we are dealing with all-or-nothing belief, high reliability leads to high levels of truth-possession. However, when it comes to degrees of belief, reliability and truth-possession part ways. The natural answer thus fails to be a good way to evaluate degrees of belief for reliability. I propose and develop an alternative method based on the notion of calibration, suggested by Frank Ramsey, which does not have this problem and consider why we should care about such assessments of reliability even if they are not tied directly to truth-possession.
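The Ramsey-style idea of calibration can be given a minimal computational gloss: group an agent's credences into buckets and compare each bucket's average credence with the observed frequency of truth in that bucket. The bucketing scheme and the example data below are illustrative assumptions, not the paper's own proposal in detail:

```python
# A minimal sketch of calibration-based evaluation of credences:
# an agent is well calibrated when, among the propositions to which
# it assigns credence near c, roughly a proportion c are true.
from collections import defaultdict

def calibration_report(credence_truth_pairs, n_buckets=10):
    """Map each credence bucket to (mean credence, frequency of truth)."""
    buckets = defaultdict(list)
    for credence, is_true in credence_truth_pairs:
        # Bucket index 0..n_buckets-1; credence 1.0 lands in the top bucket.
        idx = min(int(credence * n_buckets), n_buckets - 1)
        buckets[idx].append((credence, is_true))
    report = {}
    for idx, pairs in sorted(buckets.items()):
        mean_credence = sum(c for c, _ in pairs) / len(pairs)
        freq_true = sum(1 for _, t in pairs if t) / len(pairs)
        report[idx] = (round(mean_credence, 2), round(freq_true, 2))
    return report

# Hypothetical agent: credence 0.8 in ten propositions, eight of which
# are true -- perfectly calibrated on this bucket.
pairs = [(0.8, True)] * 8 + [(0.8, False)] * 2
print(calibration_report(pairs))  # {8: (0.8, 0.8)}
```

Note that calibration so defined is insensitive to how much truth the agent possesses overall, which is one way of seeing the paper's point that reliability-as-calibration and truth-possession come apart for degrees of belief.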
According to a simple version of the reliability theory of epistemic justification, a belief is justified if and only if the process leading to that belief is reliable. The idea behind this theory is simple and attractive. There are a variety of mental or cognitive processes that result in beliefs. Some of these processes are reliable—they generally yield true beliefs—and the beliefs they produce are justified. Other processes are unreliable and the beliefs they produce are unjustified. So, for example, reliable processes such as perception and memory yield justified beliefs while an unreliable process such as wishful thinking—believing something as a result of wishing that it is true—yields unjustified beliefs.
Why believe in the findings of science? John Ziman argues that scientific knowledge is not uniformly reliable, but rather like a map representing a country we cannot visit. He shows how science has many elements, including alongside its experiments and formulae the language and logic, patterns and preconceptions, facts and fantasies used to illustrate and express its findings. These elements are variously combined by scientists in their explanations of the material world as it lies outside our everyday experience. John Ziman’s book offers at once a valuably clear account and a radically challenging investigation of the credibility of scientific knowledge, searching widely across a range of disciplines for evidence about the perceptions, paradigms and analogies on which all our understanding depends.
We think of logic as objective. We also think that we are reliable about logic. These views jointly generate a puzzle: How is it that we are reliable about logic? How is it that our logical beliefs match an objective domain of logical fact? This is an instance of a more general challenge to explain our reliability about a priori domains. In this paper, I argue that the nature of this challenge has not been properly understood. I explicate the challenge both in general and for the particular case of logic. I also argue that two seemingly attractive responses – appealing to a faculty of rational insight or to the nature of concept possession – are incapable of answering the challenge.
Standard characterizations of virtue epistemology divide the field into two camps: virtue reliabilism and virtue responsibilism. Virtue reliabilists think of intellectual virtues as reliable cognitive faculties or abilities, while virtue responsibilists conceive of them as good intellectual character traits. I argue that responsibilist character virtues sometimes satisfy the conditions of a reliabilist conception of intellectual virtue, and that consequently virtue reliabilists, and reliabilists in general, must pay closer attention to matters of intellectual character. This leads to several new questions and challenges for any reliabilist epistemology.
Non-skeptical robust realists about normativity, mathematics, or any other domain of non-causal truths are committed to a correlation between their beliefs and non-causal, mind-independent facts. Hartry Field and others have argued that if realists cannot explain this striking correlation, that is a strong reason to reject their theory. Some consider this argument, known as the Benacerraf–Field argument, as the strongest challenge to robust realism about mathematics, normativity, and even logic. In this article I offer two closely related accounts of the type of explanation needed in order to address Field's challenge. I then argue that both accounts imply that the striking correlation to which robust realists are committed is explainable, thereby discharging Field's challenge. Finally, I respond to some objections and end with a few unresolved worries.
This chapter argues that olfactory experiences represent either everyday objects or ad hoc olfactory objects as having primitive olfactory properties, which happen to be uninstantiated. On this picture, olfactory experiences reliably misrepresent: they falsely represent everyday objects or ad hoc objects as having properties they do not have, and they misrepresent in the same way on multiple occasions. One might worry that this view is incompatible with the plausible claim that olfactory experiences at least sometimes justify true beliefs about the world. This chapter argues that there is no such incompatibility. Since olfactory experiences reliably misrepresent, they can lead to true and justified beliefs about putatively smelly objects.
Of all the demands that mathematics imposes on its practitioners, one of the most fundamental is that proofs ought to be correct. It has been common since the turn of the twentieth century to take correctness to be underwritten by the existence of formal derivations in a suitable axiomatic foundation, but then it is hard to see how this normative standard can be met, given the differences between informal proofs and formal derivations, and given the inherent fragility and complexity of the latter. This essay describes some of the ways that mathematical practice makes it possible to reliably and robustly meet the formal standard, preserving the standard normative account while doing justice to epistemically important features of informal mathematical justification.
Reliabilism has come under recent attack for its alleged inability to account for the value we typically ascribe to knowledge. It is charged that a reliably-produced true belief has no more value than does the true belief alone. I reply to these charges on behalf of reliabilism; not because I think reliabilism is the correct theory of knowledge, but rather because being reliably-produced does add value of a sort to true beliefs. The added value stems from the fact that a reliably-held belief is non-accidental in a particular way. While it is widely acknowledged that accidentally true beliefs cannot count as knowledge, it is rarely questioned why this should be so. An answer to this question emerges from the discussion of the value of reliability; an answer that holds interesting implications for the value and nature of knowledge.
The use of multiple means of determination to “triangulate” on the existence and character of a common phenomenon, object, or result has had a long tradition in science but has seldom been a matter of primary focus. As with many traditions, it is traceable to Aristotle, who valued having multiple explanations of a phenomenon, and it may also be involved in his distinction between special objects of sense and common sensibles. It is implicit though not emphasized in the distinction between primary and secondary qualities from Galileo onward. It is arguably one of several conceptions involved in Whewell’s method of the “consilience of inductions” (Laudan 1971) and is to be found in several places in Peirce. (From M. Brewer and B. Collins, eds., Scientific Inquiry in the Social Sciences: A Festschrift for Donald T. Campbell, San Francisco: Jossey-Bass, 1981, pp. 123–162.)
Many philosophers believe that when a theory is committed to an apparently unexplainable massive correlation, that fact counts significantly against the theory. Philosophical theories that imply that we have knowledge of non-causal mind-independent facts are especially prone to this objection. Prominent examples of such theories are mathematical Platonism, robust normative realism and modal realism. It is sometimes thought that theists can easily respond to this sort of challenge and that theism therefore has an epistemic advantage over atheism. In this paper, I will argue that, contrary to widespread thought, some versions of theism only push the challenge one step further and thus are in no better position than atheism.
Chapter: Introduction. i. The Problem. Why suppose that sense perception is, by and large, an accurate source of information about the physical environment? ...
Various studies show moral intuitions to be susceptible to framing effects. Many have argued that this susceptibility is a sign of unreliability and that this poses a methodological challenge for moral philosophy. Recently, doubt has been cast on this idea. It has been argued that extant evidence of framing effects does not show that moral intuitions have an unreliability problem. I argue that, even if the extant evidence suggests that moral intuitions are fairly stable with respect to what intuitions we have, the effect of framing on the strength of those intuitions still needs to be taken into account. I argue that this by itself poses a methodological challenge for moral philosophy.
The coherentist theory of justification provides a response to the sceptical challenge: even though the independent processes by which we gather information about the world may be of dubious quality, the internal coherence of the information provides the justification for our empirical beliefs. This central canon of the coherence theory of justification is tested within the framework of Bayesian networks, which is a theory of probabilistic reasoning in artificial intelligence. We interpret the independence of the information gathering processes (IGPs) in terms of conditional independences, construct a minimal sufficient condition for a coherence ranking of information sets and assess whether the confidence boost that results from receiving information through independent IGPs is indeed a positive function of the coherence of the information set. There are multiple interpretations of what constitute IGPs of dubious quality. Do we know our IGPs to be no better than randomization processes? Or, do we know them to be better than randomization processes but not quite fully reliable, and if so, what is the nature of this lack of full reliability? Or, do we not know whether they are fully reliable or not? Within the latter interpretation, does learning something about the quality of some IGPs teach us anything about the quality of the other IGPs? The Bayesian-network models demonstrate that the success of the coherentist canon is contingent on what interpretation one endorses of the claim that our IGPs are of dubious quality.
Rumors, for better or worse, are an important element of public discourse. The present paper focuses on rumors as an epistemic phenomenon rather than as a social or political problem. In particular, it investigates the relation between the mode of transmission and the reliability, if any, of rumors as a source of knowledge. It does so by comparing rumor with two forms of epistemic dependence that have recently received attention in the philosophical literature: our dependence on the testimony of others, and our dependence on what has been called the ‘coverage-reliability’ of our social environment (Goldberg 2010). According to the latter, an environment is ‘coverage-reliable’ if, across a wide range of beliefs and given certain conditions, it supports the following conditional: If ~p were true I would have heard about it by now. However, in information-deprived social environments with little coverage-reliability, rumors may transmit information that could not otherwise be had. This suggests that a trade-off exists between levels of trust in the coverage-reliability of official sources and (warranted) trust in rumor as a source of information.
Many now countenance the idea that certain groups can have beliefs, or at least belief-like states. If groups can have beliefs like these, the question of whether such beliefs are justified immediately arises. Recently, Goldman (in Essays in Collective Epistemology, Oxford University Press, Oxford, 2014) has considered what a reliability-based account of justified group belief might look like. In this paper I consider his account and find it wanting, and so propose a modified reliability-based account of justified group belief. Lackey (2016, 341–396) has also criticized Goldman’s proposal, but for very different reasons than I do. Some of her objections, however, can be lodged against the modified account that I propose here. I also respond to these objections. Finally, I note how some formal and experimental work is relevant to those who are attracted to the kind of reliability-based account of justified group belief I develop here.
A recent study of moral intuitions, performed by Joshua Greene and a group of researchers at Princeton University, has received a lot of attention. Greene and his collaborators designed a set of experiments in which subjects were undergoing brain scanning as they were asked to respond to various practical dilemmas. They found that contemplation of some of these cases (cases where the subjects had to imagine that they must use some direct form of violence) elicited greater activity in certain areas of the brain associated with emotions compared with the other cases. It has been argued (e.g., by Peter Singer) that these results undermine the reliability of our moral intuitions, and therefore provide an objection to methods of moral reasoning that presuppose that they carry an evidential weight (such as the idea of reflective equilibrium). I distinguish between two ways in which Greene's findings lend support for a sceptical attitude towards intuitions. I argue that, given the first version of the challenge, the method of reflective equilibrium can easily accommodate the findings. As for the second version of the challenge, I argue that it does not so much pose a threat specifically to the method of reflective equilibrium but to the idea that moral claims can be justified through rational argumentation in general.
The aim of the present paper is to argue that robust virtue epistemology is correct. That is, a complete account of knowledge does not need an additional modal criterion in order to account for knowledge-undermining epistemic luck. I begin by presenting the problems facing robust virtue epistemology by examining two prominent counterexamples—the Barney and ‘epistemic twin earth’ cases. After proposing a way in which virtue epistemology can explain away these two problematic cases, thereby implying that cognitive abilities are also safe, I offer a naturalistic explanation in support of this last claim, inspired by evolutionary epistemology. Finally, I argue that naturalized epistemology should not be thought of as being exclusively descriptive. On the contrary, the evolutionary story I offer in support of the claim that reliability implies safety can provide us with a plausible epistemic norm.
Reliabilist theories propose to analyse epistemic justification in terms of reliability. This paper argues that if we pay attention to the details of probability theory we find that there is no concept of reliability that can possibly play the role required by reliabilist theories. A distinction is drawn between the general reliability of a process and the single-case reliability of an individual belief, and it is argued that neither notion can serve the reliabilist adequately.
A measure of coherence is said to be truth conducive if and only if a higher degree of coherence results in a higher likelihood of truth. Recent impossibility results strongly indicate that there are no probabilistic coherence measures that are truth conducive. Indeed, this holds even if truth conduciveness is understood in a weak ceteris paribus sense. This raises the problem of how coherence could nonetheless be an epistemically important property. Our proposal is that coherence may be linked in a certain way to reliability. We define a measure of coherence to be reliability conducive if and only if a higher degree of coherence results in a higher probability that the information sources are reliable. Restricting ourselves to the most basic case, we investigate which coherence measures in the literature are reliability conducive. It turns out that, while a number of measures fail to be reliability conducive, except possibly in a trivial and uninteresting sense, Shogenji's measure and several measures generated by Douven and Meijs's recipe are notable exceptions to this rule.
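For two propositions, Shogenji's measure is C(A, B) = P(A & B) / (P(A) · P(B)): values above 1 indicate positive coherence, 1 indicates probabilistic independence, and 0 indicates incompatibility. A minimal sketch, with illustrative toy probabilities:

```python
# Shogenji's coherence measure for two propositions:
# how much more probable their conjunction is than it would be
# if they were probabilistically independent.

def shogenji(p_a, p_b, p_ab):
    return p_ab / (p_a * p_b)

# Positively correlated propositions cohere (C > 1) ...
print(shogenji(0.5, 0.5, 0.4))
# ... independent propositions are neutral (C = 1) ...
print(shogenji(0.5, 0.5, 0.25))
# ... and incompatible propositions maximally fail to cohere (C = 0).
print(shogenji(0.5, 0.5, 0.0))
```

The reliability-conduciveness result described above concerns how such a score for an information set relates to the posterior probability that the sources delivering the set are reliable.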
The “problem of memory” in epistemology is concerned with whether and how we could have knowledge, or at least justification, for trusting our apparent memories. I defend an inductive solution—more precisely, an abductive solution—to the problem. A natural worry is that any such solution would be circular, for it would have to depend on memory. I argue that belief in the reliability of memory can be justified from the armchair, without relying on memory. The justification is, roughly, that my having the sort of experience that my apparent memory should lead me to expect is best explained by the hypothesis that my memories are reliable. My solution is inspired by Harrod’s (1942) inductive solution. Coburn (1960) argued that Harrod’s solution contains a fatal flaw. I show that my solution is not vulnerable to Coburn’s objection, and respond to a number of other, recent and likely objections.
There are many domains about which we think we are reliable. When there is prima facie reason to believe that there is no satisfying explanation of our reliability about a domain given our background views about the world, this generates a challenge to our reliability about the domain or to our background views. This is what is often called the reliability challenge for the domain. In previous work, I discussed the reliability challenges for logic and for deductive inference. I argued for four main claims: First, there are reliability challenges for logic and for deduction. Second, these reliability challenges cannot be answered merely by providing an explanation of how it is that we have the logical beliefs and employ the deductive rules that we do. Third, we can explain our reliability about logic by appealing to our reliability about deduction. Fourth, there is a good prospect for providing an evolutionary explanation of the reliability of our deductive reasoning. In recent years, a number of arguments have appeared in the literature that can be applied against one or more of these four theses. In this paper, I respond to some of these arguments. In particular, I discuss arguments by Paul Horwich, Jack Woods, Dan Baras, Justin Clarke-Doane, and Hartry Field.
We are reliable about logic in the sense that we by-and-large believe logical truths and disbelieve logical falsehoods. Given that logic is an objective subject matter, it is difficult to provide a satisfying explanation of our reliability. This generates a significant epistemological challenge, analogous to the well-known Benacerraf-Field problem for mathematical Platonism. One initially plausible way to answer the challenge is to appeal to evolution by natural selection. The central idea is that being able to correctly deductively reason conferred a heritable survival advantage upon our ancestors. However, there are several arguments that purport to show that evolutionary accounts cannot even in principle explain how it is that we are reliable about logic. In this paper, I address these arguments. I show that there is no general reason to think that evolutionary accounts are incapable of explaining our reliability about logic.
A tempting argument for human rationality goes like this: it is more conducive to survival to have true beliefs than false beliefs, so it is more conducive to survival to use reliable belief-forming strategies than unreliable ones. But reliable strategies are rational strategies, so there is a selective advantage to using rational strategies. Since we have evolved, we must use rational strategies. In this paper I argue that some criticisms of this argument offered by Stephen Stich fail because they rely on unsubstantiated interpretations of some results from experimental psychology. I raise two objections to the argument: (i) even if it is advantageous to use rational strategies, it does not follow that we actually use them; and (ii) natural selection need not favor only or even primarily reliable belief-forming strategies.
Armchair philosophers have questioned the significance of recent work in experimental philosophy by pointing out that experiments have been conducted on laypeople and undergraduate students. To challenge a practice that relies on expert intuitions, so the armchair objection goes, one needs to demonstrate that expert intuitions rather than those of ordinary people are sensitive to contingent facts such as cultural, linguistic, socio-economic, or educational background. This article does exactly that. Based on two empirical studies on populations of 573 and 203 trained philosophers, respectively, it demonstrates that expert intuitions vary dramatically according to at least one contingent factor, namely, the linguistic background of the expert: philosophers make different intuitive judgments if their native language is English rather than Dutch, German, or Swedish. These findings cast doubt on the common armchair assumption that philosophical theories based on armchair intuitions are valid beyond the linguistic background against which they were developed.
Alex Worsnip has recently argued against conciliatory views that say that the degree of doxastic revision required in light of disagreement is a function of one’s antecedent reliability estimates for oneself and one’s disputant. According to Worsnip, the degree of doxastic revision is also sensitive to the resilience of these estimates; in particular, when one has positive “net resilience,” meaning that one is more confident in one’s estimate of one’s own reliability than in one’s estimate of the disputant’s reliability, less doxastic revision is required. I show that Worsnip’s Resilience Account, however intuitive it may be, sometimes issues prescriptions that are clearly irrational. I then argue that Worsnip’s criticisms of “extreme conciliationism” are mistaken. The discussion brings out several important lessons for the epistemology of disagreement: first, while positive net resilience does not affect the degree of conciliation required in one-shot disagreements, over multiple disagreements it may diminish or magnify the required degree of conciliation; second, a common way of framing the disagreement debate is misguided; and third, the focus of the disagreement debate should not be on whether reliability estimates should determine the degree of conciliation, but on what reasons may legitimately ground reliability estimates.
Several current debates in the epistemology of testimony are implicitly motivated by concerns about the reliability of rules for changing one’s beliefs in light of others’ claims. Call such rules testimonial norms (tns). To date, epistemologists have neither (i) characterized those features of communities that influence the reliability of tns, nor (ii) evaluated the reliability of tns as those features vary. These are the aims of this paper. I focus on scientific communities, where the transmission of highly specialized information is both ubiquitous and critically important. Employing a formal model of scientific inquiry, I argue that miscommunication and the “communicative structure” of science strongly influence the reliability of tns, where reliability is made precise in three ways.
Are we entitled or justified in taking the word of others at face value? An affirmative answer to this question is associated with the views of Thomas Reid. Recently, C. A. J. Coady has defended a Reidian view in his impressive and influential book, Testimony: A Philosophical Study. His central and most original argument for his positions involves reflection upon the practice of giving and accepting reports, of making assertions and relying on the word of others. His argument purports to show that testimony is, by its very nature, a “reliable form of evidence about the way the world is.” The argument moves from what we do to why we are justified in doing it. Although I am sympathetic with both the Reidian view and Coady’s attempt to connect why we rely on others with why we are entitled to rely on others, I find Coady’s argument ineffective.
Mendelovici (forthcoming) has recently argued that (1) tracking theories of mental representation (including teleosemantics) are incompatible with the possibility of reliable misrepresentation and that (2) this is an important difficulty for them. Furthermore, she argues that this problem commits teleosemantics to an unjustified a priori rejection of color eliminativism. In this paper I argue that (1) teleosemantics can accommodate most cases of reliable misrepresentation, (2) those cases the theory fails to account for are not objectionable, and (3) teleosemantics is not committed to any problematic view on the color realism-antirealism debate.
Psychological studies show that the beliefs of two agents in a hypothesis can diverge even if both agents receive the same evidence. This phenomenon of belief polarisation is often explained by invoking biased assimilation of evidence, where the agents’ prior views about the hypothesis affect the way they process the evidence. We suggest, using a Bayesian model, that even if such influence is excluded, belief polarisation can still arise by another mechanism. This alternative mechanism involves differential weighting of the evidence arising when agents have different initial views about the reliability of their sources of evidence. We provide a systematic exploration of the conditions for belief polarisation in Bayesian models which incorporate opinions about source reliability, and we discuss some implications of our findings for the psychological literature.
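The differential-weighting mechanism described in this abstract can be illustrated with a toy Bayesian model. The sketch below is a minimal illustration, not the authors' own model: the likelihood values (0.8/0.2 for a reliable channel, 0.5/0.5 for an uninformative one), the trust levels, and the report stream are all hypothetical. Two agents share the same prior over the hypothesis H and see the same reports, but differ only in how much they trust the source.

```python
def update(p_h, r, report):
    # Likelihood of the report given H / not-H: mix a reliable channel
    # (0.8 / 0.2) with an uninformative one (0.5 / 0.5), weighted by
    # the agent's trust r in the source.
    lik_h    = r * (0.8 if report else 0.2) + (1 - r) * 0.5
    lik_noth = r * (0.2 if report else 0.8) + (1 - r) * 0.5
    return p_h * lik_h / (p_h * lik_h + (1 - p_h) * lik_noth)

reports = [1, 1, 0, 1, 1]          # identical evidence stream for both agents
p_a = p_b = 0.5                    # identical priors over H
for e in reports:
    p_a = update(p_a, 0.9, e)      # agent A: high trust in the source
    p_b = update(p_b, 0.2, e)      # agent B: low trust in the same source

print(round(p_a, 3), round(p_b, 3))   # prints: 0.974 0.673
```

Despite identical priors over H and identical evidence, the two posteriors end up roughly 0.3 apart, purely because the agents weight the source differently.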
Background: Screen time among adults represents a continuing and growing problem in relation to health behaviors and health outcomes. However, no instrument currently exists in the literature that quantifies the use of modern screen-based devices. The primary purpose of this study was to develop and assess the reliability of a new screen time questionnaire, an instrument designed to quantify use of multiple popular screen-based devices among the US population.

Methods: An 18-item screen-time questionnaire was created to quantify use of commonly used screen devices (e.g. television, smartphone, tablet) across different time points during the week (e.g. weekday, weeknight, weekend). Test-retest reliability was assessed through intra-class correlation coefficients (ICCs) and the standard error of measurement (SEM). The questionnaire was delivered online using Qualtrics and administered through Amazon Mechanical Turk (MTurk).

Results: Eighty MTurk workers completed full study participation and were included in the final analyses. All items in the screen time questionnaire showed fair to excellent relative reliability (ICCs = 0.50–0.90; all p < 0.001), except for the item inquiring about use of a smartphone during an average weekend day (ICC = 0.16, p = 0.069). The SEM values were large for all screen types across the different periods under study.

Conclusions: Results from this study suggest this self-administered questionnaire may be used to successfully classify individuals into different categories of screen time use (e.g. high vs. low); however, objective measures are likely needed to increase the precision of screen time assessment.
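The test-retest statistics this abstract reports can be sketched in outline. Below is a minimal implementation of the standard two-way random-effects, single-measures ICC(2,1) together with the associated SEM; the screen-time data and variable names are invented for illustration and are not the study's data.

```python
from statistics import mean, stdev

def icc_2_1(session1, session2):
    """Two-way random-effects, single-measures ICC(2,1) for test-retest data."""
    n, k = len(session1), 2
    rows = list(zip(session1, session2))          # one row per respondent
    grand = mean(session1 + session2)
    row_means = [mean(r) for r in rows]
    col_means = [mean(session1), mean(session2)]
    # Sums of squares for subjects (rows), sessions (columns), and total.
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for r in rows for x in r)
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Hypothetical daily screen-time hours for 8 respondents, two sessions.
t1 = [2, 4, 6, 8, 3, 5, 7, 9]
t2 = [2.5, 4, 5.5, 8, 3.5, 5, 7.5, 8.5]
icc = icc_2_1(t1, t2)
sem = stdev(t1 + t2) * (1 - icc) ** 0.5   # standard error of measurement
```

Because SEM = SD·√(1 − ICC) is expressed in the questionnaire's own units (hours), a high ICC can coexist with a large SEM when between-subject variance is large, which is consistent with the pattern the abstract reports.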
Impaired hand proprioception can lead to difficulties in performing fine motor tasks, thereby affecting activities of daily living. The majority of children with unilateral cerebral palsy (uCP) experience proprioceptive deficits, but accurately quantifying these deficits is challenging due to the lack of sensitive measurement methods. Robot-assisted assessments provide a promising alternative; however, there is a need for solutions that specifically target children and their needs. We propose two novel robotics-based assessments to sensitively evaluate active and passive position sense of the index finger metacarpophalangeal joint in children. We then investigate the test-retest reliability and discriminant validity of these assessments in children with uCP and typically developing children (TDC), and further use the robotic platform to gain first insights into the fundamentals of hand proprioception. Both robotic assessments were performed in two sessions with a 1-h break in between. In the passive position sense assessment, the participant's finger is passively moved by the robot to a randomly selected position, and she/he needs to indicate the perceived finger position on a tablet screen located directly above the hand, so that vision of the hand is blocked. Active position sense is assessed by asking participants to accurately move their finger to a target position shown on the tablet screen, without visual feedback of the finger position. Ten children with uCP and 10 age-matched TDC were recruited in this study. Test-retest reliability in both populations was good (ICC > 0.79). Proprioceptive error was larger for children with uCP than for TDC, indicating discriminant validity. Active position sense was more accurate than passive, and the scores were not correlated, underlining the need for targeted assessments to comprehensively evaluate proprioception.
There was a significant effect of age on passive position sense in TDC but not in uCP, possibly linked to disturbed development of proprioceptive acuity in uCP. Overall, the proposed robot-assisted assessments are reliable and valid, and a promising alternative to commonly used clinical methods, which could help gain a better understanding of proprioceptive impairments in uCP, facilitating the design of novel therapies.
How we can reliably draw inferences from data, evidence and/or experience has been and continues to be a pressing question in everyday life, the sciences, politics and a number of branches of philosophy (traditional epistemology, social epistemology, formal epistemology, logic and philosophy of the sciences). In a world in which we can no longer fully rely on our experiences, interlocutors, measurement instruments, data collection and storage systems and even news outlets to draw reliable inferences, the issue becomes even more pressing. While we were working on this question using a formal epistemology approach (Landes and Osimani 2020; De Pretis et al. 2019; Osimani and Landes 2020; Osimani 2020), we realised that the breadth of current active interest in the notion of reliability was much greater than we initially thought. Given the breadth of approaches and angles present in philosophy (even in this journal: Schubert 2012; Avigad 2021; Claveau and Grenier 2019; Kummerfeld and Danks 2014; Landes 2021; Trpin et al. 2021; Schippers 2014; Schindler 2011; Kelly et al. 2016; Mayo-Wilson 2014; Olsson and Schubert 2007; Pittard 2017), we thought that it would be beneficial to provide a forum for an open exchange of ideas, in which philosophers working in different paradigms could come together. Our call for expressions of interest received a great variety of promised manuscripts, and this variety is reflected in the published papers. They range from fields far from our own interests, such as quantum probabilities (de Ronde et al. 2021) and evolvable software systems (Primiero et al. 2021), through topics closer to our own research in the philosophy of medicine (Lalumera et al. 2020), psychology (Dutilh et al. 2021) and traditional epistemology (Dunn 2021; Tolly 2021), to closely shared interests in formal epistemology (Romero and Sprenger 2021), even within our own department (Merdes et al. 2021).
Our job is now to reliably inform you about all the contributions in the papers in this special issue. Unfortunately, that task is beyond our capabilities. What we can do instead is summarise the contributed papers, to inform your reliable inference to read them all in great detail.