206 entries found (showing 1–50).
  1. The Psychology of The Two Envelope Problem. J. S. Markovitch - manuscript
    This article concerns the psychology of the paradoxical Two Envelope Problem. The goal is to find instructive variants of the envelope switching problem that are capable of clear-cut resolution, while still retaining paradoxical features. By relocating the original problem into different contexts involving commutes and playing cards, the reader is presented with a succession of resolved paradoxes that reduce the confusion arising from the parent paradox. The goal is to reduce confusion by understanding how we sometimes misread mathematical statements; or, (...)
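The clear-cut resolution the abstract aims at can be sanity-checked numerically. The simulation below is a hypothetical sketch (not drawn from Markovitch's paper): it models the envelopes correctly as containing x and 2x for a random x, and shows that always switching yields the same long-run average as always keeping.

```python
import random

random.seed(0)

def trial(switch):
    # One envelope holds x, the other 2x; we pick one uniformly at random.
    x = random.uniform(1, 100)
    envelopes = [x, 2 * x]
    pick = random.randrange(2)
    return envelopes[1 - pick] if switch else envelopes[pick]

n = 100_000
keep = sum(trial(False) for _ in range(n)) / n
swap = sum(trial(True) for _ in range(n)) / n
# Both averages approximate 1.5 * E[x]; switching confers no advantage.
print(round(keep, 1), round(swap, 1))
```

The apparent "gain from switching" only arises when the envelopes are misread as containing y and 2y for the fixed observed amount y, which is the kind of misread mathematical statement the abstract points to.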
  2. Randomness and Teleology in a Conscious Universe. Marco Masi - manuscript
    A common line of reasoning that argues against teleological conjectures in physics, cosmology, and especially evolutionary biology, resorts to statistical concepts based on notions of randomness, unpredictability or chance. A conceptual relationship between the aleatory uncertainty of a process and its inherent lack of goal-directedness is often taken for granted. This relies on a misunderstanding of the real significance of stochastic concepts, importing a popular semantics into scientific considerations, which leads to unwarranted conclusions. We felt it necessary to clarify terms (...)
  3. The Future Has Thicker Tails Than the Past: Model Error as Branching Counterfactuals. Nassim N. Taleb - manuscript
    Ex ante predicted outcomes should be interpreted as counterfactuals (potential histories), with errors as the spread between outcomes. But error rates have error rates. We reapply measurements of uncertainty about the estimation errors of the estimation errors of an estimation treated as branching counterfactuals. Such recursions of epistemic uncertainty have markedly different distributional properties from conventional sampling error, and lead to fatter tails in the projections than in past realizations. Counterfactuals of error rates always lead to fat tails, regardless of (...)
  4. Pharmacovigilance as Personalized Evidence. Francesco De Pretis, William Peden, Jürgen Landes & Barbara Osimani - forthcoming - In Chiara Beneduce & Marta Bertolaso (eds.), Personalized Medicine in the Making. Springer.
    Personalized medicine relies on two points: 1) causal knowledge about the possible effects of X in a given statistical population; 2) assignment of the given individual to a suitable reference class. Regarding point 1, standard approaches to causal inference are generally considered to be characterized by a trade-off between how confidently one can establish causality in any given study (internal validity) and extrapolating such knowledge to specific target groups (external validity). Regarding point 2, it is uncertain which reference class leads (...)
  5. Legal Burdens of Proof and Statistical Evidence. Georgi Gardiner - forthcoming - In James Chase & David Coady (eds.), The Routledge Handbook of Applied Epistemology. Routledge.
    In order to perform certain actions – such as incarcerating a person or revoking parental rights – the state must establish certain facts to a particular standard of proof. These standards – such as preponderance of evidence and beyond reasonable doubt – are often interpreted as likelihoods or epistemic confidences. Many theorists construe them numerically; beyond reasonable doubt, for example, is often construed as 90 to 95% confidence in the guilt of the defendant. -/- A family of influential cases suggests (...)
    21 citations
  6. There is Cause to Randomize. Cristian Larroulet Philippi - forthcoming - Philosophy of Science.
    While practitioners think highly of randomized studies, some philosophers argue that there is no epistemic reason to randomize. Here I show that their arguments do not entail their conclusion. Moreover, I provide novel reasons for randomizing in the context of interventional studies. The overall discussion provides a unified framework for assessing baseline balance, one that holds for interventional and observational studies alike. The upshot: practitioners’ strong preference for randomized studies can be defended in some cases, while still offering a nuanced (...)
  7. Scientific Metaphysics and Information. Bruce Long - forthcoming - Springer.
    This book investigates the interplay between two new and influential subdisciplines in the philosophy of science and philosophy: contemporary scientific metaphysics and the philosophy of information. Scientific metaphysics embodies various scientific realisms and has a partial intellectual heritage in some forms of neo-positivism, but is far more attuned than the latter to statistical science, theory defeasibility, scale variability, and pluralist ontological and explanatory commitments, and is averse to a priori conceptual analysis. The philosophy of information is the combination of what has (...)
  8. Statistical Significance Testing in Economics. William Peden & Jan Sprenger - forthcoming - In Conrad Heilmann & Julian Reiss (eds.), The Routledge Handbook of the Philosophy of Economics.
    The origins of testing scientific models with statistical techniques go back to 18th century mathematics. However, the modern theory of statistical testing was primarily developed through the work of Sir R.A. Fisher, Jerzy Neyman, and Egon Pearson in the inter-war period. Some of Fisher's papers on testing were published in economics journals (Fisher, 1923, 1935) and exerted a notable influence on the discipline. The development of econometrics and the rise of quantitative economic models in the mid-20th century made statistical significance (...)
  9. Distention for Sets of Probabilities. Rush T. Stewart & Michael Nielsen - forthcoming - Philosophy of Science.
    A prominent pillar of Bayesian philosophy is that, relative to just a few constraints, priors “wash out” in the limit. Bayesians often appeal to such asymptotic results as a defense against charges of excessive subjectivity. But, as Seidenfeld and coauthors observe, what happens in the short run is often of greater interest than what happens in the limit. They use this point as one motivation for investigating the counterintuitive short run phenomenon of dilation since, it is alleged, “dilation contrasts with (...)
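The dilation phenomenon the abstract contrasts with washing out can be made concrete with the standard two-coin illustration (a textbook sketch, not code from the paper): X is a fair coin, Z is a coin of unknown bias p, and the report is Y = 1 iff X = Z. Every prior in the set agrees beforehand that P(X=1) = 1/2, yet after learning Y the set of posteriors spreads over all of [0, 1].

```python
from fractions import Fraction

def posterior_x_given_y1(p):
    # X fair coin, Z independent with P(Z=1) = p, Y = 1 iff X == Z.
    # P(X=1 | Y=1) = P(X=1, Z=1) / P(Y=1) = (p/2) / (1/2) = p
    p_x1_and_y1 = Fraction(1, 2) * p
    p_y1 = Fraction(1, 2) * p + Fraction(1, 2) * (1 - p)  # always 1/2
    return p_x1_and_y1 / p_y1

prior = Fraction(1, 2)  # precise and agreed upon before the evidence
posteriors = [posterior_x_given_y1(Fraction(k, 10)) for k in range(11)]
print(prior, min(posteriors), max(posteriors))  # posteriors span [0, 1]: dilation
```

Conditioning here makes the probability interval strictly wider than the prior, which is the "counterintuitive short run phenomenon" the abstract refers to.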
  10. Machine-Believers Learning Faiths & Knowledges: The New Gospel of Artificial Intelligence. Virgil W. Brower - 2021 - Internationales Jahrbuch Für Medienphilosophie 7 (1):97-121.
    One is occasionally reminded of Foucault's proclamation in a 1970 interview that "perhaps, one day this century will be known as Deleuzian." Less often is one compelled to update and restart with a supplementary counter-proclamation of the mathematician, David Lindley: "the twenty-first century would be a Bayesian era..." The verb tenses of both are conspicuous. // To critically attend to what is today often feared and demonized, but also revered, deployed, and commonly referred to as algorithm(s), one cannot avoid the (...)
  11. Causal Inference From Noise. Nevin Climenhaga, Lane DesAutels & Grant Ramsey - 2021 - Noûs 55 (1):152-170.
    "Correlation is not causation" is one of the mantras of the sciences—a cautionary warning especially to fields like epidemiology and pharmacology where the seduction of compelling correlations naturally leads to causal hypotheses. The standard view from the epistemology of causation is that to tell whether one correlated variable is causing the other, one needs to intervene on the system—the best sort of intervention being a trial that is both randomized and controlled. In this paper, we argue that some purely correlational (...)
    1 citation
  12. Statistical Inference and the Replication Crisis. Lincoln J. Colling & Dénes Szűcs - 2021 - Review of Philosophy and Psychology 12 (1):121-147.
    The replication crisis has prompted many to call for statistical reform within the psychological sciences. Here we examine issues within Frequentist statistics that may have led to the replication crisis, and we examine the alternative—Bayesian statistics—that many have suggested as a replacement. The Frequentist approach and the Bayesian approach offer radically different perspectives on evidence and inference with the Frequentist approach prioritising error control and the Bayesian approach offering a formal method for quantifying the relative strength of evidence for hypotheses. (...)
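The contrast the abstract draws between error control and the relative strength of evidence can be seen on a single dataset. The sketch below is my own illustration with assumed numbers, not from the paper: 61 heads in 100 tosses is "significant" at the 5% level against p = 0.5, while the Bayes factor for p = 0.5 against a uniform-prior alternative registers only weak evidence.

```python
from math import comb

n, k = 100, 61  # 61 heads in 100 tosses; H0: p = 0.5

def binom_pmf(n, k, p=0.5):
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Frequentist: two-sided binomial p-value under H0 (symmetric, so double one tail)
p_value = 2 * sum(binom_pmf(n, i) for i in range(k, n + 1))

# Bayesian: Bayes factor BF01 for H0 against a uniform prior on p;
# the marginal likelihood under a uniform prior is 1 / (n + 1)
bf01 = binom_pmf(n, k) / (1 / (n + 1))

print(round(p_value, 4), round(bf01, 3))
```

That one dataset can be statistically significant (p ≈ .035) while its Bayes factor sits near 1 is one standard way of posing the Frequentist–Bayesian divergence the authors examine.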
  13. Contested Numbers: The failed negotiation of objective statistics in a methodological review of Kinsey et al.’s sex research. Tabea Cornel - 2021 - History and Philosophy of the Life Sciences 43 (1):1-32.
    From 1950 to 1952, statisticians W.G. Cochran, C.F. Mosteller, and J.W. Tukey reviewed A.C. Kinsey and colleagues’ methodology. Neither the history-and-philosophy of science literature nor contemporary theories of interdisciplinarity seem to offer a conceptual model that fits this forced interaction, which was characterized by significant power asymmetries and disagreements on multiple levels. The statisticians initially attempted to exclude all non-technical matters from their evaluation, but their political and personal investments interfered with this agenda. In the face of McCarthy’s witch hunts, (...)
  14. Why Simpler Computer Simulation Models Can Be Epistemically Better for Informing Decisions. Casey Helgeson, Vivek Srikrishnan, Klaus Keller & Nancy Tuana - 2021 - Philosophy of Science 88 (2):213-233.
    For computer simulation models to usefully inform climate risk management, uncertainties in model projections must be explored and characterized. Because doing so requires running the model many ti...
  15. Francis Galton’s Regression Towards Mediocrity and the Stability of Types. Adam Krashniak & Ehud Lamm - 2021 - Studies in History and Philosophy of Science Part A 81:6-19.
    A prevalent narrative locates the discovery of the statistical phenomenon of regression to the mean in the work of Francis Galton. It is claimed that after 1885, Galton came to explain the fact that offspring deviated less from the mean value of the population than their parents did as a population-level statistical phenomenon and not as the result of the processes of inheritance. Arguing against this claim, we show that Galton did not explain regression towards mediocrity statistically, and did not (...)
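The population-level statistical reading that the abstract says Galton did not hold can itself be stated in a few lines of simulation (a hypothetical sketch with assumed parameter values, not data from the paper): when parent and offspring heights are correlated at r < 1, children of above-average parents are, on average, closer to the population mean, with no hereditary mechanism doing any work.

```python
import random

random.seed(1)
MU, SD, R = 170.0, 7.0, 0.6  # assumed population mean/SD (cm) and parent-child correlation

def offspring(parent):
    # Bivariate-normal model: E[child | parent] regresses toward MU by factor R.
    return MU + R * (parent - MU) + random.gauss(0, SD * (1 - R**2) ** 0.5)

parents = [random.gauss(MU, SD) for _ in range(50_000)]
tall = [(p, offspring(p)) for p in parents if p > MU + SD]  # parents > 1 SD above mean
mean_tall_parent = sum(p for p, _ in tall) / len(tall)
mean_their_child = sum(c for _, c in tall) / len(tall)
print(round(mean_tall_parent, 1), round(mean_their_child, 1))
```

The children of tall parents are still taller than average, but systematically less so than their parents: regression to the mean as a pure consequence of imperfect correlation.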
  16. The epistemic consequences of pragmatic value-laden scientific inference. Adam P. Kubiak & Paweł Kawalec - 2021 - European Journal for Philosophy of Science 11 (2):1-26.
    In this work, we explore the epistemic import of the value-ladenness of Neyman-Pearson’s Theory of Testing Hypotheses by reconstructing and extending Daniel Steel’s argument for the legitimate influence of pragmatic values on scientific inference. We focus on how to properly understand N-P’s pragmatic value-ladenness and the epistemic reliability of N-P. We develop an account of the twofold influence of pragmatic values on N-P’s epistemic reliability and replicability. We refer to these two distinguished aspects as “direct” and “indirect”. We discuss the (...)
  17. Misalignment Between Research Hypotheses and Statistical Hypotheses: A Threat to Evidence-Based Medicine? Insa Lawler & Georg Zimmermann - 2021 - Topoi 40 (2):307-318.
    Evidence-based medicine frequently uses statistical hypothesis testing. In this paradigm, data can only disconfirm a research hypothesis’ competitors: One tests the negation of a statistical hypothesis that is supposed to correspond to the research hypothesis. In practice, these hypotheses are often misaligned. For instance, directional research hypotheses are often paired with non-directional statistical hypotheses. Prima facie, one cannot gain proper evidence for one’s research hypothesis employing a misaligned statistical hypothesis. This paper sheds light on the nature of and the reasons (...)
  18. Christianity & Science in Harmony? Robert W. P. Luk - 2021 - Science and Philosophy 9 (2):61-82.
    A worldview that does not involve religion or science seems to be incomplete. However, a worldview that includes both religion and science may raise concerns about incompatibility. This paper looks at a particular religion, Christianity, and proceeds to develop a worldview in which Christianity and Science are compatible with each other. The worldview may make use of some ideas of Christianity and may involve some of the author’s own ideas on Christianity. It is thought that Christianity and Science are in harmony in (...)
  19. Revisiting the Two Predominant Statistical Problems: The Stopping-Rule Problem and the Catch-All Hypothesis Problem. Yusaku Ohkubo - 2021 - Annals of the Japan Association for Philosophy of Science 30:23-41.
    The history of statistics is filled with many controversies, in which the prime focus has been the difference in the “interpretation of probability” between Frequentist and Bayesian theories. Many philosophical arguments have been elaborated to examine the problems of both theories based on this dichotomized view of statistics, including the well-known stopping-rule problem and the catch-all hypothesis problem. However, there are also several “hybrid” approaches in theory, practice, and philosophical analysis. This poses many fundamental questions. This paper (...)
  20. A Battle in the Statistics Wars: A Simulation-Based Comparison of Bayesian, Frequentist and Williamsonian Methodologies. Mantas Radzvilas, William Peden & Francesco De Pretis - 2021 - Synthese 199 (5-6):13689-13748.
    The debates between Bayesian, frequentist, and other methodologies of statistics have tended to focus on conceptual justifications, sociological arguments, or mathematical proofs of their long run properties. Both Bayesian statistics and frequentist (“classical”) statistics have strong cases on these grounds. In this article, we instead approach the debates in the “Statistics Wars” from a largely unexplored angle: simulations of different methodologies’ performance in the short to medium run. We conducted a large number of simulations using a straightforward decision problem based (...)
  21. Hacking, Ian (1936–). Samuli Reijula - 2021 - Routledge Encyclopedia of Philosophy.
    Ian Hacking (born in 1936, Vancouver, British Columbia) is best known for his work in the philosophy of the natural and social sciences, but his contributions to philosophy are broad, spanning many areas and traditions. In his detailed case studies of the development of probabilistic and statistical reasoning, Hacking pioneered the naturalistic approach in the philosophy of science. Hacking’s research on social constructionism, transient mental illnesses, and the looping effect of human kinds makes use of historical materials to shed (...)
  22. Inflated effect sizes and underpowered tests: how the severity measure of evidence is affected by the winner’s curse. Guillaume Rochefort-Maranda - 2021 - Philosophical Studies 178 (1):133-145.
    My aim in this paper is to show how the problem of inflated effect sizes corrupts the severity measure of evidence. This has never been done. In fact, the Winner’s Curse is barely mentioned in the philosophical literature. Since the severity score is the predominant measure of evidence for frequentist tests in the philosophical literature, it is important to underscore its flaws. It is also crucial to bring the philosophical literature up to speed with the limits of classical testing. The (...)
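The inflation mechanism the abstract relies on is easy to reproduce (a sketch with assumed numbers, not the paper's own simulation): when a study is underpowered, conditioning on statistical significance selects estimates from the upper tail of the sampling distribution, so significant estimates systematically overshoot the true effect.

```python
import random, statistics

random.seed(2)
TRUE_D, N, SIMS, Z_CRIT = 0.2, 20, 20_000, 1.96  # small true effect, small samples

significant_estimates = []
for _ in range(SIMS):
    sample = [random.gauss(TRUE_D, 1.0) for _ in range(N)]
    est = statistics.mean(sample)
    se = 1.0 / N ** 0.5          # known unit variance, for simplicity
    if est / se > Z_CRIT:        # one-sided z-test at alpha = .025
        significant_estimates.append(est)

power = len(significant_estimates) / SIMS
inflated = statistics.mean(significant_estimates)
print(round(power, 2), round(inflated, 2))  # low power; mean significant estimate well above 0.2
```

Under these assumed settings power is roughly 0.14, and the average significant estimate is roughly 0.55, more than double the true effect of 0.2: the Winner's Curse in miniature.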
  23. When to Adjust Alpha During Multiple Testing: A Consideration of Disjunction, Conjunction, and Individual Testing. Mark Rubin - 2021 - Synthese 199 (3-4):10969-11000.
    Scientists often adjust their significance threshold during null hypothesis significance testing in order to take into account multiple testing and multiple comparisons. This alpha adjustment has become particularly relevant in the context of the replication crisis in science. The present article considers the conditions in which this alpha adjustment is appropriate and the conditions in which it is inappropriate. A distinction is drawn between three types of multiple testing: disjunction testing, conjunction testing, and individual testing. It is argued that alpha (...)
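The motivation for alpha adjustment in disjunction testing can be illustrated with a small Monte Carlo sketch (hypothetical numbers, not from Rubin's paper): with 10 independent true-null tests each run at α = .05, the chance of at least one false positive is about 1 − .95¹⁰ ≈ .40, and a Bonferroni-adjusted threshold of α/10 restores the family-wise rate to about .05.

```python
import random

random.seed(3)
M, SIMS = 10, 20_000  # 10 independent true-null tests per family

def any_false_positive(threshold):
    # Flag the family if any |z| exceeds the two-sided critical value.
    return any(abs(random.gauss(0, 1)) > threshold for _ in range(M))

# Precomputed two-sided normal quantiles (no scipy needed):
Z_05 = 1.960    # alpha = .05 per test
Z_005 = 2.807   # alpha = .05 / 10 per test (Bonferroni)

fwer_raw = sum(any_false_positive(Z_05) for _ in range(SIMS)) / SIMS
fwer_bonf = sum(any_false_positive(Z_005) for _ in range(SIMS)) / SIMS
print(round(fwer_raw, 2), round(fwer_bonf, 2))
```

Whether controlling the family-wise rate is the right goal depends on whether the family is evaluated disjunctively, conjunctively, or test by test, which is exactly the distinction the abstract draws.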
  24. Phrenology and the Average Person, 1840–1940. Fenneke Sysling - 2021 - History of the Human Sciences 34 (2):27-45.
    The popular science of phrenology is known for its preoccupation with geniuses and criminals, but this article shows that phrenologists also introduced ideas about the ‘average’ person. Popular phrenologists in the US and the UK examined the heads of their clients to give an indication of their character. Based on the publications of phrenologists and on a large collection of standardized charts with clients’ scores, this article analyses their definition of what they considered to be the ‘average’. It can be (...)
  25. Sample Representation in the Social Sciences. Kino Zhao - 2021 - Synthese (10):9097-9115.
    The social sciences face a problem of sample non-representation, where the majority of samples consist of undergraduate students from Euro-American institutions. The problem has been identified for decades with little trend of improvement. In this paper, I trace the history of sampling theory. The dominant framework, called the design-based approach, takes random sampling as the gold standard. The idea is that a sampling procedure that is maximally uninformative prevents samplers from introducing arbitrary bias, thus preserving sample representation. I show how (...)
  26. Are Scientific Models of Life Testable? A Lesson From Simpson's Paradox. Prasanta S. Bandyopadhyay, Don Dcruz, Nolan Grunska & Mark Greenwood - 2020 - Sci 1 (3).
    We address the need for a model by considering two competing theories regarding the origin of life: (i) the Metabolism First theory, and (ii) the RNA World theory. We discuss two interrelated points, namely: (i) Models are valuable tools for understanding both the processes and intricacies of origin-of-life issues, and (ii) Insights from models also help us to evaluate the core objection to origin-of-life theories, called “the inefficiency objection”, which is commonly raised by proponents of both the Metabolism First theory (...)
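Simpson's paradox, which drives the abstract's lesson about testability, can be exhibited with the classic kidney-stone numbers (a standard illustration, not data from this paper): treatment A beats B within every severity subgroup, yet B beats A in the pooled table.

```python
from fractions import Fraction

# (successes, patients) per treatment arm, split by stone severity;
# counts follow the classic Charig et al. kidney-stone illustration.
groups = {
    "mild":   {"A": (81, 87),   "B": (234, 270)},
    "severe": {"A": (192, 263), "B": (55, 80)},
}

def rate(s, n):
    return Fraction(s, n)  # exact success rate

for g, arms in groups.items():
    assert rate(*arms["A"]) > rate(*arms["B"])  # A wins in each subgroup

# Pool the two subgroups by summing successes and patients per arm.
totals = {arm: tuple(map(sum, zip(*(arms[arm] for arms in groups.values()))))
          for arm in ("A", "B")}
print(totals)  # pooled (successes, patients) per arm
assert rate(*totals["A"]) < rate(*totals["B"])  # yet B wins pooled
```

The reversal happens because the severe cases, where success rates are lower overall, are disproportionately assigned to A, so pooling confounds treatment with severity.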
  27. Reliability: An Introduction. Stefano Bonzio, Jürgen Landes & Barbara Osimani - 2020 - Synthese (Suppl 23):1-10.
    How we can reliably draw inferences from data, evidence and/or experience has been and continues to be a pressing question in everyday life, the sciences, politics and a number of branches in philosophy (traditional epistemology, social epistemology, formal epistemology, logic and philosophy of the sciences). In a world in which we can no longer fully rely on our experiences, interlocutors, measurement instruments, data collection and storage systems and even news outlets to draw reliable inferences, the issue becomes even more pressing. (...)
    1 citation
  28. Statistical Significance Under Low Power: A Gettier Case? Daniel Dunleavy - 2020 - Journal of Brief Ideas.
    A brief idea on statistics and epistemology.
  29. Reflexões acerca de Big Data e Cognição. Joao Kogler - 2020 - In Mariana C. Broens & Edna A. De Souza (eds.), Big Data: Implicações Epistemológicas e Éticas. São Paulo, State of São Paulo, Brazil: pp. 145-157.
    In this essay we examine the relationships between Big Data and cognition, in particular human cognition. The reason for exploring such relationships lies in two aspects. First, because in the domain of cognitive science, many speculate about the benefits that the uses of Big Data analysis techniques can provide to the characterization and understanding of cognition. Secondly, because the scientific and technological sectors that promote data analysis activities, particularly statistics, computer science and data science, naturally accustomed to working with Big (...)
  30. Heinrich Hartmann, The Body Populace: Military Statistics and Demography in Europe Before the First World War. Translated by Ellen Yutzy Glebe. (Transformations: Studies in the History of Science and Technology.) xxiii + 256 pp., notes, bibl., index. Cambridge, Mass./London: MIT Press, 2018. $40 (paper). ISBN 9780262536325. [REVIEW] Morgane Labbé - 2020 - Isis 111 (2):406-407.
  31. Scientific Self-Correction: The Bayesian Way. Felipe Romero & Jan Sprenger - 2020 - Synthese (Suppl 23):1-21.
    The enduring replication crisis in many scientific disciplines casts doubt on the ability of science to estimate effect sizes accurately, and in a wider sense, to self-correct its findings and to produce reliable knowledge. We investigate the merits of a particular countermeasure—replacing null hypothesis significance testing with Bayesian inference—in the context of the meta-analytic aggregation of effect sizes. In particular, we elaborate on the advantages of this Bayesian reform proposal under conditions of publication bias and other methodological imperfections that are (...)
    5 citations
  32. “Repeated Sampling From the Same Population?” A Critique of Neyman and Pearson’s Responses to Fisher. Mark Rubin - 2020 - European Journal for Philosophy of Science 10 (3):1-15.
    Fisher criticised the Neyman-Pearson approach to hypothesis testing by arguing that it relies on the assumption of “repeated sampling from the same population.” The present article considers the responses to this criticism provided by Pearson and Neyman. Pearson interpreted alpha levels in relation to imaginary replications of the original test. This interpretation is appropriate when test users are sure that their replications will be equivalent to one another. However, by definition, scientific researchers do not possess sufficient knowledge about the relevant (...)
  33. Conditional Degree of Belief and Bayesian Inference. Jan Sprenger - 2020 - Philosophy of Science 87 (2):319-335.
    Why are conditional degrees of belief in an observation E, given a statistical hypothesis H, aligned with the objective probabilities expressed by H? After showing that standard replies are not satisfactory, I develop a suppositional analysis of conditional degree of belief, transferring Ramsey’s classical proposal to statistical inference. The analysis saves the alignment, explains the role of chance-credence coordination, and rebuts the charge of arbitrary assessment of evidence in Bayesian inference. Finally, I explore the implications of this analysis for Bayesian (...)
    6 citations
  34. Trung tâm ISR có bài ra mừng 130 năm Ngày sinh Chủ tịch Hồ Chí Minh [The ISR Center publishes an article marking the 130th anniversary of President Hồ Chí Minh's birth]. Hồ Mạnh Toàn - 2020 - ISR Phenikaa 2020 (5):1-3.
    The new article, published on 19 May 2020 with doctoral candidate Nguyễn Minh Hoàng, a researcher at the ISR Center, as corresponding author, presents a Bayesian statistical approach to the study of social-science data. This is an outcome of the research agenda of the SDAG group, set out as early as 18 May 2019.
  35. Mereological Dominance and Simpson’s Paradox. Tung-Ying Wu - 2020 - Philosophia: Philosophical Quarterly of Israel 48 (1):391–404.
    Numerous papers have investigated the transitivity principle of ‘better-than.’ A recent argument appeals to the principle of mereological dominance for transitivity. However, writers have not treated mereological dominance in much detail. This paper sets out to evaluate the generality of mereological dominance and its effectiveness in supporting the transitivity principle. I found that the mereological dominance principle is vulnerable to a counterexample based on Simpson’s Paradox. The thesis concludes that the mereological dominance principle should be revised in certain ways.
  36. Thinking in Multitudes: Questionnaires and Composite Cases in Early American Psychology. Jacy L. Young - 2020 - History of the Human Sciences 33 (3-4):160-174.
    In the late 19th century, the questionnaire was one means of taking the case study into the multitudes. This article engages with Forrester’s idea of thinking in cases as a means of interrogating questionnaire-based research in early American psychology. Questionnaire research was explicitly framed by psychologists as a practice involving both natural historical and statistical forms of scientific reasoning. At the same time, questionnaire projects failed to successfully enact the latter aspiration in terms of synthesizing masses of collected data into (...)
    3 citations
  37. An Automatic Ockham’s Razor for Bayesians? Gordon Belot - 2019 - Erkenntnis 84 (6):1361-1367.
    It is sometimes claimed that the Bayesian framework automatically implements Ockham’s razor—that conditionalizing on data consistent with both a simple theory and a complex theory more or less inevitably favours the simpler theory. It is shown here that the automatic razor doesn’t in fact cut it for certain mundane curve-fitting problems.
  38. Evidence Amalgamation, Plausibility, and Cancer Research. Marta Bertolaso & Fabio Sterpetti - 2019 - Synthese 196 (8):3279-3317.
    Cancer research is experiencing ‘paradigm instability’, since two rival theories of carcinogenesis confront each other, namely the somatic mutation theory and the tissue organization field theory. Despite this theoretical uncertainty, a huge quantity of data is available thanks to the improvement of genome sequencing techniques. Some authors think that the development of new statistical tools will be able to overcome the lack of a shared theoretical perspective on cancer by amalgamating as many data as possible. We think instead (...)
    5 citations
  39. Clinical Equipoise and Adaptive Clinical Trials. Nicolas Fillion - 2019 - Topoi 38 (2):457-467.
    Ethically permissible clinical trials must not expose subjects to risks that are unreasonable in relation to anticipated benefits. In the research ethics literature, this moral requirement is typically understood in one of two different ways: as requiring the existence of a state of clinical equipoise, meaning a state of honest, professional disagreement among the community of experts about the preferred treatment; or as requiring an equilibrium between individual and collective ethics. It has been maintained that this second interpretation makes it (...)
    1 citation
  40. To Read More Papers, or to Read Papers Better? A Crucial Point for the Reproducibility Crisis. Thiago F. A. França & José M. Monserrat - 2019 - Bioessays 41 (1):1800206.
    The overflow of scientific literature stimulates poor reading habits which can aggravate science's reproducibility crisis. Thus, solving the reproducibility crisis demands not only methodological changes, but also changes in our relationship with the scientific literature, especially our reading habits. Importantly, this does not mean reading more, it means reading better.
    1 citation
  41. A Moral Framework for Understanding Fair ML Through Economic Models of Equality of Opportunity. Hoda Heidari - 2019 - Proceedings of the Conference on Fairness, Accountability, and Transparency 1.
    We map the recently proposed notions of algorithmic fairness to economic models of equality of opportunity (EOP), an extensively studied ideal of fairness in political philosophy. We formally show that, through our conceptual mapping, many existing definitions of algorithmic fairness, such as predictive value parity and equality of odds, can be interpreted as special cases of EOP. In this respect, our work serves as a unifying moral framework for understanding existing notions of algorithmic fairness. Most importantly, this framework allows us to (...)
    4 citations
  42. Why is Bayesian Confirmation Theory Rarely Practiced? Robert W. P. Luk - 2019 - Science and Philosophy 7 (1):3-20.
Bayesian confirmation theory is a leading account of how the confirmation or refutation of a hypothesis is decided on the basis of the probability calculus. While it is much discussed in the philosophy of science, is it actually practiced in hypothesis testing by scientists? Since the assignment of some of the probabilities in the theory is open to debate and the risk of making the wrong decision is unknown, many scientists do not use the theory in hypothesis testing. Instead, they use alternative statistical tests that (...)
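The calculation at the theory's core, checking whether evidence E raises the probability of hypothesis H, takes only a few lines. A minimal sketch; the probability assignments below are invented, and they are exactly the kind of contestable inputs the abstract says deter practitioners:

```python
def posterior(prior, lik_h, lik_not_h):
    # Bayes' theorem over the partition {H, not-H}:
    # P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|not-H)P(not-H)]
    evidence = lik_h * prior + lik_not_h * (1 - prior)
    return lik_h * prior / evidence

p_h = 0.3              # assumed prior P(H)
p_e_given_h = 0.8      # assumed likelihood P(E|H)
p_e_given_not_h = 0.2  # assumed likelihood P(E|not-H)

p_h_given_e = posterior(p_h, p_e_given_h, p_e_given_not_h)
# Incremental confirmation: E confirms H iff P(H|E) > P(H).
print(p_h_given_e, p_h_given_e > p_h)
```

Different but defensible choices of the three inputs can flip the verdict, which is one concrete way of seeing the abstract's point about debatable probability assignments.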
  43. Direct Inference in the Material Theory of Induction.William Peden - 2019 - Philosophy of Science 86 (4):672-695.
    John D. Norton’s “Material Theory of Induction” has been one of the most intriguing recent additions to the philosophy of induction. Norton’s account appears to be a notably natural account of actual inductive practices, although his theory has attracted considerable criticism. I detail several novel issues for his theory but argue that supplementing the Material Theory with a theory of direct inference could address these problems. I argue that if this combination is possible, a stronger theory of inductive reasoning emerges, (...)
44. We Are Less Free Than How We Think: Regular Patterns in Nonverbal Communication.Alessandro Vinciarelli, Anna Esposito, Mohammad Tayarani, Giorgio Roffo, Filomena Scibelli, Francesco Perrone & Dong Bach Vo - 2019 - In Multimodal Behavior Analysis in the Wild: Advances and Challenges. Computer Vision and Pattern Recognition. pp. 269-288.
    The goal of this chapter is to show that human behavior is not random but follows principles and laws that result in regular patterns which can be not only observed but also automatically detected and analyzed. The word “behavior” refers here to the nonverbal behavioral cues (e.g., facial expressions, laughter, gestures) that people display, typically outside conscious awareness, during social interactions. In particular, the chapter shows that observable behavioral patterns typically account for social and psychological differences that cannot be observed (...)
  45. Regression Explanation and Statistical Autonomy.Joeri Witteveen - 2019 - Biology and Philosophy 34 (5):1-20.
    The phenomenon of regression toward the mean is notoriously liable to be overlooked or misunderstood; regression fallacies are easy to commit. But even when regression phenomena are duly recognized, it remains perplexing how they can feature in explanations. This article develops a philosophical account of regression explanations as “statistically autonomous” explanations that cannot be deepened by adducing details about causal histories, even if the explananda as such are embedded in the causal structure of the world. That regression explanations have statistical (...)
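The explanandum itself is easy to reproduce in a simulation: generate test scores as stable skill plus independent noise, select the extreme scorers, and retest. Their retest mean falls back toward the population mean with no causal change whatsoever. All parameters below are arbitrary illustrative choices, not taken from the article:

```python
import random

random.seed(0)
N = 10_000
skill = [random.gauss(100, 10) for _ in range(N)]   # stable trait
first = [s + random.gauss(0, 10) for s in skill]    # test 1 = skill + noise
second = [s + random.gauss(0, 10) for s in skill]   # test 2, fresh noise

# Select the top decile on the first test.
cutoff = sorted(first)[int(0.9 * N)]
selected = [(f, s2) for f, s2 in zip(first, second) if f >= cutoff]

mean_first = sum(f for f, _ in selected) / len(selected)
mean_second = sum(s2 for _, s2 in selected) / len(selected)

# The selected group's retest mean regresses toward 100: a purely
# statistical phenomenon, with no causal story to deepen it.
print(round(mean_first, 1), round(mean_second, 1))
```

Because the extreme first scores are partly inflated noise, the retest mean sits between the selection mean and the population mean, which is the pattern the article's "statistically autonomous" explanations target.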
  46. Multiple Regression Is Not Multiple Regressions: The Meaning of Multiple Regression and the Non-Problem of Collinearity.Michael B. Morrissey & Graeme D. Ruxton - 2018 - Philosophy, Theory, and Practice in Biology 10 (3).
    Simple regression (regression analysis with a single explanatory variable), and multiple regression (regression models with multiple explanatory variables), typically correspond to very different biological questions. The former use regression lines to describe univariate associations. The latter describe the partial, or direct, effects of multiple variables, conditioned on one another. We suspect that the superficial similarity of simple and multiple regression leads to confusion in their interpretation. A clear understanding of these methods is essential, as they underlie a large range of (...)
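The authors' point can be made concrete with a tiny invented dataset in which y = x1 + 2·x2 exactly and x1 is correlated with x2: the simple-regression slope of y on x1 mixes x1's direct effect with the portion of x2 that travels with it, while the multiple-regression partial slopes recover the direct effects. A minimal sketch under those assumptions:

```python
def simple_slope(x, y):
    # Univariate least-squares slope: cov(x, y) / var(x).
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

def partial_slopes(x1, x2, y):
    # Two-predictor least-squares normal equations, solved by Cramer's rule.
    n = len(y)
    m1, m2, my = sum(x1) / n, sum(x2) / n, sum(y) / n
    c11 = sum((a - m1) ** 2 for a in x1)
    c22 = sum((b - m2) ** 2 for b in x2)
    c12 = sum((a - m1) * (b - m2) for a, b in zip(x1, x2))
    c1y = sum((a - m1) * (c - my) for a, c in zip(x1, y))
    c2y = sum((b - m2) * (c - my) for b, c in zip(x2, y))
    det = c11 * c22 - c12 ** 2
    return (c1y * c22 - c2y * c12) / det, (c2y * c11 - c1y * c12) / det

x1 = [0, 1, 2, 3, 4]
x2 = [0, 1, 1, 2, 3]                     # correlated with x1
y = [a + 2 * b for a, b in zip(x1, x2)]  # true partial effects: 1 and 2

print(simple_slope(x1, y))        # marginal association: ~2.4, not 1
print(partial_slopes(x1, x2, y))  # partial effects: ~(1.0, 2.0)
```

The two numbers answer different biological questions, which is the paper's thesis: neither is "wrong", but confusing them misreads the analysis.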
  47. Imprecise Probability and the Measurement of Keynes's "Weight of Arguments".William Peden - 2018 - IfCoLog Journal of Logics and Their Applications 5 (4):677-708.
    Many philosophers argue that Keynes’s concept of the “weight of arguments” is an important aspect of argument appraisal. The weight of an argument is the quantity of relevant evidence cited in the premises. However, this dimension of argumentation does not have a received method for formalisation. Kyburg has suggested a measure of weight that uses the degree of imprecision in his system of “Evidential Probability” to quantify weight. I develop and defend this approach to measuring weight. I illustrate the usefulness (...)
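The idea attributed to Kyburg here, letting the imprecision of a probability interval measure the weight of evidence behind it, can be sketched with an imprecise beta-binomial ("imprecise Dirichlet") model: the same relative frequency backed by more data yields a narrower interval. The model and the prior-strength parameter s = 2 are illustrative assumptions, not the paper's own formalism:

```python
def probability_interval(successes, trials, s=2):
    # Imprecise Dirichlet model: posterior expectations over the whole
    # set of Beta(s*t, s*(1-t)) priors, t in (0, 1).
    lower = successes / (trials + s)
    upper = (successes + s) / (trials + s)
    return lower, upper

def width(interval):
    return interval[1] - interval[0]

few = probability_interval(3, 5)      # 60% frequency, little evidence
many = probability_interval(60, 100)  # 60% frequency, much more evidence

# Higher Keynesian weight shows up as a narrower interval: same point
# estimate, sharper commitment.
print(width(few), width(many))
```

Interval width thus varies with the quantity of relevant evidence even when the relative frequency is fixed, which is the feature that makes imprecision a candidate measure of weight.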
  48. Two Impossibility Results for Measures of Corroboration.Jan Sprenger - 2018 - British Journal for the Philosophy of Science 69 (1):139--159.
    According to influential accounts of scientific method, such as critical rationalism, scientific knowledge grows by repeatedly testing our best hypotheses. But despite the popularity of hypothesis tests in statistical inference and science in general, their philosophical foundations remain shaky. In particular, the interpretation of non-significant results—those that do not reject the tested hypothesis—poses a major philosophical challenge. To what extent do they corroborate the tested hypothesis, or provide a reason to accept it? Popper sought for measures of corroboration that could (...)
  49. Probabilistic Opinion Pooling with Imprecise Probabilities.Rush T. Stewart & Ignacio Ojea Quintana - 2018 - Journal of Philosophical Logic 47 (1):17-45.
    The question of how the probabilistic opinions of different individuals should be aggregated to form a group opinion is controversial. But one assumption seems to be pretty much common ground: for a group of Bayesians, the representation of group opinion should itself be a unique probability distribution, 410–414, [45]; Bordley Management Science, 28, 1137–1148, [5]; Genest et al. The Annals of Statistics, 487–501, [21]; Genest and Zidek Statistical Science, 114–135, [23]; Mongin Journal of Economic Theory, 66, 313–351, [46]; Clemen and (...)
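The baseline that the abstract's common-ground assumption supports is linear opinion pooling, a weighted average of the individual credences; the imprecise alternative the authors explore can be glossed, roughly, as keeping the whole set of such pools rather than a single one. A minimal sketch with invented credences:

```python
def linear_pool(credences, weights):
    # Weighted average of the agents' probabilities for one proposition.
    return sum(c * w for c, w in zip(credences, weights))

credences = [0.2, 0.7]  # two agents' invented probabilities

# Precise pooling: one weight vector yields one compromise probability.
point_pool = linear_pool(credences, [0.5, 0.5])

# Imprecise pooling: sweep all weightings; the group opinion is then
# summarized by lower and upper pooled probabilities.
pools = [linear_pool(credences, [w, 1 - w]) for w in (i / 100 for i in range(101))]
lower, upper = min(pools), max(pools)
print(point_pool, (lower, upper))
```

For a linear pool the lower and upper values coincide with the least and greatest individual credences, one way of seeing why refusing to fix the weights naturally produces an imprecise, non-unique group opinion.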
50. A Vietnamese Scientist is Sole Author in Nature Research's Leading Data Science Journal.Thùy Dương - 2017 - Dân Trí Online 2017 (10).
    Dân Trí (25/10/2017): For the first time, a Vietnamese scientist, whose research was carried out entirely (100%) in Vietnam, has been published as sole author in Scientific Data, a leading data science journal in the publishing portfolio of the prestigious Nature Research.