
Citations of:

Frequentist statistics as a theory of inductive inference

In Deborah G. Mayo & Aris Spanos (eds.), Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability, and the Objectivity and Rationality of Science. Cambridge University Press (2006)

  • The Climate Wars and ‘the Pause’ – Are Both Sides Wrong? Roger Jones & James Ricketts - 2016 - Victoria University, Victoria Institute of Strategic Economic Studies.
  • Error statistical modeling and inference: Where methodology meets ontology. Aris Spanos & Deborah G. Mayo - 2015 - Synthese 192 (11):3533-3555.
    In empirical modeling, an important desideratum for deeming theoretical entities and processes as real is that they can be reproducible in a statistical sense. Current-day crises regarding replicability in science intertwine with the question of how statistical methods link data to statistical and substantive theories and models. Different answers to this question have important methodological consequences for inference, which are intertwined with a contrast between the ontological commitments of the two types of models. The key to untangling them is (...)
  • What type of Type I error? Contrasting the Neyman–Pearson and Fisherian approaches in the context of exact and direct replications. Mark Rubin - 2021 - Synthese 198 (6):5809–5834.
    The replication crisis has caused researchers to distinguish between exact replications, which duplicate all aspects of a study that could potentially affect the results, and direct replications, which duplicate only those aspects of the study that are thought to be theoretically essential to reproduce the original effect. The replication crisis has also prompted researchers to think more carefully about the possibility of making Type I errors when rejecting null hypotheses. In this context, the present article considers the utility of two (...)
  • Error and inference: an outsider stand on a frequentist philosophy. Christian P. Robert - 2013 - Theory and Decision 74 (3):447-461.
    This paper is an extended review of the book Error and Inference, edited by Deborah Mayo and Aris Spanos, about their frequentist and philosophical perspective on hypothesis testing and on the criticisms of alternatives such as the Bayesian approach.
  • Significance Tests: Vitiated or Vindicated by the Replication Crisis in Psychology? Deborah G. Mayo - 2020 - Review of Philosophy and Psychology 12 (1):101-120.
    The crisis of replication has led many to blame statistical significance tests for making it too easy to find impressive-looking effects that do not replicate. However, the very fact that it becomes difficult to replicate effects when features of the tests are tied down actually serves to vindicate statistical significance tests. While statistical significance tests, used correctly, serve to bound the probabilities of erroneous interpretations of data, this error control is nullified by data-dredging, multiple testing, and other biasing selection effects. (...)
  • Statistical significance and its critics: practicing damaging science, or damaging scientific practice? Deborah G. Mayo & David Hand - 2022 - Synthese 200 (3):1-33.
    While the common procedure of statistical significance testing and its accompanying concept of p-values have long been surrounded by controversy, renewed concern has been triggered by the replication crisis in science. Many blame statistical significance tests themselves, and some regard them as sufficiently damaging to scientific practice as to warrant being abandoned. We take a contrary position, arguing that the central criticisms arise from misunderstanding and misusing the statistical tools, and that in fact the purported remedies themselves risk damaging science. (...)
  • Some surprising facts about surprising facts. Deborah G. Mayo - 2014 - Studies in History and Philosophy of Science Part A 45:79-86.
    A common intuition about evidence is that if data x have been used to construct a hypothesis H, then x should not be used again in support of H. It is no surprise that x fits H, if H was deliberately constructed to accord with x. The question of when and why we should avoid such “double-counting” continues to be debated in philosophy and statistics. It arises as a prohibition against data mining, hunting for significance, tuning on the signal, and (...)
  • Some methodological issues in experimental economics. Deborah G. Mayo - 2008 - Philosophy of Science 75 (5):633-645.
    The growing acceptance and success of experimental economics has increased the interest of researchers in tackling philosophical and methodological challenges to which their work increasingly gives rise. I sketch some general issues that call for the combined expertise of experimental economists and philosophers of science, of experiment, and of inductive-statistical inference and modeling.
  • How to discount double-counting when it counts: Some clarifications. Deborah G. Mayo - 2008 - British Journal for the Philosophy of Science 59 (4):857-879.
    The issues of double-counting, use-constructing, and selection effects have long been the subject of debate in the philosophical as well as statistical literature. I have argued that it is the severity, stringency, or probativeness of the test—or lack of it—that should determine if a double-use of data is admissible. Hitchcock and Sober (2004) question whether this ‘severity criterion’ can perform its intended job. I argue that their criticisms stem from a flawed interpretation of the severity criterion. Taking their criticism as (...)
  • Non-Measurability, Imprecise Credences, and Imprecise Chances. Yoaav Isaacs, Alan Hájek & John Hawthorne - 2021 - Mind 131 (523):892-916.
    We offer a new motivation for imprecise probabilities. We argue that there are propositions to which precise probability cannot be assigned, but to which imprecise probability can be assigned. In such cases the alternative to imprecise probability is not precise probability, but no probability at all. And an imprecise probability is substantially better than no probability at all. Our argument is based on the mathematical phenomenon of non-measurable sets. Non-measurable propositions cannot receive precise probabilities, but there is a natural (...)
  • How do researchers evaluate statistical evidence when drawing inferences from data? Arianne Herrera-Bennett - 2019 - Dissertation, Ludwig Maximilians Universität, München.
  • Conditional Probabilities. Kenny Easwaran - 2019 - In Richard Pettigrew & Jonathan Weisberg (eds.), The Open Handbook of Formal Epistemology. PhilPapers Foundation. pp. 131-198.
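
The error-control point running through several of the entries above (notably Rubin 2021, Mayo 2020, and Mayo & Hand 2022) is quantitative: a nominal per-test Type I error rate no longer bounds the probability of reporting a spurious effect once many hypotheses are searched over and only the "significant" ones are reported. The following is a minimal simulation sketch of that point, not code from any of the cited works; the alpha level, number of tests, sample sizes, and the use of SciPy's two-sample t-test are illustrative assumptions.

import numpy as np
from scipy import stats

# Illustrative sketch: with 20 true null hypotheses per "study", the chance
# that at least one test comes out "significant" at alpha = 0.05 is roughly
# 1 - (1 - 0.05)**20, about 0.64, far above the nominal 5% per-test rate.
rng = np.random.default_rng(0)
alpha = 0.05      # nominal per-test Type I error rate (assumed)
n_tests = 20      # hypotheses searched over in one study (assumed)
n_obs = 30        # observations per group (assumed)
n_sims = 10_000   # number of simulated studies

studies_with_false_alarm = 0
for _ in range(n_sims):
    # Every null is true: both groups are drawn from the same distribution.
    a = rng.normal(size=(n_tests, n_obs))
    b = rng.normal(size=(n_tests, n_obs))
    _, p = stats.ttest_ind(a, b, axis=1)
    if (p < alpha).any():   # the study "finds an effect" if any test rejects
        studies_with_false_alarm += 1

print(f"Nominal per-test alpha: {alpha}")
print(f"Estimated P(at least one false rejection across {n_tests} tests): "
      f"{studies_with_false_alarm / n_sims:.3f}")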