British Journal for the Philosophy of Science 57 (2):323-357 (2006)
Authors | Deborah G. Mayo & Aris Spanos
Abstract |
Despite the widespread use of key concepts of the Neyman–Pearson (N–P) statistical paradigm—type I and II errors, significance levels, power, confidence levels—they have been the subject of philosophical controversy and debate for over 60 years. Both current and long-standing problems of N–P tests stem from unclarity and confusion, even among N–P adherents, as to how a test's (pre-data) error probabilities are to be used for (post-data) inductive inference as opposed to inductive behavior. We argue that the relevance of error probabilities is to ensure that only statistical hypotheses that have passed severe or probative tests are inferred from the data. The severity criterion supplies a meta-statistical principle for evaluating proposed statistical inferences, avoiding classic fallacies from tests that are overly sensitive, as well as those not sensitive enough to particular errors and discrepancies.

Contents
1 Introduction and overview
  1.1 Behavioristic and inferential rationales for Neyman–Pearson (N–P) tests
  1.2 Severity rationale: induction as severe testing
  1.3 Severity as a meta-statistical concept: three required restrictions on the N–P paradigm
2 Error statistical tests from the severity perspective
  2.1 N–P test T(α): type I, II error probabilities and power
  2.2 Specifying test T(α) using p-values
3 Neyman's post-data use of power
  3.1 Neyman: does failure to reject H warrant confirming H?
4 Severe testing as a basic concept for an adequate post-data inference
  4.1 The severity interpretation of acceptance (SIA) for test T(α)
  4.2 The fallacy of acceptance (i.e., an insignificant difference): Ms Rosy
  4.3 Severity and power
5 Fallacy of rejection: statistical vs. substantive significance
  5.1 Taking a rejection of H0 as evidence for a substantive claim or theory
  5.2 A statistically significant difference from H0 may fail to indicate a substantively important magnitude
  5.3 Principle for the severity interpretation of a rejection (SIR)
  5.4 Comparing significant results with different sample sizes in T(α): large n problem
  5.5 General testing rules for T(α), using the severe testing concept
6 The severe testing concept and confidence intervals
  6.1 Dualities between one- and two-sided intervals and tests
  6.2 Avoiding shortcomings of confidence intervals
7 Beyond the N–P paradigm: pure significance, and misspecification tests
8 Concluding comments: have we shown severity to be a basic concept in a N–P philosophy of induction?
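The core quantities the abstract contrasts (pre-data power versus post-data severity) can be illustrated numerically. Below is a minimal Python sketch for the kind of example the paper treats: a one-sided test T(α) of H0: μ ≤ μ0 against H1: μ > μ0 on a Normal mean with known σ. The function names and the specific numbers (n = 100, α = 0.025, observed mean 0.2) are illustrative assumptions, not taken from the paper.

```python
from statistics import NormalDist

Z = NormalDist()  # standard Normal distribution

def power(mu1, mu0=0.0, sigma=1.0, n=100, alpha=0.025):
    # Pre-data power against the alternative mu = mu1 for the one-sided test
    # of H0: mu <= mu0 vs H1: mu > mu0, with test statistic
    # d(X) = (Xbar - mu0) / (sigma / sqrt(n)) and cutoff c_alpha.
    c_alpha = Z.inv_cdf(1 - alpha)              # type I error bounded at alpha
    delta = (mu1 - mu0) / (sigma / n ** 0.5)    # standardized discrepancy
    return 1 - Z.cdf(c_alpha - delta)

def severity_reject(xbar, mu1, mu0=0.0, sigma=1.0, n=100):
    # Post-data severity for inferring "mu > mu1" after H0 is rejected:
    # SEV(mu > mu1) = P(d(X) <= d(x0); mu = mu1), evaluated at the observed mean.
    return Z.cdf((xbar - mu1) / (sigma / n ** 0.5))

# With n = 100 and sigma = 1, an observed mean of 0.2 rejects H0 at
# alpha = 0.025, since d(x0) = 2.0 > 1.96.
print(round(power(0.3), 3))                     # pre-data power against mu = 0.3
print(round(severity_reject(0.2, mu1=0.1), 3))  # high severity: warranted inference
print(round(severity_reject(0.2, mu1=0.3), 3))  # low severity: fallacy of rejection
```

The point of the contrast: the same statistically significant result warrants inferring μ > 0.1 with high severity (about 0.84) but warrants μ > 0.3 only with severity near 0.16, so reading the rejection as evidence for the larger discrepancy would commit the fallacy of rejection that Sections 5.1–5.5 analyze.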
DOI | 10.1093/bjps/axl003 |
References found in this work
Bayes or Bust?: A Critical Examination of Bayesian Confirmation Theory. John Earman - 1992 - MIT Press.
The Enterprise of Knowledge: An Essay on Knowledge, Credal Probability, and Chance. Isaac Levi - 1980 - MIT Press.
The Logic of Scientific Discovery. K. Popper - 1959 - British Journal for the Philosophy of Science 10 (37):55-57.
View all 57 references
Citations of this work
What Type of Type I Error? Contrasting the Neyman–Pearson and Fisherian Approaches in the Context of Exact and Direct Replications. Mark Rubin - 2021 - Synthese 198 (6):5809-5834.
The Objectivity of Subjective Bayesianism. Jan Sprenger - 2018 - European Journal for Philosophy of Science 8 (3):539-558.
What Is Epistemically Wrong with Research Affected by Sponsorship Bias? The Evidential Account. Alexander Reutlinger - 2020 - European Journal for Philosophy of Science 10 (2):1-26.
Bayesian Perspectives on the Discovery of the Higgs Particle. Richard Dawid - 2017 - Synthese 194 (2):377-394.
Pursuit and Inquisitive Reasons. Will Fleisher - 2022 - Studies in History and Philosophy of Science Part A 94:17-30.
View all 60 citations
Similar books and articles
Models and Statistical Inference: The Controversy Between Fisher and Neyman–Pearson. Johannes Lenhard - 2006 - British Journal for the Philosophy of Science 57 (1):69-91.
Mathematical Statistics and Metastatistical Analysis. Andrés Rivadulla - 1991 - Erkenntnis 34 (2):211-236.
How to Discount Double-Counting When It Counts: Some Clarifications. Deborah G. Mayo - 2008 - British Journal for the Philosophy of Science 59 (4):857-879.
Behavioristic, Evidentialist, and Learning Models of Statistical Testing. Deborah G. Mayo - 1985 - Philosophy of Science 52 (4):493-516.
A New Paradigm for Hypothesis Testing in Medicine, with Examination of the Neyman–Pearson Condition. G. William Moore, Grover M. Hutchins & Robert E. Miller - 1986 - Theoretical Medicine and Bioethics 7 (3).
Of Nulls and Norms. Peter Godfrey-Smith - 1994 - PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association 1994:280-290.
In Defense of the Neyman-Pearson Theory of Confidence Intervals. Deborah G. Mayo - 1981 - Philosophy of Science 48 (2):269-280.
Analytics
Added to PP index
2009-01-28
Total views
296 ( #36,524 of 2,518,735 )
Recent downloads (6 months)
1 ( #408,070 of 2,518,735 )