Results for 'Frequentist statistics'

998 found
  1.
    Frequentist statistics as a theory of inductive inference. Deborah G. Mayo & David Cox - 2006 - In Deborah G. Mayo & Aris Spanos (eds.), Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability, and the Objectivity and Rationality of Science. Cambridge University Press.
    After some general remarks about the interrelation between philosophical and statistical thinking, the discussion centres largely on significance tests. These are defined as the calculation of p-values rather than as formal procedures for ‘acceptance’ and ‘rejection’. A number of types of null hypothesis are described and a principle for evidential interpretation set out governing the implications of p-values in the specific circumstances of each application, as contrasted with a long-run interpretation. A number of more complicated situations are discussed (...)
    12 citations
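The significance-test notion in the abstract above — a p-value as the probability, under the null hypothesis, of data at least as extreme as those observed — can be sketched with an exact one-sided binomial test. The null, the counts, and the numbers below are illustrative choices, not drawn from Mayo and Cox:

```python
from math import comb

def binomial_p_value(successes: int, n: int, p0: float = 0.5) -> float:
    """One-sided p-value: probability under H0 (success prob = p0)
    of observing at least `successes` successes in n trials."""
    return sum(comb(n, k) * p0**k * (1 - p0)**(n - k)
               for k in range(successes, n + 1))

# Illustrative data: 16 successes in 20 trials, H0: p = 0.5.
p = binomial_p_value(16, 20)
print(round(p, 4))  # → 0.0059
```

On the evidential reading discussed in the abstract, the small p-value is interpreted as evidence against H0 for this specific application, rather than merely as a property of a long run of repetitions.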
  2.
    Testing Simulation Models Using Frequentist Statistics. Andrew P. Robinson - 2019 - In Claus Beisbart & Nicole J. Saam (eds.), Computer Simulation Validation: Fundamental Concepts, Methodological Frameworks, and Philosophical Perspectives. Springer Verlag. pp. 465-496.
    One approach to validating simulation models is to formally compare model outputs with independent data. We consider such model validation from the point of view of Frequentist statistics. A range of estimates and tests of goodness of fit have been advanced. We review these approaches, and demonstrate that some of the tests suffer from difficulties in interpretation because they rely on the null hypothesis that the model is similar to the observations. This reliance creates two unpleasant possibilities, namely, (...)
  3.
    Frequentist statistical inference without repeated sampling. Paul Vos & Don Holbert - 2022 - Synthese 200 (2):1-25.
    Frequentist inference typically is described in terms of hypothetical repeated sampling but there are advantages to an interpretation that uses a single random sample. Contemporary examples are given that indicate probabilities for random phenomena are interpreted as classical probabilities, and this interpretation of equally likely chance outcomes is applied to statistical inference using urn models. These are used to address Bayesian criticisms of frequentist methods. Recent descriptions of p-values, confidence intervals, and power are viewed through the lens of (...)
  4.
    Frequentist probability and frequentist statistics. J. Neyman - 1977 - Synthese 36 (1):97-131.
  5.
    Perspectival Realism and Frequentist Statistics: The Case of Jerzy Neyman’s Methodology and Philosophy. Adam P. Kubiak - unknown
    I investigate the extent to which perspectival realism (PR) agrees with frequentist statistical methodology and philosophy, with an emphasis on J. Neyman’s views. Based on the example of the stopping rule problem, I show how PR can naturally be associated with frequentist statistics in general. I also show that there are some aspects of Neyman’s thought that seem to confirm PR and others that disconfirm it. I argue that epistemic PR is consistent with Neyman’s frequentism to a satisfactory (...)
  6.
    Pragmatic warrant for frequentist statistical practice: the case of high energy physics. Kent W. Staley - 2017 - Synthese 194 (2).
    Amidst long-running debates within the field, high energy physics has adopted a statistical methodology that primarily employs standard frequentist techniques such as significance testing and confidence interval estimation, but incorporates Bayesian methods for limited purposes. The discovery of the Higgs boson has drawn increased attention to the statistical methods employed within HEP. Here I argue that the warrant for the practice in HEP of relying primarily on frequentist methods can best be understood as pragmatic, in the sense that (...)
    3 citations
  7.
    New Perspectives on (Some Old) Problems of Frequentist Statistics. Deborah G. Mayo & David Cox - 2010 - In Deborah G. Mayo & Aris Spanos (eds.), Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability, and the Objectivity and Rationality of Science. Cambridge University Press. pp. 247.
  8.
    How (not) to demonstrate unconscious priming: Overcoming issues with post-hoc data selection, low power, and frequentist statistics. Timo Stein, Simon van Gaal & Johannes J. Fahrenfort - 2024 - Consciousness and Cognition 119 (C):103669.
    1 citation
  9.
    A Battle in the Statistics Wars: a simulation-based comparison of Bayesian, Frequentist and Williamsonian methodologies. Mantas Radzvilas, William Peden & Francesco De Pretis - 2021 - Synthese 199 (5-6):13689-13748.
    The debates between Bayesian, frequentist, and other methodologies of statistics have tended to focus on conceptual justifications, sociological arguments, or mathematical proofs of their long run properties. Both Bayesian statistics and frequentist (“classical”) statistics have strong cases on these grounds. In this article, we instead approach the debates in the “Statistics Wars” from a largely unexplored angle: simulations of different methodologies’ performance in the short to medium run. We conducted a large number of simulations (...)
    2 citations
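A toy version of the simulation approach described in the abstract above: repeatedly generate data, apply a frequentist procedure, and record its performance over the run. The design (normal data with known sigma, a 95% confidence interval for the mean) is a minimal sketch of mine, not the authors' setup:

```python
import random

def coverage_of_95ci(n_experiments: int = 10_000, n: int = 30,
                     mu: float = 1.0, sigma: float = 2.0,
                     seed: int = 0) -> float:
    """Fraction of simulated experiments whose 95% CI for the mean
    (normal data, known sigma) actually covers the true mean."""
    rng = random.Random(seed)
    half_width = 1.96 * sigma / n ** 0.5
    hits = 0
    for _ in range(n_experiments):
        xbar = sum(rng.gauss(mu, sigma) for _ in range(n)) / n
        if abs(xbar - mu) <= half_width:
            hits += 1
    return hits / n_experiments

print(coverage_of_95ci())  # should land near 0.95
```

Varying `n_experiments` shows the contrast the paper exploits: the long-run guarantee is exact, but short-to-medium-run performance fluctuates around it.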
  10.
    Statistical inference without frequentist justifications. Jan Sprenger - 2010 - In M. Dorato & M. Suárez (eds.), EPSA Epistemology and Methodology of Science. Springer. pp. 289-297.
    Statistical inference is often justified by long-run properties of the sampling distributions, such as the repeated sampling rationale. These are frequentist justifications of statistical inference. I argue, in line with existing philosophical literature, but against a widespread image in empirical science, that these justifications are flawed. Then I propose a novel interpretation of probability in statistics, the artefactual interpretation. I believe that this interpretation is able to bridge the gap between statistical probability calculations and rational decisions on the (...)
    2 citations
  11.
    Bayesians Versus Frequentists: A Philosophical Debate on Statistical Reasoning. Jordi Vallverdú - 2016 - Berlin, Heidelberg: Springer.
    This book analyzes the origins of statistical thinking as well as its related philosophical questions, such as causality, determinism or chance. Bayesian and frequentist approaches are subjected to a historical, cognitive and epistemological analysis, making it possible to not only compare the two competing theories, but to also find a potential solution. The work pursues a naturalistic approach, proceeding from the existence of numerosity in natural environments to the existence of contemporary formulas and methodologies to heuristic pragmatism, a concept (...)
  12. Reviving Frequentism. Mario Hubert - 2021 - Synthese 199:5255–5584.
    Philosophers now seem to agree that frequentism is an untenable strategy to explain the meaning of probabilities. Nevertheless, I want to revive frequentism, and I will do so by grounding probabilities on typicality in the same way as the thermodynamic arrow of time can be grounded on typicality within statistical mechanics. This account, which I will call typicality frequentism, will evade the major criticisms raised against previous forms of frequentism. In this theory, probabilities arise within a physical theory from statistical (...)
    3 citations
  13.
    Finite frequentism explains quantum probability. Simon Saunders - unknown
    I show that frequentism, as an explanation of probability in classical statistical mechanics, can be extended in a natural way to a decoherent quantum history space, the analogue of a classical phase space. The result is a form of finite frequentism, in which Gibbs’ concept of an infinite ensemble of gases is replaced by the quantum state expressed as a superposition of a finite number of decohering microstates. It is a form of finite and actual frequentism (as opposed to hypothetical (...)
  14. Why Frequentists and Bayesians Need Each Other. Jon Williamson - 2013 - Erkenntnis 78 (2):293-318.
    The orthodox view in statistics has it that frequentism and Bayesianism are diametrically opposed—two totally incompatible takes on the problem of statistical inference. This paper argues to the contrary that the two approaches are complementary and need to mesh if probabilistic reasoning is to be carried out correctly.
    10 citations
  15.
    A frequentist interpretation of probability for model-based inductive inference. Aris Spanos - 2013 - Synthese 190 (9):1555-1585.
    The main objective of the paper is to propose a frequentist interpretation of probability in the context of model-based induction, anchored on the Strong Law of Large Numbers (SLLN) and justifiable on empirical grounds. It is argued that the prevailing views in philosophy of science concerning induction and the frequentist interpretation of probability are unduly influenced by enumerative induction, and the von Mises rendering, both of which are at odds with frequentist model-based induction that dominates current practice. (...)
    3 citations
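The Strong Law of Large Numbers anchoring mentioned in the abstract above says that relative frequencies converge almost surely to the underlying probability. The quick simulation below is my own illustration of that convergence, not an example from the paper:

```python
import random

def relative_frequency(p: float, n: int, seed: int = 42) -> float:
    """Observed frequency of success in n Bernoulli(p) trials."""
    rng = random.Random(seed)
    return sum(rng.random() < p for _ in range(n)) / n

# Relative frequency drifts toward the true p = 0.3 as n grows.
for n in (100, 10_000, 1_000_000):
    print(n, relative_frequency(0.3, n))
```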
  16. Statistical Inference and the Replication Crisis. Lincoln J. Colling & Dénes Szűcs - 2018 - Review of Philosophy and Psychology 12 (1):121-147.
    The replication crisis has prompted many to call for statistical reform within the psychological sciences. Here we examine issues within Frequentist statistics that may have led to the replication crisis, and we examine the alternative—Bayesian statistics—that many have suggested as a replacement. The Frequentist approach and the Bayesian approach offer radically different perspectives on evidence and inference with the Frequentist approach prioritising error control and the Bayesian approach offering a formal method for quantifying the relative (...)
    1 citation
  17. A Frequentist Solution to Lindley & Phillips’ Stopping Rule Problem in Ecological Realm. Adam P. Kubiak - 2014 - Zagadnienia Naukoznawstwa 50 (200):135-145.
    In this paper I provide a frequentist philosophical-methodological solution for the stopping rule problem presented by Lindley & Phillips in 1976, which is settled in the ecological realm of testing koalas’ sex ratio. I deliver criteria for discerning a stopping rule, an evidence and a model that are epistemically more appropriate for testing the hypothesis of the case studied, by appealing to physical notion of probability and by analyzing the content of possible formulations of evidence, assumptions of models and (...)
    1 citation
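The Lindley & Phillips stopping rule problem addressed above turns on the fact that frequentist p-values depend on the sampling plan. With 9 successes and 3 failures under H0: p = 1/2, a fixed-n binomial design and an inverse design (sample until the third failure) give different p-values. The figures below follow the standard textbook rendering of the example, not the paper's own koala data:

```python
from math import comb

def p_binomial(s: int, n: int, p0: float = 0.5) -> float:
    """Fixed-n design: P(at least s successes in n trials) under H0."""
    return sum(comb(n, k) * p0**k * (1 - p0)**(n - k)
               for k in range(s, n + 1))

def p_neg_binomial(s: int, r: int = 3, p0: float = 0.5) -> float:
    """Inverse design: sample until the r-th failure; p-value is
    P(at least s successes before the r-th failure) under H0."""
    # Complement: fewer than s successes before the r-th failure.
    tail = sum(comb(k + r - 1, k) * p0**k * (1 - p0)**r
               for k in range(s))
    return 1 - tail

print(round(p_binomial(9, 12), 4))   # → 0.073
print(round(p_neg_binomial(9), 4))   # → 0.0327
```

Same data, different stopping rules, different p-values: at the 5% level one design rejects H0 and the other does not, which is exactly the feature Bayesian critics target and the paper defends.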
  18.
    Is frequentist testing vulnerable to the base-rate fallacy? Aris Spanos - 2010 - Philosophy of Science 77 (4):565-583.
    This article calls into question the charge that frequentist testing is susceptible to the base-rate fallacy. It is argued that the apparent similarity between examples like the Harvard Medical School test and frequentist testing is highly misleading. A closer scrutiny reveals that such examples have none of the basic features of a proper frequentist test, such as legitimate data, hypotheses, test statistics, and sampling distributions. Indeed, the relevant error probabilities are replaced with the false positive/negative rates (...)
    8 citations
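The Harvard Medical School example mentioned in the abstract above contrasts a test's error rates with the posterior probability of disease. The usual numbers (prevalence 1/1000, false-positive rate 5%, sensitivity taken as 1) show how far apart the two quantities can be; the code is a plain Bayes' theorem computation:

```python
def posterior_disease_given_positive(prevalence: float,
                                     sensitivity: float,
                                     false_positive_rate: float) -> float:
    """Bayes' theorem: P(disease | positive test)."""
    p_positive = (sensitivity * prevalence
                  + false_positive_rate * (1 - prevalence))
    return sensitivity * prevalence / p_positive

# Standard Harvard Medical School numbers.
post = posterior_disease_given_positive(0.001, 1.0, 0.05)
print(round(post, 3))  # → 0.02
```

A positive result leaves only about a 2% chance of disease despite the 5% error rate; Spanos's point is that this posterior calculation is not itself a frequentist test of a statistical hypothesis, so the example does not convict frequentist testing of the fallacy.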
  19.
    Statistics-based research – a pig in a poke? James Penston - 2011 - Journal of Evaluation in Clinical Practice 17 (5):862-867.
  20.
    Prior Information in Frequentist Research Designs: The Case of Neyman’s Sampling Theory. Adam P. Kubiak & Paweł Kawalec - 2022 - Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 53 (4):381-402.
    We analyse the issue of using prior information in frequentist statistical inference. For that purpose, we scrutinise different kinds of sampling designs in Jerzy Neyman’s theory to reveal a variety of ways to explicitly and objectively engage with prior information. Further, we turn to the debate on sampling paradigms (design-based vs. model-based approaches) to argue that Neyman’s theory supports an argument for the intermediate approach in the frequentism vs. Bayesianism debate. We also demonstrate that Neyman’s theory, by allowing non-epistemic (...)
  21.
    Bayesian and frequentist models: legitimate choices for different purposes of clinical research. Zackary Berger - 2010 - Journal of Evaluation in Clinical Practice 16 (6):1045-1047.
  22.
    Mathematical statistics and metastatistical analysis. Andrés Rivadulla - 1991 - Erkenntnis 34 (2):211-236.
    This paper deals with meta-statistical questions concerning frequentist statistics. In Sections 2 to 4 I analyse the dispute between Fisher and Neyman on the so called logic of statistical inference, a polemic that has been concomitant of the development of mathematical statistics. My conclusion is that, whenever mathematical statistics makes it possible to draw inferences, it only uses deductive reasoning. Therefore I reject Fisher's inductive approach to the statistical estimation theory and adhere to Neyman's deductive one. (...)
  23. Statistical Significance Testing in Economics. William Peden & Jan Sprenger - 2021 - In Conrad Heilmann & Julian Reiss (eds.), The Routledge Handbook of the Philosophy of Economics.
    The origins of testing scientific models with statistical techniques go back to 18th century mathematics. However, the modern theory of statistical testing was primarily developed through the work of Sir R.A. Fisher, Jerzy Neyman, and Egon Pearson in the inter-war period. Some of Fisher's papers on testing were published in economics journals (Fisher, 1923, 1935) and exerted a notable influence on the discipline. The development of econometrics and the rise of quantitative economic models in the mid-20th century made statistical significance (...)
  24. Classical versus Bayesian Statistics. Eric Johannesson - 2020 - Philosophy of Science 87 (2):302-318.
    In statistics, there are two main paradigms: classical and Bayesian statistics. The purpose of this article is to investigate the extent to which classicists and Bayesians can agree. My conclusion is that, in certain situations, they cannot. The upshot is that, if we assume that the classicist is not allowed to have a higher degree of belief in a null hypothesis after he has rejected it than before, then he has to either have trivial or incoherent credences to (...)
    2 citations
  25.
    On propensity-frequentist models for stochastic phenomena; with applications to Bell's theorem. Tomasz Placek - unknown
    The paper develops models of statistical experiments that combine propensities with frequencies, the underlying theory being the branching space-times (BST) of Belnap (1992). The models are then applied to analyze Bell's theorem. We prove the so-called Bell-CH inequality via the assumptions of a BST version of Outcome Independence and of (non-probabilistic) No Conspiracy. Notably, neither the condition of probabilistic No Conspiracy nor the condition of Parameter Independence is needed in the proof. As the Bell-CH inequality is most likely experimentally falsified, (...)
    2 citations
  26.
    Statistical Data and Mathematical Propositions. Cory Juhl - 2015 - Pacific Philosophical Quarterly 96 (1):100-115.
    Statistical tests of the primality of some numbers look similar to statistical tests of many nonmathematical, clearly empirical propositions. Yet interpretations of probability prima facie appear to preclude the possibility of statistical tests of mathematical propositions. For example, it is hard to understand how the statement that n is prime could have a frequentist probability other than 0 or 1. On the other hand, subjectivist approaches appear to be saddled with ‘coherence’ constraints on rational probabilities that require rational agents (...)
  27. Error and inference: an outsider stand on a frequentist philosophy. Christian P. Robert - 2013 - Theory and Decision 74 (3):447-461.
    This paper is an extended review of the book Error and Inference, edited by Deborah Mayo and Aris Spanos, about their frequentist and philosophical perspective on hypothesis testing and on the criticisms of alternatives like the Bayesian approach.
  28.
    Severity and Trustworthy Evidence: Foundational Problems versus Misuses of Frequentist Testing. Aris Spanos - 2022 - Philosophy of Science 89 (2):378-397.
    For model-based frequentist statistics, based on a parametric statistical model ${\cal M}_\theta$, the trustworthiness of the ensuing evidence depends crucially on the validity of the probabilistic assumptions comprising ${\cal M}_\theta$, the optimality of the inference procedures employed, and the adequateness of the sample size to learn from data by securing –. It is argued that the criticism of the postdata severity evaluation of testing results based on a small n by Rochefort-Maranda is meritless because it conflates (...)
  29.
    The Role of Randomization in Bayesian and Frequentist Design of Clinical Trial. Paola Berchialla, Dario Gregori & Ileana Baldi - 2019 - Topoi 38 (2):469-475.
    A key role in inference is played by randomization, which has been extensively used in clinical trials designs. Randomization is primarily intended to prevent the source of bias in treatment allocation by producing comparable groups. In the frequentist framework of inference, randomization allows also for the use of probability theory to express the likelihood of chance as a source for the difference of end outcome. In the Bayesian framework, its role is more nuanced. The Bayesian analysis of clinical trials (...)
    1 citation
  30. Cognitive Constructivism, Eigen-Solutions, and Sharp Statistical Hypotheses. Julio Michael Stern - 2007 - Cybernetics and Human Knowing 14 (1):9-36.
    In this paper epistemological, ontological and sociological questions concerning the statistical significance of sharp hypotheses in scientific research are investigated within the framework provided by Cognitive Constructivism and the FBST (Full Bayesian Significance Test). The constructivist framework is contrasted with the traditional epistemological settings for orthodox Bayesian and frequentist statistics provided by Decision Theory and Falsificationism.
    16 citations
  31.
    The Statistical Philosophy of High Energy Physics: Pragmatism. Kent Staley - unknown
    The recent discovery of a Higgs boson prompted increased attention of statisticians and philosophers of science to the statistical methodology of High Energy Physics. Amidst long-standing debates within the field, HEP has adopted a mixed statistical methodology drawing upon both frequentist and Bayesian methods, but with standard frequentist techniques such as significance testing and confidence interval estimation playing a primary role. Physicists within HEP typically deny that their methodological decisions are guided by philosophical convictions, but are instead based (...)
  32. Revisiting the two predominant statistical problems: the stopping-rule problem and the catch-all hypothesis problem. Yusaku Ohkubo - 2021 - Annals of the Japan Association for Philosophy of Science 30:23-41.
    The history of statistics is filled with many controversies, in which the prime focus has been the difference in the “interpretation of probability” between Frequentist and Bayesian theories. Many philosophical arguments have been elaborated to examine the problems of both theories based on this dichotomized view of statistics, including the well-known stopping-rule problem and the catch-all hypothesis problem. However, there are also several “hybrid” approaches in theory, practice, and philosophical analysis. This poses many fundamental questions. (...)
  33. Improving Bayesian statistics understanding in the age of Big Data with the bayesvl R package. Quan-Hoang Vuong, Viet-Phuong La, Minh-Hoang Nguyen, Manh-Toan Ho, Manh-Tung Ho & Peter Mantello - 2020 - Software Impacts 4 (1):100016.
    The exponential growth of social data both in volume and complexity has increasingly exposed many of the shortcomings of the conventional frequentist approach to statistics. The scientific community has called for careful usage of the approach and its inference. Meanwhile, the alternative method, Bayesian statistics, still faces considerable barriers toward a more widespread application. The bayesvl R package is an open program, designed for implementing Bayesian modeling and analysis using the Stan language’s no-U-turn (NUTS) sampler. The package (...)
    4 citations
  34. Why do we need to employ Bayesian statistics and how can we employ it in studies of moral education?: With practical guidelines to use JASP for educators and researchers. Hyemin Han - 2018 - Journal of Moral Education 47 (4):519-537.
    In this article, we discuss the benefits of Bayesian statistics and how to utilize them in studies of moral education. To demonstrate concrete examples of the applications of Bayesian statistics to studies of moral education, we reanalyzed two data sets previously collected: one small data set collected from a moral educational intervention experiment, and one big data set from a large-scale Defining Issues Test-2 survey. The results suggest that Bayesian analysis of data sets collected from moral educational studies (...)
    8 citations
  35. Foundational Issues in Statistical Modeling: Statistical Model Specification. Aris Spanos - 2011 - Rationality, Markets and Morals 2:146-178.
    Statistical model specification and validation raise crucial foundational problems whose pertinent resolution holds the key to learning from data by securing the reliability of frequentist inference. The paper questions the judiciousness of several current practices, including the theory-driven approach, and the Akaike-type model selection procedures, arguing that they often lead to unreliable inferences. This is primarily due to the fact that goodness-of-fit/prediction measures and other substantive and pragmatic criteria are of questionable value when the estimated model is statistically misspecified. (...)
     
  36.
    Ethics and statistical methodology in clinical trials. C. R. Palmer - 1993 - Journal of Medical Ethics 19 (4):219-222.
    Statisticians in medicine can disagree on appropriate methodology applicable to the design and analysis of clinical trials. So-called Bayesians and frequentists both claim ethical superiority. This paper, by defining and then linking together various dichotomies, argues there is a place for both statistical camps. The choice between them depends on the phase of clinical trial, disease prevalence and severity, but supremely on the ethics underlying the particular trial. There is always a tension present between physicians primarily obligated to their (...)
    4 citations
  37.
    A comparison of a Bayesian vs. a frequentist method for profiling hospital performance. Peter C. Austin, C. David Naylor & Jack V. Tu - 2001 - Journal of Evaluation in Clinical Practice 7 (1):35-45.
  38.
    Bayeswatch: an overview of Bayesian statistics. Peter C. Austin, Lawrence J. Brunner & Janet E. Hux - 2002 - Journal of Evaluation in Clinical Practice 8 (2):277-286.
    Increasingly, clinical research is evaluated on the quality of its statistical analysis. Traditionally, statistical analyses in clinical research have been carried out from a ‘frequentist’ perspective. The presence of an alternative paradigm – the Bayesian paradigm – has been relatively unknown in clinical research until recently. There is currently a growing interest in the use of Bayesian statistics in health care research. This is due both to a growing realization of the limitations of frequentist methods and to (...)
    2 citations
  39.
    The ethics of randomised controlled trials: A matter of statistical belief? [REVIEW] Jane L. Hutton - 1996 - Health Care Analysis 4 (2):95-102.
    This paper outlines the approaches of two apparently competing schools of statistics. The criticisms made by supporters of Bayesian statistics about conventional Frequentist statistics are explained, and the Bayesian claim that their method enables research into new treatments without the need for clinical trials is examined in detail. Several further important issues are considered, including: the use of historical controls and data routinely collected on patients; balance in randomised trials; the possibility of giving information to patients; (...)
    2 citations
  40. Environmental genotoxicity evaluation: Bayesian approach for a mixture statistical model. Julio Michael Stern, Angela Maria de Souza Bueno, Carlos Alberto de Braganca Pereira & Maria Nazareth Rabello-Gay - 2002 - Stochastic Environmental Research and Risk Assessment 16:267–278.
    The data analyzed in this paper are part of the results described in Bueno et al. (2000). Three cytogenetics endpoints were analyzed in three populations of a species of wild rodent – Akodon montensis – living in an industrial, an agricultural, and a preservation area at the Itajaí Valley, State of Santa Catarina, Brazil. The polychromatic/normochromatic ratio, the mitotic index, and the frequency of micronucleated polychromatic erythrocites were used in an attempt to establish a genotoxic profile of each area. It (...)
  41. Can the Behavioral Sciences Self-correct? A Social Epistemic Study. Felipe Romero - 2016 - Studies in History and Philosophy of Science Part A 60 (C):55-69.
    Advocates of the self-corrective thesis argue that scientific method will refute false theories and find closer approximations to the truth in the long run. I discuss a contemporary interpretation of this thesis in terms of frequentist statistics in the context of the behavioral sciences. First, I identify experimental replications and systematic aggregation of evidence (meta-analysis) as the self-corrective mechanism. Then, I present a computer simulation study of scientific communities that implement this mechanism to argue that frequentist (...) may converge upon a correct estimate or not depending on the social structure of the community that uses it. Based on this study, I argue that methodological explanations of the “replicability crisis” in psychology are limited and propose an alternative explanation in terms of biases. Finally, I conclude suggesting that scientific self-correction should be understood as an interaction effect between inference methods and social structures.
    29 citations
  42. On the correct interpretation of p values and the importance of random variables. Guillaume Rochefort-Maranda - 2016 - Synthese 193 (6):1777-1793.
    The p value is the probability under the null hypothesis of obtaining an experimental result that is at least as extreme as the one that we have actually obtained. That probability plays a crucial role in frequentist statistical inferences. But if we take the word ‘extreme’ to mean ‘improbable’, then we can show that this type of inference can be very problematic. In this paper, I argue that it is a mistake to make such an interpretation. Under minimal assumptions (...)
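The definition of the p-value quoted in the abstract above can be made concrete for a two-sided test on a standard normal test statistic. Here 'at least as extreme' means farther from zero in either direction, which is precisely the reading at issue when 'extreme' is misread as 'improbable'. This is a generic textbook case, not the paper's own example:

```python
from math import erf, sqrt

def two_sided_p(z: float) -> float:
    """P(|Z| >= |z|) for Z ~ N(0, 1): 'at least as extreme' means
    farther from 0 in either direction, not 'less probable'."""
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))  # standard normal CDF
    return 2 * (1 - phi)

print(round(two_sided_p(1.96), 3))  # → 0.05
```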
  43.
    The evaluation of measurement uncertainties and its epistemological ramifications. Nadine de Courtenay & Fabien Grégis - 2017 - Studies in History and Philosophy of Science Part A 65:21-32.
    The way metrologists conceive of measurement has undergone a major shift in the last two decades. This shift can in great part be traced to a change in the statistical methods used to deal with the expression of measurement results, and, more particularly, with the calculation of measurement uncertainties. Indeed, as we show, the incapacity of the frequentist approach to the calculus of uncertainty to deal with systematic errors has prompted the replacement of the customary frequentist methods by (...)
    7 citations
  44.
    Why I Am Not a Likelihoodist. Greg Gandenberger - 2016 - Philosophers' Imprint 16.
    Frequentist statistical methods continue to predominate in many areas of science despite prominent calls for "statistical reform." They do so in part because their main rivals, Bayesian methods, appeal to prior probability distributions that arguably lack an objective justification in typical cases. Some methodologists find a third approach called likelihoodism attractive because it avoids important objections to frequentism without appealing to prior probabilities. However, likelihoodist methods do not provide guidance for belief or action, but only assessments of data as (...)
    1 citation
  45. Constructive Verification, Empirical Induction, and Falibilist Deduction: A Threefold Contrast. Julio Michael Stern - 2011 - Information 2 (4):635-650.
    This article explores some open questions related to the problem of verification of theories in the context of empirical sciences by contrasting three epistemological frameworks. Each of these epistemological frameworks is based on a corresponding central metaphor, namely: (a) Neo-empiricism and the gambling metaphor; (b) Popperian falsificationism and the scientific tribunal metaphor; (c) Cognitive constructivism and the object as eigen-solution metaphor. Each of one of these epistemological frameworks has also historically co-evolved with a certain statistical theory and method for testing (...)
    11 citations
  46.
    Logic and Combinatorics: Proceedings of the AMS-IMS-SIAM Joint Summer Research Conference Held August 4-10, 1985.Stephen G. Simpson, American Mathematical Society, Institute of Mathematical Statistics & Society for Industrial and Applied Mathematics - 1987 - American Mathematical Soc..
    In recent years, several remarkable results have shown that certain theorems of finite combinatorics are unprovable in certain logical systems. These developments have been instrumental in stimulating research in both areas, with the interface between logic and combinatorics being especially important because of its relation to crucial issues in the foundations of mathematics which were raised by the work of Kurt Gödel. Because of the diversity of the lines of research that have begun to shed light on these issues, there (...)
  47.
    The likelihood principle and the reliability of experiments.Andrew Backe - 1999 - Philosophy of Science 66 (3):361.
    The likelihood principle of Bayesian statistics implies that information about the stopping rule used to collect evidence does not enter into the statistical analysis. This consequence confers an apparent advantage on Bayesian statistics over frequentist statistics. In the present paper, I argue that information about the stopping rule is nevertheless of value for an assessment of the reliability of the experiment, which is a pre-experimental measure of how well a contemplated procedure is expected to discriminate between (...)
    6 citations
  48. Functional Thought Experiments.Denny Borsboom, Gideon J. Mellenbergh & Jaap Van Heerden - 2002 - Synthese 130 (3):379-387.
    The literature on thought experiments has been mainly concerned with thought experiments that are directed at a theory, be it in a constructive or a destructive manner. This has led some philosophers to argue that all thought experiments can be formulated as arguments. The aim of this paper is to draw attention to a type of thought experiment that is not directed at a theory, but fulfills a specific function within a theory. Such thought experiments are referred to as functional thought experiments, and they are routinely used in (...)
    5 citations
  49.
    A New Proof of the Likelihood Principle.Greg Gandenberger - 2015 - British Journal for the Philosophy of Science 66 (3):475-503.
    I present a new proof of the likelihood principle that avoids two responses to a well-known proof due to Birnbaum ([1962]). I also respond to arguments that Birnbaum’s proof is fallacious, which if correct could be adapted to this new proof. On the other hand, I urge caution in interpreting proofs of the likelihood principle as arguments against the use of frequentist statistical methods. 1 Introduction 2 The New Proof 3 How the New Proof Addresses Proposals to Restrict Birnbaum’s Premises 4 A (...)
    11 citations
  50.
    The quantitative-qualitative distinction and the Null hypothesis significance testing procedure.Nimal Ratnesar & Jim Mackenzie - 2006 - Journal of Philosophy of Education 40 (4):501–509.
    Conventional discussions of research methodology contrast two approaches, the quantitative and the qualitative, presented as collectively exhaustive. But if qualitative is taken as the understanding of lifeworlds, the two approaches between them cover only a tiny fraction of research methodologies; and the quantitative, taken as the routine application to controlled experiments of frequentist statistics by way of the Null Hypothesis Significance Testing Procedure, is seriously flawed. It is contrary to the advice both of Fisher and of Neyman and (...)
    3 citations
1 — 50 / 998