Results for 'statistical criterion'

986 found
  1. Updating Probability: Tracking Statistics as Criterion. Bas C. van Fraassen & Joseph Y. Halpern - 2016 - British Journal for the Philosophy of Science: axv027.
    For changing opinion, represented by an assignment of probabilities to propositions, the criterion proposed is motivated by the requirement that the assignment should have, and maintain, the possibility of matching in some appropriate sense statistical proportions in a population. This ‘tracking’ criterion implies limitations on policies for updating in response to a wide range of types of new input. Satisfying the criterion is shown equivalent to the principle that the prior must be a convex combination (...)
    4 citations
  2. Updating Probability: Tracking Statistics as Criterion. Bas C. van Fraassen & Joseph Y. Halpern - 2017 - British Journal for the Philosophy of Science 68 (3): 725-743.
    For changing opinion, represented by an assignment of probabilities to propositions, the criterion proposed is motivated by the requirement that the assignment should have, and maintain, the possibility of matching in some appropriate sense statistical proportions in a population. This ‘tracking’ criterion implies limitations on policies for updating in response to a wide range of types of new input. Satisfying the criterion is shown equivalent to the principle that the prior must be a convex combination of (...)
    3 citations
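Both abstracts above are truncated just at the key condition, so only the general notion is recoverable here. As a reminder of what that notion is (a generic illustration of a convex combination of probability assignments, not the paper's specific theorem):

```latex
P \;=\; \sum_{i=1}^{n} \lambda_i \, Q_i ,
\qquad \lambda_i \ge 0, \qquad \sum_{i=1}^{n} \lambda_i = 1 ,
```

where each Q_i is itself a probability assignment; any such mixture is again a probability assignment.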
  3. Statistical decisions under ambiguity. Jörg Stoye - 2011 - Theory and Decision 70 (2): 129-148.
    This article provides unified axiomatic foundations for the most common optimality criteria in statistical decision theory. It considers a decision maker who faces a number of possible models of the world (possibly corresponding to true parameter values). Every model generates objective probabilities, and von Neumann–Morgenstern expected utility applies where these obtain, but no probabilities of models are given. This is the classic problem captured by Wald’s (Statistical decision functions, 1950) device of risk functions. In an Anscombe–Aumann environment, I (...)
    4 citations
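Wald's device of risk functions, mentioned in the abstract, evaluates a decision rule δ by its expected loss at each candidate parameter value; the standard definition (general background, not specific to Stoye's axiomatization) is:

```latex
R(\theta, \delta) \;=\; \mathbb{E}_{\theta}\!\left[ L(\theta, \delta(X)) \right].
```

Ambiguity enters because no probability distribution over θ itself is given, so the optimality criteria the paper axiomatizes differ in how they aggregate R(θ, δ) across models.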
  4. On statistical criteria of algorithmic fairness. Brian Hedden - 2021 - Philosophy and Public Affairs 49 (2): 209-231.
    Predictive algorithms are playing an increasingly prominent role in society, being used to predict recidivism, loan repayment, job performance, and so on. With this increasing influence has come an increasing concern with the ways in which they might be unfair or biased against individuals in virtue of their race, gender, or, more generally, their group membership. Many purported criteria of algorithmic fairness concern statistical relationships between the algorithm’s predictions and the actual outcomes, for instance requiring that the rate of (...)
    35 citations
  5. ABET Criterion 3.f: How Much Curriculum Content is Enough? B. E. Barry & M. W. Ohland - 2012 - Science and Engineering Ethics 18 (2): 369-392.
    Even after multiple cycles of ABET accreditation, many engineering programs are unsure of how much curriculum content is needed to meet the requirements of ABET’s Criterion 3.f (an understanding of professional and ethical responsibility). This study represents the first scholarly attempt to assess the impact of curriculum reform following the introduction of ABET Criterion 3.f. This study sought to determine how much professional and ethical responsibility curriculum content was used between 1995 and 2005, as well as how, when, (...)
    4 citations
  6. Intermediate Role of the Criterion of Focus on the Students Benefiting in the Relationship between Adopting the Criterion of Partnership and Resources and Achieving Community Satisfaction in the Palestinian Universities. Suliman A. El Talla, Ahmed M. A. FarajAllah, Samy S. Abu-Naser & Mazen J. Al Shobaki - 2019 - International Journal of Academic Multidisciplinary Research (IJAMR) 2 (12): 47-59.
    The study aimed at identifying the intermediate role of the criterion of emphasis on students and beneficiaries in the relationship between adopting the criterion of partnership and resources and achieving the satisfaction of the society. The study used the analytical descriptive method. The study was conducted on university leadership in Al-Azhar, Islamic and Al-Aqsa Universities. The sample of the study consisted of (200) individuals, 182 of whom responded, and the questionnaire was used in collecting the data. The study (...)
  7. Statistical Explanations. James H. Fetzer - 1972 - PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association 1972: 337-347.
    The purpose of this paper is to provide a systematic appraisal of the covering law and statistical relevance theories of statistical explanation advanced by Carl G. Hempel and by Wesley C. Salmon, respectively. The analysis is intended to show that the difference between these accounts is in principle analogous to the distinction between truth and confirmation, where Hempel's analysis applies to what is taken to be the case and Salmon's analysis applies to what is the case. Specifically, it is (...)
    3 citations
  8. Thoughts on Jun Otsuka’s Thinking about Statistics – the Philosophical Foundations. Elliott Sober - 2024 - Asian Journal of Philosophy 3 (1): 1-11.
    Jun Otsuka’s excellent book, Thinking about Statistics - the Philosophical Foundations (Otsuka 2023) is mostly organized around the idea that different statistical approaches can be illuminated by linking them to different ideas in general epistemology. Otsuka connects Bayesianism to internalism and foundationalism, frequentism to reliabilism, and the Akaike Information Criterion in model selection theory to instrumentalism. This useful mapping doesn’t cover all the interesting ideas he presents. His discussions of causal inference and machine learning are philosophically insightful, as (...)
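For reference, the Akaike Information Criterion that Otsuka links to instrumentalism has the standard definition:

```latex
\mathrm{AIC} \;=\; 2k - 2 \ln \hat{L},
```

where k is the number of estimated parameters and L̂ the maximized likelihood of the model; among candidates fitted to the same data, the model with the lowest AIC is preferred as an estimate of predictive accuracy.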
  9. Using Statistical Model to Study the Daily Closing Price Index in the Kingdom of Saudi Arabia. Hassan M. Aljohani & Azhari A. Elhag - 2021 - Complexity 2021: 1-5.
    Classification in statistics is usually used to solve the problem of identifying to which set of categories, such as subpopulations, a new observation belongs, based on a training set of data containing information whose category membership is known. The article aims to use the Gaussian Mixture Model to model the daily closing price index over the period of 1/1/2013 to 16/8/2020 in the Kingdom of Saudi Arabia. The daily closing price index over the period declined, which might be the effect of (...)
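The snippet does not reproduce the authors' estimation details. As a minimal sketch of the technique the abstract names, fitting a two-component Gaussian Mixture Model to a one-dimensional price series (the synthetic data and component count below are illustrative assumptions, not values from the paper):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Stand-in for a daily closing price index: two synthetic "regimes".
prices = np.concatenate([rng.normal(7000, 150, 500),
                         rng.normal(8200, 200, 300)])

# scikit-learn expects a 2-D array of shape (n_samples, n_features).
X = prices.reshape(-1, 1)
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

print("weights:", gmm.weights_.round(3))    # mixing proportions
print("means:  ", gmm.means_.ravel().round(1))
labels = gmm.predict(X)                     # component label per observation
```

Each component contributes a Gaussian density; the fitted weights and means summarize the clusters the mixture finds in the series.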
  10. Inventing Industrial Statistics. Michael Zakim - 2010 - Theoretical Inquiries in Law 11 (1): 283-318.
    This Article explores the success of the new science of statistics in establishing order within the pandemonium of industrial revolution in the nineteenth century. This success was based on the fact that the expanding circulation of both men and goods that characterized capitalism constituted the ontological foundation of statistics as well. In this respect, one can say that statistics turned variety and multiplicity into the basis of system, if not of uniformity. The study focuses on the 1850 federal census of (...)
    1 citation
  11. Nonparametric statistics in multicriteria analysis. Antonino Scarelli & Lorenzo Venzi - 1997 - Theory and Decision 43 (1): 89-105.
    The paper deals with a method of hierarchization of alternatives in a multicriteria environment by means of statistical nonparametric procedures. For each criterion, alternatives are placed on an ordinal scale. A procedure similar to ANOVA is then applied to the data. The differences in the average ranks of each action are used to build the hierarchical algorithm. The concept of outranking, in a probabilistic sense, is reached in this way. Thereafter, we arrive at the concept of (...)
  12. Evaluating the Relative Importance of Wordhood Cues Using Statistical Learning. Elizabeth Pankratz, Simon Kirby & Jennifer Culbertson - 2024 - Cognitive Science 48 (3): e13429.
    Identifying wordlike units in language is typically done by applying a battery of criteria, though how to weight these criteria with respect to one another is currently unknown. We address this question by investigating whether certain criteria are also used as cues for learning an artificial language—if they are, then perhaps they can be relied on more as trustworthy top‐down diagnostics. The two criteria for grammatical wordhood that we consider are a unit's free mobility and its internal immutability. These criteria (...)
  13. A Proposed Hybrid Effect Size Plus p-Value Criterion: Empirical Evidence Supporting its Use. William M. Goodman - 2019 - The American Statistician 73 (Sup 1): 168-185.
    DOI: 10.1080/00031305.2018.1564697
    When the editors of Basic and Applied Social Psychology effectively banned the use of null hypothesis significance testing (NHST) from articles published in their journal, it set off a firestorm of discussions both supporting the decision and defending the utility of NHST in scientific research. At the heart of NHST is the p-value, which is the probability of obtaining an effect equal to or more extreme than the one observed in the sample data, given the null hypothesis and (...)
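The verbal definition in the abstract corresponds to the standard formula (one-sided case shown for concreteness):

```latex
p \;=\; \Pr\left( T \ge t_{\mathrm{obs}} \mid H_0 \right),
```

the probability, computed under the null hypothesis, of a test statistic T at least as extreme as the value actually observed.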
  14. Institutionalizing the Statistics of Nationality in Prussia in the 19th Century. Morgane Labbé - 2007 - Centaurus 49 (4): 289-306.
    By the end of the 19th century, the Prussian censuses regularly registered the nationality of the population according to a standard criterion: the mother tongue. The background to this institutionalization could be mapped out in terms of the early creation of the statistical office, the reform of the bureaucracy, and the political challenge following the annexation of the western part of the former Polish state. However, this paper gives a different account that goes beyond a state-level history (...)
    1 citation
  15. Theory Change and Bayesian Statistical Inference. Jan-Willem Romeijn - 2005 - Philosophy of Science 72 (5): 1174-1186.
    This paper addresses the problem that Bayesian statistical inference cannot accommodate theory change, and proposes a framework for dealing with such changes. It first presents a scheme for generating predictions from observations by means of hypotheses. An example shows how the hypotheses represent the theoretical structure underlying the scheme. This is followed by an example of a change of hypotheses. The paper then presents a general framework for hypotheses change, and proposes the minimization of the distance between hypotheses as (...)
    6 citations
  16. Updating Statistical Measures of Causal Strength. Hrishikesh Vinod - 2020 - Science and Philosophy 8 (1): 3-20.
    We address Northcott’s criticism of Pearson’s correlation coefficient ‘r’ in measuring causal strength by replacing Pearson’s linear regressions with nonparametric nonlinear kernel regressions. Although a new proof shows that Suppes’ intuitive causality condition is neither necessary nor sufficient, we resurrect Suppes’ probabilistic causality theory by using nonlinear tools. We use asymmetric generalized partial correlation coefficients from Vinod [2014] as our third criterion in addition to two more criteria. We aggregate the three criteria into one unanimity index, UI in [-100; 100], (...)
  17. Harm should not be a necessary criterion for mental disorder: some reflections on the DSM-5 definition of mental disorder. Maria Cristina Amoretti & Elisabetta Lalumera - 2019 - Theoretical Medicine and Bioethics 40 (4): 321-337.
    The general definition of mental disorder stated in the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders seems to identify a mental disorder with a harmful dysfunction. However, the presence of distress or disability, which may be bracketed as the presence of harm, is taken to be merely usual, and thus not a necessary requirement: a mental disorder can be diagnosed as such even if there is no harm at all. In this paper, we focus on (...)
    7 citations
  18. Measurement and statistics: Towards a clarification of the theory of "permissible statistics". Richard E. Robinson - 1965 - Philosophy of Science 32 (3/4): 229-243.
    Much of the criticism of Stevens's criterion for permissible statistics as applied to measurement data results from a lack of clarity in Stevens's position. In this paper set-theoretical notions have been used to clarify that position. We define a sig-function as a function defined on numerical assignments. If A and R are empirical and numerical relational systems, respectively, then a sig-function F is constant on A with respect to R if, and only if, the value of F is the (...)
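Stevens's criterion, as glossed here, counts a statistic as permissible for a scale type only when the conclusions it supports are invariant under the scale's admissible transformations. A small sketch of the familiar motivating case (the ordinal codes below are illustrative, not from the paper): an order-preserving recoding can reverse a comparison of means, but not a comparison of medians.

```python
# Two groups scored on an ordinal scale (illustrative codes).
a = [1, 1, 5]
b = [2, 3, 3]

# An order-preserving recoding, admissible for ordinal scales.
recode = {1: 1, 2: 2, 3: 3, 5: 100}

mean = lambda xs: sum(xs) / len(xs)
median = lambda xs: sorted(xs)[len(xs) // 2]

print(mean(a) > mean(b))                  # False: 7/3 < 8/3
print(mean([recode[x] for x in a])
      > mean([recode[x] for x in b]))     # True: the comparison flips
print(median(a) > median(b),
      median([recode[x] for x in a])
      > median([recode[x] for x in b]))   # False False: medians agree
```

On Stevens's view, the flip shows that the mean is not a permissible statistic for merely ordinal data, while the median is.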
  19. Curve Fitting, the Reliability of Inductive Inference, and the Error‐Statistical Approach. Aris Spanos - 2007 - Philosophy of Science 74 (5): 1046-1066.
    The main aim of this paper is to revisit the curve fitting problem using the reliability of inductive inference as a primary criterion for the ‘fittest' curve. Viewed from this perspective, it is argued that a crucial concern with the current framework for addressing the curve fitting problem is, on the one hand, the undue influence of the mathematical approximation perspective, and on the other, the insufficient attention paid to the statistical modeling aspects of the problem. Using goodness-of-fit (...)
    8 citations
  20. A Causal Safety Criterion for Knowledge. Jonathan Vandenburgh - forthcoming - Erkenntnis: 1-21.
    Safety purports to explain why cases of accidentally true belief are not knowledge, addressing Gettier cases and cases of belief based on statistical evidence. However, problems arise for using safety as a condition on knowledge: safety is not necessary for knowledge and cannot always explain the Gettier cases and cases of statistical evidence it is meant to address. In this paper, I argue for a new modal condition designed to capture the non-accidental relationship between facts and evidence required (...)
  21. The scientists' criterion of true observation. D. G. Ellson - 1963 - Philosophy of Science 30 (1): 41-52.
    A theory of true observation is developed as a generalization of the method of inter-observer agreement that scientists use to determine the objectivity and reliability of observations. A true observation is defined as a statement included in a set of statements in which there is statistical dependence and perfect agreement between the statements made by a universe of experimentally independent persons. Meaningfulness--the existence of an objective referent--for each form of statement included in the set is inferred from statistical (...)
    1 citation
  22. Prediction, explanation, and testability as criteria for judging statistical theories. Brown Grier - 1975 - Philosophy of Science 42 (4): 373-383.
    For the case of statistical theories, the criteria of explanation, prediction, and testability can all be viewed as particular instances of a more general evaluation scheme. Using the ideas of a gain matrix and expected gain from statistical decision theory, these three criteria can be compared in terms of the elements in their associated gain matrices. This analysis leads to (1) further understanding of the interrelationship between the current criteria, (2) the proposal of an ordering for the criteria, (...)
    1 citation
  23. On the classical approximation in the quantum statistics of equivalent particles. Armand Siegel - 1970 - Foundations of Physics 1 (2): 145-171.
    It is shown here that the microcanonical ensemble for a system of noninteracting bosons and fermions contains a subensemble of state vectors for which all particles of the system are distinguishable. This “IQC” (inner quantum-classical) subensemble is therefore fully classical, except for a rather extreme quantization of particle momentum and position, which appears as the natural price that must be paid for distinguishability. The contribution of the IQC subensemble to the entropy is readily calculated, and the criterion for this (...)
    1 citation
  24. On After-Trial Criticisms of Neyman-Pearson Theory of Statistics. Deborah G. Mayo - 1982 - PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association 1982: 145-158.
    Despite its widespread use in science, the Neyman-Pearson Theory of Statistics (NPT) has been rejected as inadequate by most philosophers of induction and statistics. They base their rejection largely upon what the author refers to as after-trial criticisms of NPT. Such criticisms attempt to show that NPT fails to provide an adequate analysis of specific inferences after the trial is made, and the data is known. In this paper, the key types of after-trial criticisms are considered and it is argued (...)
    6 citations
  25. Speed-Accuracy Tradeoff in Reaction Time: Effect of Discrete Criterion Times. Robert G. Pachella & Richard W. Pew - 1968 - Journal of Experimental Psychology 76 (1p1): 19.
  26. Generics. Bernhard Nickel - 1997 - In Bob Hale, Crispin Wright & Alexander Miller (eds.), A Companion to the Philosophy of Language. Chichester, West Sussex, UK: Wiley-Blackwell. pp. 437–462.
    Generics exhibit genericity, and though a theory of generics is closely connected to a theory of genericity, the two are distinct. They raise a host of interesting linguistic and philosophical issues, both separately and in their interaction. This chapter begins with a fairly manifest phenomenon one can observe in natural language. There is a range of sentences that, speaking intuitively, one can use to talk about kinds. It argues that there's no simple statistical criterion that systematically captures the (...)
    3 citations
  27. Is default logic a reinvention of inductive-statistical reasoning? Yao-Hua Tan - 1997 - Synthese 110 (3): 357-379.
    Currently there is hardly any connection between philosophy of science and Artificial Intelligence research. We argue that both fields can benefit from each other. As an example of this mutual benefit we discuss the relation between Inductive-Statistical Reasoning and Default Logic. One of the main topics in AI research is the study of common-sense reasoning with incomplete information. Default logic is especially developed to formalise this type of reasoning. We show that there is a striking resemblance between inductive-statistical (...)
  28. Environmental Factors Contributing to Wrongdoing in Medicine: A Criterion-Based Review of Studies and Cases. James M. DuBois, Emily E. Anderson, Kelly Carroll, Tyler Gibb, Elena Kraus, Timothy Rubbelke & Meghan Vasher - 2012 - Ethics and Behavior 22 (3): 163-188.
    In this article we describe our approach to understanding wrongdoing in medical research and practice, which involves the statistical analysis of coded data from a large set of published cases. We focus on understanding the environmental factors that predict the kind and the severity of wrongdoing in medicine. Through review of empirical and theoretical literature, consultation with experts, the application of criminological theory, and ongoing analysis of our first 60 cases, we hypothesize that 10 contextual features of the medical (...)
    5 citations
  29. Equalized Odds is a Requirement of Algorithmic Fairness. David Gray Grant - 2023 - Synthese 201 (3).
    Statistical criteria of fairness are formal measures of how an algorithm performs that aim to help us determine whether an algorithm would be fair to use in decision-making. In this paper, I introduce a new version of the criterion known as “Equalized Odds,” argue that it is a requirement of procedural fairness, and show that it is immune to a number of objections to the standard version.
    2 citations
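The snippet does not spell out Grant's new version; the standard version of Equalized Odds that it builds on requires a binary predictor Ŷ to have equal error rates across groups A, conditional on the true outcome Y:

```latex
\Pr(\hat{Y}=1 \mid Y=y,\, A=a) \;=\; \Pr(\hat{Y}=1 \mid Y=y,\, A=a')
\qquad \text{for } y \in \{0,1\} \text{ and all groups } a, a',
```

that is, equal true positive rates and equal false positive rates across groups.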
  30. Probability in 1919/20: the von Mises-Pólya-Controversy. Reinhard Siegmund-Schultze - 2006 - Archive for History of Exact Sciences 60 (5): 431-515.
    The correspondence between Richard von Mises and George Pólya of 1919/20 contains reflections on two well-known articles by von Mises on the foundations of probability in the Mathematische Zeitschrift of 1919, and one paper from the Physikalische Zeitschrift of 1918. The topics touched on in the correspondence are: the proof of the central limit theorem of probability theory, von Mises' notion of randomness, and a statistical criterion for integer-valuedness of physical data. The investigation will hint at both the (...)
  31. Hypotheses that attribute false beliefs: A two‐part epistemology. William Roche & Elliott Sober - 2020 - Mind and Language 36 (5): 664-682.
    Is there some general reason to expect organisms that have beliefs to have false beliefs? And after you observe that an organism occasionally occupies a given neural state that you think encodes a perceptual belief, how do you evaluate hypotheses about the semantic content that that state has, where some of those hypotheses attribute beliefs that are sometimes false while others attribute beliefs that are always true? To address the first of these questions, we discuss evolution by natural selection and (...)
    2 citations
  32. Decision Making in ‘Random in a Broad Sense’ Environments. V. I. Ivanenko & B. Munier - 2000 - Theory and Decision 49 (2): 127-150.
    It is shown that the uncertainty connected with a ‘random in a broad sense’ (not necessarily stochastic) event always has some ‘statistical regularity’ (SR) in the form of a family of finitely additive probability distributions. The specific principle of guaranteed result in decision making is introduced. It is shown that observing this principle of guaranteed result leads to a unique optimality criterion corresponding to a decision system with a given ‘statistical regularity’.
  33. The robust beauty of improper linear models in decision making. Robyn M. Dawes - 1979 - American Psychologist 34 (7): 571-582.
    Proper linear models are those in which predictor variables are given weights such that the resulting linear composite optimally predicts some criterion of interest; examples of proper linear models are standard regression analysis, discriminant function analysis, and ridge regression analysis. Research summarized in P. Meehl's book on clinical vs statistical prediction and research stimulated in part by that book indicate that when a numerical criterion variable is to be predicted from numerical predictor variables, proper linear models outperform (...)
    76 citations
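A minimal sketch of the comparison Dawes describes, on synthetic data (the setup and numbers below are illustrative assumptions, not taken from the paper): a 'proper' least-squares model against an 'improper' unit-weighted model, each scored by how well it predicts a holdout criterion.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 4

# Standardized predictors, all positively related to the criterion.
X = rng.standard_normal((n, k))
y = X @ np.array([0.5, 0.4, 0.3, 0.2]) + rng.standard_normal(n)

# Fit on the first half, validate on the second.
X_fit, y_fit, X_val, y_val = X[:100], y[:100], X[100:], y[100:]

# Proper linear model: weights estimated by least squares.
w_ols, *_ = np.linalg.lstsq(X_fit, y_fit, rcond=None)
proper = X_val @ w_ols

# Improper linear model: equal unit weights on the predictors.
improper = X_val.sum(axis=1)

corr = lambda u, v: np.corrcoef(u, v)[0, 1]
print("proper   r =", round(corr(proper, y_val), 3))
print("improper r =", round(corr(improper, y_val), 3))
```

With noisy criteria and modest samples, the unit-weighted composite typically predicts the holdout criterion nearly as well as the estimated weights, which is Dawes's point.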
  34. Model Selection, Simplicity, and Scientific Inference. Wayne C. Myrvold & William L. Harper - 2002 - Philosophy of Science 69 (S3): S135-S149.
    The Akaike Information Criterion can be a valuable tool of scientific inference. This statistic, or any other statistical method for that matter, cannot, however, be the whole of scientific methodology. In this paper some of the limitations of Akaikean statistical methods are discussed. It is argued that the full import of empirical evidence is realized only by adopting a richer ideal of empirical success than predictive accuracy, and that the ability of a theory to turn phenomena into (...)
    8 citations
  35. Cosmopolitanism and Competition: Probing the Limits of Egalitarian Justice. David Wiens - 2017 - Economics and Philosophy 33 (1): 91-124.
    This paper develops a novel competition criterion for evaluating institutional schemes. Roughly, this criterion says that one institutional scheme is normatively superior to another to the extent that the former would engender more widespread political competition than the latter. I show that this criterion should be endorsed by both global egalitarians and their statist rivals, as it follows from their common commitment to the moral equality of all persons. I illustrate the normative import of the competition criterion by exploring its potential implications for the scope of egalitarian principles of distributive justice. In particular, I highlight the challenges it raises for global egalitarians' efforts to justify extending the scope of egalitarian justice beyond the state.
    2 citations
  36. The Pauli Exclusion Principle. Can It Be Proved? I. G. Kaplan - 2013 - Foundations of Physics 43 (10): 1233-1251.
    The modern state of studies of the Pauli exclusion principle is discussed. The Pauli exclusion principle can be considered from two viewpoints. On the one hand, it asserts that particles with half-integer spin (fermions) are described by antisymmetric wave functions, and particles with integer spin (bosons) are described by symmetric wave functions. This is the so-called spin-statistics connection. The reasons why the spin-statistics connection exists are still unknown; see the discussion in the text. On the other hand, according to the Pauli exclusion principle, (...)
  37. Model selection, simplicity, and scientific inference. Wayne C. Myrvold & William L. Harper - 2002 - Proceedings of the Philosophy of Science Association 2002 (3): S135-S149.
    The Akaike Information Criterion can be a valuable tool of scientific inference. This statistic, or any other statistical method for that matter, cannot, however, be the whole of scientific methodology. In this paper some of the limitations of Akaikean statistical methods are discussed. It is argued that the full import of empirical evidence is realized only by adopting a richer ideal of empirical success than predictive accuracy, and that the ability of a theory to turn phenomena into (...)
    10 citations
  38. In defense of the Neyman-Pearson theory of confidence intervals. Deborah G. Mayo - 1981 - Philosophy of Science 48 (2): 269-280.
    In Philosophical Problems of Statistical Inference, Seidenfeld argues that the Neyman-Pearson (NP) theory of confidence intervals is inadequate for a theory of inductive inference because, for a given situation, the 'best' NP confidence interval, [CIλ], sometimes yields intervals which are trivial (i.e., tautologous). I argue that (1) Seidenfeld's criticism of trivial intervals is based upon illegitimately interpreting confidence levels as measures of final precision; (2) for the situation which Seidenfeld considers, the 'best' NP confidence interval is not [CIλ] as (...)
    6 citations
  39. Cultural evolution in Vietnam’s early 20th century: a Bayesian networks analysis of Hanoi Franco-Chinese house designs. Quan-Hoang Vuong, Quang-Khiem Bui, Viet-Phuong La, Thu-Trang Vuong, Manh-Toan Ho, Hong-Kong T. Nguyen, Hong-Ngoc Nguyen, Kien-Cuong P. Nghiem & Manh-Tung Ho - 2019 - Social Sciences and Humanities Open 1 (1): 100001.
    The study of cultural evolution has taken on an increasingly interdisciplinary and diverse approach in explicating phenomena of cultural transmission and adoptions. Inspired by this computational movement, this study uses Bayesian networks analysis, combining both the frequentist and the Hamiltonian Markov chain Monte Carlo (MCMC) approach, to investigate the highly representative elements in the cultural evolution of a Vietnamese city’s architecture in the early 20th century. With a focus on the façade design of 68 old houses in Hanoi’s Old Quarter (...)
    11 citations
  40. Belief Revision and Relevance. Peter Gardenfors - 1990 - PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association 1990: 349-365.
    A general criterion for the theory of belief revision is that when we revise a state of belief by a sentence A, as much of the old information as possible should be retained in the revised state of belief. The motivating idea in this paper is that if a belief B is irrelevant to A, then B should still be believed in the revised state. The problem is that the traditional definition of statistical relevance suffers from some serious (...)
    2 citations
  41. Generalized EPR-paradox. F. Selleri - 1982 - Foundations of Physics 12 (7): 645-659.
    A generalized reality criterion which attributes physical properties to statistical ensembles is used in order to deduce an inequality which is violated by quantum mechanics in realistic conditions. This new form of the Einstein-Podolsky-Rosen paradox is developed exclusively from the reality criterion and from separability without any use of the previously introduced restrictive assumptions about the probabilistic scheme.
    2 citations
  42. Managers’ Moral Decision-Making Patterns Over Time: A Multidimensional Approach. Johanna Kujala, Anna-Maija Lämsä & Katriina Penttilä - 2011 - Journal of Business Ethics 100 (2): 191-207.
    Taking multidimensional ethics scale approach, this article describes an empirical survey of top managers’ moral decision-making patterns and their change from 1994 to 2004 during morally problematic situations in the Finnish context. The survey questionnaire consisted of four moral dilemmas and a multidimensional scale with six ethical dimensions: justice, deontology, relativism, utilitarianism, egoism and female ethics. The managers evaluated their decision-making in the problems using the multidimensional ethics scale. Altogether 880 questionnaires were analysed statistically. It is concluded that relying on (...)
    11 citations
  43. Patterns of abduction. Gerhard Schurz - 2008 - Synthese 164 (2): 201-234.
    This article describes abductions as special patterns of inference to the best explanation whose structure determines a particularly promising abductive conjecture and thus serves as an abductive search strategy. A classification of different patterns of abduction is provided which intends to be as complete as possible. An important distinction is that between selective abductions, which choose an optimal candidate from a given multitude of possible explanations, and creative abductions, which introduce new theoretical models or concepts. While selective abduction has dominated the (...)
    95 citations
  44. Likelihood, Model Selection, and the Duhem-Quine Problem. Elliott Sober - 2004 - Journal of Philosophy 101 (5): 221-241.
    In what follows I will discuss an example of the Duhem-Quine problem in which Pr(H|A), Pr(A|H), and Pr(O|H&A) (where H is the hypothesis, A the auxiliary assumptions, and O the observational prediction) can be construed objectively; however, only some of those quantities are relevant to the analysis that I provide. The example involves medical diagnosis. The goal is to test the hypothesis that someone has tuberculosis; the auxiliary assumptions describe the error characteristics of the test (...)
    12 citations
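In the diagnosis example, the test's error characteristics are objective likelihoods. On a generic Bayesian rendering of such a case (a standard formulation, not Sober's specific analysis), they bear on the hypothesis through the likelihood ratio:

```latex
\frac{\Pr(H \mid +)}{\Pr(\lnot H \mid +)}
\;=\;
\frac{\Pr(+ \mid H)}{\Pr(+ \mid \lnot H)}
\cdot
\frac{\Pr(H)}{\Pr(\lnot H)},
```

where H is the hypothesis that the patient has tuberculosis and + a positive test result.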
  45. Wittgenstein on aesthetics and philosophy. Severin Schroeder - 2019 - Revista de Historiografía 32: 11-21.
    Wittgenstein offers three objections to the idea of aesthetics as a branch of psychology: (i) Statistical data about people’s preferences have no normative force. (ii) Artistic value is not instrumental value, a capacity to produce independently identifiable – and scientifically measurable – psychological effects. (iii) While psychological investigations may bring to light the causes of aesthetic preferences, they fail to provide reasons for them. According to Wittgenstein, aesthetic explanations (unlike scientific explanations) are poignant synoptic representations of aspects of a (...)
    4 citations
  46. Vienna indeterminism: Mach, Boltzmann, Exner. Michael Stöltzner - 1999 - Synthese 119 (1-2): 85-111.
    The present paper studies a specific way of addressing the question whether the laws involving the basic constituents of nature are statistical. While most German physicists, above all Planck, treated the issues of determinism and causality within a Kantian framework, the tradition which I call Vienna Indeterminism began from Mach’s reinterpretation of causality as functional dependence. This severed the bond between causality and realism because one could no longer avail oneself of a priori categories as a criterion for (...)
    18 citations
  47. Severe testing as a basic concept in a Neyman–Pearson philosophy of induction. Deborah G. Mayo & Aris Spanos - 2006 - British Journal for the Philosophy of Science 57 (2): 323-357.
    Despite the widespread use of key concepts of the Neyman–Pearson (N–P) statistical paradigm—type I and II errors, significance levels, power, confidence levels—they have been the subject of philosophical controversy and debate for over 60 years. Both current and long-standing problems of N–P tests stem from unclarity and confusion, even among N–P adherents, as to how a test's (pre-data) error probabilities are to be used for (post-data) inductive inference as opposed to inductive behavior. We argue that the relevance of error (...)
    63 citations
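The N–P concepts listed in the abstract have standard pre-data definitions:

```latex
\alpha \;=\; \Pr(\text{reject } H_0 \mid H_0 \text{ true}), \qquad
\beta \;=\; \Pr(\text{accept } H_0 \mid H_1 \text{ true}), \qquad
\text{power} \;=\; 1 - \beta .
```

Mayo and Spanos's question is how such pre-data error probabilities can support post-data inductive inference; severe testing is their proposed bridge.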
  48. Underdetermination in causal inference. Jiji Zhang - unknown
    One conception of underdetermination is that it corresponds to the impossibility of reliable inquiry. In other words, underdetermination is defined to be the situation where, given a set of background assumptions and a space of hypotheses, it is logically impossible for any hypothesis selection method to meet a given reliability standard. From this perspective, underdetermination in a given subject of inquiry is a matter of interplay between background assumptions and reliability or success criteria. In this paper I discuss underdetermination in (...)
  49. Individual selection criteria for optimal team composition. Lu Hong & Scott E. Page - forthcoming - Theory and Decision: 1-20.
    In this paper, we derive necessary and sufficient conditions on team-based tasks in order for a selection criterion applied to individuals to produce optimal teams. We assume only that individuals have types and that a team’s performance depends on its size and the type composition of its members. We first derive the selection principle, which states that if a selection criterion exists, it must rank types by homogeneous team performance, the performance of a team consisting only of (...)
  50. Fairness and Risk: An Ethical Argument for a Group Fairness Definition Insurers Can Use. Joachim Baumann & Michele Loi - 2023 - Philosophy and Technology 36 (3): 1-31.
    Algorithmic predictions are promising for insurance companies to develop personalized risk models for determining premiums. In this context, issues of fairness, discrimination, and social injustice might arise: Algorithms for estimating the risk based on personal data may be biased towards specific social groups, leading to systematic disadvantages for those groups. Personalized premiums may thus lead to discrimination and social injustice. It is well known from many application fields that such biases occur frequently and naturally when prediction models are applied to (...)
    4 citations
Showing 1-50 of 986