Normative judgments involve two gradable features. First, the judgments themselves can come in degrees; second, the strength of reasons represented in the judgments can come in degrees. Michael Smith has argued that non-cognitivism cannot accommodate both of these gradable dimensions. The degrees of a non-cognitive state can stand in for degrees of judgment, or degrees of reason strength represented in judgment, but not both. I argue that (a) there are brands of non-cognitivism that can surmount Smith’s challenge, and (b) any brand of non-cognitivism that has even a chance of solving the Frege–Geach Problem and some related problems involving probabilistic consistency can also thereby solve Smith’s problem. Because only versions of non-cognitivism that can solve the Frege–Geach Problem are otherwise plausible, all otherwise plausible versions of non-cognitivism can meet Smith’s challenge.
Recent developments in pure mathematics and in mathematical logic have uncovered a fundamental duality between "existence" and "information." In logic, the duality is between the Boolean logic of subsets and the logic of quotient sets, equivalence relations, or partitions. The analogue to an element of a subset is the notion of a distinction of a partition, and that leads to a whole stream of dualities or analogies--including the development of new logical foundations for information theory parallel to Boole's development of logical finite probability theory. After outlining these dual concepts in mathematical terms, we turn to a more metaphysical speculation about two dual notions of reality, a fully definite notion using Boolean logic and appropriate for classical physics, and the other objectively indefinite notion using partition logic which turns out to be appropriate for quantum mechanics. The existence-information duality is used to intuitively illustrate these two dual notions of reality. The elucidation of the objectively indefinite notion of reality leads to the "killer application" of the existence-information duality, namely the interpretation of quantum mechanics.
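A minimal illustration of the subset/partition analogy mentioned above, in the style of the logical information theory this abstract alludes to (the universe, the partition, and the numbers are hypothetical): the partition-logic counterpart of an element of a subset is a "distinction", an ordered pair of elements separated into different blocks, and counting distinctions yields a logical entropy.

```python
from itertools import product

# A toy universe and a partition of it into blocks (hypothetical example).
U = ['a', 'b', 'c', 'd']
partition = [['a', 'b'], ['c'], ['d']]

def block_of(x):
    return next(i for i, B in enumerate(partition) if x in B)

# A "distinction" (dit) of a partition is an ordered pair of elements lying in
# different blocks -- the partition-logic analogue of an element of a subset.
dits = [(x, y) for x, y in product(U, U) if block_of(x) != block_of(y)]

# Logical entropy: the probability that a random ordered pair is a distinction,
# which equals 1 minus the sum of squared block probabilities.
h = len(dits) / len(U) ** 2
h_formula = 1 - sum((len(B) / len(U)) ** 2 for B in partition)

print(len(dits), h, h_formula)  # 10 distinctions, 10/16 = 0.625 by both routes
```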
Gives two pared-down versions of the argument from design, which may prove more persuasive as to a Creator; discusses briefly the mathematics underpinning disbelief and nonbelief, its misuse, and some proper uses; moves to why the full argument is needed anyway, viz., to demonstrate Providence; and offers a theory as to how miracles (open and hidden) occur, viz. the replacement of any particular mathematics underlying a natural law (save logic) by its most appropriate nonstandard variant. Note: This is an extended abstract; there are no present plans to complete it.
I argue for a connection between two debates in the philosophy of probability. On the one hand, there is disagreement about conditional probability. Is it to be defined in terms of unconditional probability, or should we instead take conditional probability as the primitive notion? On the other hand, there is disagreement about how additive probability is. Is it merely finitely additive, or is it additionally countably additive? My thesis is that, if conditional probability is primitive, then it is not countably additive.
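One familiar way to see the tension the thesis trades on (an illustration of mine, not the article's own argument) is a uniform primitive conditional probability over a countably infinite condition:

```latex
Take conditional probability $P(\cdot \mid \cdot)$ as primitive and suppose a uniform
attitude over a countably infinite condition, say the natural numbers:
\[
  P(\{n\} \mid \mathbb{N}) = P(\{m\} \mid \mathbb{N}) \qquad \text{for all } n, m \in \mathbb{N}.
\]
If $P(\cdot \mid \mathbb{N})$ were countably additive, then
\[
  1 = P(\mathbb{N} \mid \mathbb{N}) = \sum_{n \in \mathbb{N}} P(\{n\} \mid \mathbb{N}),
\]
which no common value can satisfy: $0$ yields a sum of $0$, and any positive value
yields a divergent sum. So a primitive conditional probability that permits such
uniform conditioning can be at most finitely additive over this condition.
```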
Epistemic modals have peculiar logical features that are challenging to account for in a broadly classical framework. For instance, while a sentence of the form ‘p, but it might be that not p’ appears to be a contradiction, 'might not p' does not entail 'not p', which would follow in classical logic. Likewise, the classical laws of distributivity and disjunctive syllogism fail for epistemic modals. Existing attempts to account for these facts generally either under- or over-correct. Some theories predict that 'p and might not p', a so-called epistemic contradiction, is a contradiction only in an etiolated sense, under a notion of entailment that does not allow substitution of logical equivalents; these theories underpredict the infelicity of embedded epistemic contradictions. Other theories savage classical logic, eliminating not just rules that intuitively fail, like distributivity and disjunctive syllogism, but also rules like non-contradiction, excluded middle, De Morgan’s laws, and disjunction introduction, which intuitively remain valid for epistemic modals. In this paper, we aim for a middle ground, developing a semantics and logic for epistemic modals that makes epistemic contradictions genuine contradictions and that invalidates distributivity and disjunctive syllogism but that otherwise preserves classical laws that intuitively remain valid. We start with an algebraic semantics, based on ortholattices instead of Boolean algebras, and then propose a more concrete possibility semantics, based on partial possibilities related by compatibility. Both semantics yield the same consequence relation, which we axiomatize. Then we show how to extend our semantics to explain parallel phenomena involving probabilities and conditionals. The goal throughout is to retain what is desirable about classical logic while accounting for the non-classicality of epistemic vocabulary.
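The paper's own models are possibility structures, but the textbook ortholattice of subspaces of the plane already exhibits the kind of distributivity failure at issue; a small numpy sketch, with arbitrarily chosen lines:

```python
import numpy as np

# Ortholattice of subspaces of R^2: joins are spans, and by the Grassmann
# formula dim(A ∧ B) = dim A + dim B - dim(A ∨ B) for the meet (intersection).
def dim(vectors):
    return int(np.linalg.matrix_rank(np.vstack(vectors)))

def meet_dim(A, B):
    return dim(A) + dim(B) - dim(A + B)

# Three distinct lines through the origin (arbitrary hypothetical choices).
a = [np.array([1.0, 0.0])]
b = [np.array([0.0, 1.0])]
c = [np.array([1.0, 1.0])]

# Left side: a ∧ (b ∨ c). Since b ∨ c spans all of R^2, this meet is a itself.
lhs_dim = dim(a) + dim(b + c) - dim(a + b + c)   # 1 + 2 - 2 = 1

# Right side: (a ∧ b) ∨ (a ∧ c). Distinct lines intersect only in {0}, so both
# meets are the zero subspace, and so is their join.
print(meet_dim(a, b), meet_dim(a, c))            # 0 0
print(lhs_dim)                                   # 1 -> distributivity fails here
```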
This chapter starts with a simple conventional presentation of time reversal in physics, and then returns to analyse it, rejects the conventional analysis, and establishes correct principles in their place.
The conventional claims and concepts of 5* - 8* are a hang-over from the classical theory of thermodynamics – i.e. thermodynamics based on a fully deterministic micro-theory, developed in the time of Boltzmann, Loschmidt and Gibbs in the late C19th. The classical theory has well-known ‘reversibility paradoxes’ when applied to the universe as a whole. But the introduction of intrinsic probabilities in quantum mechanics, and its consequent time asymmetry, fundamentally changes the picture.
Four intuitions are recurrent and influential in theories about conditionals: the Ramsey Test, Adams’ Thesis, the Equation, and the robustness requirement. For simplicity’s sake, I call these intuitions ‘the big four’. My aim is to show that: (1) the big four are interdependent; (2) they express our inferential dispositions to employ a conditional in a modus ponens; (3) the disposition to employ conditionals in a modus ponens doesn’t have the epistemic significance that is usually attributed to it, since the acceptability or truth conditions of a conditional are not necessarily associated with its employability in a modus ponens.
Ex ante predicted outcomes should be interpreted as counterfactuals (potential histories), with errors as the spread between outcomes. But error rates have error rates. We reapply measurements of uncertainty about the estimation errors of the estimation errors of an estimation, treated as branching counterfactuals. Such recursions of epistemic uncertainty have markedly different distributional properties from conventional sampling error, and lead to fatter tails in the projections than in past realizations. Counterfactuals of error rates always lead to fat tails, regardless of the probability distribution used. A mere .01% branching error rate about the standard deviation (itself an error rate), and a .01% branching error rate about that error rate, and so on (recursing all the way), results in explosive (and infinite) moments higher than 1. Missing any degree of the regress leads to the underestimation of small probabilities and of concave payoffs (a standard example of which is Fukushima). The paper states the conditions under which higher-order rates of uncertainty (expressed in spreads of counterfactuals) alter the shape of the final distribution, and shows which a priori beliefs about counterfactuals are needed to accept the reliability of conventional probabilistic methods (thin tails or mildly fat tails).
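A rough Monte Carlo sketch of the mechanism described, not the paper's own computation: each layer of the regress multiplies the scale by a further relative error, and the resulting mixture has visibly fatter tails than the fixed-scale benchmark. The per-layer error rate and the number of layers below are illustrative choices, much larger than the paper's .01% so that the effect shows up in a finite simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n, sigma=1.0, layer_error=0.5, layers=10):
    """Draw n outcomes whose scale is itself uncertain, recursively.

    At each layer the scale is multiplied by (1 ± layer_error) with equal
    probability -- a crude stand-in for 'an error rate about the error rate',
    iterated `layers` deep. All parameters are illustrative.
    """
    scale = np.full(n, sigma)
    for _ in range(layers):
        flips = rng.choice([1 - layer_error, 1 + layer_error], size=n)
        scale = scale * flips
    return rng.normal(0.0, scale)

x_plain = rng.normal(0.0, 1.0, 200_000)   # no uncertainty about the scale
x_layer = sample(200_000)                 # layered uncertainty about the scale

def kurtosis(x):
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2) ** 2

print(kurtosis(x_plain))   # ~3, the Gaussian benchmark
print(kurtosis(x_layer))   # far above 3: fatter tails from recursed error rates
```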
Abstract: Four main forms of the Doomsday Argument (DA) exist: Gott’s DA, Carter’s DA, Grace’s DA and the Universal DA. All four use different probabilistic logic to predict that the end of human civilization will happen unexpectedly soon, based on our early location in human history. There are hundreds of publications about the validity of the Doomsday Argument, and most attempts to disprove it have some weak points. As a result, we are uncertain about the validity of DA proofs and rebuttals. In this article, a meta-DA is introduced, which uses the idea of logical uncertainty over the DA’s validity, estimated on the basis of a virtual prediction market of the opinions of different scientists. The result is around 0.4 for the validity of some form of DA, and even smaller for a “Strong DA”, which predicts the end of the world in the near term. We discuss many examples of the validity of the DA in real life as an instrument to prove it “experimentally”. We also show that the DA becomes strongest if it is based on the idea of the “natural reference class” of observers, that is, the observers who know about the DA (i.e. a Self-Referenced DA). Such a DA predicts a high probability of a global catastrophe with human extinction in the 21st century, which aligns with what we already know from the analysis of different technological risks.
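For concreteness, Gott's form of the argument (the first of the four listed) turns an observed past duration into a confidence interval for the remaining duration; a small sketch, with the input figure chosen only for illustration:

```python
def gott_interval(t_past, confidence=0.95):
    """Gott's delta-t argument: if we observe a process at a random moment of its
    total lifetime, then with the given confidence its remaining duration lies
    between t_past*(1-c)/(1+c) and t_past*(1+c)/(1-c)."""
    c = confidence
    return t_past * (1 - c) / (1 + c), t_past * (1 + c) / (1 - c)

# Illustrative input: roughly 200,000 years of Homo sapiens so far.
low, high = gott_interval(200_000)
print(round(low), round(high))   # about 5,100 to 7,800,000 further years at 95%
```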
In the following we will investigate whether von Mises’ frequency interpretation of probability can be modified to make it philosophically acceptable. We will reject certain elements of von Mises’ theory, but retain others. In the interpretation we propose we do not use von Mises’ often criticized ‘infinite collectives’, but we retain two essential claims of his interpretation, stating that probability can only be defined for events that can be repeated in similar conditions, and that exhibit frequency stabilization. The central idea of the present article is that the mentioned ‘conditions’ should be well-defined and ‘partitioned’. More precisely, we will divide probabilistic systems into object, initializing, and probing subsystems, and show that such partitioning allows us to solve problems. Moreover, we will argue that a key idea of the Copenhagen interpretation of quantum mechanics (the determinant role of the observing system) can be seen as deriving from an analytic definition of probability as frequency. Thus a secondary aim of the article is to illustrate the virtues of the analytic definition of concepts, which consists in making explicit what is implicit.
This article analyzes the implications of protective measurement for the meaning of the wave function. According to protective measurement, a charged quantum system has mass and charge density proportional to the modulus square of its wave function. It is shown that the mass and charge density is not real but effective, formed by the ergodic motion of a localized particle with the total mass and charge of the system. Moreover, it is argued that the ergodic motion is not continuous but discontinuous and random. This result suggests a new interpretation of the wave function, according to which the wave function is a description of random discontinuous motion of particles, and the modulus square of the wave function gives the probability density of the particles being in certain locations. It is shown that the suggested interpretation of the wave function disfavors the de Broglie-Bohm theory and the many-worlds interpretation but favors the dynamical collapse theories, and the random discontinuous motion of particles may provide an appropriate random source to collapse the wave function.
The meaning of the wave function and its evolution are investigated. First, we argue that the wave function in quantum mechanics is a description of random discontinuous motion of particles, and the modulus square of the wave function gives the probability density of the particles being in certain locations in space. Next, we show that the linear non-relativistic evolution of the wave function of an isolated system obeys the free Schrödinger equation due to the requirements of spacetime translation invariance and relativistic invariance. Thirdly, we argue that the random discontinuous motion of particles may lead to a stochastic, nonlinear collapse evolution of the wave function. A discrete model of energy-conserved wavefunction collapse is proposed and shown consistent with existing experiments and our macroscopic experience. Besides, we also give a critical analysis of the de Broglie-Bohm theory, the many-worlds interpretation and other dynamical collapse theories, and briefly discuss the issues of unifying quantum mechanics and relativity.
We show that the physical meaning of the wave function can be derived based on the established parts of quantum mechanics. It turns out that the wave function represents the state of random discontinuous motion of particles, and its modulus square determines the probability density of the particles appearing in certain positions in space.
I present a solution to the epistemological or characterisation problem of induction. In part I, Bayesian Confirmation Theory (BCT) is discussed as a good contender for such a solution but with a fundamental explanatory gap (along with other well discussed problems); useful assigned probabilities like priors require substantive degrees of belief about the world. I assert that one does not have such substantive information about the world. Consequently, an explanation is needed for how one can be licensed to act as if one has substantive information about the world when one does not. I sketch the outlines of a solution in part I, showing how it differs from others, with full details to follow in subsequent parts. The solution is pragmatic in sentiment (though it differs in specifics from arguments by, for example, William James); the conceptions we use to guide our actions are and should be at least partly determined by preferences. This is cashed out in a reformulation of decision theory motivated by a non-reductive formulation of hypotheses and logic. A distinction emerges between initial assumptions--which can be non-dogmatic--and effective assumptions that can simultaneously be substantive. An explanation is provided for the plausibility arguments used to explain assigned probabilities in BCT.

In subsequent parts, logic is constructed from principles independent of language and mind. In particular, propositions are defined to not have form. Probabilities are logical and uniquely determined by assumptions. The problems considered fatal to logical probabilities--Goodman's 'grue' problem and the uniqueness-of-priors problem--are dissolved due to the particular formulation of logic used. Other problems, such as the zero-prior problem, are also solved.

A universal theory of (non-linguistic) meaning is developed. Problems with counterfactual conditionals are solved by developing concepts of abstractions and corresponding pictures that make up hypotheses. Spaces of hypotheses and the version of Bayes' theorem that utilises them emerge from first principles.

Theoretical virtues for hypotheses emerge from the theory. Explanatory force is explicated. The significance of effective assumptions is partly determined by combinatoric factors relating to the structure of hypotheses. I conjecture that this is the origin of simplicity.
The Ontology of Knowledge (OK) does not claim to expose the truth of reality but only to propose a coherent model of representation according to which:
- Reality is not subject to form or time.
- The Knowing Subject is a wave of meaning running through the immobile reality.
This paper investigates the logic of reasons. Its aim is to provide an analysis of sentences of the form ‘p is a reason for q’ that yields a coherent account of their logical properties. The idea that we will develop is that ‘p is a reason for q’ is acceptable just in case a suitably defined relation of incompatibility obtains between p and ¬q. As we will suggest, a theory of reasons based on this idea can solve three challenging puzzles that concern, respectively, contraposing reasons, conflicting reasons, and supererogatory reasons, and can open a new perspective on some classical issues concerning non-deductive inferences.
In this work, we focus on the philosophical aspects and technical challenges that underlie the axiomatization of the non-Kolmogorovian probability framework, in connection with the problem of quantum contextuality. This fundamental feature of quantum theory has received a lot of attention recently, given that it might be connected to the speed-up of quantum computers—a phenomenon that is not fully understood. Although this problem has been extensively studied in the physics community, there are still many philosophical questions that should be properly formulated. We analyze different problems from a conceptual standpoint, using the non-Kolmogorovian probability approach as a technical tool.
A number of authors, including me, have argued that the output of our most complex climate models, that is, of global climate models and Earth system models, should be assessed possibilistically. Worries about the viability of doing so have also been expressed. I examine the assessment of the output of relatively simple climate models in the context of discovery and point out that this assessment is of epistemic possibilities. At the same time, I show that the concept of epistemic possibility used in the relevant studies does not fit available analyses of this concept. Moreover, I provide an alternative analysis that does fit the studies and broad climate modelling practices as well as meshes with my existing view that climate model assessment should typically be of real possibilities. On my analysis, to assert that a proposition is epistemically possible is to assert that it is not known to be false and is consistent with at least approximate knowledge of the basic way things are. I, finally, consider some of the implications of my discussion for available possibilistic views of climate model assessment and for worries about such views. I conclude that my view helps to address worries about such assessment and permits using the full range of climate models in it.
It is not by thinking that we create worlds. It is by understanding the world that we learn to think. 'Worldview' is a term that should designate a set of foundations from which a systemic understanding of the Universe emerges, together with its components, such as life, the world we live in, nature, the human phenomenon and their relations. It is therefore a field of analytic philosophy nourished by the sciences, whose goal is that aggregated and epistemologically sustainable knowledge about everything that we are and contain, that surrounds us, and that we relate to in some way. It is something as old as human thought and, besides drawing on elements of scientific cosmology, it encompasses everything in philosophy and science that concerns the universe and life. A worldview is not a collection of ideas, hypotheses and suppositions, but a system based on observation, analysis, evidence and demonstration. No worldview aims to define, establish or propose, but only to understand, analyse and interpret. Each of us builds and carries our worldview throughout our lives, without fixing its form, as the background of our thought and behaviour. Linguistically, the term 'worldview' derives from the German, equivalent to the concept of 'Weltanschauung' used by various philosophers. However, this linguistic relation does not apply here, because it runs against what we propose as a worldview. The German word refers to a pre-logical or proto-experimental vision of reality, with an intuitive context, far removed from a critical knowledge that did not yet exist at the time of its formulation. Worldviews as we understand them do, of course, shelter and use these proto-experimental or pre-logical elements, which include history, the collective unconscious and all the archetypes we carry. In the concept applied here, however, the worldview goes far beyond this content, first by constantly submitting it to present-day critical thought and, finally, by making analytic experience (and not thought or intuition itself) its true universe. António Lopes sets out the breadth of this content:

"Worldviews are not the product of thought. They do not arise from the mere desire to know. The apprehension of reality is an important moment in their configuration, yet it is only one. They come from the conduct of life, from lived experience, from the structure of our psychic totality. The support that life lends to consciousness in the knowledge of reality, in the appreciation of life and in volitional reality, is the slow and arduous work that humanity has carried out in developing its conceptions of life. (W. Dilthey, 1992: 120)"

In this work we seek to sketch a worldview on the basis of the realities that science offers today. At no point do we propose to do science, or to theorize philosophy, but we will always seek to be supported by them or, at least, protected by them from the cognitive distortions we usually carry.
It seems like we care about at least two features of our credence function: gradational accuracy and verisimilitude. Accuracy-first epistemology requires that we care about one feature of our credence function: gradational accuracy. So if you want to be a verisimilitude-valuing accuracy-firster, you must be able to think of the value of verisimilitude as somehow built into the value of gradational accuracy. Can this be done? In a recent article, Oddie has argued that it cannot, at least if we want the accuracy measure to be proper. I argue that it can. Contents: 1 Introduction; 2 Some Nuts and Bolts; 3 First Attempts; 4 Oddie’s Constraint; 5 The Good (5.1 Proximity over the disagreement metric; 5.2 Proximity over the magnitude metric); 6 The Bad and the Ugly; 7 Some More Good: The Role of Evenness of Distribution; 8 Some More Bad: Which Propositions to Privilege?; 9 Concluding Thoughts: Accuracy and Practical Value.
Being a researcher is challenging, especially in the beginning. Early Career Researchers (ECRs) need achievements to secure and expand their careers. In today’s academic landscape, researchers are under many pressures: data collection costs, the expectation of novelty, analytical skill requirements, lengthy publishing process, and the overall competitiveness of the career. Innovative thinking and the ability to turn good ideas into good papers are the keys to success.
While it is natural to assume that contradiction between alleged witness testimonies to some event disconfirms the event, this generalization is subject to important qualifications. I consider a series of increasingly complex probabilistic cases that help us to understand the effect of contradictions more precisely. Due to the possibility of honest error on a difficult detail even on the part of highly reliable witnesses, agreement on such a detail can confirm the hypothesis H that the event occurred much more than contradiction disconfirms H. It is also possible to model scenarios where we strongly suspect ahead of time that one source has copied another. In these cases, contradiction on a detail due to witness error can even confirm H by disconfirming collusion or copying. Finally, still more complex scenarios show that indirect confirmation, as opposed to exact agreement, provides the “best of both worlds,” simultaneously disconfirming suspected copying while permitting the statements of both sources to be true.
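A toy Bayesian model in the same spirit as the cases described, though not one of the article's own: two independent, fairly reliable witnesses report on a difficult incidental detail, and we compare how agreement and contradiction bear on the hypothesis H that the event occurred. All numbers are made up.

```python
# Toy model: H = "the event occurred". If H, each witness reports the true value
# of a difficult detail with probability r, otherwise errs uniformly over the
# m - 1 wrong values. If not-H, each witness just picks one of the m values at
# random. Witnesses are independent given H and given not-H.
r, m, prior = 0.6, 10, 0.5   # illustrative numbers only

p_agree_H    = r**2 + (1 - r)**2 / (m - 1)   # both right, or both wrong on the same value
p_agree_notH = 1 / m
p_contra_H, p_contra_notH = 1 - p_agree_H, 1 - p_agree_notH

def posterior(like_H, like_notH, prior):
    return like_H * prior / (like_H * prior + like_notH * (1 - prior))

print(round(posterior(p_agree_H, p_agree_notH, prior), 3))    # ≈ 0.791: agreement confirms strongly
print(round(posterior(p_contra_H, p_contra_notH, prior), 3))  # ≈ 0.409: contradiction disconfirms mildly
```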
The Borel-Kolmogorov paradox is often presented as an obscure problem that certain mathematical accounts of conditional probability must face. In this article, we point out that the paradox arises in the physical sciences, for physical probability or chance. By carefully formulating the paradox in this setting, we show that it is a puzzle for everyone, regardless of one’s preferred probability formalism. We propose a treatment that is inspired by the approach that scientists took when confronted with these cases.
The Dutch Book Argument for Probabilism assumes Ramsey's Thesis (RT), which purports to determine the prices an agent is rationally required to pay for a bet. Recently, a new objection to Ramsey's Thesis has emerged (Hedden 2013, Wronski & Godziszewski 2017, Wronski 2018)--I call this the Expected Utility Objection. According to this objection, it is Maximise Subjective Expected Utility (MSEU) that determines the prices an agent is required to pay for a bet, and this often disagrees with Ramsey's Thesis. I suggest two responses to Hedden's objection. First, we might be permissive: agents are permitted to pay any price that is required or permitted by RT, and they are permitted to pay any price that is required or permitted by MSEU. This allows us to give a revised version of the Dutch Book Argument for Probabilism, which I call the Permissive Dutch Book Argument. Second, I suggest that even the proponent of the Expected Utility Objection should admit that RT gives the correct answer in certain very limited cases, and I show that, together with MSEU, this very restricted version of RT gives a new pragmatic argument for Probabilism, which I call the Bookless Pragmatic Argument.
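A small numerical illustration, of my own construction, of how RT and MSEU can come apart for a probabilistically incoherent agent (assuming utility linear in money):

```python
# An agent with incoherent credences in X and not-X (they sum to 0.8), facing a
# bet that pays 1 unit if X and 0 otherwise. Numbers are illustrative.
cr_X, cr_notX, stake = 0.3, 0.5, 1.0

# Ramsey's Thesis: the price the agent should pay is her credence times the stake.
rt_price = cr_X * stake

# MSEU: buy at price p iff the expected utility of buying is at least that of
# declining, i.e. cr(X)*(stake - p) + cr(not-X)*(-p) >= 0. The maximum
# acceptable price is therefore:
mseu_max_price = cr_X * stake / (cr_X + cr_notX)

# For a coherent agent (credences summing to 1) the two prices coincide.
print(rt_price, mseu_max_price)   # 0.3 vs 0.375: the two norms disagree here
```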
One of the most widely discussed philosophical issues is the problem of future contingents. Basically, the challenge is to create an adequate semantic theory of future-tensed sentences. Twardowski (1900) suggests that future contingent statements should be analyzed using the concept of probability. The aim of this paper is to show that (1) such an analysis is not appropriate and (2) that Twardowski’s main theses imply the Thin Red Line Theory. I discuss three potential arguments against my proposal and sketch the connection with Schaffer’s Parallelism Thesis (2012).
(This is for the series Elements of Decision Theory published by Cambridge University Press and edited by Martin Peterson.)

Our beliefs come in degrees. I believe some things more strongly than I believe others. I believe very strongly that global temperatures will continue to rise during the coming century; I believe slightly less strongly that the European Union will still exist in 2029; and I believe much less strongly that Cardiff is east of Edinburgh. My credence in something is a measure of the strength of my belief in it; it represents my level of confidence in it. These are the states of mind we report when we say things like ‘I’m 20% confident I switched off the gas before I left' or ‘I’m 99.9% confident that it is raining outside'.

There are laws that govern these credences. For instance, I shouldn't be more confident that sea levels will rise by over 2 metres in the next 100 years than I am that they'll rise by over 1 metre, since the latter is true if the former is. This book is about a particular way we might try to establish these laws of credence: the Dutch Book arguments. (For briefer overviews of these arguments, see Alan Hájek’s entry in the Oxford Handbook of Rational and Social Choice and Susan Vineberg’s entry in the Stanford Encyclopaedia.)

We begin, in Chapter 2, with the standard formulation of the various Dutch Book arguments that we'll consider: arguments for Probabilism, Countable Additivity, Regularity, and the Principal Principle. In Chapter 3, we subject this standard formulation to rigorous stress-testing, and make some small adjustments so that it can withstand various objections. What we are left with is still recognisably the orthodox Dutch Book argument. In Chapter 4, we set out the Dutch Strategy argument for Conditionalization. In Chapters 5 and 6, we consider two objections to Dutch Book arguments that cannot be addressed by making small adjustments. Instead, we must completely redesign those arguments, replacing them with ones that share a general approach but few specific details. In Chapter 7, we consider a further objection to which I do not have a response. In Chapter 8, we'll ask what happens to the Dutch Book arguments if we change certain features of the basic framework in which we've been working: first, we ask how Dutch Book arguments fare when we consider credences in self-locating propositions, such as It is Monday; second, we lift the assumption that the background logic is classical and explore Dutch Book arguments for non-classical logics; third, we lift the assumption that an agent's credal state can be represented by a single assignment of numerical values to the propositions she considers. In Chapter 9, we present the mathematical results that underpin these arguments.
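The sea-level example in the blurb already yields the signature sure-loss calculation; a minimal sketch, with Ramsey-style prices and illustrative credences that violate the stated law:

```python
# A = "sea levels rise by over 2 metres", B = "... by over 1 metre"; A entails B.
# An agent with cr(A) = 0.7 > cr(B) = 0.6 violates the law described in the text.
# Priced by her credences, she will buy a £1 bet on A for £0.70 and sell a £1 bet
# on B for £0.60. Her net payoff is then negative in every possible world.
cr_A, cr_B = 0.7, 0.6

def net_payoff(A, B):
    buy_A  = (1.0 if A else 0.0) - cr_A    # bought a £1 bet on A at price cr_A
    sell_B = cr_B - (1.0 if B else 0.0)    # sold a £1 bet on B at price cr_B
    return buy_A + sell_B

# The possible worlds respect the entailment: if A is true, B is true.
for A, B in [(True, True), (False, True), (False, False)]:
    print(A, B, round(net_payoff(A, B), 2))   # -0.1, -1.1, -0.1: a sure loss
```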
Fisher criticised the Neyman-Pearson approach to hypothesis testing by arguing that it relies on the assumption of “repeated sampling from the same population.” The present article considers the responses to this criticism provided by Pearson and Neyman. Pearson interpreted alpha levels in relation to imaginary replications of the original test. This interpretation is appropriate when test users are sure that their replications will be equivalent to one another. However, by definition, scientific researchers do not possess sufficient knowledge about the relevant and irrelevant aspects of their tests and populations to be sure that their replications will be equivalent to one another. Pearson also interpreted the alpha level as a personal rule that guides researchers’ behavior during hypothesis testing. However, this interpretation fails to acknowledge that the same researcher may use different alpha levels in different testing situations. Addressing this problem, Neyman proposed that the average alpha level adopted by a particular researcher can be viewed as an indicator of that researcher’s typical Type I error rate. Researchers’ average alpha levels may be informative from a metascientific perspective. However, they are not useful from a scientific perspective. Scientists are more concerned with the error rates of specific tests of specific hypotheses, rather than the error rates of their colleagues. It is concluded that neither Neyman nor Pearson adequately rebutted Fisher’s “repeated sampling” criticism. Fisher’s significance testing approach is briefly considered as an alternative to the Neyman-Pearson approach.
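A quick simulation of what "repeated sampling from the same population" is supposed to deliver on the Neyman-Pearson picture: under the null hypothesis, the long-run rejection rate across equivalent replications matches the nominal alpha. The test and the numbers are illustrative, not drawn from the article.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, n, reps = 0.05, 30, 10_000

# Repeated sampling from the same null population (mean 0), each time running a
# one-sample t-test of H0: mu = 0.
rejections = 0
for _ in range(reps):
    sample = rng.normal(loc=0.0, scale=1.0, size=n)
    _, p = stats.ttest_1samp(sample, popmean=0.0)
    rejections += p < alpha

print(rejections / reps)   # close to 0.05: the long-run Type I error rate equals alpha
```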
Why are conditional degrees of belief in an observation E, given a statistical hypothesis H, aligned with the objective probabilities expressed by H? After showing that standard replies are not satisfactory, I develop a suppositional analysis of conditional degree of belief, transferring Ramsey’s classical proposal to statistical inference. The analysis saves the alignment, explains the role of chance-credence coordination, and rebuts the charge of arbitrary assessment of evidence in Bayesian inference. Finally, I explore the implications of this analysis for Bayesian reasoning with idealized models in science.
Catastrophic risk raises questions that are not only of practical importance, but also of great philosophical interest, such as how to define catastrophe and what distinguishes catastrophic outcomes from non-catastrophic ones. Catastrophic risk also raises questions about how to rationally respond to such risks. How to rationally respond arguably partly depends on the severity of the uncertainty, for instance, whether quantitative probabilistic information is available, or whether only comparative likelihood information is available, or neither type of information. Finally, catastrophic risk raises important ethical questions about what to do when catastrophe avoidance conflicts with equity promotion.
A new article, published on 19 May 2020 with doctoral researcher Nguyễn Minh Hoàng, a researcher at the ISR Centre, as corresponding author, presents a Bayesian statistical approach to the study of social science data. It is a result of the research direction of the SDAG research group, set out clearly as early as 18 May 2019.
What is required for an action to promote the satisfaction of a desire? We reject extant answers and propose an alternative. Our account differs from competing answers in two ways: first, it is contrastive, in that actions promote the satisfaction of desires only as contrasted with other possible actions. Second, it employs a notion of expected fit between desire and world, defined as the weighted sum of the fit between the desire and the world in all possible outcomes, where each weight is given by the probability of the agent’s obtaining the relevant outcome. According to our proposal, then, an action promotes a desire when the expected fit for the desire given that the agent performs the action is greater than the expected fit of the desire given that the agent performs the contrasting action. We highlight this account’s attractive features and explain how it improves on its competitors.
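A direct, if simplistic, transcription of the definition just given, with made-up outcomes, fit values, and probabilities:

```python
# Expected fit of a desire given an action: the fit between desire and world in
# each possible outcome, weighted by the probability of that outcome given the action.
def expected_fit(outcomes):
    """outcomes: list of (probability_of_outcome_given_action, fit_of_desire_in_outcome)."""
    return sum(p * fit for p, fit in outcomes)

# Hypothetical example: the desire to stay dry; fit is 1 if satisfied, 0 if not.
take_umbrella  = [(0.9, 1.0), (0.1, 0.0)]   # very likely to stay dry
leave_umbrella = [(0.4, 1.0), (0.6, 0.0)]   # likely to get wet

# The action promotes the desire iff its expected fit exceeds that of the contrast action.
promotes = expected_fit(take_umbrella) > expected_fit(leave_umbrella)
print(expected_fit(take_umbrella), expected_fit(leave_umbrella), promotes)  # 0.9 0.4 True
```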
The problem of inferring probability comparisons between events from an initial set of comparisons arises in several contexts, ranging from decision theory to artificial intelligence to formal semantics. In this paper, we treat the problem as follows: beginning with a binary relation ≥ on events that does not preclude a probabilistic interpretation, in the sense that ≥ has extensions that are probabilistically representable, we characterize the extension ≥+ of ≥ that is exactly the intersection of all probabilistically representable extensions of ≥. This extension ≥+ gives us all the additional comparisons that we are entitled to infer from ≥, based on the assumption that there is some probability measure of which ≥ gives us partial qualitative information. We pay special attention to the problem of extending an order on states to an order on events. In addition to the probabilistic interpretation, this problem has a more general interpretation involving measurement of any additive quantity: e.g., given comparisons between the weights of individual objects, what comparisons between the weights of groups of objects can we infer?
We present a puzzle about knowledge, probability and conditionals. We show that in certain cases some basic and plausible principles governing our reasoning come into conflict. In particular, we show that there is a simple argument that a person may be in a position to know a conditional the consequent of which has a low probability conditional on its antecedent, contra Adams’ Thesis. We suggest that the puzzle motivates a very strong restriction on the inference of a conditional from a disjunction.
This paper examines and evaluates a range of methodologies that have been proposed for making useful claims about the probability of phenomena that would contribute to existential risk. Section One provides a brief discussion of the nature of such claims, the contexts in which they tend to be made and the kinds of probability that they can contain. Section Two provides an overview of the methodologies that have been developed to arrive at these probabilities and assesses their advantages and disadvantages. Section Three contains four suggestions to improve best practice in existential risk assessment. These suggestions centre on the types of probabilities used in risk assessment, the role of methodology rankings including the ranking of probabilistic information, the extended use of expert elicitation, and the use of confidence measures to better communicate uncertainty in probability assessments. Finally, Section Four provides an annotated literature review of catastrophic and existential risk probability claims as well as the methodologies that were used to produce each of them.
The paper will compare two methods used in the design of diagnostic strategies. The first is a method that specifies the predictive value of diagnostic tests. The second is based on the use of Bayes’ theorem. The main aim of this article is to identify the epistemological assumptions underlying both of these methods. For this purpose, example designs of one-stage and multi-stage diagnostic strategies developed using both methods will be considered.
Probability is a central concept in utilitarian moral theory, almost impossible to do without. I attempt to clarify the role of probability, so that we can be clear about what we are aiming for when we apply utilitarian theory to real cases. I point out the close relationship between utilitarianism and expected-utility theory, a normative standard for individual decision-making. I then argue that the distinction between “ambiguity” and risk is a matter of perception. We do not need this distinction in the theory itself. In order to make this argument I rely on the personalist theory of probability, and I try to show that, within this theory, we do not need to give up completely on the idea that a “true probability” exists. Finally, I discuss several examples of applied utilitarianism, emphasizing the role of probability in each example: reasonable doubt, the precautionary principle in risk regulation, charity, climate change, and voting.
Autism has been defined as a disorder of social cognition, interaction and communication where ritualistic, repetitive behaviors are commonly observed. But how should we understand the behavioral and cognitive differences that have been the main focus of so much autism research? Can high-level cognitive processes and behaviors be identified as the core issues people with autism face, or do these characteristics perhaps often rather reflect individual attempts to cope with underlying physiological issues? Much research presented in this volume will point to the latter possibility, i.e. that people on the autism spectrum cope with issues at much lower physiological levels pertaining not only to Central Nervous System (CNS) function, but also to peripheral and autonomic systems (PNS, ANS) (Torres, Brincker, et al. 2013). The question that we pursue in this chapter is what might be fruitful ways of gaining objective measures of the large-scale systemic and heterogeneous effects of early atypical neurodevelopment; how to track their evolution over time and how to identify critical changes along the continuum of human development and aging. We suggest that the study of movement variability—very broadly conceived as including all minute fluctuations in bodily rhythms and their rates of change over time (coined micro-movements (Figure 1A-B) (Torres, Brincker, et al. 2013))—offers a uniquely valuable and entirely objectively quantifiable lens to better assess, understand and track not only autism but cognitive development and degeneration in general. This chapter presents the rationale firstly behind this focus on micro-movements and secondly behind the choice of specific kinds of data collection and statistical metrics as tools of analysis (Figure 1C). In brief, the proposal is that the micro-movements (defined in Part I – Chapter 1), obtained using various time scales applied to different physiological data-types (Figure 1), contain information about layered influences and temporal adaptations, transformations and integrations across anatomically semi-independent subsystems that crosstalk and interact. Further, the notion of sensorimotor re-afference is used to highlight the fact that these layered micro-motions are sensed and that this sensory feedback plays a crucial role in the generation and control of movements in the first place. In other words, the measurements of various motoric and rhythmic variations provide an access point not only to the “motor systems”, but also access to much broader central and peripheral sensorimotor and regulatory systems. Lastly, we posit that this new lens can also be used to capture influences from systems of multiple entry points or collaborative control and regulation, such as those that emerge during dyadic social interactions.
The main point of the paper is to show how popular probabilistic measures of incremental confirmation and statistical relevance with qualitatively different features can be embedded smoothly in generalized parametric families. In particular, I will show that the probability difference, log probability ratio, log likelihood ratio, odds difference, so-called improbability difference, and Gaifman’s measures of confirmation can all be subsumed within a convenient biparametric continuum. One intermediate step of this project may have interest on its own, as it provides a unified representation of graded belief of which both probabilities and odds are special cases.
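For reference, here are standard formulations of the first few measures named, computed from a toy joint distribution (the numbers are mine, and the parametric families themselves are not reproduced here):

```python
import math

# A toy joint distribution over hypothesis H and evidence E (illustrative numbers).
p_H, p_E, p_HE = 0.3, 0.4, 0.2          # P(H), P(E), P(H and E)

p_H_given_E    = p_HE / p_E
p_E_given_H    = p_HE / p_H
p_E_given_notH = (p_E - p_HE) / (1 - p_H)

def odds(p):
    return p / (1 - p)

difference     = p_H_given_E - p_H                          # probability difference
log_ratio      = math.log(p_H_given_E / p_H)                # log probability ratio
log_likelihood = math.log(p_E_given_H / p_E_given_notH)     # log likelihood ratio
odds_diff      = odds(p_H_given_E) - odds(p_H)              # odds difference

print(difference, log_ratio, log_likelihood, odds_diff)     # all positive: E confirms H
```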
B. K. Matilal, and earlier J. F. Staal, have suggested a reading of the 'Nyaya five limb schema' (also sometimes referred to as the Indian Schema or Hindu Syllogism) from Gotama's Nyaya-Sutra in terms of a binary occurrence relation. In this paper we provide a rational justification of a version of this reading as Analogical Reasoning within the framework of Polyadic Pure Inductive Logic.
Analyses of singular causation often make use of the idea that a cause increases the probability of its effect. Of particular salience in such accounts are the values of the probability function of the effect, conditional on the presence and absence of the putative cause, analysed around the times of the events in question: causes are characterized by the effect’s probability function being greater when conditionalized upon them. Put this way, it becomes clearer that the ‘behaviour’ of probability functions in small intervals about the times in question ought to be of concern. In this article, I make an extended case that causal theorists employing the ‘probability raising’ idea should pay attention to the continuity question. Specifically, if the probability functions are ‘jumping about’ in ways typical of discontinuous functions, then the stability of the relevant probability increase is called into question. The rub, however, is that sweeping requirements for either continuity or discontinuity are problematic and, as I argue, this constitutes a ‘continuity bind’. Hence more subtle considerations and constraints are needed, two of which I consider: utilizing discontinuous first derivatives of continuous probability functions, and abandoning point probability for imprecise probability. Contents: 1 Introduction; 2 Probability Trajectories and Continuity (2.1 Probability trajectories; 2.2 Causation as discontinuous jumps; 2.3 Against systematic discontinuity); 3 Broader Discontinuity Concerns; 4 The Continuity Bind (4.1 Retaining continuity with discontinuous first derivatives; 4.2 Imprecise probability trajectories); 5 Concluding Remarks; Appendix.
According to the fitting-attitude analysis of value (FA-analysis), to be valuable is to be a fitting object of a pro-attitude. In earlier publications, setting off from this format of analysis, I proposed a modelling of value relations which makes room for incommensurability in value. In this paper, I first recapitulate the value modelling and then move on to suggest adopting a structurally similar analysis of probability. Indeed, many probability theorists from Poisson onwards did adopt an analysis of this kind. This move allows us to formally model probability and probability relations in essentially the same way as value and value relations. One of the advantages of the model is that we get a new account of Keynesian incommensurable probabilities, which goes beyond Keynes in distinguishing between different types of incommensurability. It also becomes possible to draw a clear distinction between incommensurability and vagueness (indeterminacy) in probability comparisons.
Charles Stein discovered a paradox in 1955 that many statisticians think is of fundamental importance. Here we explore its philosophical implications. We outline the nature of Stein’s result and of subsequent work on shrinkage estimators; then we describe how these results are related to Bayesianism and to model selection criteria like AIC. We also discuss their bearing on scientific realism and instrumentalism. We argue that results concerning shrinkage estimators underwrite a surprising form of holistic pragmatism.
Suppose several individuals (e.g., experts on a panel) each assign probabilities to some events. How can these individual probability assignments be aggregated into a single collective probability assignment? This article reviews several proposed solutions to this problem. We focus on three salient proposals: linear pooling (the weighted or unweighted linear averaging of probabilities), geometric pooling (the weighted or unweighted geometric averaging of probabilities), and multiplicative pooling (where probabilities are multiplied rather than averaged). We present axiomatic characterisations of each class of pooling functions (most of them classic, but one new) and argue that linear pooling can be justified procedurally, but not epistemically, while the other two pooling methods can be justified epistemically. The choice between them, in turn, depends on whether the individuals' probability assignments are based on shared information or on private information. We conclude by mentioning a number of other pooling methods.
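The three pooling rules can be written down in a few lines for a single event and its complement; a minimal sketch with illustrative credences and equal weights (geometric and multiplicative pooling renormalise over the two-cell partition):

```python
import math

def linear_pool(ps, ws):
    """Weighted arithmetic average of the individual probabilities for an event."""
    return sum(w * p for p, w in zip(ps, ws))

def geometric_pool(ps, ws):
    """Weighted geometric average, renormalised over the event and its complement."""
    yes = math.prod(p ** w for p, w in zip(ps, ws))
    no  = math.prod((1 - p) ** w for p, w in zip(ps, ws))
    return yes / (yes + no)

def multiplicative_pool(ps):
    """Probabilities multiplied (unweighted) and renormalised over the binary partition."""
    yes = math.prod(ps)
    no  = math.prod(1 - p for p in ps)
    return yes / (yes + no)

# Three experts' credences in the same event, with equal weights (illustrative).
ps, ws = [0.7, 0.8, 0.6], [1/3, 1/3, 1/3]
print(linear_pool(ps, ws), geometric_pool(ps, ws), multiplicative_pool(ps))
# 0.70, ~0.71, ~0.93: multiplicative pooling treats the opinions as independent evidence
```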
While pragmatic arguments for numerical probability axioms have received much attention, justifications for axioms of qualitative probability have been less discussed. We offer an argument for the requirement that an agent’s qualitative judgments be probabilistically representable, inspired by, but importantly different from, the Money Pump argument for transitivity of preference and Dutch book arguments for quantitative coherence. The argument is supported by a theorem, to the effect that a subject is systematically susceptible to dominance given her preferred acts, if and only if the subject’s comparative judgments preclude representation by a standard probability measure.
Genetic drift (variously called “random drift”, “random genetic drift”, or sometimes just “drift”) has been a source of ongoing controversy within the philosophy of biology and evolutionary biology communities, to the extent that even the question of what drift is has become controversial. There seems to be agreement that drift is a chance (or probabilistic or statistical) element within population genetics and within evolutionary biology more generally, and that the term “random” isn’t invoking indeterminism or any technical mathematical meaning, but that’s about where agreement ends. Yet genetic drift models are a staple topic in population genetics textbooks and research, with genetic drift described as one of the main factors of evolution alongside selection, mutation, and migration. Some claim that genetic drift has played a major role in evolution (particularly molecular evolution), while others claim it to be minor. This article examines these and other controversies.

In order to break through the logjam of competing definitions of drift, this entry begins with a brief history of the concept, before examining various philosophical claims about the proper characterization of drift and whether it can be distinguished from natural selection; the relation of drift to debates over statisticalism; whether drift can be detected empirically and if so, how; and the proper understanding of drift as a model and as a (purported) law.
There is a widespread view that in order to be rational we must mostly know what we believe. In the probabilistic tradition this is defended by arguments that a person who failed to have this knowledge would be vulnerable to sure loss, or probabilistically incoherent. I argue that even gross failure to know one's own beliefs need not expose one to sure loss, and does not if we follow a generalization of the standard bridge principle between first-order and second-order beliefs. This makes it possible for a subject to use probabilistic decision theory to manage in a rational way cases of potential failure of this self-knowledge, as we find in implicit bias. Through such cases I argue that it is possible for uncertainty about what our beliefs are to be not only rationally permissible but advantageous.