The current COVID-19 pandemic and the previous SARS and MERS outbreaks of 2003 and 2012 have resulted in a series of major global public health crises. We argue that, in the interest of developing effective and safe vaccines and drugs and of better understanding coronaviruses and the associated disease mechanisms, it is necessary to integrate the large and exponentially growing body of heterogeneous coronavirus data. Ontologies play an important role in standards-based knowledge and data representation, integration, sharing, and analysis. Accordingly, we initiated the development of the community-based Coronavirus Infectious Disease Ontology (CIDO) in early 2020. As an Open Biomedical Ontology (OBO) library ontology, CIDO is open source and interoperable with other existing OBO ontologies. CIDO is aligned with the Basic Formal Ontology and the Viral Infectious Disease Ontology, and it has imported terms from over 30 OBO ontologies. For example, CIDO imports all SARS-CoV-2 protein terms from the Protein Ontology, COVID-19-related phenotype terms from the Human Phenotype Ontology, and over 100 COVID-19 vaccine terms (for vaccines both authorized and in clinical trials) from the Vaccine Ontology. CIDO systematically represents variants of SARS-CoV-2 and over 300 amino acid substitutions therein, along with over 300 diagnostic kits and methods. CIDO also describes hundreds of host-coronavirus protein-protein interactions (PPIs) and the drugs that target proteins in these PPIs. CIDO has been used to model COVID-19-related phenomena in areas such as epidemiology, and its scope was evaluated by visual analysis supported by a summarization network method. CIDO has been used in various applications such as term standardization, inference, natural language processing (NLP), and clinical data integration. We have applied the amino acid variant knowledge present in CIDO to analyze differences between the SARS-CoV-2 Delta and Omicron variants. CIDO's integrative host-coronavirus PPI and drug-target knowledge has also been used to support drug repurposing for COVID-19 treatment. In sum, CIDO represents entities and relations in the domain of coronavirus diseases with a special focus on COVID-19; it supports shared knowledge representation, data and metadata standardization and integration, and has been used in a range of applications.
This important and comprehensive work of 18th-century Islamic religious thought, written in Arabic by a pre-eminent South Asian scholar, provides an extensive and detailed picture of Muslim theology and interpretive strategies on the eve of the modern period.
The Borel–Kolmogorov Paradox is typically taken to highlight a tension between our intuition that certain conditional probabilities with respect to probability zero conditioning events are well defined and the mathematical definition of conditional probability by Bayes’ formula, which loses its meaning when the conditioning event has probability zero. We argue in this paper that the theory of conditional expectations is the proper mathematical device for conditionalization and that this theory allows conditionalization with respect to probability zero events. The conditional probabilities on probability zero events in the Borel–Kolmogorov Paradox can also be calculated using conditional expectations. The alleged clash arising from the fact that one obtains different values for the conditional probabilities on probability zero events, depending on what conditional expectation one uses to calculate them, is resolved by showing that the different conditional probabilities obtained using different conditional expectations cannot be interpreted as calculating, in different parametrizations, the conditional probabilities of the same event with respect to the same conditioning conditions. We conclude that there is no clash between the correct intuition about what the conditional probabilities with respect to probability zero events are and the technically proper concept of conditionalization via conditional expectations: the Borel–Kolmogorov Paradox is just a pseudo-paradox.
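The conditioning device referred to here can be stated compactly (this is a sketch of the standard textbook definition, not of the paper's specific calculations). Given a probability space $(X, \mathcal{S}, p)$ and a sub-$\sigma$-algebra $\mathcal{A} \subseteq \mathcal{S}$, the conditional expectation $E(f \mid \mathcal{A})$ of an integrable random variable $f$ is the ($p$-almost surely unique) $\mathcal{A}$-measurable function satisfying

```latex
\int_A E(f \mid \mathcal{A}) \, dp \;=\; \int_A f \, dp
\qquad \text{for every } A \in \mathcal{A},
```

and the conditional probability of an event $S \in \mathcal{S}$ is $p(S \mid \mathcal{A}) = E(\chi_S \mid \mathcal{A})$, where $\chi_S$ is the characteristic function of $S$. Since this definition nowhere divides by the probability of a conditioning event, it remains meaningful when the conditioning events have probability zero.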
We investigate the general properties of general Bayesian learning, where “general Bayesian learning” means inferring a state from another that is regarded as evidence, and where the inference is conditionalizing the evidence using the conditional expectation determined by a reference probability measure representing the background subjective degrees of belief of a Bayesian Agent performing the inference. States are linear functionals that encode probability measures by assigning expectation values to random variables via integrating them with respect to the probability measure. If a state can be learned from another this way, then it is said to be Bayes accessible from the evidence. It is shown that the Bayes accessibility relation is reflexive, antisymmetric and non-transitive. If every state is Bayes accessible from some other state defined on the same set of random variables, then the set of states is called weakly Bayes connected. It is shown that the set of states is not weakly Bayes connected if the probability space is standard. The set of states is called weakly Bayes connectable if, given any state, the probability space can be extended in such a way that the given state becomes Bayes accessible from some other state in the extended space. It is shown that probability spaces are weakly Bayes connectable. Since conditioning using the theory of conditional expectations includes both Bayes’ rule and Jeffrey conditionalization as special cases, the results presented substantially generalize some results obtained earlier for Jeffrey conditionalization.
The Bayes Blind Spot of a Bayesian Agent is the set of probability measures on a Boolean algebra that are absolutely continuous with respect to the background probability measure of the Agent on the algebra and which the Agent cannot learn by conditionalizing, no matter what evidence he has about the elements in the Boolean algebra. It is shown that if the Boolean algebra is finite, then the Bayes Blind Spot is a very large set: it has the same cardinality as the set of all probability measures; it has the same measure as the set of all probability measures; and it is a “fat” set in the topological sense in the set of all probability measures taken with its natural topology.
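A minimal finite illustration of the underlying idea (my own toy example, not taken from the paper): a single conditionalization can only produce a posterior that is proportional to the prior on some positive-probability event, so on a finite algebra most measures absolutely continuous with respect to the prior are unreachable and lie in the Blind Spot.

```python
from fractions import Fraction
from itertools import combinations

# Prior of the Bayesian Agent on a three-atom Boolean algebra.
atoms = [0, 1, 2]
prior = {0: Fraction(1, 2), 1: Fraction(1, 4), 2: Fraction(1, 4)}

def conditionalize(p, event):
    """Bayes' rule: the posterior p(. | event); event must have p(event) > 0."""
    mass = sum(p[a] for a in event)
    return tuple(p[a] / mass if a in event else Fraction(0) for a in atoms)

# Every posterior reachable by a single conditionalization on some evidence event.
events = [set(c) for r in range(1, 4) for c in combinations(atoms, r)]
reachable = {conditionalize(prior, e) for e in events}

# q is absolutely continuous with respect to the prior (the prior has no null
# sets), yet q is not proportional to the prior on any event, so it is not
# reachable: q lies in the Bayes Blind Spot of this agent.
q = (Fraction(3, 5), Fraction(1, 5), Fraction(1, 5))
print(q in reachable)   # False
print(len(reachable))   # only 7 reachable posteriors
```

The same counting argument scales: on an algebra with n atoms there are only finitely many reachable posteriors, while the measures absolutely continuous with respect to the prior form a continuum.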
In Bayesian belief revision a Bayesian agent revises his prior belief by conditionalizing the prior on some evidence using Bayes’ rule. We define a hierarchy of modal logics that capture the logical features of Bayesian belief revision. Elements in the hierarchy are distinguished by the cardinality of the set of elementary propositions on which the agent’s prior is defined. Inclusions among the modal logics in the hierarchy are determined. By linking the modal logics in the hierarchy to the strongest modal companion of Medvedev’s logic of finite problems, it is shown that the modal logic of belief revision determined by probabilities on a finite set of elementary propositions is not finitely axiomatizable.
We prove that, under some technical assumptions on a general, non-classical probability space, the probability space is extendible into a larger probability space that is common cause closed in the sense of containing a common cause of every correlation between elements in the space. It is argued that the philosophical significance of this common cause completability result is that it allows the defence of the Common Cause Principle against certain attempts at falsification. Some open problems concerning possible strengthenings of the common cause completability result are formulated.
In the article [2] a hierarchy of modal logics has been defined to capture the logical features of Bayesian belief revision. Elements in that hierarchy were distinguished by the cardinality of the set of elementary propositions. By linking the modal logics in the hierarchy to the modal logics of Medvedev frames it has been shown that the modal logic of Bayesian belief revision determined by probabilities on a finite set of elementary propositions is not finitely axiomatizable. However, the infinite case remained open. In this article we prove that the modal logic of Bayesian belief revision determined by standard Borel spaces is also not finitely axiomatizable.
The classical interpretation of probability together with the principle of indifference is formulated in terms of probability measure spaces in which the probability is given by the Haar measure. A notion called labelling invariance is defined in the category of Haar probability spaces; it is shown that labelling invariance is violated, and Bertrand’s paradox is interpreted as the proof of violation of labelling invariance. It is shown that Bangu’s attempt to block the emergence of Bertrand’s paradox by requiring the re-labelling of random events to preserve randomness cannot succeed non-trivially. A non-trivial strategy to preserve labelling invariance is identified, and it is argued that, under the interpretation of Bertrand’s paradox suggested in the paper, the paradox does not undermine either the principle of indifference or the classical interpretation and is in complete harmony with how mathematical probability theory is used in the sciences to model phenomena. It is shown in particular that violation of labelling invariance does not entail that labelling of random events affects the probabilities of random events. It also is argued, however, that the content of the principle of indifference cannot be specified in such a way that it can establish the classical interpretation of probability as descriptively accurate or predictively successful.
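The paradox discussed here is easy to reproduce numerically. Below is a self-contained Monte Carlo sketch (my own illustration, not the paper's measure-theoretic formalism) of the three classical chord-selection methods for a unit circle: each method is a different "labelling" of the random events, and each assigns a different probability to the event that a random chord is longer than the side of the inscribed equilateral triangle.

```python
import math
import random

random.seed(0)
N = 200_000
SIDE = math.sqrt(3)  # side length of the equilateral triangle inscribed in the unit circle

def chord_from_endpoints():
    # Method 1: two endpoints uniform on the circle.
    a, b = random.uniform(0, 2 * math.pi), random.uniform(0, 2 * math.pi)
    return 2 * abs(math.sin((a - b) / 2))

def chord_from_radial_midpoint():
    # Method 2: midpoint uniform along a randomly chosen radius.
    # Chord length depends only on the distance d of the midpoint from the centre.
    d = random.uniform(0, 1)
    return 2 * math.sqrt(1 - d * d)

def chord_from_area_midpoint():
    # Method 3: midpoint uniform over the disk (distance has density 2d).
    d = math.sqrt(random.uniform(0, 1))
    return 2 * math.sqrt(1 - d * d)

for name, draw, exact in [
    ("endpoints", chord_from_endpoints, 1 / 3),
    ("radial midpoint", chord_from_radial_midpoint, 1 / 2),
    ("area midpoint", chord_from_area_midpoint, 1 / 4),
]:
    p = sum(draw() > SIDE for _ in range(N)) / N
    print(f"{name}: estimated {p:.3f} (exact {exact:.3f})")
```

The three estimates converge to 1/3, 1/2 and 1/4 respectively: the "same" indifference assumption, expressed under three different labellings, yields three different probabilities.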
“It is this distancing of personal relationships, combined with their replacement by written contractual terms and conditions, which makes the discussion of ethics within a corporate institutionalised context highly limited and problematic.” The challenge is to find means of personalising modern corporations so as to encourage ethical behaviour. Atul K. Shah PhD ACA gained his doctorate from the London School of Economics and is Lecturer in the Department of Accounting and Financial Management at the University of Essex, Wivenhoe Park, Colchester CO4 3SQ; e‐mail [email protected]. This article was conceived while he was Visiting Assistant Professor at the College of Business, University of Maryland, USA. The author wishes to thank Dan Ostas, Lee Preston and Stephen Loeb for helpful comments on earlier drafts.
We continue the investigations initiated in the recent papers where Bayes logics have been introduced to study the general laws of Bayesian belief revision. In Bayesian belief revision a Bayesian agent revises his prior belief by conditionalizing the prior on some evidence using Bayes’ rule. In this paper we take the more general Jeffrey formula as a conditioning device and study the corresponding modal logics that we call Jeffrey logics, focusing mainly on the countable case. The containment relations among these modal logics are determined, and it is shown that the logics of Bayes and Jeffrey updating are very close. It is shown that the modal logic of belief revision determined by probabilities on a finite or countably infinite set of elementary propositions is not finitely axiomatizable. The significance of this result is that it clearly indicates that axiomatic approaches to belief revision might be severely limited.
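The relation between the two conditioning devices compared here is easy to see on a finite space. A minimal sketch (my own toy example; the numbers are illustrative): Jeffrey's formula shifts the probability of an evidence partition cell to a new weight while keeping probabilities proportional inside each cell, and Bayes' rule is the special case where the new weight is 1.

```python
from fractions import Fraction as F

# Prior over four atoms; evidence partition is {E, not-E} with E = {0, 1}.
atoms = [0, 1, 2, 3]
prior = {0: F(1, 8), 1: F(3, 8), 2: F(1, 4), 3: F(1, 4)}
E = {0, 1}

def bayes(p, event):
    """Bayes' rule: conditionalize on learning `event` with certainty."""
    m = sum(p[a] for a in event)
    return {a: (p[a] / m if a in event else F(0)) for a in p}

def jeffrey(p, event, new_weight):
    """Jeffrey's formula: shift the probability of `event` to `new_weight`,
    rescaling proportionally inside the cell and its complement.
    Requires 0 < p(event) < 1."""
    m = sum(p[a] for a in event)
    return {a: (p[a] * new_weight / m if a in event
                else p[a] * (1 - new_weight) / (1 - m)) for a in p}

print(bayes(prior, E))             # posterior concentrated on E
print(jeffrey(prior, E, F(1)))     # Jeffrey with weight 1 coincides with Bayes
print(jeffrey(prior, E, F(3, 4)))  # uncertain evidence: P(E) shifted to 3/4
```

With weight 3/4 the posterior is (3/16, 9/16, 1/8, 1/8): the ratios inside E and inside its complement are preserved, only the weight between the cells changes.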
The literature on antecedents of corporate social responsibility (CSR) strategies of firms has been predominantly content driven. Informed by the managerial sense-making process perspective, we develop a contingency theoretical framework explaining how the political ideology of managers affects the choice of CSR strategy for their firms through their CSR mindset. We also explain to what extent the outcome of this process is shaped by the firm’s internal institutional arrangements and by external factors impacting on the firm. We develop and test several hypotheses using data collected from 129 Chinese managers. The results show that managers with a stronger socialist ideology are likely to develop a mindset favouring CSR, which induces the adoption of a proactive CSR strategy. The CSR mindset mediates the link between socialist ideology and CSR strategy. The strength of the relationship between the CSR mindset and the choice of CSR strategy is moderated by customer response to CSR, industry competition, the role of government, and CSR-related managerial incentives.
A probability space is common cause closed if it contains a Reichenbachian common cause of every correlation in it, and common cause incomplete otherwise. It is shown that a probability space is common cause incomplete if and only if it contains more than one atom and that every space is common cause completable. The implications of these results for Reichenbach's Common Cause Principle are discussed, and it is argued that the principle is only falsifiable if conditions on the common cause are imposed that go beyond the requirements formulated by Reichenbach in the definition of common cause.
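Reichenbach's definition of a common cause C of a correlation between A and B requires that C screen off the correlation (A and B are independent conditional on C and on not-C) and that C raise the probability of both A and B. A small numeric check (my own illustrative distribution, not an example from the paper):

```python
from itertools import product

# Joint distribution over (A, B, C): conditionally on C, and conditionally on
# not-C, the events A and B are independent -- C is built as a common cause.
pC = 0.5
pA_given = {True: 0.8, False: 0.2}   # P(A | C) and P(A | not-C)
pB_given = {True: 0.8, False: 0.2}   # P(B | C) and P(B | not-C)

joint = {}
for a, b, c in product([True, False], repeat=3):
    pc = pC if c else 1 - pC
    pa = pA_given[c] if a else 1 - pA_given[c]
    pb = pB_given[c] if b else 1 - pB_given[c]
    joint[(a, b, c)] = pc * pa * pb

def P(pred):
    """Probability of the event picked out by pred(a, b, c)."""
    return sum(p for outcome, p in joint.items() if pred(*outcome))

pA = P(lambda a, b, c: a)
pB = P(lambda a, b, c: b)
pAB = P(lambda a, b, c: a and b)
print(f"correlation: P(A&B) - P(A)P(B) = {pAB - pA * pB:.3f}")  # positive

# Screening off: P(A&B|C) = P(A|C)P(B|C), and likewise conditional on not-C.
for c_val in (True, False):
    pc = P(lambda a, b, c: c == c_val)
    lhs = P(lambda a, b, c: a and b and c == c_val) / pc
    rhs = (P(lambda a, b, c: a and c == c_val) / pc) * \
          (P(lambda a, b, c: b and c == c_val) / pc)
    print(abs(lhs - rhs) < 1e-12)  # True: C screens off the correlation
```

Here P(A|C) > P(A|not-C) and P(B|C) > P(B|not-C) hold by construction, so all of Reichenbach's conditions are satisfied; common cause completability asks whether such a C can always be found in a suitable extension of the space.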
The role of measure theoretic atomicity in common cause closedness of general probability theories with non-distributive event structures is raised and investigated. It is shown that if a general probability space is non-atomic then it is common cause closed. Conditions are found that entail that a general probability space containing two atoms is not common cause closed but it is common cause closed if it contains only one atom. The results are discussed from the perspective of the Common Cause Principle.
In this paper we discuss the “admissibility troubles” for Bayesian accounts of direct inference proposed by Wallmann and Hawthorne, which concern the existence of surprising, unintuitive defeaters even for mundane cases of direct inference. We first show that one could reasonably suspect that the source of these troubles was informal talk about higher-order probabilities: for cardinality-related reasons, classical probability spaces abound in defeaters for direct inference. We proceed to discuss the issues in the context of the rigorous framework of Higher Probability Spaces (HOPs). However, we show that the issues persist; we prove a few facts which pertain both to classical probability spaces and to HOPs, in our opinion capturing the essence of the problem. In effect we strengthen the message from the admissibility troubles: they arise not only for approaches using classical probability spaces, which are thus necessarily informal about metaprobabilistic phenomena like agents having credences in propositions about chances, but also for at least one respectable framework specifically tailored for rigorous discussion of higher-order probabilities.
In this paper we dispel the supposed “admissibility troubles” for Bayesian accounts of direct inference proposed by Wallmann and Hawthorne, which concern the existence of surprising, unintuitive defeaters even for mundane cases of direct inference. We show that if one follows the majority of authors in the field in using classical probability spaces unimbued with any additional structure, one should expect similar phenomena to arise and should consider them unproblematic in themselves: defeaters abound! We then show that the framework of Higher Probability Spaces allows a natural modelling of the discussed cases, which produces no troubles of this kind.
This paper formulates a notion of independence of subobjects of an object in a general category. Subobject independence is the categorial generalization of what is known as subsystem independence in the context of algebraic relativistic quantum field theory. The content of subobject independence formulated in this paper is morphism co-possibility: two subobjects of an object are defined to be independent if any two morphisms on the two subobjects are jointly implementable by a single morphism on the larger object. The paper investigates features of subobject independence in general, and subobject independence in the category of C*-algebras with operations as morphisms is suggested as a natural subsystem independence axiom to express relativistic locality of the covariant functor in the categorial approach to quantum field theory.
This experiment investigated the effect of format (line vs. bar), viewers’ familiarity with variables, and viewers’ graphicacy (graphical literacy) skills on the comprehension of multivariate (three-variable) data presented in graphs. Fifty-five undergraduates provided written descriptions of data for a set of 14 line or bar graphs, half of which depicted variables familiar to the population and half of which depicted variables unfamiliar to the population. Participants then took a test of graphicacy skills. As predicted, the format influenced viewers’ interpretations of data. Specifically, viewers were more likely to describe x–y interactions when viewing line graphs than when viewing bar graphs, and they were more likely to describe main effects and “z–y” (the variable in the legend) interactions when viewing bar graphs than when viewing line graphs. Familiarity of the data presented and individuals’ graphicacy skills interacted with the influence of graph format. Specifically, viewers were most likely to generate inferences only when they had high graphicacy skills, the data were familiar and thus the information inferred was expected, and the format supported those inferences. Implications for multivariate data display are discussed.
In this paper we study the interaction between symmetric logic and probability. In particular, we axiomatize the convex hull of the set of evaluations of symmetric logic, yielding the notion of probability in symmetric logic. This answers an open problem of Williams (2016) and Paris (2001).
The Bayes Blind Spot of a Bayesian Agent is, by definition, the set of probability measures on a Boolean σ-algebra that are absolutely continuous with respect to the background probability measure of the Agent on the algebra and which the Agent cannot learn by a single conditionalization, no matter what evidence he has about the elements in the Boolean σ-algebra. It is shown that if the Boolean algebra is finite, then the Bayes Blind Spot is a very large set: it has the same cardinality as the set of all probability measures; it has the same measure as the set of all probability measures; and it is a “fat” set in the topological sense in the set of all probability measures taken with its natural topology. Features of the Bayes Blind Spot are determined from the perspective of repeated Bayesian learning when the Boolean algebra is finite. Open problems about the Bayes Blind Spot are formulated in probability spaces with infinite Boolean σ-algebras. The results are discussed from the perspective of Bayesianism.
In this paper we show that the one-generated free three-dimensional polyadic and substitutional algebras Fr1PA3 and Fr1SCA3 are not atomic. What is more, their corresponding logics have Gödel’s incompleteness property. This provides a partial solution to a longstanding open problem of Németi and Maddux going back to Alfred Tarski via the book [12].
We show a somewhat surprising result concerning the relationship between the Principal Principle and its allegedly generalized form. Then, we formulate a few desiderata concerning chance-credence norms and argue that none of the norms widely discussed in the literature satisfies all of them. We suggest that the New Principle comes out as the best contender.
In an important 2006 paper, Nishi Shah defends ‘evidentialism’, the position that only evidence for a proposition’s truth constitutes a reason to believe this proposition. In opposition to Shah, Anthony Robert Booth, Andrew Reisner and Asbjørn Steglich-Petersen argue that things other than evidence of truth, so-called non-evidential or ‘pragmatic’ reasons, constitute reasons to believe a proposition. I argue that we can effectively respond to Shah’s pragmatist critics if, following Shah, we are careful to distinguish the evaluation of the reasons for a belief from the process of actually forming a belief and allowing it to influence action. Drawing this distinction is assisted if we utilize Rudolf Carnap’s probabilistic interpretation of what it means to be disposed to believe a claim.
Why, when asking oneself whether to believe that p, must one immediately recognize that this question is settled by, and only by, answering the question whether p is true? Truth is not an optional end for first-personal doxastic deliberation, providing an instrumental or extrinsic reason that an agent may take or leave at will. Otherwise there would be an inferential step between discovering the truth with respect to p and determining whether to believe that p, involving a bridge premise that it is good to believe the truth with respect to p. But there is no such gap between the two questions within the first-personal deliberative perspective; the question whether to believe that p seems to collapse into the question whether p is true.
The paper takes the Principal Principle to be a norm demanding that the subjective degrees of belief of a Bayesian agent be equal to the objective probabilities once the agent has conditionalized his subjective degrees of belief on the values of the objective probabilities, where the objective probabilities can be not only chances but any other quantities determined objectively. Weak and strong consistency of the Abstract Principal Principle are defined in terms of classical probability measure spaces. It is proved that the Abstract Principal Principle is weakly consistent and that it is strongly consistent in the category of probability measure spaces where the Boolean algebra representing the objective random events is finite. It is argued that it is desirable to strengthen the Abstract Principal Principle by adding a stability requirement to it. Weak and strong consistency of the resulting Stable Abstract Principal Principle are defined, and the strong consistency of the Abstract Principal Principle is interpreted as necessary for a non-omniscient Bayesian agent to be able to have rational degrees of belief in all epistemic situations. It is shown that the Stable Abstract Principal Principle is weakly consistent, but the strong consistency of the Stable Abstract Principal Principle remains an open question. We conclude that we do not yet have proof that Bayesian agents can have rational degrees of belief in every epistemic situation.