The concern of deductive logic is generally viewed as the systematic recognition of logical principles, i.e., of logical truths. This paper presents and analyzes different instantiations of the three main interpretations of logical principles, viz. as ontological principles, as empirical hypotheses, and as true propositions in virtue of meanings. I argue in this paper that logical principles are true propositions in virtue of the meanings of the logical terms within a certain linguistic framework. Since these principles also regulate and control the process of deduction in inquiry, i.e., they are prescriptive for the use of language and thought in inquiry, I argue that logic may, and should, be seen as an instrument or as a way of proceeding (modus procedendi) in inquiry.
This paper attempts to motivate the view that instead of rejecting modus ponens as invalid in certain situations, one could preserve its validity by associating such situations with non-normal interpretations of logical connectives.
Vann McGee has recently argued that Belnap’s criteria constrain the formal rules of classical natural deduction to uniquely determine the semantic values of the propositional logical connectives and quantifiers if the rules are taken to be open-ended, i.e., if they are truth-preserving within any mathematically possible extension of the original language. The main assumption of his argument is that for any class of models there is a mathematically possible language in which there is a sentence true in just those models. I show that this assumption does not hold for the class of models of classical propositional logic. In particular, I show that the existence of non-normal models for negation undermines McGee’s argument.
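A simple way to see how non-normal models for negation arise (my own sketch in Python, not McGee’s construction): the valuation that assigns truth to every formula respects classical derivability, since it never makes the premises of a valid sequent true and the conclusion false, yet it fails to interpret negation truth-functionally.

```python
# A non-normal model for negation: the valuation making EVERY formula
# true is consistent with classical derivability, but it does not treat
# "~" as toggling truth values, since p and ~p come out true together.

def trivial_valuation(formula):
    """Assign True to every formula, regardless of its shape."""
    return True

# A few classically valid sequents, written as (premises, conclusion).
sequents = [
    (["p", "p -> q"], "q"),   # modus ponens
    (["p"], "~~p"),           # double-negation introduction
    (["p & ~p"], "q"),        # ex falso quodlibet
]

# The trivial valuation never falsifies a valid sequent...
for premises, conclusion in sequents:
    if all(trivial_valuation(f) for f in premises):
        assert trivial_valuation(conclusion)

# ...but it is non-normal for negation: a normal valuation would make
# the value of ~p the opposite of the value of p.
assert trivial_valuation("p") == trivial_valuation("~p") == True
```

Because derivability only constrains the pattern "if premises true, then conclusion true", a valuation like this one can satisfy the entire calculus without respecting the intended meaning of negation.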
Starting from certain metalogical results, I argue that first-order logical truths of classical logic are a priori and necessary. Afterwards, I formulate two arguments for the idea that first-order logical truths are also analytic, namely, I first argue that there is a conceptual connection between aprioricity, necessity, and analyticity, such that aprioricity together with necessity entails analyticity; then, I argue that the structure of natural deduction systems for FOL displays the analyticity of its truths. Consequently, each philosophical approach to these truths should account for this evidence, i.e., that first-order logical truths are a priori, necessary, and analytic, and it is my contention that the semantic account is a better candidate.
The problem analysed in this paper is whether we can gain knowledge by using valid inferences, and how we can explain this process from a model-theoretic perspective. According to the paradox of inference (Cohen & Nagel 1936/1998, 173), it is logically impossible for an inference to be both valid and for its conclusion to possess novelty with respect to the premises. I argue in this paper that valid inference has an epistemic significance, i.e., it can be used by an agent to enlarge his knowledge, and that this significance can be accounted for in model-theoretic terms. I will argue first that the paradox is based on an equivocation: it arises because logical containment, i.e., logical implication, is identified with epistemological containment, i.e., the thesis that knowledge of the premises entails knowledge of the conclusion. Second, I will argue that a truth-conditional theory of meaning has the necessary resources to explain the epistemic significance of valid inferences. I will explain this epistemic significance starting from Carnap’s semantic theory of meaning and Tarski’s notion of satisfaction. In this way I will counter Prawitz’s (2012b) claim that a truth-conditional theory of meaning is not able to account for the legitimacy of valid inferences, i.e., their epistemic significance.
We argue that, if taken seriously, Kripke's view that a language for science can dispense with a negation operator is to be rejected. Part of the argument is a proof that positive logic, i.e., classical propositional logic without negation, is not categorical.
K. R. Popper distinguished between two main uses of logic, the demonstrational one, in mathematical proofs, and the derivational one, in the empirical sciences. These two uses are governed by the following methodological constraints: in mathematical proofs one ought to use minimal logical means (logical minimalism), while in the empirical sciences one ought to use the strongest available logic (logical maximalism). In this paper I discuss whether Popper’s critical rationalism is compatible with a revision of logic in the empirical sciences, given the condition of logical maximalism. Apparently, if one ought to use the strongest logic in the empirical sciences, logic would remain immune to criticism and, thus, non-revisable. I will show that critical rationalism is theoretically compatible with a revision of logic in the empirical sciences. However, a question that remains to be clarified by the critical rationalists is what kind of evidence would lead them to revise the system of logic that underlies a physical theory such as quantum mechanics. Popper’s falsificationist methodology will be compared with the extension of the abductive methodology from the empirical sciences to logic recently advocated by T. Williamson, since both arrive at the same conclusion concerning the status of classical logic.
Logical inferentialism maintains that the formal rules of inference fix the meanings of the logical terms. The categoricity problem points to the fact that the standard formalizations of classical logic do not uniquely determine the intended meanings of its logical terms, i.e., these formalizations are not categorical. This means that there are different interpretations of the logical terms that are consistent with the relation of logical derivability in a logical calculus. In the case of quantificational logic, the categoricity problem is generated by the finite nature of the standard calculi, and one direction in which it can be solved is to strengthen the deductive systems by adding infinite rules (such as the ω-rule), i.e., to construct a full formalization. Another main direction is to provide a natural semantics for the standard rules of inference, i.e., a semantics for which these rules are categorical. My aim in this paper is to analyze some recent approaches for solving the categoricity problem and to argue that a logical inferentialist should accept the infinite rules of inference for the first-order quantifiers, since our use of the expressions “all” and “there is” leads us beyond concrete and finite reasoning, and human beings do sometimes employ infinite rules of inference in their reasoning.
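The ω-rule mentioned above can be stated, for a language of arithmetic with a numeral $\overline{n}$ for each natural number, as the infinitary inference:

```latex
\frac{\vdash \varphi(\overline{0}) \qquad \vdash \varphi(\overline{1}) \qquad \vdash \varphi(\overline{2}) \qquad \cdots}{\vdash \forall x\, \varphi(x)} \;(\omega)
```

Unlike the standard introduction rule for the universal quantifier, it has infinitely many premises, one for each numeral, which is what rules out non-standard interpretations of the quantifier at the price of finite surveyability of derivations.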
A system of logic usually comprises a language for which a model-theory and a proof-theory are defined. The model-theory defines the semantic notion of model-theoretic logical consequence (⊨), while the proof-theory defines the proof-theoretic notion of logical consequence (or logical derivability, ⊢). If the system in question is sound and complete, then the two notions of logical consequence are extensionally equivalent. The concept of full formalization is a more restrictive one and requires in addition the preservation of the standard meanings of the logical terms in all the admissible interpretations of the logical calculus, as it is proof-theoretically defined. Although classical first-order logic is sound and complete, its standard formalizations fall short of being full formalizations, since they allow non-intended interpretations. This fact poses a challenge for the logical inferentialism program, whose main tenet is that the meanings of the logical terms are uniquely determined by the formal axioms or rules of inference that govern their use in a logical calculus, i.e., logical inferentialism requires a categorical calculus. This paper is the first part of a more elaborate study which will analyze the categoricity problem from its beginning until the most recent approaches. I will start by describing the problem of a full formalization in the general framework in which Carnap (1934/1937, 1943) formulated it for classical logic. Then, in sections IV and V, I shall discuss the way in which the mathematicians B.A. Bernstein (1932) and E.V. Huntington (1933) had previously formulated and analyzed it in algebraic terms for propositional logic and, finally, I shall discuss some critical reactions to these approaches formulated by Nagel (1943), Hempel (1943), Fitch (1944), and Church (1944).
As stakeholders increasingly complain about the use of conflict minerals in consumer products, where they are often invisible in the final product, firms across industries are implementing conflict mineral management practices. Conflict minerals are those whose systematic exploitation and trade contribute to human rights violations in the country of extraction and surrounding areas. Supply chain managers in the Western world, in particular, are challenged to take reasonable steps to identify and prevent risks associated with these resources, given the globally dispersed nature of supply chains and the opacity of the origin of commodities. Supply chain due diligence (SCDD) represents a holistic concept for proactively managing supply chains so as to effectively reduce the likelihood of the use of conflict minerals. Based on an exploratory study with 27 semi-structured interviews within five European industries, we provide insights into patterns of implementation, key motivational factors, barriers and enablers, and impacts of SCDD in mineral supply chains. Our results contribute to both theory and practice, as we provide first insights into SCDD practices and make recommendations for an industry-wide implementation of SCDD. Altogether, this study provides the basis for future theory-testing research in the context of SCDD and conflict mineral management.
Media increasingly accuse firms of exploiting suppliers, and these allegations often result in lurid headlines that threaten the reputations, and therefore the business success, of these firms. Yet the phenomenon of supplier exploitation has neither been investigated from a rigorous ethical standpoint, nor have answers been provided as to why some firms pursue exploitative approaches. By systematically contrasting economic liberalism and just prices as two divergent perspectives on supplier exploitation, we introduce a distinction between common business practice and unethical supplier exploitation. Since supplier exploitation is based on power, we elucidate several levels of power as antecedents and investigate the role of ethical climate as a moderator. This study extends Victor and Cullen’s ethical climate matrix along a supply chain dimension and is summarized in an integrated conceptual model of five propositions for future theory testing. The results provide a frame of reference for executives and scholars, who can now delineate unethical exploitation and better understand important antecedents of the phenomenon.
We interpret solution rules on a class of simple allocation problems as data on the choices of a policy maker. We analyze conditions under which the policy maker’s choices are (i) rational, (ii) transitive-rational, and (iii) representable; that is, they coincide with maximization of a (i) binary relation, (ii) transitive binary relation, and (iii) numerical function on the allocation space. Our main results are as follows: (i) a well-known property, contraction independence (a.k.a. IIA), is equivalent to rationality; (ii) every contraction independent and other-c monotonic rule is transitive-rational; and (iii) every contraction independent and other-c monotonic rule, if additionally continuous, can be represented by a numerical function.
We propose and axiomatically analyze a class of rational solutions to simple allocation problems where a policy-maker allocates an endowment $E$ among $n$ agents described by a characteristic vector $c$. We propose a class of recursive rules which mimic a decision process where the policy-maker initially starts with a reference allocation of $E$ in mind and then uses the data of the problem to recursively adjust his previous allocation decisions. We show that recursive rules uniquely satisfy rationality, $c$-continuity, and other-$c$ monotonicity. We also show that a well-known member of this class, the Equal Gains rule, uniquely satisfies rationality, $c$-continuity, and equal treatment of equals.
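As a concrete sketch (my own illustration; the paper’s formal definitions may differ in detail), the Equal Gains rule is standardly identified in the allocation literature with the constrained equal awards rule: each agent $i$ receives $\min(c_i, \lambda)$, with $\lambda$ chosen so that the awards exhaust the endowment $E$.

```python
def equal_gains(E, c, tol=1e-9):
    """Constrained equal awards: agent i gets min(c_i, lam), with lam
    found by bisection so that the awards sum to the endowment E.
    Assumes 0 <= E <= sum(c)."""
    lo, hi = 0.0, max(c)
    while hi - lo > tol:
        lam = (lo + hi) / 2
        if sum(min(ci, lam) for ci in c) < E:
            lo = lam
        else:
            hi = lam
    return [min(ci, lo) for ci in c]

# Equal treatment of equals: agents with equal claims get equal awards.
print(equal_gains(30, [10, 20, 30]))  # approximately [10, 10, 10]
print(equal_gains(45, [10, 20, 30]))  # approximately [10, 17.5, 17.5]
```

Note how an agent’s own claim caps her award, while changes in the other agents’ claims move the common level $\lambda$, which is the behaviour that the other-$c$ monotonicity axiom tracks.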
Background: Although the number of reporting guidelines has grown rapidly, few have gone through an updating process. The STARD statement, published in 2003 to help improve the transparency and completeness of reporting of diagnostic accuracy studies, was recently updated in a systematic way. Here, we describe the steps taken and justify the changes made. Results: A 4-member Project Team coordinated the updating process; a 14-member Steering Committee was regularly solicited by the Project Team when making critical decisions. First, a review of the literature was performed to identify topics and items potentially relevant to the STARD updating process. After this, the 85 members of the STARD Group were invited to participate in two online surveys to identify items that needed to be modified, removed from, or added to the STARD checklist. Based on the results of the literature review process, 33 items were presented to the STARD Group in the online survey: 25 original items and 8 new items; 73 STARD Group members completed the first survey, and 79 STARD Group members completed the second survey. Then, an in-person consensus meeting was organized among the members of the Project Team and Steering Committee to develop a consensual draft version of STARD 2015. This version was piloted in three rounds among a total of 32 expert and non-expert users. Piloting mostly led to rewording of items. After this, the update was finalized. The updated STARD 2015 list now consists of 30 items. Compared to the previous version of STARD, three original items were each converted into two new items, four original items were incorporated into other items, and seven new items were added. Conclusions: After a systematic updating process, STARD 2015 provides an updated list of 30 essential items for reporting diagnostic accuracy studies.
The Royal Society possesses three long-focus simple lenses of diameters 195, 210 and 230 mm, all inscribed with the signature ‘C. Huygens’ and various dates in the year 1686. These prove to have been made by Constantine Huygens, the elder brother of the famous Christiaan Huygens. All three lenses have been examined by a variety of physical and chemical methods, both to define their optical characteristics and to establish the composition of dated samples of late-seventeenth-century Continental glass. The focal lengths of 37·9, 50·1 and 65·2 metres found by combination agree with Huygens’ own values of 122, 170 and 210 feet respectively, and are so great that practical employment of the lenses in aerial telescopes has rarely been achieved. All are made from the same very poor glass—a heterogeneous and discoloured potash-rich ‘forest glass’—with a refractive index of 1·516, a constringence of 60, and a density of 2·5 g cm−3. The three lenses were ground with just two concave laps, of radii of curvature 27·11 and 71·15 metres, one lens being plano-convex to a high degree of accuracy. A claim by previous investigators that one lens was a ‘light flint’ glass has been disproved.
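The reported figures can be cross-checked with a rough calculation of my own (using the thin-lens lensmaker’s equation, not the authors’ measurement method, and assuming a biconvex lens with one face ground on each of the two reported laps):

```python
# Thin-lens lensmaker's equation: 1/f = (n - 1) * (1/R1 + 1/R2).
# n and the radii of curvature are taken from the abstract; treating one
# lens as biconvex with one face from each lap is my own assumption.
n = 1.516              # refractive index of the glass
R1, R2 = 27.11, 71.15  # radii of curvature of the two laps, in metres

f = 1 / ((n - 1) * (1 / R1 + 1 / R2))
print(round(f, 1))     # about 38.0 m, close to the reported 37.9 m
```

The closeness of this estimate to the shortest reported focal length suggests the figures in the abstract are internally consistent.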
It has been frequently asserted that the western Roman supreme commander Stilicho’s neglect of the Transalpine provinces during the usurpation of Constantine III contributed to his eventual downfall in 408. Stilicho’s fatal flaw, in this recurring opinion, seems to have been a desire to annex eastern Illyricum, for which he sought to employ Alaric. In a volte-face, he then wished to use Alaric as the leader of the western field army that was supposed to bring down Constantine. The aim of this article is to advance several notes of critique on this narrative, which has had a long life in Stilichonian scholarship. Instead, it will demonstrate that a) the threat of Constantine has been overestimated, b) Stilicho had no designs on annexing eastern Illyricum, c) he had a military strategy ready against Constantine that was sound and in tandem with earlier civil wars, and d) the intended role of Alaric during this enterprise has been misunderstood. Nevertheless, Stilicho’s military strategy in 408 proved to be fundamentally corrosive towards his hitherto carefully built-up political capital. Olympius, the architect of his demise, possessed precise knowledge of Stilicho’s army preparations, as befitted the magister officiorum, and this provided him with the perfect material to fabricate stories of Stilicho coveting a throne while neglecting the west. This set in motion the plot that ultimately brought down Stilicho.
Before now, there has been no comprehensive analysis of the multiple relations between A. Comte’s and J.S. Mill’s positive philosophy and Franz Brentano’s work. The present volume aims to fill this gap and to identify Brentano’s position in the context of the positive philosophy of the 19th century by analyzing the following themes: the concept of positive knowledge; philosophy and empirical, genetic and descriptive psychology as sciences in Brentano, Comte and Mill; the strategies for the rebirth of philosophy in these three authors; the theory of the ascending stages of thought and of their decline, and of intentionality in Comte and Brentano; the reception of Comte’s positivism in Whewell and Mill; induction and phenomenalism in Brentano, Mill and Bain; the problem of the “I” in Hume and Brentano; mathematics as a foundational science in Brentano, Kant and Mill; Brentano’s critique of Mach’s positivism; the concept of positive science in Brentano’s metaphysics and in Husserl’s early phenomenology; the reception of Brentano’s psychology in Twardowski; and the Brentano Institute at Oxford. The volume also contains translations of Brentano’s most significant writings on philosophy as science. I. Tănăsescu, Romanian Academy; A. Bejinariu, Romanian Society of Phenomenology; S. Krantz Gabriel, Saint Anselm College; C. Stoenescu, University of Bucharest.
Despite the so-called liberal reforms of the 1860s, the Russian Empire remained throughout the nineteenth century an empire whose sovereign bore the official title “Tsar and Autocrat of all the Russias”. The autocracy differed sharply from the French monarchy of the Ancien Régime, which was hampered and constrained by a thousand traditions and relics of the past: customary law and Roman law, old laws of diverse origins still in force, privileges, prerogatives, immunities, franchises, exceptions and exemptions, an independent Church, and so on. In the Russian system of “caesaropapism”, the emperor also served as supreme pontiff of the Orthodox religion: he was its “defender and guardian”. In 1881, Tsar Alexander II was killed in an assassination, just as the granting of a Constitution seemed imminent. His reign had been marked by a series of reforms, the most famous of which (1861) emancipated the serfs. Konstantin Petrovich Pobedonostsev (1827–1907), the celebrated Russian jurist and statesman who had been tutor to the sons of Alexander II, then became the principal representative of a politics of counter-reform, a politics refusing the slightest concession to liberal ideas. Appointed procurator of the Holy Synod by Alexander III, he exercised a preponderant influence in Russia during the first part of that tsar’s reign (that is, from 1881 to 1887). Some of his writings, pamphlets and manifestos were collected in the Recueil de Moscou (Moscow Collection), published in 1896.
There we see Pobedonostsev denounce, one by one, all the institutions which, if ever imported from the West, might limit the prerogatives of the autocratic tsar: separation of Church and State; universal suffrage and talk of the supposed sovereignty of the people; free and compulsory education, implying a limitation of child labour; freedom of the press and constant invocation of “public opinion”; the (English-style) institution of popular juries in the courts. But we are surprised to find as well, in the same Recueil de Moscou, a carefully reasoned catalogue of the principal “pathologies” which, according to Pobedonostsev, necessarily accompany a regime of representative democracy: corruption of representatives; incessant deals and bargaining between parties; mass indifference and hypertrophy of the personal ego among voters; the omnipresence of a press devoid of any elective mandate yet speaking in the name of the public; and so on. Such a catalogue undoubtedly constitutes what is least dated in the work of Pobedonostsev, a reactionary “all down the line”.
One thing nearly all epistemologists agree upon is that Gettier cases are decisive counterexamples to the tripartite analysis of knowledge; whatever else is true of knowledge, it is not merely belief that is both justified and true. They now agree that knowledge is not justified true belief because this analysis is consistent with there being too much luck present in the cases, and that knowledge excludes such luck. This is to endorse what has become known as the ‘anti-luck platitude’.

But what if generations of philosophers have been mistaken about this, blinded at least partially by a deeply entrenched professional bias? There has been another, albeit minority, response to Gettier: to deny that the cases are counterexamples at all.

Stephen Hetherington, a principal and vocal proponent of this view, advances what he calls the ‘Knowing Luckily Proposal’. If Hetherington is correct, this would call for a major re-evaluation and re-orientation of post-Gettier analytic epistemology, since much of it assumes the anti-luck platitude both in elucidating the concept of knowledge and in the application of such accounts to central philosophical problems. It is therefore imperative that the Knowing Luckily Proposal be considered and evaluated in detail.

In this paper I critically assess the Knowing Luckily Proposal. I argue that while it draws our attention to certain important features of knowledge, ultimately it fails, and the anti-luck platitude emerges unscathed. Whatever else is true of knowledge, therefore, it is non-lucky true belief. For a proposition to count as knowledge, we cannot arrive at its truth accidentally or for the wrong reason.