An argument that the major metaphysical theories of facts give us no good reason to accept facts in our catalog of the world.

In this book Arianna Betti argues that we have no good reason to accept facts in our catalog of the world, at least as they are described by the two major metaphysical theories of facts. She claims that neither of these theories is tenable: neither the theory according to which facts are special structured building blocks of reality, nor the theory according to which facts are whatever is named by certain expressions of the form “the fact that such and such.” There is reality, and there are entities in reality that we are able to name, but, Betti contends, among these entities there are no facts.

Drawing on metaphysics, the philosophy of language, and linguistics, Betti examines the main arguments in favor of and against facts of the two major sorts, which she distinguishes as compositional and propositional, giving special attention to methodological presuppositions. She criticizes compositional facts (facts as special structured building blocks of reality) and the central argument for them, Armstrong's truthmaker argument. She then criticizes propositional facts (facts as whatever is named in “the fact that” statements) and what she calls the argument from nominal reference, which draws on Quine's criterion of ontological commitment. Betti argues that metaphysicians should stop worrying about facts, and philosophers in general should stop arguing for or against entities on the basis of how we use language.
For more than two millennia, philosophers adhered en masse to ideal standards of scientific rationality going back ultimately to Aristotle’s Analytica posteriora. These standards were progressively shaped by and adapted to new scientific needs and tendencies. Nevertheless, a core of conditions capturing the fundamentals of what a proper science should look like remained remarkably constant all along. Call this cluster of conditions the Classical Model of Science. In this paper we do two things. First, we propose a general and systematized account of the Classical Model of Science. Second, we offer an analysis of the philosophical significance of this model at different historical junctures by giving an overview of the connections it has had with a number of important topics, including the analytic–synthetic distinction, the axiomatic method, the hierarchical order of the sciences, and the status of logic as a science. Our claim is that particularly fruitful insights are gained by seeing such themes against the background of the Classical Model of Science. In an appendix we deal with the historiographical background of this model by considering the systematizations of Aristotle’s theory of science offered by Heinrich Scholz and, in his footsteps, by Evert W. Beth.
According to Vallicella's 'Relations, Monism, and the Vindication of Bradley's Regress' (2002), if relations are to relate their relata, some special operator must do the relating. No other options will do. In this paper we reject Vallicella's conclusion by considering an important option that becomes visible only if we hold onto a precise distinction between the following three feature-pairs of relations: internality/externality, universality/particularity, relata-specificity/relata-unspecificity. The conclusion we reach is that if external relations are to relate their relata, they must be relata-specific (and no special operator is needed). As it eschews unmereological complexes, this outcome is of relevance to defenders of the extensionality of composition.
The History of Ideas is presently enjoying a certain renaissance after a long period of disrepute. Increasing quantities of digitally available historical texts and the availability of computational tools for the exploration of such masses of sources, it is suggested, can be of invaluable help to historians of ideas. The question is: how exactly? In this paper, we argue that a computational history of ideas is possible if the following two conditions are satisfied. (i) Sound Method. A computational history of ideas must be built upon a sound theoretical foundation for its methodology, and the only such foundation is given by the use of models, i.e., fully explicit and revisable interpretive frameworks or networks of concepts developed by the historians of ideas themselves. (ii) Data Organisation. Interpretive models in our sense must be seen as topic-specific knowledge organisation systems (KOS) implementable (i.e., formalisable) as, e.g., computer-science ontologies. We thus require historians of ideas to provide an explicitly structured semantic framing of domain knowledge before investigating texts computationally, and to constantly feed findings back in from the interpretive point of view. In this way, a computational history of ideas profits maximally from computer methods while also keeping humanities experts in the loop. We elucidate our proposal with reference to a model of the notion of axiomatic science in 18th–19th-century Europe.
How can we best reconstruct the origin of a notion, its development, and its possible spread to multiple fields? We present a pilot study on the spread of the notion of conceptual scheme. Though the notion is philosophically important, its origin, development, and spread are unclear. Several purely qualitative and competing historical hypotheses have been offered, which rely on disconnected disciplinary traditions and have never been tested all at once in a single comprehensive investigation fitting the scope of the subject matter. As a step toward such an investigation, we trace the use of the bigram “conceptual scheme” in about 42,000 US social-science journal articles from 1888 to 1959, using a novel method that combines a quantitative procedure aided by basic computational techniques with qualitative elements informed by Betti and van den Berg's (2014) ‘model approach to the history of ideas’.
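The quantitative core of such a bigram trace can be sketched in a few lines of Python. The function and the toy corpus below are illustrative assumptions for exposition only; they do not reproduce the study's actual pipeline, tokenization, or data.

```python
from collections import Counter

def bigram_counts(corpus, bigram="conceptual scheme"):
    """Count occurrences of a two-word phrase per year in a corpus of
    (year, text) pairs. A minimal sketch: real pipelines would add
    proper tokenization, normalization, and OCR-error handling."""
    pair = tuple(bigram.lower().split())
    counts = Counter()
    for year, text in corpus:
        tokens = text.lower().split()
        # Slide a window of width 2 over the token stream.
        counts[year] += sum(
            1 for a, b in zip(tokens, tokens[1:]) if (a, b) == pair
        )
    return dict(counts)

# Hypothetical miniature corpus, purely for illustration.
corpus = [
    (1930, "A conceptual scheme organises observation"),
    (1930, "No scheme without concepts"),
    (1945, "Every conceptual scheme embeds a conceptual scheme of its own"),
]
print(bigram_counts(corpus))
```

Per-year counts like these are what a quantitative procedure would then plot or test against the qualitative historical hypotheses.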
We propose a new method for the history of ideas that has none of the shortcomings so often ascribed to this approach. We call this method the model approach to the history of ideas. We argue that any adequately developed and implementable method to trace continuities in the history of human thought, or concept drift, will require that historians use explicit interpretive conceptual frameworks. We call these frameworks models. We argue that models enhance the comprehensibility of historical texts, and provide historians of ideas with a method that, unlike existing approaches, is susceptible neither to common holistic criticisms nor to Skinner's objections that the history of ideas yields arbitrary and biased reconstructions. To illustrate our proposal, we discuss the so-called Classical Model of Science and draw upon work in computer science and cognitive psychology.
Leśniewski’s systems deviate greatly from standard logic in some basic features. These deviant aspects are rather well known and are often cited among the reasons why Leśniewski’s work enjoys little recognition. This paper is an attempt to explain why those aspects should be there at all. Leśniewski built his systems inspired by a dream close to Leibniz’s characteristica universalis: a perfect system of deductive theories encoding our knowledge of the world, based on a perfect language. My main claim is that Leśniewski built his characteristica universalis following the conditions of de Jong and Betti’s Classical Model of Science (2008) to an astounding degree. While showing this, I give an overview of the architecture of Leśniewski’s systems and of their fundamental characteristics. I suggest, among other things, that the aesthetic constraints Leśniewski put on axioms and primitive terms have epistemological relevance.
The step to e-research in philosophy depends on the availability of high-quality, easily and freely accessible corpora in a sustainable format, composed from multi-language, multi-script books from different historical periods. Corpora matching these needs are at the moment virtually non-existent. Within @PhilosTei, we have addressed this corpus-building problem by developing an open-source, web-based, user-friendly workflow from textual images to TEI, based on state-of-the-art open-source OCR software, to wit Tesseract, and a multi-language version of TICCL, a powerful OCR post-correction tool. We have demonstrated the utility of the tool by applying it to a multilingual, multi-script corpus of important eighteenth- to twentieth-century European philosophical texts.
This paper is a contribution to the reconstruction of Tarski's semantic background in the light of the ideas of his master, Stanisław Leśniewski. Although in his 1933 monograph Tarski credits Leśniewski with crucial negative results on the semantics of natural language, the conceptual relationship between the two logicians has never been investigated in a thorough manner. This paper shows that it was not Tarski but Leśniewski who first asserted the impossibility of giving a satisfactory theory of truth for ordinary language, and the necessity of sanitizing the latter for scientific purposes. In an early article (1913) Leśniewski gave an interesting solution to the Liar Paradox which, although different from Tarski's in detail, is nevertheless important to Tarski's semantic background. To illustrate this, I give an analysis of Leśniewski's solution and of some related aspects of Leśniewski's later thought.
The paper [Tarski: Les fondements de la géométrie des corps, Annales de la Société Polonaise de Mathématiques, pp. 29–34, 1929] is in many ways remarkable. We address three historico-philosophical issues that force themselves upon the reader. First, we argue that in this paper Tarski did not live up to his own methodological ideals but displayed instead a much more pragmatic approach. Second, we show that Leśniewski's philosophy and systems do not play the significant role that one may be tempted to assign to them at first glance. In particular, the role of background logic must be at least partially allocated to Russell's systems of Principia mathematica. This analysis leads us, third, to a threefold distinction of the technical ways in which the domain of discourse comes to be embodied in a theory. Having all of this in place, we discuss why we have to reject the argument in [Gruszczyński and Pietruszczak: Full development of Tarski's Geometry of Solids, The Bulletin of Symbolic Logic, vol. 4, no. 4, pp. 481–540] according to which Tarski made a certain mistake.
In several manuscripts written between 1894 and 1897, Twardowski developed a new theory of judgement with two types of judgement: existential and relational judgements. In Zur Lehre he tried to stay within a Brentanian framework, although he introduced the distinction between content and object into the theory of judgement. The introduction of this distinction forced Twardowski to revise Brentano's theory further. His changes concerned judgements about relations and about non-present objects; the latter are considered special cases of relational judgements. Existential judgements are analysed in a Brentanian way, whereas relational judgements are analysed in a Brentanian way only as far as the act is concerned, but not when it comes to the object: the object of a relational judgement is a relationship. With this notion of relationship Twardowski comes close to introducing a concept of state of affairs as the object of (relational) judgements.
In this position paper, we describe a number of methodological and philosophical challenges that arose within our interdisciplinary Digital Humanities project CatVis, a collaboration between researchers in applied geometric algorithms and visualization, data scientists working at OCLC, and philosophers with a strong interest in the methodological foundations of visualization research. The challenges we describe concern aspects of one single epistemic need: that of methodologically securing (an increase in) trust in visualizations. We discuss the lack of ground truths in the (digital) humanities and argue that trust in visualizations requires that we evaluate visualizations on the basis of ground truths that humanities scholars themselves create. We further argue that trust in visualizations requires that a visualization provide provable guarantees on the faithfulness of the visual representation, and that we must clearly communicate to users which parts of the visualization can be trusted and to what degree. Finally, we discuss transparency and accessibility in visualization research and provide measures for securing both.
Twardowski's On the Content and Object of Presentations is one of the most influential works that Austrian philosophy has left to posterity. The manuscript Logik supplements that work and allows us to reconstruct Twardowski's theory of judgement. These texts raise several issues, in particular whether Twardowski accepts propositions and states of affairs in his theory of judgement and whether his theory is acceptable. This article presents Twardowski's theory, shows that he accepts states of affairs and that he has a notion of proposition, and argues that his theory is interesting and sophisticated.
Libraries provide access to large amounts of library metadata. Unfortunately, many libraries only offer textual interfaces for searching and browsing their holdings. Visualizations provide simpler, faster, and more efficient ways to navigate, search, and study large quantities of metadata. This paper presents GlamMap, a visualization tool that displays library metadata on an interactive, computer-generated geographic map. We provide a detailed discussion of how GlamMap benefits the work of librarians and researchers, showing how geographic representations help librarians to perform tasks such as collection assessment, and how geographic information helps researchers to identify important scientific resources.
This paper presents the current state of development of GlamMap, a visualisation tool that displays library metadata on an interactive, computer-generated geographic map. The focus of the paper is on the most crucial improvement achieved in the development of the tool: GlamMapping Trove. The visualisation of Trove's sixty million book records is possible thanks to an improved database structure, more efficient data retrieval, and more scalable visualisation algorithms. The paper analyses problems encountered in visualising massive datasets, describes remaining challenges for the tool, and presents a use case demonstrating GlamMap's ability to serve researchers in the history of ideas.
This paper presents GlamMap, a visualization tool for large, multivariate, georeferenced humanities data sets. Our approach visualizes the data as glyphs on a zoomable geographic map, performing clustering and data aggregation at each zoom level to avoid clutter and prevent overlap of symbols. GlamMap was developed for the Galleries, Libraries, Archives, and Museums (GLAM) domain in cooperation with researchers in philosophy. We demonstrate the usefulness of our approach through a case study on the history of logic, which involves navigation and exploration of 7,100 bibliographic records, and we demonstrate scalability on a data set of sixty million book records.
We propose a novel type of low-distortion radial embedding that focuses on one specific entity and its closest neighbors. Our embedding preserves near-exact distances to the focus entity and aims to minimize distortion between the other entities. We present an interactive exploration tool, SolarView, which places the focus entity at the center of a “solar system” and embeds its neighbors guided by concentric circles. SolarView provides an implementation of our novel embedding and of several state-of-the-art dimensionality reduction and embedding techniques, which we adapted to our setting in various ways. We experimentally evaluated our embedding and compared it to these state-of-the-art techniques. The results show that our embedding competes with these techniques and achieves low distortion in practice. Our method performs particularly well when the visualization, and hence the embedding, adheres to the solar-system design principle of our application. Nonetheless, as with all dimensionality reduction techniques, the distortion may be high; we leverage interaction techniques to give clear visual cues that allow users to judge distortion accurately. We illustrate the use of SolarView by exploring the high-dimensional metric space of bibliographic entity similarities.
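The basic placement idea, exact radii to the focus with neighbors spread angularly, can be sketched as follows. This is a toy version under assumed inputs: it preserves distances to the focus exactly but, unlike SolarView's embedding, makes no attempt to minimize distortion between the neighbors themselves.

```python
import math

def solar_embed(focus_dists):
    """Place each neighbor at its exact distance from the focus entity
    (the origin), spreading angles evenly. Distances to the focus are
    preserved exactly; neighbor-to-neighbor distances are not optimized
    in this sketch."""
    if not focus_dists:
        return {}
    n = len(focus_dists)
    pts = {}
    for i, (name, d) in enumerate(sorted(focus_dists.items())):
        theta = 2 * math.pi * i / n  # even angular spread
        pts[name] = (d * math.cos(theta), d * math.sin(theta))
    return pts

# Hypothetical similarity-derived distances from one focus entity.
pts = solar_embed({"a": 1.0, "b": 2.0, "c": 0.5})
# Each point lies on a concentric circle whose radius equals its
# input distance to the focus.
```

A full implementation would additionally order the neighbors angularly so that entities similar to each other land in nearby sectors, which is where the distortion-minimization of the actual embedding comes in.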