Artefacts do not always do what they are supposed to, for a variety of reasons, including manufacturing problems, poor maintenance, and normal wear and tear. Since software is an artefact, it should be subject to malfunctioning in the same sense in which other artefacts can malfunction. Yet, whether software is on a par with other artefacts when it comes to malfunctioning crucially depends on the abstraction used in the analysis. We distinguish between “negative” and “positive” notions of malfunction. A negative malfunction, or dysfunction, occurs when an artefact token either does not or cannot do what it is supposed to. A positive malfunction, or misfunction, occurs when an artefact token may do what it is supposed to but, at least occasionally, also yields some unintended and undesirable effects. We argue that software, understood as a type, may misfunction in some limited sense, but cannot dysfunction. Accordingly, one should distinguish software from other technical artefacts, in view of a design that makes dysfunction impossible for the former while leaving it possible for the latter.
Computing, today more than ever before, is a multi-faceted discipline which collates several methodologies, areas of interest, and approaches: mathematics, engineering, programming, and applications. Given its enormous impact on everyday life, it is essential that its debated origins are understood and that its different foundations are explained. On the Foundations of Computing offers a comprehensive and critical overview of the birth and evolution of computing, and it presents some of the most important technical results and philosophical problems of the discipline, combining both historical and systematic analyses. The debates this text surveys are among the latest and most urgent ones: the crisis of foundations in mathematics and the birth of the decision problem, the nature of algorithms, the debates on computational artefacts and malfunctioning, and the analysis of computational experiments. By covering these topics, On the Foundations of Computing provides a much-needed resource to contextualize these foundational issues. For practitioners, researchers, and students alike, a historical and philosophical approach such as the one this volume offers is essential to understand the past of the discipline and to meet the challenges of its future.
The phenomenon of digital computation is explained (often differently) in computer science, computer engineering and, more broadly, in cognitive science. Although the semantics and implications of malfunctions have received attention in the philosophy of biology and the philosophy of technology, errors in computational systems remain of interest only to computer science. Miscomputation has not received the philosophical attention it deserves. Our paper fills this gap by offering a taxonomy of miscomputations. This taxonomy is underpinned by a conceptual analysis of the design and implementation of conventional computational systems at various levels of abstraction. It shows that ‘malfunction’ as typically used in the philosophy of artefacts represents only one type of miscomputation.
We provide a full characterization of computational error states for information systems. The class of errors considered is general enough to include human rational processes, logical reasoning, scientific progress and data processing in some functional programming languages. The aim is to reach a full taxonomy of error states by analysing the recovery and processing of data. We conclude by presenting machine-readable checking and resolve algorithms.
This paper introduces a multi-modal polymorphic type theory to model epistemic processes characterized by trust, defined as a second-order relation affecting the communication process between sources and a receiver. In this language, a set of senders is expressed by a modal prioritized context, whereas the receiver is formulated in terms of a contextually derived modal judgement. Introduction and elimination rules for modalities are based on the polymorphism of terms in the language. This leads to a multi-modal, non-homogeneous version of a type theory, in which we show the embedding of the modal operators into standard group knowledge operators.
The constructive reformulation of the semantic theory suggests two basic principles to be assumed: first, the distinction between proper knowledge, expressed in judgemental form, and the assertion conditions for such knowledge; second, ...
We offer a formal treatment of the semantics of both complete and incomplete mistrustful or distrustful information transmissions. The semantics of such relations is analysed in view of rules that define the behaviour of a receiving agent. We justify this approach in view of human agent communications and secure system design. We further specify some properties of such relations.
Malware has existed since the 1980s and remains a large, expensive, and constantly growing security concern. As our social, professional and financial lives become more digitalised, they present larger and more profitable targets for malware. The problem of classifying and preventing malware is therefore urgent, and it is complicated by the existence of several distinct approaches. In this paper, we use an existing malware taxonomy to formulate a general, language-independent functional description of malware as transformers between states of the host system, described by a trust relation with its components. This description is then further generalised in terms of mechanisms, thereby contributing to a general understanding of malware. The aim is to use the latter to present an improved classification method for malware.
The process of completing, correcting and prioritising specifications is an essential but very complex task for the maintenance and improvement of software systems. The preservation of functionalities and the ability to accommodate changes are the main objectives of the software development cycle, meant to guarantee system reliability. Logical theories able to fully model such processes are still lacking. In this paper we propose a full formalisation of such operations on software systems, inspired by the Alchourrón–Gärdenfors–Makinson paradigm for belief revision of human epistemic states. We represent specifications as finite sets of formulas equipped with a priority relation that models the functional entrenchment of properties. We propose to handle specification incompleteness through ordered expansion, inconsistency through ordered safe contraction and prioritisation through revision with reordering, and we model all three in an algorithmic fashion. We show how the system satisfies basic properties of the AGM paradigm, including Levi’s and Harper’s identities. We offer a concrete example and complexity results for the inference and model-checking problems on revision. We conclude by describing resilience and evolvability of software systems based on such revision operators.
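The AGM-style operations mentioned in this abstract can be illustrated with a minimal, hedged sketch. The sketch is not the paper's formalism (which works on prioritised specification sets): belief bases are reduced to sets of propositional literals so that contraction is trivial, and revision is derived from expansion and contraction via the standard Levi identity K * p = (K ÷ ¬p) + p. All names here are illustrative.

```python
# Minimal AGM-style sketch over literal belief bases (illustrative only:
# the paper formalises prioritised specification sets, not bare literals).
# A literal is a string like "p" or "-p"; a base is a frozenset of literals.

def neg(lit):
    """Negation of a literal: p <-> -p."""
    return lit[1:] if lit.startswith("-") else "-" + lit

def expand(base, lit):
    """Expansion: add the new belief, possibly creating inconsistency."""
    return base | {lit}

def contract(base, lit):
    """Contraction: give up the belief (trivial for literal bases)."""
    return base - {lit}

def revise(base, lit):
    """Revision via the Levi identity: K * p = (K - ~p) + p."""
    return expand(contract(base, neg(lit)), lit)

base = frozenset({"p", "q"})
revised = revise(base, "-p")   # retract p, then consistently add -p
print(sorted(revised))         # ['-p', 'q']
```

Deriving revision from the other two operators, rather than defining it directly, is exactly what the Levi identity licenses; the paper's ordered variants refine each step with the priority relation.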
Software-intensive science (SIS) challenges our current scientific methods in many ways. This significantly affects our notion of science and our scientific interpretation of the world, while driving the philosophical debate. We consider some issues prompted by SIS in the light of the philosophical categories of ontology and epistemology.
The epistemology of computer simulations has become a mainstream topic in the philosophy of technology. Within this large area, significant differences hold between the various types of models and simulation technologies. Agent-based and multi-agent systems simulations introduce a specific constraint on the types of agents and systems modelled. We argue that this difference is crucial and that simulation for the artificial sciences requires the formulation of its own specific epistemological principles. We present a minimally committed epistemology which relies on the methodological principles of the Philosophy of Information and requires only weak assumptions on the usability of the simulation and the controllability of the model. We use these principles to provide a new definition of simulation for the context of interest.
Various conceptual approaches to the notion of information can currently be traced in the literature in logic and formal epistemology. A main issue of disagreement is the attribution of truthfulness to informational data, the so-called Veridicality Thesis (Floridi 2005). The notion of Epistemic Constructive Information (Primiero 2007) is one of those rejecting VT. The present paper develops a formal framework for ECI. It extends the basic approach of Artemov’s logic of proofs (Artemov 1994), representing an epistemic logic based on dependent justifications, where the definition of information relies on a strict distinction from factual truth. The definition obtained by comparison with a Normal Modal Logic translates a constructive logic for “becoming informed”: its distinction from the logic of “being informed”, which internalizes truthfulness, is essential to a general evaluation of information with respect to truth. The formal disentanglement of these two logics, and the description of the modal version of the former as a weaker embedding into the latter, allows for a proper understanding of the Veridicality Thesis with respect to epistemic states defined in terms of information.
Contextual type theories are largely explored in their applications to programming languages, but less investigated for knowledge representation purposes. The combination of a constructive language with a modal extension of contexts appears crucial to explore the attractive idea of a type-theoretical calculus of provability from refutable assumptions for non-monotonic reasoning. This paper introduces such a language: the modal operators are meant to internalize two different modes of correctness, with necessity as the standard notion of constructive verification and possibility as provability up to refutation of contextual conditions.
This paper contributes to the computer ethics debate on software ownership protection by examining the ontological, methodological, and ethical problems related to property right infringement that should come prior to any legal discussion. The ontological problem consists in determining precisely what it is for a computer program to be a copy of another one, a largely neglected problem in computer ethics. The methodological problem is defined as the difficulty of deciding whether a given software system is a copy of another system. The ethical problem corresponds to establishing when a copy constitutes, or does not constitute, a property right infringement. The ontological problem is solved through the logical analysis of abstract machines, which are argued to provide the appropriate level of abstraction for software, at which the methodological and the ethical problems can be successfully addressed.
The Editors’ vision for this volume is that it should be a selection of essays, contributed by the academics who have worked, studied, collaborated and disagreed with Göran Sundholm, engaging in debated issues and exploring untouched areas perhaps only suggested or hinted at in Sundholm’s own work. "Acts of Knowledge" characterizes the papers contained in this volume as bringing something scientifically valuable to their respective fields: all the papers present cutting-edge research in their own style, contributing to very lively debates occurring in the literature in logic, philosophical logic and the history of logic. But it also hints at Göran’s constructivist background, which has been an influence or a challenge for many of the contributors. "History, Philosophy and Logic" refers directly to Göran's broad interest in the various aspects of the Philosophy of Logic, Mathematics, and Language, their origins and development, especially with a focus on the Modern History of Logic and its philosophical implications. Readers will find pieces of, and reflections on, all these themes scattered throughout this volume.
We present the methodological principles underlying the scientific activities of the DHST Commission on the History and Philosophy of Computing. This volume collects refereed selected papers from the First International Conference organized by the Commission.
The relation between logic and knowledge has been at the heart of a lively debate since the 1960s. On the one hand, the epistemic approaches based their formal arguments on the mathematics of Brouwer and intuitionistic logic. Following Michael Dummett, they started to call themselves ‘antirealists’. Others persisted with the formal background of the Frege–Tarski tradition, where Cantorian set theory is linked via model theory to classical logic. Jaakko Hintikka tried to unify both traditions by means of what is now known as ‘explicit epistemic logic’. Under this view, epistemic contents are introduced into the object language as operators yielding propositions from propositions, rather than as metalogical constraints on the notion of inference. The Realism–Antirealism debate has thus had three players: classical logicians, intuitionists and explicit epistemic logicians. The editors of the present volume believe that in the age of Alternative Logics, where manifold developments in logic happen at a breathtaking pace, this debate should be revisited. Contributors to this volume happily took on this challenge and responded with new approaches to the debate from both the explicit and the implicit epistemic point of view.
This article presents a historical and conceptual overview of different approaches to logical abstraction. Two main trends concerning abstraction in the history of logic are highlighted, starting from the logical notions of concept and function. This analysis is closely related to the philosophical discussion on the nature of abstract objects. I develop this issue further with respect to the procedure of abstraction involved in (typed) λ-systems, focusing on the crucial change concerning meaning and predicability. In particular, the analysis of the nature of logical types in the context of Constructive Type Theory allows elucidation of the role of the previously introduced notions. Finally, the connection to the analysis of abstraction in computer science is drawn, and the methodological contribution provided by the notion of information is considered, showing its conceptual and technical relevance. Future research shall focus on the notion of information in distributed systems, analysing the paradigm of information hiding in dependent type theories.
The present paper introduces a belief merging procedure by majority using the standard format of Adaptive Logics. The core structure of the logic ADMc (Adaptive Doxastic Merging by Counting) consists in the formulation of the conflicts arising from the belief bases of the agents involved in the procedure. A strategy is then defined, both semantically and proof-theoretically, which selects the consistent contents answering to a majority principle. The results obtained are proven to be equivalent to a standard majority operator for bases with partial support.
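The counting idea behind merging by majority can be sketched in a hedged toy form, well outside the adaptive-logic proof machinery the paper actually develops: each agent holds a set of propositional literals, and a literal enters the merged base only when a strict majority of agents support it. All names below are illustrative.

```python
from collections import Counter

# Toy majority merging over literal belief bases; an illustrative
# stand-in for the counting strategy, not the ADMc proof system.

def neg(lit):
    """Negation of a literal: p <-> -p."""
    return lit[1:] if lit.startswith("-") else "-" + lit

def merge_by_majority(bases):
    """Keep exactly the literals supported by a strict majority of bases.
    If each input base is internally consistent, a literal and its
    negation can never both reach a strict majority, so the merged
    base is consistent by construction."""
    counts = Counter(lit for base in bases for lit in base)
    return {lit for lit, n in counts.items() if 2 * n > len(bases)}

agents = [{"p", "q"}, {"p", "-q"}, {"-p", "q"}]
print(sorted(merge_by_majority(agents)))  # ['p', 'q']
```

In the example, p and q each win 2 of 3 votes while their negations win only 1, so the merged base is {p, q}; conflicting minority beliefs are simply dropped, which is the crude analogue of the strategy selecting consistent contents by a majority principle.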
Machine awareness is a disputed research topic, in some circles considered a crucial step towards realising Artificial General Intelligence. Understanding what it is, under which conditions such a feature could arise, and how it can be controlled is still a matter of speculation. A more concrete object of theoretical analysis is algorithmic iteration for computational intelligence, intended as the theoretical and practical ability of algorithms to design other algorithms for actions aimed at solving well-specified tasks. We know this ability is already shown by current AIs, and understanding its limits is an essential step in qualifying claims about machine awareness and Super-AI. We propose a formal translation of algorithmic iteration in a fragment of modal logic, formulate principles of transparency and faithfulness across human and machine intelligence, and consider the relevance to theoretical research on Super-AI as well as the practical import of our results.
This paper addresses the problem of upgrading functional information to knowledge. Functional information is defined as syntactically well-formed, meaningful and collectively opaque data. Its use in the formal epistemology of information theories is crucial to solving the debate on the veridical nature of information, and it represents the companion notion to standard strongly semantic information, defined as well-formed, meaningful and true data. The formal framework on which the definitions are based uses a contextual version of the verificationist principle of truth in order to connect functional to semantic information, avoiding Gettierization and decoupling from true informational contents. The upgrade operation from functional information uses the machinery of epistemic modalities in order to add data localization and accessibility as its main properties. We show in this way the conceptual worth of this notion for issues in contemporary epistemological debates, such as explaining knowledge acquisition from information retrieval systems and open data repositories.
Review of Giovanni Sommaruga (ed.): Formal Theories of Information: From Shannon to Semantic Information Theory and General Concepts of Information. Minds and Machines, vol. 21, no. 1, pp. 119–122. DOI: 10.1007/s11023-011-9228-0. Giuseppe Primiero, Centre for Logic and Philosophy of Science, University of Ghent, Blandijnberg 2, 9000 Ghent, Belgium.
Modelling, reasoning about and verifying complex situations involving a system of agents is crucial in all phases of the development of a number of safety-critical systems. In particular, it is of fundamental importance to have tools and techniques to reason about the doxastic and epistemic states of agents, to make sure that the agents behave as intended. In this paper we introduce a computationally grounded logic called COGWED and present two types of semantics that support a range of practical situations. We provide model checking algorithms, complexity characterisations and a prototype implementation. We validate our proposal against a case study from the avionics domain: we assess and verify the situational awareness of pilots flying an aircraft with several automated components in off-nominal conditions.
This book presents a historical and philosophical analysis of programming systems, understood as large computational systems, such as operating systems, programmed to control processes. The introduction to the volume emphasizes the contemporary need to provide a foundational analysis of such systems, rooted in a broader historical and philosophical discussion. The different chapters are grouped around three major themes. The first concerns the early history of large systems, developed against the background of issues related to the growing semantic gap between hardware and code. The second revisits the fundamental issue of the complexity of large systems, dealt with through the use of formal methods and the development of ‘grand designs’ like Unix. Finally, a third part considers several issues related to programming systems in the real world, including chapters on aesthetic, ethical and political issues. This book will interest researchers from a diversity of backgrounds. It will appeal to historians and philosophers, as well as logicians and computer scientists, who want to engage with topics relevant to the history and philosophy of programming, and more specifically the role of programming systems in the foundations of computing.
Current research in Explainable AI includes post-hoc explanation methods that focus on building transparent explaining agents able to emulate opaque ones. Such agents are naturally required to be accurate and trustworthy. However, what it means for an explaining agent to be accurate and trustworthy is far from clear. We characterize accuracy and trustworthiness as measures of the distance between the formal properties of a given opaque system and those of its transparent explanantia. To this aim, we extend Probabilistic Computation Tree Logic with operators to specify degrees of accuracy and trustworthiness of explaining agents. We also provide a semantics for this logic, based on a multi-agent structure, and related model-checking algorithms. The paper concludes with a simple example of a possible application.
In Information Systems development, resilience has often been treated as a non-functional requirement, and little or no work has aimed at building resilience in end-users through systems development. The question of how values and resilience (for the end-user) can be incorporated into the design of systems is an ongoing research activity in user-centered design. In this paper we evaluate the relation of values and resilience within the context of an ongoing software development project and contribute a formal model of co-design based on a significant extension of Abstract Design Theory. The formal analysis provides a full and clear-cut definition of the co-design space, its objectives and processes. On the basis of both, we provide an abstract definition of a resilient system (for the end-user). We conclude that value-sensitive co-design enforces better resilience in end-users.
We present a multi-conclusion natural deduction calculus characterizing the dynamic reasoning typical of Adaptive Logics. The resulting system AdaptiveND is sound and complete with respect to the propositional fragment of adaptive logics based on CLuN. This appears to be the first tree-format presentation of the standard linear dynamic proof system typical of Adaptive Logics. It offers the advantage of full transparency in the formulation of locally derivable rules, a connection between restricted inference rules and their adaptive counterparts, and the formulation of abnormalities as a subtype of well-formed formulas. These features of the proposed calculus allow us to clarify the relation between defeasible and multiple-conclusion approaches to classical recapture.
Efficient and reliable computing is based on validity and correctness. Techniques to ensure these essential features have been in place since the early days of computing. The present study focuses on the hardware testing, data validation and program correctness techniques designed and implemented for LEO I and II machines in the UK during the 1950s.