One recursively enumerable real α dominates another recursively enumerable real β if there are nondecreasing recursive sequences of rational numbers (a[n] : n ∈ ω) approximating α and (b[n] : n ∈ ω) approximating β, and a positive constant C, such that for all n, C(α − a[n]) ≥ β − b[n]. See [R. M. Solovay, Draft of a Paper (or Series of Papers) on Chaitin's Work, manuscript, IBM Thomas J. Watson Research Center, Yorktown Heights, NY, May 1975, 215 pp.] and [G. J. Chaitin, IBM J. Res. Develop., 21 (1977), pp. 350–359]. We show that every recursively enumerable random real dominates all other recursively enumerable reals. We conclude that the recursively enumerable random reals are exactly the Ω-numbers [G. J. Chaitin, IBM J. Res. Develop., 21 (1977), pp. 350–359]. Second, we show that the sets in a universal Martin-Löf test for randomness have random measure, and that every recursively enumerable random real is the sum of the measures represented in a universal Martin-Löf test.
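For readability, the domination condition above can be set as a displayed formula; the shorthand ≥dom is mine, not notation from the paper:

```latex
% Domination between r.e. reals, restating the definition from the
% abstract; (a_n), (b_n) are the nondecreasing recursive rational
% approximations of \alpha and \beta, and C > 0 is a constant.
\[
  \alpha \geq_{\mathrm{dom}} \beta
  \iff
  \exists C > 0 \;\, \forall n \in \omega :\;
  C\,(\alpha - a_n) \;\geq\; \beta - b_n .
\]
```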
This book describes a program of research in computable structure theory. The goal is to find definability conditions corresponding to bounds on complexity which persist under isomorphism. The results apply to familiar kinds of structures (groups, fields, vector spaces, linear orderings, Boolean algebras, Abelian p-groups, models of arithmetic). There are many interesting results already, but there are also many natural questions still to be answered. The book is self-contained in that it includes necessary background material from recursion theory (ordinal notations, the hyperarithmetical hierarchy) and model theory (infinitary formulas, consistency properties).
According to the Argument from Disagreement (AD), widespread and persistent disagreement on ethical issues indicates that our moral opinions are not influenced by moral facts, either because there are no such facts or because there are such facts but they fail to influence our moral opinions. In an innovative paper, Gustafsson and Peterson (Synthese, published online 16 October 2010) study the argument by means of computer simulation of opinion dynamics, relying on the well-known model of Hegselmann and Krause (J Artif Soc Soc Simul 5(3):1–33, 2002; J Artif Soc Soc Simul 9(3):1–28, 2006). Their simulations indicate that if our moral opinions were influenced at least slightly by moral facts, we would quickly have reached consensus, even if our moral opinions were also affected by additional factors such as false authorities, external political shifts and random processes. Gustafsson and Peterson conclude that since no such consensus has been reached in real life, the simulation gives us increased reason to take the AD seriously. Our main claim in this paper is that these results are not as robust as Gustafsson and Peterson seem to think they are. If we run similar simulations in the alternative Laputa simulation environment developed by Angere and Olsson (Angere, Synthese, forthcoming; Olsson, Episteme 8(2):127–143, 2011), considerably less support for the AD is forthcoming.
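To make the kind of simulation at issue concrete, here is a minimal sketch of Hegselmann–Krause bounded-confidence dynamics with a truth-seeking term, in the spirit of their 2006 model; the parameter names and values (eps, alpha, truth) are illustrative assumptions, not the settings used by Gustafsson and Peterson.

```python
import random

def hk_step(opinions, eps=0.2, alpha=0.1, truth=0.7):
    """One synchronous Hegselmann-Krause update with a truth signal:
    each agent averages the opinions within eps of its own, then mixes
    in the truth with weight alpha (alpha = 0 recovers the pure model)."""
    new = []
    for x in opinions:
        peers = [y for y in opinions if abs(y - x) <= eps]
        social = sum(peers) / len(peers)  # an agent is always its own peer
        new.append(alpha * truth + (1 - alpha) * social)
    return new

# Illustrative run: 50 agents with random initial opinions in [0, 1].
opinions = [random.random() for _ in range(50)]
for _ in range(100):
    opinions = hk_step(opinions)
print(min(opinions), max(opinions))  # with alpha > 0, opinions cluster near truth
```

Even a small positive alpha drives rapid consensus near the truth value, which is the behavior Gustafsson and Peterson's argument turns on.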
In this paper, we pursue three general aims: (I) we define a notion of fundamental opacity and ask whether it can be found in High Energy Physics (HEP), given the involvement of machine learning (ML) and computer simulations (CS) therein; (II) we identify two kinds of non-fundamental, contingent opacity associated with CS and ML in HEP respectively, and ask whether, and if so how, they may be overcome; (III) we address the question of whether any kind of opacity, contingent or fundamental, is unique to ML or CS, or whether they stand in continuity with kinds of opacity associated with other scientific research.
Book Review: Dubucs, J., & Bourdeau, M. (Eds.), Constructivity and Computability in Historical and Philosophical Perspective. Springer Netherlands, xi + 214 pp. ISBN: 978-94-017-9216-5, 978-94-017-9217-2, €83.29.
Artificial intelligence is increasingly being developed for use in medicine, including for diagnosis and in treatment decision making. The use of AI in medical treatment raises many ethical issues that are yet to be explored in depth by bioethicists. In this paper, I focus specifically on the relationship between the ethical ideal of shared decision making and AI systems that generate treatment recommendations, using the example of IBM's Watson for Oncology. I argue that use of this type of system creates both important risks and significant opportunities for promoting shared decision making. If value judgements are fixed and covert in AI systems, then we risk a shift back to more paternalistic medical care. However, if designed and used in an ethically informed way, AI could offer a potentially powerful way of supporting shared decision making. It could be used to incorporate explicit value reflection, promoting patient autonomy. In the context of medical treatment, we need value-flexible AI that can both respond to the values and treatment goals of individual patients and support clinicians to engage in shared decision making.
Modeling and computer simulations, we claim, should be considered core philosophical methods. More precisely, we will defend two theses. First, philosophers should use simulations for many of the same reasons we currently use thought experiments. In fact, simulations are superior to thought experiments in achieving some philosophical goals. Second, devising and coding computational models instill good philosophical habits of mind. Throughout the paper, we respond to the often implicit objection that computer modeling is "not philosophical."
For a countable structure, "Scott rank" provides a measure of internal, model-theoretic complexity. For a computable structure, the Scott rank is at most $\omega_1^{CK} + 1$. There are familiar examples of computable structures of various computable ranks, and there is an old example of rank $\omega_1^{CK} + 1$. In the present paper, we show that there is a computable structure of Scott rank $\omega_1^{CK}$. We give two different constructions. The first starts with an arithmetical example due to Makkai and codes it into a computable structure. The second reworks Makkai's construction, incorporating an idea of Sacks.
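For context, the benchmark facts the abstract appeals to can be stated explicitly; these are standard results in computable structure theory, supplied here for orientation rather than taken from the paper itself:

```latex
% Standard facts on Scott rank (stated for orientation): for any
% computable structure $\mathcal{A}$,
\[
  \mathrm{SR}(\mathcal{A}) \;\le\; \omega_1^{CK} + 1,
\]
% where $\omega_1^{CK}$ is the least non-computable ordinal. The
% Harrison ordering, of order type $\omega_1^{CK} \cdot (1 + \eta)$,
% is the classical example of rank $\omega_1^{CK} + 1$; the hard case,
% settled in this paper, is rank exactly $\omega_1^{CK}$.
```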
The question of where, between theory and experiment, computer simulations (CSs) fall on the methodological map is one of the central questions in the epistemology of simulation (cf. Saam Journal for General Philosophy of Science, 48, 293–309, 2017). The two extremes on the map have them either be a kind of experiment in their own right (e.g. Barberousse et al. Synthese, 169, 557–574, 2009; Morgan 2002, 2003, Journal of Economic Methodology, 12(2), 317–329, 2005; Morrison Philosophical Studies, 143, 33–57, 2009; Morrison 2015; Massimi and Bhimji Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics, 51, 71–81, 2015; Parker Synthese, 169, 483–496, 2009) or just an argument executed with the aid of a computer (e.g. Beisbart European Journal for Philosophy of Science, 2, 395–434, 2012; Beisbart and Norton International Studies in the Philosophy of Science, 26, 403–422, 2012). There exist multiple versions of the first kind of position, whereas the latter is rather unified. I will argue that, while many claims about the ‘experimental’ status of CSs seem unjustified, there is a variant of the first position that seems preferable. In particular, I will argue that while CSs respect the logic of (deductively valid) arguments, they agree with neither their pragmatics nor their epistemology. I will then lay out in what sense CSs can fruitfully be seen as experiments, and what features set them apart from traditional experiments nonetheless. I conclude that they should be seen as surrogate experiments, i.e. experiments executed consciously on the wrong kind of system, but with an exploitable connection to the system of interest. Finally, I contrast my view with that of Beisbart (European Journal for Philosophy of Science, 8, 171–204, 2018), according to which CSs are surrogates for experiments, arguing that this introduces an arbitrary split between CSs and other kinds of simulations.
A response to a recent critique by Cem Bozşahin of the theory of syntactic semantics as it applies to Helen Keller, and some applications of the theory to the philosophy of computer science.
Brain–computer interfaces (BCIs) are driven essentially by algorithms; however, the ethical role of such algorithms has so far been neglected in the ethical assessment of BCIs. The goal of this article is therefore twofold: First, it aims to offer insights into whether the problems related to the ethics of BCIs can be better grasped with the help of already existing work on the ethics of algorithms. As a second goal, the article explores what kinds of solutions are available in that body of scholarship, and how these solutions relate to some of the ethical questions around BCIs. In short, the article asks what lessons can be learned about the ethics of BCIs from looking at the ethics of algorithms. To achieve these goals, the article proceeds as follows. First, a brief introduction into the algorithmic background of BCIs is given. Second, the debate about epistemic concerns and the ethics of algorithms is sketched. Finally, this debate is transferred to the ethics of BCIs.
More than a decade ago, philosopher John Searle started a long-running controversy with his paper "Minds, Brains, and Programs" (Searle, 1980a), an attack on the ambitious claims of artificial intelligence (AI). With his now famous _Chinese Room_ argument, Searle claimed to show that despite the best efforts of AI researchers, a computer could never recreate such vital properties of human mentality as intentionality, subjectivity, and understanding. The AI research program is based on the underlying assumption that all important aspects of human cognition may in principle be captured in a computational model. This assumption stems from the belief that beyond a certain level, implementational details are irrelevant to cognition. According to this belief, neurons, and biological wetware in general, have no preferred status as the substrate for a mind. As it happens, the best examples of minds we have at present have arisen from a carbon-based substrate, but this is due to constraints of evolution and possibly historical accidents, rather than to an absolute metaphysical necessity. As a result of this belief, many cognitive scientists have chosen to focus not on the biological substrate of the mind, but instead on the _abstract causal structure_ that the mind embodies (at an appropriate level of abstraction). The view that it is abstract causal structure that is essential to mentality has been an implicit assumption of the AI research program since Turing (1950), but was first articulated explicitly, in various forms, by Putnam (1960), Armstrong (1970) and Lewis (1970), and has become known as _functionalism_. From here, it is a very short step to _computationalism_, the view that computational structure is what is important in capturing the essence of mentality. This step follows from a belief that any abstract causal structure can be captured computationally: a belief made plausible by the Church–Turing Thesis, which articulates the power of universal computation.
The paper provides a critical review of the debate on the foundations of Computer Ethics. Starting from a discussion of Moor's classic interpretation of the need for CE caused by a policy and conceptual vacuum, five positions in the literature are identified and discussed: the "no resolution approach", according to which CE can have no foundation; the professional approach, according to which CE is solely a professional ethics; the radical approach, according to which CE deals with absolutely unique issues, in need of a unique approach; the conservative approach, according to which CE is only a particular applied ethics, discussing new species of traditional moral issues; and the innovative approach, according to which theoretical CE can expand the metaethical discourse with a substantially new perspective. In the course of the analysis, it is argued that, although CE issues are not uncontroversially unique, they are sufficiently novel to render inadequate the adoption of standard macroethics, such as Utilitarianism and Deontologism, as the foundation of CE and hence to prompt the search for a robust ethical theory. Information Ethics is proposed for that theory, as the satisfactory foundation for CE. IE is characterised as a biologically unbiased extension of environmental ethics, based on the concepts of information object/infosphere/entropy rather than life/ecosystem/pain. In light of the discussion provided in this paper, it is suggested that CE is worthy of independent study because it requires its own application-specific knowledge and is capable of supporting a methodological foundation, IE.
This unique volume introduces and discusses the methods of validating computer simulations in scientific research. The core concepts, strategies, and techniques of validation are explained by an international team of pre-eminent authorities, drawing on expertise from various fields ranging from engineering and the physical sciences to the social sciences and history. The work also offers new and original philosophical perspectives on the validation of simulations. Topics and features: introduces the fundamental concepts and principles related to the validation of computer simulations, and examines philosophical frameworks for thinking about validation; provides an overview of the various strategies and techniques available for validating simulations, as well as the preparatory steps that have to be taken prior to validation; describes commonly used reference points and mathematical frameworks applicable to simulation validation; reviews the legal prescriptions, and the administrative and procedural activities related to simulation validation; presents examples of best practice that demonstrate how methods of validation are applied in various disciplines and with different types of simulation models; covers important practical challenges faced by simulation scientists when applying validation methods and techniques; offers a selection of general philosophical reflections that explore the significance of validation from a broader perspective. This truly interdisciplinary handbook will appeal to a broad audience, from professional scientists spanning all natural and social sciences, to young scholars new to research with computer simulations. Philosophers of science, and methodologists seeking to increase their understanding of simulation validation, will also find much to benefit from in the text.
To clarify the notion of computation and its role in cognitive science, we need an account of implementation, the nexus between abstract computations and physical systems. I provide such an account, based on the idea that a physical system implements a computation if the causal structure of the system mirrors the formal structure of the computation. The account is developed for the class of combinatorial-state automata, but is sufficiently general to cover all other discrete computational formalisms. The implementation relation is non-vacuous, so that criticisms by Searle and others fail. This account of computation can be extended to justify the foundational role of computation in artificial intelligence and cognitive science.
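To illustrate the formalism the account is developed for, here is a minimal sketch of a combinatorial-state automaton in Python; the representation (tuples of substates, a single step function) is an illustrative choice of mine, not Chalmers's formal definition.

```python
from typing import Callable, Tuple

State = Tuple[int, ...]   # a state is a vector of substates
Symbol = Tuple[int, ...]  # inputs and outputs are vectors too

class CSA:
    """Minimal combinatorial-state automaton: each step maps an
    (input vector, state vector) pair to a (state vector, output
    vector) pair, so every substate can depend on the whole state."""
    def __init__(self, step: Callable[[Symbol, State], Tuple[State, Symbol]],
                 start: State):
        self.step = step
        self.state = start

    def run(self, inputs):
        outputs = []
        for sym in inputs:
            self.state, out = self.step(sym, self.state)
            outputs.append(out)
        return outputs

# Toy instance: a two-substate binary counter that outputs its carry bit.
def step(sym, state):
    lo, hi = state
    total = lo + sym[0]
    carry = total // 2
    return ((total % 2, (hi + carry) % 2), (carry,))

m = CSA(step, start=(0, 0))
print(m.run([(1,), (1,), (1,)]))  # [(0,), (1,), (0,)]
```

On the account sketched in the abstract, a physical system implements this CSA when its physical states can be divided into components whose causal transitions mirror the step function above.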
Philosophy of mind and cognitive science have recently become increasingly receptive to the hypothesis of extended cognition, according to which external artifacts such as our laptops and smartphones can—under appropriate circumstances—feature as material realizers of a person's cognitive processes. We argue that, to the extent that the hypothesis of extended cognition is correct, our legal and ethical theorizing and practice must be updated by broadening our conception of personal assault so as to include intentional harm toward gadgets that have been appropriately integrated. We next situate the theoretical case for extended personal assault within the context of some recent ethical and legal cases and close with critical discussion.
F. H. George is Professor of Cybernetics at Brunel University in England. His book comprises eight chapters originally developed as lectures for a non-specialist audience. He points out the position of computer science among the sciences, explains its aims, procedures, and achievements to date, and speculates on its long-term implications for science in particular and society in general. Among the topics discussed are biological simulation and organ replacement, automated education, and the new philosophy of science. Each chapter concludes with a brief summary. George's treatment of the technical details of his speciality is both illuminating and readable, thus serving as an excellent primer on one of the new technology's most important components. His wider forays into philosophy, economics, sociology, and religion are less happy, however; and unfortunately they take up a large part of the text. In general, they reveal that George identifies the methods of human advancement with the methods of the natural sciences in an equation whose rigidity would make even B. F. Skinner blush. Yet, the reader cannot claim that he was not forewarned; for in the introduction, D. J. Stewart, Chairman of the Rationalist Press Association, suggests that the current "swing of interest among young people away from the physical and biological sciences and towards the behavioural and social sciences... represents a symptom of disillusionment with science and technology and an attempted escape into irrationality."--J. M. V.
There are many branches of philosophy called “the philosophy of X,” where X = disciplines ranging from history to physics. The philosophy of artificial intelligence has a long history, and there are many courses and texts with that title. Surprisingly, the philosophy of computer science is not nearly as well-developed. This article proposes topics that might constitute the philosophy of computer science and describes a course covering those topics, along with suggested readings and assignments.
Professor Apter has written a valuable book. His work, a non-technical introduction to the most important aspect of the use of computers in psychology, is simple, readable, yet surprisingly concentrated and provocative. His first two chapters contain an unusually clear, concise examination of the extent to which minds and machines can be compared. Although brief, it successfully collates the work of famous scientists and scholars of varied disciplines into a coherent cybernetic theory. Chapter three is a simplified explanation of the way a digital computer works. This serves as a handy reference for the layman during more difficult discussions in ensuing chapters. Chapters four through nine recount the progress of many researchers in copying human behavior by means of machines. Here Apter examines data that have been part of the literature of computer simulation for some time, such as the Turing test, the General Problem Solver, experiments in Pattern Recognition, etc. But he also treats new or heretofore unnoticed research which undoubtedly will become elements in future philosophical controversy. One of these new cases is a program written by J. C. Loehlin called "Aldous". Aldous is the name given individually to a number of programs which interact emotionally. Loehlin so far has programmed an Aldous, Decisive and Hesitant Aldouses, and, finally, Radical and Conservative Aldouses. Although the Aldous programs may sound more amusing than sober research in this area should, the details of the programming are both fascinating and philosophically significant. Loehlin, for instance, is forced to define "emotion" strictly in terms of environmental cues and elementary behavioral parameters. However, his Aldouses achieve from this a quite impressive and sophisticated repertoire of emotional responses accounting for learning, consistent emotional responses to unknown objects, mixed emotions, moods, frustration, satisfaction. Indeed, an "experienced" Aldous will exhibit behavior as unpredictable as any human. However, one still hesitates to call these reactions "emotion." Besides the obvious lack of physiological responses, Aldous has no control over his environment and a merely marginal control over his emotions. Apter sees this difficulty clearly: "In particular one would like to see added the possibility of Aldous planning rather than reacting to the current situation and also the possibility of... changing not only his attitudes but also the parameters which govern the acquisition and change of attitudes." This criticism also may be levelled against other better known "personality" programs based on individual personality theories such as Heider's or Homans's. Apter clearly perceives theoretical difficulties in computer simulation such as these. Always, he faces them and gauges their relevance for the explanation of human behavior. The one serious shortcoming of Apter's book is its brevity. Especially when he struggles with the theoretical implications of a particular experiment one gets the impression that he is needlessly condensing his analysis. This is particularly noticeable in the last chapter where he presents in barely-fleshed outline his position on the problem of consciousness. Despite this difficulty, however, Apter has fulfilled quite ably a promise he made in his preface that: "... this book about computers will be of interest to students of psychology and philosophy, the general reader and... perhaps also one day to computers."—J. F.
This article reviews the strengths and limitations of five major paradigms of medical computer-assisted decision making (CADM): (1) clinical algorithms, (2) statistical analysis of collections of patient data, (3) mathematical models of physical processes, (4) decision analysis, and (5) symbolic reasoning or artificial intelligence (AI). No one technique is best for all applications, and there is recent promising work which combines two or more established techniques. We emphasize both the inherent power of symbolic reasoning and the promise of artificial intelligence and the other techniques to complement each other.
We study the computational complexity of the universal and quasi-equational theories of classes of bounded distributive lattices with a negation operation, i.e., a unary operation satisfying a subset of the properties of the Boolean negation. The upper bounds are obtained through the use of partial algebras. The lower bounds are either inherited from the equational theory of bounded distributive lattices or obtained through a reduction of a global satisfiability problem for a suitable system of propositional modal logic.
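For orientation, the kinds of axioms from which such negation operations are typically drawn can be listed explicitly; the selection below names standard candidates from the literature and is not taken from the abstract itself:

```latex
% Standard candidate axioms for a unary negation \neg on a bounded
% distributive lattice (illustrative list; the paper's exact choices
% are not specified in the abstract):
\begin{align*}
  & \neg 0 = 1, \quad \neg 1 = 0              && \text{(normality)} \\
  & x \le y \;\Rightarrow\; \neg y \le \neg x && \text{(antitonicity)} \\
  & \neg (x \vee y) = \neg x \wedge \neg y    && \text{(De Morgan law)} \\
  & \neg\neg x = x                            && \text{(involution)} \\
  & x \wedge \neg x = 0                       && \text{(pseudocomplementation)}
\end{align*}
```

Boolean negation satisfies all of these; the classes studied arise by requiring only some of them.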