This book argues that (1) AI will continue to produce machines with the capacity to pass stronger and stronger versions of the Turing Test, but that (2) the "Person Building Project" (the attempt by AI and Cognitive Science to build a machine which is a person) will inevitably fail. The defense of (2) rests in large part on a refutation of the proposition that persons are automata -- a refutation involving an array of issues, from free will to Gödel to introspection to Searle and beyond.
The Turing Test is claimed by many to be a way to test for the presence, in computers, of such ``deep'' phenomena as thought and consciousness. Unfortunately, attempts to build computational systems able to pass TT have devolved into shallow symbol manipulation designed to trick judges by hook or by crook. The human creators of such systems know all too well that they have merely tried to fool those people who interact with their systems into believing that these systems really have minds. And the problem is fundamental: the structure of the TT is such as to cultivate tricksters. A better test is one that insists on a certain restrictive epistemic relation between an artificial agent A, its output o, and the human architect H of A – a relation which, roughly speaking, obtains when H cannot account for how A produced o. We call this test the ``Lovelace Test'' in honor of Lady Lovelace, who believed that only when computers originate things should they be believed to have minds.
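The test's defining epistemic relation can be made explicit in one line; the predicate names below are ours, a minimal formalization of the rough gloss above rather than the authors' official statement:

\[
\mathrm{Lovelace}(A,H) \;\iff\; \exists o\,\bigl(\mathrm{Output}(A,o) \,\wedge\, \neg\,\mathrm{Accounts}(H,A,o)\bigr)
\]

That is, artificial agent $A$ passes relative to its architect $H$ just in case $A$ produces some output $o$ whose production $H$ cannot account for.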
The original proof of the four-color theorem by Appel and Haken sparked a controversy when Tymoczko used it to argue that the justification provided by unsurveyable proofs carried out by computers cannot be a priori. It also created a lingering impression to the effect that such proofs depend heavily for their soundness on large amounts of computation-intensive custom-built software. Contra Tymoczko, we argue that the justification provided by certain computerized mathematical proofs is not fundamentally different from that provided by surveyable proofs, and can be sensibly regarded as a priori. We also show that the aforementioned impression is mistaken because it fails to distinguish between proof search (the context of discovery) and proof checking (the context of justification). By using mechanized proof assistants capable of producing certificates that can be independently checked, it is possible to carry out complex proofs without the need to trust arbitrary custom-written code. We only need to trust one fixed, small, and simple piece of software: the proof checker. This is not only possible in principle, but is in fact becoming a viable methodology for performing complicated mathematical reasoning. This is evinced by a new proof of the four-color theorem that appeared in 2005, and which was developed and checked in its entirety by a mechanical proof system.
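The certificate-plus-checker architecture described above can be illustrated with a toy sketch. Everything here (the rule set, the certificate format, the function names) is an invented miniature for illustration, not the machinery of any actual proof assistant: an untrusted prover, however complex, emits a certificate, and only the small checker below need be trusted.

```python
# Toy illustration of the "small trusted checker" architecture: an
# untrusted prover emits a certificate (a list of lines, each an axiom
# citation or a modus-ponens step); this checker is the only trusted code.

from typing import Union

# Formulas: strings for atoms, ("->", p, q) for implications.
Formula = Union[str, tuple]

def check(axioms: set, certificate: list, goal: Formula) -> bool:
    """Validate a certificate: each step is ("axiom", f) or ("mp", i, j),
    where line j must be an implication whose antecedent is line i.
    Returns True iff every step is sound and the last line is the goal."""
    lines: list = []
    for step in certificate:
        if step[0] == "axiom":
            f = step[1]
            if f not in axioms:
                return False
            lines.append(f)
        elif step[0] == "mp":
            i, j = step[1], step[2]
            if i >= len(lines) or j >= len(lines):
                return False
            imp = lines[j]
            if not (isinstance(imp, tuple) and imp[0] == "->"
                    and imp[1] == lines[i]):
                return False
            lines.append(imp[2])  # detach the consequent
        else:
            return False
    return bool(lines) and lines[-1] == goal

# Usage: derive q from {p, p -> q}. The prover that found this
# certificate could be arbitrarily complex; we needn't trust it.
axioms = {"p", ("->", "p", "q")}
cert = [("axiom", "p"), ("axiom", ("->", "p", "q")), ("mp", 0, 1)]
assert check(axioms, cert, "q")
```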
Bill Joy's deep pessimism is now famous. Why the Future Doesn't Need Us, his defense of that pessimism, has been read by, it seems, everyone, and many of these readers, apparently, have been converted to the dark side, or rather more accurately, to the future-is-dark side. Fortunately (for us; unfortunately for Joy), the defense, at least the part of it that pertains to AI and robotics, fails. Ours may be a dark future, but we cannot know that on the basis of Joy's reasoning. On the other hand, we ought to fear a good deal more than fear itself: we ought to fear not robots, but what some of us may do with robots.
Is it true that if zombies---creatures who are behaviorally indistinguishable from us, but no more conscious than a rock---are logically possible, the computational conception of mind is false? Are zombies logically possible? Are they physically possible? This paper is a careful, sustained argument for affirmative answers to these three questions.
I critically review Raymond Turner's Computational Artefacts – Towards a Philosophy of Computer Science by placing beside his position a rather different one, according to which computer science is a branch of, and is therefore subsumed by, immaterial formal logic.
Herein we make a plea to machine ethicists for the inclusion of constraints on their theories consistent with empirical data on human moral cognition. As philosophers, we clearly lack widely accepted solutions to issues regarding the existence of free will, the nature of persons, and firm conditions on moral agency/patienthood, all of which are indispensable concepts to be deployed by any machine able to make moral judgments. No agreement seems forthcoming on these matters, and we don't hold out hope for machines that can both always do the right thing (on some general ethic) and produce explanations for their behavior that would be understandable to a human confederate. Our tentative solution involves understanding the folk concepts associated with our moral intuitions regarding these matters, and how they might be dependent upon the nature of human cognitive architecture. It is in this spirit that we begin to explore the complexities inherent in human moral judgment via computational theories of the human cognitive architecture, rather than under the extreme constraints imposed by rational-actor models assumed throughout much of the literature on philosophical ethics. After discussing the various advantages and challenges of taking this particular perspective on the development of artificial moral agents, we computationally explore a case study of human intuitions about the self and causal responsibility. We hypothesize that a significant portion of the variance in reported intuitions for this case might be explained by appeal to an interplay between the human ability to mindread and the way that knowledge is organized conceptually in the cognitive system. In the present paper, we build on a pre-existing computational model of mindreading (Bello et al. 2007) by adding constraints related to psychological distance (Trope and Liberman 2010), a well-established psychological theory of conceptual organization. Our initial results suggest that studies of folk concepts involved in moral intuitions lead us to an enriched understanding of cognitive architecture and a more systematic method for interpreting the data generated by such studies.
The dominant scientific and philosophical view of the mind – according to which, put starkly, cognition is computation – is refuted herein, via specification and defense of the following new argument: Computation is reversible; cognition isn't; ergo, cognition isn't computation. After presenting a sustained dialectic arising from this defense, we conclude with a brief preview of the view we would put in place of the cognition-is-computation doctrine.
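Rendered schematically (our notation), the new argument is a straightforward modus tollens:

\[
\begin{array}{rl}
1. & \forall x\,\bigl(\mathrm{Computation}(x) \rightarrow \mathrm{Reversible}(x)\bigr) \\
2. & \neg\,\mathrm{Reversible}(\mathrm{cognition}) \\
\hline
3. & \therefore\ \neg\,\mathrm{Computation}(\mathrm{cognition})
\end{array}
\]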
What's computation? The received answer is that computation is a computer at work, and a computer at work is that which can be modelled as a Turing machine at work. Unfortunately, as John Searle has recently argued, and as others have agreed, the received answer appears to imply that AI and Cog Sci are a royal waste of time. The argument here is alarmingly simple: AI and Cog Sci (of the Strong sort, anyway) are committed to the view that cognition is computation (or brains are computers); but all processes are computations (or all physical things are computers); so AI and Cog Sci are positively silly. I refute this argument herein, in part by defining the locutions ``x is a computer'' and ``c is a computation'' in a way that blocks Searle's argument but exploits the hard-to-deny link between What's Computation? and the theory of computation. However, I also provide, at the end of this essay, an argument which, it seems to me, implies not that AI and Cog Sci are silly, but that they're based on a form of computation that is well beneath human persons.
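The ``alarmingly simple'' argument can be displayed schematically (our rendering, with invented predicate names):

\[
\begin{array}{rl}
1. & \text{Strong AI/Cog Sci:}\ \forall x\,\bigl(\mathrm{Cognizes}(x) \rightarrow \mathrm{Computes}(x)\bigr) \\
2. & \text{Searle:}\ \forall x\,\bigl(\mathrm{PhysicalThing}(x) \rightarrow \mathrm{Computes}(x)\bigr) \\
\hline
3. & \text{Premise 1 is trivially satisfied and explanatorily empty; AI and Cog Sci are ``silly.''}
\end{array}
\]

Blocking the argument therefore calls for definitions of the two locutions strict enough to falsify the second premise, which is the kind of definition the essay supplies.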
Moody is right that the doctrine of conscious inessentialism is false. Unfortunately, his zombie-based argument against it, once made sufficiently clear to evaluate, is revealed as nothing but legerdemain. The fact is -- though Moody has convinced himself otherwise -- certain zombies are impenetrable: that they are zombies, and not conscious beings like us, is something beyond the capacity of humans to divine.
Fetzer famously claims that program verification is not even a theoretical possibility, and offers a certain argument for this far-reaching claim. Unfortunately for Fetzer, and like-minded thinkers, this position-argument pair, while based on a seminal insight (that program verification, despite its Platonic proof-theoretic airs, is plagued by the inevitable unreliability of messy, real-world causation), is demonstrably self-refuting. As I soon show, Fetzer is like the person who claims: ‘My sole claim is that every claim expressed by an English sentence and starting with the phrase “My sole claim” is false’. Or, more accurately, such thinkers are like the person who claims that modus tollens is invalid, and supports this claim by giving an argument that itself employs this rule of inference.
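The self-refutation can be exhibited in two lines. Let $S$ be the quoted claim; since the sentence expressing $S$ itself begins with “My sole claim”, $S$ falls within its own scope (our reconstruction):

\[
\begin{array}{rl}
1. & S \rightarrow \neg S \quad \text{(if $S$ is true then, falling within its own scope, $S$ is false)} \\
2. & \therefore\ \neg S \quad \text{(from 1, since $S \rightarrow \neg S$ is equivalent to $\neg S$)}
\end{array}
\]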
This article argues that existing systems on the Web cannot approach human-level intelligence, as envisioned by Descartes, without being able to achieve genuine problem solving on unseen problems. The article argues that this entails committing to a strong intensional logic. In addition to revising extant arguments in favor of intensional systems, it presents a novel mathematical argument to show why extensional systems can never hope to capture the inherent complexity of natural language. The argument makes its case by focusing on representing, with increasing degrees of complexity, knowledge in a first-order language. These attempts at representation, however, fail to achieve consistency, making the case for an intensional representation system for natural language clear.
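A textbook illustration of the extensional failure at issue (our example, not one drawn from the article): in an extensional first-order setting, substitution of identicals licenses inferences that are intuitively invalid inside propositional-attitude contexts.

\[
\begin{array}{rl}
1. & \mathrm{Believes}(\mathit{lois},\ \mathrm{Flies}(\mathit{superman})) \\
2. & \mathit{superman} = \mathit{clarkkent} \\
\hline
3. & \therefore\ \mathrm{Believes}(\mathit{lois},\ \mathrm{Flies}(\mathit{clarkkent})) \quad \text{(substitution of identicals; intuitively invalid)}
\end{array}
\]

Note that premise 1 is not even well-formed in pure first-order logic, since a formula occurs in argument position; reified first-order encodings tend either to block needed inferences or to wrongly license this one, the sort of dilemma an intensional logic is designed to escape.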
You are offered one billion dollars to 'simply' produce a proof-of-concept robot that has phenomenal consciousness -- in fact, you can receive a deliciously large portion of the money up front, by simply starting a three-year work plan in good faith. Should you take the money and commence? No. I explain why this refusal is in order, now and into the foreseeable future.
Having, as it is generally agreed, failed to destroy the computational conception of mind with the Gödelian attack he articulated in his The Emperor's New Mind, Penrose has returned, armed with a more elaborate and more fastidious Gödelian case, expressed in Chapters 2 and 3 of his Shadows of the Mind. The core argument in these chapters is enthymematic, and when formalized, a remarkable number of technical glitches come to light. Over and above these defects, the argument, at best, is an instance of either the fallacy of denying the antecedent, the fallacy of petitio principii, or the fallacy of equivocation. More recently, writing in response to his critics in the electronic journal Psyche, Penrose has offered a Gödelian case designed to improve on the version presented in SOTM. But this version is yet again another failure. In falling prey to the errors we uncover, Penrose's new Gödelian case is unmasked as the same confused refrain J.R. Lucas initiated 35 years ago.
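For reference, here is the invalid form charged to Penrose, set beside the valid form it superficially resembles (schematic only; the instantiation to Penrose's premises is carried out in the article):

\[
\text{Denying the antecedent (invalid):}\ \ \frac{\varphi \rightarrow \psi \qquad \neg\varphi}{\neg\psi}
\qquad\qquad
\text{Modus tollens (valid):}\ \ \frac{\varphi \rightarrow \psi \qquad \neg\psi}{\neg\varphi}
\]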
In the course of seeking an answer to the question "How do you know you are not a zombie?" Floridi (2005) issues an ingenious, philosophically rich challenge to artificial intelligence (AI) in the form of an extremely demanding version of the so-called knowledge game (or "wise-man puzzle," or "muddy-children puzzle")—one that purportedly ensures that those who pass it are self-conscious. In this article, on behalf of (at least the logic-based variety of) AI, I take up the challenge—which is to say, I try to show that this challenge can in fact be met by AI in the foreseeable future.
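For readers unfamiliar with the puzzle family behind Floridi's challenge, the classic muddy-children dynamics can be simulated in a few lines. The sketch below implements the standard textbook version (a public announcement followed by simultaneous rounds of questioning), not Floridi's strengthened self-consciousness variant; all names are ours.

```python
# Classic muddy-children puzzle: n children, k >= 1 of them muddy; each
# sees every forehead but its own. After the public announcement "at
# least one of you is muddy," the children are repeatedly asked in
# unison whether they know their own state. Textbook result: the k
# muddy children all first answer "yes" at round k.

def simulate(muddy):
    """muddy[i] is True iff child i is muddy. Returns (round, knowers)."""
    assert any(muddy), "the public announcement must be true"
    n = len(muddy)
    rnd = 0
    while True:
        rnd += 1
        knowers = []
        for i in range(n):
            seen = sum(muddy[j] for j in range(n) if j != i)
            # After rnd - 1 silent rounds it is common knowledge that at
            # least rnd children are muddy; a muddy child seeing only
            # rnd - 1 muddy foreheads thereby deduces its own state.
            if muddy[i] and seen == rnd - 1:
                knowers.append(i)
        if knowers:
            return rnd, knowers

# Usage: 3 muddy children among 5 -> all three know at round 3.
print(simulate([True, False, True, True, False]))  # (3, [0, 2, 3])
```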
Alan Turing devised his famous test (TT) through a slight modification of the parlor game in which a judge tries to ascertain the gender of two people who are only linguistically accessible. Stevan Harnad has introduced the Total TT (TTT), in which the judge can look at the contestants in an attempt to determine which is a robot and which a person. But what if we confront the judge with an animal, and a robot striving to pass for one, and then challenge him to peg which is which? Now we can index TTT to a particular animal and its synthetic correlate. We might therefore have TTT-rat, TTT-cat, TTT-dog, and so on. These tests, as we explain herein, are a better barometer of artificial intelligence (AI) than Turing's original TT, because AI seems to have ammunition sufficient only to reach the level of artificial animal, not artificial person.
Andrew Boucher (1997) argues that ``parallel computation is fundamentally different from sequential computation'' (p. 543), and that this fact provides reason to be skeptical about whether AI can produce a genuinely intelligent machine. But parallelism, as I prove herein, is irrelevant. What Boucher has inadvertently glimpsed is one small part of a mathematical tapestry portraying the simple but undeniable fact that physical computation can be fundamentally different from ordinary, ``textbook'' computation (whether parallel or sequential). This tapestry does indeed immediately imply that human cognition may be uncomputable.
Does what guides a pastry chef stand on par, from the standpoint of contemporary computer science, with what guides a supercomputer? Did Betty Crocker, when telling us how to bake a cake, provide an effective procedure, in the sense of `effective' used in computer science? According to Cleland, the answer in both cases is ``Yes''. One consequence of Cleland's affirmative answer is supposed to be that hypercomputation is, to use her phrase, ``theoretically viable''. Unfortunately, though we applaud Cleland's ``gadfly philosophizing'' (as, in fact, seminal), we believe that unless such a modus operandi is married to formal philosophy, nothing conclusive will be produced (as evidenced by the problems plaguing Cleland's work that we uncover). Herein, we attempt to pull off not the complete marriage for hypercomputation, but perhaps at least the beginning of a courtship that others can subsequently help along.
The vision of machines autonomously carrying out substantive conjecture generation, theorem discovery, proof discovery, and proof verification in mathematics and the natural sciences has a long history that reaches back before the development of automatic systems designed for such processes. While there has been considerable progress in proof verification in the formal sciences, for instance the Mizar project and the four-color theorem, now machine-verified, there has been scant such work carried out in the realm of the natural sciences—until recently. The delay in the case of the natural sciences can be attributed to both a lack of formal analysis of the so-called "theories" in such sciences, and the lack of sufficient progress in automated theorem proving. While the lack of analysis is probably due to an inclination toward informality and empiricism on the part of nearly all of the relevant scientists, the lack of progress is to be expected, given the computational hardness of automated theorem proving; after all, theoremhood in even first-order logic is Turing-undecidable. We give in the present short paper a compressed report on our building upon such formal theories using logic-based AI in order to achieve, in relativity, both machine proof discovery and proof verification, for theorems previously established by humans. Our report is intended to serve as a springboard to machine-produced results in the future that have not been obtained by humans.
I urge a return, by the lights of logic and commonsense, to a dialectical tabula rasa – according to which: (1) consciousness, in the ordinary pre-analytic sense of the term, is identified with P-consciousness, and “A-consciousness” is supplanted by suitably configured terms from its Blockian definition; (2) the supposedly fallacious Searlean argument for the view that a function of P-consciousness is to allow flexible and creative cognition is enthymematic and, when charitably specified, quite formidable.
One of us has previously argued that the Church-Turing Thesis (CTT), contra Elliot Mendelson, is not provable, and is — in light of the mind’s capacity for effortless hypercomputation — moreover false (e.g., [13]). But a new, more serious challenge has appeared on the scene: an attempt by Smith [28] to prove CTT. His case is a clever “squeezing argument” that makes crucial use of Kolmogorov-Uspenskii (KU) machines. The plan for the present paper is as follows. After covering some necessary preliminaries regarding the nature of CTT, and taking note of the fact that this thesis is “intrinsically cognitive” (§2), we: sketch out, for context, an open-minded position on CTT and related matters (§3); explain the formal structure of squeezing arguments (§4); after a review of KU-machines, formalize Smith’s case (§5); give our objections to certain assumptions in Smith’s argument (§6); support these objections with some evidence from general but limited-agent problem solving (§7); and explain why Smith’s argument is inconclusive (§8). We end with some brief, concluding remarks, some of which point toward near-future work that will build on the present paper (§9).
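The skeleton of a squeezing argument, in Kreisel's style (our summary; Smith's instantiation uses KU-machines): an informally characterized class $I$ is trapped between two formally defined classes $F_1$ and $F_2$.

\[
F_1 \subseteq I \subseteq F_2 \ \ \text{(informal reflection)}, \qquad F_2 \subseteq F_1 \ \ \text{(formal theorem)} \;\;\Longrightarrow\;\; F_1 = I = F_2 .
\]

For CTT, $I$ is the class of effectively computable functions; since the formal theorem is not usually in dispute, the natural pressure points are the two informal inclusions, which is where objections of the kind given in §6–§7 apply.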
In this paper I place Jim Fetzer's esemplastic burial of the computational conception of mind within the context of both my own burial and the theory of mind I would put in place of this dead doctrine. My view...
Steven Pinker's How the Mind Works (HTMW) marks, in my opinion, a historic point in humankind's attempt to understand itself. Socrates delivered his "know thyself" imperative rather long ago, and now, finally, in this behemoth of a book, published at the dawn of a new millennium, Pinker steps up to have psychology tell us what we are: computers crafted by evolution -- end of story; mystery solved; and the poor philosophers, having never managed to obey Socrates' command, are left alone to wander in the labyrinth of their benighted speculation forever. Unfortunately, though HTMW is to this point the crowning attempt of psychology to make systematic sense of persons by integrating everything relevant science knows, the book fails -- and it fails so fundamentally and irremediably that we would do well to wonder anew whether we should supplant the basic view it promotes with what I call the super-mind hypothesis: the view that though mere animals are evolved computers, persons are more.
Though it's difficult to agree on the exact date of their union, logic and artificial intelligence (AI) were married by the late 1950s, and, at least during their honeymoon, were happily united. What connubial permutation do logic and AI find themselves in now? Are they still (happily) married? Are they divorced? Or are they only separated, both still keeping alive the promise of a future in which the old magic is rekindled? This paper is an attempt to answer these questions via a review of six books. Encapsulated, our answer is that (i) logic and AI, despite tabloidish reports to the contrary, still enjoy matrimonial bliss, and (ii) only their future robotic offspring (as opposed to the children of connectionist AI) will mark real progress in the attempt to understand cognition.
We provide an underlying theory of argument by disanalogy, in order to employ it to show that cyberwarfare is fundamentally new. Once this general case is made, the battle is won: we are well on our way to establishing our main thesis: that Just War Theory itself must be modernized. Augustine and Aquinas had a stunningly long run, but today’s world, based as it is on digital information and increasingly intelligent information-processing, points the way to a beast so big and so radically different, that the core of this duo’s insights needs to be radically extended.
Zlatev offers surprisingly weak reasoning in support of his view that robots with the right kind of developmental histories can have meaning. We ought nonetheless to praise Zlatev for an impressionistic account of how attending to the psychology of human development can help us build robots that appear to have intentionality.