We describe a possible physical device that computes a function that cannot be computed by a Turing machine. The device is physical in the sense that it is compatible with General Relativity. We discuss some objections, focusing on those which deny that the device is either a computer or computes a function that is not Turing computable. Finally, we argue that the existence of the device does not refute the Church–Turing thesis, but nevertheless may be a counterexample to Gandy's thesis.
A recent proposal to solve the halting problem with the quantum adiabatic algorithm is criticized and found wanting. Contrary to other physical hypercomputers, where one believes that a physical process “computes” a (recursion-theoretic) non-computable function simply because one believes the physical theory that presumably governs or describes such a process, believing the theory (i.e., quantum mechanics) in the case of the quantum adiabatic “hypercomputer” is tantamount to acknowledging that the hypercomputer cannot perform its task.
A version of the Church-Turing Thesis states that every effectively realizable physical system can be simulated by Turing Machines (‘Thesis P’). In this formulation the Thesis appears to be an empirical hypothesis, subject to physical falsification. We review the main approaches to computation beyond Turing definability (‘hypercomputation’): supertask, non-well-founded, analog, quantum, and retrocausal computation. The conclusions are that these models reduce to supertasks, i.e. infinite computation, and that even supertasks are no solution for recursive incomputability. It follows that the realization of hypercomputing devices is implausible, and that Thesis P is not essentially different from the standard Church-Turing Thesis.
We explore the possibility of using quantum mechanical principles for hypercomputation through the consideration of a quantum algorithm for computing the Turing halting problem. The mathematical noncomputability is compensated by the measurability of the values of quantum observables and of the probability distributions for these values. Some previous no-go claims against quantum hypercomputation are then reviewed in the light of this new positive proposal.
In this report I provide an introduction to the burgeoning field of hypercomputation – the study of machines that can compute more than Turing machines. I take an extensive survey of many of the key concepts in the field, tying together the disparate ideas and presenting them in a structure which allows comparisons of the many approaches and results. To this I add several new results and draw out some interesting consequences of hypercomputation for several different disciplines.
A recent attempt to compute a (recursion‐theoretic) noncomputable function using the quantum adiabatic algorithm is criticized and found wanting. Quantum algorithms may outperform classical algorithms in some cases, but so far they retain the classical (recursion‐theoretic) notion of computability. A speculation is then offered as to where the putative power of quantum computers may come from.
This paper investigates the view that digital hypercomputing is a good reason for rejection or re-interpretation of the Church-Turing thesis. After suggesting that such re-interpretation is historically problematic and often involves attacking a straw man (the ‘maximality thesis’), it discusses proposals for digital hypercomputing with Zeno-machines, i.e. computing machines that compute an infinite number of computing steps in finite time, thus performing supertasks. It argues that effective computing with Zeno-machines falls into a dilemma: either they are specified such that they do not have output states, or they are specified such that they do have output states, but involve contradiction. Repairs through non-effective methods or special rules for semi-decidable problems are sought, but not found. The paper concludes that hypercomputing supertasks are impossible in the actual world and thus no reason for rejection of the Church-Turing thesis in its traditional interpretation.
Does Nature permit the implementation of behaviours that cannot be simulated computationally? We consider the meaning of physical computation in some detail, and present arguments in favour of physical hypercomputation: for example, modern scientific method does not allow the specification of any experiment capable of refuting hypercomputation. We consider the implications of relativistic algorithms capable of solving the (Turing) Halting Problem. We also reject as a fallacy the argument that hypercomputation has no relevance because non-computable values are indistinguishable from sufficiently close computable approximations. In addition to considering the nature of computability relative to any given physical theory, we can consider the relationship between versions of computability corresponding to different models of physics. Deutsch and Penrose have argued on mathematical grounds that quantum computation and Turing computation have equivalent formal power. We suggest this equivalence is invalid when considered from the physical point of view, by highlighting a quantum computational behaviour that cannot meaningfully be considered feasible in the classical universe.
The diagonal method is often used to show that Turing machines cannot solve their own halting problem. There have been several recent attempts to show that this method also exposes either contradiction or arbitrariness in other theoretical models of computation which claim to be able to solve the halting problem for Turing machines. We show that such arguments are flawed—a contradiction only occurs if a type of machine can compute its own diagonal function. We then demonstrate why such a situation does not occur for the methods of hypercomputation under attack, and why it is unlikely to occur for any other serious methods.
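For readers who want the diagonal construction the abstract refers to spelled out, here is a minimal sketch in Python; the function names (halts, diagonal) are illustrative, not taken from the paper, and halts is of course only hypothetical.

```python
# Sketch of the classical diagonal argument. Suppose halts() were a total
# decider for programs of the SAME type as the programs it judges.

def halts(program_source: str, input_data: str) -> bool:
    """Hypothetical halting decider; no such total function exists
    for machines of the same class as those it is applied to."""
    raise NotImplementedError

def diagonal(program_source: str) -> None:
    """Halts iff `program_source`, run on its own source, does NOT halt."""
    if halts(program_source, program_source):
        while True:      # contradict the decider: loop forever
            pass
    # otherwise: return (i.e., halt) immediately

# Feeding diagonal its own source yields: diagonal halts iff it does not.
# The abstract's point: no contradiction arises if halts() belongs to a
# strictly stronger class of machines than the programs it is applied to.
```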
This paper surveys a wide range of proposed hypermachines, examining the resources that they require and the capabilities that they possess.
For over a decade, the hypercomputation movement has produced computational models that in theory solve the algorithmically unsolvable, but they are not physically realizable according to currently accepted physical theories. While opponents of the hypercomputation movement provide arguments against the physical realizability of specific models in order to demonstrate this, these arguments lack the generality to be a satisfactory justification against the construction of any information-processing machine that computes beyond the universal Turing machine. To this end, I present a more mathematically concrete challenge to hypercomputability, and will show that one is immediately led into physical impossibilities, thereby demonstrating the infeasibility of hypercomputers more generally. This gives impetus to propose and justify a more plausible starting point for an extension to the classical paradigm that is physically possible, at least in principle. Instead of attempting to rely on infinities such as idealized limits of infinite time or numerical precision, or some other physically unattainable source, one should focus on extending the classical paradigm to better encapsulate modern computational problems that are not well-expressed/modeled by the closed-system paradigm of the Turing machine. I present the first steps toward this goal by considering contemporary computational problems dealing with intractability and issues surrounding cyber-physical systems, and argue that a reasonable extension to the classical paradigm should focus on these issues in order to be practically viable.
Hypercomputation—the hypothesis that Turing-incomputable objects can be computed through infinitary means—is ineffective, as the unsolvability of the halting problem for Turing machines depends just on the absence of a definite value for some paradoxical construction; nature and quantity of computing resources are immaterial. The assumption that the halting problem is solved by oracles of higher Turing degree amounts just to postulation; infinite-time oracles are not actually solving paradoxes, but simply assigning them conventional values. Special values for non-terminating processes are likewise irrelevant, since diagonalization can cover any amount of value assignments. This should not be construed as a restriction of computing power: Turing’s uncomputability is not a ‘barrier’ to be broken, but simply an effect of the expressive power of consistent programming systems.
We claim that a recent article of P. Cotogno ([2003]) in this journal is based on an incorrect argument concerning the non-computability of diagonal functions. The point is that whilst diagonal functions are not computable by any function of the class over which they diagonalise, there is no ‘logical incomputability’ in their being computed over a wider class. Hence this ‘logical incomputability’ regrettably cannot be used in his argument that no hypercomputation can compute the Halting problem. This seems to lead him into a further error in his analysis of the supposed conventional status of the infinite time Turing machines of Hamkins and Lewis ([2000]). Theorem 1 refutes this directly.
Does what guides a pastry chef stand on par, from the standpoint of contemporary computer science, with what guides a supercomputer? Did Betty Crocker, when telling us how to bake a cake, provide an effective procedure, in the sense of ‘effective’ used in computer science? According to Cleland, the answer in both cases is “Yes”. One consequence of Cleland's affirmative answer is supposed to be that hypercomputation is, to use her phrase, “theoretically viable”. Unfortunately, though we applaud Cleland's “gadfly philosophizing” (as, in fact, seminal), we believe that unless such a modus operandi is married to formal philosophy, nothing conclusive will be produced (as evidenced by the problems plaguing Cleland's work that we uncover). Herein, we attempt to pull off not the complete marriage for hypercomputation, but perhaps at least the beginning of a courtship that others can subsequently help along.
In this article intelligent systems are placed in the context of accelerated Turing machines. Although such machines are not currently a reality, the very real gains in computing power made over previous decades require us to continually reevaluate the potential of intelligent systems. The economic theories of Adam Smith provide us with a useful insight into this question.
Accelerating Turing machines have attracted much attention in the last decade or so. They have been described as “the work-horse of hypercomputation”. But do they really compute beyond the “Turing limit”—e.g., compute the halting function? We argue that the answer depends on what you mean by an accelerating Turing machine, on what you mean by computation, and even on what you mean by a Turing machine. We show first that in the current literature the term “accelerating Turing machine” is used to refer to two very different species of accelerating machine, which we call end-stage-in and end-stage-out machines, respectively. We argue that end-stage-in accelerating machines are not Turing machines at all. We then present two differing conceptions of computation, the internal and the external, and introduce the notion of an epistemic embedding of a computation. We argue that no accelerating Turing machine computes the halting function in the internal sense. Finally, we distinguish between two very different conceptions of the Turing machine, the purist conception and the realist conception; and we argue that Turing himself was no subscriber to the purist conception. We conclude that under the realist conception, but not under the purist conception, an accelerating Turing machine is able to compute the halting function in the external sense. We adopt a relatively informal approach throughout, since we take the key issues to be philosophical rather than mathematical.
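As a reminder of the acceleration schedule at issue (a standard fact about such machines, not specific to this paper): if the machine performs its $n$-th step in $2^{-n}$ time units, then all infinitely many steps are completed within one time unit, since
\[
\sum_{n=1}^{\infty} 2^{-n} = 1 .
\]
The philosophical dispute then concerns what, if anything, counts as the machine's state and output at the limit.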
Since the mid-twentieth century, the concept of the Turing machine has dominated thought about effective procedures. This paper presents an alternative to Turing's analysis; it unifies, refines, and extends my earlier work on this topic. I show that Turing machines cannot live up to their billing as paragons of effective procedure; at best, they may be said to provide us with mere procedure schemas. I argue that the concept of an effective procedure crucially depends upon distinguishing procedures as definite courses of action(-types) from the particular courses of action(-tokens) that actually instantiate them and the causal processes and/or interpretations that ultimately make them effective. On my analysis, effectiveness is not just a matter of logical form; ‘content’ matters. The analysis I provide has the advantage of applying to ordinary, everyday procedures such as recipes and methods, as well as the more refined procedures of mathematics and computer science. It also has the virtue of making better sense of the physical possibilities for hypercomputation than the received view and its extensions, e.g. Turing's o-machines, accelerating machines.
The ‘Turing barrier’ is an evocative image for 0′, the degree of the unsolvability of the halting problem for Turing machines—equivalently, of the undecidability of Peano Arithmetic. The ‘barrier’ metaphor conveys the idea that effective computability is impaired by restrictions that could be removed by infinite methods. Assuming that the undecidability of PA essentially depends on the finite nature of its computational means, decidability would be restored by the ω-rule. Hypercomputation, the hypothetical realization of infinitary machines through relativistic and quantum models, would thus be capable of breaking the Turing barrier. The speculation is unfounded in principle, apart from issues of physical realizability: the point is that the ω-rule does not cope with all objects entailing the undecidability of PA. As long as the system is consistent, the computational boundary can be established by paradox-like constructions, such as Gödel-Rosser’s and Yablo’s, which are refractory to infinite induction, and do stand as the barrier’s buttresses.
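For reference, the ω-rule mentioned in the abstract is the infinitary inference rule
\[
\frac{\varphi(\bar{0}),\ \varphi(\bar{1}),\ \varphi(\bar{2}),\ \ldots}{\forall n\, \varphi(n)}
\]
which licenses the universal conclusion from separate proofs of every numerical instance; a proof using it has infinitely many premises, which is why it is the natural benchmark for ‘infinite methods’ here.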
This paper focuses on a constructive treatment of the mathematical formalism of quantum theory and a possible role of constructivist philosophy in resolving the foundational problems of quantum mechanics, particularly, the controversy over the meaning of the wave function of the universe. As is demonstrated in the paper, unless the number of the universe’s degrees of freedom is fundamentally upper bounded or hypercomputation is physically realizable, the universal wave function is a non-constructive entity in the sense of constructive recursive mathematics. This means that even if such a function might exist, basic mathematical operations on it would be undefinable and subsequently the only content one would be able to deduce from this function would be purely symbolic.
Alan Turing anticipated many areas of current research in computer and cognitive science. This article outlines his contributions to Artificial Intelligence, connectionism, hypercomputation, and Artificial Life, and also describes Turing's pioneering role in the development of electronic stored-program digital computers. It locates the origins of Artificial Intelligence in postwar Britain. It examines the intellectual connections between the work of Turing and of Wittgenstein in respect of their views on cognition, on machine intelligence, and on the relation between provability and truth. We criticise widespread and influential misunderstandings of the Church–Turing thesis and of the halting theorem. We also explore the idea of hypercomputation, outlining a number of notional machines that compute the uncomputable.
The increased interactivity and connectivity of computational devices, along with the spreading of computational tools and computational thinking across the fields, has changed our understanding of the nature of computing. In the course of this development computing models have been extended from the initial abstract symbol-manipulating mechanisms of stand-alone, discrete sequential machines to the models of natural computing in the physical world, generally concurrent asynchronous processes capable of modelling living systems, their informational structures and dynamics on both symbolic and sub-symbolic information processing levels. The present account of models of computation highlights several topics of importance for the development of a new understanding of computing and its role: natural computation and the relationship between the model and physical implementation, interactivity as fundamental for computational modelling of concurrent information processing systems such as living organisms and their networks, and the new developments in logic needed to support this generalized framework. Computing understood as information processing is closely related to natural sciences; it helps us recognize connections between sciences, and provides a unified approach for modelling and simulating both living and non-living systems.
Recent work on hypercomputation has raised new objections against the Church–Turing Thesis. In this paper, I focus on the challenge posed by a particular kind of hypercomputer, namely, SAD computers. I first consider deterministic and probabilistic barriers to the physical possibility of SAD computation. These suggest several ways to defend a Physical version of the Church–Turing Thesis. I then argue against Hogarth's analogy between non-Turing computability and non-Euclidean geometry, showing that it is a non-sequitur. I conclude that the Effective version of the Church–Turing Thesis is unaffected by SAD computation.
We generalize ordinary register machines on natural numbers to machines whose registers contain arbitrary ordinals. Ordinal register machines are able to compute a recursive bounded truth predicate on the ordinals. The class of sets of ordinals which can be read off the truth predicate satisfies a natural theory SO. SO is the theory of the sets of ordinals in a model of the Zermelo-Fraenkel axioms ZFC. This allows the following characterization of computable sets: a set of ordinals is ordinal register computable if and only if it is an element of Gödel’s constructible universe L.
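In symbols, the closing characterization reads
\[
x \subseteq \mathrm{Ord} \text{ is ordinal register computable} \iff x \in L,
\]
where $L$ is Gödel's constructible universe.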
This paper develops my (BJPS 2009) criticisms of the philosophical significance of a certain sort of infinitary computational process, a hyperloop. I start by considering whether hyperloops suggest that "effectively computable" is vague (in some sense). I then consider and criticise two arguments by Hogarth, who maintains that hyperloops undermine the very idea of effective computability. I conclude that hyperloops, on their own, cannot threaten the notion of an effective procedure.
Infinite time register machines (ITRMs) are register machines which act on natural numbers and which are allowed to run for arbitrarily many ordinal steps. Successor steps are determined by standard register machine commands. At limit times register contents are defined by appropriate limit operations. In this paper, we examine the ITRMs introduced by the third and fourth author (Koepke and Miller in Logic and Theory of Algorithms LNCS, pp. 306–315, 2008), where a register content at a limit time is set to the lim inf of previous register contents if that limit is finite; otherwise the register is reset to 0. The theory of these machines has several similarities to the infinite time Turing machines (ITTMs) of Hamkins and Lewis. The machines can decide all $\Pi^1_1$ sets, yet are strictly weaker than ITTMs. As in the ITTM situation, we introduce a notion of ITRM-clockable ordinals corresponding to the running times of computations. These form a transitive initial segment of the ordinals. Furthermore we prove a Lost Melody theorem: there is a real r such that there is a program P that halts on the empty input for all oracle contents and outputs 1 iff the oracle number is r, but no program can decide for every natural number n whether or not $n \in r$ with the empty oracle. In an earlier paper, the third author considered another type of machines where registers were not reset at infinite lim inf’s and he called them infinite time register machines. Because the resetting machines correspond much better to ITTMs we hold that in future the resetting register machines should be called ITRMs.
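The limit rule described here can be stated compactly: for a register with contents $R(\alpha)$ at stage $\alpha$ and a limit ordinal $\lambda$,
\[
R(\lambda) =
\begin{cases}
\liminf_{\alpha \to \lambda} R(\alpha) & \text{if this } \liminf \text{ is finite},\\
0 & \text{otherwise (the register is reset).}
\end{cases}
\]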
We first discuss some technical questions which arise in connection with the construction of undecidable propositions in analysis, in particular in connection with the notion of the normal form of a function representing a predicate. Then it is stressed that while a function f(x) may be computable in the sense of recursive function theory, it may nevertheless have undecidable properties in the realm of Fourier analysis. This has an implication for a conjecture of Penrose's which states that classical physics is computable.
Black holes are extremely relativistic objects. Physical processes around them occur in a regime where the gravitational field is extremely intense. Under such conditions, our representations of space, time, gravity, and thermodynamics are pushed to their limits. In such a situation philosophical issues naturally arise. In this chapter I review some philosophical questions related to black holes. In particular, the relevance of black holes for the metaphysical dispute between presentists and eternalists, the origin of the second law of thermodynamics and its relation to black holes, the problem of information, black holes and hypercomputing, the nature of determinism, and the breakdown of predictability in black hole space-times. I maintain that black hole physics can be used to illuminate some important problems on the border between science and philosophy, in both epistemology and ontology.
The origin of my article lies in the appearance of Copeland and Proudfoot's feature article in Scientific American, April 1999. This preposterous paper, as described on another page, suggested that Turing was the prophet of 'hypercomputation'. In their references, the authors listed Copeland's entry on 'The Church-Turing thesis' in the Stanford Encyclopedia. In the summer of 1999, I circulated an open letter criticising the Scientific American article. I included criticism of this Encyclopedia entry. This was forwarded to Prof. Ed Zalta, editor of the Encyclopedia, and after some discussion he invited me to submit an entry on 'Alan Turing'.
This article shows that Rabbi Pinchas Elijah Hurwitz, a major eighteenth-century kabbalist, Orthodox rabbi and Enlightenment thinker, who merged Lurianic Kabbalah with Kantian philosophy, attempted to describe God and the world in terms of formal grammars and abstract information processes. He resolves a number of Kant's dualistic views by introducing prophecy as a tool that allows a mystic's mind to perform transfinite hypercomputation and to obtain a priori knowledge about things usually known only a posteriori. According to Hurwitz, reality consists of Divine names, which generate an infinite network of recursive string rewriting systems, some of which are identical to what is known today as Lindenmayer systems. Hurwitz is also one of the first thinkers who raised questions about non-human and artificial intelligence.
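Since the abstract appeals to Lindenmayer systems, a minimal illustration of such a parallel string-rewriting system may help; the alphabet and rules below are Lindenmayer's classic 'algae' example, chosen purely for illustration and not taken from Hurwitz.

```python
# Minimal Lindenmayer (L-) system: at each step, every symbol of the
# current string is rewritten simultaneously according to the rules.

RULES = {"A": "AB", "B": "A"}   # production rules (illustrative)
AXIOM = "A"                     # starting string

def step(s: str) -> str:
    """Apply all productions in parallel; symbols without a rule are kept."""
    return "".join(RULES.get(ch, ch) for ch in s)

s = AXIOM
for generation in range(6):
    print(generation, s)
    s = step(s)
# String lengths grow as the Fibonacci numbers: 1, 2, 3, 5, 8, 13, ...
```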
In the paper I discuss the importance of relativistic hypercomputation for the philosophy of mathematics, in particular for our understanding of mathematical knowledge. I also discuss the problem of the explanatory role of mathematics in physics and argue that relativistic computation fits very well into the so-called programming account. Relativistic computation reveals an interesting interplay between the empirical realm and the realm of very abstract mathematical principles that even exceed standard mathematics, and suggests that such principles might play an explanatory role. I also argue that relativistic computation does not have some of the weaknesses of other hypercomputational models, thus it is particularly attractive for the philosophy of mathematics.
'Computationalism' is a relatively vague term used to describe attempts to apply Turing's model of computation to phenomena outside its original purview: in modelling the human mind, in physics, mathematics, etc. Early versions of computationalism faced strong objections from many (and varied) quarters, from philosophers to practitioners of the aforementioned disciplines. Here we will not address the fundamental question of whether computational models are appropriate for describing some or all of the wide range of processes that they have been applied to, but will focus instead on whether 'renovated' versions of the 'new computationalism' shed any new light on or resolve previous tensions between proponents and skeptics. We find this, however, not to be the case, because the 'new computationalism' falls short by using limited versions of 'traditional computation', or proposing computational models that easily fall within the scope of Turing's original model, or else proffering versions of hypercomputation with its many pitfalls.
One of us has previously argued that the Church-Turing Thesis (CTT), contra Elliot Mendelson, is not provable, and is — in light of the mind’s capacity for effortless hypercomputation — moreover false (e.g., [13]). But a new, more serious challenge has appeared on the scene: an attempt by Smith [28] to prove CTT. His case is a clever “squeezing argument” that makes crucial use of Kolmogorov-Uspenskii (KU) machines. The plan for the present paper is as follows. After covering some necessary preliminaries regarding the nature of CTT, and taking note of the fact that this thesis is “intrinsically cognitive” (§2), we: sketch out, for context, an open-minded position on CTT and related matters (§3); explain the formal structure of squeezing arguments (§4); after a review of KU-machines, formalize Smith’s case (§5); give our objections to certain assumptions in Smith’s argument (§6); support these objections with some evidence from general but limited-agent problem solving (§7); and explain why Smith’s argument is inconclusive (§8). We end with some brief, concluding remarks, some of which point toward near-future work that will build on the present paper (§9).
Accelerating Turing machines are Turing machines of a sort able to perform tasks that are commonly regarded as impossible for Turing machines. For example, they can determine whether or not the decimal representation of π contains n consecutive 7s, for any n; solve the Turing-machine halting problem; and decide the predicate calculus. Are accelerating Turing machines, then, logically impossible devices? I argue that they are not. There are implications concerning the nature of effective procedures and the theoretical limits of computability. Contrary to a recent paper by Bringsjord, Bello and Ferrucci, however, the concept of an accelerating Turing machine cannot be used to shore up Searle's Chinese room argument.
One of the most compelling problems in science consists in understanding how living systems process information. After all, the way they process information defines their capacities for learning and adaptation. There is an increasing consensus that living systems are not machines in any sense. Biological hypercomputation is the concept coined to express that living beings process information non-algorithmically. This paper aims at providing a positive understanding of “non-algorithmic” processes. Many arguments are brought forward to support the claim. This fosters, it is argued, a brand-new understanding of information processing among living beings.
Two different types of analog computations are discussed in the paper: 1) analog-continuous computations, 2) analog-analogical computations. They are analyzed with regard to questions such as: a) are continuous computations physically implementable? b) what is the actual computational power of different analog techniques? c) can natural computations be as reliable as digital ones? d) is it possible to develop universal analog computers? The analyses presented are methodological rather than formal.
The article analyses the role of Church’s Thesis (CT) in the context of the development of hypercomputation research. The text begins by presenting various views on the essence of computer science and the limitations of its methods. Then CT and its importance in determining the limits of methods used by computer science are presented. Based on the above explanations, the work goes on to characterize various proposals of hypercomputation, showing their relative power in relation to the arithmetical hierarchy. The general theme of the article is the analysis of mutual relations between the content of CT and the theories of hypercomputation. In the main part of the paper the arguments for the abolition of CT prompted by the introduction of hypercomputable methods in computer science are presented and critiqued. The role of the efficiency condition contained in the formulation of CT is stressed. The discussion ends with a summary defending the current status of Church’s Thesis within the framework of philosophy and computer science as an important point of reference for determining what the notion of effective calculability really is. The considerations included in this article seem to be quite up-to-date relative to the current state of affairs in computer science.
What are the limits of physical computation? In his ‘Church’s Thesis and Principles for Mechanisms’, Turing’s student Robin Gandy proved that any machine satisfying four idealised physical ‘principles’ is equivalent to some Turing machine. Gandy’s four principles in effect define a class of computing machines (‘Gandy machines’). Our question is: What is the relationship of this class to the class of all (ideal) physical computing machines? Gandy himself suggests that the relationship is identity. We do not share this view. We will point to interesting examples of (ideal) physical machines that fall outside the class of Gandy machines and compute functions that are not Turing-machine computable.
Certain enterprises at the fringes of science, such as intelligent design creationism, claim to identify phenomena that go beyond not just our present physics but any possible physical explanation. Asking what it would take for such a claim to succeed, we introduce a version of physicalism that formulates the proposition that all available data sets are best explained by combinations of “chance and necessity”—algorithmic rules and randomness. Physicalism would then be violated by the existence of oracles that produce certain kinds of noncomputable functions. Examining how a candidate for such an oracle would be evaluated leads to questions that do not admit an easy resolution. Since we lack any plausible candidate for any such oracle, however, chance-and-necessity physicalism appears very likely to be correct.