The semantic view of computation is the claim that semantic properties play an essential role in the individuation of physical computing systems such as laptops and brains. The main argument for the semantic view rests on the fact that some physical systems implement different automata at the same time, in the same space, and even in the very same physical properties. Recently, several authors have challenged this argument. They accept the premise of simultaneous implementation but reject the semantic conclusion. In this paper, I aim to explicate the semantic view and to address these objections. I first characterize the semantic view and distinguish it from other, closely related views. Then, I contend that the master argument for the semantic view survives the counter-arguments against it. One counter-argument is that computational individuation is not forced to choose between the implemented automata but rather always picks out a more basic computational structure. My response is that this move might undermine the notion of computational equivalence. Another counter-argument is that while computational individuation is forced to rely on extrinsic features, these features need not be semantic. My reply is that the semantic view better accounts for these extrinsic features than the proposed non-semantic alternatives.
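The simultaneous-implementation premise can be illustrated with a toy tri-stable gate (the construction below is a sketch for illustration, not the paper's own example): one physical input-output profile supports two different voltage-to-digit labelings, and hence two different implemented automata.

```python
# One physical gate, two computational identities. Voltage levels and the
# gate's behaviour are illustrative assumptions.
# Physical behaviour: output is HIGH iff both inputs are HIGH,
# LOW iff both inputs are LOW, and MID otherwise.
LOW, MID, HIGH = 0.0, 2.5, 5.0

def physical_gate(v1, v2):
    if v1 == HIGH and v2 == HIGH:
        return HIGH
    if v1 == LOW and v2 == LOW:
        return LOW
    return MID

# Labeling A groups {LOW, MID} as digit 0 and {HIGH} as digit 1.
label_a = lambda v: 1 if v == HIGH else 0
# Labeling B groups {LOW} as digit 0 and {MID, HIGH} as digit 1.
label_b = lambda v: 0 if v == LOW else 1

levels = [LOW, MID, HIGH]
# Under labeling A the very same device computes AND ...
assert all(label_a(physical_gate(x, y)) == (label_a(x) & label_a(y))
           for x in levels for y in levels)
# ... and under labeling B it computes OR.
assert all(label_b(physical_gate(x, y)) == (label_b(x) | label_b(y))
           for x in levels for y in levels)
```

The same voltages, in the same space and at the same time, thus implement both an AND-automaton and an OR-automaton; what is at issue between the semantic view and its critics is what, if anything, privileges one labeling over the other.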
Are all three of Marr's levels needed? Should they be kept distinct? We argue for the distinct contributions and methodologies of each level of analysis. It is important to maintain them because they provide three different perspectives required to understand mechanisms, especially information-processing mechanisms. The computational perspective provides an understanding of how a mechanism functions in a broader environment, which determines the computations it needs to perform. The representational and algorithmic perspective offers an understanding of how information about the environment is encoded within the mechanism and what patterns of organization enable the parts of the mechanism to produce the phenomenon. The implementation perspective yields an understanding of the neural details of the mechanism and how they constrain function and algorithms. Once we adequately characterize the distinct role of each level of analysis, it is fairly straightforward to see how they relate.
The view that the brain is a sort of computer has functioned as a theoretical guideline both in cognitive science and, more recently, in neuroscience. But since we can view every physical system as a computer, it has been less than clear what this view amounts to. By considering in some detail a seminal study in computational neuroscience, I first suggest that neuroscientists invoke the computational outlook to explain regularities that are formulated in terms of the information content of electrical signals. I then indicate why computational theories have explanatory force with respect to these regularities: in a nutshell, they underscore correspondence relations between formal/mathematical properties of the electrical signals and formal/mathematical properties of the represented objects. I finally link my proposal to the philosophical thesis that content plays an essential role in computational taxonomy.
According to Marr, a computational-level theory consists of two elements, the what and the why. This article highlights the distinct role of the Why element in the computational analysis of vision. Three theses are advanced: (a) that the Why element plays an explanatory role in computational-level theories, (b) that its goal is to explain why the computed function (specified by the What element) is appropriate for a given visual task, and (c) that the explanation consists in showing that the functional relations between the representing cells are similar to the “external” mathematical relations between the entities that these cells represent. Received September 2009; revised January 2010.
In Representation Reconsidered, William Ramsey suggests that the notion of structural representation is posited by classical theories of cognition, but not by the ‘newer accounts’ (e.g. connectionist modeling). I challenge the assertion about the newer accounts. I argue that the newer accounts also posit structural representations; in fact, the notion plays a key theoretical role in the current computational approaches in cognitive neuroscience. The argument rests on a close examination of computational work on the oculomotor system.
The paper presents an extended argument for the claim that mental content impacts the computational individuation of a cognitive system (section 2). The argument starts with the observation that a cognitive system may simultaneously implement a variety of different syntactic structures, but that the computational identity of a cognitive system is given by only one of these implemented syntactic structures. It is then asked which features determine which of the implemented syntactic structures is the computational structure of the system, and it is contended that these features are certain aspects of mental content. The argument helps (section 3) to reassess the thesis known as computational externalism, namely, the thesis that computational theories of cognition make essential reference to features in the individual's environment. It is suggested that the familiar arguments for computational externalism, which rest on thought experiments and on exegesis of Marr's theories of vision, are unconvincing, but that they can be improved. A reconstruction of the visex/audex thought experiment is offered in section 3.1. An outline of a novel interpretation of Marr's theories of vision is presented in section 3.2. The corrected arguments support the claim that computational theories of cognition are intentional. Computational externalism still hinges, however, on the thesis that psychological content is extrinsic.
Computational neuroscientists not only employ computer models and simulations in studying brain functions. They also view the modeled nervous system itself as computing. What does it mean to say that the brain computes? And what is the utility of the ‘brain-as-computer’ assumption in studying brain functions? In previous work, I have argued that a structural conception of computation is not adequate to address these questions. Here I outline an alternative conception of computation, which I call the analog-model. The term ‘analog-model’ does not mean continuous, non-discrete or non-digital. It means that the functional performance of the system simulates the mathematical relations holding, in some other system, between the entities being represented. The brain-as-computer view is invoked to demonstrate that the internal cellular activity is appropriate for the pertinent information-processing task. Keywords: Computation; Computational neuroscience; Analog computers; Representation; Simulation.
Theodore Sider distinguishes two notions of global supervenience: strong global supervenience and weak global supervenience. He then discusses some applications to general metaphysical questions. Most interestingly, Sider employs the weak notion in order to undermine a familiar argument against coincident distinct entities. In what follows, I reexamine the two notions and distinguish them from a third, intermediate, notion. I argue that weak global supervenience is not an adequate notion of dependence; weak global supervenience does not capture certain assumptions about coincidence relations; these assumptions are better accommodated by the stronger notion of intermediate global supervenience; intermediate global supervenience, however, is also not an adequate notion of dependence; and strong global supervenience is an adequate notion of dependence. It also fits in with anti-individualism about the mental. It does not, however, serve to rebut arguments against coincident entities.
It is often indeterminate what function a given computational system computes. This phenomenon has been referred to as “computational indeterminacy” or “multiplicity of computations.” In this paper, we argue that what has typically been considered and referred to as the challenge of computational indeterminacy in fact subsumes two distinct phenomena, which have typically been bundled together and should be teased apart. One kind of indeterminacy concerns a functional characterization of the system's relevant behavior. Another kind concerns the manner in which the abstract states are interpreted. We discuss the similarities and differences between the two kinds of computational indeterminacy, their implications for certain accounts of “computational individuation” in the literature, and their relevance to different levels of description within the computational system. We also examine the inter-relationships between our proposed accounts of the two kinds of indeterminacy and the main accounts of “computational implementation.”
It is generally accepted that, in the cognitive and neural sciences, there are both computational and mechanistic explanations. We ask how computational explanations can integrate into the mechanistic hierarchy. The problem stems from the fact that implementation and mechanistic relations have different forms. The implementation relation, from the states of an abstract computational system to the physical, implementing states, is a homomorphism. The mechanistic relation, however, is that of part/whole; the explaining features in a mechanistic explanation are the components of the explanandum phenomenon and their causal organization. Moreover, each component in one level of mechanism is constituted and explained by components of an underlying level of mechanism. Hence, it seems, computational variables and functions cannot be mechanistically explained by the medium-dependent states and properties that implement them. How then, do the computational and the implementational integrate to create the mechanistic hierarchy? After explicating the general problem, we further demonstrate it through a concrete example, of reinforcement learning, in the cognitive and neural sciences. We then examine two possible solutions. On one solution, the mechanistic hierarchy embeds at the same levels computational and implementational properties. This picture fits with the view that computational explanations are mechanistic sketches. On the other solution, there are two separate hierarchies, one computational and another implementational, which are related by the implementation relation. This picture fits with the view that computational explanations are functional and autonomous explanations. It is less clear how these solutions fit with the view that computational explanations are full-fledged mechanistic explanations.
Finally, we argue that both pictures are consistent with the reinforcement learning example, but that scientific practice does not align with the view that computational models are merely mechanistic sketches.
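The reinforcement-learning example mentioned above can be given a minimal sketch: the temporal-difference update below is a standard computational-level description (parameter values here are illustrative), while dopaminergic reward-prediction-error signaling is the implementation-level counterpart usually paired with it.

```python
# A minimal temporal-difference (TD) value update: a computational-level
# description of learning, standardly paired with dopaminergic
# prediction-error signals at the implementation level.
# alpha (learning rate) and gamma (discount factor) are illustrative.
def td_update(value, reward, next_value, alpha=0.1, gamma=0.9):
    """Return the updated value estimate and the prediction error delta."""
    delta = reward + gamma * next_value - value  # reward-prediction error
    return value + alpha * delta, delta

# One rewarded trial moves the value estimate toward the reward:
v, delta = td_update(value=0.0, reward=1.0, next_value=0.0)
```

The question raised in the abstract is precisely how variables like `value` and `delta` relate to the neural components that realize them: as sketches of one mechanistic hierarchy, or as a separate hierarchy linked by implementation.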
A key component of scientific inquiry, especially inquiry devoted to developing mechanistic explanations, is delineating the phenomenon to be explained. The task of delineating phenomena, however, has not been sufficiently analyzed, even by the new mechanistic philosophers of science. We contend that Marr's characterization of what he called the computational level provides a valuable resource for understanding what is involved in delineating phenomena. Unfortunately, the distinctive feature of Marr's computational level, his dual emphasis on both what is computed and why it is computed, has not been appreciated in philosophical discussions of Marr. Accordingly, we offer a distinctive account of the computational level (CL). This then allows us to develop two important points about delineating phenomena. First, the accounts of phenomena that figure in explanatory practice are typically not qualitative but precise, formal or mathematical representations. Second, delineating phenomena requires consideration of the demands the environment places on the mechanism, identifying, as Marr put it, the basis of the computed function in the world. As valuable as Marr's account of CL is in characterizing phenomena, we contend that ultimately he did not go far enough. Determining the relevant demands of the environment on the mechanism often requires detailed empirical investigation. Moreover, often phenomena are reconstituted in the course of inquiry on the mechanism itself.
What does it mean to say that an object or system computes? What is it about laptops, smartphones, and nervous systems that they are considered to compute, and why does it seldom occur to us to describe stomachs, hurricanes, rocks, or chairs that way? Though computing systems are everywhere today, it is very difficult to answer these questions. The book aims to shed light on the subject by arguing for the semantic view of computation, which states that computing systems are always accompanied by representations. This view is presented as an alternative to non-semantic views such as the mechanistic account of computation.
We describe a possible physical device that computes a function that cannot be computed by a Turing machine. The device is physical in the sense that it is compatible with General Relativity. We discuss some objections, focusing on those which deny that the device is either a computer or computes a function that is not Turing computable. Finally, we argue that the existence of the device does not refute the Church–Turing thesis, but nevertheless may be a counterexample to Gandy's thesis.
An underlying assumption in computational approaches in the cognitive and brain sciences is that the nervous system is an input–output model of the world: its input–output functions mirror certain relations in the target domains. I argue that the input–output modelling assumption plays distinct methodological and explanatory roles. Methodologically, input–output modelling serves to discover the computed function from environmental cues. Explanatorily, input–output modelling serves to account for the appropriateness of the computed function to the explanandum information-processing task. I then briefly compare the modelling explanation to mechanistic and optimality explanations, noting that in both cases the explanations can be seen as complementary rather than contrastive or competing.
Accelerating Turing machines have attracted much attention in the last decade or so. They have been described as “the work-horse of hypercomputation”. But do they really compute beyond the “Turing limit”—e.g., compute the halting function? We argue that the answer depends on what you mean by an accelerating Turing machine, on what you mean by computation, and even on what you mean by a Turing machine. We show first that in the current literature the term “accelerating Turing machine” is used to refer to two very different species of accelerating machine, which we call end-stage-in and end-stage-out machines, respectively. We argue that end-stage-in accelerating machines are not Turing machines at all. We then present two differing conceptions of computation, the internal and the external, and introduce the notion of an epistemic embedding of a computation. We argue that no accelerating Turing machine computes the halting function in the internal sense. Finally, we distinguish between two very different conceptions of the Turing machine, the purist conception and the realist conception; and we argue that Turing himself was no subscriber to the purist conception. We conclude that under the realist conception, but not under the purist conception, an accelerating Turing machine is able to compute the halting function in the external sense. We adopt a relatively informal approach throughout, since we take the key issues to be philosophical rather than mathematical.
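The accelerating-machine timetable behind this debate can be sketched numerically (a standard toy schedule, not tied to any particular formalization in the paper): step k takes 2^-(k+1) time units, so every one of infinitely many steps begins before time 1, yet no single step fixes the machine's state at time 1.

```python
# Accelerating-machine schedule: step k (k = 0, 1, 2, ...) takes
# 2 ** -(k + 1) time units, so n steps take 1 - 2 ** -n units in total.
def elapsed_after(n_steps):
    return sum(2.0 ** -(k + 1) for k in range(n_steps))

# Every finite prefix of the run ends strictly before time 1; the
# configuration *at* time 1 (the "end stage") is determined by no single
# step, which is where end-stage-in and end-stage-out machines come apart.
assert elapsed_after(10) < 1.0
assert elapsed_after(50) < 1.0
```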
The mechanistic view of computation contends that computational explanations are mechanistic explanations. Mechanists, however, disagree about the precise role that the environment – or the so-called “contextual level” – plays for computational explanations. We advance here two claims: (1) contextual factors essentially determine the computational identity of a computing system; this means that specifying the “intrinsic” mechanism is not sufficient to fix the computational identity of the system; (2) it is not necessary to specify the causal-mechanistic interaction between the system and its context in order to offer a complete and adequate computational explanation. While the first claim has been discussed before, the second has been practically ignored. After supporting these claims, we discuss the implications of our contextualist view for the mechanistic view of computational explanation. Our aim is to show that some versions of the mechanistic view are consistent with the contextualist view, whilst others are not.
Putnam (Representations and reality. MIT Press, Cambridge, 1988) and Searle (The rediscovery of the mind. MIT Press, Cambridge, 1992) famously argue that almost every physical system implements every finite computation. This universal implementation claim, if correct, puts certain functional and computational views of the mind at risk of triviality. Several authors have offered theories of implementation that allegedly avoid the pitfalls of universal implementation. My aim in this paper is to suggest that these theories are still consistent with a weaker result, namely the nomological possibility of systems that simultaneously implement different complex automata. Elsewhere (Shagrir in J Cogn Sci, 2012) I argue that this simultaneous implementation result challenges a computational sufficiency thesis (articulated by Chalmers in J Cogn Sci, 2012). My focus here is on theories of implementation. After presenting the basic simultaneous implementation construction, I argue that these theories do not avoid the simultaneous implementation result. The conclusion is that the idea that the implementation of the right kind of automaton suffices for the possession of a mind is dubious.
The paper criticizes standard functionalist arguments for multiple realization. It focuses on arguments in which psychological states are conceived as computational, which is precisely where the multiple realization doctrine has seemed the strongest. It is argued that a type-type identity thesis between computational states and physical states is no less plausible than a multiple realization thesis. The paper also presents, more tentatively, positive arguments for a picture of local reduction.
This paper challenges two orthodox theses: (a) that computational processes must be algorithmic; and (b) that all computed functions must be Turing-computable. Section 2 advances the claim that the work in computability theory, including Turing's analysis of the effectively computable functions, does not substantiate the two theses. It is then shown (Section 3) that we can describe a system that computes a number-theoretic function which is not Turing-computable. The argument against the first thesis proceeds in two stages. It is first shown (Section 4) that whether a process is algorithmic depends on the way we describe the process. It is then argued (Section 5) that systems compute even if their processes are not described as algorithmic. The paper concludes with a suggestion for a semantic approach to computation.
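The stock example of a number-theoretic function beyond the Turing limit is the halting function, and the familiar diagonal argument for its uncomputability can be sketched as follows (`halts` is a hypothetical total decider introduced only for illustration; no real implementation exists):

```python
# Diagonalization sketch: if a total decider halts(program, input) existed,
# we could build a program d that halts exactly when it does not halt,
# so no such decider exists. `halts` is hypothetical.
def diagonal(halts):
    # The diagonal program d: running d on (a code for) d itself would
    # halt iff it does not halt -- a contradiction.
    def d(program):
        if halts(program, program):  # if `program` halts on itself ...
            while True:              # ... loop forever;
                pass
        return 0                     # ... otherwise, halt.
    return d
```

Any stand-in for `halts` (here, one that always answers "does not halt") is necessarily unsound on some input; the construction only shows the shape of the contradiction.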
What are the limits of physical computation? In his ‘Church’s Thesis and Principles for Mechanisms’, Turing’s student Robin Gandy proved that any machine satisfying four idealised physical ‘principles’ is equivalent to some Turing machine. Gandy’s four principles in effect define a class of computing machines (‘Gandy machines’). Our question is: What is the relationship of this class to the class of all (ideal) physical computing machines? Gandy himself suggests that the relationship is identity. We do not share this view. We will point to interesting examples of (ideal) physical machines that fall outside the class of Gandy machines and compute functions that are not Turing-machine computable.
Over the last three decades a vast literature has been dedicated to supervenience. Much of it has focused on the analysis of different concepts of supervenience and their philosophical consequences. This paper has two objectives. One is to provide a short, up-to-date guide to the formal relations between the different concepts of supervenience. The other is to reassess the extent to which these concepts can establish metaphysical theses, especially about dependence. The conclusion is that strong global supervenience is the most advantageous notion of supervenience that we have.
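The modal formulations being compared can be stated compactly (a standard Kim-style rendering, with A the supervening family of properties and B the base family; exact formulations vary across the literature):

```latex
% Strong (individual) supervenience: A strongly supervenes on B iff
\forall F{\in}A\;\Box\,\forall x\,\bigl[Fx \rightarrow
  \exists G{\in}B\,\bigl(Gx \wedge \Box\,\forall y\,(Gy \rightarrow Fy)\bigr)\bigr]
% Weak (individual) supervenience drops the inner necessity operator:
\forall F{\in}A\;\Box\,\forall x\,\bigl[Fx \rightarrow
  \exists G{\in}B\,\bigl(Gx \wedge \forall y\,(Gy \rightarrow Fy)\bigr)\bigr]
% Strong global supervenience: every B-preserving isomorphism between
% the domains of two worlds is also A-preserving; the weak global notion
% requires only that some A-preserving isomorphism exist whenever a
% B-preserving one does.
```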
What is computer science about? CS is obviously the science of computers. But what exactly are computers? We know that there are physical computers and, perhaps, also abstract computers. Let us limit the discussion here to physical entities and ask: What are physical computers? What does it mean for a physical entity to be a computer? The answer, it seems, is that physical computers are physical dynamical systems that implement formal entities such as Turing machines. I do not think that this answer is false. But it invites another, and troubling, question: What distinguishes computers from other physical dynamical systems? The difficulty is that, on the one hand, every physical system implements abstract formal entities such as sets of differential equations, while on the other hand we certainly do not want to count every dynamical system as a computer. After all, if CS is somehow distinctive, then there must be a difference between computers and other systems such as solar systems, stomachs, and carburetors. But what is the difference?
It is generally assumed that everything that can be said about dependence with the notion of strong global supervenience can also be said with the notion of strong supervenience. It is argued here, however, that strong global supervenience has a metaphysically distinctive role to play. It is shown that when the relevant sets include relations, strong global supervenience and strong supervenience are distinct. It is then concluded that there are claims about the dependence of relations that can be made with the global notion of strong supervenience but not with the “local” (individual) one.
Jaegwon Kim contends that global supervenience is consistent with non-materialistic cases. Paull and Sider, Horgan, as well as Kim himself, attempt to defend global supervenience from such charges. It is shown here that their defense is only partially successful. Their defense meets one challenge to global supervenience, the hydrogen-atom case, but fails to meet other, ‘local’, cases. It is suggested that the other challenges can be met if global supervenience is combined with weak supervenience. The combination of global and weak supervenience constitutes a viable picture of psychophysical relations, and is especially attractive to nonreductive materialists who are also anti-individualists.
What does it mean to say that a physical system computes or, specifically, to say that the nervous system computes? One answer, endorsed here, is that computing is a sort of modeling. I trace this line of answer in the conceptual and philosophical work conducted over the last three decades by researchers associated with the University of California, San Diego. The linkage between their work and the modeling notion is no coincidence: the modeling notion aims to account for the computational approach in neuroscience, and UCSD has been home to central studies in neurophilosophy, connectionism, and computational neuroscience.
Computational physical systems may exhibit indeterminacy of computation (IC). Their identified physical dynamics may not suffice to select a unique computational profile. We consider this phenomenon from the point of view of cognitive science and examine how computational profiles of cognitive systems are identified and justified in practice, in the light of IC. To that end, we look at the literature on the underdetermination of theory by evidence and argue that the same devices that can be successfully employed to confirm physical hypotheses can also be used to rationally single out computational profiles, notwithstanding IC.
I explore a Davidsonian proposal for the reconciliation of two theses. One is the supervenience of the mental on the physical; the other is the anomalism of the mental. The gist of the proposal is that supervenience and anomalism are theses about interpretation. Starting with supervenience, the claim is that it should not be understood in terms of deeper metaphysical relations, but as a constraint on the relations between the applications of physical and mental predicates. Regarding anomalism, the claim is that psychophysical laws would have to hold in certain counterfactual cases, in which an interpreter evaluates her past attributions in the light of new pieces of evidence. The proposed reconciliation is that supervenience entails that an interpreter will always attribute the same mental predicates to two individuals with the same physical states. However, supervenience does not imply that an interpreter cannot revise her past attributions to the two individuals.
The thesis that mental properties are dependent, or supervenient, on physical properties, but this dependence is not lawlike, has been influential in contemporary philosophy of mind. It is put forward explicitly in Donald Davidson's seminal ‘Mental Events.’ On the one hand, Davidson claims that the mental is anomalous, that ‘there are no strict deterministic laws on the basis of which mental events can be predicted and explained’, and, in particular, that there are no strict psychophysical laws. On the other hand, he insists that the mental supervenes on the physical; that ‘mental characteristics are in some sense dependent, or supervenient, on physical characteristics’.
Is the mind/brain a kind of computer? In cognitive science, it is widely believed that cognition is a form of computation: that some physical systems, such as minds/brains, compute appropriate functions, whereas other systems, such as video cameras, stomachs or the weather, do not compute. What makes a physical system a computing system? In my dissertation I first reject the orthodox, Turing-machine-style answer to this question. I argue that the orthodox notion is rooted in a misunderstanding of our pre-theoretic notion of computation and of Turing's characterization of it. I then offer an alternative, semantic theory of computation for physical systems. I suggest that to view a system as a computing system is to identify its processes and states as computational with respect to their semantic relations to external objects. Lastly, I examine the ramifications of my theses about computation for cognitive science. I argue that the level at which we specify psychological processes/mechanisms is defined over semantic, rather than syntactic or algorithmic, types. As a result, I go on to claim that cognitive scientists take semantic properties as those which explain behavior, not those which are in need of explanation.