Gualtiero Piccinini articulates and defends a mechanistic account of concrete, or physical, computation. A physical system is a computing system just in case it is a mechanism one of whose functions is to manipulate vehicles based solely on differences between different portions of the vehicles according to a rule defined over the vehicles. Physical Computation discusses previous accounts of computation and argues that the mechanistic account is better. Many kinds of computation are explicated, such as digital vs. analog, serial vs. parallel, neural network computation, program-controlled computation, and more. Piccinini argues that computation does not entail representation or information processing although information processing entails computation. Pancomputationalism, according to which every physical system is computational, is rejected. A modest version of the physical Church-Turing thesis, according to which any function that is physically computable is computable by Turing machines, is defended.
We sketch a framework for building a unified science of cognition. This unification is achieved by showing how functional analyses of cognitive capacities can be integrated with the multilevel mechanistic explanations of neural systems. The core idea is that functional analyses are sketches of mechanisms, in which some structural aspects of a mechanistic explanation are omitted. Once the missing aspects are filled in, a functional analysis turns into a full-blown mechanistic explanation. By this process, functional analyses are seamlessly integrated with multilevel mechanistic explanations.
We outline a framework of multilevel neurocognitive mechanisms that incorporates representation and computation. We argue that paradigmatic explanations in cognitive neuroscience fit this framework and thus that cognitive neuroscience constitutes a revolutionary break from traditional cognitive science. Whereas traditional cognitive scientific explanations were supposed to be distinct and autonomous from mechanistic explanations, neurocognitive explanations aim to be mechanistic through and through. Neurocognitive explanations aim to integrate computational and representational functions and structures across multiple levels of organization in order to explain cognition. To a large extent, practicing cognitive neuroscientists have already accepted this shift, but philosophical theory has not fully acknowledged and appreciated its significance. As a result, the explanatory framework underlying cognitive neuroscience has remained largely implicit. We explicate this framework and demonstrate its contrast with previous approaches.
Gualtiero Piccinini presents a systematic and rigorous philosophical defence of the computational theory of cognition. His view posits that cognition involves neural computation within multilevel neurocognitive mechanisms, and includes novel ideas about ontology, functions, neural representation, neural computation, and consciousness.
The received view is that computational states are individuated at least in part by their semantic properties. I offer an alternative, according to which computational states are individuated by their functional properties. Functional properties are specified by a mechanistic explanation without appealing to any semantic properties. The primary purpose of this paper is to formulate the alternative view of computational individuation, point out that it supports a robust notion of computational explanation, and defend it on the grounds of how computational states are individuated within computability theory and computer science. A secondary purpose is to show that existing arguments for the semantic view are defective.
We begin by distinguishing computationalism from a number of other theses that are sometimes conflated with it. We also distinguish between several important kinds of computation: computation in a generic sense, digital computation, and analog computation. Then, we defend a weak version of computationalism—neural processes are computations in the generic sense. After that, we reject on empirical grounds the common assimilation of neural computation to either analog or digital computation, concluding that neural computation is sui generis. Analog computation requires continuous signals; digital computation requires strings of digits. But current neuroscientific evidence indicates that typical neural signals, such as spike trains, are graded like continuous signals but are constituted by discrete functional elements (spikes); thus, typical neural signals are neither continuous signals nor strings of digits. It follows that neural computation is sui generis. Finally, we highlight three important consequences of a proper understanding of neural computation for the theory of cognition. First, understanding neural computation requires a specially designed mathematical theory (or theories) rather than the mathematical theories of analog or digital computation. Second, several popular views about neural computation turn out to be incorrect. Third, computational theories of cognition that rely on non-neural notions of computation ought to be replaced or reinterpreted in terms of neural computation.
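The contrast drawn here can be made concrete with a minimal sketch in Python; the values and variable names below are purely illustrative and are not drawn from the paper. An analog signal is defined at every instant of continuous time, a digital string is a sequence of symbols from a finite alphabet, and a spike train is a set of all-or-none events whose timing and rate vary along a continuum.

```python
import math

# Analog signal: a continuous function of time (illustrative 3 Hz sine wave).
def analog_signal(t):
    return math.sin(2 * math.pi * 3.0 * t)

# Digital string: a finite sequence of digits from a fixed alphabet.
digital_string = "0110100111"

# Spike train: discrete, all-or-none events whose timing is graded,
# i.e., spike times can fall anywhere on a continuum.
spike_times = [0.0132, 0.0410, 0.0987, 0.1123, 0.2045]  # seconds

# A graded summary of the spike train: mean firing rate over a window.
window = 0.25  # seconds
print(f"{len(spike_times) / window:.1f} spikes per second")
```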
Computation and information processing are among the most fundamental notions in cognitive science. They are also among the most imprecisely discussed. Many cognitive scientists take it for granted that cognition involves computation, information processing, or both – although others disagree vehemently. Yet different cognitive scientists use ‘computation’ and ‘information processing’ to mean different things, sometimes without realizing that they do. In addition, computation and information processing are surrounded by several myths; first and foremost, that they are the same thing. In this paper, we address this unsatisfactory state of affairs by presenting a general and theory-neutral account of computation and information processing. We also apply our framework by analyzing the relations between computation and information processing on one hand and classicism and connectionism on the other. We defend the relevance to cognitive science of both computation, in a generic sense that we fully articulate for the first time, and information processing, in three important senses of the term. Our account advances some foundational debates in cognitive science by untangling some of their conceptual knots in a theory-neutral way. By leveling the playing field, we pave the way for the future resolution of the debates’ empirical aspects.
This paper offers an account of what it is for a physical system to be a computing mechanism—a system that performs computations. A computing mechanism is a mechanism whose function is to generate output strings from input strings and (possibly) internal states, in accordance with a general rule that applies to all relevant strings and depends on the input strings and (possibly) internal states for its application. This account is motivated by reasons endogenous to the philosophy of computing, namely, doing justice to the practices of computer scientists and computability theorists. It is also an application of recent literature on mechanisms, because it assimilates computational explanation to mechanistic explanation. The account can be used to individuate computing mechanisms and the functions they compute and to taxonomize computing mechanisms based on their computing power.
The historical debate on representation in cognitive science and neuroscience construes representations as theoretical posits and discusses the degree to which we have reason to posit them. We reject the premise of that debate. We argue that experimental neuroscientists routinely observe and manipulate neural representations in their laboratory. Therefore, neural representations are as real as neurons, action potentials, or any other well-established entities in our ontology.
Defending or attacking either functionalism or computationalism requires clarity on what they amount to and what evidence counts for or against them. My goal here is not to evaluate their plausibility. My goal is to formulate them and their relationship clearly enough that we can determine which type of evidence is relevant to them. I aim to dispel some sources of confusion that surround functionalism and computationalism, recruit recent philosophical work on mechanisms and computation to shed light on them, and clarify how functionalism and computationalism may or may not legitimately come together.
Since the cognitive revolution, it has become commonplace that cognition involves both computation and information processing. Is this one claim or two? Is computation the same as information processing? The two terms are often used interchangeably, but this usage masks important differences. In this paper, we distinguish information processing from computation and examine some of their mutual relations, shedding light on the role each can play in a theory of cognition. We recommend that theorists of cognition be explicit and careful in choosing notions of computation and information and connecting them together. Keywords: Computation; Information processing; Computationalism; Computational theory of mind; Cognitivism.
We situate the debate on intentionality within the rise of cognitive neuroscience and argue that cognitive neuroscience can explain intentionality. We discuss the explanatory significance of ascribing intentionality to representations. At first, we focus on views that attempt to render such ascriptions naturalistic by construing them in a deflationary or merely pragmatic way. We then contrast these views with staunchly realist views that attempt to naturalize intentionality by developing theories of content for representations in terms of information and biological function. We echo several other philosophers by arguing that these theories over-generalize unless they are constrained by a theory of the functional role of representational vehicles. This leads to a discussion of the functional roles of representations, and how representations might be realized in the brain. We argue that there’s work to be done to identify a distinctively mental kind of representation. We close by sketching a way forward for the project of naturalizing intentionality. This will not be achieved simply by ascribing the content of mental states to generic neural representations, but by identifying specific neural representations that explain the puzzling intentional properties of mental states.
According to the Veridicality Thesis, information requires truth. On this view, smoke carries information about there being a fire only if there is a fire, the proposition that the earth has two moons carries information about the earth having two moons only if the earth has two moons, and so on. We reject this Veridicality Thesis. We argue that the main notions of information used in cognitive science and computer science allow A to have information about the obtaining of p even when p is false.
We provide an explicit taxonomy of legitimate kinds of abstraction within constitutive explanation. We argue that abstraction is an inherent aspect of adequate mechanistic explanation. Mechanistic explanations—even ideally complete ones—typically involve many kinds of abstraction and therefore do not require maximal detail. Some kinds of abstraction play the ontic role of identifying the specific complex components, subsets of causal powers, and organizational relations that produce a suitably general phenomenon. Therefore, abstract constitutive explanations are both legitimate and mechanistic.
A common presupposition in the concepts literature is that concepts constitute a singular natural kind. If, on the contrary, concepts split into more than one kind, this literature needs to be recast in terms of other kinds of mental representation. We offer two new arguments that concepts, in fact, divide into different kinds: (a) concepts split because different kinds of mental representation, processed independently, must be posited to explain different sets of relevant phenomena; (b) concepts split because different kinds of mental representation, processed independently, must be posited to explain responses to different kinds of category. Whether these arguments are sound remains an open empirical question, to be resolved by future empirical and theoretical work.
According to pancomputationalism, everything is a computing system. In this paper, I distinguish between different varieties of pancomputationalism. I find that although some varieties are more plausible than others, only the strongest variety is relevant to the philosophy of mind, but only the most trivial varieties are true. As a side effect of this exercise, I offer a clarified distinction between computational modelling and computational explanation.
Some philosophers have conflated functionalism and computationalism. I reconstruct how this came about and uncover two assumptions that made the conflation possible. They are the assumptions that (i) psychological functional analyses are computational descriptions and (ii) everything may be described as performing computations. I argue that, if we want to improve our understanding of both the metaphysics of mental states and the functional relations between them, we should reject these assumptions.
I offer an explication of the notion of computer, grounded in the practices of computability theorists and computer scientists. I begin by explaining what distinguishes computers from calculators. Then, I offer a systematic taxonomy of kinds of computer, including hard-wired versus programmable, general-purpose versus special-purpose, analog versus digital, and serial versus parallel, giving explicit criteria for each kind. My account is mechanistic: which class a system belongs in, and which functions are computable by which system, depends on the system's mechanistic properties. Finally, I briefly illustrate how my account sheds light on some issues in the history and philosophy of computing as well as the philosophy of mind.
This article defends a modest version of the Physical Church-Turing thesis (CT). Following an established recent trend, I distinguish between what I call Mathematical CT—the thesis supported by the original arguments for CT—and Physical CT. I then distinguish between bold formulations of Physical CT, according to which any physical process—anything doable by a physical system—is computable by a Turing machine, and modest formulations, according to which any function that is computable by a physical system is computable by a Turing machine. I argue that Bold Physical CT is not relevant to the epistemological concerns that motivate CT and hence not suitable as a physical analog of Mathematical CT. The correct physical analog of Mathematical CT is Modest Physical CT. I propose to explicate the notion of physical computability in terms of a usability constraint, according to which for a process to count as relevant to Physical CT, it must be usable by a finite observer to obtain the desired values of a function. Finally, I suggest that proposed counterexamples to Physical CT are still far from falsifying it because they have not been shown to satisfy the usability constraint.
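The bold/modest contrast stated in this abstract can be put schematically. The following is a rough sketch in standard notation; the predicate names are our own shorthand, not the paper's formalism, and "PhysicallyComputable" should be read as restricted by the usability constraint described above.

```latex
% Bold Physical CT: every physical process can be simulated by some Turing machine.
\forall P\,\big(\mathrm{PhysicalProcess}(P) \rightarrow
  \exists M\,(\mathrm{TuringMachine}(M) \wedge M \text{ simulates } P)\big)

% Modest Physical CT: every function computable by a (usable) physical process
% is Turing-computable.
\forall f\,\big(\mathrm{PhysicallyComputable}(f) \rightarrow \mathrm{TuringComputable}(f)\big)
```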
Despite its significance in neuroscience and computation, McCulloch and Pitts's celebrated 1943 paper has received little historical and philosophical attention. In 1943 there already existed a lively community of biophysicists doing mathematical work on neural networks. What was novel in McCulloch and Pitts's paper was their use of logic and computation to understand neural, and thus mental, activity. McCulloch and Pitts's contributions included (i) a formalism whose refinement and generalization led to the notion of finite automata (an important formalism in computability theory), (ii) a technique that inspired the notion of logic design (a fundamental part of modern computer design), (iii) the first use of computation to address the mind–body problem, and (iv) the first modern computational theory of mind and brain.
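The core of the formalism is easy to illustrate. Here is a minimal sketch in Python (the function and parameter names are ours, not McCulloch and Pitts's notation): a unit fires just in case none of its inhibitory inputs is active and the number of active excitatory inputs meets its threshold. Wiring such units together yields the familiar logic gates, which is the bridge to logic design and, once loops are added, to finite automata.

```python
# Minimal sketch of a McCulloch-Pitts-style threshold unit (illustrative only).
def mp_unit(excitatory, inhibitory, threshold):
    """Fire (return 1) iff no inhibitory input is active and
    the number of active excitatory inputs meets the threshold."""
    if any(inhibitory):
        return 0
    return 1 if sum(excitatory) >= threshold else 0

# Logic gates realized as threshold units over binary inputs.
def AND(a, b):
    return mp_unit([a, b], [], threshold=2)

def OR(a, b):
    return mp_unit([a, b], [], threshold=1)

def NOT(a):
    # A unit that fires by default (threshold 0) unless inhibited.
    return mp_unit([], [a], threshold=0)

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "AND:", AND(a, b), "OR:", OR(a, b), "NOT a:", NOT(a))
```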
According to some philosophers, computational explanation is proprietary to psychology—it does not belong in neuroscience. But neuroscientists routinely offer computational explanations of cognitive phenomena. In fact, computational explanation was initially imported from computability theory into the science of mind by neuroscientists, who justified this move on neurophysiological grounds. Establishing the legitimacy and importance of computational explanation in neuroscience is one thing; shedding light on it is another. I raise some philosophical questions pertaining to computational explanation and outline some promising answers that are being developed by a number of authors.
We propose a novel account of the distinction between innate and acquired biological traits: biological traits are innate to the degree that they are caused by factors intrinsic to the organism at the time of its origin; they are acquired to the degree that they are caused by factors extrinsic to the organism. This account borrows from recent work on causation in order to make rigorous the notion of quantitative contributions to traits by different factors in development. We avoid the pitfalls of previous accounts and argue that the distinction between innate and acquired traits is scientifically useful. We therefore address not only previous accounts of innateness but also skeptics about any account. The two are linked, in that a better account of innateness also enables us better to address the skeptics.
According to the computational theory of mind (CTM), mental capacities are explained by inner computations, which in biological organisms are realized in the brain. Computational explanation is so popular and entrenched that it’s common for scientists and philosophers to assume CTM without argument.
Computationalism has been the mainstream view of cognition for decades. There are periodic reports of its demise, but they are greatly exaggerated. This essay surveys some recent literature on computationalism. It concludes that computationalism is a family of theories about the mechanisms of cognition. The main relevant evidence for testing it comes from neuroscience, though psychology and AI are relevant too. Computationalism comes in many versions, which continue to guide competing research programs in philosophy of mind as well as psychology and neuroscience. Although our understanding of computationalism has deepened in recent years, much work in this area remains to be done.
First-person data have been both condemned and hailed because of their alleged privacy. Critics argue that science must be based on public evidence: since first-person data are private, they should be banned from science. Apologists reply that first-person data are necessary for understanding the mind: since first-person data are private, scientists must be allowed to use private evidence. I argue that both views rest on a false premise. In psychology and neuroscience, the subjects issuing first-person reports and other sources of first-person data play the epistemic role of a (self-) measuring instrument. Data from measuring instruments are public and can be validated by public methods. Therefore, first-person data are as public as other scientific data: their use in science is legitimate, in accordance with standard scientific methodology.
This paper concerns Alan Turing’s ideas about machines, mathematical methods of proof, and intelligence. By the late 1930s, Kurt Gödel and other logicians, including Turing himself, had shown that no finite set of rules could be used to generate all true mathematical statements. Yet according to Turing, there was no upper bound to the number of mathematical truths provable by intelligent human beings, for they could invent new rules and methods of proof. So, the output of a human mathematician, for Turing, was not a computable sequence (i.e., one that could be generated by a Turing machine). Since computers only contained a finite number of instructions (or programs), one might argue, they could not reproduce human intelligence. Turing called this the “mathematical objection” to his view that machines can think. Logico-mathematical reasons, stemming from his own work, helped to convince Turing that it should be possible to reproduce human intelligence, and eventually compete with it, by developing the appropriate kind of digital computer. He felt it should be possible to program a computer so that it could learn or discover new rules, overcoming the limitations imposed by the incompleteness and undecidability results in the same way that human mathematicians presumably do.
The Church–Turing Thesis (CTT) is often employed in arguments for computationalism. I scrutinize the most prominent of such arguments in light of recent work on CTT and argue that they are unsound. Although CTT does nothing to support computationalism, it is not irrelevant to it. By eliminating misunderstandings about the relationship between CTT and computationalism, we deepen our appreciation of computationalism as an empirical hypothesis.
I address whether neural networks perform computations in the sense of computability theory and computer science. I explicate and defend the following theses. (1) Many neural networks compute—they perform computations. (2) Some neural networks compute in a classical way. Ordinary digital computers, which are very large networks of logic gates, belong in this class of neural networks. (3) Other neural networks compute in a non-classical way. (4) Yet other neural networks do not perform computations. Brains may well fall into this last class.
Introspective reports are used as sources of information about other minds, in both everyday life and science. Many scientists and philosophers consider this practice unjustified, while others have made the untestable assumption that introspection is a truthful method of private observation. I argue that neither skepticism nor faith concerning introspective reports is warranted. As an alternative, I consider our everyday, commonsensical reliance on each other’s introspective reports. When we hear people talk about their minds, we neither refuse to learn from nor blindly accept what they say. Sometimes we accept what we are told, other times we reject it, and still other times we take the report, revise it in light of what we believe, then accept the modified version. Whatever we do, we have (implicit) reasons for it. In developing a sound methodology for the scientific use of introspective reports, we can take our commonsense treatment of introspective reports and make it more explicit and rigorous. We can discover what to infer from introspective reports in a way similar to how we do it every day, but with extra knowledge, methodological care, and precision. Sorting out the use of introspective reports as sources of data is going to be a painstaking, piecemeal task, but it promises to enhance our science of the mind and brain.
As our data will show, negative existential sentences containing so-called empty names evoke the same strong semantic intuitions in ordinary speakers and philosophers alike: (1) Santa Claus does not exist; Superman does not exist; Clark Kent does not exist. Uttering the sentences in (1) seems to say something truth-evaluable, to say something true, and to say something different for each sentence. A semantic theory ought to explain these semantic intuitions. The intuitions elicited by (1) are in apparent conflict with the Millian view of proper names. According to Millianism, the meaning (or 'semantic value') of a proper name is just its referent. But empty names, such as 'Santa Claus' and 'Superman', appear to lack a ...
I argue that metaphysicians of mind have not done justice to the notion of accessibility between possible worlds. Once accessibility is given its due, physicalism must be reformulated and conceivability arguments must be reevaluated. To reach these conclusions, I explore a novel way of assessing the zombie conceivability argument. I accept that zombies are possible and ask whether that possibility is accessible from our world in the sense of ‘accessible’ used in possible world semantics. It turns out that the question whether zombie worlds are accessible from our world is equivalent to the question whether physicalism is true at our world. By assuming that zombie worlds are accessible from our world, proponents of the zombie conceivability argument beg the question against physicalism. In other words, it is a mistake to assume that the metaphysical possibility of zombies entails that physicalism is false at our world. I will then consider what happens if a proponent of the zombie conceivability argument should insist that zombie worlds are accessible from our world. I will argue that the same ingredients used in the zombie conceivability argument—whatever exactly they might be—can be used to construct an argument to the opposite conclusion. At that point, we reach a stalemate between physicalism and property dualism: while the possibility of zombies entails property dualism, the possibility of other creatures entails physicalism. Since these two possibilities are mutually inconsistent, either one of them is not genuine or one of them is inaccessible from the actual world. To resolve this stalemate, we need more than traditional conceivability arguments.
In the 1950s, Alan Turing proposed his influential test for machine intelligence, which involved a teletyped dialogue between a human player, a machine, and an interrogator. Two readings of Turing's rules for the test have been given. According to the standard reading of Turing's words, the goal of the interrogator was to discover which was the human being and which was the machine, while the goal of the machine was to be indistinguishable from a human being. According to the literal reading, the goal of the machine was to simulate a man imitating a woman, while the interrogator – unaware of the real purpose of the test – was attempting to determine which of the two contestants was the woman and which was the man. The present work offers a study of Turing's rules for the test in the context of his advocated purpose and his other texts. The conclusion is that there are several independent and mutually reinforcing lines of evidence that support the standard reading, while fitting the literal reading in Turing's work faces severe interpretative difficulties. So, the controversy over Turing's rules should be settled in favor of the standard reading.
Epistemic divergence occurs when different investigators give different answers to the same question using evidence-collecting methods that are not public. Without following the principle that scientific methods must be public, scientific communities risk epistemic divergence. I explicate the notion of public method and argue that, to avoid the risk of epistemic divergence, scientific communities should (and do) apply only methods that are public.
Heterophenomenology is a third-person methodology proposed by Daniel Dennett for using first-person reports as scientific evidence. I argue that heterophenomenology can be improved by making six changes: (i) setting aside consciousness, (ii) including other sources of first-person data besides first-person reports, (iii) abandoning agnosticism as to the truth value of the reports in favor of the most plausible assumptions we can make about what can be learned from the data, (iv) interpreting first-person reports (and other first-person behaviors) directly in terms of target mental states rather than in terms of beliefs about them, (v) dropping any residual commitment to incorrigibility of first-person reports, and (vi) recognizing that third-person methodology does have positive effects on scientific practices. When these changes are made, heterophenomenology turns into the self-measurement methodology of first-person data that I have defended in previous papers.
Functionalism is a popular solution to the mind–body problem. It has a number of versions. We outline some of the major releases of functionalism, listing some of their important features as well as some of the bugs that plagued these releases. We outline how different versions are related. Many have been pessimistic about functionalism’s prospects, but most criticisms have missed the latest upgrades. We end by suggesting a version of functionalism that provides a complete account of the mind.
Computationalism says that brains are computing mechanisms, that is, mechanisms that perform computations. At present, there is no consensus on how to formulate computationalism precisely or adjudicate the dispute between computationalism and its foes, or between different versions of computationalism. An important reason for the current impasse is the lack of a satisfactory philosophical account of computing mechanisms. The main goal of this dissertation is to offer such an account. I also believe that the history of computationalism sheds light on the current debate. By tracing different versions of computationalism to their common historical origin, we can see how the current divisions originated and understand their motivation. Reconstructing debates over computationalism in the context of their own intellectual history can contribute to philosophical progress on the relation between brains and computing mechanisms and help determine how brains and computing mechanisms are alike, and how they differ. Accordingly, my dissertation is divided into a historical part, which traces the early history of computationalism up to 1946, and a philosophical part, which offers an account of computing mechanisms. The two main ideas developed in this dissertation are that (1) computational states are to be identified functionally, not semantically, and (2) computing mechanisms are to be studied by functional analysis. The resulting account of computing mechanisms, which I call the functional account of computing mechanisms, can be used to identify computing mechanisms and the functions they compute. I use the functional account of computing mechanisms to taxonomize computing mechanisms based on their different computing power, and I use this taxonomy of computing mechanisms to taxonomize different versions of computationalism based on the functional properties that they ascribe to brains. By doing so, I begin to tease out empirically testable statements about the functional organization of the brain that different versions of computationalism are committed to. I submit that when computationalism is reformulated in the more explicit and precise way I propose, the disputes about computationalism can be adjudicated on the grounds of empirical evidence from neuroscience.
I reduce activities to properties, where properties include causal powers. Activities are manifestations of causal powers. Activities occur when an entity’s causal powers encounter partners for their manifestation. Given this reduction of activities to properties, entities and properties are all we need for an ontology of mechanisms.
We define mereologically invariant composition as the relation between a whole object and its parts when the object retains the same parts during a time interval. We argue that mereologically invariant composition is identity between a whole and its parts taken collectively. Our reason is that parts and wholes are equivalent measurements of a portion of reality at different scales in the precise sense employed by measurement theory. The purpose of these scales is the numerical representation of primitive relations between quantities of being. To show this, we prove representation and uniqueness theorems for composition. Thus, mereologically invariant composition is trans-scalar identity.
Roughly speaking, computationalism says that cognition is computation, or that cognitive phenomena are explained by the agent’s computations. The cognitive processes and behavior of agents are the explanandum. The computations performed by the agents’ cognitive systems are the proposed explanans. Since the cognitive systems of biological organisms are their nervous systems (plus or minus a bit), we may say that according to computationalism, the cognitive processes and behavior of organisms are explained by neural computations. Some people might prefer to say that cognitive systems are “realized” by nervous systems, and thus that—according to computationalism—cognitive computations are “realized” by neural processes. In this paper, nothing hinges on the nature of the relation between cognitive systems and nervous systems, or between computations and neural processes. For present purposes, if a neural process realizes a computation, then that neural process is a computation. Thus, I will couch much of my discussion in terms of nervous systems and neural computation. Before proceeding, we should dispense with a possible red herring. Contrary to a common assumption, computationalism does not stand in opposition to connectionism. Connectionism, in the most general and common sense of the term, is the claim that cognitive phenomena are explained (at some level and at least in part) by the processes of neural networks. This is a truism, supported by most neuroscientific evidence. Everybody ought to be a connectionist in this general sense. The relevant question is, are neural processes computations? More precisely, are the neural processes to be found in the nervous systems of organisms computations? Computationalists say “yes”, anti-computationalists say “no”. This paper investigates whether any of the arguments on offer against computationalism have a chance at knocking it off. Ever since Warren McCulloch and Walter Pitts (1943) first proposed it, computationalism has been subjected to a wide range of objections...
According to the zombie conceivability argument, phenomenal zombies are conceivable, and hence possible, and hence physicalism is false. Critics of the conceivability argument have responded by denying either that zombies are conceivable or that they are possible. Much of the controversy hinges on how to establish and understand what is conceivable, what is possible, and the link between the two—matters that are at least as obscure and controversial as whether consciousness is physical. Because of this, the debate over physicalism is unlikely to be resolved by thinking about zombies—or at least, zombies as discussed by philosophers to date.
In this paper, I explore an alternative strategy against the zombie conceivability argument. I accept the possibility of zombies and ask whether that possibility is accessible (in the sense of ‘accessible’ used in possible world semantics) to our world. It turns out that the question of whether zombie worlds are accessible to our world is equivalent to the question of whether physicalism is true. By assuming that zombie worlds are accessible to our world, supporters of the zombie conceivability argument beg the question against physicalists. I will then consider what happens if a supporter of the zombie conceivability argument should insist that zombie worlds are accessible to our world. I will argue that the same ingredients used in the zombie conceivability argument—whatever they might be—can be used to construct an argument to the opposite conclusion. If that is correct, we reach a stalemate between physicalism and property dualism: while the possibility of some zombies entails property dualism, the possibility of other creatures entails physicalism. Since these two possibilities are inconsistent, one of them is not genuine. To resolve this stalemate, we need more than thought experiments.
Machery’s argument that concepts split into different kinds is bold and inspiring but not fully persuasive. We will focus on the lack of evidence for the fourth tenet of Machery’s..
Knowledge is factually grounded belief. This account uses the same ingredients as the traditional analysis—belief, truth, and justification—but posits a different relation between them. While the traditional analysis begins with true belief and improves it by simply adding justification, this account begins with belief, improves it by grounding it, and then improves it further by grounding it in the facts. In other words, for a belief to be knowledge, it's not enough that it be true and justified; for a belief to be knowledge, it must be justified by the facts. This account solves the Gettier problem. Gettierized beliefs fall short of knowledge because, albeit true and justified, they are not grounded in the facts. This account also elucidates why knowledge attributions are sensitive to epistemic standards. It's because whether we take a belief to be grounded in the facts is sensitive to epistemic standards.
According to pancomputationalism, all physical systems – atoms, rocks, hurricanes, and toasters – perform computations. Pancomputationalism seems to be increasingly popular among some philosophers and physicists. In this paper, we interpret pancomputationalism in terms of computational descriptions of varying strength—computational interpretations of physical microstates and dynamics that vary in their restrictiveness. We distinguish several types of pancomputationalism and identify essential features of the computational descriptions required to support them. By tying various pancomputationalist theses directly to notions of what counts as computation in a physical system, we clarify the meaning, strength, and plausibility of pancomputationalist claims. We show that the force of these claims is diminished when weaknesses in their supporting computational descriptions are laid bare. Specifically, once computation is meaningfully distinguished from ordinary dynamics, the most sensational pancomputationalist claims are unwarranted, whereas the more modest claims offer little more than recognition of causal similarities between physical processes and the most primitive computing processes.