I argue that discussions of cognitive penetration have been insufficiently clear about what distinguishes perception and cognition, and what kind of relationship between the two is supposed to be at stake in the debate. A strong reading, which is compatible with many characterizations of penetration, posits a highly specific and directed influence on perception. According to this view, which I call the “internal effect view” (IEV), a cognitive state penetrates a perceptual process if the presence of the cognitive state causes a change to the computation performed by the process, with the result being a distinct output. I produce a novel argument that this strong reading is false. On one well-motivated way of drawing the distinction between perceptual states and cognitive states, cognitive representations cannot play the computational role posited for them by IEV, vis-à-vis perception. This does not mean, however, that there are not important causal relationships between cognitive and perceptual states. I introduce an alternative view of these relationships, the “external effect view” (EEV). EEV posits that each cognitive state is associated with a broad range of possible perceptual outcomes, and biases perception towards any of those perceptual outcomes without determining specific perceptual contents. I argue that EEV captures the kinds of cases philosophers have thought to be evidence for IEV, and a wide range of other cases as well.
When doing mental ontology, we must ask how to individuate distinct categories of mental states, and then, given that individuation, ask how states from distinct categories interact. One promising proposal for how to individuate cognitive from sensorimotor states is in terms of their representational form. On these views, cognitive representations are propositional in structure, while sensorimotor representations have an internal structure that maps to the perceptual and kinematic dimensions involved in an action context. This way of thinking has resulted in worries about the interface between cognition and sensorimotor systems – that is, about how representations of these distinct types might interact in performing actions. I claim that current solutions to the interface problem fail, because they have not sufficiently abandoned intuitions inspired by faculty psychology. In particular, current proposals seek to show how cognitive states can enforce prior decisions on sensorimotor systems. I argue that such “determination” views are the wrong kind of views to adopt, given the form distinction. Instead, I offer a proposal on which propositional representations can at best bias us toward certain kinds of action. This kind of view, I argue, appealingly distributes the explanation of action across distinctive contributions from cognitive and sensorimotor processing.
Functional decomposition is an important goal in the life sciences, and is central to mechanistic explanation and explanatory reduction. A growing literature in philosophy of science, however, has challenged decomposition-based notions of explanation. ‘Holists’ posit that complex systems exhibit context-sensitivity, dynamic interaction, and network dependence, and that these properties undermine decomposition. They then infer from the failure of decomposition to the failure of mechanistic explanation and reduction. I argue that complexity, so construed, is only incompatible with one notion of decomposition, which I call ‘atomism’, and not with decomposition writ large. Atomism posits that function ascriptions must be made to parts with minimal reference to the surrounding system. Complexity does indeed falsify atomism, but I contend that there is a weaker, ‘contextualist’ notion of decomposition that is fully compatible with the properties that holists cite. Contextualism suggests that the function of parts can shift with external context, and that interactions with other parts might help determine their context-appropriate functions. This still admits of functional decomposition within a given context. I will give examples based on the notion of oscillatory multiplexing in systems neuroscience. If contextualism is feasible, then holist inferences are faulty—one cannot infer from the presence of complexity to the failure of decomposition, mechanism, and reductionism.
Functional localization has historically been one of the primary goals of neuroscience. There is still debate, however, about whether it is possible, and if so what kind of theories succeed at localization. I argue for a contextualist approach to localization. Most theorists assume that widespread contextual variability in function is fundamentally incompatible with functional decomposition in the brain, because contextualist accounts will fail to be generalizable and projectable. I argue that this assumption is misplaced. A properly articulated contextualism can ground successful theories of localization even without positing completely generalizable accounts. Via a case study from perceptual neuroscience, I suggest that there is strong evidence for contextual variation in the function of perceptual brain areas. I then outline a version of contextualism that is empirically adequate with respect to this data, and claim that it can still distinguish brain areas from each other according to their functional properties. Finally, I claim that the view does not fail the norms for good theory in the way that anticontextualists suppose. It is true that, on a contextualist view, we will not have theories that are completely generalizable and predictive. We can, however, have successful partial generalizations that structure ongoing investigation and lead to novel functional insight, and this success is sufficient to ground the project of functional localization.
Intentions are commonly conceived of as discrete mental states that are the direct cause of actions. In the last several decades, neuroscientists have taken up the project of finding the neural implementation of intentions, and a number of areas have been posited as implementing these states. We argue, however, that the processes underlying action initiation and control are considerably more dynamic and context sensitive than the concept of intention can allow for. Therefore, adopting the notion of ‘intention’ in neuroscientific explanations can easily lead to misinterpretation of the data, and can negatively influence investigation into the neural correlates of intentional action. We suggest reinterpreting the mechanisms underlying intentional action, and we will discuss the elements that such a reinterpretation needs to account for.
Proponents of cognitive penetration often argue for the thesis on the basis of combined intuitions about categorical perception and perceptual learning. The claim is that beliefs penetrate perceptions in the course of learning to perceive categories. I argue that this “diachronic” penetration thesis is false. In order to substantiate a robust notion of penetration, the beliefs that enable learning must describe the particular ability that subjects learn. However, they cannot do so, since in order to help with learning they must instruct learners to employ previously existing abilities. I argue that a better approach recognizes that we can have sophisticated causal precursors to perceptual learning, but that the learning process itself must operate outside of cognitive influence.
Diagrams have distinctive characteristics that make them an effective medium for communicating research findings, but they are even more impressive as tools for scientific reasoning. Focusing on circadian rhythm research in biology to explore these roles, we examine diagrammatic formats that have been devised to identify and illuminate circadian phenomena and to develop and modify mechanistic explanations of these phenomena.
I draw on empirical results from perceptual and motor learning to argue for an anti-intellectualist position on skill. Anti-intellectualists claim that skill or know-how is non-propositional. Recent proponents of the view have stressed the flexible but fine-grained nature of skilled control as supporting their position. However, they have left the nature of the mental representations underlying such control undertheorized. This leaves open several possible strategies for the intellectualist, particularly with regard to skill learning. Propositional knowledge may structure the inputs to sensorimotor learning, may constitute the outcomes of said learning, or may be needed for the employment of learned skill. I argue that sensorimotor learning produces multi-scale associational representations, and that these representations are of the right sort to underlie flexible and fine-grained control. I then suggest that their content is vitally indeterminate with regard to propositional content attribution, because they exhibit a kind of open-ended structure. I articulate this kind of structure, and use it to respond to the three intellectualist strategies. I then show how the perspective I advance offers insights for understanding both instruction and expert practice.
The notion of “hierarchy” is one of the most commonly posited organizational principles in systems neuroscience. To date, however, it has received little philosophical analysis. This is unfortunate, because the general concept of hierarchy ranges over two approaches with distinct empirical commitments, and whose conceptual relations remain unclear. We call the first approach the “representational hierarchy” view, which posits that an anatomical hierarchy of feed-forward, feed-back, and lateral connections underlies a signal processing hierarchy of input-output relations. Because the representational hierarchy view holds that unimodal sensory representations are subsequently elaborated into more categorical and rule-based ones, it is committed to an increasing degree of abstraction along the hierarchy. The second view, which we call the “topological hierarchy” view, is not committed to different representational functions or degrees of abstraction at different levels. Topological approaches instead posit that the hierarchical level of a part of the brain depends on how central it is to the pattern of connections in the system. Based on the current evidence, we argue that three conceptual relations between the two approaches are possible: topological hierarchies could substantiate the traditional representational hierarchy, conflict with it, or contribute to a plurality of approaches needed to understand the organization of the brain. By articulating each of these possibilities, our analysis attempts to open a conceptual space in which further neuroscientific and philosophical reasoning about neural hierarchy can proceed.
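To make the topological notion concrete, the following is a minimal illustrative sketch (not drawn from the paper): it ranks the areas of a small, entirely hypothetical connectivity graph by betweenness centrality, one simple way a topological hierarchy could be operationalized. The area names, edges, and choice of centrality measure are assumptions for illustration only.

```python
# Illustrative sketch: a "topological hierarchy" as a centrality ranking.
# The areas and connections below are hypothetical placeholders.
import networkx as nx

edges = [("V1", "V2"), ("V2", "V4"), ("V4", "IT"),
         ("V2", "MT"), ("MT", "LIP"), ("LIP", "FEF"),
         ("IT", "PFC"), ("FEF", "PFC"), ("MT", "PFC")]
G = nx.Graph(edges)

# Betweenness centrality: how often an area sits on shortest paths between
# other areas. Higher values mark more "hub-like" areas, i.e., higher
# levels on a topological (rather than representational) hierarchy.
centrality = nx.betweenness_centrality(G)
for area, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{area}: {score:.3f}")
```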
Fodor’s view of the mind is thoroughly computational. This means that the basic kind of mental entity is a “discursive” mental representation, and operations over this kind of mental representation have broad architectural scope, extending out to the edges of perception and the motor system. However, in multiple epochs of his work, Fodor attempted to define a functional role for non-discursive, imagistic representation. I describe and critique his two considered proposals. The first view says that images play a particular kind of functional role in certain types of deliberative tasks. The second says that images are restricted solely to the borders of perception, and act as a sort of medium for the fixing of conceptual reference. I argue, against the first proposal, that a broad-scope computationalism such as Fodor’s renders images in principle functionally redundant. I argue, against the second proposal, that empirical evidence suggests that non-discursive representations are learned through perceptual learning, and directly inform category judgments. In each case, I point out extant debates for which the arguments are relevant. The upshot is that there is motivation for limited-scope computationalism, in which some, but not all, mental processes operate on discursive mental representations. Keywords: Computational Theory of Mind; Mental Representation; Perception; Mental Image; Jerry Fodor.
In this paper I criticize a view of functional localization in neuroscience, which I call “computational absolutism” (CA). “Absolutism” in general is the view that each part of the brain should be given a single, univocal function ascription. Traditional varieties of absolutism posit that each part of the brain processes a particular type of information and/or performs a specific task. These function attributions are currently beset by physiological evidence which seems to suggest that brain areas are multifunctional—that they process distinct information and perform different tasks depending on context. Many theorists take this contextual variation as inimical to successful localization, and claim that we can avoid it by changing our functional descriptions to computational descriptions. The idea is that we can have highly generalizable and predictive functional theories if we can discover a single computation performed by each area regardless of the specific context in which it operates. I argue, drawing on computational models of perceptual area MT, that this computational version of absolutism fails to come through on its promises. In MT, the modeling field has not produced a univocal computational description, but instead a plurality of models analyzing different aspects of MT function. Moreover, CA cannot appeal to theoretical unification to solve this problem, since highly general models, on their own, neither explain nor predict what MT does in any particular context. I close by offering a perspective on neural modeling inspired by Nancy Cartwright’s and Margaret Morrison’s views of modeling in the physical sciences.
We explore the crucial role of diagrams in scientific reasoning, especially reasoning directed at developing mechanistic explanations of biological phenomena. We offer a case study focusing on one research project that resulted in a published paper advancing a new understanding of the mechanism by which the central circadian oscillator in Synechococcus elongatus controls gene expression. By examining how the diagrams prepared for the paper developed over the course of multiple drafts, we show how the process of generating a new explanation vitally involved the development and integration of multiple versions of different types of diagrams, and how reasoning about the mechanism proceeded in tandem with the development of the diagrams used to represent it.
It is a widespread assumption in philosophy of science that data is what is explained by theory—that data itself is not explanatory. I draw on instances of representational and explanatory practice from mammalian chronobiology to suggest that this assumption is unsustainable. In many instances, biologists employ representations of data in explanatory ways that are not reducible to constraints on or evidence for representations of mechanisms. Data graphs are used to exemplify relationships between quantities in the mechanism, and often these representations are necessary for explaining particular aspects of the phenomena under study. I argue that this kind of representation is distinct from representing laws or generalizations, and its primary purpose is to convey particular types or patterns of quantitative relationships. The benefit of the analysis is two-fold. First, it provides a more accurate account of explanatory practice in broadly mechanistic analysis in biology. Second, it suggests that there is not an explanatory “fundamental” type of representation in biology. Rather, the practice of explanation consists in the construction of different types of representations and their employment for distinct explanatory purposes.
The notion of representation in neuroscience has largely been predicated on localizing the components of computational processes that explain cognitive function. On this view, which I call “algorithmic homuncularism,” individual, spatially and temporally distinct parts of the brain serve as vehicles for distinct contents, and the causal relationships between them implement the transformations specified by an algorithm. This view has a widespread influence in philosophy and cognitive neuroscience, and has recently been ably articulated and defended by Shea. Still, I am skeptical about algorithmic homuncularism, and I argue against it by focusing on recent methods for complex data analysis in systems neuroscience. I claim that analyses such as principal components analysis and linear discriminant analysis prevent individuating vehicles as algorithmic homuncularism recommends. Rather, each individual part contributes to a global state space, trajectories of which vary with important task parameters. I argue that, while homuncularism is false, this view still supports a kind of “vehicle realism,” and I apply this view to debates about the explanatory role of representation.
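As a rough illustration of the kind of analysis at issue (a hypothetical sketch, not the analyses discussed in the paper), the snippet below projects simulated population activity into a low-dimensional state space with principal components analysis; the resulting trajectory belongs to the population as a whole rather than to any individual unit. The simulated data and all parameters are assumptions for illustration.

```python
# Illustrative sketch: population activity summarized as a state-space
# trajectory via PCA. Simulated data only; parameters are arbitrary.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_neurons, n_timepoints = 50, 200

# Every simulated neuron mixes the same two latent task signals, so no
# single neuron carries a distinct, separable content.
t = np.linspace(0, 2 * np.pi, n_timepoints)
latents = np.stack([np.sin(t), np.cos(2 * t)])             # (2, T)
mixing = rng.normal(size=(n_neurons, 2))                   # (N, 2)
rates = mixing @ latents + 0.1 * rng.normal(size=(n_neurons, n_timepoints))

# PCA recovers a global trajectory through a two-dimensional state space.
trajectory = PCA(n_components=2).fit_transform(rates.T)    # (T, 2)
print(trajectory.shape)  # one population state per timepoint
```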
There is a long and distinguished tradition in philosophy and psychology according to which the mind’s fundamental, foundational connection to the world is made by connecting perceptually to features of objects. On this picture, which we’ll call feature prioritarianism, minds like ours first make contact with the colors, shapes, and sizes of distal items, and then, only on the basis of the representations so obtained, build up representations of the objects that bear these features. The feature priority view maintains, then, that our perception/knowledge of objects asymmetrically depends on our perception/knowledge of simple features. This paper has two aims. First, we will present evidence, drawn from a variety of perceptual effects, that feature prioritarianism cannot be true, since there are cases that speak against the priority of feature representations in perceptual processing. Instead, we claim that the evidence supports an alternative, and more complex, no-priority view. Second, we will offer a framework for a no-priority view that both captures the cases we cite and provides a more sensible architecture in which to understand a variety of productive projects in perceptual science, and show how the framework cross-cuts some recent discussions in philosophy of perception.
In discussion of mechanisms, philosophers often debate about whether quantitative descriptions of generalizations or qualitative descriptions of operations are explanatorily fundamental. I argue that these debates have erred by conflating the explanatory roles of generalizations and patterns. Patterns are types of variations within or between quantities in a mechanism over time or across conditions. While these patterns must often be represented in addition to descriptions of operations in order to explain a phenomenon, they are not equivalent to generalizations because their explanatory role does not depend on any specific facts about their scope or domain of invariance.
Despite their popularity, relatively scant attention has been paid to the upshot of Bayesian and predictive processing models of cognition for views of overall cognitive architecture. Many of these models are hierarchical; they posit generative models at multiple distinct “levels,” whose job is to predict the consequences of sensory input at lower levels. I articulate one possible position that could be implied by these models, namely, that there is a continuous hierarchy of perception, cognition, and action control comprising levels of generative models. I argue that this view is not entailed by a general Bayesian/predictive processing outlook. Bayesian approaches are compatible with distinct formats of mental representation. Focusing on Bayesian approaches to motor control, I argue that the junctures between different types of mental representation are places where the transitivity of hierarchical prediction may be broken, and I consider the upshot of this conclusion for broader discussions of cognitive architecture.
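For concreteness, here is a minimal two-level, predictive-coding-style update rule of the sort such hierarchical models employ. It is an illustrative assumption, not a model endorsed in the paper; the linear form, learning rate, and observation value are arbitrary choices.

```python
# Illustrative sketch: two-level hierarchical prediction. Level 2 predicts
# level 1, level 1 predicts the observation, and each estimate is nudged
# by the prediction errors it sends and receives. Parameters are arbitrary.

def predictive_coding_step(x_obs, mu1, mu2, lr=0.1):
    err1 = x_obs - mu1          # sensory prediction error at level 1
    err2 = mu1 - mu2            # error between level 1 and level 2
    mu1 += lr * (err1 - err2)   # level 1 balances both error signals
    mu2 += lr * err2            # level 2 moves toward level 1's estimate
    return mu1, mu2

mu1 = mu2 = 0.0
for _ in range(200):
    mu1, mu2 = predictive_coding_step(x_obs=1.0, mu1=mu1, mu2=mu2)
print(round(mu1, 3), round(mu2, 3))  # both estimates settle near 1.0
```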
The traditional approach to explanation in cognitive neuroscience is realist about psychological constructs, and treats them as explanatory. On the “standard framework,” cognitive neuroscientists explain behavior as the result of the instantiation of psychological functions in brain activity. This strategy is questioned by results suggesting the distribution of function in the brain, the multifunctionality of individual parts of the brain, and the overlap in neural realization of purportedly distinct psychological constructs. One response to this in the field has been to employ the tools of databasing and machine learning to attempt to find and quantify specific correlations between psychological kinds such as ‘memory’ or ‘attention’ (or sub-kinds thereof) and patterns of activity in the brain. I assess the status and prospects of these projects. I argue that current proponents of the project are vague about their aims, vis-à-vis the standard framework, sometimes suggesting substantiation of the framework, sometimes suggesting retaining the framework but revising the ontology of mental constructs, and sometimes suggesting abandonment of the framework. I argue that extant results from within the projects fail to substantiate the standard framework, and propose an alternative. On my view, psychological constructs should not be viewed as explanantia, but instead as heuristic concepts that help us uncover ways that behaviors can vary and the ways that the brain implements those distinctions. I then discuss the normative upshot of these views for databasing and brain mapping projects.
According to the Causal Theory of Action (CTA), genuine actions are individuated by their causal history. Actions are bodily movements that are causally explained by citing the agent’s reasons. Reasons are then explained as some combination of propositional attitudes – beliefs, desires, and/or intentions. The CTA is thus committed to realism about the attitudes. This paper explores current models of decision-making from the mind sciences, and argues that it is far from obvious how to locate the propositional attitudes in the causal processes they describe. The outcome of the analysis is a proposal for pluralism: there are several ways one could attempt to map states like “intention” onto decision-making processes, but none will fulfill all of the roles attributed to the attitudes by the CTA.
In several works, Ruth Millikan has developed a ‘teleosemantic’ theory of concepts. Millikan’s theory has three explicit desiderata for concepts: wide scope, non-descriptionist content, and naturalism. I contend that Millikan’s theory cannot fulfill all of these desiderata simultaneously. Theoretical concepts, such as those of chemistry and physics, fall under Millikan’s intended scope, but I will argue that her theory cannot account for these concepts in a way that is compatible with both non-descriptionism and naturalism. In these cases, Millikan’s view is subject to the traditional ‘indeterminacy problem’ for teleosemantic theories. This leaves the content of theoretical concepts indeterminate between a descriptionist and non-descriptionist content. Furthermore, this problem cannot be overcome without giving up the naturalism desideratum. I suggest that the scope of Millikan’s theory should be limited. At best, the theory will be able to attribute naturalistic, non-descriptionist content to a smaller range of concepts.
Naturalism and scientific creativity: new tools for analyzing science. Daniel Burnston (Department of Philosophy and Interdisciplinary Cognitive Science Program, University of California, San Diego, 9500 Gilman Drive # 0119, La Jolla, CA 92093-0119, USA). Metascience, pp. 1–4. DOI: 10.1007/s11016-010-9513-1.
On page 3653, there is a mistake in the explanation of the Cornsweet illusion. In fact, the correct explanation is that the panel perceived as darker is the one facing towards the light source; in the case of this figure, the light is coming from the right.