According to the decomposition thesis, perceptual experiences resolve without remainder into their different modality-specific components. Contrary to this view, I argue that certain cases of multisensory integration give rise to experiences representing features of a novel type. Through the coordinated use of bodily awareness—understood here as encompassing both proprioception and kinaesthesis—and the exteroceptive sensory modalities, one becomes perceptually responsive to spatial features whose instances couldn’t be represented by any of the contributing modalities functioning in isolation. I develop an argument for this conclusion focusing on two cases: 3D shape perception in haptic touch and experiencing an object’s egocentric location in crossmodally accessible, environmental space.
Pictures are 2D surfaces designed to elicit 3D-scene-representing experiences from their viewers. In this essay, I argue that philosophers have tended to underestimate the relevance of research in vision science to understanding the nature of pictorial experience. Both the deeply entrenched methodology of virtual psychophysics and empirical studies of pictorial space perception provide compelling support for the view that pictorial experience and seeing face-to-face are experiences of the same psychological, explanatory kind. I also show that an empirically informed account of pictorial experience provides resources to develop a novel, resemblance-based account of depiction. According to what I call the deep resemblance theory, pictures work by presenting virtual models of objects and scenes in phenomenally 3D, pictorial space.
It is natural to assume that the fine-grained and highly accurate spatial information present in visual experience is often used to guide our bodily actions. Yet this assumption has been challenged by proponents of the Two Visual Systems Hypothesis (TVSH), according to which visuomotor programming is the responsibility of a “zombie” processing stream whose sources of bottom-up spatial information are entirely non-conscious. In many formulations of TVSH, the role of conscious vision in action is limited to “recognizing objects, selecting targets for action, and determining what kinds of action, broadly speaking, to perform”. Our aim in this study is to show that the available evidence not only fails to support this dichotomous view but actually reveals a significant role for conscious vision in motor programming, especially for actions that require deliberate attention.
The problem of amodal perception is the problem of how we represent features of perceived objects that are occluded or otherwise hidden from us. Bence Nanay (2010) has recently proposed that we amodally perceive an object's occluded features by imaginatively projecting them into the relevant regions of visual egocentric space. In this paper, I argue that amodal perception is not a single, unitary capacity. Drawing appropriate distinctions reveals amodal perception to be characterized not only by mental imagery, as Nanay suggests, but also by genuinely visual representations as well as beliefs. I conclude with some brief remarks on the role of object-directed bodily action in conferring a sense of unseen presence on an object's occluded features.
Neuropsychological findings used to motivate the "two visual systems" hypothesis have been taken to endanger a pair of widely accepted claims about spatial representation in conscious visual experience. The first is the claim that visual experience represents 3-D space around the perceiver using an egocentric frame of reference. The second is the claim that there is a constitutive link between the spatial contents of visual experience and the perceiver's bodily actions. In this paper, I review and assess three main sources of evidence for the two visual systems hypothesis. I argue that the best interpretation of the evidence is in fact consistent with both claims. I conclude with some brief remarks on the relation between visual consciousness and rational agency.
In this paper, I critically assess the enactive account of visual perception recently defended by Alva Noë (2004). I argue inter alia that the enactive account falsely identifies an object’s apparent shape with its 2D perspectival shape; that it mistakenly assimilates visual shape perception and volumetric object recognition; and that it seriously misrepresents the constitutive role of bodily action in visual awareness. I argue further that noticing an object’s perspectival shape involves a hybrid experience combining both perceptual and imaginative elements – an act of what I call ‘make-perceive’.
This chapter critically assesses recent arguments that acquiring the ability to categorize an object as belonging to a certain high-level kind can cause the relevant kind property to be represented in visual phenomenal content. The first two arguments, developed respectively by Susanna Siegel (2010) and Tim Bayne (2009), employ an essentially phenomenological methodology. The third argument, developed by William Fish (2013), by contrast, is supported by an array of psychophysical and neuroscientific findings. I argue that while none of these arguments ultimately proves successful, there is a substantial body of empirical evidence that information originating outside the visual system can nonetheless modulate the way an object’s low-level attributes visually appear. Visual phenomenal content, I show, is significantly influenced not only by crossmodal interactions between vision and other exteroceptive senses such as touch and audition, but also by interactions between vision and non-perceptual systems involved in motor planning and the construction of the proprioceptive body-image.
Human beings have the ability to ‘augment’ reality by superimposing mental imagery on the visually perceived scene. For example, when deciding how to arrange furniture in a new home, one might project the image of an armchair into an empty corner or the image of a painting onto a wall. The experience of noticing a constellation in the sky at night is also a perceptual-imaginative amalgam: it involves both seeing the stars in the constellation and imagining the lines that connect them at the same time. I here refer to such hybrid experiences – involving both a bottom-up, externally generated component and a top-down, internally generated component – as make-perceive (Briscoe 2008, 2011). My discussion in this paper has two parts. In the first part, I show that make-perceive enables human beings to solve certain problems and pursue certain projects more effectively than bottom-up perceiving or top-down visualization alone. To this end, the skillful use of projected mental imagery is surveyed in a variety of contexts, including action planning, the interpretation of static mechanical diagrams, and non-instrumental navigation. In the second part, I address the question of whether make-perceive may help to account for the “phenomenal presence” of occluded or otherwise hidden features of perceived objects. I argue that phenomenal presence is not well explained by the hypothesis that hidden features are represented using projected mental images. In defending this position, I point to important phenomenological and functional differences between the way hidden object features are represented respectively in mental imagery and amodal completion.
Multisensory processing encompasses all of the various ways in which the presence of information in one sensory modality can adaptively influence the processing of information in a different modality. In Part I of this survey article, I begin by presenting a cartography of some of the more extensively investigated forms of multisensory processing, with a special focus on two distinct types of multisensory integration. I briefly discuss the conditions under which these different forms of multisensory processing occur as well as their important perceptual consequences and interrelations. In Part II, I then turn to examining some of the different possible ways in which the structure of conscious perceptual experience might also be characterized as multisensory. In addition, I discuss the significance of research on multisensory processing and multisensory consciousness for philosophical attempts to individuate the senses.
The first part of this survey article presented a cartography of some of the more extensively studied forms of multisensory processing. In this second part, I turn to examining some of the different possible ways in which the structure of conscious perceptual experience might also be characterized as multisensory. In addition, I discuss the significance of research on multisensory processing and multisensory consciousness for philosophical debates concerning the modularity of perception, cognitive penetration, and the individuation of the senses.
Action is a means of acquiring perceptual information about the environment. Turning around, for example, alters your spatial relations to surrounding objects and, hence, which of their properties you visually perceive. Moving your hand over an object’s surface enables you to feel its shape, temperature, and texture. Sniffing and walking around a room enables you to track down the source of an unpleasant smell. Active or passive movements of the body can also generate useful sources of perceptual information (Gibson 1966, 1979). The pattern of optic flow in the retinal image produced by forward locomotion, for example, contains information about the direction in which you are heading, while motion parallax is a “cue” used by the visual system to estimate the relative distances of objects in your field of view. In these uncontroversial ways and others, perception is instrumentally dependent on action. According to an explanatory framework that Susan Hurley (1998) dubs the “Input-Output Picture”, the dependence of perception on action is purely instrumental: "Movement can alter sensory inputs and so result in different perceptions… changes in output are merely a means to changes in input, on which perception depends directly" (1998: 342).

The action-based theories of perception, reviewed in this entry, challenge the Input-Output Picture. They maintain that perception can also depend in a noninstrumental or constitutive way on action (or, more generally, on capacities for object-directed motor control). This position has taken many different forms in the history of philosophy and psychology. Most action-based theories of perception in the last 300 years, however, have looked to action in order to explain how vision, in particular, acquires either all or some of its spatial representational content. Accordingly, these are the theories on which we shall focus here.

We begin in Section 1 by discussing George Berkeley’s An Essay Towards a New Theory of Vision (1709), the historical locus classicus of action-based theories of perception, and one of the most influential texts on vision ever written. Berkeley argues that the basic or “proper” deliverance of vision is not an arrangement of voluminous objects in three-dimensional space, but rather a two-dimensional manifold of light and color. We then turn to a discussion of Lotze, Helmholtz, and the local sign doctrine. The “local signs” were felt cues that enabled the mind to know what sort of spatial content to imbue visual experience with. For Lotze, these cues were “inflowing” kinaesthetic feelings that result from actually moving the eyes, while, for Helmholtz, they were “outflowing” motor commands sent to move the eyes.

In Section 2, we discuss sensorimotor contingency theories, which became prominent in the 20th century. These views maintain that an ability to predict the sensory consequences of self-initiated actions is necessary for perception. Among the motivations for this family of theories is the problem of visual direction constancy—why do objects appear to be stationary even though the locations on the retina to which they reflect light change with every eye movement?—as well as experiments on adaptation to optical rearrangement devices (ORDs) and sensory substitution.

Section 3 examines two other important 20th century theories. According to what we shall call the motor component theory, efference copies generated in the oculomotor system and/or proprioceptive feedback from eye movements are used together with incoming sensory inputs to determine the spatial attributes of perceived objects. Efferent readiness theories, by contrast, look to the particular ways in which perceptual states prepare the observer to move and act in relation to the environment. The modest readiness theory, as we shall call it, claims that the way an object’s spatial attributes are represented in visual experience can be modulated by one or another form of covert action planning. The bold readiness theory argues for the stronger claim that perception just is covert readiness for action.

In Section 4, we move to the disposition theory, most influentially articulated by Gareth Evans (1982, 1985), but more recently defended by Rick Grush (2000, 2007). Evans’ theory is, at its core, very similar to the bold efferent readiness theory. There are some notable differences, though. Evans’ account is more finely articulated in some philosophical respects. It also does not posit a reduction of perception to behavioral dispositions, but rather posits that certain complicated relations between perceptual input and behavioral dispositions provide spatial content. Grush proposes a very specific theory that is like Evans’ in that it does not posit a reduction, but unlike Evans’ view, does not put behavioral dispositions and sensory input on an undifferentiated footing.
According to “actionism” (Noë 2010), perception constitutively depends on implicit knowledge of the way sensory stimulations vary as a consequence of the perceiver’s self-movement. My aim in this contribution is to develop an alternative conception of the role of action in perception, present in the work of Gareth Evans, using resources provided by Ruth Millikan’s biosemantic theory of mental representation.
The purpose of this paper is to defend what I call the action-oriented coding theory (ACT) of spatially contentful visual experience. Integral to ACT is the view that conscious visual experience and visually guided action make use of a common subject-relative or 'egocentric' frame of reference. Proponents of the influential two visual systems hypothesis (TVSH), however, have maintained on empirical grounds that this view is false (Milner & Goodale, 1995/2006; Clark, 1999, 2001; Campbell, 2002; Jacob & Jeannerod, 2003; Goodale & Milner, 2004). One main source of evidence for TVSH comes from behavioral studies of the comparative effects of size-contrast illusions on visual awareness and visuomotor action. This paper shows not only that the evidence from illusion studies is inconclusive, but also that there is a better, ACT-friendly interpretation of the evidence that avoids serious theoretical difficulties faced by TVSH.
According to proponents of the sensorimotor contingency theory of perception (Hurley & Noë 2003, Noë 2004, O’Regan 2011), active control of camera movement is necessary for the emergence of distal attribution in tactile-visual sensory substitution (TVSS) because it enables the subject to acquire knowledge of the way stimulation in the substituting modality varies as a function of self-initiated, bodily action. This chapter, by contrast, approaches distal attribution as a solution to a causal inference problem faced by the subject’s perceptual systems. Given all of the endogenous and exogenous evidence available to those systems, what is the most probable source of stimulation in the substituting modality? From this perspective, active control over the camera’s movements matters for rather different reasons. Most importantly, it generates proprioceptive and efference-copy-based information about the camera’s body-relative position that is necessary to make use of the spatial cues present in the stimulation that the subject receives for purposes of egocentric object localization.
Mark Changizi et al. (2008) claim that it is possible systematically to organize more than 50 kinds of illusions in a 7 × 4 matrix of 28 classes. This systematization, they further maintain, can be explained by the operation of a single visual processing latency correction mechanism that they call “perceiving the present” (PTP). This brief report raises some concerns about the way a number of illusions are classified by the proposed systematization. It also poses two general problems—one empirical and one conceptual—for the PTP approach.
Donald Davidson has long maintained that in order to be credited with the concept of objectivity – and, so, with language and thought – it is necessary to communicate with at least one other speaker. I here examine Davidson’s central argument for this thesis and argue that it is unsuccessful. Subsequently, I turn to Robert Brandom’s defense of the thesis in Making It Explicit. I argue that, contrary to Brandom, in order to possess the concept of objectivity it is not necessary to engage in the practice of interpersonal reasoning because possession of the concept is independently integral to the practice of intrapersonal reasoning.
In this chapter, I critically examine two of the main approaches to colour categorization in cognitive science: the perceptual salience theory and linguistic relativism. I then turn to reviewing several decades of psychological research on colour categorical perception (CP). A careful assessment of relevant findings suggests that most of the experimental effects that have been understood in terms of CP actually fall on the cognition side of the perception-cognition divide: they are effects of colour language, for example, on memory or decision-making.
I here present some doubts about whether Mandik’s (2010) proposed intermediacy and recurrence constraints are necessary and sufficient for agentive experience. I also argue that in order to vindicate the conclusion that agentive experience is an exclusively perceptual phenomenon (Prinz, 2007), it is not enough to show that the predictions produced by forward models of planned motor actions are conveyed by mock sensory signals. Rather, it must also be shown that the outputs of “comparator” mechanisms that compare these predictions against actual sensory feedback are also coded in a perceptual representational format.
Semantic externalism in contemporary philosophy of language typically – and often tacitly – combines two supervenience claims about idiolectical meaning (i.e., meaning in the language system of an individual speaker). The first claim is that the meaning of a word in a speaker’s idiolect may vary without any variation in her intrinsic, physical properties. The second is that the meaning of a word in a speaker’s idiolect may vary without any variation in her understanding of its use. I here show that a conception of idiolectical meaning is possible that accepts the “anti-internalism” of the first claim while rejecting (what I shall refer to as) the “anti-individualism” of the second. According to this conception, externally constituted idiolectical meaning supervenes on idiolectical understanding.
Focusing on Crispin Wright, I try in Chapter One to show that semantic antirealism cannot stably be combined with either communitarianism or constructivism about meaning. I also argue that the rational tenability of communitarianism is threatened by a powerful argument of Wright's own devising in "What Could Anti-Realism About Ordinary Psychology Possibly Be?" In Chapters Two and Three, I defend the individualist idea that the meaning of an expression in an agent's idiolect is correlative with her understanding of its use. I try to show that individualism, so conceived, is fully compatible with natural-kind externalism and that none of the familiar and widely accepted arguments for social externalism are cogent. I also argue that there is no incompatibility between externalism and self-knowledge in matters of meaning. In Chapters Four and Five, I criticize a transcendental argument developed by Donald Davidson and recently defended by Robert Brandom that a creature cannot properly be credited with language or thought unless it is in communication with at least one other creature. Neither philosopher, I argue, provides a cogent case for the argument's crucial premise that the concept of objectivity is unavailable to a creature outside of a social, linguistic setting. The thesis that meaning is normative has widespread currency in the philosophy of language and does much to motivate the social, deontological approach to meaning taken in Making it Explicit. However, I argue in Chapter Six that central arguments for the thesis rest on confusions about the relation between the concepts of meaning, truth, use and intention. In Chapter Seven, I conclude by connecting Davidson and Brandom's social account of the concept of objectivity with a certain "non-individualist" theory of perception. Following John McDowell, I argue that the theory renders the empirical contentfulness of language and thought unintelligible.