How does sign language compare with gesture, on the one hand, and spoken language on the other? Sign was once viewed as nothing more than a system of pictorial gestures without linguistic structure. More recently, researchers have argued that sign is no different from spoken language, with all of the same linguistic structures. The pendulum is currently swinging back toward the view that sign is gestural, or at least has gestural components. The goal of this review is to elucidate the relationships among sign language, gesture, and spoken language. We do so by taking a close look not only at how sign has been studied over the past 50 years, but also at how the spontaneous gestures that accompany speech have been studied. We conclude that signers gesture just as speakers do. Both produce imagistic gestures along with more categorical signs or words. Because at present it is difficult to tell where sign stops and gesture begins, we suggest that sign should not be compared with speech alone but should be compared with speech-plus-gesture. Although it might be easier to blur the distinction between sign and gesture, we argue that distinguishing between sign and gesture is essential to predict certain types of learning and allows us to understand the conditions under which gesture takes on properties of sign, and speech takes on properties of gesture. We end by calling for new technology that may help us better calibrate the borders between sign and gesture.
Developmental psychologists have long recognized the extraordinary influence of action on learning (Held & Hein, 1963; Piaget, 1952). Action experiences begin to shape our perception of the world during infancy (e.g., as infants gain an understanding of others’ goal-directed actions; Woodward, 2009), and these effects persist into adulthood (e.g., as adults learn about complex concepts in the physical sciences; Kontra, Lyons, Fischer, & Beilock, 2012). Theories of embodied cognition provide a structure within which we can investigate the mechanisms underlying action’s impact on thinking and reasoning. We argue that theories of embodiment can shed light on the role of action experience in early learning contexts, and further that these theories hold promise for using action to scaffold learning in more formal educational settings later in development.
Analogy researchers do not often examine gesture, and gesture researchers do not often borrow ideas from the study of analogy. One borrowable idea from the world of analogy is the importance of distinguishing between attributes and relations. Gentner observed that some metaphors highlight attributes and others highlight relations, and called the latter analogies. Mirroring this logic, we observe that some metaphoric gestures represent attributes and others represent relations, and propose to call the latter analogical gestures. We provide examples of such analogical gestures and show how they relate to the categories of iconic and metaphoric gestures described previously. Analogical gestures represent different types of relations and different degrees of relational complexity, and sometimes cohere into larger analogical models. Treating analogical gestures as a distinct phenomenon prompts new questions and predictions, and illustrates one way that the study of gesture and the study of analogy can be mutually informative.
It is difficult to create spoken forms that can be understood on the spot. But the manual modality, in large part because of its iconic potential, allows us to construct forms that are immediately understood, thus requiring essentially no time to develop. This paper contrasts manual forms for actions produced over three time spans—by silent gesturers who are asked to invent gestures on the spot; by homesigners who have created gesture systems over their life spans; and by signers who have learned a conventional sign language from other signers—and finds that properties of the predicate differ across these time spans. Silent gesturers use location to establish co-reference in the way established sign languages do, but they show little evidence of the segmentation sign languages display in motion forms for manner and path, and little evidence of the finger complexity sign languages display in handshapes in predicates representing events. Homesigners, in contrast, not only use location to establish co-reference but also display segmentation in their motion forms for manner and path and finger complexity in their object handshapes, although they have not yet decreased finger complexity to the levels found in sign languages in their handling handshapes. The manual modality thus allows us to watch language as it grows, offering insight into factors that may have shaped and may continue to shape human language.
Previous work has found that guiding problem-solvers' movements can have an immediate effect on their ability to solve a problem. Here we explore these processes in a learning paradigm. We ask whether guiding a learner's movements can have a delayed effect on learning, setting the stage for change that comes about only after instruction. Children were taught movements that were either relevant or irrelevant to solving mathematical equivalence problems and were told to produce the movements on a series of problems before they received instruction in mathematical equivalence. Children in the relevant movement condition improved after instruction significantly more than children in the irrelevant movement condition, even though the children showed no improvement in their understanding of mathematical equivalence on a ratings task or on a paper-and-pencil test taken immediately after the movements but before instruction. Movements of the body can thus be used to sow the seeds of conceptual change. But those seeds do not necessarily come to fruition until after the learner has received explicit instruction in the concept, suggesting a “sleeper effect” of gesture on learning.
Sign languages modulate the production of signs in space and use this spatial modulation to refer back to entities—to maintain coreference. We ask here whether spatial modulation is so fundamental to language in the manual modality that it will be invented by individuals asked to create gestures on the spot. English speakers were asked to describe vignettes under two conditions: using gesture without speech, and using speech with spontaneous gestures. When using gesture alone, adults placed gestures for particular entities in non-neutral locations and then used those locations to refer back to the entities. When using gesture plus speech, adults also produced gestures in non-neutral locations but used the locations coreferentially far less often. When gesture is forced to take on the full burden of communication, it exploits space for coreference. Coreference thus appears to be a resilient property of language, likely to emerge in communication systems no matter how simple.
Sighted speakers of different languages vary systematically in how they package and order components of a motion event in speech. These differences influence how semantic elements are organized in gesture, but only when those gestures are produced with speech, not without speech. We ask whether the cross-linguistic similarity in silent gesture is driven by the visuospatial structure of the event. We compared 40 congenitally blind adult native speakers of English or Turkish to 80 sighted adult speakers as they described three-dimensional motion scenes. We found an effect of language on co-speech gesture, not on silent gesture—blind speakers of both languages organized their silent gestures as sighted speakers do. Humans may have a natural semantic organization that they impose on events when conveying them in gesture without language—an organization that relies on neither visuospatial cues nor language structure.
The commentaries have led us to entertain expansions of our paradigm to include new theoretical questions, new criteria for what counts as a gesture, and new data and populations to study. The expansions further reinforce the approach we took in the target article: namely, that linguistic and gestural components are two distinct yet integral sides of communication, which need to be studied together.
Gesture does not have a fixed position in the Dienes & Perner framework. Its status depends on the way knowledge is expressed. Knowledge reflected in gesture can be fully implicit (neither factuality nor predication is explicit) if the goal is simply to move a pointing hand to a target. Knowledge reflected in gesture can be explicit (both factuality and predication are explicit) if the goal is to indicate an object. However, gesture is not restricted to these two extreme positions. When gestures are unconscious accompaniments to speech and represent information that is distinct from speech, the knowledge they convey is factuality-implicit but predication-explicit.