Traditional approaches to human information processing tend to deal with perception and action planning in isolation, so that an adequate account of the perception-action interface is still missing. On the perceptual side, the dominant cognitive view largely underestimates, and thus fails to account for, the impact of action-related processes on both the processing of perceptual information and perceptual learning. On the action side, most approaches conceive of action planning as a mere continuation of stimulus processing, thus failing to account for the goal-directedness of even the simplest reaction in an experimental task. We propose a new framework for a more adequate theoretical treatment of perception and action planning, in which perceptual contents and action plans are coded in a common representational medium by feature codes with distal reference. Perceived events (perceptions) and to-be-produced events (actions) are equally represented by integrated, task-tuned networks of feature codes – cognitive structures we call event codes. We give an overview of evidence from a wide variety of empirical domains, such as spatial stimulus-response compatibility, sensorimotor synchronization, and ideomotor action, showing that our main assumptions are well supported by the data. Key Words: action planning; binding; common coding; event coding; feature integration; perception; perception-action interface.
This chapter challenges the assumption that attention functions as a means of preventing consciousness from getting overloaded, as well as the assumption of any relationship between the management of scarce resources and the original biological function of attention. It emphasizes that attention is directly derived from mechanisms governing the control of basic movements. The author sets the theoretical stage by discussing the implications of the brain's preference for coding stimulus events and action plans in a feature-based manner and for processing information through different mechanisms. The chapter also discusses numerous empirical findings supporting the view that action planning and action control can determine perception and attention.
Human cognition and action are intentional and goal-directed, and explaining how they are controlled is one of the most important tasks of the cognitive sciences. After half a century of benign neglect, this task is enjoying increased attention. Unfortunately, however, current theorizing about control in general, and about the role of consciousness in control in particular, suffers from major conceptual flaws that lead to confusion regarding the following distinctions: automatic and unintentional processes, exogenous control and disturbance of endogenous control, conscious control and conscious access to control, and personal and systems levels of analysis and explanation. Only if these flaws are overcome will a comprehensive understanding of the relationship between consciousness and control emerge.
Using an explicit task-cuing paradigm, we tested whether masked cues can trigger task-set activation, which would suggest that unconsciously presented stimuli can impact cognitive control processes. Based on a critical assessment of previous findings on the priming of task-set activation, we present two experiments with a new method to approach this question. Instead of using a prime, we varied the visibility of the cue. These cues directly signaled either particular tasks in Experiment 1 or certain task transitions in Experiment 2. While both masked task and transition cues affected task choice, only task cues affected the speed of task performance. This observation suggests that task-specific stimulus–response rules can be activated only by masked cues that are uniquely associated with a particular task. Taken together, these results demonstrate that unconsciously presented stimuli have the power to activate corresponding task sets.
This article reviews evidence suggesting that the cause of approach and avoidance behavior lies not so much in what is present (i.e., the stimulus) but, rather, in the behavior's anticipated future consequences (i.e., the goal): Approach is motivated by the goal to produce a desired consequence or end-state, while avoidance is motivated by the goal to prevent an undesired consequence or end-state. However, even though approach and avoidance are controlled by goals rather than stimuli, affective stimuli can influence action control by priming associated goals. An integrative ideomotor model of approach and avoidance is presented and discussed.
Religions are commonly taken to provide general orientation in leading one's life. We develop here the idea that religions may also have a much more concrete guidance function by providing systematic decision biases in the face of cognitive-control dilemmas. In particular, we assume that the selective reward that religious belief systems provide for rule-conforming behavior induces systematic biases in cognitive-control parameters that are functional in producing the wanted behavior. These biases serve as default values under uncertainty and affect performance in any task that shares cognitive-control operations with the religiously motivated rule-conforming behavior for which the biases were originally developed. Such biases can therefore be unraveled and objectified by means of rather simple tasks that are relatively well understood with regard to the cognitive mechanisms they draw on.
Processing the various features from different feature maps and modalities in coherent ways requires a dedicated integration mechanism. Many authors have related feature binding to conscious awareness, but little is known about how tight this relationship really is. We presented subjects with asynchronous audiovisual stimuli and tested whether the two features were integrated. The results show that binding took place at feature-onset asynchronies of up to 350 ms, suggesting that integration covers a relatively wide temporal window. We also asked subjects to explicitly judge whether the two features belonged to the same event or to different events. Unsurprisingly, synchrony judgments decreased with increasing asynchrony. Most importantly, feature binding was entirely unaffected by conscious experience: features were bound whether they were experienced as occurring together or as belonging to separate events, suggesting that the conscious experience of unity is neither a prerequisite for, nor a direct consequence of, binding.
First, we discuss issues raised with respect to the Theory of Event Coding (TEC)'s scope, that is, its limitations and possible extensions. Then, we address the issue of specificity, that is, the widespread concern that TEC is too unspecified and, therefore, too vague in a number of important respects. Finally, we elaborate on our views about TEC's relations to other important frameworks and approaches in the field, such as stage models, ecological approaches, and the two-visual-pathways model. Footnote 1: We acknowledge the precedence of both Freud's Instincts and Their Vicissitudes (1915) and Neisser's Stimulus Information and Its Vicissitudes (a term Neisser borrowed from Freud for his monograph Cognitive Psychology, 1967).
Sequential action makes up the bulk of human daily activity, and yet much remains unknown about how people learn such actions. In one motor learning paradigm, the serial reaction time (SRT) task, people are taught a consistent sequence of button presses by cueing them with the next target response. However, the SRT task only records keypress response times to a cued target, and thus it cannot reveal the full time-course of motion, including predictive movements. This paper describes a mouse movement trajectory SRT task in which the cursor must be moved to a cued location. We replicated keypress SRT results, but also found that predictive movement, made before the next cue appears, increased during the experiment. Moreover, trajectory analyses revealed that people developed a centering strategy under uncertainty. In a second experiment, we made prediction explicit, no longer cueing targets. Thus, participants had to explore the response alternatives and learn via reinforcement, receiving rewards and penalties for correct and incorrect actions, respectively. Participants were not told whether the sequence of stimuli was deterministic, whether it would repeat, or how long it was. Given the difficulty of the task, it is unsurprising that some learners performed poorly. However, many learners performed remarkably well, and some acquired the full 10-item sequence within 10 repetitions. Comparing the high and low performers' detailed results in this reinforcement learning task with the first experiment's cued trajectory SRT task, we found similarities between the two tasks, suggesting that the effects in Experiment 1 are due to predictive rather than reactive processes. Finally, we found that two standard model-free reinforcement learning models fit the high-performing participants, while the four low-performing participants were better fit by a simple negative recency bias model.
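The modelling comparison mentioned at the end of this abstract can be made concrete with a toy simulation. The following is a minimal sketch, assuming a generic Q-learning-style model-free learner working through a hypothetical repeating 10-item sequence with rewards for correct and penalties for incorrect responses; the sequence, state coding (the previous target location), learning rate, softmax temperature, and reward values are illustrative assumptions, not the specific models fitted in the paper.

```python
# Illustrative sketch only: a generic Q-learning-style model-free learner on a
# repeating 10-item sequence. The sequence, state coding, learning rate, softmax
# temperature, and reward values are assumptions for demonstration; this is not
# the specific model fitted in the paper.
import numpy as np

rng = np.random.default_rng(0)
sequence = [3, 7, 1, 9, 0, 5, 2, 8, 4, 6]   # hypothetical 10-item target sequence
n_locations = 10
alpha, beta = 0.4, 3.0                      # learning rate, softmax inverse temperature

# Q[s, a]: estimated value of responding at location a when the previous target was s
Q = np.zeros((n_locations, n_locations))

def softmax_choice(values):
    p = np.exp(beta * (values - values.max()))
    p /= p.sum()
    return int(rng.choice(n_locations, p=p))

prev = sequence[-1]
correct_per_pass = []
for _ in range(10):                         # ten passes through the sequence
    n_correct = 0
    for target in sequence:
        choice = softmax_choice(Q[prev])
        reward = 1.0 if choice == target else -1.0          # reward vs. penalty
        Q[prev, choice] += alpha * (reward - Q[prev, choice])
        n_correct += int(choice == target)
        prev = target                       # assume the true location becomes the next state
    correct_per_pass.append(n_correct)

print(correct_per_pass)                     # accuracy generally increases across passes
```

A negative recency bias account, by contrast, would simply down-weight recently visited locations rather than learn the transition structure of the sequence, which is the contrast the abstract draws between high- and low-performing participants.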
Conceptual knowledge is acquired through recurrent experiences, by extracting statistical regularities at different levels of granularity. At a fine level, patterns of feature co-occurrence are categorized into objects. At a coarser level, patterns of concept co-occurrence are categorized into contexts. We present and test CONCAT, a connectionist model that simultaneously learns to categorize objects and contexts. The model contains two hierarchically organized CALM modules (Murre, Phaf, & Wolters, 1992). The first module, the Object Module, forms object representations based on co-occurrences between features. These representations are used as input for the second module, the Context Module, which categorizes contexts based on object co-occurrences. Feedback connections from the Context Module to the Object Module send activation from the active context to those objects that frequently occur within this context. We demonstrate that context feedback contributes to the successful categorization of objects, especially when bottom-up feature information is degraded or ambiguous.
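To illustrate the kind of two-level co-occurrence learning with top-down feedback this abstract describes, here is a minimal toy sketch; it is not the CONCAT/CALM implementation. The Hebbian count updates, the feedback weighting, and the miniature feature/object/context world are all assumptions made purely for demonstration.

```python
# Toy illustration, not the CONCAT/CALM implementation: two stacked co-occurrence
# learners with top-down context feedback. All details here are assumptions made
# purely for demonstration.
import numpy as np

n_features, n_objects, n_contexts = 8, 4, 2
W_fo = np.zeros((n_objects, n_features))    # "Object Module": feature-to-object weights
W_oc = np.zeros((n_contexts, n_objects))    # "Context Module": object-to-context weights

# Hypothetical world: each object is a two-feature pattern; each context groups two objects
object_features = np.array([
    [1, 1, 0, 0, 0, 0, 0, 0],
    [0, 0, 1, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 1, 0, 0],
    [0, 0, 0, 0, 0, 0, 1, 1],
], dtype=float)
context_objects = {0: [0, 1], 1: [2, 3]}

rng = np.random.default_rng(1)
for _ in range(500):                        # recurrent experiences
    ctx = int(rng.integers(n_contexts))
    obj = int(rng.choice(context_objects[ctx]))
    W_fo[obj] += object_features[obj]       # accumulate feature co-occurrence per object
    W_oc[ctx, obj] += 1.0                   # accumulate object co-occurrence per context

def recognize(features, context=None, feedback=0.5):
    """Bottom-up object evidence, optionally biased by top-down context feedback."""
    evidence = W_fo @ features
    if context is not None:
        evidence = evidence + feedback * W_oc[context]
    return int(np.argmax(evidence))

# Degraded input: one feature from object 0 and one from object 2, hence ambiguous
degraded = np.array([1, 0, 0, 0, 1, 0, 0, 0], dtype=float)
print(recognize(degraded))                  # with ambiguous bottom-up input, the winner is arbitrary
print(recognize(degraded, context=0))       # context feedback favors the context-typical object
```

The point of the feedback term is visible in the last two lines: when the bottom-up feature pattern is ambiguous between two objects, adding context support tips recognition toward the object that is typical of the active context, which parallels the role the abstract assigns to context feedback under degraded input.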
Many psychologists and neuroscientists still see executive functions as independent, domain-general, supervisory functions that are often dissociated from more “low-level” associative learning. Here, we suggest that executive functions very much build on associative learning, and argue that executive functions might be better understood as culture-sensitive cognitive gadgets, rather than as ready-made cognitive instincts.
Participants were required to switch among randomly ordered tasks, and instructional cues were used to indicate which task to execute. In Experiments 1 and 2, the participants indicated their readiness for the task switch before they received the target stimulus; thus, each trial was associated with two primary dependent measures: (1) readiness time and (2) target reaction time. Slow readiness responses and instructions emphasizing high readiness were paradoxically accompanied by slow target reaction times. Moreover, the effect of task switching on readiness time was an order of magnitude smaller than the (objectively estimated) duration required for task preparation (Experiment 3). The results strongly suggest that participants have little conscious awareness of their preparedness, and they challenge commonly accepted assumptions concerning the role of consciousness in cognitive control.
Explicit and implicit learning have been attributed to different learning processes that create different types of knowledge structures. Consistent with that claim, our study provides evidence that people integrate stimulus events differently when they are consciously aware versus unaware of the relationship between the events. In a first, acquisition phase, participants sorted words into two categories, which were fully predicted by task-irrelevant primes, namely the labels of two other, semantically unrelated categories. In a second, test phase, participants performed a lexical decision task in which all word stimuli stemmed from the previous prime categories and the primes were the labels of the previous target categories. Reliable priming effects in the second phase demonstrated that bidirectional associations between the respective categories had been formed in the acquisition phase, but these effects were found only in participants who were unaware of the relationship between the categories. We suggest that unconscious, implicit learning of event relationships results in a rather unsophisticated integration of the underlying event representations, whereas explicit learning takes the meaning of the order of the events into account and thus creates unidirectional associations.
The latest volume in the critically acclaimed and highly cited Attention and Performance series presents state-of-the-art research from leading scientists in cognitive psychology and cognitive neuroscience, describing the approaches being taken to understand the mechanisms that allow us to negotiate and respond to the world around us.