A number of ways of taxonomizing human learning have been proposed. We examine the evidence for one such proposal, namely, that there exist independent explicit and implicit learning systems. This combines two further distinctions, (1) between learning that takes place with versus without concurrent awareness, and (2) between learning that involves the encoding of instances (or fragments) versus the induction of abstract rules or hypotheses. Implicit learning is assumed to involve unconscious rule learning. We examine the evidence for implicit learning derived from subliminal learning, conditioning, artificial grammar learning, instrumental learning, and reaction times in sequence learning. We conclude that unconscious learning has not been satisfactorily established in any of these areas. The assumption that learning in some of these tasks (e.g., artificial grammar learning) is predominantly based on rule abstraction is questionable. When subjects cannot report the rules that govern stimulus selection, this is often because their knowledge consists of instances or fragments of the training stimuli rather than rules. In contrast to the distinction between conscious and unconscious learning, the distinction between instance and rule learning is a sound and meaningful way of taxonomizing human learning. We discuss various computational models of these two forms of learning.
In multiple-cue learning (also known as probabilistic category learning) people acquire information about cue-outcome relations and combine these into predictions or judgments. Previous studies claim that people can achieve high levels of performance without explicit knowledge of the task structure or insight into their own judgment policies. It has also been argued that people use a variety of suboptimal strategies to solve such tasks. In three experiments we re-examined these conclusions by introducing novel measures of task knowledge and self-insight, and using ‘rolling regression’ methods to analyze individual learning. Participants successfully learned a four-cue probabilistic environment and showed accurate knowledge of both the task structure and their own judgment processes. Learning analyses suggested that the apparent use of suboptimal strategies emerges from the incremental tracking of statistical contingencies in the environment.
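The ‘rolling regression’ analysis mentioned in this abstract can be sketched as follows: cue weights are re-estimated by least squares within a sliding window of trials, so changes in a participant’s judgment policy show up as drift in the recovered weight profile. The data, window size, and weight trajectory below are hypothetical, for illustration only.

```python
import numpy as np

def rolling_regression(cues, judgments, window=50):
    """Estimate cue weights within a sliding window of trials.

    cues: (n_trials, n_cues) array of cue values
    judgments: (n_trials,) array of a participant's judgments
    Returns an (n_windows, n_cues) array of least-squares weights,
    one row per window position.
    """
    n_trials, n_cues = cues.shape
    weights = []
    for start in range(n_trials - window + 1):
        X = cues[start:start + window]
        y = judgments[start:start + window]
        # Ordinary least squares restricted to this window of trials
        w, *_ = np.linalg.lstsq(X, y, rcond=None)
        weights.append(w)
    return np.array(weights)

# Hypothetical learner whose reliance on cue 0 grows gradually over trials
rng = np.random.default_rng(0)
cues = rng.normal(size=(200, 4))
trend = np.linspace(0.0, 1.0, 200)  # weight on cue 0 rises from 0 to 1
judgments = trend * cues[:, 0] + 0.1 * rng.normal(size=200)

w = rolling_regression(cues, judgments, window=50)
print(w.shape)             # (151, 4): one weight vector per window position
print(w[0, 0] < w[-1, 0])  # early weight on cue 0 is smaller than late weight
```

Plotting each column of the returned array against window position gives the kind of fine-grained learning curve the analysis relies on.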
This article reviews recent work aimed at developing a new framework, based on signal detection theory, for understanding the relationship between explicit (e.g., recognition) and implicit (e.g., priming) memory. Within this framework, different assumptions about sources of memorial evidence can be framed. Application to experimental results provides robust evidence for a single-system model in preference to multiple-systems models. This evidence comes from several sources including studies of the effects of amnesia and ageing on explicit and implicit memory. The framework allows a range of concepts in current memory research, such as familiarity, recollection, fluency, and source memory, to be linked to implicit memory. More generally, this work emphasizes the value of modern computational modelling techniques in the study of learning and memory.
The question of whether studies of human learning provide evidence for distinct conscious and unconscious influences remains as controversial today as ever. Much of this controversy arises from the use of the logic of dissociation. The controversy has prompted the use of an alternative approach that places conscious and unconscious influences on memory retrieval in opposition. Here we ask whether evidence acquired via the logic of opposition requires a dual-process account or whether it can be accommodated within a single similarity-based account. We report simulations using a simple neural network model of two artificial grammar learning experiments reported by Higham, Vokey, and Pritchard that dissociated conscious and unconscious influences on classification. The simulations demonstrate that opposition logic is insufficient to distinguish between single- and multiple-system models.
Multiple cue probability learning studies have typically focused on stationary environments. We present three experiments investigating learning in changing environments. A fine-grained analysis of the learning dynamics shows that participants were responsive to both abrupt and gradual changes in cue-outcome relations. We found no evidence that participants adapted to these types of change in qualitatively different ways. Also, in contrast to earlier claims that these tasks are learned implicitly, participants showed good insight into what they learned. By fitting formal learning models, we investigated whether participants learned global functional relationships or made localized predictions from similar experienced exemplars. Both a local (the Associative Learning Model) and a global learning model (the novel Bayesian Linear Filter) fitted the data of the first two experiments. However, the results of Experiment 3, which was specifically designed to discriminate between local and global learning models, provided more support for global learning models. Finally, we present a novel model to account for the cue competition effects found in previous research and displayed by some of our participants.
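The local learning models discussed here belong to the broad family of error-driven associative models, whose core update is the delta (Rescorla–Wagner / least-mean-squares) rule. The sketch below illustrates that general model class on invented data with an abrupt change in cue validity; it is not the specific Associative Learning Model or Bayesian Linear Filter fitted in the paper.

```python
import numpy as np

def delta_rule(cues, outcomes, lr=0.05):
    """Trial-by-trial error-driven weight updating (LMS / delta-rule form).

    On each trial the prediction is a weighted sum of cue values, and the
    weights move in proportion to the prediction error, so the model can
    track changing cue-outcome relations.
    """
    n_trials, n_cues = cues.shape
    w = np.zeros(n_cues)
    history = np.empty((n_trials, n_cues))
    for t in range(n_trials):
        prediction = w @ cues[t]
        error = outcomes[t] - prediction
        w = w + lr * error * cues[t]
        history[t] = w
    return history

# Hypothetical changing environment: cue 0 predicts the outcome for the
# first 200 trials, then cue 1 takes over for the remaining 200.
rng = np.random.default_rng(3)
cues = rng.normal(size=(400, 2))
outcomes = np.where(np.arange(400) < 200, cues[:, 0], cues[:, 1])

hist = delta_rule(cues, outcomes)
print(hist[199, 0] > hist[199, 1])  # first phase: weight on cue 0 dominates
print(hist[-1, 1] > hist[-1, 0])    # after the change: cue 1 takes over
```

Because each update depends only on the current trial, such a model adapts continuously to both abrupt and gradual environmental change, which is one reason these accounts are natural candidates for non-stationary tasks.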
We present a new modeling framework for recognition memory and repetition priming based on signal detection theory. We use this framework to specify and test the predictions of 4 models: (a) a single-system (SS) model, in which one continuous memory signal drives recognition and priming; (b) a multiple-systems-1 (MS1) model, in which completely independent memory signals (such as explicit and implicit memory) drive recognition and priming; (c) a multiple-systems-2 (MS2) model, in which there are also 2 memory signals, but some degree of dependence is allowed between these 2 signals (and this model subsumes the SS and MS1 models as special cases); and (d) a dual-process signal detection (DPSD1) model, 1 possible extension of a dual-process theory of recognition (Yonelinas, 1994) to priming, in which a signal detection model is augmented by an independent recollection process. The predictions of the models are tested in a continuous-identification-with-recognition paradigm in both normal adults (Experiments 1–3) and amnesic individuals (using data from Conroy, Hopkins, & Squire, 2005). The SS model predicted numerous results in advance. These were not predicted by the MS1 model, though could be accommodated by the more flexible MS2 model. Importantly, measures of overall model fit favored the SS model over the others. These results illustrate a new, formal approach to testing theories of explicit and implicit memory.
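The single-system idea at the heart of this framework can be made concrete with a toy signal detection simulation: one latent memory-strength variable drives both the recognition decision (via a criterion) and the priming effect (via speeded identification). All means, criteria, and RT parameters below are invented for illustration; they are not the fitted values from these experiments.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# One latent memory-strength signal per item; studied ("old") items have
# higher mean strength than unstudied ("new") items. Values are illustrative.
strength_old = rng.normal(loc=1.0, scale=1.0, size=n)
strength_new = rng.normal(loc=0.0, scale=1.0, size=n)

# Recognition: respond "old" when strength exceeds a decision criterion.
criterion = 0.5
hit_rate = np.mean(strength_old > criterion)
fa_rate = np.mean(strength_new > criterion)

# Priming: the same signal speeds identification, so identification RT
# decreases with strength (plus independent trial noise).
base_rt = 600.0
rt_old = base_rt - 50.0 * strength_old + rng.normal(scale=30.0, size=n)
rt_new = base_rt - 50.0 * strength_new + rng.normal(scale=30.0, size=n)

print(hit_rate > fa_rate)             # recognition advantage for old items
print(rt_old.mean() < rt_new.mean())  # priming: faster RTs for old items
```

Because both effects arise from one signal, the model predicts systematic relationships between recognition judgments and identification speed, which is exactly the kind of prediction the continuous-identification-with-recognition paradigm tests.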
Do dissociations imply independent systems? In the memory field, the view that there are independent implicit and explicit memory systems has been predominantly supported by dissociation evidence. Here, we argue that many of these dissociations do not necessarily imply distinct memory systems. We review recent work with a single-system computational model that extends signal-detection theory (SDT) to implicit memory. SDT has had a major influence on research in a variety of domains. The current work shows that it can be broadened even further in its range of application. Indeed, the single-system model that we present does surprisingly well in accounting for some key dissociations that have been taken as evidence for independent implicit and explicit memory systems.
Previous research suggests that early performance of amnesic individuals in a probabilistic category learning task is relatively unimpaired. When combined with impaired declarative knowledge, this is taken as evidence for the existence of separate implicit and explicit memory systems. The present study contains a more fine-grained analysis of learning than earlier studies. Using a dynamic lens model approach with plausible learning models, we found that the learning process is indeed indistinguishable between an amnesic and control group. However, in contrast to earlier findings, we found that explicit knowledge of the task structure is also good in both the amnesic and the control group. This is inconsistent with a crucial prediction from the multiple-systems account. The results can be explained from a single system account, and previously found differences in later categorization performance can be accounted for by a difference in learning rate.
A central claim of Jones & Love's (J&L's) article is that Bayesian Fundamentalism is empirically unconstrained. Unless constraints are placed on prior beliefs, likelihood, and utility functions, all behaviour is consistent with Bayesian rationality. Although such claims are commonplace, their basis is rarely justified. We fill this gap by sketching a proof, and we discuss possible solutions that would make Bayesian approaches empirically interesting.
Consider the task of predicting which soccer team will win the next World Cup. The bookmakers may judge Brazil to be the team most likely to win, but also judge it most likely that a European rather than a Latin American team will win. This is an example of a non-aligned hierarchy structure: the most probable event at the subordinate level (Brazil wins) appears to be inconsistent with the most probable event at the superordinate level (a European team wins). In this paper we exploit such structures to investigate how people make predictions based on uncertain hierarchical knowledge. We distinguish between aligned and non-aligned environments, and conjecture that people assume alignment. Participants were exposed to a non-aligned training set in which the most probable superordinate category predicted one outcome, whereas the most probable subordinate category predicted a different outcome. In the test phase participants allowed their initial probability judgments about category membership to shift their final ratings of the probability of the outcome, even though all judgments were made on the basis of the same statistical data. In effect people were primed to focus on the most likely path in an inference tree, and neglect alternative paths. These results highlight the importance of the level at which statistical data are represented, and suggest that when faced with hierarchical inference problems people adopt a simplifying heuristic that assumes alignment.
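The contrast between normative marginalization over the whole inference tree and the conjectured most-likely-path heuristic can be illustrated with made-up numbers (the categories and probabilities below are hypothetical, chosen only to produce a non-aligned structure):

```python
# Hypothetical non-aligned hierarchy: category A is individually most
# probable, but the outcome it predicts is not the most probable outcome
# once all inference paths are summed over.
p_category = {"A": 0.4, "B": 0.35, "C": 0.25}     # P(category)
p_outcome_given = {"A": 0.2, "B": 0.9, "C": 0.9}  # P(outcome | category)

# Normative prediction: marginalize over every path in the inference tree.
p_full = sum(p_category[c] * p_outcome_given[c] for c in p_category)

# Simplifying heuristic: commit to the single most probable category and
# ignore the alternative paths entirely.
best = max(p_category, key=p_category.get)
p_heuristic = p_outcome_given[best]

print(round(p_full, 3))  # 0.62 — outcome is more likely than not
print(p_heuristic)       # 0.2  — the heuristic points the other way
```

This mirrors the World Cup example: committing to "Brazil" (the most likely single path) yields a prediction that conflicts with the answer obtained by summing probability across all teams.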
Two main uses of categories are classification and feature inference, and category labels have been widely shown to play a dominant role in feature inference. However, the nature of this influence remains unclear, and we evaluate two contrasting hypotheses formalized as mathematical models: the label special-mechanism hypothesis and the label super-salience hypothesis. The special-mechanism hypothesis is that category labels, unlike other features, trigger inference decision making in reference to the category prototypes. This results in a tendency for prototype-compatible inferences because the labels trigger a special mechanism rather than because of any influences they have on similarity evaluation. The super-salience hypothesis assumes that the large label influence is due to their high salience and corresponding impact on similarity without any need for a special mechanism. Application of the two models to a feature inference task based on a family resemblance category structure yields strong support for the label super-salience hypothesis and in particular does not support the need for a special mechanism based on prototypes.
Although testing has repeatedly been shown to be one of the most effective strategies for consolidating retention of studied information (the backward testing effect) and facilitating mastery of new information (the forward testing effect), few studies have explored individual differences in the beneficial effects of testing. The current study recruited a large sample (1,032 participants) to explore the potential roles of working memory capacity and test anxiety in the enhancing effects of testing on new learning, and the converse influence of testing on test anxiety. The results demonstrated that administering interim tests during learning appears to be an effective technique to potentiate new learning, regardless of working memory capacity and test anxiety. At a final test on all studied materials, individuals with low working memory capacity benefited more from interim testing than those with high working memory capacity. These testing effects are minimally modulated by levels of trait/state test anxiety, and low-stake interim testing neither reduced nor increased test anxiety. Overall, the results imply that low-stake interim tests can be administered to boost new learning irrespective of learners’ working memory capacity, their test anxiety, and any reactive effects of testing on test anxiety.
The extent to which human learning should be thought of in terms of elementary, automatic versus controlled, cognitive processes is unresolved after nearly a century of often fierce debate. Mitchell et al. provide a persuasive review of evidence against automatic, unconscious links. Indeed, unconscious processes seem to play a negligible role in any form of learning, not just in Pavlovian conditioning. But a modern connectionist framework, in which phenomena are emergent properties, is likely to offer a fuller account of human learning than the propositional framework Mitchell et al. propose.
The demonstration of a sequential congruency effect in sequence learning has been offered as evidence for control processes that act to inhibit automatic response tendencies via unconscious conflict monitoring. Here we propose an alternative interpretation of this effect based on the associative learning of chains of sequenced contingencies. This account is supported by simulations with a Simple Recurrent Network, an associative model of sequence learning. We argue that the control- and associative-based accounts differ in their predictions concerning the magnitude of the sequential congruency effect across training. These predictions are tested by reanalysing data from a study by Shanks, Wilkinson, and Channon. The results support the associative learning account which explains the sequential congruency effect without appealing to control processes.
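The Simple Recurrent Network used in such simulations is, in outline, an Elman network: the previous hidden state is fed back as context, and the weights are adjusted by gradient descent on next-element prediction error. The miniature implementation below (architecture sizes, learning rate, and training sequence are arbitrary illustrative choices) shows such a network reducing its prediction error on a repeating sequence; it is a sketch of the model class, not the simulations reported in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n_items, n_hidden = 4, 10
W_in = rng.normal(scale=0.5, size=(n_hidden, n_items))
W_rec = rng.normal(scale=0.5, size=(n_hidden, n_hidden))
W_out = rng.normal(scale=0.5, size=(n_items, n_hidden))
lr = 0.1

# Repeating training sequence. Note that the successor of item 0 depends on
# what preceded it, so accurate prediction benefits from recurrent context.
sequence = [0, 1, 0, 2] * 250
errors = []
h_prev = np.zeros(n_hidden)
for t in range(len(sequence) - 1):
    x = np.eye(n_items)[sequence[t]]
    target = np.eye(n_items)[sequence[t + 1]]
    h = np.tanh(W_in @ x + W_rec @ h_prev)      # Elman context layer
    logits = W_out @ h
    p = np.exp(logits - logits.max())
    p /= p.sum()                                # softmax over next items
    errors.append(-np.log(p[sequence[t + 1]]))  # cross-entropy error
    # One-step gradient update (backpropagation truncated at one time step)
    delta = p - target
    dh = (W_out.T @ delta) * (1 - h ** 2)
    W_out -= lr * np.outer(delta, h)
    W_in -= lr * np.outer(dh, x)
    W_rec -= lr * np.outer(dh, h_prev)
    h_prev = h

early, late = np.mean(errors[:100]), np.mean(errors[-100:])
print(early > late)  # prediction error declines with training
```

Tracking how prediction error (or the size of a congruency effect derived from it) changes across training is what allows the associative account to be pitted against a control-based account.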
Barbey & Sloman attribute all instances of normative base-rate usage to a rule-based system, and all instances of neglect to an associative system. As it stands, this argument is too simplistic, and indeed fails to explain either good or bad performance on the classic Medical Diagnosis problem.
In the original target article (Shanks & St. John 1994), one of our principal conclusions was that there is almost no evidence that learning can occur outside awareness. The continuing commentaries raise some interesting questions, especially about the definition of learning, but do not lead us to abandon our conclusion.
Although we welcome Gigerenzer, Todd, and the ABC Research Group's shift of emphasis from “coherence” to “correspondence” criteria, their rejection of optimality in human decision making is premature: In many situations, experts can achieve near-optimal performance. Moreover, this competence does not require implausible computing power. The models Gigerenzer et al. evaluate fail to account for many of the most robust properties of human decision making, including examples of optimality.