Abstract
A central question in online human sentence comprehension is, "How are linguistic relations established between different parts of a sentence?" Previous work has shown that this dependency resolution process can be computationally expensive, but the underlying reasons for this expense remain unclear. This article argues that dependency resolution is mediated by cue-based retrieval, constrained by independently motivated working-memory principles defined within a cognitive architecture. To demonstrate this, the article investigates an unusual instance of dependency resolution, the processing of negative and positive polarity items, and confirms a surprising prediction of the cue-based retrieval model: partial cue matches, which constitute a kind of similarity-based interference, can give rise to the intrusion of ungrammatical retrieval candidates, producing not only processing slowdowns but also errors of judgment that take the form of illusions of grammaticality in patently ungrammatical structures. Notably, good quantitative fits are obtained without adjusting the key model parameters.
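For reference, the cue-based retrieval mechanism at issue is standardly formalized in the ACT-R architecture by the activation equation sketched below; the notation follows the general ACT-R convention and is not drawn verbatim from this abstract:

\[
A_i \;=\; B_i \;+\; \sum_{j} W_j\, S_{ji} \;+\; \sum_{k} P\, M_{ki} \;+\; \epsilon_i
\]

Here $A_i$ is the activation of memory chunk $i$, $B_i$ its base-level (decay-governed) activation, $W_j$ the weight of retrieval cue $j$, $S_{ji}$ the strength of association from cue $j$ to chunk $i$, $P$ a match scale, $M_{ki}$ the degree of match between cue $k$ and chunk $i$ (negative for mismatches, yielding a penalty), and $\epsilon_i$ noise. Because a chunk that matches only a subset of the retrieval cues still receives spreading activation from the cues it does match, a partially matching but ungrammatical candidate can occasionally out-compete the grammatical target, which is the source of the predicted intrusions.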