Affective information can be retrieved simply by measuring word co-occurrences in linguistic contexts. Lenci and colleagues demonstrate that affective measures derived from linguistic co-occurrences predict word concreteness: abstract words are more heavily loaded with affective information than concrete ones. These results challenge the affective grounding hypothesis, suggesting that abstract concepts may be ungrounded and coded only linguistically, and that their affective load may itself be a linguistic factor.
The paper investigates the interaction of lexical and constructional meaning in valency coercion processing, and the effect of (in)compatibility between verb and construction on its successful resolution (Perek, Florent & Martin Hilpert. 2014. Constructional tolerance: Cross-linguistic differences in the acceptability of non-conventional uses of constructions. Constructions and Frames 6(2). 266–304; Yoon, Soyeon. 2019. Coercion and language change: A usage-based approach. Linguistic Research 36(1). 111–139). We present an online experiment on valency coercion (the first one on Italian), by means of a semantic priming protocol inspired by Johnson, Matt A. & Adele E. Goldberg. 2013. Evidence for automatic accessing of constructional meaning: Jabberwocky sentences prime associated verbs. Language & Cognitive Processes 28(10). 1439–1452. We test priming effects with a lexical decision task which presents different target verbs preceded by coercion instances of four Italian argument structure constructions, which serve as primes. Three types of verbs serve as targets: lexical associates (LA), construction associates (CA), and unrelated (U) verbs. LAs are semantically similar to the main verb of the prime sentence, whereas CAs are prototypical verbs associated with the prime construction. U verbs serve as a means of comparison for the two categories of interest. Results confirm that processing of valency coercion requires an integration of both lexical and constructional semantics. Moreover, compatibility is also found to influence coercion resolution. Specifically, constructional priming is primary and independent of compatibility. A secondary priming effect for LA verbs is also found, which suggests a contribution of lexical semantics in coercion resolution, especially for low-compatibility coercion coinages.
Logical metonymy resolution (begin a book → begin reading a book or begin writing a book) has traditionally been explained either through complex lexical entries (qualia structures) or through the integration of the implicit event via post-lexical access to world knowledge. We propose that recent work within the words-as-cues paradigm can provide a more dynamic model of logical metonymy, accounting for early and dynamic integration of complex event information depending on previous contextual cues (agent and patient). We first present a self-paced reading experiment on German subordinate sentences, where metonymic sentences and their paraphrased version differ only in the presence or absence of the clause-final target verb (Der Konditor begann die Glasur → Der Konditor begann, die Glasur aufzutragen / The baker began the icing → The baker began spreading the icing). Shorter reading times at the target verb position in a high-typicality condition (baker + icing → spread) compared to a low-typicality (but still plausible) condition (child + icing → spread) suggest that we make use of knowledge activated by lexical cues to build expectations about events. The early and dynamic integration of event knowledge in metonymy interpretation is bolstered by further evidence from a second experiment using the probe recognition paradigm. Presenting covert events as probes following a high-typicality or a low-typicality metonymic sentence (Der Konditor begann die Glasur AUFTRAGEN / The baker began the icing SPREAD), we obtain an analogous effect of typicality at a 100 ms interstimulus interval.
This chapter begins with a discussion of the three phases of the interaction between logic and linguistics on the nature of universal grammar. It then attempts to reconstruct the dynamics and interactions between these approaches in logic and in linguistic theory, which represent the major landmarks in the quest to identify the universal structure of language.
The paper discusses the structure of non-verbal predication, with particular reference to the role of the copula. Departing from the main tenets of contemporary logico-philosophical and linguistic theories, a model of predication is proposed in which the verbal component (specifically, tense information) is regarded as central in establishing the syntactic and semantic relation between a predicate and its subject. It is thus possible to recover some of the insights of the pre-Fregean analysis of predication. The proposed solution has a number of significant consequences for the structure to be assigned to non-verbal predication, in particular for the semantics of small clause constituents, where the predication is established without the copula.
Word co-occurrence patterns in language corpora contain a surprising amount of conceptual knowledge. Large language models (LLMs), trained to predict words in context, leverage these patterns to achieve impressive performance on diverse semantic tasks requiring world knowledge. An important but understudied question about LLMs' semantic abilities is whether they acquire generalized knowledge of common events. Here, we test whether five pretrained LLMs (from 2018's BERT to 2023's MPT) assign a higher likelihood to plausible descriptions of agent−patient interactions than to minimally different implausible versions of the same event. Using three curated sets of minimal sentence pairs (total n = 1215), we found that pretrained LLMs possess substantial event knowledge, outperforming other distributional language models. In particular, they almost always assign a higher likelihood to possible versus impossible events (The teacher bought the laptop vs. The laptop bought the teacher). However, LLMs show less consistent preferences for likely versus unlikely events (The nanny tutored the boy vs. The boy tutored the nanny). In follow-up analyses, we show that (i) LLM scores are driven by both plausibility and surface-level sentence features, (ii) LLM scores generalize well across syntactic variants (active vs. passive constructions) but less well across semantic variants (synonymous sentences), (iii) some LLM errors mirror human judgment ambiguity, and (iv) sentence plausibility serves as an organizing dimension in internal LLM representations. Overall, our results show that important aspects of event knowledge naturally emerge from distributional linguistic patterns, but also highlight a gap between representations of possible/impossible and likely/unlikely events.
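The minimal-pair method described above can be sketched in a few lines: score each sentence of a pair under a language model and check whether the plausible version receives the higher log-likelihood. As a hedged illustration only, the sketch below substitutes a toy add-one-smoothed bigram model trained on a three-sentence corpus for the pretrained LLMs the study actually uses; the corpus, function names, and smoothing constant are all hypothetical stand-ins.

```python
import math
from collections import Counter

# Hypothetical toy corpus standing in for large-scale training data
# (the study scores sentences with pretrained LLMs, not this model).
CORPUS = ("the teacher bought the laptop . "
          "the student bought the laptop . "
          "the teacher read the book .").split()

def bigram_logprob(sentence, corpus=CORPUS, alpha=1.0):
    """Add-alpha smoothed bigram log-probability of a sentence."""
    unigrams = Counter(corpus)
    bigrams = Counter(zip(corpus, corpus[1:]))
    vocab_size = len(unigrams)
    tokens = sentence.lower().rstrip(".").split()
    logp = 0.0
    for prev, word in zip(tokens, tokens[1:]):
        num = bigrams[(prev, word)] + alpha          # smoothed bigram count
        den = unigrams[prev] + alpha * vocab_size    # smoothed context count
        logp += math.log(num / den)
    return logp

def prefers_plausible(plausible, implausible):
    """True if the model scores the plausible sentence higher."""
    return bigram_logprob(plausible) > bigram_logprob(implausible)

print(prefers_plausible("The teacher bought the laptop",
                        "The laptop bought the teacher"))  # → True
```

The same comparison logic carries over unchanged when the scorer is a pretrained LLM: only `bigram_logprob` would be replaced by a model-based sequence log-likelihood.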