In this article we investigate structural differences between “literary” metaphors created by renowned poets and “nonliterary” ones written by non-professional authors in Katz et al.’s 1988 corpus. We provide data from quantitative narrative analyses (QNA) of all 464 metaphors on over 70 variables, including surface features such as metaphor length, phonological features such as sonority score, and syntactic-semantic features such as sentence similarity. In a first computational study using machine learning tools, we show that Katz et al.’s literary metaphors can be successfully discriminated from their nonliterary ones on the basis of response measures, in particular the ratings for familiarity, ease of interpretation, semantic relatedness, and comprehensibility. A second computational study then shows that the classifier can reliably detect and predict between-group differences on the basis of five QNA features generalizing from a...
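The first study's classification step can be sketched as follows. This is a minimal nearest-centroid classifier over the four rating features named above (familiarity, ease of interpretation, semantic relatedness, comprehensibility); the feature values are invented for illustration, and the published study used different machine learning tools on the actual Katz et al. norms.

```python
# Sketch: discriminating literary from nonliterary metaphors on rating
# features. All numbers below are hypothetical stand-ins for the real
# Katz et al. (1988) rating norms.

def centroid(rows):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def classify(x, literary, nonliterary):
    """Nearest-centroid decision between the two rating-feature groups."""
    c_lit, c_non = centroid(literary), centroid(nonliterary)
    d_lit = sum((a - b) ** 2 for a, b in zip(x, c_lit))
    d_non = sum((a - b) ** 2 for a, b in zip(x, c_non))
    return "literary" if d_lit < d_non else "nonliterary"

# Hypothetical ratings: [familiarity, ease, relatedness, comprehensibility]
literary = [[2.1, 3.0, 3.5, 3.2], [1.8, 2.7, 3.8, 3.0]]
nonliterary = [[4.5, 5.1, 4.9, 5.3], [4.2, 4.8, 5.2, 5.0]]

print(classify([2.0, 2.9, 3.6, 3.1], literary, nonliterary))
```

Nearest-centroid is chosen here only for brevity; any standard classifier (logistic regression, SVM) would fill the same role in the pipeline.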
Two main goals of the emerging field of neurocognitive poetics are the use of more natural and ecologically valid stimuli, tasks, and contexts, and the provision of methods and models for quantifying distinctive features of the verbal materials used in such tasks and contexts and their effects on readers’ responses. Metaphor, a key element of poetic language, remains understudied: relatively little empirical research has examined literary or poetic metaphors. An exception is Katz et al.’s corpus of 204 literary metaphors by authors such as Shakespeare or Dylan Thomas, for which various rating data are available. We reanalyzed their corpus using a combination of quantitative narrative analysis, latent semantic analysis, and machine learning in order to identify features of the metaphors that influenced the ratings. This combined application of computational tools sheds light on surface and affective-semantic features that co-determine the reception of poetic metaphors, and it successfully predicted the period of origin, authorship, and goodness ratings of the metaphors. The present results can be used for generating quantitative hypotheses and for selecting and matching verbal stimuli in empirical studies of literature and neurocognitive poetics.
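The semantic-similarity component can be illustrated with a toy version: cosine similarity over bag-of-words count vectors. The published reanalysis used latent semantic analysis (LSA), which adds an SVD dimensionality-reduction step over a large term-document matrix; this sketch omits that step and serves only to show the kind of pairwise similarity feature fed into the analysis.

```python
# Toy stand-in for an LSA-style similarity feature: cosine similarity
# over raw word-count vectors (real LSA projects counts into a
# low-dimensional latent space first).
from collections import Counter
from math import sqrt

def cosine_similarity(text_a, text_b):
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0
```

Identical sentences score 1.0, sentences sharing no words score 0.0, and everything else falls in between.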
If the words of natural human language possess a universal positivity bias, as assumed by Boucher and Osgood’s (1969) famous Pollyanna hypothesis and computationally confirmed for large text corpora in several languages (Dodds et al., 2015), then children’s and youth literature (CYL) should also show a Pollyanna effect. Here we tested this prediction by applying a vector-space-model-based sentiment analysis tool called SentiArt (Jacobs, 2019) to two CYL corpora, one in English (372 books) and one in German (500 books). Pitching our analysis at the sentence level and assessing semantic as well as lexico-grammatical information, we find that both corpora show the Pollyanna effect, adding further evidence to the universality hypothesis. The results of our multivariate sentiment analyses provide testable predictions for future scientific studies of literature.
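A sentence-level sentiment pass of the kind described above can be sketched as follows. SentiArt derives word valences from a vector space model; in this illustration a tiny hand-made lexicon stands in for those computed valences, and a corpus would show a Pollyanna effect when the share of positive sentences exceeds one half.

```python
# Sketch of a sentence-level positivity analysis. The lexicon is
# hypothetical; SentiArt computes word valences from word embeddings
# rather than looking them up in a hand-built table.

VALENCE = {  # hypothetical word valences in [-1, 1]
    "happy": 0.9, "friend": 0.7, "sun": 0.5,
    "sad": -0.8, "storm": -0.4, "lost": -0.6,
}

def sentence_valence(sentence):
    """Mean valence of the words with a lexicon entry (0.0 if none)."""
    scores = [VALENCE[w] for w in sentence.lower().split() if w in VALENCE]
    return sum(scores) / len(scores) if scores else 0.0

def positivity_ratio(sentences):
    """Share of sentences with positive valence; > 0.5 suggests a Pollyanna effect."""
    labels = [sentence_valence(s) > 0 for s in sentences]
    return sum(labels) / len(labels)
```

Pitching the analysis at the sentence level, as the abstract notes, lets positive and negative words within one sentence offset each other before the corpus-level tally is made.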
It is argued that current neuroimaging studies can provide useful constraints for the construction of models of cognition, and that these studies should in turn be guided by cognitive models. A number of challenges for a successful cross-fertilization between “mind mappers” and cognitive modelers are discussed in the light of current research on word recognition.
In his review, Walter (2012) links conceptual perspectives on empathy with crucial results of neurocognitive and genetic studies and presents a descriptive neurocognitive model that identifies key neuronal structures and links them with both cognitive and affective empathy via a high and a low road. After discussing this model, the remainder of this comment deals more generally with the possibilities and limitations of current neurocognitive models, considering ways to develop process models that allow specific quantitative predictions.
Levelt et al. attempt to “model their theory” with WEAVER++. Modeling theories requires a model theory. The time is ripe for a methodology for building, testing, and evaluating computational models. We propose a tentative five-step framework for tackling this problem, within which we discuss the potential strengths and weaknesses of Levelt et al.'s modeling approach.
Glenberg's conception of “meaning from and for action” is too narrow. For example, it provides no satisfactory account of the “logic of Elfland,” a metaphor used by Chesterton to refer to meaning acquired by being told something: “All that we call spirit and art and ecstasy only means that for one awful instant we remember that we forget” (G. K. Chesterton).