(1) This is Part 2 of the semantic theory I call TM. In Part 1, I developed TM as a theory in the analytic philosophy of language, in lexical semantics, and in the sociology of relating occasions of statement production and comprehension to formal and informal lexicographic conclusions about statements and lexical items – roughly, as showing how synchronic semantics is a sociological derivative of diachronic, person-relative acts of linguistic behavior. I included descriptions of new cognitive psychology experimental paradigms which would allow us to precisely measure the two constituents of semantics – meaning and reference – both at the level of individual speech acts and at the level of societal convergences, i.e. at both the token and type levels.

(2) In the Introduction, I recapitulate the arguments of Part 1. The Introduction also develops some analytic philosophical and lexical semantics themes not discussed in Part 1.

(3) After the Introduction, I present neural TM (nTM) as a theory of the neural mechanisms and processes which give rise to these person/occasion-relative acts of linguistic behavior. I develop nTM at three levels, the first two of which describe linguistic/semantic functions independently of their cortical locations. At the first level, I describe individual word-to-word and word-to-object connections. At the second level, I describe the corresponding structuralist networks of which they are the individual components. At this level, I introduce some key linguistic concepts of TM – its graded meaning, reference, and generalization sets, and the types of statements which express various levels of word-to-word and word-to-object relationships among lexical items which, because of the constraints they impose on the use of those lexical items in statements we produce and comprehend, are concepts. This constitutes the second structural level of nTM.

(4) At the third level, I associate the non-localized structures of the previous levels with cortically located neural structures and with the fasciculi that connect them. I distinguish neural areas in which primary (phonetic) and secondary (orthographic) lexicons are stored in long-term memory. I also describe the embodied concepts which co-exist in the anterior temporal lobes with the images they lexicalize. These concepts are often said to name physical objects and their features, although what they in fact name are kinds of physical objects and features. I describe how conceptual constraints and referential constraints interact to channel our intentions to say how things are into statements which are semantically well-formed, and which consequently successfully communicate information.

(5) Following this presentation of nTM, I examine five prominent neural semantic theories. I point out what is wrong with each of them as far as their explanations of semantics are concerned, and I also indicate how nTM can replace the “semantic cores” of those theories.

(6) The two basic mistakes made by neuroscience semantic theories, as I will explain, are (i) that all but one of them regard semantics as a matter of the association of words with perceptual images, and of generalizations from those associations; and (ii) that they all rely on an unspecified set of neural structures which purportedly encode the meaning of concepts in abstraction from their phonological and orthographic forms. nTM maintains, in contrast, that there are no abstract neural representations of semantic content. Neural constraints on our linguistic behavior, especially on our ascriptive and co-ascriptive use of words, express the semantic constraints on those words which make them concepts. That is the semantic content of words.

(7) I next consider several results from neuroscience experimental data which have been given one interpretation by one or another of the standard neurosemantic theories, but to which nTM gives a different interpretation. I include several predictions which I have found neither confirmed nor disconfirmed in the experimental neuroscience literature.

(8) After a concluding section in which I summarize the major changes to neurosemantic theory introduced by TM, and the analytic philosophy of language and lexical semantics contexts within which TM is situated, there follows an appendix in which I discuss neural net AI, and make some recommendations for implementing nTM in silicon.
I argue that it follows from a very plausible principle concerning understanding that the truth of an ascription of understanding is context-relative. I use this to defend an account of lexical meaning according to which full understanding of a natural kind term or name requires knowing informative, uniquely identifying information about its referent. This point undermines Putnam-style 'elm-beech' arguments against the description theory of names and natural kind terms.
The Substitution Anomaly is the failure of intuitively coreferential expressions of the corresponding forms “that S” and “the proposition that S” to be intersubstitutable salva veritate under certain ‘selective’ attitudinal verbs that grammatically accept both sorts of terms as complements. The Substitution Anomaly poses a direct threat to the basic assumptions of Millianism, which predict the interchangeability of “that S” and “the proposition that S”. Jeffrey King has argued persuasively that the most plausible Millian solution is to treat the selective attitudinal verbs as lexically ambiguous, having distinct meanings associated with the different sorts of complement terms. In opposition to this approach, I argue that there are independent reasons for maintaining the univocality of these verbs and that this can be done while accommodating the Substitution Anomaly and without sacrificing the transparency of the relevant attitude ascriptions. In particular, I show how, by employing an extended version of Edward Zalta’s system of intensional logic for abstract objects, one can construct for a regimented fragment ℜ of English containing the relevant vocabulary a semantical theory ℑ which (a) treats ℜ’s selective attitudinal verbs as univocal, (b) regards genuine terms as occurring transparently under such verbs in sentences of ℜ, and yet (c) predicts the occurrence of the Substitution Anomaly in ℜ.
Lexical Semantics is about the meaning of words. Although obviously a central concern of linguistics, the semantic behaviour of words has been unduly neglected in the current literature, which has tended to emphasize sentential semantics and its relation to formal systems of logic. In this textbook D. A. Cruse establishes in a principled and disciplined way the descriptive and generalizable facts about lexical relations that any formal theory of semantics will have to encompass. Among the topics covered in depth are idiomaticity, lexical ambiguity, synonymy, hierarchical relations such as hyponymy and meronymy, and various types of oppositeness. Syntagmatic relations are also treated in some detail. The discussions are richly illustrated by examples drawn almost entirely from English. Although a familiarity with traditional grammar is assumed, readers with no technical linguistic background will find the exposition always accessible. All readers with an interest in semantics will find in this original text not only essential background but a stimulating new perspective on the field.
Why do our intuitive knowledge ascriptions shift when a subject's practical interests are mentioned? Many efforts to answer this question have focused on empirical linguistic evidence for context sensitivity in knowledge claims, but the empirical psychology of belief formation and attribution also merits attention. The present paper examines a major psychological factor (called ‘need-for-closure’) relevant to ascriptions involving practical interests. Need-for-closure plays an important role in determining whether one has a settled belief; it also influences the accuracy of one's cognition. Given these effects, it is a mistake to assume that high- and low-stakes subjects provided with the same initial evidence are perceived to enjoy belief formation that is the same as far as truth-conducive factors are concerned. This mistaken assumption has underpinned contextualist and interest-relative invariantist treatments of cases in which contrasting knowledge ascriptions are elicited by descriptions of subjects with the same initial information and different stakes. The paper argues that intellectualist invariantism can easily accommodate such cases.
I argue that disposition ascriptions—claims like ‘the glass is fragile’—are semantically equivalent to possibility claims: they are true when the given object manifests the disposition in at least one of the relevant possible worlds.
Epistemologists generally agree that the stringency of intuitive ascriptions of knowledge is increased when unrealized possibilities of error are mentioned. Non-sceptical invariantists (Williamson, Hawthorne) think it a mistake to yield in such cases to the temptation to be more stringent, but they do not deny that we feel it. They contend that the temptation is best explained as the product of a psychological bias known as the availability heuristic. I argue against the availability explanation, and sketch a rival account of what happens to us psychologically when possibilities of error are raised.
Knowledge ascriptions are a central topic of research in both philosophy and science. In this collection of new essays on knowledge ascriptions, world-class philosophers offer novel approaches to this long-standing topic.
Preparing words in speech production is normally a fast and accurate process. We generate two or three words per second in fluent conversation; and overtly naming a clear picture of an object can easily be initiated within 600 msec after picture onset. The underlying process, however, is exceedingly complex. The theory reviewed in this target article analyzes this process as staged and feedforward. After a first stage of conceptual preparation, word generation proceeds through lexical selection, morphological and phonological encoding, phonetic encoding, and articulation itself. In addition, the speaker exerts some degree of output control, by monitoring of self-produced internal and overt speech. The core of the theory, ranging from lexical selection to the initiation of phonetic encoding, is captured in a computational model, called WEAVER++. Both the theory and the computational model have been developed in interaction with reaction time experiments, particularly in picture naming or related word production paradigms, with the aim of accounting for the real-time processing in normal word production. A comprehensive review of theory, model, and experiments is presented. The model can handle some of the main observations in the domain of speech errors (the major empirical domain for most other theories of lexical access), and the theory opens new ways of approaching the cerebral organization of speech production by way of high-temporal-resolution imaging.
Some theories of practical reasons incorporate a lexical priority structure, according to which some practical reasons have infinitely greater weight than others. This includes absolute deontological theories and axiological theories that take some goods to be categorically superior to others. These theories face problems involving cases in which there is a non-extreme probability that a given reason applies. In view of such cases, lexical-priority theories are in danger of becoming irrelevant to decision-making, becoming absurdly demanding, or generating paradoxical cases in which each of a pair of actions is permissible yet the pair is impermissible.
Lexical Pragmatics is a research field that tries to give a systematic and explanatory account of pragmatic phenomena that are connected with the semantic underspecification of lexical items. Cases in point are the pragmatics of adjectives, systematic polysemy, the distribution of lexical and productive causatives, blocking phenomena, the interpretation of compounds, and many phenomena presently discussed within the framework of Cognitive Semantics. The approach combines a constraint-based semantics with a general mechanism of conversational implicature. The basic pragmatic mechanism rests on conditions of updating the common ground and allows one to give a precise explication of notions such as generalized conversational implicature and pragmatic anomaly. The fruitfulness of the basic account is established by its application to a variety of recalcitrant phenomena, among which its precise treatment of Atlas & Levinson's Q- and I-principles and the formalization of the balance between informativeness and efficiency in natural language processing (Horn's division of pragmatic labour) deserve particular mention. The basic mechanism is subsequently extended by an abductive reasoning system which is guided by subjective probability. The extended mechanism turns out to be capable of giving a principled account of lexical blocking, the pragmatics of adjectives, and systematic polysemy.
In recent decades, increasing attention has been paid to the topic of responsibility in technology development and engineering. The discussion of this topic is often guided by questions related to liability and blameworthiness. Recent discussions in engineering ethics call for a reconsideration of the traditional quest for responsibility. Rather than on alleged wrongdoing and blaming, the focus should shift to more socially responsible engineering, some authors argue. The present paper aims at exploring the different approaches to responsibility in order to see which one is most appropriate to apply to engineering and technology development. Using the example of the development of a new sewage water treatment technology, the paper shows how different approaches for ascribing responsibilities have different implications for engineering practice in general, and R&D or technological design in particular. It was found that there was a tension between the demands that follow from these different approaches, most notably between efficacy and fairness. Although the consequentialist approach with its efficacy criterion turned out to be most powerful, it was also shown that the fairness of responsibility ascriptions should somehow be taken into account. It is proposed to look for alternative, more procedural ways to approach the fairness of responsibility ascriptions.
This paper defends Lewis’ influential treatment of de se attitudes from recent criticism to the effect that a key explanatory notion—self-ascription—goes unexplained (The Blackwell Companion to David Lewis, Blackwell, Oxford, pp. 399–410, 2015). It is shown that Lewis’ treatment can be reconstructed in a way which provides clear responses. This sheds light on the explanatory ambitions of those engaged in Lewis’ project.
In this paper we investigate how discourse structure affects the meanings of words, and how the meanings of words affect discourse structure. We integrate three ingredients: a theory of discourse structure called SDRT, which represents discourse in terms of rhetorical relations that glue together the propositions introduced by the text segments; an accompanying theory of discourse attachment called DICE, which computes which rhetorical relations hold between the constituents, on the basis of the reader's background information; and a formal language for specifying the lexical knowledge—both syntactic and semantic—called the LKB. Through this integration, we can model the information flow from words to discourse, and discourse to words. From words to discourse, we show how the LKB permits the rules for computing rhetorical relations in DICE to be generalized and simplified, so that a single law applies to several semantically related lexical items. From discourse to words, we encode two novel heuristics for lexical disambiguation: disambiguate words so that discourse incoherence is avoided, and disambiguate words so that rhetorical connections are reinforced. These heuristics enable us to tackle several cases of lexical disambiguation that have until now been outside the scope of theories of lexical processing.
In this paper, I analyse a finding by Riggs and colleagues that there is a close connection between people’s ability to reason with counterfactual conditionals and their capacity to attribute false beliefs to others. The result indicates that both processes may be governed by one cognitive mechanism, though false belief attribution seems to be slightly more cognitively demanding. Given that the common denominator for both processes is suggested to be a form of the Ramsey test, I investigate whether Stalnaker’s semantic theory of conditionals, which was inspired by the Ramsey test, may provide the basis for a psychologically plausible model of belief ascription. The analysis I propose will shed some new light on the developmental discrepancy between counterfactual reasoning and false belief ascription.
Machine generated contents note: Part I. Meaning and the Lexicon: 1. The lexicon - some preliminaries; 2. What do we mean by meaning?; 3. Components and prototypes; 4. Modern componential approaches - and some alternatives; Part II. Relations Among Words and Senses: 5. Meaning variation: polysemy, homonymy and vagueness; 6. Lexical and semantic relations; Part III. Word Classes and Semantic Types: 7. Ontological categories and word classes; 8. Nouns and countability; 9. Predication: verbs, events, and states; 10. Verbs and time; 11. Adjectives and properties.
The idea that the concept ‘knowledge’ has a distinctive function or social role is increasingly influential within contemporary epistemology. Perhaps the best-known account of the function of ‘knowledge’ is that developed in Edward Craig’s Knowledge and the state of nature (1990, OUP), on which (roughly) ‘knowledge’ has the function of identifying good informants. Craig’s account of the function of ‘knowledge’ has been appealed to in support of a variety of views, and in this paper I’m concerned with the claim that it supports a sort of epistemic contextualism, which is (roughly) the view that the semantic contents and truth-conditions of ‘knowledge’ ascriptions (instances of ‘S knows that p’) depend on and vary with the context of ascription (see, for instance, John Greco’s ‘What’s wrong with contextualism’, Philosophical Quarterly). Prima facie, this claim should strike us as surprising. A number of concepts and linguistic items (words, sentences) serve functions that have little or nothing to do with semantics. However, I argue that, on the best interpretation of talk of the function of a concept such as ‘knowledge’, the function of ‘knowledge’ is relevant to semantics. Along the way I also suggest how to improve on what I call the ‘usual argument’ that Craig’s account of the function of ‘knowledge’ supports epistemic contextualism.
There exists a considerable body of work on epistemic logics for resource-bounded reasoners. In this paper, we concentrate on a less studied aspect of resource-bounded reasoning, namely, on the ascription of beliefs and inference rules by the agents to each other. We present a formal model of a system of bounded reasoners which reason about each other’s beliefs, and investigate the problem of belief ascription in a resource-bounded setting. We show that for agents whose computational resources and memory are bounded, correct ascription of beliefs cannot be guaranteed, even in the limit. We propose a solution to the problem of correct belief ascription for feasible agents which involves ascribing reasoning strategies, or preferences on formulas, to other agents, and show that if a resource-bounded agent knows the reasoning strategy of another agent, then its ascription of beliefs to the other agent is correct in the limit.
Following much work in linguistic theory, it is hypothesized that the language faculty has a modular structure and consists of two basic components, a lexicon of (structured) entries and a computational system of combinatorial operations to form larger linguistic expressions from lexical entries. This target article provides evidence for the dual nature of the language faculty by describing recent results of a multidisciplinary investigation of German inflection. We have examined: (1) its linguistic representation, focussing on noun plurals and verb inflection (participles), (2) processes involved in the way adults produce and comprehend inflected words, (3) brain potentials generated during the processing of inflected words, and (4) the way children acquire and use inflection. It will be shown that the evidence from all these sources converges and supports the distinction between lexical entries and combinatorial operations.
Current philosophical theorizing about technical functions is mainly focused on specifying conditions under which agents are justified in ascribing functions to technical artifacts. Yet, assessing the precise explanatory relevance of such function ascriptions is, by and large, a neglected topic in the philosophy of technical artifacts and technical functions. We assess the explanatory utility of ascriptions of technical functions in the following three explanation-seeking contexts: why was artifact x produced?, why does artifact x not have the expected capacity to ϕ?, how does artifact x realize its capacity to ϕ? We argue that while function ascriptions serve a mere heuristic role in the first context, they have substantial explanatory leverage in the second and third context. In addition, we assess the relevance of function ascriptions in the context of engineering redesign. Here, function ascriptions also play a relevant role: they enable normative statements of the sort that component b functions better than component a. We unpack these claims by considering philosophical theories of technical functions, in particular the ICE theory, and engineering work on function ascription and explanation. We close the paper by relating our analysis to current debates on the explanatory power of mechanistic vis-à-vis functional explanations.
According to lexical views in population axiology, there are good lives x and y such that some number of lives equally good as x is not worse than any number of lives equally good as y. Such views can avoid the Repugnant Conclusion without violating Transitivity or Separability, but they imply a dilemma: either some good life is better than any number of slightly worse lives, or else the ‘at least as good as’ relation on populations is radically incomplete, in a sense to be explained. One might judge that the Repugnant Conclusion is preferable to each of these horns and hence embrace an Archimedean view. This is, roughly, the claim that quantity can always substitute for quality: each population is worse than a population of enough good lives. However, Archimedean views face an analogous dilemma: either some good life is better than any number of slightly worse lives, or else the ‘at least as good as’ relation on populations is radically and symmetrically incomplete, in a sense to be explained. Therefore, the lexical dilemma gives us little reason to prefer Archimedean views. Even if we give up on lexicality, problems of the same kind remain.
Predicativism about names—the view that names are metalinguistic predicates—has yet to confront a foundational issue: how are names represented in the lexicon? I provide a positive characterization of the structure of the lexicon from the point of view of Predicativism. I proceed to raise a problem for Predicativism on the basis of that characterization, focusing on cases in which individuals have names which are spelled the same way but pronounced differently. Finally, I introduce two potential strategies for solving the problem, and offer reasons not to be optimistic about either.
Some propositional attitude verbs require that the complement contain some “subjective predicate”. In terms of the theory proposed by Lasersohn, these verbs would seem to identify the “judge” of the embedded proposition with the matrix subject, and there have been suggestions in this direction. I show that it is possible to analyze these verbs as setting the judge and doing nothing more; then according to whether a judge index or a judge argument is assumed, unless the complement contains a subjective predicate, the whole matrix is redundant or there is a type conflict. I further show that certain clear facts argue for assuming a judge argument which can be filled by a contextually salient entity, or by the subject of a subjective attitude verb.
In a series of papers, Donald Davidson (Synthese 59:3–17, 1984; The Philosophical Grounds of Rationality, 1986; Midwest Stud Philos 16:1–12, 1991) developed a powerful argument against the claim that linguistic conventions provide any explanatory purchase on an account of linguistic meaning and communication. This argument, as I shall develop it, turns on cases of what I call lexical innovation: cases in which a speaker uses a sentence containing a novel expression-meaning pair, but nevertheless successfully communicates her intended meaning to her audience. I will argue that cases of lexical innovation motivate a dynamic conception of linguistic conventions according to which background linguistic conventions may be rapidly expanded to incorporate new word meanings or shifted to revise the meanings of words already in circulation. I argue that this dynamic account of conventions both resolves the problem raised by cases of lexical innovation and that it does so in a way that is preferable to those who—like Davidson—deny important explanatory roles for linguistic conventions.
Knowledge ascriptions of the form ‘S knows that p’ are a central area of research in philosophy. But why do humans think and talk about knowledge? What are knowledge ascriptions for? This article surveys a variety of proposals about the role of knowledge ascriptions and attempts to provide a unified account of these seemingly distinct views.
This paper focuses on value as ascribed to what can be desired, enjoyed, cherished, admired, loved, and so on: value that putatively serves as ground for evaluating such attitudes and for justifying conduct. The main question of the paper is whether such value ascriptions are property ascriptions as traditional cognitivism claims. The paper makes the case that although the linguistic evidence favors traditional cognitivism over non-cognitivism about evaluative language, the main tenet of cognitivism is best restated as the thesis that evaluative terms are linguistically encoded classificatory devices. This opens up the theoretical possibility, for even inflationists about properties, to embrace cognitivism without inviting any metaphysical worries about the properties ascribed in evaluative language.
I outline Brandom’s theory of de re and de dicto belief ascriptions, which plays a central role in Brandom’s overall theory of linguistic communication, and show that this theory offers a surprising, new response to Burge’s (Midwest Stud 6:73–121, 1979) argument for social externalism. However, while this response is in principle available from the perspective of Brandom’s theory of belief ascription in abstraction from his wider theoretical enterprise, it ceases to be available from this perspective in the wider context of his inferential role semantics and his doctrines of scorekeeping and of the expressive role of belief ascriptions in discourse. In this wider context, Brandom’s theory of belief ascriptions implies that Burge’s argument trivially fails to have the disquieting implications for psychological explanations that it is widely taken to have. Yet since this is not trivially so, Brandom’s theory apparently provides a false picture of our practice of interpreting belief ascriptions. I then argue that Brandom might as well accept the alternative picture of interpreting belief ascriptions that Burge’s argument presupposes: even in the context of his overall project, Brandom’s take on our practice of interpreting them does not afford belief ascriptions the discursive significance Brandom claims they have.
Theory of mind, the capacity to understand and ascribe mental states, has traditionally been conceptualized as analogous to a scientific theory. However, recent work in philosophy and psychology has documented a "side-effect effect" suggesting that moral evaluations influence mental state ascriptions, and in particular whether a behavior is described as having been performed 'intentionally.' This evidence challenges the idea that theory of mind is analogous to scientific psychology in serving the function of predicting and explaining, rather than evaluating, behavior. In three experiments, we demonstrate that moral evaluations do inform ascriptions of intentional action, but that this relationship arises because behavior that conforms to norms (moral or otherwise) is less informative about underlying mental states than is behavior that violates norms. This analysis preserves the traditional understanding of theory of mind as a tool for predicting and explaining behavior, but also suggests the importance of normative considerations in social cognition.
This paper concerns two points of intersection between de se attitudes and the study of natural language: attitude ascription and communication. I first survey some recent work on the semantics of de se attitude ascriptions, with particular attention to ascriptions that are true only if the subject of the ascription has the appropriate de se attitude. I then examine – and attempt to solve – some problems concerning the role of de se attitudes in linguistic communication.
I articulate and defend a necessary and sufficient condition for predication. The condition is that a term or term-occurrence stands in the relation of ascription to its designatum, ascription being a fundamental semantic relation that differs from reference. This view has dramatically different semantic consequences from its alternatives. After outlining the alternatives, I draw out these consequences and show how they favour the ascription view. I then develop the view and elicit a number of its virtues.
In a 2010 paper, Daley argues, contra Fodor, that several syntactically simple predicates express structured concepts. Daley develops his theory of structured concepts within Tichý’s Transparent Intensional Logic. I rectify various misconceptions of Daley’s concerning TIL. I then develop within TIL an improved theory of how structured concepts are structured and how syntactically simple predicates are related to structured concepts.
In the late preschool years children acquire a "theory of mind", the ability to ascribe intentional states, including beliefs, desires and intentions, to themselves and others. In this paper I trace how children's ability to ascribe intentions is derived from parental attempts to hold them responsible for their talk and action, that is, the attempt to have their behavior meet a normative standard or rule. Self-control is children's developing ability to take on or accept responsibility, that is, the ability to ascribe intentions to themselves. This is achieved, I argue, when they possess the ability to hold an utterance or rule in mind in the form of a quoted expression, and second, when they grasp the causal relation between the rule and their action. The account of how children learn to ascribe intention to themselves and others will then be used to explore the larger question of the relations amongst language, intentional states and the ascription and avowal of those states.
When I say ‘Hesperus is Phosphorus’, I seem to express a proposition. And when I say ‘Joan believes that Hesperus is Phosphorus’, I seem to ascribe to Joan an attitude to the same proposition. But what are propositions? And what is involved in ascribing propositional attitudes?
In the past few years there has been a turn towards evaluating the empirical foundation of epistemic contextualism using formal (rather than armchair) experimental methods. By and large, the results of these experiments have not supported the original motivation for epistemic contextualism. That is partly because experiments have only uncovered effects of changing context on knowledge ascriptions in limited experimental circumstances (when contrast is present, for example), and partly because existing experiments have not been designed to distinguish between contextualism and one of its main competing theories, subject-sensitive invariantism. In this paper, we discuss how a particular, "third-person", experimental design is needed to provide evidence that would support contextualism over subject-sensitive invariantism. In spite of the theoretical significance of third-person knowledge ascriptions for debates surrounding contextualism, no formal experiments evaluating such ascriptions that assess contextualist claims have previously been conducted. In this paper, we conduct an experiment specifically designed to examine that central gap in contextualism’s empirical foundation. The results of our experiment provide crucial support for epistemic contextualism over subject-sensitive invariantism.
What are the truth conditions of want ascriptions? According to a highly influential and fruitful approach, championed by Heim (1992) and von Fintel (1999), the answer is intimately connected to the agent’s beliefs: ⌜S wants p⌝ is true iff within S’s belief set, S prefers the p worlds to the ~p worlds. This approach faces a well-known and as-yet unsolved problem, however: it makes entirely wrong predictions with what we call '(counter)factual want ascriptions', wherein the agent either believes p or believes ~p – e.g., ‘I want it to rain tomorrow and that is exactly what is going to happen’ or ‘I want this weekend to last forever but of course it will end in a few hours’. We solve this problem. The truth conditions for want ascriptions are, we propose, connected to the agent’s conditional beliefs. We bring out this connection by pursuing a striking parallel between (counter)factual and non-(counter)factual want ascriptions on the one hand and counterfactual and indicative conditionals on the other.
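The belief-relative truth condition this abstract discusses, and the (counter)factual gap it identifies, can be sketched in a toy possible-worlds model. Everything here is illustrative: the world names, the preference ranking, and the "every p-world outranks every ~p-world" reading of "prefers the p worlds to the ~p worlds" are simplifying assumptions, not the authors' formalism.

```python
# Toy possible-worlds sketch of the Heim/von Fintel-style condition for
# "S wants p": restricted to S's belief worlds, S prefers p-worlds to ~p-worlds.

def wants(belief_worlds, preference, p):
    """True iff every p-world in the belief set outranks every ~p-world
    (lower rank = more preferred). This is one simple reading of the
    preference comparison; actual accounts refine it."""
    p_worlds = [w for w in belief_worlds if p(w)]
    not_p_worlds = [w for w in belief_worlds if not p(w)]
    if not p_worlds or not not_p_worlds:
        # p is already settled (believed or disbelieved), so the comparison
        # is vacuous -- exactly the (counter)factual problem the abstract raises.
        return None
    return max(preference[w] for w in p_worlds) < min(preference[w] for w in not_p_worlds)

# Hypothetical scenario: S is unsure whether it will rain.
belief_worlds = ["rain_picnic", "rain_home", "dry_picnic", "dry_home"]
preference = {"rain_home": 0, "rain_picnic": 1, "dry_home": 2, "dry_picnic": 3}
rains = lambda w: w.startswith("rain")

print(wants(belief_worlds, preference, rains))  # True: S wants it to rain
# If S already believes it will rain, the condition goes vacuous:
print(wants(["rain_home", "rain_picnic"], preference, rains))  # None
```

The `None` branch makes the problem concrete: once the agent believes p (or ~p), the belief set contains no worlds to compare, so a sentence like ‘I want it to rain tomorrow and that is exactly what is going to happen’ gets no non-trivial truth condition.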
What are the effects of word-by-word predictability on sentence processing times during the natural reading of a text? Although information complexity metrics such as surprisal and entropy reduction have been useful in addressing this question, these metrics tend to be estimated using computational language models, which require some degree of commitment to a particular theory of language processing. Taking a different approach, this study implemented a large-scale cumulative cloze task to collect word-by-word predictability data for 40 passages and compute surprisal and entropy reduction values in a theory-neutral manner. A separate group of participants read the same texts while their eye movements were recorded. Results showed that increases in surprisal and entropy reduction were both associated with increases in reading times. Furthermore, these effects did not depend on the global difficulty of the text. The findings suggest that surprisal and entropy reduction independently contribute to variation in reading times, as these metrics seem to capture different aspects of lexical predictability.
In this article, the logic and functions of character-trait ascriptions in ethics and epistemology are compared, and two major problems, the "generality problem" for virtue epistemologies and the "global trait problem" for virtue ethics, are shown to be far more similar in structure than is commonly acknowledged. I suggest a way to put the generality problem to work by making full and explicit use of a sliding scale – a "narrow-broad spectrum of trait ascription" – and by accounting for the various uses of it in an inquiry-pragmatist account. In virtue theories informed by inquiry pragmatism, the agential habits and abilities deemed salient in explanations/evaluations of agents in particular cases, and the determination of what relevant domains and conditions an agent's habit or ability is reliably efficacious in, are determined by pragmatic concerns related to our evaluative epistemic practices.