When making end-of-life decisions in intensive care units (ICUs), different staff groups have different roles in the decision-making process and may not always assess the situation in the same way. The aim of this study was to examine the challenges Danish nurses, intensivists, and primary physicians experience with end-of-life decisions in ICUs and how these challenges affect the decision-making process. Interviews with nurses, intensivists, and primary physicians were conducted, and the data are discussed from an ethical perspective. All three groups found that the main challenges were associated with interdisciplinary collaboration and future perspectives for the patient. Most of these challenges were connected with ethical issues. The challenges included different assessments of treatment potential, changes and postponements of withholding and withdrawing therapy orders, how and when to identify patients’ wishes, and suffering caused by the treatment. To improve end-of-life decision-making in the ICU, these challenges need to be addressed by interdisciplinary teams.
Gotthold Ephraim Lessing stands out among the thinkers of the 18th century for his refusal to synthesize theology and philosophy. But due to his notorious ambivalence about religious questions, even Lessing’s contemporaries remained uncertain whether he ultimately sided with the former or the latter. The short dialogue Hercules and Omphale is, to the detriment of research on this topic, largely unknown. I show that the dialogue offers in a nutshell Lessing’s comprehensive analysis of the intellectual and religious situation of his time. By calling on the mythical travesty of the Asian queen and the Greek hero, Lessing illustrates the mutual attraction that has led astray both Enlightenment philosophy and contemporary Lutheran orthodoxy. Implicitly, his diagnosis of the aberrations of philosophy and theology sheds light on Lessing’s own position. The twofold criticism is an attempt to restore theology as well as philosophy in their genuine forms and to reestablish their proper relationship. Through his twofold restitutio in integrum, Lessing is able to reopen the quarrel between orthodoxy and the Enlightenment and, thus, to radically renew the all but forgotten theologico-philosophical antagonism.
Recently, several scholars have argued that scientists can accept scientific claims in a collective process, and that the capacity of scientific groups to form joint acceptances is linked to a functional division of labor between the group members. However, these accounts reveal little about how the cognitive content of the jointly accepted claim is formed, and how group members depend on each other in this process. In this paper, I shall therefore argue that we need to link analyses of joint acceptance with analyses of distributed cognition. To sketch how this can be done, I shall present a detailed case study, and on the basis of the case, analyze the process through which a group of scientists jointly accept a new scientific claim and at a later stage jointly accept to revise previously accepted claims. I shall argue that joint acceptance in science can be established in situations where an overall conceptual structure is jointly accepted by a group of scientists while detailed parts of it are distributed among group members with different areas of expertise, a condition that I shall call a heterogeneous conceptual consensus. Finally, I shall show how a heterogeneous conceptual consensus can work as a constraint against scientific change and address the question of how changes may nevertheless occur.
In everyday life we either express our beliefs in all-or-nothing terms or we resort to numerical probabilities: I believe it's going to rain or my chance of winning is one in a million. The Stability of Belief develops a theory of rational belief that allows us to reason with all-or-nothing belief and numerical belief simultaneously.
This essay develops a joint theory of rational (all-or-nothing) belief and degrees of belief. The theory is based on three assumptions: the logical closure of rational belief; the axioms of probability for rational degrees of belief; and the so-called Lockean thesis, in which the concepts of rational belief and rational degree of belief figure simultaneously. In spite of what is commonly believed, this essay will show that this combination of principles is satisfiable (and indeed nontrivially so) and that the principles are jointly satisfied if and only if rational belief is equivalent to the assignment of a stably high rational degree of belief. Although the logical closure of belief and the Lockean thesis are attractive postulates in themselves, initially this may seem like a formal “curiosity”; however, as will be argued in the rest of the essay, a very reasonable theory of rational belief can be built around these principles that is not ad hoc and that has various philosophical features that are plausible independently. In particular, this essay shows that the theory allows for a solution to the Lottery Paradox, and it has nice applications to formal epistemology. The price that is to be paid for this theory is a strong dependency of belief on the context, where a context involves both the agent's degree of belief function and the partitioning or individuation of the underlying possibilities. But as this essay argues, that price seems to be affordable.
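A minimal formal sketch of the Lockean thesis and the stability condition just mentioned (the symbols Bel, P, B_W, and the threshold r are illustrative notation, not necessarily the essay's own): the Lockean thesis links qualitative and quantitative belief by
\[ Bel(A) \;\Longleftrightarrow\; P(A) \geq r \quad \text{for some threshold } \tfrac{1}{2} < r \leq 1, \]
and, on the stability reading indicated above, rational belief is generated by a strongest believed proposition $B_W$ whose probability is resiliently high, roughly in the sense that
\[ P(B_W \mid C) > \tfrac{1}{2} \quad \text{for every } C \text{ consistent with } B_W \text{ such that } P(C) > 0 . \]
Belief is then closed under logical consequence because $Bel(A)$ holds exactly when $B_W$ entails $A$.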
Michael Jensen made a name for himself in the 1970s–1990s with his ‘agency theory’ and its application to questions of corporate governance and economic policy. The effects of his theory were acutely felt in the pedagogics of business studies, as Jensen lent his authority to combat all attempts to integrate social considerations and moral values into business education. Lately, however, Michael Jensen has come to defend quite a different approach, promoting an ‘integrity theory’ of management learning. Jensen now rather aspires to empower students to give authentic expression to their personal values in their professional lives, and he sees the main function of management studies as assisting them in this effort. This article reconstructs the transformation of Jensen’s outlook, drawing on Jensen’s theories as an exemplar of wider trends in the current literature on management learning, away from a decidedly ‘mechanistic’ and towards a more ‘humanistic’ pedagogy of management. Jensen’s case serves to highlight developments that might make for better preconditions for the appreciation of business ethics on the part of business students. On closer inspection, though, it appears that his remaining within a positivistic framework ultimately impedes the kind of progress Michael Jensen envisions for business studies.
When do children acquire a propositional attitude folk psychology or theory of mind? The orthodox answer to this central question of developmental ToM research had long been that around age 4 children begin to apply “belief” and other propositional attitude concepts. This orthodoxy has recently come under serious attack, though, from two sides: Scoffers complain that it overestimates children’s early competence and claim that a proper understanding of propositional attitudes emerges only much later. Boosters criticize the orthodoxy for underestimating early competence and claim that even infants ascribe beliefs. In this paper, the orthodoxy is defended on empirical grounds against these two kinds of attacks. On the basis of new evidence, not only can the two attacks safely be countered, but the orthodox claim can actually be strengthened, corroborated and refined: what emerges around age 4 is an explicit, unified, flexibly conceptual capacity to ascribe propositional attitudes. This unified conceptual capacity contrasts with the less sophisticated, less unified implicit forms of tracking simpler mental states present in ontogeny long before. This refined version of the orthodoxy can thus most plausibly be spelled out in some form of 2-systems account of theory of mind.
In this contribution to contemporary political philosophy, Jensen aims to develop a model of civil society for deliberative democracy. In the course of developing the model, he also provides a thorough account of the meaning and use of "civil society" in contemporary scholarship as well as a critical review of rival models, including those found in the work of scholars such as John Rawls, Jürgen Habermas, Michael Walzer, Benjamin Barber, and Nancy Rosenblum. Jensen's own ideal treats civil society both as the context in which citizens live out their comprehensive views of the good life and as the context in which citizens learn to be good deliberative democrats. According to his idealization, groups of citizens in civil society are actively engaged in a grand conversation about the nature of the good life. Their commitment to this conversation grounds dispositions of epistemic humility, tolerance, curiosity, and moderation. Moreover, their regard for the grand conversation explains their interest in deliberative democracy and their regard for democratic virtues, principles, and practices. Jensen is not a naive utopian, however; he argues that this ideal must be realized in stages, that it faces a variety of barriers, and that it cannot be realized without luck.
One of the fundamental problems of epistemology is to say when the evidence in an agent’s possession justifies the beliefs she holds. In this paper and its sequel, we defend the Bayesian solution to this problem by appealing to the following fundamental norm: (Accuracy) An epistemic agent ought to minimize the inaccuracy of her partial beliefs. In this paper, we make this norm mathematically precise in various ways. We describe three epistemic dilemmas that an agent might face if she attempts to follow Accuracy, and we show that the only inaccuracy measures that do not give rise to such dilemmas are the quadratic inaccuracy measures. In the sequel, we derive the main tenets of Bayesianism from the relevant mathematical versions of Accuracy to which this characterization of the legitimate inaccuracy measures gives rise, but we also show that Jeffrey conditionalization has to be replaced by a different method of update in order for Accuracy to be satisfied.
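For concreteness, the best-known member of the quadratic family the paper singles out is the Brier score; it is stated here only as an illustration of that family, not as the paper's official definition:
\[ I(b, w) \;=\; \sum_{A \in \mathcal{F}} \big( b(A) - v_w(A) \big)^2 , \]
where $b$ is the agent's credence function over a finite set of propositions $\mathcal{F}$ and $v_w(A)$ equals $1$ if $A$ is true at world $w$ and $0$ otherwise. Accuracy, in its various mathematical precisifications, then directs the agent to keep this kind of inaccuracy as low as she can.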
One of the fundamental problems of epistemology is to say when the evidence in an agent’s possession justifies the beliefs she holds. In this paper and its prequel, we defend the Bayesian solution to this problem by appealing to the following fundamental norm: (Accuracy) An epistemic agent ought to minimize the inaccuracy of her partial beliefs. In the prequel, we made this norm mathematically precise; in this paper, we derive its consequences. We show that the two core tenets of Bayesianism follow from the norm, while the characteristic claim of the Objectivist Bayesian follows from the norm along with an extra assumption. Finally, we consider Richard Jeffrey’s proposed generalization of conditionalization. We show not only that his rule cannot be derived from the norm, unless the requirement of Rigidity is imposed from the start, but further that the norm reveals it to be illegitimate. We end by deriving an alternative updating rule for those cases in which Jeffrey’s is usually supposed to apply.
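For readers who want the update rules at issue spelled out, here are the standard textbook formulations (the derivations from Accuracy are the paper's contribution and are not reproduced here). On learning evidence $E$ with certainty, conditionalization requires
\[ P_{\mathrm{new}}(A) \;=\; P(A \mid E), \]
while Jeffrey conditionalization covers a shift over a partition $\{E_1, \dots, E_n\}$ whose new probabilities are $q_1, \dots, q_n$:
\[ P_{\mathrm{new}}(A) \;=\; \sum_{i=1}^{n} P(A \mid E_i)\, q_i , \]
which is equivalent to imposing Rigidity, i.e. $P_{\mathrm{new}}(A \mid E_i) = P(A \mid E_i)$ for all $i$ with $P(E_i) > 0$.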
This article introduces, studies, and applies a new system of logic which is called ‘HYPE’. In HYPE, formulas are evaluated at states that may exhibit truth value gaps and truth value gluts. Simple and natural semantic rules for negation and the conditional operator are formulated based on an incompatibility relation and a partial fusion operation on states. The semantics is worked out in formal and philosophical detail, and a sound and complete axiomatization is provided both for the propositional and the predicate logic of the system. The propositional logic of HYPE is shown to contain first-degree entailment, to have the Finite Model Property, to be decidable, to have the Disjunction Property, and to extend intuitionistic propositional logic conservatively when intuitionistic negation is defined appropriately by HYPE’s logical connectives. Furthermore, HYPE’s first-order logic is a conservative extension of intuitionistic logic with the Constant Domain Axiom, when intuitionistic negation is again defined appropriately. The system allows for simple model constructions and intuitive Euler-Venn-like diagrams, and its logical structure matches structures well-known from ordinary mathematics, such as from optimization theory, combinatorics, and graph theory. HYPE may also be used as a general logical framework in which different systems of logic can be studied, compared, and combined. In particular, HYPE is found to relate in interesting ways to classical logic and various systems of relevance and paraconsistent logic, many-valued logic, and truthmaker semantics. On the philosophical side, if used as a logic for theories of type-free truth, HYPE is shown to address semantic paradoxes such as the Liar Paradox by extending non-classical fixed-point interpretations of truth by a conditional as well-behaved as that of intuitionistic logic. Finally, HYPE may be used as a background system for modal operators that create hyperintensional contexts, though the details of this application need to be left to follow-up work.
Thomas Kuhn's Structure of Scientific Revolutions became the most widely read book about science in the twentieth century. His terms 'paradigm' and 'scientific revolution' entered everyday speech, but they remain controversial. In the second half of the twentieth century, the new field of cognitive science combined empirical psychology, computer science, and neuroscience. In this book, the theories of concepts developed by cognitive scientists are used to evaluate and extend Kuhn's most influential ideas. Based on case studies of the Copernican revolution, the discovery of nuclear fission, and an elaboration of Kuhn's famous 'ducks and geese' example of concept learning, this volume, first published in 2006, offers accounts of the nature of normal and revolutionary science, the function of anomalies, and the nature of incommensurability.
In “On what matters: Personal identity as a phenomenological problem”, Steven Crowell engages a number of contemporary interpretations of Husserl’s account of the person and personal identity by noting that they lack a phenomenological elucidation of the self as commitment. In this article, in response to Crowell, I aim to show that such an account of the self as commitment can be drawn from Husserl’s work by looking more closely at his descriptions, from the time of Ideas and after, of the self as ego or I and of egoic experience as attentive experience. I specifically aim to sketch the beginning of a response to three questions I take Crowell to be posing to a Husserlian account of the person and personal identity: What more than pre-reflective self-awareness can be attributed to the self on phenomenological grounds so that we can understand, phenomenologically speaking, how selves become persons? How can what characterizes the self in addition to pre-reflective self-awareness be discerned in both our commitment to truth and our feeling bound by love and other emotive commitments that cannot be fully rationally justified, which Husserl acknowledges are both sources of personal self-constitution? And do all selves become persons? In the paper I elaborate how my answers to the first two questions turn on the self not just being self-aware but active in a particular sense. And to begin to address the third question, I suggest that while any form of wakeful conscious experience is both self-aware and active, this activity of the self makes a difference for those who are socio-historically embedded in the way we are. Specifically, on the proposed Husserlian account, selves that are socio-historically embedded become persons in and through their active relating to what they attentively experience. In concluding, I indicate how this Husserlian account might compare to Crowell’s claim that “self-identity is not mere logical identity but a normative achievement […] which makes a ‘personal’ kind of identity possible”.
Over the last decades, science has grown increasingly collaborative and interdisciplinary and has come to depart in important ways from the classical analyses of the development of science that were developed by historically inclined philosophers of science half a century ago. In this paper, I shall provide a new account of the structure and development of contemporary science based on analyses of, first, cognitive resources and their relations to domains, and, second, of the distribution of cognitive resources among collaborators and the epistemic dependence that this distribution implies. Against this background I shall describe different ideal types of research activities and analyze how they differ. Finally, analyzing values that drive science towards different kinds of research activities, I shall sketch the main mechanisms underlying the perceived tension between disciplines and interdisciplinarity and argue for a redefinition of accountability and quality control for interdisciplinary and collaborative science.
Is it possible to give an explicit definition of belief in terms of subjective probability, such that believed propositions are guaranteed to have a sufficiently high probability, and yet it is neither the case that belief is stripped of any of its usual logical properties, nor is it the case that believed propositions are bound to have probability 1? We prove the answer is ‘yes’, and that given some plausible logical postulates on belief that involve a contextual “cautiousness” threshold, there is but one way of determining the extension of the concept of belief that does the job. The qualitative concept of belief is not to be eliminated from scientific or philosophical discourse; rather, by reducing qualitative belief to assignments of resiliently high degrees of belief and a “cautiousness” threshold, qualitative and quantitative belief turn out to be governed by one unified theory that offers the prospects of a huge range of applications. Within that theory, logic and probability theory are not opposed to each other but go hand in hand.
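A small worked example of why this combination is delicate (the numbers are purely illustrative): in a fair lottery with $1000$ tickets and a naive, context-free threshold of $0.99$, each proposition "ticket $i$ loses" has probability $0.999 \geq 0.99$ and so would be believed, yet closure under conjunction would then demand belief that every ticket loses, a proposition of probability $0$. On the stability account sketched above, the contextual "cautiousness" threshold together with the resilience requirement blocks this collapse: in the uniform lottery context no proposition short of the tautology is resiliently probable, so the problematic individual beliefs are not licensed in the first place.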
Since the 1980s the concept of actor-network theory (ANT) has remained unsettled. ANT has continuously been critiqued and hailed, ridiculed and praised. It is still an open question whether ANT should be considered a theory or a method, or whether ANT is better understood as entailing the dissolution of such modern “genres”. In this paper the authors engage with some important reflections by John Law and Bruno Latour in order to analyze what it means to “do ANT,” and what it means to do so after “doing ANT on ANT.” In particular the authors examine two post-ANT case studies by Annemarie Mol and Marilyn Strathern and outline the notions of complexity, multiplicity, and fractality. The purpose is to illustrate the analytical consequences of thinking with post-ANT. The analysis offers insights into how it is possible to “go beyond ANT” without leaving it entirely behind.
I will defend the claim that we need to differentiate between thinking and reasoning in order to make progress in understanding the intricate relation between language and mind. The distinction between thinking and reasoning will allow us to apply a structural equivalent of Ludwig Wittgenstein’s Private Language Argument to the domain of mind and language. This argumentative strategy enables us to show that and how a certain subcategory of cognitive processes, namely reasoning, is constitutively dependent on language. The final outcome and claim of this paper can be summarized as follows: We can think without language, but we cannot reason without language. While this still leaves several questions about the relation between mind and language unanswered, I hold that the insights defended in this paper provide the basis and proper framework for further investigation about the relationship between language and the mind. Keywords: Private language argument, Wittgenstein, thought/mind and language, reasoning, linguistic relativity, non-linguistic cognition.
As yet, there is no enactive account of social cognition. This paper extends the enactive concept of sense-making into the social domain. It takes as its departure point the process of interaction between individuals in a social encounter. It is a well-established finding that individuals can and generally do coordinate their movements and utterances in such situations. We argue that the interaction process can take on a form of autonomy. This allows us to reframe the problem of social cognition as that of how meaning is generated and transformed in the interplay between the unfolding interaction process and the individuals engaged in it. The notion of sense-making in this realm becomes participatory sense-making. The onus of social understanding thus moves away from the individual alone.
Current theories of social cognition are mainly based on a representationalist view. Moreover, they focus on a rather sophisticated and limited aspect of understanding others, i.e. on how we predict and explain others’ behaviours through representing their mental states. Research into the ‘social brain’ has also favoured a third-person paradigm of social cognition as a passive observation of others’ behaviour, attributing it to an inferential, simulative or projective process in the individual brain. In this paper, we present a concept of social understanding as an ongoing, dynamical process of participatory sense-making and mutual incorporation. This process may be described (1) from a dynamical agentive systems point of view as an interaction and coordination of two embodied agents; (2) from a phenomenological approach as a mutual incorporation, i.e. a process in which the lived bodies of both participants extend and form a common intercorporality. Intersubjectivity, it is argued, is not a solitary task of deciphering or simulating the movements of others but means entering a process of embodied interaction and generating common meaning through it. This approach will be further illustrated by an analysis of primary dyadic interaction in early childhood.
McDowell and Merleau-Ponty share a critical attitude towards a certain Cartesian picture of the mind. According to the picture in question, nothing which properly belongs to subjectivity can be hidden to the subject herself. Nevertheless, there is a striking asymmetry in how the two philosophers portray the problematic consequences of such a picture. They can seem to offer exactly opposite views of these consequences, which, given their almost identical characterization of the transparency claim, is puzzling. I argue that a closer look at the prima facie puzzling asymmetry dissolves the apparent disagreement and reveals a deeper agreement concerning both the nature and the origin of the problems haunting the Cartesian picture in question. Both McDowell and Merleau-Ponty argue that on the picture of the relation between mind and world in question, we lose our grip on the very idea of a perceptual appearance. Furthermore, the two authors regard a certain conception of nature, conceived in the image of science, as one of the crucial elements in making the picture of the mind in question look attractive.
The increasing interconnectedness of academic research and external industry has left research vulnerable to conflicts of interest. These conflicts have the potential to undermine the integrity of scientific research as well as to threaten public trust in scientific findings. The present effort sought to identify themes in the perspectives of faculty researchers regarding conflicts of interest. Think-aloud interview responses were qualitatively analyzed in an effort to provide insights with regard to appropriate ways to address the threat of conflicts of interest in research. Themes in participant responses included disclosure of conflicts of interest, self-removal from situations where conflict exists, accommodation of conflict, denial of the existence of conflict, and recognition of complexity of situations involving conflicts of interest. Moral disengagement operations are suggested to explain the appearance of each identified theme. In addition, suggestions for best practices regarding addressing conflicts of interest given these themes in faculty perspectives are provided.
In interdisciplinary research, scientists have to share and integrate knowledge between people and across disciplinary boundaries. An important issue for philosophy of science is to understand how scientists who work in these kinds of environments exchange knowledge and develop new concepts and theories across diverging fields. There is a substantial literature within social epistemology that discusses the social aspects of scientific knowledge, but so far few attempts have been made to apply these resources to the analysis of interdisciplinary science. Further, much of the existing work either ignores the issue of differences in background knowledge, or it focuses explicitly on conflicting background knowledge. In this paper we provide an analysis of the relations of epistemic dependence between individual experts with different areas of expertise. We analyze the cooperative activity they engage in when participating in interdisciplinary research in a group, and we compare our findings with those of other studies in interdisciplinary research.
In this paper, we introduce an enactive account of loving as participatory sense-making inspired by the “I love to you” of the feminist philosopher Luce Irigaray. Emancipating from the fusionist concept of romantic love, which understands love as unity, we conceptualise loving as an existential engagement in a dialectic of encounter, in continuous processes of becoming-in-relation. In these processes, desire acquires a certain prominence as the need to know more. We build on Irigaray’s account of love to present a phenomenology of loving interactions and then our enactive account. Finally, we draw some implications for ethics. These concern language, difference, vulnerability, desire, and self-transformation.
An important shift is taking place in social cognition research, away from a focus on the individual mind and toward embodied and participatory aspects of social understanding. Empirical results already imply that social cognition is not reducible to the workings of individual cognitive mechanisms. To galvanize this interactive turn, we provide an operational definition of social interaction and distinguish the different explanatory roles – contextual, enabling and constitutive – it can play in social cognition. We show that interactive processes are more than a context for social cognition: they can complement and even replace individual mechanisms. This new explanatory power of social interaction can push the field forward by expanding the possibilities of scientific explanation beyond the individual.
TABLE OF CONTENTS: * Contents * Hurtful Words. An Introduction * Language as Violence, or: Why Do Words Wound? * Conditions for the Success of Degradation Ceremonies * Face-Threatening Acts * The Dialectic of Challenge and Response to the Challenge * Speech Acts and Unspeakable Acts * Discriminatory Speech Acts. A Functional Approach * Symbolic Vulnerability and Linguistic Violence * On the Bodily Force of Language * The Stolen Voice * After the Supposed End of the ‘Forgetfulness of Language’: Preliminary Questions on the Inevitability of Injuring Others in and with Words * Injurious Recognition. On the Relation between Recognition, the Constitution of the Subject, and ‘Social Violence’ * On the Practice of Verbal Violence among School Pupils * Linguistic Strategies of Verbal Rejection in Public Online Discussion Forums * On the Language of the Speechless. Levels of Violence in the Discursive Production of Disability * Words like Violence. Constellations of Disagreement * The Contributors * Credits.
This volume gives an overview of the rising field of Experimental Ethics. It is organized into five main parts: PART I – Introduction: An Experimental Philosophy of Ethics? // PART II – Applied Experimental Ethics: Case studies // PART III – On Methodology // PART IV – Critical Reflections // PART V – Future Perspectives. Among the contributors: Walter Sinnott-Armstrong, Eric Schwitzgebel, Ezio di Nucci, Jacob Rosenthal, and Fernando Aguiar.
Niklas Luhmann’s theory of social systems is one of the most ambitious attempts to create a coherent account of global modernity. Primarily interested in the fundamental structures of modern society, however, Luhmann himself paid relatively little attention to regional variations. The aim of this book is to seek out modernity in one particular location: The United States of America. Gathering essays from a group of cultural and literary scholars, sociologists, and philosophers, Addressing Modernity reassesses the claims of American exceptionalism by setting them in the context of Luhmann’s conception of modernity, and explores how social systems theory can generate new perspectives on what has often been described as the first thoroughly modern nation. As a study of American society and culture from a Luhmannian vantage point, the book is of interest to scholars from both American Studies and social systems theory in general.
This study offers a comprehensive survey of the philosophy of language of Thomas Aquinas by analysing the acts of human reason as principles of language and by establishing them as the overarching point of reference for semiotic, epistemological, semantic, pragmatic and scientific considerations.
In discussions about whether the Principle of the Identity of Indiscernibles is compatible with structuralist ontologies of mathematics, it is usually assumed that individual objects are subject to criteria of identity which somehow account for the identity of the individuals. Much of this debate concerns structures that admit of non-trivial automorphisms. We consider cases from graph theory that violate even weak formulations of PII. We argue that (i) the identity or difference of places in a structure is not to be accounted for by anything other than the structure itself and that (ii) mathematical practice provides evidence for this view. We want to thank Leon Horsten, Jeff Ketland, Øystein Linnebo, John Mayberry, Richard Pettigrew, and Philip Welch for valuable comments on drafts of this paper. We are especially grateful to Fraser MacBride for correcting our interpretation of two of his papers and for other helpful comments.
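One simple illustration of the kind of graph-theoretic case at issue (the paper's own examples may differ): let $G$ be the graph with vertex set $\{a, b\}$ and no edges. The map swapping $a$ and $b$ is a non-trivial automorphism of $G$, and no irreflexive graph-theoretic relation holds between the two vertices, so they are not even weakly discernible by structural means; yet $G$ has two places, not one. No purely structural criterion of identity distinguishes them, which is precisely the situation in which even weak formulations of PII fail.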
Ethical decision making is of concern to researchers across all fields. However, researchers typically focus on the biases that may act to undermine ethical decision making. Taking a new approach, this study focused on identifying the most common compensatory strategies that counteract those biases. These strategies were identified using a series of interviews with university researchers in a variety of areas, including the biological, physical, social, and health sciences, as well as scholarship and the performing arts. Interview transcripts were assessed with two scoring procedures, an expert rating system and computer-assisted qualitative analysis. Although the expert rating system identified Understanding Guidelines, Recognition of Insufficient Information, and Recognizing Boundaries as the most frequently used compensatory strategies across fields, other strategies, namely Striving for Transparency, Value/norm Assessment, and Following Appropriate Role Models, were identified as most common by the computer-assisted qualitative analyses. Potential reasons for these findings and implications for training and practice are identified and discussed.
In search of our highest capacities, cognitive scientists aim to explain things like mathematics, language, and planning. But are these really our most sophisticated forms of knowing? In this paper, I point to a different pinnacle of cognition. Our most sophisticated human knowing, I think, lies in how we engage with each other, in our relating. Cognitive science and philosophy of mind have largely ignored the ways of knowing at play here. At the same time, the emphasis on discrete, rational knowing to the detriment of engaged, human knowing pervades societal practices and institutions, often with harmful effects on people and their relations. There are many reasons why we need a new, engaged—or even engaging—epistemology of human knowing. The enactive theory of participatory sense-making takes steps towards this, but it needs deepening. Kym Maclaren’s idea of letting be invites such a deepening. Characterizing knowing as a relationship of letting be provides a nuanced way to deal with the tensions between the knower’s being and the being of the known, as they meet in the process of knowing-and-being-known. This meeting of knower and known is not easy to understand. However, there is a mode of relating in which we know it well, and that is: in loving relationships. I propose to look at human knowing through the lens of loving. We then see that both knowing and loving are existential, dialectic ways in which concrete and particular beings engage with each other.
What kinds of sentences with a truth predicate may be inserted plausibly and consistently into the T-scheme? We state an answer in terms of dependence: those sentences which depend directly or indirectly on non-semantic states of affairs (only). In order to make this precise we introduce a theory of dependence according to which a sentence φ is said to depend on a set Φ of sentences iff the truth value of φ supervenes on the presence or absence of the sentences of Φ in/from the extension of the truth predicate. Both φ and the members of Φ are allowed to contain the truth predicate. On that basis we are able to define notions such as ungroundedness or self-referentiality within a classical semantics, and we can show that there is an adequate definition of truth for the class of sentences which depend on non-semantic states of affairs.
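A schematic way to render the supervenience condition (the notation is illustrative and may differ from the paper's official definition): writing $(M, X)$ for the base model $M$ expanded so that $X$ is the extension of the truth predicate,
\[ \varphi \text{ depends on } \Phi \;\Longleftrightarrow\; \forall X, Y \subseteq \mathrm{Sent}: \; X \cap \Phi = Y \cap \Phi \;\Rightarrow\; \big( (M, X) \models \varphi \Leftrightarrow (M, Y) \models \varphi \big). \]
On such a definition, a sentence counts as grounded in the intended sense when it depends, directly or by iteration, only on sentences that do not contain the truth predicate, i.e. on non-semantic states of affairs.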
Thomas Nagel recognizes that it is commonly believed that people can neither be held morally responsible nor morally assessed for what is beyond their control. Yet he is convinced that although such a belief may be intuitively plausible, upon reflection we find that we do make moral assessments of persons in a large number of cases in which such assessments depend on factors not under their control. Of such factors he says: (p. 26).
This is part B of a paper in which we defend a semantics for counterfactuals which is probabilistic in the sense that the truth condition for counterfactuals refers to a probability measure. Because of its probabilistic nature, it allows a counterfactual to be true even in the presence of relevant exception-worlds (antecedent-worlds at which the consequent fails), as long as such exceptions are not too widely spread. The semantics is made precise and studied in different versions which are related to each other by representation theorems. Despite its probabilistic nature, we show that the semantics and the resulting system of logic may be regarded as a naturalistically vindicated variant of David Lewis's work. We argue that counterfactuals have two kinds of pragmatic meanings and come attached with two types of degrees of acceptability or belief, one being suppositional, the other one being truth-based as determined by our probabilistic semantics; these degrees cannot always coincide, owing to a new triviality result for counterfactuals, and they should not be identified in the light of their different interpretation and pragmatic purpose. However, for plain assertability the difference between them does not matter. Hence, if the suppositional theory of counterfactuals is formulated with sufficient care, our truth-conditional theory of counterfactuals is consistent with it. The results of our investigation are used to assess a claim considered by Hawthorne and Hájek, that is, the thesis that most ordinary counterfactuals are false.
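To convey the flavor of a probabilistic truth condition of this kind (a rough sketch only, not the paper's exact clause): relative to a world $w$ and a suitable probability measure $P_w$, one may take
\[ w \models A \,\Box\!\!\rightarrow\, B \;\Longleftrightarrow\; P_w(B \mid A) \geq 1 - \epsilon \]
for a contextually fixed small $\epsilon \geq 0$, with a separate stipulation for the case in which $A$ receives probability $0$. On such a clause the counterfactual can be true even though some $A \wedge \neg B$-worlds remain relevant, provided their overall probabilistic weight stays below $\epsilon$; this is the sense in which exceptions are tolerated so long as they are not too widely spread.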
Research misconduct is of growing concern within the scientific community. As a result, organizations must identify effective approaches to training for ethics in research. Previous research has suggested that biases and compensatory strategies may represent important influences on the ethical decision-making process. The present effort investigated a training intervention targeting these variables. The results of the intervention are presented, as well as a description of accompanying exercises tapping self-reflection, sensemaking, and forecasting and their differential effectiveness on transfer to an ethical decision-making task.