This essay defends a rational reconstruction of a genealogical debunking argument that begins with the premise “that's just what the economic elite want you to believe” and ends with the conclusion “you should lower your confidence in your belief.” The argument is genealogical because it includes a causal explanation of your beliefs; it is debunking because it claims that the contingencies uncovered by the genealogy undermine your beliefs. The essay begins by defending a plausible causal explanation of your belief in terms of the wants of the elite. Then a number of recent objections to genealogical debunking arguments are considered. It is argued that the genealogy offered in the first part constitutes evidence that a testimony-based belief is not safe, and therefore does not constitute knowledge, if the economic elite want you to believe it.
In order to describe the logic of morality, "contractualist" philosophers have studied how individuals behave when they choose to follow their moral intuitions. These individuals, contractualists note, often act as if they have bargained and thus reached an agreement with others about how to distribute the benefits and burdens of mutual cooperation. Using this observation, such philosophers argue that the purpose of morality is to maximize the benefits of human interaction. The resulting "contract" analogy is both insightful and puzzling. On one hand, it captures the pattern of moral intuitions, thus answering questions about human cooperation: why do humans cooperate? Why should the distribution of benefits be proportionate to each person's contribution? Why should the punishment be proportionate to the crime? Why should the rights be proportionate to the duties? On the other hand, the analogy provides a mere as-if explanation of human cooperation, saying that people cooperate "as if" they had entered into a contract; but since they did not, why should it be so? To evolutionary thinkers, the puzzle of the missing contract is immediately reminiscent of the puzzle of the missing "designer" of life-forms, a puzzle that Darwin's theory of natural selection essentially resolved. Evolutionary and contractualist theory originally intersected in the work of philosophers John Rawls and David Gauthier, who argued that moral judgments are based on a sense of fairness that has been naturally selected. In this book, Nicolas Baumard further explores the theory that morality was originally an adaptation to the biological market of cooperation, an arena in which individuals competed to be selected for cooperative interactions. In this environment, Baumard suggests, the best strategy was to treat others with impartiality and to share the costs and benefits of cooperation in a fair way, so that those who offered less than others were left out of cooperation while those who offered more were exploited by their partners. It is with this evolutionary approach that Baumard ultimately accounts for the specific structure of human morality.
It is a widely accepted doctrine in epistemology that knowledge has greater value than mere true belief. But although epistemologists regularly pay homage to this doctrine, evidence for it is shaky. Is it based on evidence that ordinary people on the street make evaluative comparisons of knowledge and true belief, and consistently rate the former ahead of the latter? Do they reveal such a preference by some sort of persistent choice behavior? Neither of these scenarios is observed. Rather, epistemologists come to this conclusion because they have some sort of conception or theory of what knowledge is, and they find reasons why people should rate knowledge, so understood, ahead of mere true belief. But what if these epistemological theories are wrong? Then the assumption that knowledge is more valuable than true belief might be in trouble. We don’t wish to take a firm position against the thesis that knowledge is more valuable than true belief. But we begin this paper by arguing that there is one sense of ‘know’ under which the thesis cannot be right. In particular, there seems to be a sense of ‘know’ in which it means, simply, ‘believe truly.’ If this is correct, then knowledge—in this weak sense of the term—cannot be more valuable than true belief. What evidence is there for a weak sense of ‘knowledge’ in which it is equivalent to ‘true belief’? Knowledge seems to contrast with ignorance. Not only do knowledge and ignorance contrast with one another but they seem to exhaust the alternatives, at least for a specified person and fact. Given a true proposition p, Diane either knows p or is ignorant of it. The same point can be expressed using rough synonyms of ‘know.’ Diane is either aware of (the fact that) p or is ignorant of it. She is either cognizant of p or ignorant of it. She either possesses the information that p or she is uninformed (ignorant) of it. To illustrate these suggestions, consider a case discussed by John Hawthorne (2002). If I ask you how many people in the room know that Vienna is the capital of Austria, you will tally up the number of people in the room who possess the information that Vienna is the capital of Austria.
It is tempting to think that, if a person's beliefs are coherent, they are also likely to be true. This truth conduciveness claim is the cornerstone of the popular coherence theory of knowledge and justification. Erik Olsson's new book is the most extensive and detailed study of coherence and probable truth to date. Setting new standards of precision and clarity, Olsson argues that the value of coherence has been widely overestimated. Provocative and readable, Against Coherence will make stimulating reading for epistemologists and anyone with a serious interest in truth.
According to the Argument from Disagreement (AD), widespread and persistent disagreement on ethical issues indicates that our moral opinions are not influenced by moral facts, either because there are no such facts or because there are such facts but they fail to influence our moral opinions. In an innovative paper, Gustafsson and Peterson (Synthese, published online 16 October 2010) study the argument by means of computer simulation of opinion dynamics, relying on the well-known model of Hegselmann and Krause (J Artif Soc Soc Simul 5(3):1–33, 2002; J Artif Soc Soc Simul 9(3):1–28, 2006). Their simulations indicate that if our moral opinions were influenced at least slightly by moral facts, we would quickly have reached consensus, even if our moral opinions were also affected by additional factors such as false authorities, external political shifts and random processes. Gustafsson and Peterson conclude that since no such consensus has been reached in real life, the simulation gives us increased reason to take the AD seriously. Our main claim in this paper is that these results are not as robust as Gustafsson and Peterson seem to think they are. If we run similar simulations in the alternative Laputa simulation environment developed by Angere and Olsson (Angere, Synthese, forthcoming; Olsson, Episteme 8(2):127–143, 2011), considerably less support for the AD is forthcoming.
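For readers unfamiliar with the opinion dynamics referred to above, the following is a minimal sketch of Hegselmann-Krause-style bounded-confidence updating with a pull toward a fixed "truth" value. It is only an illustration of the general model type, not the authors' code; the parameter names and values are my own.

```python
# Minimal sketch (not the authors' code) of Hegselmann-Krause bounded-confidence
# dynamics with a "truth" attractor, as described in the abstract above.
import numpy as np

def hk_step(opinions, eps=0.2, truth=0.7, alpha=0.1):
    """One update: each agent averages nearby opinions and is pulled toward the truth."""
    new = np.empty_like(opinions)
    for i, x in enumerate(opinions):
        peers = opinions[np.abs(opinions - x) <= eps]   # bounded-confidence neighborhood
        new[i] = alpha * truth + (1 - alpha) * peers.mean()
    return new

opinions = np.random.rand(50)          # 50 agents with random initial opinions in [0, 1]
for _ in range(100):
    opinions = hk_step(opinions)
print(opinions.round(3))               # with alpha > 0 the population converges near the truth
```

With any positive pull toward the truth (alpha > 0), runs of this kind converge on a consensus close to the truth value, which is the behavior the authors' robustness claim is about.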
In this paper we examine C. I. Lewis's view on the role of coherence – what he calls 'congruence' – in the justification of beliefs based on memory or testimony. Lewis has two main theses on the subject. His negative thesis states that coherence of independent items of evidence has no impact on the probability of a conclusion unless each item has some credibility of its own. The positive thesis says, roughly speaking, that coherence of independently obtained items of evidence – such as converging memories or testimonies – raises the probability of a conclusion to the extent sufficient for epistemic justification, or, to use Lewis's expression, 'rational and practical reliance'. It turns out that, while the negative thesis is essentially correct, a strong positive connection between congruence and probability – a connection of the kind Lewis ultimately needs in his validation of memory – is contingent on the Principle of Indifference. In the final section we assess the repercussions of the latter fact for Lewis's theory in particular and for coherence justification in general.
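As a hedged illustration of the kind of positive connection at issue (a standard textbook form, not Lewis's own formalism): for n conditionally independent reports E_1, ..., E_n that agree on a hypothesis H, Bayes's theorem gives

$$\frac{P(H \mid E_1,\dots,E_n)}{P(\neg H \mid E_1,\dots,E_n)} \;=\; \frac{P(H)}{P(\neg H)} \cdot \prod_{i=1}^{n} \frac{P(E_i \mid H)}{P(E_i \mid \neg H)},$$

so agreeing testimonies drive the posterior toward 1 only if each likelihood ratio exceeds 1 (echoing the negative thesis), and how fast they do so depends on the prior P(H), which is where an appeal to something like the Principle of Indifference can enter.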
What makes someone responsible for a crime and therefore liable to punishment under the criminal law? Modern lawyers will quickly and easily point to the criminal law's requirement of concurrent actus reus and mens rea, doctrines of the criminal law which ensure that someone will only be found criminally responsible if they have committed criminal conduct while possessing capacities of understanding, awareness, and self-control at the time of offense. Any notion of criminal responsibility based on the character of the offender, meaning an implication of criminality based on reputation or the assumed disposition of the person, would seem to today's criminal lawyer a relic of the 18th Century. In this volume, Nicola Lacey demonstrates that the practice of character-based patterns of attribution was not laid to rest in 18th Century criminal law, but is alive and well in contemporary English criminal responsibility-attribution. Building upon the analysis of criminal responsibility in her previous book, Women, Crime, and Character, Lacey investigates the changing nature of criminal responsibility in English law from the mid-18th Century to the early 21st Century. Through a combined philosophical, historical, and socio-legal approach, this volume evidences how the theory behind criminal responsibility has shifted over time. The character and outcome responsibility which dominated criminal law in the 18th Century diminished in ideological importance in the following two centuries, when the idea of responsibility as founded in capacity was gradually established as the core of criminal law. Lacey traces the historical trajectory of responsibility into the 21st Century, arguing that ideas of character responsibility and the discourse of responsibility as founded in risk are enjoying a renaissance in the modern criminal law. These ideas of criminal responsibility are explored through an examination of the institutions through which they are produced, interpreted and executed; the interests which have shaped both doctrines and institutions; and the substantive social functions which criminal law and punishment have been expected to perform at different points in history.
The coherentist theory of justification provides a response to the sceptical challenge: even though the independent processes by which we gather information about the world may be of dubious quality, the internal coherence of the information provides the justification for our empirical beliefs. This central canon of the coherence theory of justification is tested within the framework of Bayesian networks, which is a theory of probabilistic reasoning in artificial intelligence. We interpret the independence of the information gathering processes (IGPs) in terms of conditional independences, construct a minimal sufficient condition for a coherence ranking of information sets and assess whether the confidence boost that results from receiving information through independent IGPs is indeed a positive function of the coherence of the information set. There are multiple interpretations of what constitute IGPs of dubious quality. Do we know our IGPs to be no better than randomization processes? Or, do we know them to be better than randomization processes but not quite fully reliable, and if so, what is the nature of this lack of full reliability? Or, do we not know whether they are fully reliable or not? Within the latter interpretation, does learning something about the quality of some IGPs teach us anything about the quality of the other IGPs? The Bayesian-network models demonstrate that the success of the coherentist canon is contingent on what interpretation one endorses of the claim that our IGPs are of dubious quality.
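To make the talk of IGPs of "dubious quality" concrete, here is a small self-contained sketch of my own construction (not one of the paper's models): two reports are conditionally independent given the hypothesis, and each reporting process is either fully reliable or a pure randomizer, with its type unknown.

```python
# Illustrative two-witness Bayesian model: each witness is reliable (tracks the truth)
# with probability p_rel, otherwise a randomizer (coin flip). Both witnesses agree.
from itertools import product

p_h, p_rel = 0.5, 0.6        # prior on the hypothesis; prior that a process is reliable

def p_report_true(h, rel):
    """Probability that a witness asserts the hypothesis, given its truth value and type."""
    return (1.0 if h else 0.0) if rel else 0.5

num = den = 0.0
for h, r1, r2 in product([True, False], repeat=3):
    joint = (p_h if h else 1 - p_h) \
          * (p_rel if r1 else 1 - p_rel) * (p_rel if r2 else 1 - p_rel) \
          * p_report_true(h, r1) * p_report_true(h, r2)
    den += joint
    if h:
        num += joint
print(num / den)   # posterior of the hypothesis after two agreeing reports (about 0.94 here)
```

Varying p_rel, or replacing the randomizer assumption with partial reliability, changes the size of the confidence boost, which is the dependence on interpretation that the abstract points to.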
The paper presents and defends a Bayesian theory of trust in social networks. In the first part of the paper, we provide justifications for the basic assumptions behind the model, and we give reasons for thinking that the model has plausible consequences for certain kinds of communication. In the second part of the paper we investigate the phenomenon of overconfidence. Many psychological studies have found that people think they are more reliable than they actually are. Using a simulation environment that has been developed in order to make our model computationally tractable, we show that in our model inquirers are indeed sometimes better off, from an epistemic perspective, when they overestimate the reliability of their own inquiries. We also show, by contrast, that people are rarely better off overestimating the reliability of others. On the basis of these observations we formulate a novel hypothesis about the value of overconfidence.
Nicola Lacey presents a new approach to the question of the moral justification of punishment by the State. She focuses on the theory of punishment in the context of other political questions, such as the nature of political obligation and the function and scope of criminal law. Arguing that no convincing set of justifying reasons has so far been produced, she puts forward a theory of punishment which places the values of the community at its centre.
In his groundbreaking book, Against Coherence (2005), Erik Olsson presents an ingenious impossibility theorem that appears to show that there is no informative relationship between probabilistic measures of coherence and higher likelihood of truth. Although Olsson's result provides an important insight into probabilistic models of epistemological coherence, the scope of his negative result is more limited than generally appreciated. The key issue is the role conditional independence conditions play within the witness testimony model Olsson uses to establish his result. Olsson maintains that his witness model yields charitable ceteris paribus conditions for any theory of probabilistic coherence. Not so. In fact, Olsson's model, like Bayesian witness models in general, selects a peculiar class of models that are in no way representative of the range of options available to coherence theorists. Recent positive results suggest that there is a way to develop a formal theory of coherence after all. Further, although Bayesian witness models are not conducive to the truth, they are conducive to reliability.
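For orientation, one well-known example of the kind of probabilistic coherence measure such impossibility results quantify over is Shogenji's ratio measure (cited here only as an illustration; the paper itself concerns the class of measures in general):

$$C_S(A_1,\dots,A_n) \;=\; \frac{P(A_1 \wedge \dots \wedge A_n)}{P(A_1)\,P(A_2)\cdots P(A_n)},$$

which exceeds 1 exactly when the propositions are more likely to be true together than they would be if they were statistically independent.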
Research programs regularly compete to achieve the same goal, such as the discovery of the structure of DNA or the construction of a TEA laser. The more the competing programs share information, the faster the goal is likely to be reached, to society’s benefit. But the “priority rule”, the scientific norm according to which the first program to reach the goal in question must receive all the credit for the achievement, provides a powerful disincentive for programs to share information. How, then, is the clash between social and individual interest resolved in scientific practice? This chapter investigates what Robert Merton called science’s “communist” norm, which mandates universal sharing of knowledge, and uses mathematical models of discovery to argue that a communist regime may be on the whole advantageous and fair to all parties, and so might be implemented by a social contract that all scientists would be willing to sign.
This paper explores the work of Nicolas Rashevsky, a Russian émigré theoretical physicist who developed a program in "mathematical biophysics" at the University of Chicago during the 1930s. Stressing the complexity of many biological phenomena, Rashevsky argued that the methods of theoretical physics – namely mathematics – were needed to "simplify" complex biological processes such as cell division and nerve conduction. A maverick of sorts, Rashevsky was a conspicuous figure in the biological community during the 1930s and early 1940s: he participated in several Cold Spring Harbor symposia and received several years of funding from the Rockefeller Foundation. However, in contrast to many other physicists who moved into biology, Rashevsky's work was almost entirely theoretical, and he eventually faced resistance to his mathematical methods. Through an examination of the conceptual, institutional, and scientific context of Rashevsky's work, this paper seeks to understand some of the reasons behind this resistance.
A new approach to sentencing: Not Just Deserts inaugurates a radical shift in the research agenda of criminology. The authors attack currently fashionable retributivist theories of punishment, arguing that the criminal justice system is so integrated that sentencing policy has to be considered in the system-wide context. They offer a comprehensive theory of criminal justice which draws on a philosophical view of the good and the right, and which points the way to practical intervention in the real world of incremental reform. They put the case for a criminal justice system which maximizes freedom in the old republican sense of the term, and which they call 'dominion'.
Within contemporary penal philosophy, the view that punishment can only be justified if the offender is a moral agent who is responsible and hence blameworthy for their offence is one of the few areas on which a consensus prevails. In recent literature, this precept is associated with the retributive tradition, in the modern form of ‘just deserts’. Turning its back on the rehabilitative ideal, this tradition forges a strong association between the justification of punishment, the attribution of responsible agency in relation to the offence, and the appropriateness of blame. By contrast, effective clinical treatment of disorders of agency employs a conceptual framework in which ideas of responsibility and blameworthiness are clearly separated from what we call ‘affective blame’: the range of hostile, negative attitudes and emotions that are typical human responses to criminal or immoral conduct. We argue that taking this clinical model of ‘responsibility without blame’ into the legal realm offers new possibilities. Theoretically, it allows for the reconciliation of the idea of ‘just deserts’ with a rehabilitative ideal in penal philosophy. Punishment can be reconceived as consequences—typically negative but occasionally not, so long as they are serious and appropriate to the crime and the context—imposed in response to, by reason of, and in proportion to responsibility and blameworthiness, but without the hard treatment and stigma typical of affective blame. Practically, it suggests how sentencing and punishment can better avoid affective blame and instead further rehabilitative and related ends, while yet serving the demands of justice.
This paper offers a framework for consciousness of internal reality. Recent PET experiments are reviewed, showing partial overlap of cortical activation during self-produced actions and actions observed from other people. This overlap suggests that representations for actions may be shared by several individuals, a situation which creates a potential problem for correctly attributing an action to its agent. The neural conditions for correct agency judgments are thus assigned a key role in self/other distinction and self-consciousness. A series of behavioral experiments that demonstrate, in normal subjects, the poor monitoring of action-related signals and the difficulty in recognizing self-produced actions is described. In patients presenting with delusions, this difficulty dramatically increases and actions become systematically misattributed. These results point to schizophrenia and related disorders as a paradigmatic alteration of a 'Who?' system for self-consciousness.
What do you do when faced with wrongdoing—do you blame or do you forgive? Especially when confronted with offences that lie on the more severe end of the spectrum and cause terrible psychological or physical trauma or death, nothing can feel more natural than blame. Indeed, in the UK and the USA, increasingly vehement and righteous public expressions of blame and calls for vengeance have become commonplace; correspondingly, contemporary penal philosophy has witnessed a resurgence of the retributive tradition, in the modern form usually known as the ‘justice’ model. On the other hand, people can and routinely do forgive others, even in cases of severe crime. Evolutionary psychologists argue that both vengeance and forgiveness are universal human adaptations that have evolved as alternative responses to exploitation, and, crucially, strategies for reducing risk of re-offending. We are naturally endowed with both capacities: to blame and retaliate, or to forgive and seek to repair relations. Which should we choose? Drawing on evolutionary psychology, we offer an account of forgiveness and argue that the choice to blame, and not to forgive, is inconsistent with the political values of a broadly liberal society and can be instrumentally counter-productive to reducing the risk of future re-offending. We then sketch the shape of penal philosophy and criminal justice policy and practice with forgiveness in place as a guiding ideal.
This book provides an extensive treatment of Husserl's phenomenology of time-consciousness. Nicolas de Warren uses detailed analysis of texts by Husserl, some only recently published in German, to examine Husserl's treatment of time-consciousness and its significance for his conception of subjectivity. He traces the development of Husserl's thinking on the problem of time from Franz Brentano's descriptive psychology, and situates it in the framework of his transcendental project as a whole. Particular discussions include the significance of time-consciousness for other phenomenological themes: perceptual experience, the imagination, remembrance, self-consciousness, embodiment, and the consciousness of others. The result is an illuminating exploration of how and why Husserl considered the question of time-consciousness to be the most difficult, yet also the most central, of all the challenges facing his unique philosophical enterprise.
To generations of lawyers, H. L. A. Hart is known as the twentieth century's greatest legal philosopher. Whilst his scholarship revolutionized the study of law, as a social commentator he gave intellectual impetus to the liberalizing of society in the 1960s. But behind his public success, Hart struggled with demons. His Jewish background, ambivalent sexuality, and unconventional marriage all fuelled his psychological complexity; allegations of espionage, though immediately quashed, nearly destroyed him. Nicola Lacey's biography explores the forces that shaped an extraordinary life.
We prove that four theses commonly associated with coherentism are incompatible with the representation of a belief state as a logically closed set of sentences. The result is applied to the conventional coherence interpretation of the AGM theory of belief revision, which appears not to be tenable. Our argument also counts against the coherentistic acceptability of a certain form of propositional holism. We argue that the problems arise as an effect of ignoring the distinction between derived and non-derived beliefs, and we suggest that the kind of coherence relevant to epistemic justification is the coherence of non-derived beliefs.
Aggregating snippets from the semantic memories of many individuals may not yield a good map of an individual’s semantic memory. The authors analyze the structure of semantic networks that they sampled from individuals through a new snowball sampling paradigm during approximately 6 weeks of 1-hr daily sessions. The semantic networks of individuals have a small-world structure with short distances between words and high clustering. The distribution of links follows a power law truncated by an exponential cutoff, meaning that most words are poorly connected and a minority of words has a high, although bounded, number of connections. Existing aggregate networks mirror the individual link distributions, and so they are not scale-free, as has been previously assumed; still, there are properties of individual structure that the aggregate networks do not reflect. A simulation of the new sampling process suggests that it can uncover the true structure of an individual’s semantic memory.
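For concreteness, a link distribution that follows a power law truncated by an exponential cutoff has the general form (the notation below is the standard one, not taken from the paper):

$$P(k) \;\propto\; k^{-\gamma} e^{-k/\kappa},$$

where k is the number of links a word has, \gamma is the power-law exponent, and \kappa sets the scale beyond which highly connected words become exponentially rare; a scale-free network would correspond to the limit \kappa \to \infty.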
In the present article, I introduce Husserl’s analyses of ‘natural causality’ and ‘volitional causality’, which are collected in the volume ‘Wille und Handlung’ of the Husserliana edition Studien zur Struktur des Bewußtseins. My aim is to show that Husserl’s insight into these phenomena enables us to understand more clearly both the specificity of, and the relation between, the motivational nexus belonging to the sphere of the will and the causal laws of nature. In light of this understanding, in the last part of the article I reflect on whether and to what extent there is, in fact, an ontological and epistemological compatibility between volitional causality and natural causality.
Herbert Lionel Adolphus Hart was born in Yorkshire in 1907 to second-generation Jewish immigrants. Having won a scholarship to Oxford University, he went on to become the most famous legal philosopher of the twentieth century. From 1932-40 H.L.A. Hart practised as a barrister in London. He was pronounced physically unfit for military service in 1940, and was recruited by MI5, where he worked until 1945. During his time at the Bar he had continued to study philosophy, and at MI5 his interest was further stimulated by his philosopher colleagues in MI6, Stuart Hampshire and Gilbert Ryle. After the war, Hart returned to Oxford to take up a philosophy fellowship, later to become Professor of Jurisprudence. H.L.A. Hart single-handedly reinvented the philosophy of law and influenced the nation's thinking in the 1960s on abortion, the legalization of homosexuality, and on capital punishment. Hart's approach to legal philosophy was at once disarmingly simple and breathtakingly ambitious, combining as it did the insights of Austin and Bentham and the new linguistic philosophy of J.L. Austin and Ludwig Wittgenstein. He sought to elucidate a concept of law which would be of relevance to all forms of law, wherever or whenever they arose: his bestselling book, The Concept of Law, has sold tens of thousands of copies worldwide. In 1941, he married Jenifer Williams (a high-ranking civil servant, later an Oxford academic) with whom he had four children. Their relationship was an enduring if unconventional one. In the early 1950s, Jenifer was rumoured to be having a long-standing affair with Isaiah Berlin, one of Hart's closest friends. She was also, falsely, accused by the Sunday Times of having been a Russian spy, an allegation which was all the more scandalous given Hart's position at MI5 during the War. Nicola Lacey draws on Hart's previously unpublished diaries and letters to reveal a complex inner life. Outwardly successful, Hart was in fact tormented by doubts about his intellectual abilities, his sexual identity and his capacity to form close relationships. Her biography also sheds fascinating light on the origins of his ideas, and assesses his overall contribution. Above all, it chronicles a life which had a depth and impact far greater than many of Hart's readers have realized.
Shortlisted for the 2005 British Academy Book prize, Nicola Lacey's entrancing biography recounts the life of H.L.A. Hart, the pre-eminent legal philosopher of the twentieth century. Following Hart's life from modest origins as the son of Jewish tailor parents in Yorkshire to worldwide fame as the most influential English-speaking legal theorist of the post-War era, the book traces his successive metamorphoses: from Yorkshire schoolboy to Oxford scholar, from government intelligence officer to Professor of Jurisprudence, from awkward bachelor to family figurehead. In the tradition of Ray Monk's biography of Wittgenstein, Nicola Lacey paints an absorbing picture of intellectual and psychological development, of a mind struggling to cope with intellectual self-doubt, uncertain sexuality, a difficult marriage and an anti-semitic society. In depicting the evolution of Hart's life and mind, Lacey provides a vivid recreation of both the intellectual and social climate of Oxford in the post-War era.
In a seminal book, Alvin I. Goldman outlines a theory for how to evaluate social practices with respect to their “veritistic value”, i.e., their tendency to promote the acquisition of true beliefs (and impede the acquisition of false beliefs) in society. In the same work, Goldman raises a number of serious worries for his account. Two of them concern the possibility of determining the veritistic value of a practice in a concrete case because (1) we often don't know what beliefs are actually true, and (2) even if we did, the task of determining the veritistic value would be computationally extremely difficult. Neither problem is specific to Goldman's theory and both can be expected to arise for just about any account of veritistic value. It is argued here that the first problem does not pose a serious threat to large classes of interesting practices. The bulk of the paper is devoted to the computational problem, which, it is submitted, can be addressed in promising terms by means of computer simulation. In an attempt to add vividness to this proposal, an up-and-running simulation environment (Laputa) is presented and put to some preliminary tests.
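As a rough indication of what simulation-based estimation of veritistic value can look like, here is a toy Monte Carlo sketch of my own (not the Laputa environment itself, and much simpler): it compares the population's mean credence in a true proposition before and after a simple social practice, namely full deference to the majority of individual reports.

```python
# Toy Monte Carlo estimate of the veritistic value of a practice (illustrative only):
# mean credence in the truth before vs. after everyone defers to the majority verdict.
import random

def run_once(n_agents=21, reliability=0.7):
    # each agent's private inquiry is correct with the given reliability
    correct = [random.random() < reliability for _ in range(n_agents)]
    # private credence in the truth: reliability if one's report is correct, else 1 - reliability
    private = [reliability if c else 1 - reliability for c in correct]
    # practice under evaluation: everyone fully adopts the majority verdict
    majority_correct = sum(correct) > n_agents / 2
    pooled = [1.0 if majority_correct else 0.0] * n_agents
    return sum(private) / n_agents, sum(pooled) / n_agents

trials = [run_once() for _ in range(10_000)]
print(sum(b for b, _ in trials) / len(trials),   # estimated veritistic value before the practice
      sum(a for _, a in trials) / len(trials))   # ... and after it
```

With the illustrative numbers used here, the estimate rises from about 0.58 to roughly 0.98, i.e., the practice comes out as having positive veritistic value in Goldman's sense.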
If you believe more things, you thereby run a greater risk of being in error than if you believe fewer things. From the point of view of avoiding error, it is best not to believe anything at all, or to have very uncommitted beliefs. But given that we all in fact entertain many specific beliefs, this recommendation is obviously in flagrant dissonance with our actual epistemic practice. Let us call the problem raised by this apparent conflict the Addition Problem. In this paper we will find reasons to reject a particular premise used in the formulation of the Addition Problem, namely, the fundamental premise according to which believing more things increases the risk of error. As we will see, acquiring more beliefs need not decrease the probability of the whole, and hence need not increase the risk of error. In fact, more beliefs can mean an increase in the probability of the whole and a corresponding decrease in the risk of error. We will consider the Addition Problem as it arises in the context of the coherence theory of epistemic justification, while keeping firmly in mind that the point we wish to make is of epistemological importance also outside the specific coherentist dispute. The problem of determining exactly how the probability of the whole system depends on such factors as coherence, reliability and independence will be seen to open up an interesting area of research in which the theory of conditional independence structures is a helpful tool.
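A simple worked example of my own (not taken from the paper) illustrates how adding a belief can raise the probability of the whole system when the new belief coheres with the old one and brings its own evidential support. Suppose P(A) = 1/2 and a report E_1 supports A with likelihood ratio 3, so P(A | E_1) = 3/4. Now add a belief B that coheres with A, say P(B | A) = 0.9 and P(B | ¬A) = 0.1, supported by a second, conditionally independent report E_2 with likelihood ratio 3 for B, each report bearing only on its own proposition. Then

$$P(A \wedge B \mid E_1, E_2) \;=\; \frac{0.45 \cdot 3 \cdot 3}{0.45 \cdot 3 \cdot 3 + 0.05 \cdot 3 + 0.05 \cdot 3 + 0.45} \;=\; \frac{4.05}{4.80} \;\approx\; 0.84 \;>\; 0.75 \;=\; P(A \mid E_1),$$

so the enlarged belief system is, given the total evidence, more probable than the smaller system was given its evidence.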
I challenge a cornerstone of the Gettier debate: that a proposed analysis of the concept of knowledge is inadequate unless it entails that people don’t know in Gettier cases. I do so from the perspective of Carnap’s methodology of explication. It turns out that the Gettier problem per se is not a fatal problem for any account of knowledge, thus understood. It all depends on how the account fares regarding other putative counterexamples and the further Carnapian desiderata of exactness, fruitfulness and simplicity. Carnap proposed his methodology more than a decade before Gettier’s seminal paper appeared, making the present solution to the problem a candidate for being the least ad hoc proposal on the market, one whose independent standing cannot be questioned, among solutions that depart from the usual method of revising a theory of knowledge in the light of counterexamples. As an illustration of the method at work, I reconstruct reliabilism as an attempt to provide an explication of the concept of knowledge.
In recent years much research has been undertaken regarding the feasibility of the human uterine transplant (UTx) as a treatment for absolute uterine factor infertility. Should it reach clinical application, this procedure would allow such individuals what is often a much-desired opportunity to become not only social mothers, or genetic and social mothers, but mothers in a social, genetic and gestational sense. Like many experimental transplantation procedures such as face, hand, corneal and larynx transplants, UTx as a therapeutic option falls firmly into the camp of the quality-of-life transplant, undertaken with the aim not to save a life but to enrich one. However, unlike most of these novel procedures – where one would be unlikely to find a willing living donor or an ethics committee that would sanction such a donation – the organs to be transplanted in UTx are potentially available from both living and deceased donors. In this article, in the light of the recent nine-case research trial in Sweden, which used uteri obtained from living donors, and the assertions on the part of a number of other research teams currently preparing trials that they will only be using deceased donors, I explore the question of whether, in the case of UTx, there exist compelling moral reasons to prefer the use of deceased donors despite the benefits that may be associated with the use of organs obtained from the living.