A real x is -Kurtz random if it is in no closed null set. We show that there is a cone of -Kurtz random hyperdegrees. We characterize lowness for -Kurtz randomness as being -dominated and -semi-traceable.
We prove that superhigh sets can be jump traceable, answering a question of Cole and Simpson. On the other hand, we show that such sets cannot be weakly 2-random. We also study the class $\mathrm{superhigh}^\diamond$ and show that it contains some, but not all, of the noncomputable K-trivial sets.
Recent results on initial segments of the Turing degrees are presented, and some conjectures about initial segments that have implications for the existence of nontrivial automorphisms of the Turing degrees are indicated.
This is a review of The Turing Guide (2017), written by Jack Copeland, Jonathan Bowen, Mark Sprevak, Robin Wilson, and others. The review includes a new sociological approach to the problem of computability in physics.
We show that in the setting of fair-coin measure on the power set of the natural numbers, each sufficiently random set has an infinite subset that computes no random set. That is, there is an almost sure event $\mathcal{A}$ such that if $X \in \mathcal{A}$, then X has an infinite subset Y such that no element of $\mathcal{A}$ is Turing computable from Y.
In algorithmic randomness, when one wants to define a randomness notion with respect to some non-computable measure λ, a choice needs to be made. One approach is to allow randomness tests to access the measure λ as an oracle. The other, "Hippocratic" approach is the opposite one: the randomness tests are completely effective and do not have access to the information contained in λ. While the Hippocratic approach is in general much more restrictive, there are cases where the two coincide. The first author showed in 2010 that in the particular case where the notion of randomness considered is Martin-Löf randomness and the measure λ is a Bernoulli measure, classical randomness and Hippocratic randomness coincide. In this paper, we prove that this result no longer holds for other notions of randomness, namely computable randomness and stochasticity.
We show that Carmo and Jones’ condition 5 conflicts with the other conditions on their models for contrary-to-duty obligations. We then propose a resolution to the conflict.
The Grätzer-Schmidt theorem of lattice theory states that each algebraic lattice is isomorphic to the congruence lattice of an algebra. We study the reverse mathematics of this theorem. We also show that: the set of indices of computable lattices that are complete is $\Pi^1_1$-complete; the set of indices of computable lattices that are algebraic is $\Pi^1_1$-complete; the set of compact elements of a computable lattice is $\Pi^1_1$ and can be $\Pi^1_1$-complete; and the set of compact elements of a distributive computable lattice is $\Pi^0_3$, and there is an algebraic distributive computable lattice such that the set of its compact elements is $\Pi^0_3$-complete.
We divide the class of infinite computable trees into three types. For the first and second types, 0' computes a nontrivial self-embedding, while for the third type 0'' computes a nontrivial self-embedding. These results are optimal, and we obtain partial results concerning the complexity of nontrivial self-embeddings of infinite computable trees considered up to isomorphism. We show that every infinite computable tree must have either an infinite computable chain or an infinite $\Pi^0_1$ antichain. This result is optimal and has connections to the program of reverse mathematics.
We consider two axioms of second-order arithmetic. These axioms assert, in two different ways, that infinite but narrow binary trees always have infinite paths. We show that both axioms are strictly weaker than Weak König's Lemma, and incomparable in strength to the dual statement (WWKL) that wide binary trees have paths.
We affirm a conjecture of Sacks [1972] by showing that every countable distributive lattice is isomorphic to an initial segment of the hyperdegrees, $\mathcal{D}_h$. In fact, we prove that every sublattice of any hyperarithmetic lattice (and so, in particular, every countable, locally finite lattice) is isomorphic to an initial segment of $\mathcal{D}_h$. Corollaries include the decidability of the two-quantifier theory of $\mathcal{D}_h$ and the undecidability of its three-quantifier theory. The key tool in the proof is a new lattice representation theorem that provides a notion of forcing for which we can prove a version of the fusion lemma in the hyperarithmetic setting and so the preservation of $\omega_1^{CK}$. Somewhat surprisingly, the set-theoretic analog of this forcing does not preserve $\omega_1$. On the other hand, we construct countable lattices that are not isomorphic to any initial segment of $\mathcal{D}_h$.
This contribution offers a comprehensive discussion of the text "Humanismus und Christentum" by the Danish philosopher and theologian Knud E. Løgstrup. It situates the text in its intellectual-historical context and analyzes its most important arguments as well as its central thesis, according to which humanism and Christianity share a decisive principle insofar as both understand ethics as "silent" or "unspoken." It further shows how Løgstrup's text anticipates central ideas of his later publications, especially of his main work, The Ethical Demand (Die ethische Forderung).
Medicalization is frequently defined as a process by which some non-medical aspects of human life come to be considered as medical problems. Overdiagnosis, on the other hand, is most often defined as diagnosing a biomedical condition that, in the absence of testing, would not cause symptoms or death in the person's lifetime. Medicalization and overdiagnosis are related concepts, as both expand the extension of the concept of disease. Both are often used normatively to critique unwarranted or contested expansion of medicine and to address health services that are considered unnecessary, futile, or even harmful. However, there are important differences between the concepts, as not all cases of overdiagnosis are medicalization and not all cases of medicalization are overdiagnosis. The objective of this article is to clarify the differences between medicalization and overdiagnosis. It demonstrates how the subject matter of medicalization has traditionally been non-medical phenomena, while the subject matter of overdiagnosis has been biological or biomolecular conditions or processes acknowledged as potentially harmful. They also refer to different types of uncertainty: medicalization is concerned with indeterminacy, while overdiagnosis is concerned with lack of prognostic knowledge. Medicalization deals with sickness, overdiagnosis with disease. Despite these differences, medicalization and overdiagnosis are becoming more alike. Medicalization is expanding, encompassing the more "technical" aspects of overdiagnosis, while overdiagnosis is becoming more ideologized. Moreover, with new trends in modern medicine, such as P4 medicine, medicalization will become all-encompassing, while overdiagnosis may more or less dissolve. In the end they may converge in some total "iatrogenization." In doing so, the concepts may lose their precision and critical sting.
We demonstrate how to validly quantify into hyperintensional contexts involving non-propositional attitudes like seeking, solving, calculating, worshipping, and wanting to become. We describe and apply a typed extensional logic of hyperintensions that preserves compositionality of meaning, referential transparency, and substitutivity of identicals also in hyperintensional attitude contexts. We specify and prove rules for quantifying into hyperintensional contexts. These rules presuppose a rigorous method for substituting variables into hyperintensional contexts, and this method is described. We prove the following. First, it is always valid to quantify into hyperintensional attitude contexts and over hyperintensional entities. Second, factive empirical attitudes additionally validate quantifying over intensions and extensions, and so do non-factive attitudes, both empirical and non-empirical, provided the entity to be quantified over exists. We focus mainly on mathematical attitudes, because they are uncontroversially hyperintensional.
In this article we discuss what we call the deliberative division of epistemic labor. We present evidence that the human tendency to engage in motivated reasoning in defense of our beliefs can facilitate the occurrence of divisions of epistemic labor in deliberations among people who disagree. We further present evidence that these divisions of epistemic labor tend to promote beliefs that are better supported by the evidence. We show that promotion of these epistemic benefits stands in tension with what extant theories in epistemology take rationality to require in cases of disagreement. We argue that the epistemic benefits that result from the deliberative division of epistemic labor can provide epistemic reason to maintain confidence in cases of disagreement. We then show that the deliberative division of epistemic labor constitutes a distinct kind of epistemic dependence.
Overdiagnosis and disease are related concepts. Widened conceptions of disease increase overdiagnosis and vice versa. This is partly because there is a close and complex relationship between disease and overdiagnosis. In order to address the problems with overdiagnosis, we may benefit from a closer understanding of this relationship. Accordingly, the objective of this article is to elucidate the relationship between disease and overdiagnosis. To do so, the article starts by scrutinizing how overdiagnosis can explain the expansion of the concept of disease. Then it investigates how definitions of disease address various challenges of overdiagnosis. The article specifically investigates recent attempts to clarify the relationship between the concepts of disease and overdiagnosis. Several shortcomings are identified and lead to a closer analysis of overdiagnosis in the diagnostic process. Contrary to recent contributions to the field, it is argued that cases of overdiagnosis are not cases of disease. They are non-verified labellings of disease. It is revealed how overdiagnosis establishes an unwarranted link between indicative phenomena, such as polyps or cell changes, and harm, and thereby generates a link to disease. One implication of this study is that we should stop attributing disease language to indicative phenomena. That is, we should stop calling it "cancer screening" when we are actually searching for polyps. Another implication is that we should strive for scientific progress in differentiating phenomena that are of negative value to us from those that are not. In overdiagnosis we diagnose something that is not disease: it is over-diagnosis.
The degree of doxastic revision required in response to evidence of disagreement is typically thought to be a function of our beliefs about (1) our interlocutor's familiarity with the relevant evidence and arguments, and their intellectual capacities and virtues, relative to our own, or (2) the expected probability of our interlocutor being correct, conditional on our disagreeing. While these two factors are typically used interchangeably, I show that they have an inverse correlation in cases of disagreement about politically divisive propositions. This presents us with a puzzle about the epistemic impact of disagreement in these cases. The most significant disagreements on (1) are the least significant disagreements on (2), and vice versa. I show that assessing the epistemic status of an interlocutor by reference to either (1) or (2) has uncomfortable consequences in these cases. I then argue that this puzzle cannot be escaped by claiming that we usually have dispute-independent reason to reject the significance of politically charged disagreement altogether.
The Scandinavian welfare states have public health care systems with universal coverage and traditionally little influence of private insurance and private provision. Due to rising costs, elaborate public control of health care, and significant technological development in health care, priority setting came on the public agenda comparatively early in the Scandinavian countries. The development of health care priority setting has been partly homogeneous and appears to follow certain phases. This can be of broader interest, as it may shed light on alternative models and strategies in health care priority setting. Some general trends have been identified: from principles to procedures, from closed to open processes, and from experts to participation. Five general approaches have been recognized: the moral principles and values based approach, the moral principles and economic assessment approach, the procedural approach, the expert based practice defining approach, and the participatory practice defining approach. There are pros and cons to all of these approaches. For the time being the fifth approach appears attractive, but its lack of true participation and the lack of clear success criteria may pose significant challenges in the future.
In the debate on conscientious objection in healthcare, proponents of conscience rights often point to the imperative to protect the health professional's moral integrity. Their opponents hold that the moral integrity argument alone can at most justify accommodation of conscientious objectors as a "moral courtesy", as the argument is insufficient to establish a general moral right to accommodation, let alone a legal right. This text draws on political philosophy in order to argue for a legal right to accommodation. The moral integrity argument should be supplemented by the requirement to protect minority rights in liberal democracies. Citizens have a right to live in accordance with their fundamental moral convictions, and a right to equal access to employment. However, this right should not be unconditional, as that would unduly infringe on the rights of other citizens. The right must be limited to cases where the moral basis is more fundamental in a sense that all reasonable citizens in a liberal democracy should accept, such as the constitutive role of the inviolability of human life in liberal democracies. There should be a legal, yet circumscribed, right to accommodation for conscientious objectors refusing to provide healthcare services that they reasonably consider to involve the intentional killing of a human being.
This paper addresses the mereological problem of the unity of structured propositions. The problem is how to make multiple parts interact such that they form a whole that is ultimately related to truth and falsity. The solution I propose is based on a Platonist variant of procedural semantics. I think of procedures as abstract entities that detail a logical path from input to output. Procedures are modeled on a function/argument logic, but are not functions. Instead they are higher-order, fine-grained structures. I identify propositions with particular kinds of molecular procedures containing multiple sub-procedures as parts. Procedures are among the basic entities of my ontology, while propositions are derived entities. The core of a structured proposition is the procedure of predication, which is an instance of the procedure of functional application. The main thesis I defend is that procedurally conceived propositions are their own unifiers detailing how their parts interact so as to form a unit. They are not unified by one of their constituents, e.g., a relation or a sub-procedure, on pain of regress. The relevant procedural semantics is Transparent Intensional Logic, a hyperintensional, typed λ-calculus, whose λ-terms express four different kinds of procedures. While demonstrating how the theory works, I place my solution in a wider historical and systematic context.
Theories of structured meanings are designed to generate fine-grained meanings, but they are also liable to overgenerate structures, thus drawing structural distinctions without a semantic difference. I recommend the proliferation of very fine-grained structures, so that we are able to draw any semantic distinctions we think we might need. But, in order to contain overgeneration, I argue we should insert some degree of individuation between logical equivalence and structural identity, based on structural isomorphism. The idea amounts to forming an equivalence class of different structures according to one or more formal criteria and designating a privileged element as a representative of all the elements, i.e., a first among equals. The proposed method yields a cluster of notions of co-hyperintensionality. As a test case, I consider a recent objection levelled against the act theory of structured propositions. I also respond to an objection against my methodology.
Fairness, the notion that people deserve or have rights to certain resources or kinds of treatment, is a fundamental dimension of moral cognition. Drawing on recent evidence from economics, psychology, and neuroscience, we ask whether self-interest is always intuitive, requiring self-control to override with reasoning-based fairness concerns, or whether fairness itself can be intuitive. While we find strong support for rejecting the notion that self-interest is always intuitive, the literature has reached conflicting conclusions about the neurocognitive systems underpinning fairness. We propose that this disagreement can largely be resolved in light of an extended Social Heuristics Hypothesis. Divergent findings may be attributed to the interpretation of behavioral effects of ego depletion or neurostimulation, reverse inference from brain activity to the underlying psychological process, and insensitivity to social context and inter-individual differences. To better dissect the neurobiological basis of fairness, we outline how future research should embrace cross-disciplinary methods that combine psychological manipulations with neuroimaging, and that can probe inter-individual and cultural heterogeneity.
This article presents and evaluates arguments for the claim that an approval procedure for genome-edited organisms for food or feed should include a broad assessment of societal, ethical and environmental concerns; so-called non-safety assessment. The core of the analysis is the requirement of the Norwegian Gene Technology Act that the sustainability, ethical and societal impacts of a genetically modified organism be assessed prior to regulatory approval of the novel products. The article gives an overview of how this requirement has been implemented in regulatory practice, demonstrating that such assessment is feasible and justified. Even in situations where genome-edited organisms are considered comparable to non-modified organisms in terms of risk, the technology may have, in addition to social benefits, negative impacts that warrant assessments of the kind required in the Act. The main reason is the disruptive character of the genome editing technologies due to their potential for novel, ground-breaking solutions in agriculture and aquaculture, combined with the economic framework shaped by the patent system. Food is fundamental for a good life, biologically and culturally, which warrants stricter assessment procedures than what is required for other industries, at least in countries like Norway with a strong tradition of national control over agricultural markets and breeding programs.
There is a tendency in the business ethics literature to think of ethics in restrictive terms: what one should not do, and how to control this. Drawing on Lawrence Kohlberg's theory of moral development, the paper focuses on, and draws attention to, another more positive aspect of ethics: the capacity of ethics to inspire and empower individuals as well as groups. To understand and facilitate such empowerment, it is argued that it is necessary to move beyond Kohlberg's justice reasoning so as to appreciate the value and importance of feeling and care. Accordingly, we draw upon case study material to review the meaning of Kohlberg's higher stages (5, 6 and 7) and to question the meaning of ethical reasoning. With such deeper understanding of particular ethical codes or practices, it is thought that members of organisations may come closer to the spirit, as opposed to the letter, of ethical conduct in organisations. This, we argue, is consistent with the degree of trust and integrity demanded by leaner, post-bureaucratic ways of organizing and conducting business, as well as being personally beneficial to the people involved.
New emerging biotechnologies, such as gene editing, vastly extend our ability to alter the human being. This comes together with strong aspirations to improve humans not only physically, but also mentally, morally, and socially. These conjoined ambitions aggregate to what can be labelled "the gene editing of super-ego." This article investigates a general way of arguing for new biotechnologies, such as gene editing: if it is safe and efficacious to implement technology X for the purpose of a common good Y, why should we not do so? This is a rhetorical question with a conditional, and may be dismissed as such. Moreover, investigating the question transformed into a formal argument reveals that the argument does not hold either. Nonetheless, the compelling force of the question calls for closer scrutiny, revealing that this way of arguing for biotechnology is based on five assumptions. Analysis of these assumptions shows their significant axiological, empirical, and philosophical challenges. This makes it reasonable to claim that these kinds of question-based promotions of specific biotechnologies fail. Hence, the aspirations to make a super-man with a super-ego appear fundamentally flawed. As these types of moral bioenhancement arguments become more prevalent, a revealing hype test is suggested: what is special about this technology, compared to existing methods, that makes it successful in improving human social characteristics so as to make the world a better place for all? Valid answers to this question will provide good reasons to pursue such technologies. Hence, the aim is not to bar the development of modern biotechnology, but rather to ensure good development and application of highly potent technologies. So far, we still have a long way to go to make persons with a goodness gene.
Logical semantics once again includes structured meanings in its repertoire. The leading idea is that semantic and syntactic structure are more or less isomorphic. A key motive for reintroducing sensitivity to semantic structure is to obtain fine-grained meanings, which are individuated more finely than in possible-world semantics, namely up to necessary equivalence. Just getting the truth-conditions right is deemed insufficient for a full semantic analysis of sentences. This paper surveys some of the most recent contributions to the program of structured meaning, while providing historical background. I suggest that to make substantial advances the program needs to solve the problem of propositional unity and develop an intensional mereology of abstract objects.
The aim of this paper is to present a new logic of technical malfunction. The need for this logic is motivated by a simple-sounding philosophical question: is a malfunctioning corkscrew, which fails to uncork bottles, nonetheless a corkscrew? Or in general terms: is a malfunctioning F, which fails to do what Fs do, nonetheless an F? We argue that 'malfunctioning' denotes the modifier Malfunctioning rather than a property, and that the answer depends on whether Malfunctioning is subsective or privative. If subsective, a malfunctioning F is an F; if privative, a malfunctioning F is not an F. An intensional logic is required to raise and answer the question, because modifiers operate directly on properties and not on sets or individuals. This new logic provides the formal tools to reason about technical malfunction by means of a logical analysis of the sentence "a is a malfunctioning F".
We answer a question of Ambos-Spies and Kučera in the affirmative. They asked whether, when a real is low for Schnorr randomness, it is already low for Schnorr tests.
This article argues that anthropology may represent untapped perspectives of relevance to social theory. The article starts by critically reviewing how anthropology has come to serve as the 'Other' in various branches of social theory, from Marx and Durkheim to Parsons to Habermas, engaged in a hopeless project of positing 'primitive' or 'traditional' society as the opposite of modernity. In contemporary debates, it is becoming increasingly recognized that social theory needs history, back to the axial age and beyond. The possible role of anthropology in theorizing modernity receives far less attention. That role should go much beyond representing a view from 'below' or a politically correct appreciation of cultural diversity. It involves attention to key theoretical concepts and insights developed by maverick anthropologists like Arnold van Gennep, Marcel Mauss, Victor Turner and Gregory Bateson, concepts that uniquely facilitate an understanding of some of the underlying dynamics of modernity.
We present here the papers selected for the volume on the Unity of Propositions problems. After summarizing what the problems are, we locate them in a spectrum from those aiming to provide substantive, reductive explanations, to those with a more deflationary take on the problems.
Soames's cognitive propositions are strings of acts to be performed by an agent, such as predicating a property of an individual. King takes these structured propositions to task for proliferating too easily. King's objection is based on an example that purports to show that three of Soames's propositions are really just one proposition. I translate the informally stated propositions King attributes to Soames into the intensional λ-calculus. It turns out that they are all β-equivalent to the proposition King claims Soames's three propositions are identical to. I argue on philosophical grounds against identifying β-equivalent propositions. The reason is that β-conversion obliterates too many of the procedural distinctions that are central to an act-based theory such as Soames's and which are worth preserving. In fact, β-expansion allows the addition of a fifth proposition that highlights additional procedural distinctions and propositional structure. The welcome conclusion is that we have five procedurally distinct, if equivalent, propositions.
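To illustrate the kind of collapse at issue (a toy example of mine, not King's or Soames's), β-conversion identifies terms that an act-based theory would treat as distinct procedures:

```latex
% The following two terms are \beta-equivalent:
%   (\lambda x.\, Fx)\,a \; =_{\beta} \; Fa
% Yet procedurally they differ: the left-hand term first forms the
% property \lambda x.\,Fx by abstraction and then applies it to a,
% while the right-hand term predicates F of a directly.
(\lambda x.\, Fx)\,a \;=_{\beta}\; Fa
```

On a possible-world semantics the two terms are indistinguishable; on an act-based or procedural theory, conflating them loses exactly the structure the theory is meant to track.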
The topic of this paper is the notion of technical (as opposed to biological) malfunction. It is shown how to form the property being a malfunctioning F from the property F and the property modifier malfunctioning (a mapping taking a property to a property). We present two interpretations of malfunctioning. Both interpretations agree that a malfunctioning F lacks the dispositional property of functioning as an F. However, the subsective interpretation entails that malfunctioning Fs are Fs, whereas the privative interpretation entails that malfunctioning Fs are not Fs. We chart various logical consequences of the respective interpretations and discuss some of the philosophical implications of both.
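The two readings of the modifier can be sketched schematically (my notation, not the paper's; M stands for the modifier malfunctioning, F for a property of individuals):

```latex
% A property modifier maps properties to properties:
%   M : (\iota \to o) \to (\iota \to o)
% Subsective reading: a malfunctioning F is an F.
\forall x\, \big[ (MF)\,x \rightarrow Fx \big]
% Privative reading: a malfunctioning F is not an F.
\forall x\, \big[ (MF)\,x \rightarrow \neg Fx \big]
```

On both readings $(MF)\,x$ entails that x lacks the disposition to function as an F; the readings diverge only on whether F-hood itself survives the modification.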
New technologies facilitate the enhancement of a wide range of human dispositions, capacities, or abilities. While it is argued that we need to set limits to human enhancement, it is unclear where we should find resources to set such limits. Traditional routes for setting limits, such as referring to nature, the therapy-enhancement distinction, and the health-disease distinction, turn out to have shortcomings. Moreover, upon closer scrutiny, the concept of enhancement itself turns out to be based on vague conceptions of what is to be enhanced. Explaining why it is better to become older, stronger, and more intelligent presupposes a clear conception of goodness, which is seldom provided. In particular, the qualitative better is frequently confused with the quantitative more. We may therefore not need "external" measures for setting limits: they are available in the concept of enhancement itself. While there may be shortcomings in traditional sources of limit setting for human enhancement, such as nature, therapy, and disease, such approaches may not be necessary. The specification-of-betterment problem inherent in the conception of human enhancement itself provides means to restrict its unwarranted proliferation. We only need to demand clear, sustainable, obtainable goals for enhancement that are based on evidence, and not on lofty speculations, hypes, analogies, or weak associations. Human enhancements that specify what will become better, and provide adequate evidence, are good and should be pursued. Others should not be accepted.
With Contingency, Irony, and Solidarity, Richard Rorty tries to persuade us that a case for liberalism is better served by historical narrative than by philosophical theory. The liberal ironist is the complex protagonist of Rorty's anti-foundationalist story. Why does Rorty think irony serves, rather than undermines, commitments to liberal democracy? I distinguish political from existential dimensions of irony, consider criticisms of Rorty's ironist, and then draw on recent work by Lear to argue that Rorty's ironist character can nevertheless be recast as an image useful to the self-understanding of contemporary liberal democrats.
The emerging concept of systems medicine is at the vanguard of the post-genomic movement towards 'precision medicine'. It is the medical application of systems biology, the biological study of wholes. Of particular interest, P4 systems medicine is currently promoted as a revolutionary new biomedical approach that is holistic rather than reductionist. This article analyzes its concept of holism, both with regard to methods and to the conceptualization of health and disease. Rather than representing a medical holism associated with basic humanistic ideas, we find a technoscientific holism resulting from altered technological and theoretical circumstances in biology. We argue that this holism, which is aimed at disease prevention and health optimization, points towards an expanded form of medicalization, which we call 'holistic medicalization': each person's whole life process is defined in biomedical, technoscientific terms as quantifiable and controllable, and is subjected to a regime of medical control that is holistic in that it is all-encompassing. It is directed at all levels of functioning, from the molecular to the social, continual throughout life, and aimed at managing the whole continuum from cure of disease to optimization of health. We argue that this medicalization is a very concrete materialization of a broader trend in medicine and society, which we call 'the medicalization of health and life itself'. We explicate this holistic medicalization, discuss potential harms, and conclude by calling for preventive measures aimed at avoiding eventual harmful effects of overmedicalization in systems medicine.