The Turing Test is of limited use for entities differing substantially from human performance levels. We suggest an extension of Turing’s idea to a more differentiated measure - the "Turing Ratio" - which provides a framework for comparing human and algorithmic task performance, up to and beyond human performance levels. Games and talent levels derived from pairwise comparisons provide examples of the concept. We also discuss the related notions of intelligence amplification and task breadth. Intelligence amplification measures total computational efficiency (the computational benefit gained relative to investment, including programmer time, hardware, and so on); we argue that evolutionary computation is a key amplifier of human intelligence. Task breadth is an attempt to weight Turing Ratios by the frequency and importance of the task they measure - doing well at a broad range of tasks is an empirical definition of “intelligence”. Measuring Turing Ratios and considering task breadth, prior knowledge, and time series of the measures may yield long-term insight into both open-ended computational approaches and the underlying task domains being measured.
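The "talent levels derived from pairwise comparisons" mentioned in the abstract are commonly computed with Elo-style rating updates. A minimal sketch follows; the K-factor, starting ratings, and the game outcomes are illustrative assumptions, not values from the paper:

```python
# Elo-style ratings from pairwise game outcomes: one way to derive comparable
# "talent levels" for a human and a program on the same task.
# K-factor, starting ratings, and the outcome sequence are illustrative.

def elo_update(r_a, r_b, score_a, k=32.0):
    """Update two ratings after one game; score_a is 1 if A won, 0 if A lost."""
    expected_a = 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))
    delta = k * (score_a - expected_a)
    return r_a + delta, r_b - delta

# Hypothetical series in which the program beats the human in 3 of 4 games.
r_program, r_human = 1500.0, 1500.0
for program_won in [1, 1, 0, 1]:
    r_program, r_human = elo_update(r_program, r_human, program_won)

print(round(r_program), round(r_human))
```

Because each update transfers the same delta between the two players, total rating is conserved; the resulting rating gap is one concrete realization of a performance ratio derived purely from pairwise comparisons.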
Ethical approval (EA) must be obtained before medical research can start. We describe the differences in obtaining EA for a pseudonymous, non-interventional, observational European study. Sixteen European national coordinators (NCs) of the international study on very old intensive care patients answered an online questionnaire concerning their experience getting EA. Eight of the 16 NCs could apply to one single national ethical committee (EC), while the others had to apply to various regional ECs and/or individual hospital institutional research boards. The time between applying for EA and the first decision varied between 7 days and 300 days. In 9/16 countries informed consent from the patient was not deemed necessary; in 7/16 informed consent was required from the patient or relatives. The upload of coded data to a central database required additional information in 14/16. In 4/16 the NCs had to seek separate approval to keep a subject identification code list to de-pseudonymize the patients should questions arise. Only 2/16 of the NCs agreed that informed consent was necessary for this observational study. Overall, 6/16 of the NCs were satisfied with the entire process and 8/16 were unsatisfied; 11/16 would welcome a European central EC that would judge observational studies for all European countries. Variations in the process and the prolonged time needed to get EA for observational studies hamper inclusion of patients in some European countries. This might have a negative influence on the external validity. Further harmonization of the ethical approval process across Europe would be welcome for low-risk observational studies. Getting ethical approval for low-risk, non-interventional, observational studies varies enormously across European countries.
Oregon is the only state in the United States where a physician may legally prescribe a lethal dose of barbiturate for a patient intending suicide. The Oregon Death with Dignity Act was passed by voters in 1994 and came into effect after much legal wrangling in October of 1997. At the same time, a cabinetmaker named Pat Matheny was struggling with progressive weakness from amyotrophic lateral sclerosis, or ALS. I met with Pat and his family for a lengthy interview in October 1998 in Coos Bay, Oregon, for a television news report on his decision to get a lethal prescription. Below is an extract from that interview. On the day this introduction was written, 10 March 1999, Pat took the prescribed lethal overdose of barbiturates and died at home. His illness was taking his voice, he could not move his hands or legs, and breathing was becoming very difficult. His mother told me he knew that was for him.
This is a book symposium on Steffen Borge’s The Philosophy of Football. It has contributions from William Morgan, Murray Smith and Brian Weatherson with replies from Borge.
Connecting human minds to various technological devices and applications through brain-computer interfaces (BCIs) affords intriguingly novel ways for humans to engage and interact with the world. Not only do BCIs play an important role in restorative medicine, they are also increasingly used outside of medical or therapeutic contexts. A striking peculiarity of BCI technology is that the kind of actions it enables seems to differ from paradigmatic human actions, because effects in the world are brought about by devices such as robotic arms, prostheses, or other machines, and their execution runs through a computer directed by brain signals. In contrast to usual forms of action, the sequence does not need to involve bodily or muscle movements at all. A motionless body, the epitome of inaction, might be acting. How do theories of action relate to such BCI-mediated forms of changing the world? We wish to explore this question through the lenses of three perspectives on agency: subjective experience of agency, philosophical action theory, and legal concepts of action. Our analysis pursues three aims. First, we discuss whether and which BCI-mediated events qualify as actions, according to the main concepts of action in philosophy and law. Second, en passant, we wish to highlight the ten most interesting novelties or peculiarities of BCI-mediated movements. Third, we explore whether these novel forms of movement may have consequences for concepts of agency. More concretely, we think that convincing assessments of BCI-movements require more fine-grained accounts of agency and a distinction between various forms of control during movements. In addition, we show that the disembodied nature of BCI-mediated events causes trouble for the standard legal account of actions as bodily movements.
In an exchange with views from philosophy, we wish to propose that the law ought to reform its concept of action to include some, but not all, BCI-mediated events and sketch some of the wider implications this may have, especially for the venerable legal idea of the right to freedom of thought. In this regard, BCIs are an example of the way in which technological access to yet largely sealed-off domains of the person may necessitate adjusting normative boundaries between the personal and the social sphere. (shrink)
Unlike conceptual analysis, conceptual engineering does not aim to identify the content that our current concepts do have, but the content which these concepts should have. For this method to show the results that its practitioners typically aim for, being able to change meanings seems to be a crucial presupposition. However, certain branches of semantic externalism raise doubts about whether this presupposition can be met. To the extent that meanings are determined by external factors such as causal histories or microphysical structures, it seems that they cannot be changed intentionally. This paper gives an extended discussion of this ‘externalist challenge’. Pace Herman Cappelen’s recent take on this issue, it argues that the viability of conceptual engineering crucially depends on our ability to bring about meaning change. Furthermore, it argues that, contrary to first appearance, causal theories of reference do allow for a sufficient degree of meaning control. To this purpose, it argues that there is a kind of control here called ‘collective long-range control’, and that popular versions of the causal theory of reference imply that people have this kind of control over meanings.
Conceptual engineers aim to revise rather than describe our concepts. But what are concepts? And how does one engineer them? Answering these questions is of central importance for implementing and theorizing about conceptual engineering. This paper discusses and criticizes two influential views of this issue: semanticism, according to which conceptual engineers aim to change linguistic meanings, and psychologism, according to which conceptual engineers aim to change psychological structures. I argue that neither of these accounts can give us the full story. Instead, I propose and defend the Dual Content View of Conceptual Engineering. On this view, conceptual engineering targets concepts, where concepts are understood as having two kinds of contents: referential content and cognitive content. I show that this view is independently plausible and that it gives us a comprehensive account of conceptual engineering that helps to make progress on some of the most difficult problems surrounding conceptual engineering.
It seems natural to think that Carnapian explication and experimental philosophy can go hand in hand. But what exactly explicators can gain from the data provided by experimental philosophers remains controversial. According to an influential proposal by Shepherd and Justus, explicators should use experimental data in the process of ‘explication preparation’. Against this proposal, Mark Pinder has recently suggested that experimental data can directly assist an explicator’s search for fruitful replacements of the explicandum. In developing his argument, he also proposes a novel aspect of what makes a concept fruitful, namely, that it is taken up by the relevant community. In this paper, I defend explication preparation against Pinder’s objections and argue that his uptake proposal conflates theoretical and practical success conditions of explications. Furthermore, I argue that Pinder’s suggested experimental procedure needs substantial revision. I end by distinguishing two kinds of explication projects, and showing how experimental philosophy can contribute to each of them.
Max Deutsch has recently argued that conceptual engineering is stuck in a dilemma. If it is construed as the activity of revising the semantic meanings of existing terms, then it faces an insurmountable implementation problem. If, on the other hand, it is construed as the activity of introducing new technical terms, then it becomes trivial. According to Deutsch, this conclusion need not worry us, however, for conceptual engineering is ill-motivated to begin with. This paper responds to Deutsch by arguing, first, that there is a third construal of conceptual engineering, neglected by him, which renders it both implementable and non-trivial, and second, that even the more ambitious project of changing semantic meanings is no less feasible than other normative projects we currently pursue. Lastly, the value of conceptual engineering is defended against Deutsch’s objections.
Bayesian approaches for estimating multilevel latent variable models can be beneficial in small samples. Prior distributions can be used to overcome small sample problems, for example, when priors that increase the accuracy of estimation are chosen. This article discusses two different but not mutually exclusive approaches for specifying priors. Both approaches aim at stabilizing estimators in such a way that the Mean Squared Error (MSE) of the estimator of the between-group slope will be small. In the first approach, the MSE is decreased by specifying a slightly informative prior for the group-level variance of the predictor variable, whereas in the second approach, the decrease is achieved directly by using a slightly informative prior for the slope. Mathematical and graphical inspections suggest that both approaches can be effective for reducing the MSE in small samples, thus rendering them attractive in these situations. The article also discusses how these approaches can be implemented in Mplus.
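The second approach described in the abstract (a slightly informative prior directly on the slope) can be sketched in a toy simulation outside Mplus. All numbers below (sample size, prior scale, noise level) are illustrative assumptions, not values from the article; the sketch only shows the general mechanism by which a shrinkage prior trades a little bias for a smaller MSE in a small sample:

```python
import numpy as np

# Toy illustration: shrinking a slope estimate toward zero with a slightly
# informative Normal(0, tau^2) prior. Sample size, prior scale, and noise
# level are assumed for illustration only.
rng = np.random.default_rng(0)
n_groups, beta_true, sigma2, tau2 = 10, 0.5, 1.0, 1.0

sq_err_ols, sq_err_prior = [], []
for _ in range(4000):
    x = rng.normal(size=n_groups)
    y = beta_true * x + rng.normal(scale=np.sqrt(sigma2), size=n_groups)
    b_ols = (x @ y) / (x @ x)                   # maximum-likelihood slope
    b_post = (x @ y) / (x @ x + sigma2 / tau2)  # posterior mean under the prior
    sq_err_ols.append((b_ols - beta_true) ** 2)
    sq_err_prior.append((b_post - beta_true) ** 2)

# The shrunken estimator is slightly biased but achieves a smaller MSE here.
print(np.mean(sq_err_prior) < np.mean(sq_err_ols))
```

The prior enters the estimator as an extra term `sigma2 / tau2` in the denominator; the smaller `tau2` (the more informative the prior), the stronger the shrinkage and the larger the bias, so "slightly informative" priors balance these two effects.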
Multi-stakeholder initiatives (MSIs) have become a vital part of the organizational landscape for corporate social responsibility. Recent debates have explored whether these initiatives represent opportunities for the “democratization” of transnational corporations, facilitating civic participation in the extension of corporate responsibility, or whether they constitute new arenas for the expansion of corporate influence and the private capture of regulatory power. In this article, we explore the political dynamics of these new governance initiatives by presenting an in-depth case study of an organization often heralded as a model MSI: the Forest Stewardship Council (FSC). An effort to address global deforestation in the wake of failed efforts to agree a multilateral convention on forests at the Rio Summit in 1992, the FSC was launched in 1993 as a non-state regulatory experiment: a transnational MSI administering a global eco-labeling scheme for timber and forest products. We trace the scheme’s evolution over the past two decades, showing that while the FSC has successfully facilitated multi-sectoral determination of new standards for forestry, it has nevertheless failed to transform commercial forestry practices or stem the tide of tropical deforestation. Applying a neo-Gramscian analysis to the organizational evolution of the FSC, we examine how broader market forces and resource imbalances between non-governmental and market actors can serve to limit the effectiveness of MSIs in the current neo-liberal environment. This presents dilemmas for NGOs, which can lead to their defection, ultimately undermining the organizational legitimacy of MSIs.
The ancient Stoics repeatedly stressed the monolithic comprehensiveness of their philosophy, and this book is the only one to provide a holistic grasp of their attempt to synthesize the whole of the human condition into a unified view. Originally published in 1962, _An Essay on the Unity of Stoic Philosophy_ was far ahead of its time. Now a pivotal text, it lays out the core ideas of Stoicism and their interconnection against the backdrop of Aristotelian philosophy, providing a coherent understanding of the many—and sometimes divergent—philosophies the Stoics formulated. At once penetrating and lucid, Johnny Christensen’s book is brought back into print in a second edition for a new audience.
Ethics and Experience introduces students to the key topics in moral theory through provocative moral issues—just war, abortion, physician-assisted suicide, the death penalty, and more. Steffen helps students bridge the gap between ethical theory and experience by developing a “common agreement” ethical system that is applicable to a variety of moral problems and issues, using clear language and real-life examples.
‘There is no place in the phenomenology of fully absorbed coping’, writes Hubert Dreyfus, ‘for mindfulness. In flow, as Sartre sees, there are only attractive and repulsive forces drawing appropriate activity out of an active body’. Among the many ways in which history animates dynamical systems at a range of distinctive timescales, the phenomena of embodied human habit, skilful movement, and absorbed coping are among the most pervasive and mundane, and the most philosophically puzzling. In this essay we examine both habitual and skilled movement, sketching the outlines of a multidimensional framework within which the many differences across distinctive cases and domains might be fruitfully understood. Both the range of movement phenomena which can plausibly be seen as instances of habit or skill, and the space of possible theories of such phenomena are richer and more disparate than philosophy easily encompasses. We seek to bring phenomenology into contact with relevant movements in psychological theories of skilful action, in the belief that phenomenological philosophy and cognitive science can be allies rather than antagonists.
In this paper, I am going to present a condensed version of my theory of what sport is from my book The Philosophy of Football. In that work, I took my starting point in Bernard Suits’ celebrated,...
Currently, electronic agents are being designed and implemented that, unprecedentedly, will be capable of performing legally binding actions. These advances necessitate a thorough treatment of their legal consequences. In our paper, we first demonstrate that electronic agents behave in ways structurally similar to human agents. Then we study how declarations of intention stated by an electronic agent are related to ordinary declarations of intention given by natural persons or legal entities, and also how the actions of electronic agents in this respect have to be classified under German law. We discuss four different approaches to classifying agent declarations. As one of these, we propose the concept of an electronic person (i.e., agents with limited liability), enrolment of agents into an agent register, and agent liability funds as means to serve the needs of all contracting parties.
It is obvious that we would not want to demand that an agent's beliefs at different times exhibit the same sort of consistency that we demand from an agent's simultaneous beliefs; there's nothing irrational about believing P at one time and not-P at another. Nevertheless, many have thought that some sort of coherence or stability of beliefs over time is an important component of epistemic rationality.
This commentary draws critical attention to the ongoing commodification of trust in policy and scholarly discourses of artificial intelligence and society. Based on an assessment of publications discussing the implementation of AI in governmental and private services, our findings indicate that this discursive trend towards commodification is driven by the need for a trusting population of service users to harvest data at scale and leads to the discursive construction of trust as an essential good on a par with data as raw material. This discursive commodification is marked by a decreasing emphasis on trust understood as the expected reliability of a trusted agent, and increased emphasis on instrumental and extractive framings of trust as a resource. This tendency, we argue, does an ultimate disservice to developers, users, and systems alike, insofar as it obscures the subtle mechanisms through which trust in AI systems might be built, making it less likely that it will be.
This paper argues for explanatory eliminativism about topics relative to the domain of conceptual engineering. It has become usual to think that topics serve an important explanatory role in theories of conceptual engineering, namely, to determine the limits of revision. I argue, first, that such limits can be understood either as the normative limits pertaining to the justification of conceptual engineering, as the metaphysical limits pertaining to the identity of the concepts in question, or as the terminological limits pertaining to usage of the original terminology. Second, I argue that the metaphysical reading is disputable as a theory of concepts and inconsequential for conceptual engineers, and that neither of the two leading accounts of topics that have been presented in the literature—the samesaying account and functionalism—determines the limits of revision in either of the two remaining senses. In the absence of more promising competitors, I conclude that there is no theoretical role for topics to play in theories of conceptual engineering. An upshot of my argument is that conceptual engineers should stop worrying about things like topic continuity, and instead shift their attention to the issues that really matter for justifying conceptual revisions or replacements, making terminological choices, and underpinning conceptual engineering with a theory of concepts.
Any account of “what is special about the human brain” must specify the neural basis of our unique ability to produce speech and delineate how these remarkable motor capabilities could have emerged in our hominin ancestors. Clinical data suggest that the basal ganglia provide a platform for the integration of primate-general mechanisms of acoustic communication with the faculty of articulate speech in humans. Furthermore, neurobiological and paleoanthropological data point to a two-stage model of the phylogenetic evolution of this crucial prerequisite of spoken language: monosynaptic refinement of the projections of motor cortex to the brainstem nuclei that steer laryngeal muscles, presumably as part of a “phylogenetic trend” associated with increasing brain size during hominin evolution; and subsequent vocal-laryngeal elaboration of cortico-basal ganglia circuitries, driven by human-specific FOXP2 mutations. This concept implies vocal continuity of spoken language evolution at the motor level, elucidating the deep entrenchment of articulate speech into a “nonverbal matrix”, which is not accounted for by gestural-origin theories. Moreover, it provides a solution to the question of the adaptive value of the “first word”, since even the earliest and most simple verbal utterances must have increased the versatility of vocal displays afforded by the preceding elaboration of monosynaptic corticobulbar tracts, giving rise to enhanced social cooperation and prestige. At the ontogenetic level, the proposed model assumes age-dependent interactions between the basal ganglia and their cortical targets, similar to vocal learning in some songbirds. In this view, the emergence of articulate speech builds on the “renaissance” of an ancient organizational principle and, hence, may represent an example of “evolutionary tinkering”.
Human beings are the only creatures known to engage in sport. We are sporting animals, and our favourite pastime of football is the biggest sport spectacle on earth. The Philosophy of Football presents the first sustained, in-depth philosophical investigation of the phenomenon of football. In explaining the complex nature of football, the book draws on literature in sociology, history, psychology and beyond, offering real-life examples of footballing actions alongside illuminating thought experiments. The book is organized around four main themes considering the character, nature, analysis and aesthetics of football. It discusses football as an extra-ordinary, unnecessary, rule-based, competitive, skill-based physical activity, articulated as a social kind that is fictional in character, and where fairness or fair play - contrary to much sport ethical discussion - is not centre stage. Football, it is argued, is a constructive-destructive contact sport and, in comparison to other sports, is lower scoring and more affected by chance. The latter features present to its spectators a more unpredictable game and a darker, more complex and denser drama to enjoy. The Philosophy of Football deepens our understanding of the familiar features of the game, offering novel interpretations of what football is, how and why we play it, and what the game offers its followers that makes us so eagerly await match day. This is essential reading for anybody with an interest in the world's most popular game or in the philosophical or social study of sport.
This is the narrative of the Scandinavian scientist, Hans Christian Ørsted, the discoverer of electromagnetism. Ørsted was also one of the cultural leaders and organizers of the Danish Golden Age, making significant contributions to aesthetics, philosophy, pedagogy, politics, and religion.
In his recent book Criticism and Social Change Frank Lentricchia melodramatically pits his critical hero Kenneth Burke, advocate of the intellect’s intervention in social life, against the villainous Paul de Man, “undisputed master in the United States of what is called deconstruction.” Lentricchia charges that “the insidious effect of [de Man’s] work is not the proliferating replication of his way of reading … but the paralysis of praxis itself: an effect that traditionalism, with its liberal view of the division of culture and political power, should only applaud.”1 He goes on to prophesy that “The deconstruction of deconstruction will reveal, against apparent intention, a tacit political agenda after all, one that can only embarrass deconstruction, particularly its younger proponents whose activist experiences within the socially wrenching upheavals of the 1960s and early 1970s will surely not permit them easily to relax, without guilt and self-hatred, into resignation and ivory tower despair” [CSC, p. 40]. Such is Lentricchia’s strenuous conjuration of a historical moment in which he can forcefully intervene—a summons fraught with the pathos excited by any reference to the heady days of political enthusiasm during the war in Vietnam. Lentricchia ominously figures a scene of rueful solitude where de Manian lucidity breaks into the big chill. And maybe it will. But Lentricchia furnishes no good reason why it should. De Manian deconstruction is “deconstructed” by Lentricchia to reveal “against apparent intention, a tacit political agenda.” And this revelation is advertised as a sure embarrassment to the younger practitioners of deconstruction—sweepingly characterized as erstwhile political activists who have, wide-eyed, opted for a critical approach that magically entangles its proponents in the soul-destroying delights of rhetoric and reaction.
Left unexamined in Lentricchia’s story, however, is the basis for the initial rapport between radicalism and deconstruction. Why should collegiate activists have turned into deconstructionists? Is not that, in Lentricchia’s terms, the same question as asking why political activists should have turned to literary criticism at all? If we suppose this original turn to be intentional, how could the initiates of this critical approach ever be genuinely betrayed into embarrassment by time or by its herald, Frank Lentricchia? On the face of it, the traducement of a secret intention would be unlikely to come as a surprise, since deconstructing deconstruction is not only the enterprise of Marxist critics like Lentricchia but also of Jacques Derrida, archdeconstructor, who unashamedly identified the embarrassment of intention as constitutive of the deconstructive method. If deconstruction is at once a natural outlet for activists and the first step on a slippery slope that ends in apostasy, it suggests a phenomenon with contours more suggestively intricate, if not less diabolically seductive, than the program Lentricchia outlines. And it is a phenomenon as worrisomely affiliative as it is bafflingly intricate. We need to know whether the relations between deconstruction and radical politics, between deconstruction and apostasy, between deconstruction and criticism, and between apostasy and criticism are necessary or contingent, or neither and both at once. 1. Frank Lentricchia, Criticism and Social Change, p. 38; all further references to this work, abbreviated CSC, will be included in the text. Jerome Christensen, professor of English at the Johns Hopkins University, is the author of Coleridge’s Blessed Machine of Language and the forthcoming Hume’s Practice: The Career of an Enlightenment Man of Letters. He is currently at work on a study of Byron and the issue of strong romanticism.
We develop an extension of the familiar linear mixed logit model to allow for the direct estimation of parametric non-linear functions defined over structural parameters. Classic applications include the estimation of coefficients of utility functions to characterize risk attitudes and discounting functions to characterize impatience. There are several unexpected benefits of this extension, apart from the ability to directly estimate structural parameters of theoretical interest.
The Anthropocene concept arose within the Earth System science (ESS) community, albeit explicitly as a geological time term. Its current analysis by the stratigraphical community, as a potential formal addition to the Geological Time Scale, necessitates comparison of the methodologies and patterns of enquiry of these two communities. One means of comparison is to consider some of the most widely used results of the ESS, the ‘planetary boundaries’ concept of Rockström and colleagues, and the ‘Great Acceleration’ graphs of Steffen and colleagues, in terms of their stratigraphical expression. This expression varies from virtually non-existent to pronounced and many-faceted, while in some cases stratigraphical proxies may help constrain anthropogenic process. The Anthropocene concepts of the ESS and stratigraphy emerge as complementary, and effective stratigraphic definition should facilitate wider transdisciplinary communication.
How should one react when one has a belief, but knows that other people—who have roughly the same evidence as one has, and seem roughly as likely to react to it correctly—disagree? This paper argues that the disagreement of other competent inquirers often requires one to be much less confident in one’s opinions than one would otherwise be.
The Flow of Influence: From Newton to Locke - and Back. In this essay, the affinity between Locke’s empiricism and Newton’s natural philosophy is scrutinized. Parallels are distinguished from influences. I argue, pace G.A.J. Rogers, that Newton’s doctrine of absolute space and time influenced Locke’s Essay concerning Human Understanding from the second edition onwards. I also show that Newton used Lockean terminology in his criticism of Cartesianism. It is further argued that Locke’s endorsement of corpuscularianism is merely methodological, i.e. he accepts it as a scientifically useful and psychologically intelligible paradigm, but not as a realist explanation of rerum natura. Like Newton, Locke was reluctant to accept the corpuscular theory of light. However, his reasons for this reluctance were different from those of Newton. This essay is divided into three parts: in the first, the stage is set by looking at the fundamentals of Locke’s epistemology; in the second, several correspondences between Locke’s and Newton’s thought are explored and two cases of influence are argued for; and in the third, several arguments are provided for interpreting Locke’s corpuscularianism as methodological.
In this article, I consider Bernard Suits’ Utopia where the denizens supposedly fill their days playing Utopian sports, with regard to the relevance of the thought experiment for understand...
First concerns about the use of nanosilver were raised almost a decade ago, but assessing the risks has been extremely challenging scientifically, and regulation to protect environmental and human health remains controversial. In order to understand the known risks and issues associated with the use of nanosilver, we carried out a DPSIR analysis, analysing drivers, pressures, state, impacts and potential policy responses. We found that most concerns relate to the potential development of multi-resistant bacteria and the environmental impacts of nanosilver. From the DPSIR analysis, we found that new legislation for nanomaterials in general and nanosilver-specific changes in the current European chemical, biocide and medical legislation were the optimal policy responses, along with limiting the overall use of nanosilver. To qualify the identified potential policy responses, we carried out a stakeholder analysis to explore possibilities for reaching consensus amongst stakeholders. Through the stakeholder analysis, the interests, views, power and influence of the identified stakeholders were mapped. Overall, the policy options identified in the DPSIR analysis were deemed not to be implementable, as industry and NGOs seem to have fundamentally conflicting views and interests. The combination of DPSIR and stakeholder analysis proved valuable in cases of such complexity, as the two methods compensate for each other’s limitations and open up a discussion of what can be done to reduce risks.
This book takes a philosophical approach to questions concerning violence, war, and justice in human affairs. It offers the reader a broad introduction to underlying assumptions, values, concepts, theories, and the historical contexts informing much of the current discussion worldwide regarding these morally crucial topics. It provides brief summaries and analyses of a wide range of relevant belief systems, philosophical positions, and policy problems. While not first and foremost a book of advocacy, it is clearly oriented throughout by the ethical preference for nonviolent strategies in the achievement of human ends and a belief in the viability of a socially just—and thus peaceful—human future. It also maintains a consistently skeptical stance towards the all-too-easily accepted apologies, past and present, for violence, war, and the continuation of injustice.
This book examines the intersections of gender and race in a liberation movement propelled by an African spiritual ethos in the Caribbean and restores agency to the RastaWoman’s subversive participation in the ritual known as Reasoning. With powerful narrative, this book appeals to studies of religious transformation, resistance movements, gender and race theory, and Caribbean history and culture.
Aesthetics is today widely seen as the philosophy of art and/or beauty, limited to artworks and their perception. In this paper, I will argue that today's aesthetics and the original programme developed by the German Enlightenment thinker Alexander Gottlieb Baumgarten in the first half of the eighteenth century have only the name in common. Baumgarten did not primarily develop his aesthetics as a philosophy of art. The making and understanding of artworks had served in his original programme only as an example for the application of his philosophy. What he really attempts to present is an alternative philosophy of knowledge that goes beyond the purely rationalist, empiricist, and sensualist approaches. In short, Baumgarten transcends the old opposition between rationalism and sensualism. His core theme is the improvement (perfectio) of human knowledge and cognition and the ways to reach this goal. The study of Baumgarten's foundational works on aesthetics should not be undertaken merely out of antiquarian interest. I will argue, instead, that Baumgarten's importance and contemporary relevance lies in this: that his Aesthetica may serve as a profound contribution to the philosophy of the cultural sciences and humanities. Revisiting Baumgarten's original idea of aesthetics will lead us to a more inclusive concept of that philosophical discipline.
Ethical issues concerning brain–computer interfaces have already received a considerable amount of attention. However, one particular form of BCI has not received the attention that it deserves: affective BCIs, which allow for the detection and stimulation of affective states. This paper brings the ethical issues of affective BCIs into sharper focus. The paper briefly reviews recent applications of affective BCIs and considers ethical issues that arise from these applications. Ethical issues that affective BCIs share with other neurotechnologies are presented, and ethical concerns that are specific to affective BCIs are identified and discussed.
In this paper I argue against Matthew Weiner’s criticism of Grice’s thesis that cancellability is a necessary condition for conversational implicature. I argue that the purported counterexamples fail because the supposed failed cancellation in the cases Weiner presents is not meant as a cancellation but as a reinforcement of the implicature. I moreover point out that there are special situations in which the supposed cancellation may really work as a cancellation.
Knowledge, so runs the unspoken maxim of the humanists and polyhistors of the early modern period, can never be acquired in sufficient measure. Vast libraries and cabinets of curiosities, as well as voluminous treatises, histories, and encyclopedias, still bear impressive witness to this passion today. Yet with the size of every collection grows the need for its ordering. It is therefore no coincidence that ‘order’ rose to become the key concept of humanist encyclopedism and the subject of persistent disputes among scholars. In Steffen Siegel’s study, the manifold philosophical attempts to order the abundance of knowledge systematically are reconstructed not solely from the transmission of texts. At the center of this richly illustrated study, which takes medieval practices of knowledge as its starting point, devotes itself in particular to the multilayered visual cultures of the sixteenth and seventeenth centuries, and offers an outlook on modern visualization techniques, stands the fascinating variety of those images that placed orders of knowledge before the eye as visible figures. The significance of display charts and pictorial allegories, of scientific illustrations and maps, is analyzed comprehensively for the first time in this book with regard to the question of possible orders of knowledge. One emphasis of the investigation lies on the hitherto little-researched history of the diagram in the early modern period. Standing midway between the representational possibilities of images and texts, diagrammatic schemata in particular served the construction and authentication of orders of knowledge. The rich tradition of early modern diagrams offers an excellent vantage point from which to discern the potentials, but also the problems, of an interplay of the history of knowledge, ideas, and media in the period around 1600.
With the Tableaux, published in Paris in 1587, Montaigne’s contemporary Christophe de Savigny produced one of the most astonishing and richest testimonies of early modern art in the service of such a history of knowledge. This collection of plates therefore stands at the center of the investigation into the figures of encyclopedic orders of knowledge. For the first time since its publication more than four hundred years ago, this now very rare work, highly representative of the scholarly culture of the sixteenth century, is here brought back into print in its entirety and in color reproduction. The complex strategies of arguing for a particular order of knowledge with the help of images, diagrams, and texts can thus be traced in minute detail in an important source of premodern scholarly history. The abundance of knowledge and the possibilities of its ordering thereby become visible as challenges that touch equally on the interests of art history and cultural studies, the history of science and of education, and philosophy.
The contributions from a wide range of academic disciplines show that ideologies have always formulated truth claims which, in the course of their reception history, were either literally “given the lie” as errors or were labeled as such by contemporary ideological counter-proposals.
Sometimes we get evidence of our own epistemic malfunction. This can come from finding out we’re fatigued, or have been drugged, or that other competent and well-informed thinkers disagree with our beliefs. This sort of evidence seems to behave differently from ordinary evidence about the world. In particular, getting such evidence can put agents in a position where the most rational response involves violating some epistemic ideal.
What role, if any, does formal logic play in characterizing epistemically rational belief? Traditionally, belief is seen in a binary way - either one believes a proposition, or one doesn't. Given this picture, it is attractive to impose certain deductive constraints on rational belief: that one's beliefs be logically consistent, and that one believe the logical consequences of one's beliefs. A less popular picture sees belief as a graded phenomenon.