Starting with a discussion of what I call 'Koyré's paradox of conceptual novelty', I introduce the ideas of Damerow et al. on the establishment of classical mechanics in Galileo's work. I then argue that although their view on the nature of Galileo's conceptual innovation is convincing, it misses an essential element: Galileo's use of the experiments described in the first day of the Two New Sciences. I describe these experiments and analyze their function. Central to my analysis is the idea that Galileo's pendulum experiments serve to secure the reference of his theoretical models in actually occurring cases of free fall. In this way, Galileo's experiments constitute an essential part of the meaning of the new concepts of classical mechanics.
Many recent developments in artificial intelligence research are relevant to traditional issues in the philosophy of science. One development in AI research we focus on in this article is diagnostic reasoning, which we consider to be of interest for the theory of explanation in general and for an understanding of explanatory arguments in economic science in particular. Explanation is usually discussed primarily in terms of deductive inferences in classical logic. However, in recent AI research it has been observed that a diagnostic explanation is actually quite different from deductive reasoning. In diagnostic reasoning the emphasis is on restoring consistency rather than on deduction. Intuitively speaking, the problem diagnostic reasoning is concerned with is the following. Consider a description of a system in which the normal behavior of the system is characterized, and an observation that conflicts with this normal behavior. The diagnostic problem is to determine which of the components of the system can, when assumed to be functioning abnormally, account for the conflicting observation. A diagnosis is a set of allegedly malfunctioning components that can be used to restore the consistency of the system description and the observation. In this article, this kind of reasoning is formalized and we show its importance for the theory of explanation. We show how the diagnosis nondeductively explains the discrepancy between the observed and the correct system behavior. The article also shows the relevance of the subject for real scientific arguments by showing that examples of diagnostic reasoning can be found in Friedman's Theory of the Consumption Function. Moreover, it places the philosophical implications of diagnostic reasoning in the context of Mill's aprioristic methodology.
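To make the diagnostic problem concrete, here is a minimal Python sketch of consistency-based diagnosis in the spirit described above. The two-inverter circuit, the component names, and the minimal-cardinality search are illustrative assumptions of mine, not the formalization given in the article.

```python
from itertools import combinations

# Hypothetical system: two inverters in series. Normal behavior: out == inp.
COMPONENTS = ["inverter1", "inverter2"]

def predict(inp, abnormal):
    """Possible outputs when the components in `abnormal` are left unconstrained."""
    mids = {not inp} if "inverter1" not in abnormal else {True, False}
    outs = set()
    for m in mids:
        outs |= {not m} if "inverter2" not in abnormal else {True, False}
    return outs

def diagnoses(inp, observed_out):
    """Smallest sets of components that, assumed abnormal, restore consistency
    between the system description and the observation."""
    for k in range(len(COMPONENTS) + 1):
        found = [set(s) for s in combinations(COMPONENTS, k)
                 if observed_out in predict(inp, set(s))]
        if found:
            return found
    return []

# Normal behavior predicts out == inp, so observing out != inp is a conflict.
print(diagnoses(inp=True, observed_out=False))  # [{'inverter1'}, {'inverter2'}]
```

On this picture, a diagnosis explains the discrepancy nondeductively: assuming a component abnormal removes the conflict with the observation rather than deducing the observation from the system description.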
Philosophers and psychologists often assume that mirror reflections are optical illusions. According to many authors, what we see in a mirror appears to be behind it. I discuss two strategies for resisting this piece of dogma. As I will show, the conviction that mirror reflections are illusions is rooted in a confused conception of the relations between location, direction, and visibility. This conception is unacceptable to those who take seriously the way in which mirrors contribute to our experience of the world. My argument may be read as an advertisement for the neglected field of philosophical catoptrics, the philosophical study of the optical properties of mirrors. It enables us to recast familiar issues in the philosophy of perception.
What sets the practice of rigorously tested, sound science apart from pseudoscience? In this volume, the contributors seek to answer this question, known to philosophers of science as “the demarcation problem.” This issue has a long history in philosophy, stretching as far back as the early twentieth century and the work of Karl Popper. But by the late 1980s, scholars in the field began to treat the demarcation problem as impossible to solve and futile to ponder. However, the essays that Massimo Pigliucci and Maarten Boudry have assembled in this volume make a rousing case for the unequivocal importance of reflecting on the separation between pseudoscience and sound science.
Relationalism maintains that perceptual experience involves, as part of its nature, a distinctive kind of conscious perceptual relation between a subject of experience and an object of experience. Together with the claim that perceptual experience is presentational, relationalism is widely believed to be a core aspect of the naive realist outlook on perception. This is a mistake. I argue that naive realism about perception can be upheld without a commitment to relationalism.
Conceived by Johan van Benthem and Yde Venema, arrow logic started as an attempt to give a general account of the logic of transitions. The generality of the approach provided a wide application area, ranging from philosophy to computer science. The book gives a comprehensive survey of logical research within and around arrow logic. Since the natural operations on transitions include composition, inverse and identity, their logic, arrow logic, can be studied from two different perspectives and by two methodologies: modal logic and the algebra of relations. Some of the results in this volume can be interpreted as price tags: they show what the price of desirable properties such as decidability, axiomatisability, the Craig interpolation property and Beth definability is, in terms of the semantic properties of the logic. The research program of arrow logic has broadened considerably in the last couple of years and now also covers the enterprise of exploring the border between decidable and undecidable versions of other applied logics. The content of this volume reflects this broadening. The editors have included a number of papers which are in the spirit of this generalised research program.
The present paper presents a philosophical analysis of earth science, a discipline that has received relatively little attention from philosophers of science. We focus on the question of whether earth science can be reduced to allegedly more fundamental sciences, such as chemistry or physics. In order to answer this question, we investigate the aims and methods of earth science, the laws and theories used by earth scientists, and the nature of earth-scientific explanation. Our analysis leads to the tentative conclusion that there are emergent phenomena in earth science but that these may be reducible to physics. However, earth science does not have irreducible laws, and the theories of earth science are typically hypotheses about unobservable (past) events or generalised - but not universally valid - descriptions of contingent processes. Unlike more fundamental sciences, earth science is characterised by explanatory pluralism: earth scientists employ various forms of narrative explanations in combination with causal explanations. The main reason is that earth-scientific explanations are typically hampered by local underdetermination by the data to such an extent that complete causal explanations are impossible in practice, if not in principle.
Part of the distinction between artefacts, objects made by humans for particular purposes, and natural objects is that artefacts are subject to normative judgements. A drill, say, can be a good drill or a poor drill, it can function well or correctly, or it can malfunction. In this paper I investigate how such judgements fit into the domain of the normative in general and what the grounds for their normativity are. Taking as a starting point a general characterization of normativity proposed by Dancy, I argue that statements such as ‘this is a good drill’ or ‘this drill is malfunctioning’ can be seen to express normative facts, or the content of normative statements. What they say is that a user who has a desire to achieve a particular relevant outcome has a reason to use, or not to use, the artefact in question. Next, this analysis is extended to show that not just statements that say that an artefact performs its function well or poorly, but all statements that ascribe a function to an artefact, can be seen as expressing a normative fact. On this approach the normativity of artefacts is analyzed in terms of reasons on grounds of practical, and to a lesser extent theoretical, rationality. I close by investigating briefly to what extent reasons on moral grounds are, in the analysis adopted here, involved in the normativity of artefacts. Keywords: Artefact; Normativity; Instrumental reason; Practical rationality; Function; Use.
This paper presents an overview of the elements which characterize a research attitude and approach introduced by Michel Foucault and further developed as 'studies of governmentality' into a sub-discipline of the humanities during the past decade, including applications in the field of education. The paper recalls Foucault's introduction of the notion of 'governmentality' and its relation to the 'mapping of the present', and sketches briefly the way in which the studies of governmentality have been elaborated in general and in the context of research in education more particularly. It indicates how the studies of governmentality can be related to a cartography of the learning society, a cartography which helps us to get lost and to liberate our view.
What makes beliefs thrive? In this paper, we model the dissemination of bona fide science versus pseudoscience, making use of Dan Sperber's epidemiological model of representations. Drawing on cognitive research on the roots of irrational beliefs and the institutional arrangement of science, we explain the dissemination of beliefs in terms of their salience to human cognition and their ability to adapt to specific cultural ecologies. By contrasting the cultural development of science and pseudoscience along a number of dimensions, we gain a better understanding of their underlying epistemic differences. Pseudoscience can achieve widespread acceptance by tapping into evolved cognitive mechanisms, thus sacrificing intellectual integrity for intuitive appeal. Science, by contrast, defies those deeply held intuitions precisely because it is institutionally arranged to track objective patterns in the world, and the world does not care much about our intuitions. In light of these differences, we discuss the degree of openness or resilience to conceptual change (evidence and reason), and the divergent ways in which science and pseudoscience can achieve cultural “success”.
Philosophers of science have given up on the quest for a silver bullet to put an end to all pseudoscience, as such a neat formal criterion to separate good science from its contenders has proven elusive. In the literature on critical thinking and in some philosophical quarters, however, this search for silver bullets lives on in the taxonomies of fallacies. The attractive idea is to have a handy list of abstract definitions or argumentation schemes, on the basis of which one can identify bad or invalid types of reasoning, abstracting away from the specific content and dialectical context. Such shortcuts for debunking arguments are tempting, but alas, the promise is seldom, if ever, fulfilled. Different strands of research on the pragmatics of argumentation, probabilistic reasoning and ecological rationality have shown that almost every known type of fallacy is a close neighbor to sound inferences or acceptable moves in a debate. Nonetheless, the kernel idea of a fallacy as an erroneous type of argument is still retained by most authors. We outline a destructive dilemma we refer to as the Fallacy Fork: on the one hand, if fallacies are construed as demonstrably invalid forms of reasoning, then they have very limited applicability in real life. On the other hand, if our definitions of fallacies are sophisticated enough to capture real-life complexities, they can no longer be held up as an effective tool for discriminating between good and bad forms of reasoning. As we bring our schematic “fallacies” in touch with reality, we seem to lose grip on normative questions. Even approaches that do not rely on argumentation schemes to identify fallacies fail to escape the Fallacy Fork, and run up against their own version of it.
Over the last twenty years, in all of these neighbouring fields, modal systems have been developed that we call multi-dimensional. (Our definition of multi ...
Undergraduate geoscience students are rarely exposed to the history and philosophy of science (HPS). I describe experiences with a short course unfavourably placed in the first year of a bachelor's programme in earth science, and sketch arguments for how HPS could enrich students' education in many ways. One useful didactic approach is to develop a broader interest by connecting HPS themes to practical cases throughout the curriculum, and to develop learning activities that allow students to use the abilities gained through exposure to HPS to reflect on their skills, their methods and their field in relation to other disciplines and to interactions with society. Given the support of the teaching staff, the tenets of philosophy of science in practice, of conceptual history of knowledge, and of ethics of science for society can fruitfully and directly be connected to the existing curriculum. This is ideally followed by a capstone HPS course late in the bachelor programme.
The ‘European Space of Higher Education’ could be mapped as an infrastructure for entrepreneurship and a place where the distinction between the social and the economic becomes obsolete. Using Foucault's understanding of biopolitics and discussing the analyses of Agamben and Negri/Hardt, it is argued that the actual governmental configuration, i.e. the economisation of the social, also has a biopolitical dimension. Focusing on the intersection between a politicisation and an economisation of human life allows us to discuss a kind of ‘bio-economisation’, a regime of economic terror and of learning as investment. Finally, it is argued how fostering learning, i.e. fostering life, could turn into ‘let die’ and even into ‘make die’.
This article takes up a text that Rancière published shortly after The Ignorant Schoolmaster appeared in French, 'École, production, égalité' [School, Production, Equality] (1988), in which he sketched the school as being preeminently the place of equality. In this vein, and opposed to the story of the school as the place where inequality is reproduced and therefore in need of reform, the article wants to recount the story of the school as the invention of a site of equality and as primordially a public space. Inspired by Rancière, we indicate first how the actual (international and national) policy story about the school, and the organizational technologies that accompany it, install and legitimate profound inequalities, which consequently can no longer be questioned (and become 'invisible'). Second, the article recasts and rethinks different manifestations of equality and of 'public-ness' in school education and, finally, indicates various ways in which these manifestations are neutralized or immunized in actual discourses and educational technologies.
One way to address such questions about artifact kinds is to look for clues in the available literature on parallel questions that have been posed with respect to kinds in the natural domain. Philosophers have long been concerned with the ...
Decision making in noisy and changing environments requires a fine balance between exploiting knowledge about good courses of action and exploring the environment in order to improve upon this knowledge. We present an experiment on a restless bandit task in which participants made repeated choices between options for which the average rewards changed over time. Comparing a number of computational models of participants’ behavior in this task, we find evidence that a substantial number of them balanced exploration and exploitation by considering the probability that an option offers the maximum reward out of all the available options.
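As a rough illustration of a choice rule that tracks the probability that an option is currently best, here is a minimal Python sketch; the drifting reward means, the fixed belief uncertainty, the learning rate, and the Monte Carlo sampling step are assumptions of my own, not the computational models compared in the paper.

```python
import random

N_ARMS, N_TRIALS, SAMPLES = 3, 200, 500
true_means = [0.0, 0.5, -0.5]      # hypothetical drifting reward means
est_mean = [0.0] * N_ARMS          # learner's running estimate per option
est_sd = [1.0] * N_ARMS            # crude, fixed uncertainty about each option

def p_max(arm):
    """Monte Carlo estimate of the probability that `arm` currently offers the
    maximum reward, given independent normal beliefs about every option."""
    wins = sum(
        max(range(N_ARMS), key=lambda a: random.gauss(est_mean[a], est_sd[a])) == arm
        for _ in range(SAMPLES)
    )
    return wins / SAMPLES

for t in range(N_TRIALS):
    choice = max(range(N_ARMS), key=p_max)                    # choose by P(max)
    reward = random.gauss(true_means[choice], 1.0)
    est_mean[choice] += 0.2 * (reward - est_mean[choice])      # running average
    true_means = [m + random.gauss(0, 0.05) for m in true_means]  # restless drift
```

A fuller model would also update the uncertainty of each option (for instance with a Kalman filter) so that rarely sampled options retain more uncertainty and are explored more often; here the uncertainty is simply held fixed to keep the sketch short.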
Focal points seem to be important in helping players coordinate their strategies in coordination problems. Game theory lacks, however, a formal theory of focal points. This paper proposes a theory of focal points that is based on individual rationality considerations. The two principles upon which the theory rests are the Principle of Insufficient Reason (IR) and a Principle of Individual Team Member Rationality. The way IR is modelled combines the classic notion of description symmetry and a new notion of pay-off symmetry, which yields different predictions in a variety of games. The theory can explain why people do better than pure randomization in matching games.
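A toy calculation, with a setup of my own choosing rather than one taken from the paper, shows the kind of prediction such a theory can make. Suppose a matching game has four options, one uniquely labelled and three that are description-symmetric, so that IR forces players to treat the three symmetric options as interchangeable:

```latex
% Coordination probabilities in a hypothetical 4-option matching game
P(\text{match} \mid \text{both pick the uniquely labelled option}) = 1, \qquad
P(\text{match} \mid \text{both randomize over the 3 symmetric options}) = \tfrac{1}{3}
```

Blind randomization over all four options yields a matching probability of only 1/4; team-member rationality then singles out the unique option, which is one way of seeing why players can do better than pure randomization.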
The present collection deals with philosophical thinking at the medieval university from the threefold perspective of Institution and Career, Organizational Forms and Literary Genres, and School Formation and School Conflict.
For an arbitrary similarity type of Boolean Algebras with Operators we define a class of Sahlqvist identities. Sahlqvist identities have two important properties. First, a Sahlqvist identity is valid in a complex algebra if and only if the underlying relational atom structure satisfies a first-order condition which can be effectively read off from the syntactic form of the identity. Second, and as a consequence of the first property, Sahlqvist identities are canonical, that is, their validity is preserved under taking canonical embedding algebras. Taken together, these properties imply that results about a Sahlqvist variety V can be obtained by reasoning in the elementary class of canonical structures of algebras in V. We give an example of this strategy in the variety of Cylindric Algebras: we show that an important identity called Henkin's equation is equivalent to a simpler identity that uses only one variable. We give a conceptually simple proof by showing that the first-order correspondents of these two equations are equivalent over the class of cylindric atom structures.
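For readers unfamiliar with the Sahlqvist format, here is a standard textbook instance of the two properties just mentioned, drawn from basic modal logic rather than from the paper:

```latex
% A familiar Sahlqvist identity and its first-order correspondent (standard example)
f(f(x)) \leq f(x)
\qquad \text{(the BAO form of the modal axiom } \Diamond\Diamond p \rightarrow \Diamond p\text{)}
```

This identity is valid in the complex algebra of a frame $(W,R)$ exactly when $R$ is transitive, i.e. $\forall x\,\forall y\,\forall z\,(Rxy \wedge Ryz \rightarrow Rxz)$, a condition read off from the shape of the identity, and its validity is preserved under canonical embedding algebras.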
For a long time, philosophers of science have expressed little interest in the so-called demarcation project that occupied the pioneers of their field, and most now concur that terms like “pseudoscience” cannot be defined in any meaningful way. However, recent years have witnessed a revival of philosophical interest in demarcation. In this paper, I argue that, though the demarcation problem of old leads to a dead-end, the concept of pseudoscience is not going away anytime soon, and deserves a fresh look. My approach proposes to naturalize and down-size the concept, anchoring it in real-life doctrines and fields of inquiry. First, I argue against the definite article “the” in “the demarcation problem”, distinguishing between territorial and normative demarcation, and between different failures and shortcomings in science apart from pseudoscience. Next, I argue that pseudosciences can be fruitfully regarded as simulacra of science, doctrines that are not epistemically warranted but whose proponents try to create the impression that they are. In this element of imitation or mimicry, I argue, lies the clue to their common identity. Despite the huge variety of doctrines and beliefs gathered under the rubric of “pseudoscience”, and the wide range of defects from which they suffer, pseudosciences all engage in similar strategies to create an impression of epistemic warrant. The indirect, symptomatic approach defended here leads to a general characterization of pseudosciences in all domains of inquiry, and to a useful diagnostic tool.
In recent controversies about Intelligent Design Creationism (IDC), the principle of methodological naturalism (MN) has played an important role. In this paper, an often neglected distinction is made between two different conceptions of MN, each with its respective rationale and with a different view on the proper role of MN in science. According to one popular conception, MN is a self-imposed or intrinsic limitation of science, which means that science is simply not equipped to deal with claims of the supernatural (Intrinsic MN or IMN). Alternatively, we will defend MN as a provisory and empirically grounded attitude of scientists, which is justified in virtue of the consistent success of naturalistic explanations and the lack of success of supernatural explanations in the history of science (Provisory MN or PMN). Science does have a bearing on supernatural hypotheses, and its verdict is uniformly negative. We will discuss five arguments that have been proposed in support of IMN: the argument from the definition of science, the argument from lawful regularity, the science stopper argument, the argument from procedural necessity, and the testability argument. We conclude that IMN, because of its philosophical flaws, proves to be an ill-advised strategy to counter the claims of IDC. Evolutionary scientists are on firmer ground if they discard supernatural explanations on purely evidential grounds, instead of ruling them out by philosophical fiat.
The article analyses ethnonational conflicts in Belgium and Canada during the period 1960-1989. Using the most similar case design, it is argued that the different policy performances in Belgium and Canada can be accounted for by the institutional context in which the conflicts occurred. The institutional setup in Canada and Belgium created different modes of joint decision making. Through an analysis of three joint decision variables, namely decision rules, preferences and default conditions, two empirical cases are scrutinized. The Canadian Pension Plan in Canada and the institutional reform efforts in Belgium highlight the importance of institutional default conditions. On the basis of these empirical cases it is argued that the different conditions of joint decision making in the two states led to a continuous production of compromises in Belgium and a genuine absence of mutual agreement in Canada.
The concept of burden of proof is used in a wide range of discourses, from philosophy to law, science, skepticism, and even in everyday reasoning. This paper provides an analysis of the proper deployment of burden of proof, focusing in particular on skeptical discussions of pseudoscience and the paranormal, where burden of proof assignments are most poignant and relatively clear-cut. We argue that burden of proof is often misapplied or used as a mere rhetorical gambit, with little appreciation of the underlying principles. The paper elaborates on an important distinction between evidential and prudential varieties of burdens of proof, which is cashed out in terms of Bayesian probabilities and error management theory. Finally, we explore the relationship between burden of proof and several (alleged) informal logical fallacies. This allows us to get a firmer grip on the concept and its applications in different domains, and also to clear up some confusions with regard to when exactly some fallacies (ad hominem, ad ignorantiam, and petitio principii) may or may not occur.
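As an illustration of the Bayesian cashing-out (the numbers here are chosen purely for exposition and do not come from the paper), posterior odds are prior odds multiplied by the likelihood ratio:

```latex
\frac{P(H \mid E)}{P(\neg H \mid E)} \;=\; \frac{P(E \mid H)}{P(E \mid \neg H)} \cdot \frac{P(H)}{P(\neg H)}
```

If the prior odds of a paranormal hypothesis are 1 : 10^6, then even evidence with a likelihood ratio of 100 leaves posterior odds of only 1 : 10^4, which is one way of spelling out why the heavier evidential burden falls on the claimant rather than on the skeptic.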
Religious people seem to believe things that range from the somewhat peculiar to the utterly bizarre. Or do they? According to a new paper by Neil Van Leeuwen, religious “credence” is nothing like mundane factual belief. It has, he claims, more in common with fictional imaginings. Religious folk do not really “believe”, in the ordinary sense of the word, what they profess to believe. Like fictional imaginings, but unlike factual beliefs, religious credences are activated only within specific settings. We argue that Van Leeuwen’s thesis contradicts a wealth of data on religiously motivated behavior. By and large, the faithful genuinely believe what they profess to believe. Although many religions openly embrace a sense of mystery, in general this does not prevent the attribution of beliefs to religious people. Many of the features of religious belief that Van Leeuwen alludes to, like invulnerability to refutation and incoherence, are characteristic of irrational beliefs in general and actually betray...
Schools and classrooms, as well as the workplace and the Internet, are considered today as learning environments. People are regarded as learners, and the main target of school education has become 'learning' pupils and students how to learn. The roles of teachers and lecturers are redefined as instructors, designers of (powerful) learning environments and facilitators or coaches of learning processes. The aim of this paper is to argue that the current self-understanding in terms of learning environments is not merely about a renewal of our vocabulary, but an indication of a far more general transformation of the world of education. It is argued that the current self-understanding in terms of 'learning environments' and 'learners' indicates a shift in our experience of time and place; a shift from (modern) historical self-understanding towards (post-modern) environmental self-understanding. The essay draws upon Foucauldian concepts in order to map the modern organisation of time and space in 'schools'. This past organisation is confronted with the current organisation of time and space in 'learning environments'. By contrasting both maps the paper focuses on the main characteristics of the current experience of time and space, that is, 'environmental self-understanding', and explores in the final section the dark side of this self-understanding.
True beliefs are better guides to the world than false ones. This is the common-sense assumption that undergirds theorizing in evolutionary epistemology. According to Alvin Plantinga, however, evolution by natural selection does not care about truth: it cares only about fitness. If our cognitive faculties are the products of blind evolution, we have no reason to trust them, anytime or anywhere. Evolutionary naturalism, consequently, is a self-defeating position. Following up on earlier objections, we uncover three additional flaws in Plantinga's latest formulation of his argument: a failure to appreciate adaptive path dependency, an incoherent conception of content ascription, and a conflation of common-sense and scientific beliefs, which we diagnose as the ‘foundationalist fallacy’. More fundamentally, Plantinga's reductive formalism with respect to the issue of cognitive reliability is inadequate to deal with relevant empirical details.
The scientific study of living organisms is permeated by machine and design metaphors. Genes are thought of as the “blueprint” of an organism, organisms are “reverse engineered” to discover their functionality, and living cells are compared to biochemical factories, complete with assembly lines, transport systems, messenger circuits, etc. Although the notion of design is indispensable to think about adaptations, and engineering analogies have considerable heuristic value (e.g., optimality assumptions), we argue they are limited in several important respects. In particular, the analogy with human-made machines falters when we move down to the level of molecular biology and genetics. Living organisms are far more messy and less transparent than human-made machines. Notoriously, evolution is an opportunistic tinkerer, blindly stumbling on “designs” that no sensible engineer would come up with. Despite impressive technological innovation, the prospect of artificially designing new life forms from scratch has proven more difficult than the superficial analogy with “programming” the right “software” would suggest. The idea of applying straightforward engineering approaches to living systems and their genomes (isolating functional components, designing new parts from scratch, recombining and assembling them into novel life forms) pushes the analogy with human artifacts beyond its limits. In the absence of a one-to-one correspondence between genotype and phenotype, there is no straightforward way to implement novel biological functions and design new life forms. Both the developmental complexity of gene expression and the multifarious interactions of genes and environments are serious obstacles for “engineering” a particular phenotype. The problem of reverse-engineering a desired phenotype to its genetic “instructions” is probably intractable for any but the most simple phenotypes. Recent developments in the field of bio-engineering and synthetic biology reflect these limitations. Instead of genetically engineering a desired trait from scratch, as the machine/engineering metaphor promises, researchers are making greater strides by co-opting natural selection to “search” for a suitable genotype, or by borrowing and recombining genetic material from extant life forms.
This paper offers an epistemological discussion of self-validating belief systems and the recurrence of 'epistemic defense mechanisms' and 'immunizing strategies' across widely different domains of knowledge. We challenge the idea that typical 'weird' belief systems are inherently fragile, and we argue that, instead, they exhibit a surprising degree of resilience in the face of adverse evidence and criticism. Borrowing from the psychological research on belief perseverance, rationalization and motivated reasoning, we argue that the human mind is particularly susceptible to belief systems that are structurally self-validating. On this cognitive-psychological basis, we construct an epidemiology of beliefs, arguing that the apparent convenience of escape clauses and other defensive 'tactics' used by believers may well derive not from conscious deliberation on their part, but from more subtle mechanisms of cultural selection.
Many people assume that fictional entities are encapsulated in the world of fiction. I show that this cannot be right. Some works of fiction tell us about pieces of poetry, music, or theatre written by fictional characters. Such creations are fictional creations, as I will call them. Their authors do not exist. But that does not alter the fact that we can perform, recite, or otherwise generate actual instances of such works. This means we can actually bring such individuals into existence, as the works they are. I conclude that the assumption about encapsulation is untenable, unless an exception is made for types.
An immunizing strategy is an argument brought forward in support of a belief system, though independent of that belief system, which makes it more or less invulnerable to rational argumentation and/or empirical evidence. By contrast, an epistemic defense mechanism is defined as a structural feature of a belief system which has the same effect of deflecting arguments and evidence. We discuss the remarkable recurrence of certain patterns of immunizing strategies and defense mechanisms in pseudoscience and other belief systems. Five different types will be distinguished and analyzed, with examples drawn from widely different domains. The difference between immunizing strategies and defense mechanisms is analyzed, and their epistemological status is discussed. Our classification sheds new light on the various ways in which belief systems may achieve invulnerability against empirical evidence and rational criticism, and we propose our analysis as part of an explanation of these belief systems’ enduring appeal and tenacity.
In this paper, we show that Arrow’s well-known impossibility theorem is instrumental in bringing the ongoing discussion about verisimilitude to a more general level of abstraction. After some preparatory technical steps, we show that Arrow’s requirements for voting procedures in social choice are also natural desiderata for a general verisimilitude definition that places content and likeness considerations on the same footing. Our main result states that no qualitative unifying procedure of a functional form can simultaneously satisfy the requirements of Unanimity, Independence of irrelevant alternatives and Non-dictatorship at the level of sentence variables. By giving a formal account of the incompatibility of the considerations of content and likeness, our impossibility result makes it possible to systematize the discussion about verisimilitude, and to understand it in more general terms.
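For reference, the three requirements are standardly stated in social choice theory as follows; the translation to sentence variables and verisimilitude orderings is the paper's own contribution and is not reproduced here.

```latex
% Arrow's conditions in their standard social-choice form (for reference)
\begin{itemize}
  \item \emph{Unanimity}: if $x \succ_i y$ for every individual ordering $\succ_i$, then $x \succ y$ in the aggregate ordering.
  \item \emph{Independence of irrelevant alternatives}: the aggregate ranking of $x$ and $y$ depends only on how each $\succ_i$ ranks $x$ and $y$.
  \item \emph{Non-dictatorship}: there is no individual $i$ such that $x \succ_i y$ implies $x \succ y$ for every profile and all alternatives $x, y$.
\end{itemize}
```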
Genes are often described by biologists using metaphors derived from computational science: they are thought of as carriers of information, as being the equivalent of “blueprints” for the construction of organisms. Likewise, cells are often characterized as “factories” and organisms themselves become analogous to machines. Accordingly, when the human genome project was initially announced, the promise was that we would soon know how a human being is made, just as we know how to make airplanes and buildings. Importantly, modern proponents of Intelligent Design, the latest version of creationism, have exploited biologists’ use of the language of information and blueprints to make their spurious case, based on pseudoscientific concepts such as “irreducible complexity” and on flawed analogies between living cells and mechanical factories. However, the living organism = machine analogy was criticized already by David Hume in his Dialogues Concerning Natural Religion. In line with Hume’s criticism, over the past several years a more nuanced and accurate understanding of what genes are and how they operate has emerged, ironically in part from the work of computational scientists who take biology, and in particular developmental biology, more seriously than some biologists seem to do. In this article we connect Hume’s original criticism of the living organism = machine analogy with the modern ID movement, and illustrate how the use of misleading and outdated metaphors in science can play into the hands of pseudoscientists. Thus, we argue that dropping the blueprint and similar metaphors will improve both the science of biology and its understanding by the general public.
_Rancière, Public Education and the Taming of Democracy_ introduces the political and educational ideas of Jacques Rancière, a leading philosopher increasingly important in educational theory. In light of his ideas, the volume explores the current concern for democracy and equality in relation to education. The book introduces and discusses the works of Jacques Rancière, a leading philosopher increasingly important in the field of educational theory and philosophy. The volume will have a broad appeal to those in the field of education theory and philosophy, and to those concerned with democracy, equal opportunities and pedagogy. It is balanced in its introduction of the political and educational ideas of this author and in its exploration, in line with his work, of some important issues in education and policy today. Contributors come from diverse countries and intellectual and cultural backgrounds, including the UK, US, Belgium, Sweden, Spain, France and Canada.
Even though public awareness of privacy risks on the Internet is increasing, in the evolution of the Internet to the Internet of Things (IoT) these risks are likely to become more relevant due to the large amount of data collected and processed by the “Things”. The business drivers for exploring ways to monetize such data are one of the challenges identified in this paper for the protection of privacy in the IoT. Beyond the protection of privacy, this paper highlights the need for new approaches which grant a more active role to the users of the IoT and which address other potential issues such as the Digital Divide or safety risks. A key facet of ethical design is the transparency of the technology and services in how that technology handles data, as well as providing choice for the user. This paper presents a new approach for users’ interaction with the IoT, based on the concept of Ethical Design implemented through a policy-based framework. In the proposed framework, users are provided with wider controls over personal data or the IoT services by selecting specific sets of policies, which can be tailored according to users’ capabilities and to the contexts in which they operate. The potential deployment of the framework in a typical IoT context is described, with the identification of the main stakeholders and the processes that should be put in place.
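Purely as a hypothetical illustration of what selecting sets of policies might look like in code (the policy fields, data types, and matching logic below are invented for exposition and are not the framework described in the paper):

```python
from dataclasses import dataclass

@dataclass
class Policy:
    """A user-selected rule about how a class of IoT data may be handled."""
    data_type: str    # e.g. "location", "heart_rate"
    context: str      # e.g. "home", "work", or "any"
    share_with: set   # parties allowed to receive the data

# A user with limited technical capabilities might simply pick a pre-packaged set.
user_policies = [
    Policy("location", "home", {"family"}),
    Policy("heart_rate", "any", {"physician"}),
]

def allowed(data_type, context, recipient):
    """Check whether a proposed data flow is permitted by the selected policies."""
    for p in user_policies:
        if p.data_type == data_type and p.context in (context, "any"):
            return recipient in p.share_with
    return False  # default-deny when no policy matches

print(allowed("location", "home", "advertiser"))   # False
print(allowed("heart_rate", "work", "physician"))  # True
```

In a policy-based framework of this general kind, such checks would typically sit in a mediating layer between the devices and the services that consume their data.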
In recent years, there has been an intense public debate about whether and, if so, to what extent investments in nuclear energy should be part of strategies to mitigate climate change. Here, we address this question from an ethical perspective, evaluating different strategies of energy system development in terms of three ethical criteria, which will differentially appeal to proponents of different normative ethical frameworks. Starting from a standard analysis of climate change as arising from an intergenerational collective action problem, we evaluate whether contributions from nuclear energy will, on expectation, increase the likelihood of successfully phasing out fossil fuels in time to avert dangerous global warming. For many socio-economic and geographic contexts, our review of the energy system modeling literature suggests the answer to this question is “yes.” We conclude that, from the point of view of climate change mitigation, investments in nuclear energy as part of a broader energy portfolio will be ethically required to minimize the risks of decarbonization failure, and thus the tail risks of catastrophic global warming. Finally, using a sensitivity analysis, we consider which other aspects of nuclear energy deployment, apart from climate change, have the potential to overturn the ultimate ethical verdict on investments in nuclear energy. Out of several potential considerations, we suggest that its potential interplay, whether beneficial or adverse, with the proliferation of nuclear weapons is the most plausible candidate.