This text is the introduction to the special issue "Rhythm in social interaction", edited by Chiara Bassetti and Emanuele Bottazzi, in Etnografia e Ricerca Qualitativa, vol. 8, n. 3, December 2015. We thank Chiara Bassetti, Emanuele Bottazzi and the journal Etnografia e Ricerca Qualitativa for permission to republish it. "But, friend, when you grasp the number and nature of the intervals of sound, from high to low, and the boundaries of those intervals, and how many (...) scales arise from them."
We analyze recent criticisms of the use of hyperreal probabilities as expressed by Pruss, Easwaran, Parker, and Williamson. We show that the alleged arbitrariness of hyperreal fields can be avoided by working in the Kanovei–Shelah model or in saturated models. We argue that some of the objections to hyperreal probabilities arise from hidden biases that favor Archimedean models. We discuss the advantage of the hyperreals over transferless fields with infinitesimals. In Paper II we analyze two underdetermination theorems by Pruss and show that they hinge upon parasitic external hyperreal-valued measures, whereas internal hyperfinite measures are not underdetermined.
A probability model is underdetermined when there is no rational reason to assign a particular infinitesimal value as the probability of single events. Pruss claims that hyperreal probabilities are underdetermined. The claim is based upon external hyperreal-valued measures. We show that internal hyperfinite measures are not underdetermined. The importance of internality stems from the fact that Robinson's transfer principle only applies to internal entities. We also evaluate the claim that transferless ordered fields may have advantages over hyperreals in probabilistic modeling. We show that probabilities developed over such fields are less expressive than hyperreal probabilities.
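To make the internal/external contrast concrete, here is a standard textbook illustration from nonstandard analysis (not taken from the paper itself) of an internal hyperfinite measure that assigns infinitesimal probabilities to single events:

```latex
% Fix an infinite hypernatural $N$ and the hyperfinite sample space
%   $\Omega = \{1/N,\, 2/N,\, \dots,\, 1\}$.
% The internal counting measure is
\[
  P(A) \;=\; \frac{|A|}{N} \qquad \text{for internal } A \subseteq \Omega ,
\]
% which gives each singleton the infinitesimal probability
%   $P(\{\omega\}) = 1/N > 0$.
% Because $P$ is internal, Robinson's transfer principle applies:
% finite additivity and $|A| \le N$ carry over from the finite case.
```

External hyperreal-valued measures, by contrast, fall outside the scope of transfer, which is why (on the view summarized above) the underdetermination arguments do not carry over to internal hyperfinite measures.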
We present an ontological analysis of the notion of group agency developed by Christian List and Philip Pettit. We focus on this notion as it allows us to neatly distinguish groups, organizations, corporations – to which we may ascribe agency – from mere aggregates of individuals. We develop a module for group agency within a foundational ontology and we apply it to organizations.
The Journal of Scientific Exploration is devoted to the open-minded examination of scientific anomalies and other topics on the scientific frontier. Its articles and reviews, written by authorities in their respective fields, cover both data and theory in areas of science that are too often ignored or treated superficially by other scientific publications. This issue of the Journal features papers on a variety of subjects. The lead article discusses anomalous magnetic field activity during hands-on healing and distant healing of mice with experimentally induced tumors. The next paper is a contribution to the growing body of research providing evidence of "presentiment," or an unconscious awareness of a future event. The article focuses specifically on the relationship between presentiment and psychological absorption. The third paper discusses striking similarities between two historical cases of remote viewing. In 1905, the Icelandic medium Indridi Indridason, through a drop-in communicator, described in Reykjavik a fire that was burning in Copenhagen. Similarly, in 1759, Emanuel Swedenborg described in Gothenburg a fire that raged near his home in Stockholm. The next article critically and meticulously examines, and challenges, the arguments in favor of concluding that the parapsychologist S. G. Soal falsified data in his well-known experiments in precognitive telepathy. The next paper considers which types of belief are beliefs in the paranormal. The authors identify various measures of paranormal belief and propose a nine-factor structure analysis of common paranormal belief dimensions. The final article is a fascinating account of some 1907 psychokinetic experiments with the medium Eusapia Palladino, conducted in Naples by Filippo Bottazzi. These experiments are particularly notable for having made instrumental recordings of the observed, ostensible PK movements.
This issue of the JSE is then filled out, as usual, with correspondence and substantive book reviews.
Four ethical values — maximizing benefits, treating equally, promoting and rewarding instrumental value, and giving priority to the worst off — yield six specific recommendations for allocating medical resources in the Covid-19 pandemic: maximize benefits; prioritize health workers; do not allocate on a first-come, first-served basis; be responsive to evidence; recognize research participation; and apply the same principles to all Covid-19 and non–Covid-19 patients.
It is widely held that all lies are assertions: the traditional definition of lying entails that, in order to lie, speakers have to assert something they believe to be false. It is also widely held that assertion contrasts with presupposition and, in particular, that one cannot assert something by presupposing it. Together, these views imply that speakers cannot lie with presuppositions—a view that Andreas Stokke has recently explicitly defended. The aim of this paper is to argue that speakers can lie with presuppositions, and to discuss some of the implications this outcome has for current research on lying, assertion and presupposition.
The distinction between lying and mere misleading is commonly tied to the distinction between saying and conversationally implicating. Many definitions of lying are based on the idea that liars say something they believe to be false, while misleaders put forward a believed-false conversational implicature. The aim of this paper is to motivate, spell out, and defend an alternative approach, on which lying and misleading differ in terms of commitment: liars, but not misleaders, commit themselves to something they believe to be false. This approach entails that lying and misleading involve speech-acts of different force. While lying requires the committal speech-act of asserting, misleading involves the non-committal speech-act of suggesting. The approach leads to a broader definition of lying that can account for lies that are told while speaking non-literally or with the help of presuppositions, and it allows for a parallel definition of misleading, which so far is lacking in the debate.
Many recent definitions of lying are based on the notion of what is said. This paper argues that says-based definitions of lying cannot account for lies involving non-literal speech, such as metaphor, hyperbole, loose use or irony. It proposes that lies should instead be defined in terms of assertion, where what is asserted need not coincide with what is said. And it points to possible implications this outcome might have for the ethics of lying.
Machine learning (ML) has been praised as a tool that can advance science and knowledge in radical ways. However, it is not clear exactly how radical the novelties that ML generates are. In this article, I argue that this question can only be answered contextually, because outputs generated by ML have to be evaluated on the basis of the theory of the science to which ML is applied. In particular, I analyze the problem of the novelty of ML outputs in the context of molecular biology. In order to do this, I first clarify the nature of the models generated by ML. Next, I distinguish three ways in which a model can be novel (from the weakest to the strongest). Third, I dissect the way ML algorithms work and generate models in molecular biology and genomics. On these bases, I argue that ML is either a tool to identify instances of knowledge already present and codified, or to generate models that are novel in a weak sense. The notable contribution of ML to scientific discovery in the context of biology is that it can aid humans in overcoming potential bias by exploring more systematically the space of possible hypotheses implied by a theory.
In this article, we propose the Fair Priority Model for COVID-19 vaccine distribution, and emphasize three fundamental values we believe should be considered when distributing a COVID-19 vaccine among countries: benefiting people and limiting harm, prioritizing the disadvantaged, and equal moral concern for all individuals. The Priority Model addresses these values by focusing on mitigating three types of harms caused by COVID-19: death and permanent organ damage; indirect health consequences, such as health care system strain and stress; and economic destruction. It proposes proceeding in three phases: the first addresses premature death, the second long-term health issues and economic harms, and the third aims to contain viral transmission fully and restore pre-pandemic activity.

To those who may deem an ethical framework irrelevant because of the belief that many countries will pursue "vaccine nationalism," we argue such a framework still has broad relevance. Reasonable national partiality would permit countries to focus on vaccine distribution within their borders up until the rate of transmission is below 1, at which point there would not be sufficient vaccine-preventable harm to justify retaining a vaccine. When a government reaches the limit of national partiality, it should release vaccines for other countries.

We also argue against two other recent proposals. Distributing a vaccine proportional to a country's population mistakenly assumes that equality requires treating differently situated countries identically. Prioritizing countries according to the number of front-line health care workers, the proportion of the population over 65, and the number of people with comorbidities within each country may exacerbate disadvantage and end up giving the vaccine in large part to wealthy nations.
Recently, biologists have argued that data-driven biology fosters a new scientific methodology; namely, one that is irreducible to the traditional methodologies of molecular biology, defined as the discovery strategies elucidated by mechanistic philosophy. Here I show how data-driven studies can be incorporated into the traditional mechanistic approach in two respects. On the one hand, some studies provide eliminative inferential procedures to prioritize and develop mechanistic hypotheses. On the other, different studies play an exploratory role in providing useful generalizations to complement the procedure of prioritization. Overall, this paper aims to shed light on the structure of contemporary research in molecular biology.
In the past few years, the ethical ramifications of AI technologies have been at the center of intense debates. Considerable attention has been devoted to understanding how a morally responsible practice of data science can be promoted and which values have to shape it. In this context, ethics and moral responsibility have been mainly conceptualized as compliance with widely shared principles. However, several scholars have highlighted the limitations of such a principled approach. Drawing from microethics and the virtue theory tradition, in this paper we formulate a different approach to ethics in data science, which is based on a different conception of "being ethical" and, ultimately, of what it means to promote a morally responsible data science. First, we develop the idea that, rather than mere compliance, ethical decision-making consists in using certain moral abilities, which are cultivated by practicing and exercising them in the data science process. An aspect of virtue development that we discuss here is moral attention, which is the ability of data scientists to identify the ethical relevance of their own technical decisions in data science activities. Next, by elaborating on the capability approach, we define a technical act as ethically relevant when it impacts one or more of the basic human capabilities of data subjects. Therefore, rather than "applying ethics", data scientists should cultivate ethics as a form of reflection on how technical choices and ethical impacts shape one another. Finally, we show how this microethical framework concretely works by dissecting the ethical dimension of the technical procedures involved in data understanding and preparation of electronic health records.
In arguing against a supposed ambiguity, philosophers often rely on the zeugma test. In an application of the zeugma test, a supposedly ambiguous expression is placed in a sentence in which several of its supposed meanings are forced together. If the resulting sentence sounds zeugmatic, that is taken as evidence for ambiguity; if it does not sound zeugmatic, that is taken as evidence against ambiguity. The aim of this article is to show that arguments based on the second direction of the test are misguided: ambiguous expressions, and in particular philosophically contested ones, do not reliably lead to zeugmaticity, so an absence of zeugmaticity provides no meaningful evidence for an absence of ambiguity.
In Emanuel Adler's distinctive constructivist approach to international relations theory, international practices evolve in tandem with collective knowledge of the material and social worlds. This book - comprising a selection of his journal publications, a new introduction and three previously unpublished articles - points IR constructivism in a novel direction, characterized as 'communitarian'. Adler's synthesis does not herald the end of the nation-state; nor does it suggest that agency is unimportant in international life. Rather, it argues that what mediates between individual and state agency and social structures are communities of practice, which are the wellspring and repositories of collective meanings and social practices. The concept of communities of practice casts new light on epistemic communities and security communities, helping to explain why certain ideas congeal into human practices and others do not, and which social mechanisms can facilitate the emergence of normatively better communities.
Allocation of very scarce medical interventions such as organs and vaccines is a persistent ethical challenge. We evaluate eight simple allocation principles that can be classified into four categories: treating people equally, favouring the worst-off, maximising total benefits, and promoting and rewarding social usefulness. No single principle is sufficient to incorporate all morally relevant considerations and therefore individual principles must be combined into multiprinciple allocation systems. We evaluate three systems: the United Network for Organ Sharing points systems, quality-adjusted life-years, and disability-adjusted life-years. We recommend an alternative system—the complete lives system—which prioritises younger people who have not yet lived a complete life, and also incorporates prognosis, save-the-most-lives, lottery, and instrumental value principles.
Pictures are notably absent from the current debate about how to define lying. Theorists in this debate tend to focus on linguistic means of communication and do not consider the possibility of lying with photographs, drawings and other kinds of pictures. The aim of this paper is to show that such a narrow focus is misguided: there is a strong case to be made for the possibility of lying with pictures, and this possibility allows for insights concerning the question of how lying should be defined.
In the past few years, machine learning (ML) tools have been implemented with success in the medical context. However, several practitioners have raised concerns about the lack of transparency—at the algorithmic level—of many of these tools, and solutions from the field of explainable AI (XAI) have been seen as a way to open the 'black box' and make the tools more trustworthy. Recently, Alex London has argued that in the medical context we do not need machine learning tools to be interpretable at the algorithmic level to make them trustworthy, as long as they meet some strict empirical desiderata. In this paper, we analyse and develop London's position. In particular, we make two claims. First, we claim that London's solution to the problem of trust can potentially address another problem, which is how to evaluate the reliability of ML tools in medicine for regulatory purposes. Second, we claim that to deal with this problem, we need to develop London's views by shifting the focus from the opacity of algorithmic details to the opacity of the way in which ML tools are trained and built. We claim that to regulate AI tools and evaluate their reliability, agencies need an explanation of how ML tools have been built, which requires documenting and justifying the technical choices that practitioners have made in designing such tools. This is because different algorithmic designs may lead to different outcomes, and to the realization of different purposes. However, given that technical choices underlying algorithmic design are shaped by value-laden considerations, opening the black box of the design process also means making transparent, and motivating, the (technical and ethical) values and preferences behind such choices. Using tools from philosophy of technology and philosophy of science, we elaborate a framework showing what an explanation of the training processes of ML tools in medicine should look like.
Background: Health professionals are involved in the care pathways of patients with different medical conditions. Their working life is frequently characterized by psychopathological outcomes, so that consistent burdens can be identified. Besides the possibility of developing pathological outcomes, protective factors such as resilience play a fundamental role in facilitating the adaptation process and the management of maladaptive patterns. Personal characteristics and specific indexes such as burdens and resilience are essential variables for studying in depth both ongoing conditions and possible interventions. The study aimed at highlighting the presence of, and the relations among, personal variables, burdens, and resilience, in order to understand health professionals' specific structure and functions.
Methods: The observation group was composed of 210 participants, 55 males and 155 females, aged from 18 to 30 years, with a mean age of 25.92 years. The study considered personal characteristics of the subjects, such as age, gender, years of study, days of work per week, hours of work per week, and years of work. The study was conducted with measures related to burdens and resilience.
Results: The analyses consisted of descriptive statistics, correlations, and regressions among the considered variables. Several significant correlations emerged among personal characteristics, CBI, and RSA variables. Specifically, age and work-commitment indexes appeared to be significantly related to the development of burdens, unlike the years of study. Significant correlations emerged between personal and RSA variables, indicating precise directions for both domains. Age and gender were identified as predictors in multivariate regression analyses concerning CBI factors. Significant dependence relations emerged with reference to all CBI variables.
Conclusion: Pathological outcomes and resilience factors represent two sides of the experiences of health professionals, also known as "invisible patients." Greater knowledge of present conditions and future possibilities is a well-known need in the literature, and the current analyses considered fundamental factors. In line with the state of the art, future studies are needed to deepen our understanding of the elusive phenomena underlying maladjustment.
Drawing on evolutionary epistemology, process ontology, and a social-cognition approach, this book suggests cognitive evolution, an evolutionary-constructivist social and normative theory of change and stability of international social orders. It argues that practices and their background knowledge survive preferentially, communities of practice serve as their vehicle, and social orders evolve. As an evolutionary theory of world ordering, which does not borrow from the natural sciences, it explains why certain configurations of practices organize and govern social orders epistemically and normatively, and why and how these configurations evolve from one social order to another. Suggesting a multiple and overlapping international social orders' approach, the book uses three running cases of contested orders - Europe's contemporary social order, the cyberspace order, and the corporate order - to illustrate the theory. Based on the concepts of common humanity and epistemological security, the author also submits a normative theory of better practices and of bounded progress.
Molecular biologists exploit information conveyed by mechanistic models for experimental purposes. In this article, I make sense of this aspect of biological practice by developing Keller's idea of the distinction between 'models of' and 'models for'. 'Models of (phenomena)' should be understood as models representing phenomena and are valuable if they explain phenomena. 'Models for (manipulating phenomena)' are new types of material manipulations and are important not because of their explanatory force, but because of the interventionist strategies they afford. This is a distinction between aspects of the same model. In molecular biology, models may be treated either as 'models of' or as 'models for'. By analysing the discovery and characterization of restriction–modification systems and their exploitation for DNA cloning and mapping, I identify the differences between treating a model as a 'model of' or as a 'model for'. These lie in the cognitive disposition of the modeller towards the model: a modeller will look at a model as a 'model of' if interested in its explanatory force, or as a 'model for' if interested in the material manipulations it can possibly afford.
We argue that mechanistic models elaborated by machine learning cannot be explanatory by discussing the relation between mechanistic models, explanation and the notion of intelligibility of models. We show that the ability of biologists to understand the model that they work with severely constrains their capacity to turn the model into an explanatory model. The more complex a mechanistic model is, the less explanatory it will be. Since machine learning improves its performance as more components are added, it generates models which are not intelligible, and hence not explanatory.
In the last few years, biologists and computer scientists have claimed that the introduction of data science techniques in molecular biology has changed the characteristics and the aims of the typical outputs (i.e. models) of that discipline. In this paper we critically examine this claim. First, we identify the received view on models and their aims in molecular biology: models in molecular biology are mechanistic and explanatory. Next, we identify the scope and aims of data science (machine learning in particular). These lie mainly in the creation of predictive models whose performance increases as data sets grow. Next, we identify a tradeoff between predictive and explanatory performance by comparing the features of mechanistic and predictive models. Finally, we show how this a priori analysis of machine learning and mechanistic research applies to actual biological practice. This is done by analyzing the publications of a consortium—The Cancer Genome Atlas—which stands at the forefront of integrating data science and molecular biology. The result is that biologists have to deal with the tradeoff between explaining and predicting that we have identified, and hence the explanatory force of the 'new' biology is substantially diminished if compared to the 'old' biology. However, this aspect also emphasizes the existence of other research goals which make predictive force independent from explanation.
What is the content of a sentence in context? A proposition, says the standard propositional view accepted in much of semantics. A set of propositions, says the hitherto little-explored view of Semantic Pluralism. The aim of this book is to motivate, develop and defend Semantic Pluralism. To achieve this aim, the book puts forward two arguments against Contextualism, the most popular propositional theory. It spells out two versions of Semantic Pluralism: Flexible Pluralism, which takes many expressions to be context-sensitive, and Strong Pluralism, which denies that context-sensitivity is widespread. And it shows how Pluralists can reply to several objections that have been lodged against non-propositional semantic theories.
1. The opinions expressed are the author's own. They do not reflect any position or policy of the National Institutes of Health, the Public Health Service, the Department of Health and Human Services, or any of the author's affiliated organizations.
Intentionalism is the view that demonstratives, gradable adjectives, quantifiers, modals and other context-sensitive expressions are intention-sensitive: their semantic value on a given use is fixed by speaker intentions. The first aim of this paper is to defend Intentionalism against three recent objections, according to which speakers at least sometimes do not have suitable intentions when using supposedly intention-sensitive expressions. Its second aim is to thereby shed light on the so far little-explored question of which kinds of intentions can be semantically relevant.
This book offers the first systematic interpretation of Husserl's Ideas for a Pure Phenomenology and Phenomenological Philosophy on the basis of the new critical edition of Ideas II. It enables a phenomenological reading of the general metaphysical problem of how physical, mental and social facts are connected. The book discusses and interprets in detail some of Husserl's central conceptions and shows the consequences of his approach and of the development of his theory. According to Husserl, nature and communal spirit are the basic concepts of the naturalistic and the personalistic attitudes, and they serve as a guide for the distinction between the natural sciences and the humanities. In critical engagement with this dualism in the theory of science, Husserl methodically introduces the concept of habitus in order to reinterpret phenomenologically, on the basis of concrete experience, the relation between the ontology of nature and social ontology, thereby preparing the later, anti-dualistic path of the phenomenology of the lifeworld. In Husserl's studies on the regional ontology of the communal spirit, the concrete subject moves to the foreground of the analysis of intentionality, as the meaning-bearing elements of inactuality are traced back to processes of habitualization, and intersubjectivity to stages of socialization. Thanks to the clear distinction, made possible by the concept of habitus, between constituting actuality and the constitutive relevance of the inactual horizon, Husserl's philosophy of mind can count as individualistic and holistic at the same time. This ontological position corresponds to Husserl's social-epistemological view that the sciences can only unfold within the framework of idealized social structures. Through these idealizing operations, the constitution of the objectivity at which the sciences aim becomes possible. Their rationality must therefore be interrogated in its concrete and idealized stages of sociality and its habitualities.
In biology—as in other scientific fields—there is a lively opposition between big and small science projects. In this commentary, I try to contextualize this opposition in the field of biomedicine, and I argue that, at least in this context, big science projects should come first.