This paper defends the thesis of learning from non-causal models: viz. that the study of some model can prompt justified changes in one’s confidence in empirical hypotheses about a real-world target in the absence of any known or predicted similarity between model and target with regard to their causal features. Recognizing that we can learn from non-causal models matters not only to our understanding of past scientific achievements, but also to contemporary debates in the philosophy of science. At one end of the philosophical spectrum, my thesis undermines the views of those who, like Cartwright, follow Hesse in restricting the possibility of learning from models to only those situations where a model identifies some causal factors present in the target. At the other end of the spectrum, my thesis also helps undermine some extremely permissive positions, e.g., Grüne-Yanoff’s (Erkenntnis 70:81–99, 2009; Philos Sci 80:850–861, 2013) claim that learning from a model is possible even in the absence of any similarity at all between model and target. The thesis that we can learn from non-causal models offers a cautious middle ground between these two extremes.
Arguments from non-causal analogy form a distinctive class of analogical arguments in science not recognized in authoritative classifications by, e.g., Hesse and Bartha. In this paper, I illustrate this novel class of scientific analogies by means of historical examples from physics, biology and economics, at the same time emphasizing their broader significance for contemporary debates in epistemology.
This paper proposes a framework for representing in Bayesian terms the idea that analogical arguments of various degrees of strength may provide inductive support to yet untested scientific hypotheses. On this account, contextual information plays a crucial role in determining whether, and to what extent, a given similarity or dissimilarity between source and target may confirm an empirical hypothesis over a rival one. In addition to showing confirmation by analogy to be compatible with the adoption of a Bayesian standpoint, the proposal outlined in this paper reveals a close agreement between the fulfillment of Hesse’s criteria for analogical arguments capable of inductive support and the attribution of confirmatory power by the lights of Bayesian confirmation theory. In this sense, the Bayesian representation not only enriches a framework, Hesse’s, of enduring relevance for understanding scientific activity, but may offer something akin to a proof of concept of it.
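To make the Bayesian idea concrete, here is a minimal, generic sketch of confirmation via likelihood ratios; the function and the numbers are illustrative assumptions, not the paper’s own formalism.

```python
# A minimal sketch (illustrative, not the paper's formalism): evidence E, e.g. a
# report of a relevant similarity between source and target, confirms hypothesis H
# over its rival exactly when E is more expected under H than under the rival.

def posterior(prior_h, likelihood_h, likelihood_rival):
    """Posterior probability of H given E, treating not-H as the rival hypothesis."""
    prior_rival = 1 - prior_h
    numerator = prior_h * likelihood_h
    return numerator / (numerator + prior_rival * likelihood_rival)

# Context fixes how strongly the similarity is expected under each hypothesis.
# With these illustrative numbers the analogy raises the probability of H:
print(posterior(prior_h=0.3, likelihood_h=0.8, likelihood_rival=0.4))  # ~0.46 > 0.3
```

On this generic reading, the strength of an analogical argument tracks how far the likelihood ratio departs from 1, which is one natural way of cashing out ‘various degrees of strength’ in Bayesian terms.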
Building upon work by Mary Hesse (1974), this paper aims to show that a single method of investigation lies behind Maxwell’s use of physical analogies in his major scientific works before the Treatise on Electricity and Magnetism. Key to understanding the operation of this method is to recognize that Maxwell’s physical analogies are intended to possess an ‘inductive’ function in addition to an ‘illustrative’ one. That is to say, they not only serve to clarify the equations proposed for an unfamiliar domain with a working interpretation drawn from a more familiar science, but can also be sources of defeasible yet relatively strong arguments from features of the more familiar domain to features of the less familiar one. Compared with the reconstructions by Achinstein (1991), Siegel (1991), Harman (1998) and others, which postulate a discontinuity in Maxwell’s approach to physical analogy, the account defended in this paper i) makes sense of the continuity in Maxwell’s remarks on scientific methodology, ii) explains his quest for a “mathematical classification of physical quantities” and iii) offers a new and more plausible interpretation of the debated episode of the introduction of the displacement current in Maxwell’s “On Physical Lines of Force”.
Few contributions in the field of metaphysics can be compared, for their depth and impact, to the work of the American philosopher David K. Lewis. A feature of this work, which partly explains its great appeal, is its systematicity. Lewis’s views on intrinsicality, naturalness, supervenience, mind and modality, to mention just a few themes, constitute a unified and connected body of doctrines. As Lewis himself acknowledged in the introduction to the first volume of his collected papers: “I should have liked to be a piecemeal, unsystematic philosopher, offering independent proposals on a variety of topics. It was not to be”. Surely there is an element of beauty in this systematicity. But there is also an element of precariousness. For a body of doctrines has some vital organs: claims or assumptions that are so central to the life of the system that, if one were to reject them, the system as a whole would likely collapse. This seems to be true, in particular, of Lewis’s metaphysical system. What I present here are two investigations concerning, respectively, the problem of ontic vagueness and the existence of a fundamental level. I believe that the evaluation of these two issues is of vital importance for assessing the tenability of Lewis’s systematic metaphysics. There is a general worry lurking behind my discussion, which is worth making explicit here. The worry is that, if the justification for Lewis’s claims on ontic vagueness and fundamentality turned out to be wanting or otherwise unsatisfactory, and if I am right to think that these theses are among the central claims constituting the basis of his metaphysical system, then it seems we should start being suspicious of the very tenability of Lewisian metaphysics as a whole. What I argue for in the two main chapters of this dissertation provides, in my view, enough material for a modest defense of Lewis’s views on ontic vagueness and fundamentality. I will clarify the content of this modest defense, and explain its significance for the development of the contemporary debate in metaphysics, as well as for a redefinition of a kind of Lewisian metaphysics, in a brief note at the end of the introduction.
This article presents Francesco Patrizi’s criticisms of the Aristotelian conception of time in the Physics, that is, Patrizi’s critique of the principle that time is infinite in the sense of mathematical infinity. Patrizi’s main thesis is that the “possible infinity” of mathematics entails contradictions when applied to natural substances and to natural science in general.
This volume, with its plurality of contributions from colleagues and students, constitutes a veritable map of the interests Francesco Moiso cultivated and of the relationships he wove with Italian and foreign scholars and institutions over the years; at the same time it offers a faithful mirror of the research themes he engaged with most closely, as the bibliography of his works included in this book attests.
On the one hand, after Matteo d'Acquasparta's distinction between the three types of eternity and the temporal necessity of the past, Meyronnes radicalized Scotus's dynamic vision of duration, conceiving modality as a relation of implication between predicate and existing subject, and time as a relationship between Creator and creature. On the other hand, after Ockham denied the real simultaneity of opposed potencies, the Ockhamist extension of temporal necessity to the present was rejected by Gregory of Rimini, who was favourable, together with Wodeham, to the mutability of the past in a divided sense. Mirecourt, well versed in the English subtleties, appears to follow Gregory and tries to find a solution to the interaction between the two contingencies, from top to bottom, which had been formalized by Gregory: if I, by performing or not performing X, can act as if God, as the supreme intellect, could from eternity have known or not known that X would come about, and if God as agent, absolutely willing, omnipotent and incapable of being impeded from eternity, can act as if X happened or did not happen, then can I act as if X, which is from eternity, did not happen from eternity?
Understanding Institutions proposes a new unified theory of social institutions that combines the best insights of philosophers and social scientists who have written on this topic. Francesco Guala presents a theory that combines the features of three influential views of institutions: as equilibria of strategic games, as regulative rules, and as constitutive rules. Guala explains key institutions like money, private property, and marriage, and develops a much-needed unification of equilibrium- and rules-based approaches. Although he uses game theory concepts, the theory is presented in a simple, clear style that is accessible to a wide audience of scholars working in different fields. Outlining and discussing various implications of the unified theory, Guala addresses venerable issues such as reflexivity, realism, Verstehen, and fallibilism in the social sciences. He also critically analyses the theory of “looping effects” and “interactive kinds” defended by Ian Hacking, and asks whether it is possible to draw a demarcation between social and natural science using the criteria of causal and ontological dependence. Focusing on current debates about the definition of marriage, Guala shows how these abstract philosophical issues have important practical and political consequences. Moving beyond specific cases to general models and principles, Understanding Institutions offers new perspectives on what institutions are, how they work, and what they can do for us.
Conversations between Gianni Vattimo and Francesco Barone, Remo Bodei, Italo Mancini, Vittorio Mathieu, Mario Perniola, Pier Aldo Rovatti, Emanuele Severino, and Carlo Sini.
A positive topology is a set equipped with two particular relations between elements and subsets of that set: a convergent cover relation and a positivity relation. A set equipped with a convergent cover relation is a predicative counterpart of a locale, where the given set plays the role of a set of generators, typically a base, and the cover encodes the relations between generators. A positivity relation enriches the structure of a locale; among other things, it is a tool to study some particular subobjects, namely the overt weakly closed sublocales. We relate the category of locales to that of positive topologies and we show that the former is a reflective subcategory of the latter. We then generalize such a result to the category of suplattices, which we present by means of cover relations. Finally, we show that the category of positive topologies also generalizes that of formal topologies, that is, overt locales.
The experimental approach in economics is a driving force behind some of the most exciting developments in the field. The 'experimental revolution' was based on a series of bold philosophical premises which have until now remained mostly unexplored. This book provides the first comprehensive analysis and critical discussion of the methodology of experimental economics, written by a philosopher of science with expertise in the field. It outlines the fundamental principles of experimental inference in order to investigate their power, scope and limitations. The author demonstrates that experimental economists have a lot to gain by discussing openly the philosophical principles that guide their work, and that philosophers of science have a lot to learn from the ingenious techniques devised by experimenters in order to tackle difficult scientific problems.
Objective correlates—behavioral, functional, and neural—provide essential tools for the scientific study of consciousness. But reliance on these correlates should not lead to the ‘fallacy of misplaced objectivity’: the assumption that only objective properties should and can be accounted for objectively through science. Instead, what needs to be explained scientifically is what experience is intrinsically—its subjective properties—not just what we can do with it extrinsically. And it must be explained; otherwise the way experience feels would turn out to be magical rather than physical. We argue that it is possible to account for subjective properties objectively once we move beyond cognitive functions and realize what experience is and how it is structured. Drawing on integrated information theory, we show how an objective science of the subjective can account, in strictly physical terms, for both the essential properties of every experience and the specific properties that make particular experiences feel the way they do.
In this paper, we take a meta-theoretical stance and aim to compare and assess two conceptual frameworks that endeavor to explain phenomenal experience. In particular, we compare Feinberg and Mallatt’s Neurobiological Naturalism (NN) and Tononi and colleagues’ Integrated Information Theory (IIT), given that the former authors have pointed out some similarities between the two theories (Feinberg & Mallatt 2016c-d). To probe their similarity, we first give a general introduction to both frameworks. Next, we expound a ground plan for carrying out our analysis. We move on to articulate a philosophical profile of NN and IIT, addressing their ontological commitments and epistemological foundations. Finally, we compare the two point by point, also discussing how they stand on the issue of artificial consciousness.
The Risk of Freedom presents an in-depth analysis of the philosophy of Jan Patočka, one of the most influential Central European thinkers of the twentieth century, examining both the phenomenological and the ethical-political aspects of his work. In particular, Francesco Tava takes an original approach to the problem of freedom, which represents a recurring theme in Patočka’s work, both in his early and in his later writings. Freedom is conceived of as a difficult and dangerous experience. In his deep analysis of this particular problem, Tava identifies the authentic ethical content of Patočka’s work and clarifies its connections with phenomenology, history of philosophy, politics and dissidence. The Risk of Freedom retraces Patočka’s philosophical journey and elucidates its more problematic and less evident traits, such as his original ethical conception, his political ideals and his direct commitment as a dissident.
Recent debates on the nature of preferences in economics have typically assumed that they are to be interpreted either as behavioural regularities or as mental states. In this paper I challenge this dichotomy and argue that neither interpretation is consistent with scientific practice in choice theory and behavioural economics. Preferences are belief-dependent dispositions with a multiply realizable causal basis, which explains why economists are reluctant to make a commitment about their interpretation.
This book represents a unique attempt to restore a 'new-classical' aspiration towards a philosophical system able to provide some certainties. By presenting an original and complete philosophical system, Francesco Belfiore diverges from the philosophical literature of the last decades, which has become ever more focused upon specific fields.
This study considers the contribution of Francesco Patrizi da Cherso to the development of the concepts of void space and an infinite universe. Patrizi plays a greater role in the development of these concepts than any other single figure in the sixteenth century, and yet his work has been almost totally overlooked. I have outlined his views on space in terms of two major aspects of his philosophical attitude: on the one hand, he was a devoted Platonist and always sought to establish Platonism, albeit his own version of it, as the only correct philosophy; on the other hand, he was more determinedly anti-Aristotelian than any other philosopher of his time. Patrizi's concept of space has its beginnings in Platonic notions, but is extended and refined in the light of a vigorous critique of Aristotle's position. Finally, I consider the influence of Patrizi's ideas in the seventeenth century, when various thinkers were seeking to overthrow the Aristotelian concept of place and the equivalence of dimensionality with corporeality. Pierre Gassendi, for example, needed a coherent concept of void space in which his atoms could move, while Henry More sought to demonstrate the reality of incorporeal entities by reference to an incorporeal space. Both men could find the arguments they needed in Patrizi's comprehensive treatment of the subject.
Strong Reciprocity theorists claim that cooperation in social dilemma games can be sustained by costly punishment mechanisms that eliminate incentives to free ride, even in one-shot and finitely repeated games. There is little doubt that costly punishment raises cooperation in laboratory conditions. Its efficacy in the field, however, is controversial. I distinguish two interpretations of the experimental results, and show that the wide interpretation endorsed by Strong Reciprocity theorists is unsupported by ethnographic evidence on decentralised punishment and by historical evidence on common pool institutions. The institutions that spontaneously evolve to solve dilemmas of cooperation typically exploit low-cost mechanisms, turning finite games into indefinitely repeated ones and eliminating the cost of sanctioning.
Current debates in social ontology are dominated by approaches that view institutions either as rules or as equilibria of strategic games. We argue that these two approaches can be unified within an encompassing theory based on the notion of correlated equilibrium. We show that in a correlated equilibrium each player follows a regulative rule of the form ‘if X then do Y’. We then criticize Searle's claim that constitutive rules of the form ‘X counts as Y in C’ are fundamental building blocks for institutions, showing that such rules can be derived from regulative rules by introducing new institutional terms. Institutional terms are introduced for economy of thought, but are not necessary for the creation of social reality.
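To illustrate the claim that in a correlated equilibrium each player follows a regulative rule of the form ‘if X then do Y’, here is a minimal sketch using the textbook Hawk-Dove (‘Chicken’) game with a public correlation device; the payoffs and the signal distribution are standard illustrative assumptions, not taken from the paper.

```python
# A minimal sketch, assuming the classic Hawk-Dove ("Chicken") payoffs.
# Each player observes only their own recommendation and follows the
# regulative rule "if told X, then do X".

ACTIONS = ["Dare", "Chicken"]

# payoffs[(a1, a2)] = (payoff to player 1, payoff to player 2)
payoffs = {
    ("Dare", "Dare"): (0, 0),
    ("Dare", "Chicken"): (7, 2),
    ("Chicken", "Dare"): (2, 7),
    ("Chicken", "Chicken"): (6, 6),
}

# The correlation device draws one action profile and tells each player only their part.
device = {
    ("Dare", "Chicken"): 1 / 3,
    ("Chicken", "Dare"): 1 / 3,
    ("Chicken", "Chicken"): 1 / 3,
}

def is_correlated_equilibrium(payoffs, device):
    """True if obeying one's recommendation is optimal for both players."""
    for player in (0, 1):
        for told in ACTIONS:
            # Conditional beliefs about the opponent's action, given the recommendation.
            cond = {}
            for profile, p in device.items():
                if profile[player] == told:
                    opp = profile[1 - player]
                    cond[opp] = cond.get(opp, 0.0) + p
            total = sum(cond.values())
            if total == 0:
                continue  # this recommendation is never issued
            cond = {a: p / total for a, p in cond.items()}

            def expected(action):
                def profile_for(opp):
                    return (action, opp) if player == 0 else (opp, action)
                return sum(p * payoffs[profile_for(opp)][player]
                           for opp, p in cond.items())

            if any(expected(d) > expected(told) + 1e-9 for d in ACTIONS):
                return False  # a profitable deviation from the rule exists
    return True

print(is_correlated_equilibrium(payoffs, device))  # prints True
```

Following the recommendation is a best reply to the beliefs it induces, which is the sense in which rule-following and equilibrium play coincide in this kind of account.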
Manufacturing and industry practices are undergoing an unprecedented revolution as a consequence of the convergence of emerging technologies such as artificial intelligence, robotics, cloud computing, and virtual and augmented reality, among others. This fourth industrial revolution is likewise changing the practices and capabilities of operators in their industrial environments. This paper introduces and explores the notion of the Operator 4.0, as well as how this novel way of conceptualizing the human operator necessarily implicates human values in the technologies that constitute it. The design approach known as value sensitive design (VSD) is used to explore how these Operator 4.0 technologies can be designed for human values. Expert elicitation surveys were used to determine the values of industry stakeholders, and examples of how the VSD methodology can be adopted by engineers in order to design for these values are illustrated. The results provide preliminary strategies that industrial teams can adopt to design Operator 4.0 technology for human values.
Medieval Sovereignty examines the idea of sovereignty in the Middle Ages and asks if it can be considered a fundamental element of medieval constitutional order.
Language evolution, understood as an open problem in the evolutionary research programme, will be analyzed here from the theoretical perspective advanced by supporters of the Extended Evolutionary Synthesis. Four factors and two associated concepts will be matched with a selection of critical examples from the evolution of the genus Homo that are relevant to the evolution of language, such as the evolution of hominin life-history traits, the enlargement of the social group, increased cooperation among individuals, behavioral change and innovation, and heterochronic modifications leading to increased synaptic plasticity. A particular form of niche construction will be considered in a multilevel framework. It will be argued that the four factors mentioned above prove to be fundamental explanatory tools for understanding how language might have emerged as the result of a gene-culture coevolutionary dynamics.
Background: Providing understandable information to patients is necessary to achieve the aims of the Informed Consent process: respecting and promoting patients’ autonomy and protecting patients from harm. In recent decades, new, primarily digital technologies have been used to apply and test innovative formats of Informed Consent. We conducted a systematic review to explore the impact of using digital tools for Informed Consent in both clinical research and clinical practice. Understanding, satisfaction and participation were compared for digital tools versus the non-digital Informed Consent process. Methods: We searched for studies on available electronic databases, including PubMed, EMBASE, and Cochrane. Studies were identified using specific MeSH terms and keywords. We included studies, published from January 2012 to October 2020, that focused on the use of digital Informed Consent tools for clinical research or clinical procedures. Digital interventions were defined as interventions that used multimedia or audio–video to provide information to patients. We classified the interventions into 3 different categories: video only, non-interactive multimedia, and interactive multimedia. Results: Our search yielded 19,579 publications. After title and abstract screening, 100 studies were retained for full-text analysis, of which 73 publications were included. Studies examined interactive multimedia, non-interactive multimedia, and videos, and most studies were conducted on adults. Innovations in consent were tested for clinical/surgical procedures and clinical research. For research Informed Consent, 21 outcomes were explored, with a positive effect on at least one of the studied outcomes observed in 8/12 studies. For clinical/surgical procedures, 49 outcomes were explored, and 21/26 studies reported a positive effect on at least one of the studied outcomes. Conclusions: Digital technologies for Informed Consent were not found to negatively affect any of the outcomes, and overall, multimedia tools seem desirable. Multimedia tools indicated a higher impact than videos only. The presence of a researcher may potentially enhance the efficacy of different outcomes in research Informed Consent processes. Studies were heterogeneous in design, making evaluation of impact challenging. Robust study design including standardization is needed to conclusively assess impact.
The Triadic Structure of the Mind provides a philosophical system that offers fresh solutions in the fields of ontology, knowledge, ethics, and politics. The second edition includes a more extensive treatment of the topics addressed in the first edition, the introduction of new concepts, and the inclusion of additional thinkers.
This extensive collection develops the philosophical content of sections from the previously published The Structure of the Mind. Dr. Belfiore begins from the basic ontological conception that considers the human 'mind' or 'spirit' as an evolving, conscious triad composed of intellect, sensitivity, and power, each exerting a selfish or moral activity. Through this approach the author develops new concepts in ethics, political philosophy, and the philosophy of law. Dr. Belfiore advances these and other concepts on the view that issues concerning human beings can only be discussed by referring to what humans are as ontological entities. Thus, the notions of good and the norms of morality, law, and society are derived from the structure and function of the mind. It follows that the solutions Dr. Belfiore presents are the results of a discovery and not the consequence of a conscious choice. Otherwise stated, ethics, politics, and law are given an ontological foundation. For each topic considered, Dr. Belfiore shows how his thought can reinterpret the views of other philosophers. The result is an innovative and highly stimulating text, which is of interest to graduate students and scholars in the philosophical branches of ethics, politics, and law.
The paper aims to explore the phenomenon of the spread in democracy of new powers – produced by inexhaustible technological developments – from the perspective of the philosophy of Institutions. It traces the original idea of democracy, in which the «government of the people» arises from the conversion of natural liberty into social and political liberty, dwells on the political and juridical meaning of authority, analyses the traditional instruments used to condition human opinions and behaviours, and reconstructs – in light of this itinerary – the functioning and new grammar of the digital order. What opens before us is a fluid and disorganized scenario, dominated by digital systems, algorithms and artificial intelligence, that draws the attention of philosophers and sociologists, jurists and scholars of language and of anthropology. The old single order, outlined by the political and juridical machine of the modern State – which, through an aloof and solemn language, aimed to impart regularity to human behaviour and to give society direction – is replaced by multifarious models of order, each of which is generated by its own logic, practices, and autonomous control techniques. Under the omnipotence of technology, concepts such as authority, liberty, truth and power undergo a vortex of semantic transformations that introduce a new symbolic space into human reasoning and action.
What is it for a car, a piece of art or a person to be good, bad or better than another? In this first book-length introduction to value theory, Francesco Orsi explores the nature of evaluative concepts used in everyday thinking and speech and in contemporary philosophical discourse. The various dimensions, structures and connections that value concepts express are interrogated with clarity and incision. Orsi provides a systematic survey of both classic texts, including Plato, Aristotle, Kant, Moore and Ross, and an array of contemporary theorists. The reader is guided through the moral maze of value theory with everyday examples and thought experiments. Rare stamps, Napoleon's hat, evil demons, and Kant's good will are all considered in order to probe our intuitions, question our own and philosophers' assumptions about value, and, ultimately, understand better what we want to say when we talk about value.
Contents: 1. Value and Normativity (1.1 Introduction; 1.2 Which Evaluations?; 1.3 The Idea of Value Theory; 1.4 Value and Normativity; 1.5 Overview; 1.6 Meta-ethical Neutrality; 1.7 Value Theory: The Questions). 2. Meet the Values: Intrinsic, Final & Co. (2.1 Introduction; 2.2 Final and Unconditional Value: Some Philosophical Examples; 2.3 Intrinsic Value and Final Value; 2.4 The Reduction to Facts; 2.5 Intrinsic and Conditional Value; 2.6 Elimination of Extrinsic Value?; 2.7 Summary). 3. The Challenge against Absolute Value (3.1 Introduction; 3.2 Geach and Attributive Goodness; 3.3 Foot and the Virtues; 3.4 Thomson and Goodness in a Way; 3.5 Zimmerman's Ethical Goodness; 3.6 A Better Reply: Absolute Value and Fitting Attitudes; 3.7 Summary). 4. Personal Value (4.1 Introduction; 4.2 Moore on Good and Good For; 4.3 Good For and Fitting Attitudes; 4.4 Moore Strikes Back?; 4.5 Agent-relative Value; 4.6 Impersonal/Personal and Agent-neutral/Agent-relative; 4.7 Summary). 5. The Chemistry of Value (5.1 Introduction; 5.2 Supervenience and Other Relations; 5.3 Organic Unities; 5.4 Alternatives to Organic Unities: Virtual Value; 5.5 Alternatives to Organic Unities: Conditional Value; 5.6 Holism and Particularism; 5.7 Summary). 6. Value Relations (6.1 Introduction; 6.2 The Trichotomy Thesis and Incomparability; 6.3 A Fitting Attitude Argument for Incomparability; 6.4 Against Incomparability: Epistemic Limitations; 6.5 Against Incomparability: Parity; 6.6 Parity and Choice; 6.7 Parity and Incomparability; 6.8 Summary). 7. How Do I Favour Thee? (7.1 Introduction; 7.2 Three Dimensions of Favouring; 7.3 Responses to Value: Maximizing; 7.4 Two Concepts of Intrinsic Value?; 7.5 Summary). 8. Value and the Wrong Kind of Reasons (8.1 Introduction; 8.2 The Fitting Attitude Account and its Rivals; 8.3 The Wrong Kind of Reasons Problem; 8.4 The Structure of the Problem and an Initial Response; 8.5 Reasons for What?; 8.6 Characteristic Concerns and Shared Reasons; 8.7 Circular Path: No-Priority; 8.8 Summary).
This article investigates the effects of perceived supervisor support on ethical and unethical employee behavior using a multi-method approach. Specifically, we test the mediating mechanism and a boundary condition that moderates the relationship between support and ethical employee behaviors. We find that supervisor-based self-esteem fully mediates the relationship between supervisor support and ethical employee behavior, and that employee task satisfaction intensifies the relationship between supervisor support and supervisor-based self-esteem.
This paper presents the evolution of the Islamic debates on iʿādat al-maʿdūm [restoration of the non-existent], examining the notion itself, the motives behind its adoption and rejection, and the arguments for and against its possibility. Restoration consists in an act of recreating a previously annihilated entity while preserving its identity. Most pre-Avicennian theologians accept the possibility of restoration, while disagreeing on one preliminary issue and one derivative issue. Adopting restoration enabled the mutakallimūn to reconcile a corporealist anthropology with the possibility of resurrection. Avicenna presented an influential case against the possibility of restoration consisting of three main arguments: from intuition, from the indiscernibility of a restored entity from its equivalent copy, and from the contradiction entailed by the restoration of time. Among the post-Avicennian schools, only the Ashʿarites defended the possibility of restoration. The debates of the post-Classical period built upon the basic argumentative core outlined by Avicenna and the early mutakallimūn, considering more sophisticated formulations, objections, and answers, as well as designing some totally new arguments both for and against restoration.
Experimental “localism” stresses the importance of context‐specific knowledge, and the limitations of universal theories in science. I illustrate Latour's radical approach to localism and show that it has some unpalatable consequences, in particular the suggestion that problems of external validity (or how to generalize experimental results to nonlaboratory circumstances) cannot be solved. In the last part of the paper I try to sketch a solution to the problem of external validity by extending Mayo's error‐probabilistic approach.
The aim of the present book is to give a comprehensive account of the ‘state of the art’ of substructural logics, focusing both on their proof theory and on their semantics (both algebraic and relational). It is intended for graduate students in philosophy, mathematics, theoretical computer science or theoretical linguistics, as well as for specialists and researchers.