In the remainder of this article, we will disarm an important motivation for epistemic contextualism and interest-relative invariantism. We will accomplish this by presenting a stringent test of whether there is a stakes effect on ordinary knowledge ascription. Having shown that, even on a stringent way of testing, stakes fail to impact ordinary knowledge ascription, we will conclude that we should take another look at classical invariantism. Here is how we will proceed. Section 1 lays out some limitations of previous research on stakes. Section 2 presents our study and concludes that there is little evidence for a substantial stakes effect. Section 3 responds to objections. The conclusion clears the way for classical invariantism.
Philosophers have long debated whether, if determinism is true, we should hold people morally responsible for their actions, since in a deterministic universe people are arguably not the ultimate source of their actions, nor could they have done otherwise if initial conditions and the laws of nature are held fixed. To reveal how non-philosophers ordinarily reason about the conditions for free will, we conducted a cross-cultural and cross-linguistic survey (N = 5,268) spanning twenty countries and sixteen languages. Overall, participants tended to ascribe moral responsibility whether the perpetrator lacked sourcehood or alternate possibilities. However, for American, European, and Middle Eastern participants, being the ultimate source of one’s actions promoted perceptions of free will and control as well as ascriptions of blame and punishment. By contrast, being the source of one’s actions was not particularly salient to Asian participants. Finally, across cultures, participants exhibiting greater cognitive reflection were more likely to view free will as incompatible with causal determinism. We discuss these findings in light of documented cultural differences in the tendency toward dispositional versus situational attributions.
This article examines whether people share the Gettier intuition (viz. that someone who has a true justified belief that p may nonetheless fail to know that p) in 24 sites, located in 23 countries (counting Hong Kong as a distinct country) and across 17 languages. We also consider the possible influence of gender and personality on this intuition with a very large sample size. Finally, we examine whether the Gettier intuition varies across people as a function of their disposition to engage in “reflective” thinking.
Predictivists use the no miracle argument to argue that “novel” predictions are decisive evidence for theories, while mere accommodation of “old” data cannot confirm a theory to a significant degree. But deductivists claim that since confirmation is a logical theory-data relationship, predicted data cannot confirm more than merely deduced data, and they cite historical cases in which known data confirmed theories quite strongly. On the other hand, the advantage of prediction over accommodation is needed by scientific realists to resist Laudan’s criticisms of the no miracle argument. So, if the deductivists are right, the most powerful argument for realism collapses. There seems to be an inescapable contradiction between these prima facie plausible arguments of predictivists and deductivists; but this puzzle can be solved by understanding what exactly counts as novelty, if novel predictions must support the no miracle argument, i.e., if they must be explainable only by the truth of theories. Taking my cues from the use-novelty tradition, I argue that (1) the predicted data must not be used essentially in building the theory or choosing the auxiliary assumptions. This is possible if the theory and its auxiliary assumptions are plausible independently of the predicted data, and I analyze the consequences of this requirement in terms of the best explanation of diverse bodies of data. Moreover, the predicted data must be (2) a priori improbable, and (3) heterogeneous to the essentially used data. My proposed notion of novelty, therefore, is not historical but functional. Hence, deductivists are right that confirmation is independent of time and of historical contingencies such as whether the theorist knew a datum, used it, or intended to accommodate it. Predictivists, however, are right that not all consequences confirm equally, and that confirmation is not a purely logical theory-data relation, as it crucially involves background epistemic conditions and the notion of best explanation. Conditions (1)–(3) make the difference between prediction and accommodation, and account for the confirming power of theoretical virtues such as non-ad-hocness, non-fudging, non-overfitting, independence, and consilience. I thus show that functional novelty (a) avoids the deductivist objections to predictivism, (b) is a gradual notion, in accordance with the common intuition that confirmation comes in degrees, and (c) supports the no miracle argument, so vindicating scientific realism.
Does the Ship of Theseus present a genuine puzzle about persistence due to conflicting intuitions based on “continuity of form” and “continuity of matter” pulling in opposite directions? Philosophers are divided. Some claim that it presents a genuine puzzle but disagree over whether there is a solution. Others claim that there is no puzzle at all since the case has an obvious solution. To assess these proposals, we conducted a cross-cultural study involving nearly 3,000 people across twenty-two countries, speaking eighteen different languages. Our results speak against the proposal that there is no puzzle at all and against the proposal that there is a puzzle but one that has no solution. Our results suggest that there are two criteria—“continuity of form” and “continuity of matter”—that constitute our concept of persistence, and these two criteria receive different weightings in settling matters concerning persistence.
Since at least Hume and Kant, philosophers working on the nature of aesthetic judgment have generally agreed that common sense does not treat aesthetic judgments in the same way as typical expressions of subjective preferences—rather, it endows them with intersubjective validity, the property of being right or wrong regardless of disagreement. Moreover, this apparent intersubjective validity has been taken to constitute one of the main explananda for philosophical accounts of aesthetic judgment. But is it really the case that most people spontaneously treat aesthetic judgments as having intersubjective validity? In this paper, we report the results of a cross-cultural study with over 2,000 respondents spanning 19 countries. Despite significant geographical variations, these results suggest that most people do not treat their own aesthetic judgments as having intersubjective validity. We conclude by discussing the implications of our findings for theories of aesthetic judgment and the purpose of aesthetics in general.
There are two possible realist defense strategies against the pessimistic meta-induction and Laudan’s meta-modus tollens: the selective strategy, claiming that discarded theories are partially true, and the discontinuity strategy, denying that pessimism about past theories can be extended to current ones. A radical version of discontinuity realism is proposed by Gerald Doppelt: rather than discriminating between true and false components within theories, he holds that superseded theories cannot be shown to be even partially true, while present best theories are demonstrably completely true. I argue that this position, running counter to both the cumulativity of science and fallibilism, is untenable; it can account neither for the success of past theories nor for the failures of current theories, and rather than shutting the door on the pessimistic historical objections, it opens it wide. The best strategy, instead, joins the selective idea that there was both some truth and some falsity in discarded theories, as in current ones, with the moderate discontinuity idea that the truth rate in present best theories is much greater than in past ones.
Deployment Realism resists Laudan’s and Lyons’ objections to the “No Miracle Argument” by arguing that a hypothesis is most probably true when it is deployed essentially in a novel prediction. However, Lyons criticized Psillos’ criterion of essentiality, maintaining that Deployment Realism should be committed to all the actually deployed assumptions. But since many actually deployed assumptions proved false, he concludes that the No Miracle Argument and Deployment Realism fail. I reply that the essentiality condition is required by Occam’s razor. In fact, there is a simpler formulation of essentiality which escapes Lyons’ criticisms and rescues the No Miracle Argument and Deployment Realism from their purported historical counterexamples: a hypothesis is essential when it has no proper parts (in Yablo’s sense) sufficient to derive the same prediction. Although essentiality so conceived cannot be detected prospectively, this is just natural, and it is not a problem but an advantage for Deployment Realism.
Is behavioral integration (i.e., the match between a subject’s assertion that p and her non-verbal behavior) a necessary feature of belief in folk psychology? Our data from nearly 6,000 people across twenty-six samples, spanning twenty-two countries, suggest that it is not. Given the surprising cross-cultural robustness of our findings, we suggest that the types of evidence for the ascription of a belief are, at least in some circumstances, lexicographically ordered: assertions are first taken into account, and when an agent sincerely asserts that p, non-linguistic behavioral evidence is disregarded. In light of this, we take ourselves to have discovered a universal principle governing the ascription of beliefs in folk psychology.
Criticisms à la Laudan can block the “no miracles” argument for the (approximate) truth of whole theories. Realists have thus retrenched, arguing that at least the individual claims deployed in the derivation of novel predictions should be considered (approximately) true. But for Lyons (2002) there are historical counterexamples even to this weaker “deployment” realism: he lists a number of novel predictions supposedly derived from (radically) false claims. But if so, those successes would seem unexplainable, even by Lyons’ “modest surrealism” or other surrogates to realism. In fact, I argue, some of those predictions were an easy guess, or independently probable in the light of available evidence; hence, they are not counterexamples to deployment realism, for the no miracles argument would not apply to them. In other instances, pace Lyons, the prediction was actually false, and could be reinterpreted as true only by interpreting as true also the claim from which it was derived; or again, a false claim was employed in the derivation of a true prediction, but inessentially, the essential role being played by a weaker true claim. But as soon as the paradoxical air of such historical cases is explained away in any of these ways, they cease to represent counterexamples to deployment realism. If, as I suggest, all of them can be dealt with by these strategies, a theoretical claim can still be assumed to be true if it is crucial in deriving an improbable novel prediction.
Many formulations of scientific realism (SR) include some commitment to metaphysical realism (MR). On the other hand, authors like Schlick, Carnap and Putnam held forms of scientific realism coupled with metaphysical antirealism (and this has analogies in Kant). So we might ask: do scientific realists really need MR? Or is MR already implied by SR, so that SR is actually incompatible with metaphysical antirealism? And if MR must really be added to SR, why is that so? And which additional arguments do scientific realists need to support it? After reviewing and classifying a number of different kinds of realism, metaphysical and not, I answer that SR and MR are logically independent of each other, so that there is no logical inconsistency in holding SR while rejecting MR. However, I argue that the “no miracle” argument (NMA) not only is the “ultimate” argument for SR, but by the same token it also supports MR. Therefore one cannot effectively defend SR without also subscribing to MR, but this can be done at no additional argumentative cost. I show this by discussing not only the standard version of the NMA, but also three more versions which are not usually considered as such in the literature.
The empirical underdetermination of theories is a philosophical problem which until the last century had not seriously troubled actual science. The reason is that confirmation does not depend only on empirical consequences, and theoretical virtues allow us to choose among empirically equivalent theories. Moreover, I argue that the theories selected in this way are not just pragmatically or aesthetically better, but more probably true. At present in quantum mechanics not even theoretical virtues allow us to choose among the many competing theories and interpretations, but this is because none of them possesses those virtues to a sufficient degree. However, first, we can hope for some future advancement. Second, even if no further progress came forth, all the most credited competitors agree on a substantial core of theoretical assumptions. Therefore underdetermination does not show that we cannot be realists about unobservable entities in general, but at most that in particular fields our inquiry may encounter some de facto limits.
The currently most plausible version of scientific realism is probably “deployment” realism, based on various contributions in the recent literature and worked out as a unitary account by Psillos. According to it, we can believe in the at least partial truth of theories, because that is the best explanation of their predictive success; and discarded theories which had novel predictive success nonetheless had some true parts, those necessary to derive their novel predictions. According to Doppelt, this account cannot withstand the antirealist objections based on the “pessimistic meta-induction” and Laudan’s historical counterexamples. Moreover, it is incomplete, as it purports to explain the predictive success of theories but overlooks the need to explain their explanatory success as well. Accordingly, he proposes a new version of realism, presented as the best explanation of both predictive and explanatory success, and committed only to the truth of the best current theories, not of the discarded ones. Here I argue that Doppelt has not shown that deployment realism as it stands cannot solve the problems raised by the history of science; that explaining explanatory success does not add much to explaining novel predictive success; and that a realism confined to current theories is implausible, and actually the easiest prey to the pessimistic meta-induction argument.
On the basis of Levin’s claim that truth is not a scientific explanatory factor, Michel Ghins argues that the “no miracle” argument (NMA) is not scientific, that therefore scientific realism is not a scientific hypothesis, and that naturalism is wrong. I argue that there are genuine senses of ‘scientific’ and ‘explanation’ in which truth can yield scientific explanations. Hence, the NMA can be considered scientific in the sense that it hinges on a scientific explanation, it follows a typically scientific inferential pattern (IBE), and it is based on an empirical fact (the success of science). Scientific realism, in turn, is scientific in the sense that it is supported both by a meta-level scientific argument (the NMA) and by first-level scientific arguments through semantic ascent and generalization. However, neither the NMA nor scientific realism is purely scientific, since both go beyond properly scientific concerns and require additional philosophical reasoning. In turn, naturalism is correct in the sense that philosophy is continuous with science, partly based on it, and potentially equally well warranted. Besides denying the scientific nature of the NMA, Ghins raises some objections to its cogency, to which I reply in the final section.
Epistemologists have debated at length whether scientific discovery is a rational and logical process. If it is, according to the Artificial Intelligence hypothesis, it should be possible to write computer programs able to discover laws or theories; and if such programs were written, this would definitely prove the existence of a logic of discovery. Attempts in this direction, however, have been unsuccessful: the programs written by Simon’s group do indeed infer famous laws of physics and chemistry, but having found no new law, they cannot properly be considered discovery machines. The programs written in the Turing tradition, instead, produced new and useful empirical generalizations, but no theoretical discoveries, thus failing to prove the logical character of the most significant kind of discoveries. A new cognitivist and connectionist approach by Holland, Holyoak, Nisbett and Thagard looks more promising. Reflection on their proposals helps us understand the complex character of discovery processes, why logical positivists abandoned the belief in a logic of discovery, and the necessity of a realist interpretation of scientific research.
Gerald Doppelt claims that Deployment Realism cannot withstand the antirealist objections based on the “pessimistic meta-induction” and Laudan’s historical counterexamples. Moreover, he holds that it is incomplete, as it purports to explain the predictive success of theories but overlooks the need to explain their explanatory success as well. Accordingly, he proposes a new version of realism, presented as the best explanation of both predictive and explanatory success, and committed only to the truth of the best current theories, not of the discarded ones. Elsewhere I criticized his new brand of realism. Here, instead, I argue that Doppelt has not shown that Deployment Realism cannot solve the problems raised by the history of science; that explaining explanatory success does not add much to explaining novel predictive success; and that Doppelt is right that truth is not a sufficient explanans, but for different reasons, so that this does not refute Deployment Realism but helps to detail it better. In a more explicit formulation, the realist IBE concludes not only to the truth of theories, but also to the reliability of scientists and of scientific method, the order and simplicity of nature, and the approximate truth of background theories.
This is an attempt to sort out what it is that makes many of us uncomfortable with the perdurantist solution to the problem of change. Lewis argues that only perdurantism can reconcile change with persistence over time, while neither presentism nor endurantism can. So, first, I defend the endurantist solution to the problem of change by arguing that what is relative to time is not properties, but their possession. Second, I explore the anti-perdurantist strategy of arguing that Lewis cannot solve the problem of change, for he cannot account for how some properties are possessed by objects in time. However, I argue that this strategy fails: if by saying that objects in time can have properties ‘timelessly’ we mean “at no particular time” and “tenselessly”, only objects outside time can have properties in that way; but if we mean “for all the time they exist”, or “essentially”, perdurantists can account for this. Finally, I argue that perdurantism actually cannot solve the problem, but for different reasons: either it sweeps the problem under the carpet, denying change and in general subverting our conceptual scheme in a dangerous way, or it becomes equivalent to the endurantist picture on which properties are had at times. Nor is perdurantism justified by relativity theory or the B-theory of time, because while endurantism is certainly comfortable with presentism, it need not be committed to it; and even if it were, presentism need not be refuted by relativity theory.
The apparently insoluble problem of a non-circular justification of induction would become more tractable if, instead of asking only what assures us that an observed phenomenon will recur in the same way in a potentially infinite number of future cases, we also asked how to explain the fact that it has so far manifested itself identically and without exception in a finite but very large number of cases. This is the idea of the abductive justification of induction, advanced in different forms by Armstrong, Foster and BonJour: such regular series of phenomena are, from a logical point of view, so improbable that if the world were purely random (that is, if events occurred with a frequency proportional to their logical probability), they could not occur except by a miraculous coincidence. Therefore, such regularities can be explained only by assuming that they are produced by mechanisms or nomic necessities; but if this is the case, it is correct to conclude that these regularities will persist without exception in the future as well, and hence inductive inferences about them are justified. Kornblith, too, argues from the actual success of inductive inferences to the existence of objective regularities in nature, while Sankey starts from the uniformity of nature in order to justify induction. By joining the two arguments, then, one again obtains a justification of induction based on the “no miracles” argument. One obstacle along this path is to specify in what sense nature is uniform (since it obviously is not uniform in every respect) and which regularities we can expect to persist without exception in the future (since evidently not all of them do: water does not always boil at 100 °C, pressure is not always a function of temperature, and so on). Background knowledge and the repetition of observations under different conditions come into play here, showing us which circumstances are relevant to the occurrence of a given phenomenon. In this way, our descriptions of natural regularities converge on descriptions that specify both the individually necessary and jointly sufficient conditions and all the arguments of the functions that constitute those regularities. This makes it possible to formulate the principles of the Uniformity of Nature and of Induction not generically (and hence, depending on the formulation, either vacuously or excessively), but in a circumstantiated way: they then assert, respectively, that nature is uniform in the respects highlighted by the descriptions on which we converge through repeated observations, and that only these descriptions can be inductively generalized. This also makes it possible to solve Goodman-style puzzles such as: since observations have shown us (only) that a given regularity has held up to the present moment t, how can we presume that it will also hold after t? The answer is that observations and background knowledge do not tell us that there is any temporal limit among the necessary conditions of the given regularity.
In an earlier article in this journal I argued that the problem of empirical underdetermination can for the largest part be solved by theoretical virtues, and for the remaining part can be tolerated. Here I confront two further challenges to scientific realism based on underdetermination. First, there are four classes of theories which may seem to be underdetermined even by theoretical virtues. Concerning them, I argue that (i) theories produced by trivial permutations and (ii) “equivalent descriptions” are compatible with the truth of standard theories; instead, (iii) “as if” versions of standard theories are much worse from the point of view of theoretical virtues; finally, (iv) mathematically intertranslatable theories either may become empirically decidable in the future, or can be discriminated by theoretical virtues, or realists may simply plead ignorance about their claims. Secondly, I consider Stanford’s underdetermination with respect to unconceived alternatives, arguing that it essentially relies on the pessimistic meta-induction from the falsity of all past theories. Therefore, it can be resisted by (a) considering the radical advancement of present science with respect to past science, and (b) arguing with selective realism that past successful theories, even if false, always included some true components.
Many philosophers have shown great interest in the recent anti-realist turn in Hilary Putnam's thought, whereby he rejects "metaphysical realism" in favor of "internal realism". However, many have also found it difficult to gain an exact understanding, and hence a correct assessment, of Putnam's ideas. This work strives for some progress on both of these counts. Part One explicates what Putnam understands by "metaphysical realism" and considers to what extent Putnam himself formerly adhered to it. It reconstructs Putnam's arguments for the indeterminacy of reference and for the rejection of reference and of truth as correspondence, and it shows how such arguments hinge on considerations both in the theory of reference and in metaphysics. It suggests that commentators have often missed the actual structure of Putnam's argumentation, e.g. by simply identifying it with the so-called "model-theoretic" argument. Finally, Part One examines Putnam's "internal realism", stressing its ties to such authors as Kant, Goodman and Dummett, and explaining in what senses it is really a strong kind of anti-realism. Basically, Putnam does not deny that a mind-independent world exists, but he denies that we may refer to it, and claims that the world we know is thoroughly mind-dependent. Part Two criticizes Putnam's arguments for indeterminacy, along with some similar indeterminacy arguments such as Goodman's argument on confirmation and the Kripke-Wittgenstein argument on rules. This is done by vindicating the notion of objective similarity, and by relying on it to fix reference. Putnam's claim that we have no theory showing how reference could possibly be determinate is countered by sketching a possible account of reference, one distinctly owing to functionalism, which might answer such a question. Putnam's notorious "brains in a vat" argument is also discussed and criticized. Putnam's metaphysical picture, by which the mind-independent world is not sorted out into objects or properties, is granted. But it is argued that we may nonetheless refer to the mind-independent world, and have beliefs which are true of it.
This book offers the most complete and up-to-date overview of the philosophical work of Evandro Agazzi, presently the most important Italian philosopher of science, and one of the most influential in the world. Scholars from seven countries explore his contributions in areas ranging from philosophy of physics and general philosophy of science to bioethics, philosophy of mathematics and logic, epistemology of the social sciences and history of science, philosophy of language and artificial intelligence, education and anthropology, metaphysics, and philosophy of religion. Agazzi developed a complete and coherent philosophical system, anticipating some of the turns in the philosophy of science after the crisis of logical empiricism and exerting an equal influence on continental hermeneutic philosophy. His work is characterized by an original synthesis of contemporary analytic philosophy, phenomenology, and classical philosophy, including the scholastic tradition, and these threads are reflected in the different backgrounds of the contributors to this book. While upholding the epistemological value of science against scepticism and relativism, Agazzi eschews scientism by stressing the equal importance of non-scientific forms of thought, such as metaphysics and religion. While defending the freedom of research as a cognitive enterprise, he argues that as a human and social practice it must nonetheless respect ethical constraints.
A discussion of Wolfgang Stegmüller's ideas on the structuralist conception of theories, especially as presented in his book The Structure and Dynamics of Theories (Springer, 1976).
Evandro Agazzi’s volume Scientific Objectivity and its Contexts is introduced here. First, the genesis and content of the book are outlined. Second, an overview of Agazzi’s philosophy of science is provided. Its main roots are epistemological realism in the Aristotelian/scholastic tradition and contemporary science-oriented epistemology, especially Logical Empiricism. As a result, Agazzi’s thought is nicely balanced between empiricism and rationalism; it avoids gnoseologistic dualism by stressing the intentionality of knowledge, and it insists on the operational and referential character of science. Finally, an account is given of Agazzi’s view of the origin and nature of scientific objects, which makes it possible to understand how his sophisticated and “perspectival” realism differs both from naïve realism and from constructivism.
Strong predictivism, the idea that novel predictions per se confirm theories more than accommodations do, is based on a “no miracle” argument from novel predictions to the truth of theories (NMAT). Eric Barnes rejects both: he reconstructs the NMAT as seeking an explanation for the entailment relation between a theory and its novel consequences, and argues that it involves a fallacious application of Occam’s razor. However, he accepts a no miracle argument for the truth of background beliefs (NMABB): scientists endorsed a successful theory because they were guided by largely true background beliefs. This in turn raises the probability that the theory is true; so Barnes embraces a form of weak predictivism, according to which predictions are only indirectly relevant to confirmation. To Barnes I reply that we should also explain how the successful theory was constructed, not just endorsed; that background beliefs are not enough to explain success, since scientific method must also be considered; that Barnes can account for some measure of confirmation of our theories, but not for the practical certainty conferred on them by some astonishing predictions; and that true background beliefs and reliability by themselves cannot explain novel success: the truth of theories is also required. Hence, the NMAT is sound, and strong predictivism is right. In fact, Barnes misinterprets the NMAT, which does not involve Occam’s razor, takes as its explanandum the building of a theory which turned out to predict surprising facts, and successfully concludes that the theory is true. This accounts for the practically certain confirmation of our most successful theories, in accordance with strong predictivism.