Some scientific categories seem to correspond to genuine features of the world and are indispensable for successful science in some domain; in short, they are natural kinds. This book gives a general account of what it is to be a natural kind and puts the account to work illuminating numerous specific examples.
The no-miracles argument and the pessimistic induction are arguably the main considerations for and against scientific realism. Recently these arguments have been accused of embodying a familiar, seductive fallacy: in each case, we are tricked by a base rate fallacy, one much discussed in the psychological literature. In this paper we consider this accusation and use it to explain why the two most prominent 'wholesale' arguments in the literature seem irresolvable. Framed probabilistically, we can see very clearly why realists and anti-realists have been talking past one another. We then formulate a dilemma for advocates of either argument, answer potential objections to our criticism, discuss what remains (if anything) of these two major arguments, and then speculate about a future philosophy of science freed from them. In so doing, we connect the point about base rates to the wholesale/retail distinction; we believe it hints at an answer to the question of how to distinguish profitable from unprofitable realism debates. In short, we offer a probabilistic analysis of the feeling of ennui afflicting contemporary philosophy of science.
There is considerable disagreement about the epistemic value of novel predictive success, i.e. when a scientist predicts an unexpected phenomenon, experiments are conducted, and the prediction proves to be accurate. We survey the field on this question, noting both fully articulated views, such as weak and strong predictivism, and more nascent views, such as pluralist reasons for the instrumental value of prediction. By examining the various reasons offered for the value of prediction across a range of inferential contexts, we can see that neither weak nor strong predictivism captures all of the available reasons for valuing prediction. A third path is presented: Pluralist Instrumental Predictivism, PIP for short.
The Homeostatic Property Cluster (HPC) account of natural kinds has become popular since it was proposed by Richard Boyd in the late 1980s. Although it is often taken as defining natural kinds as such, it is easy enough to see that something's being a natural kind is neither necessary nor sufficient for its being an HPC. This paper argues that it is better not to understand HPCs as defining what it is to be a natural kind but instead as providing the ontological realization of (some) natural kinds.
Some philosophers understand natural kinds to be the categories which are constraints on enquiry. In order to elaborate the metaphysics appropriate to such an account, I consider the complicated history of scurvy, citrus, and vitamin C. It may be tempting to understand these categories in a shallow way (as mere property clusters) or in a deep way (as fundamental properties). Neither approach is adequate, and the case instead calls for middle-range ontology: starting from categories which we identify in the world and elaborating their structure, but not pretending to jump ahead to a complete story about fundamental being.
When we ask what natural kinds are, there are two different things we might have in mind. The first, which I’ll call the taxonomy question, is what distinguishes a category which is a natural kind from an arbitrary class. The second, which I’ll call the ontology question, is what manner of stuff there is that realizes the category. Many philosophers have systematically conflated the two questions. The confusion is exhibited both by essentialists and by philosophers who pose their accounts in terms of similarity. It also leads to misreading philosophers who do make the distinction. Distinguishing the questions allows for a more subtle understanding of both natural kinds and their underlying metaphysics.
Kyle Stanford has recently claimed to offer a new challenge to scientific realism. Taking his inspiration from the familiar Pessimistic Induction (PI), Stanford proposes a New Induction (NI). Contra Anjan Chakravartty’s suggestion that the NI is a ‘red herring’, I argue that it reveals something deep and important about science. The Problem of Unconceived Alternatives, which lies at the heart of the NI, yields a richer anti-realism than the PI. It explains why science falls short when it falls short, and so it might figure in the most coherent account of scientific practice. However, this best account will be antirealist in some respects and about some theories. It will not be a sweeping antirealism about all or most of science.
The accepted narrative treats John Stuart Mill’s Kinds as the historical prototype for our natural kinds, but Mill actually employs two separate notions: Kinds and natural groups. Considering these, along with the accounts of Mill’s nineteenth-century interlocutors, forces us to recognize two distinct questions. First, what marks a natural kind as worthy of inclusion in taxonomy? Second, what exists in the world that makes a category meet that criterion? Mill’s two notions offer separate answers to the two questions: natural groups for taxonomy and Kinds for ontology. This distinction is ignored in many contemporary debates about natural kinds and is obscured by the standard narrative that treats our natural kinds just as a development of Mill’s Kinds.
William James’ argument against William Clifford in The Will to Believe is often understood in terms of doxastic efficacy, the power of belief to influence an outcome. Although that is one strand of James’ argument, there is another which is driven by ampliative risk. The second strand of James’ argument, when applied to scientific cases, is tantamount to what is now called the Argument from Inductive Risk. Either strand of James’ argument is sufficient to rebut Clifford's strong evidentialism and show that it is sometimes permissible to believe in the absence of compelling evidence. However, the two considerations have different scope and force. Doxastic efficacy applies in only some cases but allows any values to play a role in determining belief; risk applies in all cases but only allows particular conditional values to play a role.
Homeostatic property clusters (HPCs) are offered as a way of understanding natural kinds, especially biological species. I review the HPC approach and then discuss an objection by Ereshefsky and Matthen, to the effect that an HPC qua cluster seems ill-fitted as a description of a polymorphic species. The standard response by champions of the HPC approach is to say that all members of a polymorphic species have things in common, namely dispositions or conditional properties. I argue that this response fails. Instances of an HPC kind need not all be similar in their exhibited properties. Instead, HPCs should be understood as unified by the underlying causal mechanism that maintains them. The causal mechanism can both produce and explain some systematic differences between a kind’s members. An HPC kind is best understood not as a single cluster of properties maintained in stasis by causal forces, but as a complex of related property clusters kept in relation by an underlying causal process. This approach requires recognizing that taxonomic systems serve both explanatory and inductive purposes.
The Argument from Inductive Risk (AIR) is taken to show that values are inevitably involved in making judgements or forming beliefs. After reviewing this conclusion, I pose cases which are prima facie counterexamples: the unreflective application of conventions, use of black-boxed instruments, reliance on opaque algorithms, and unskilled observation reports. These cases are counterexamples to the AIR posed in ethical terms as a matter of personal values. Nevertheless, it need not be understood in those terms. The values which load a theory choice may be those of institutions or past actors. This means that the challenge of responsibly handling inductive risk is not merely an ethical issue, but is also social, political, and historical.
There is a long tradition of trying to analyze art either by providing a definition (essentialism) or by tracing its contours as an indefinable, open concept (anti-essentialism). Both art essentialists and art anti-essentialists share an implicit assumption of art concept monism. This article argues that this assumption is a mistake. Species concept pluralism—a well-explored position in philosophy of biology—provides a model for art concept pluralism. The article explores the conditions under which concept pluralism is appropriate, and argues that they obtain for art. Art concept pluralism allows us to recognize that different art concepts are useful for different purposes, and what have been feuding definitions can be seen as characterizations of specific art concepts.
There are two senses of ‘what scientists know’: An individual sense (the separate opinions of individual scientists) and a collective sense (the state of the discipline). The latter is what matters for policy and planning, but it is not something that can be directly observed or reported. A function can be defined to map individual judgments onto an aggregate judgment. I argue that such a function cannot effectively capture community opinion, especially in cases that matter to us.
Cover songs are a familiar feature of contemporary popular music. Musicians describe their own performances as covers, and audiences use the category to organize their listening and appreciation. However, until now philosophers have not had much to say about them. This book explores how to think about covers, how to appreciate them, and the metaphysics of covers and songs. Along the way, it takes up a range of issues raised by covers, from the question of what precisely constitutes a cover to the history and taxonomy of the category, the various relationships that hold between songs, performances, and tracks, and the appreciation and evaluation of covers.
The problem of underdetermination is thought to hold important lessons for philosophy of science. Yet, as Kyle Stanford has recently argued, typical treatments of it offer only restatements of familiar philosophical problems. Following suggestions in Duhem and Sklar, Stanford calls for a New Induction from the history of science. It will provide proof, he thinks, of “the kind of underdetermination that the history of science reveals to be a distinctive and genuine threat to even our best scientific theories” (Stanford 2001, p. S12). This paper examines Stanford’s New Induction and argues that it – like the other forms of underdetermination that he criticizes – merely recapitulates familiar philosophical conundra.
Given the fact that many people use Wikipedia, we should ask: Can we trust it? The empirical evidence suggests that Wikipedia articles are sometimes quite good but that they vary a great deal. As such, it is wrong to ask for a monolithic verdict on Wikipedia. Interacting with Wikipedia involves assessing where it is likely to be reliable and where not. I identify five strategies that we use to assess claims from other sources and argue that, to a greater or lesser degree, Wikipedia frustrates all of them. Interacting responsibly with something like Wikipedia requires new epistemic methods and strategies.
It is now commonly held that values play a role in scientific judgment, but many arguments for that conclusion are limited. First, many arguments do not show that values are, strictly speaking, indispensable. The role of values could in principle be filled by a random or arbitrary decision. Second, many arguments concern scientific theories and concepts which have obvious practical consequences, thus suggesting or at least leaving open the possibility that abstruse sciences without such a connection could be value-free. Third, many arguments concern the role values play in inferring from evidence, thus taking evidence as given. This paper argues that these limitations do not hold in general. There are values involved in every scientific judgment. They cannot even conceivably be replaced by a coin toss, they arise as much for exotic as for practical sciences, and they are at issue as much for observation as for explicit inference.
According to the standard narrative, natural kind is a technical notion that was introduced by John Stuart Mill in the 1840s, and the recent craze for natural kinds, launched by Putnam and Kripke, is a continuation of that tradition. I argue that the standard narrative is mistaken. The Millian tradition of kinds was not particularly influential in the 20th century, and the Putnam-Kripke revolution did not clearly engage with even the remnants that were left of it. The presently active tradition of natural kinds is less than half a century old. Recognizing this might help us better appreciate both Mill and natural kinds.
The underdetermination of theory by evidence is supposed to be a reason to rethink science. It is not. Many authors claim that underdetermination has momentous consequences for the status of scientific claims, but such claims are hidden in an umbra of obscurity and a penumbra of equivocation. So many various phenomena pass for 'underdetermination' that it's tempting to think that it is no unified phenomenon at all, so I begin by providing a framework within which all these worries can be seen as species of one genus: A claim of underdetermination involves (at least implicitly) a set of rival theories, a standard of responsible judgment, and a scope of circumstances in which responsible choice between the rivals is impossible. Within this framework, I show that one variety of underdetermination motivated modern scepticism and thus is a familiar problem at the heart of epistemology. I survey arguments that infer from underdetermination to some reëvaluation of science: top-down arguments infer a priori from the ubiquity of underdetermination to some conclusion about science; bottom-up arguments infer from specific instances of underdetermination, to the claim that underdetermination is widespread, and then to some conclusion about science. The top-down arguments either fail to deliver underdetermination of any great significance or (as with modern scepticism) deliver some well-worn epistemic concern. The bottom-up arguments must rely on cases. I consider several promising cases and find that they are either so specialized that they cannot underwrite conclusions about science in general or not underdetermined at all. Neither top-down nor bottom-up arguments can motivate any deep reconsideration of science.
This paper gives a characterization of distributed cognition (d-cog) and explores ways that the framework might be applied in studies of science. I argue that a system can only be given a d-cog description if it is thought of as performing a task. Turning our attention to science, we can try to give a global d-cog account of science or local d-cog accounts of particular scientific projects. Several accounts of science can be seen as global d-cog accounts: Robert Merton's sociology of scientific norms, Philip Kitcher's 20th-century account of cognitive labor, and Kitcher's 21st-century notion of well-ordered science. Problems that arise for them arise just because of the way that they attribute a function to science. The paper concludes by considering local d-cog accounts. Here, too, the task is the crux of the matter.
It has been over two decades since Miranda Fricker named epistemic injustice, in which an agent is wronged in their capacity as a knower. The philosophical literature has since proliferated with variants and related concepts. By considering cases in popular music, we argue that it is worth distinguishing a parallel phenomenon of art-interpretive injustice, in which an agent is wronged in their creative capacity as a possible artist. In section 1, we consider the prosecutorial use of rap lyrics in court as a central case of this injustice. In section 2, we distinguish art-interpretive injustice from other categories already discussed in recent literature. In section 3, we discuss the relationship between genre discourse and identity prejudice. The case for recognizing the category of art-interpretive injustice is that it allows one to recognize a class of harms as being importantly related in ways that one would otherwise overlook.
According to many philosophers, psychological explanation can legitimately be given in terms of belief and desire, but not in terms of knowledge. To explain why someone does what they do (so the common wisdom holds) you can appeal to what they think or what they want, but not to what they know. Timothy Williamson has recently argued against this view. Knowledge, Williamson insists, plays an essential role in ordinary psychological explanation. Williamson's argument works on two fronts. First, he argues against the claim that, unlike knowledge, belief is "composite" (representable as a conjunction of a narrow and a broad condition). Belief's failure to be composite, Williamson thinks, undermines the usual motivations for psychological explanation in terms of belief rather than knowledge. Unfortunately, we claim, the motivations Williamson argues against do not depend on the claim that belief is composite, so what he says leaves the case for a psychology of belief unscathed. Second, Williamson argues that knowledge can sometimes provide a better explanation of action than belief can. We argue that, in the cases considered, explanations that cite beliefs (but not knowledge) are no less successful than explanations that cite knowledge. Thus, we conclude that Williamson's arguments fail both coming and going: they fail to undermine a psychology of belief, and they fail to motivate a psychology of knowledge.
Cover versions form a loose but identifiable category of tracks and performances. We distinguish four kinds of covers and argue that they mark important differences in the modes of evaluation that are possible or appropriate for each: mimic covers, which aim merely to echo the canonical track; rendition covers, which change the sound of the canonical track; transformative covers, which diverge so much as to instantiate a distinct, albeit derivative song; and referential covers, which not only instantiate a distinct song, but for which the new song is in part about the original song. In order to allow for the very possibility of transformative and referential covers, we argue that a cover is characterized by relation to a canonical track rather than merely by being a new instance of a song that had been recorded previously.
This paper offers a general characterization of underdetermination and gives a prima facie case for the underdetermination of the topology of the universe. A survey of several philosophical approaches to the problem fails to resolve the issue: the case involves the possibility of massive reduplication, but Strawson on massive reduplication provides no help here; it is not obvious that any of the rival theories are to be preferred on grounds of simplicity; and the usual talk of empirically equivalent theories misses the point entirely. (If the choice is underdetermined, then the theories are not empirically equivalent!) Yet the thought experiment is analogous to a live scientific possibility, and actual astronomy faces underdetermination of this kind. This paper concludes by suggesting how the matter can be resolved, either by localizing the underdetermination or by defeating it entirely. Outline: Introduction; A brief preliminary; Around the universe in 80 days; Some attempts at resolving the problem (4.1 Indexicality, 4.2 Simplicity, 4.3 Empirical equivalence, 4.4 Is this just a philosophers' fantasy?); Move along...; ...nothing to see here (6.1 Rules of repetition, 6.2 Some possible replies); Conclusion.
Philip Kitcher develops the Galilean Strategy to defend realism against its many opponents. I explore the structure of the Galilean Strategy and consider it specifically as an instrument against constructive empiricism. Kitcher claims that the Galilean Strategy underwrites an inference from success to truth. We should resist that conclusion, I argue, but the Galilean Strategy should lead us by other routes to believe in many things about which the empiricist would rather remain agnostic. Outline: 1 Target: empiricism; 2 The Galilean Strategy; 3 Strengthening the argument; 4 Success and truth; 5 Conclusion.
The underdetermination of theory by data obtains when, inescapably, evidence is insufficient to allow scientists to decide responsibly between rival theories. One response to would-be underdetermination is to deny that the rival theories are distinct theories at all, insisting instead that they are just different formulations of the same underlying theory; we call this the identical rivals response. An argument adapted from John Norton suggests that the response is presumptively always appropriate, while another from Larry Laudan and Jarrett Leplin suggests that the response is never appropriate. Arguments from Einstein for the special and general theories of relativity may fruitfully be seen as instances of the identical rivals response; since Einstein’s arguments are generally accepted, the response is at least sometimes appropriate. But when is it appropriate? We attempt to steer a middle course between Norton’s view and that of Laudan and Leplin: the identical rivals response is appropriate when there is good reason for adopting a parsimonious ontology. Although in simple cases the identical rivals response need not involve any ontological difference between the theories, in actual scientific cases it typically requires treating apparent posits of the various theories as mere verbal ornaments or computational conveniences. Since these would-be posits are not now detectable, there is no perfectly reliable way to decide whether we should eliminate them or not. As such, there is no rule for deciding whether the identical rivals response is appropriate or not. Nevertheless, there are considerations that suggest for and against the response; we conclude by suggesting two of them.
If two theory formulations are merely different expressions of the same theory, then any problem of choosing between them cannot be due to the underdetermination of theories by data. So one might suspect that we need to be able to tell distinct theories from mere alternate formulations before we can say anything substantive about underdetermination, that we need to solve the problem of identical rivals before addressing the problem of underdetermination. Here I consider two possible solutions: Quine proposes that we call two theories identical if they are equivalent under a reconstrual of predicates, but this would mishandle important cases. Another proposal is to defer to the particular judgements of actual scientists. Consideration of an historical episode, the alleged equivalence of wave and matrix mechanics, shows that this second proposal also fails. Nevertheless, I suggest, the original suspicion is wrong; there are ways to enquire into underdetermination without having solved the problem of identical rivals.
Nelson Goodman's distinction between autographic and allographic arts is appealing, we suggest, because it promises to resolve several prima facie puzzles. We consider and rebut a recent argument that alleges that digital images explode the autographic/allographic distinction. Regardless, there is another familiar problem with the distinction, especially as Goodman formulates it: it seems to entirely ignore an important sense in which all artworks are historical. We note in reply that some artworks can be considered both as historical products and as formal structures. Talk about such works is ambiguous between the two conceptions. This allows us to recover Goodman's distinction: art forms that are ambiguous in this way are allographic. With that formulation settled, we argue that digital images are allographic. We conclude by considering the objection that digital photographs, unlike other digital images, would count as autographic by our criterion; we reply that this points to the vexed nature of photography rather than any problem with the distinction.
It seems obvious that a community of one thousand scientists working together to make discoveries and solve puzzles should arrange itself differently than would one thousand scientist-hermits working independently. Because of limited time, resources, and attention, an independent scientist can explore only some of the possible approaches to a problem. Working alone, each hermit would explore the most promising approaches. They would needlessly duplicate the work of others and would be unlikely to develop approaches which look unpromising but really have tremendous potential. Contrariwise, a large community can more rigorously explore the space of possible approaches. Most scientists should work on the most promising approaches, but a smaller number can be committed to approaches that initially look less promising. Exploratory work can reveal if one of those initially unpromising approaches has unrealized potential, and more scientists can adopt it once its potential becomes more apparent.
In this paper, I explore and defend the idea that musical works are historical individuals. Guy Rohrbaugh (2003) proposes this for works of art in general. Julian Dodd (2007) objects that the whole idea is outré metaphysics, that it is too far beyond the pale to be taken seriously. Their disagreement could be seen as a skirmish in the broader war between revisionists and reactionaries, a conflict about which of metaphysics and art should trump the other when there is a conflict. That dispute is a matter of philosophical methodology as much as it is a dispute about art. I argue that the ontology of works as individuals need not be dunked in that morass. My primary strategy is to show, contra Dodd's accusation, that historical individuals are familiar parts of the world. Although the ontological details are open to debate, it is the standard opinion of biologists that biological species are historical individuals. So there is no conflict here between fidelity to art and respectable metaphysics. What suits species will fit musical works as well.
Peter Baumann offers the tantalizing suggestion that Thomas Reid is almost, but not quite, a pragmatist. He motivates this claim by posing a dilemma for common sense philosophy: Will it be dogmatism or scepticism? Baumann claims that Reid points to but does not embrace a pragmatist third way between these unsavory options. If we understand 'pragmatism' differently than Baumann does, however, we need not be so equivocal in attributing it to Reid. Reid makes what we could call an argument from practical commitment, and this is plausibly an instance of what William James calls the pragmatic method.
There are two ways that we might respond to the underdetermination of theory by data. One response, which we can call the agnostic response, is to suspend judgment: "Where scientific standards cannot guide us, we should believe nothing". Another response, which we can call the fideist response, is to believe whatever we would like to believe: "If science cannot speak to the question, then we may believe anything without science ever contradicting us". C.S. Peirce recognized these options and suggested evading the dilemma. It is a Logical Maxim, he suggests, that there could be no genuine underdetermination. This is no longer a viable option in the wake of developments in modern physics, so we must face the dilemma head on. The agnostic and fideist responses to underdetermination represent fundamentally different epistemic viewpoints. Nevertheless, the choice between them is not an unresolvable struggle between incommensurable worldviews. There are legitimate considerations tugging in each direction. Given the balance of these considerations, there should be a modest presumption of agnosticism. This may conflict with Peirce's Logical Maxim, but it preserves all that we can preserve of the Peircean motivation.
Contents:
Introduction (P. D. Magnus and Jacob Busch)
1. Form-driven vs. Content-driven Arguments for Realism (Juha Saatsi)
2. Optimism about the Pessimistic Induction (Sherrilyn Roush)
3. Metaphysics between the Sciences and Philosophies of Science (Anjan Chakravartty)
4. Nominalism and Inductive Generalizations (Jessica Pfeifer)
5. Models and Scientific Representations (Otávio Bueno)
6. The Identical Rivals Response to Underdetermination (Gregory Frost-Arnold and P. D. Magnus)
7. Scientific Representation and the Semiotics of Pictures (Laura Perini)
8. Philosophy of the Environmental Sciences (Jay Odenbaugh)
9. Value Judgements and the Estimation of Uncertainty in Climate Modeling (Justin Biddle and Eric Winsberg)
10. Feminist Standpoint Empiricism: Rethinking the Terrain in Feminist Philosophy of Science (Kristen Intemann)
11. Naturalism and the Enlightenment Ideal: Rethinking a Central Debate in the Philosophy of Social Science (Daniel Steel)
12. New Approaches to the Division of Cognitive Labor (Michael Weisberg)
It has been common wisdom for centuries that scientific inference cannot be deductive; if it is inference at all, it must be a distinctive kind of inductive inference. According to demonstrative theories of induction, however, important scientific inferences are not inductive in the sense of requiring ampliative inference rules at all. Rather, they are deductive inferences with sufficiently strong premises. General considerations about inferences suffice to show that there is no difference in justification between an inference construed demonstratively or ampliatively. The inductive risk may be shouldered by premises or rules, but it cannot be shirked. Demonstrative theories of induction might, nevertheless, better describe scientific practice. And there may be good methodological reasons for constructing our inferences one way rather than the other. By exploring the limits of these possible advantages, I argue that scientific inference is neither of essence deductive nor of essence inductive.
In late 2014, the jazz combo Mostly Other People Do the Killing released Blue—an album that is a note-for-note remake of Miles Davis's 1959 landmark album Kind of Blue. This is a thought experiment made concrete, raising metaphysical puzzles familiar from discussion of indiscernible counterparts. It is an actual album, rather than merely a concept, and so poses the aesthetic puzzle of why one would ever actually listen to it.
Background theories in science are used both to prove and to disprove that theory choice is underdetermined by data. The alleged proof appeals to the fact that experiments to decide between theories typically require auxiliary assumptions from other theories. If this generates a kind of underdetermination, it shows that standards of scientific inference are fallible and must be appropriately contextualized. The alleged disproof appeals to the possibility of suitable background theories to show that no theory choice can be timelessly or noncontextually underdetermined: Foreground theories might be distinguished against different backgrounds. Philosophers have often replied to such a disproof by focussing their attention not on theories but on Total Sciences. If empirically equivalent Total Sciences were at stake, then there would be no background against which they could be differentiated. I offer several reasons to think that Total Science is a philosophers' fiction. No respectable underdetermination can be based on it.
Thomas Reid is often misread as defending common sense, if at all, only by relying on illicit premises about God or our natural faculties. On these theological or reliabilist misreadings, Reid makes common sense assertions where he cannot give arguments. This paper attempts to untangle Reid's defense of common sense by distinguishing four arguments: (a) the argument from madness, (b) the argument from natural faculties, (c) the argument from impotence, and (d) the argument from practical commitment. Of these, (a) and (c) do rely on problematic premises that are no more secure than claims of common sense itself. Yet (b) and (d) do not. This conclusion can be established directly by considering the arguments informally, but one might still worry that there is an implicit premise in them. In order to address this concern, I reconstruct the arguments in the framework of subjective Bayesianism. The worry becomes this: Do the arguments rely on specific values for the prior probability of some premises? Reid's appeals to our prior cognitive and practical commitments do not. Rather than relying on specific probability assignments, they draw on things that are part of the Bayesian framework itself, such as the nature of observation and the connection between belief and action. Contra the theological or reliabilist readings, the defense of common sense does not require indefensible premises.
This discussion note addresses Caleb Hazelwood’s ‘Practice-Centered Pluralism and a Disjunctive Theory of Art’. Hazelwood advances a disjunctive definition of art on the basis of an analogy with species concept pluralism in the philosophy of biology. We recognize the analogy between species and art, we applaud attention to practice, and we are bullish on pluralism—but it is a mistake to take these as the basis for a disjunctive definition.
Although some authors hold that natural kinds are necessarily relative to disciplinary domains, many authors presume that natural kinds must be absolute, categorical features of reality—often assuming as much without even mentioning the alternative. Recognizing both possibilities, one may ask whether the difference especially matters. I argue that it does. Looking at recent arguments about natural kind realism, I argue that we can best make sense of the realism question by thinking of natural kindness as a relation that holds between a category and a domain.
A recording or performance of a song is a cover if there is an earlier, canonical recording of the song. It can seem intuitive to think that properly appreciating a cover requires considering it in relation to the original, or at least that doing so will yield a deeper appreciation. This intuition is supported by some philosophical accounts of covers. And it is complicated by the possibility of hearing in, whereby one hears elements of the original version in the cover. We argue that it can nevertheless be just as legitimate to consider a cover version on its own as it is to consider it in relation to the earlier recording that it is covering. In some cases, these two modes of appreciation will offer distinct rewards. In other cases, one mode will be substantially more rewarding than the other. The details matter, especially in complicated cases like covers of covers, but neither mode is privileged in principle.
forall x: Calgary is a full-featured textbook on formal logic. It covers key notions of logic such as consequence and validity of arguments, the syntax of truth-functional propositional logic TFL and truth-table semantics, the syntax of first-order (predicate) logic FOL with identity (first-order interpretations), symbolizing English in TFL and FOL, and Fitch-style natural deduction proof systems for both TFL and FOL. It also deals with some advanced topics such as modal logic, soundness, and functional completeness. Exercises with solutions are available. It is provided in PDF (for screen reading, printing, and a special version for dyslexics), HTML, and in LaTeX source code.
Christy Mag Uidhir has recently argued (a) that there is no in principle aesthetic difference between a live performance and a recording of that performance, and (b) that the proper aesthetic object is a type which is instantiated by the performance and potentially repeatable when recordings are played back. This paper considers several objections to (a) and finds them lacking. I then consider improvised music, a subject that Mag Uidhir explicitly brackets in his discussion. Improvisation reveals problems with (b), because the performance-event and the performance-type are distinct but equally proper aesthetic objects.
One approach to science treats science as a cognitive accomplishment of individuals and defines a scientific community as an aggregate of individual inquirers. Another treats science as a fundamentally collective endeavor and defines a scientist as a member of a scientific community. Distributed cognition has been offered as a framework that could be used to reconcile these two approaches. Adam Toon has recently asked if the cognitive and the social can be friends at last. He answers that they probably cannot, posing objections to the would-be rapprochement. We clarify both the animosity and the tonic proposed to resolve it, ultimately arguing that worries raised by Toon and others are uncompelling.
The primary concern of our 2014 paper was not notation but the autographic/allographic distinction, not representations as such but works of art. As we see it, Zeimbekis's considerations do not ultimately undermine the position we advanced in 2014— but they do challenge an element of Goodman's own theory of notation that derives from his requirement of recoverability. That requirement can be abandoned without losing the explanatory power of the autographic/allographic distinction as we have refined it.
Some philosophers think that there is a gap between is and ought which necessarily makes normative enquiry a different kind of thing than empirical science. This position gains support from our ability to explicate our inferential practices in a way that makes it impermissible to move from descriptive premises to a normative conclusion. But we can also explicate them in a way that allows such moves. So there is no categorical answer as to whether there is or is not a gap. The question of an is-ought gap is a practical and strategic matter rather than a logical one, and it may properly be answered in different ways for different questions or at different times.
This paper argues against the common, often implicit view that theories are some specific kind of thing. Instead, I argue for theory concept pluralism: There are multiple distinct theory concepts which we legitimately use in different domains and for different purposes, and we should not expect this to change. The argument goes by analogy with species concept pluralism, a familiar position in philosophy of biology. I conclude by considering some consequences for philosophy of science if theory concept pluralism is correct.
Typical discussions of virtual reality (VR) fixate on technology for providing sensory stimulation of a certain kind. They thus fail to understand reality as the place wherein we live and work, misunderstanding it instead as merely a sort of presentation. The first half of the paper examines popular conceptions of VR. The most common conception is a shallow one according to which VR is a matter of simulating appearances. Yet there is, even in popular depictions, a second, more subtle conception according to which VR is a matter of facilitating new kinds of interaction. The latter half of the paper turns to questions about the contemporary technology of Internet chatrooms. The fact that chatrooms can be used in certain ways suggests something about the prospects for VR. The penultimate section asks whether chatrooms may legitimately be thought of as places. (In a sense, they may.) The final section asks whether cybersex may legitimately be thought of as sex. (Again, yes.) Chatroom technology thus provides an argument for the second conception of VR over its much ballyhooed rival.
Wikipedia is a free encyclopedia that is written and edited entirely by visitors to its website. I argue that we are misled when we think of it in the same epistemic category with traditional general encyclopedias. An empirical assessment of its reliability reveals that it varies widely from topic to topic. So any particular claim found in it cannot be relied on based on its source. I survey some methods that we use in assessing specific claims and argue that the structure of Wikipedia frustrates them.
A considerable literature has grown up around the claim of Uniqueness, according to which evidence rationally determines belief. It is opposed to Permissivism, according to which evidence underdetermines belief. This paper highlights an overlooked third possibility, according to which there is no rational doxastic attitude. I call this 'Nihilism'. I argue that adherents of the other two positions ought to reject it but that it might, nevertheless, obtain at least sometimes.
The Bare Theory was offered by David Albert as a way of standing by the completeness of quantum mechanics in the face of the measurement problem. This paper surveys objections to the Bare Theory that recur in the literature: what will here be called the oddity objection, the coherence objection, and the context-of-the-universe objection. Critics usually take the Bare Theory to have unacceptably bizarre consequences, but to be free from internal contradiction. Bizarre consequences need not be decisive against the Bare Theory, but a further objection—dubbed here the calibration objection—has been underestimated. This paper argues that the Bare Theory is not only odd but also inconsistent. We can imagine a successor to the Bare Theory—the Stripped Theory—which avoids the objections and fulfills the original promise of the Bare Theory, but at the cost of amplifying the bizarre consequences. The Stripped Theory is either a stunning development in our understanding of the world or a reductio disproving the completeness of quantum mechanics.