Sorensen here offers a unified solution to a large family of philosophical puzzles and paradoxes through a study of "blindspots": consistent propositions that cannot be rationally accepted by certain individuals even though they might be true.
Sorensen presents a general theory of thought experiments: what they are, how they work, and what their virtues and vices are. On Sorensen's view, philosophy differs from science in degree, but not in kind. For this reason, he claims, it is possible to understand philosophical thought experiments by concentrating on their resemblance to their scientific relatives. Lessons learned about scientific experimentation carry over to thought experiment, and vice versa. Sorensen also assesses the hazards and pseudo-hazards of thought experiments. Although he grants that there are interesting ways in which the method leads us astray, he attacks most scepticism about thought experiments as arbitrary. They should be used, he says, as they generally are used: as part of a diversified portfolio of techniques. All of these devices are individually susceptible to abuse, fallacy, and error. Collectively, however, they provide a network of cross-checks that makes for impressive reliability.
Roy Sorensen offers a unique exploration of an ancient problem: vagueness. Did Buddha become a fat man in one second? Is there a tallest short giraffe? According to Sorensen's epistemicist approach, the answers are yes! Although vagueness abounds in the way the world is divided, Sorensen argues that the divisions are sharp; yet we often do not know where they are. Written in Sorensen's usual inventive and amusing style, this book offers original insight on language and logic, the way the world is, and our understanding of it.
In this book, Sorensen presents the first general theory of the thought experiment. He analyses a wide variety of thought experiments, ranging from aesthetics to zoology, and explores what thought experiments are, how they work, and what their positive and negative aspects are. Sorensen also sets his theory within an evolutionary framework and integrates recent advances in experimental psychology and the history of science.
The aim of this paper is to show how thought experiments help us learn about laws. After providing examples of this kind of nomic illumination in the first section, I canvass explanations of our modal knowledge and opt for an evolutionary account. The basic application is that the laws of nature have led us to develop rough and ready intuitions of physical possibility which are then exploited by thought experimenters to reveal some of the very laws responsible for those intuitions. The good news is that natural selection ensures a degree of reliability for the intuitions. The bad news is that the evolutionary account seems to limit the range of reliable thought experiments to highly practical and concrete contexts. In the fifth section, I provide reasons for thinking that we are not as slavishly limited as a pessimistic construal of natural selection suggests. Nevertheless, I promote the idea that biology is a promising source of predictions and diagnoses of thought experiment failures.
This is a defense and extension of Stephen Yablo's claim that self-reference is completely inessential to the liar paradox. An infinite sequence of sentences of the form 'None of these subsequent sentences are true' generates the same instability in assigning truth values. I argue that Yablo's technique of substituting infinity for self-reference applies to all so-called 'self-referential' paradoxes. A representative sample is provided which includes counterparts of the preface paradox, Pseudo-Scotus's validity paradox, the Knower, and other enigmas of the genre. I rebut objections that Yablo's paradox is not a genuine liar by constructing a sequence of liars that blend into Yablo's paradox. I rebut objections that Yablo's liar has hidden self-reference with a distinction between attributive and referential self-reference and appeals to Gregory Chaitin's algorithmic information theory. The paper concludes with comments on the mystique of self-reference.
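To make the instability explicit (a standard reconstruction, not a quotation from the paper), Yablo's sequence can be written as

    $S_n$: $\forall k > n,\ \neg T(S_k)$   for $n = 1, 2, 3, \ldots$

Suppose some $S_n$ is true. Then every later sentence is untrue; in particular $S_{n+1}$ is untrue, so some $S_m$ with $m > n+1$ is true, contradicting the untruth of everything after $S_n$. Hence no $S_n$ is true. But then everything after $S_n$ is untrue, which is exactly what $S_n$ says, so $S_n$ is true after all. No assignment of truth values is stable, yet no sentence refers to itself.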
The argument proceeds by exploiting the gradually decreasing vagueness of a certain sequence of predicates. The vagueness of 'vague' is then used to show that the thesis that all vague predicates are incoherent is self-defeating. A second casualty is the view that the problems of vagueness can be avoided by restricting the scope of logic to nonvague predicates.
Sorensen, Roy A. (1984). Conditional blindspots and the knowledge squeeze: A solution to the prediction paradox. Australasian Journal of Philosophy 62 (2): 126-135.
An entertaining history of the idea of nothing, including absences, omissions, and shadows, from the ancient Greeks through the 20th century. How can nothing cause something? The absence of something might seem to indicate a null or a void, an emptiness as ineffectual as a shadow. In fact, 'nothing' is one of the most powerful ideas the human mind has ever conceived. This short and entertaining book by Roy Sorensen is a lively tour of the history and philosophy of nothing, explaining how various thinkers throughout history have conceived and grappled with the mysterious power of absence, and how these ideas about shadows, gaps, and holes have in turn played a very positive role in the development of some of humankind's most important ideas. Filled with Sorensen's characteristically entertaining mix of anecdotes, puzzles, curiosities, and philosophical speculation, the book is ordered chronologically, starting with the Taoists, the Buddhists, and the ancient Greeks, moving forward to the Middle Ages and the early modern period, then up to the existentialists and present-day philosophy. The result is a diverting tour through the history of human thought as seen from a novel and unusual perspective.
Stereotypically, computation involves intrinsic changes to the medium of representation: writing new symbols, erasing old symbols, turning gears, flipping switches, sliding abacus beads. Perspectival computation leaves the original inscriptions untouched. The problem solver obtains the output by merely altering his orientation toward the input. There is no rewriting or copying of the input inscriptions; the output inscriptions are numerically identical to the input inscriptions. This suggests a loophole through some of the computational limits apparently imposed by physics. There can be symbol manipulation without inscription manipulation because symbols are complex objects that have manipulatable elements besides their inscriptions. Since a written symbol is an ordered pair consisting of a shape and the reader's orientation to that inscription, the symbol can be changed by changing the orientation rather than the inscription. Although there are the usual physical limits associated with reading the answer, the computation is itself instantaneous. This is true even when the sub-calculations are algorithmically complex, exponentially increasing, or even infinite.
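A toy sketch may make the proposal concrete (the data structure and the reversal example are my illustration, not Sorensen's): model a symbol as a fixed inscription paired with the reader's orientation, and perform a computation, here string reversal, by changing only the orientation.

    # Toy model of perspectival computation (illustrative sketch, not Sorensen's example).
    # A "tape" is a fixed tuple of inscriptions plus an orientation that belongs
    # to the reader, not to the tape.
    from dataclasses import dataclass
    from typing import Tuple

    @dataclass
    class Tape:
        inscriptions: Tuple[str, ...]        # never rewritten or copied
        orientation: str = "left_to_right"   # the reader's perspective

        def reverse(self) -> None:
            # The "computation": flip the orientation, touch no inscription.
            self.orientation = ("right_to_left" if self.orientation == "left_to_right"
                                else "left_to_right")

        def read(self) -> str:
            # Reading the answer is where the ordinary physical costs live.
            seq = self.inscriptions
            return "".join(seq if self.orientation == "left_to_right" else reversed(seq))

    tape = Tape(tuple("PERSPECTIVE"))
    tape.reverse()        # no symbols rewritten; the inscriptions are untouched
    print(tape.read())    # EVITCEPSREP, read off the very same inscriptions

The point of the sketch is only that the output is obtained from numerically the same inscriptions; whatever cost remains lies in reading the answer, not in rewriting it.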
Stepping into the other guy's shoes works best when you resemble him. After all, the procedure is to use yourself as a model: in go hypothetical beliefs and desires, out come hypothetical actions and revised beliefs and desires. If you are structurally analogous to the empathee, then accurate inputs generate accurate outputs, just as with any other simulation. The greater the degree of isomorphism, the more dependable and precise the results. This sensitivity to degrees of resemblance suggests that the method of empathy works best for average people. The advantage of being a small but representative sample of the population will create a bootstrap effect. For as average people prosper, there will be more average descendants, and so the degree of resemblance in subsequent generations will snowball. Each increment in like-mindedness further enhances the reliability and validity of mental simulation. With each circuit along the spiral, there is tighter and tighter bunching and hence further empowerment of empathy. The method is self-strengthening and eventually molds a population of hyper-similar individuals, which partly solves the problem of other minds.
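A minimal sketch of the simulation procedure as described (the decision rule and the inputs are hypothetical placeholders, chosen only to illustrate why resemblance matters):

    # Empathy as simulation: predict another's action by running your own
    # decision procedure on their hypothesized beliefs and desires.
    def my_decision_procedure(beliefs, desires):
        # Hypothetical stand-in for how I actually decide among options.
        return max(desires, key=lambda option: beliefs.get(option, 0.0) * desires[option])

    def empathize(their_beliefs, their_desires):
        # In go hypothetical beliefs and desires; out comes a predicted action.
        return my_decision_procedure(their_beliefs, their_desires)

The prediction is accurate only to the extent that the empathee's own decision procedure is structurally analogous to the one I run on myself, which is the sensitivity to resemblance the passage turns on.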
My thesis is that ‘rational’ is an absolute concept like ‘flat’ and ‘clean’. Absolute concepts are best defined as absences. In the case of flatness, the absence of bumps, curves, and irregularities. In the case of cleanliness, the absence of dirt. Rationality, then, is the absence of irrationalities such as bias, circularity, dogmatism, and inconsistency.
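The definitional pattern can be displayed schematically (a sketch of the form, not Sorensen's own notation): $\mathrm{Flat}(x) \leftrightarrow \neg\exists y\,(\mathrm{Bump}(y) \wedge \mathrm{On}(y,x))$; $\mathrm{Clean}(x) \leftrightarrow \neg\exists y\,(\mathrm{Dirt}(y) \wedge \mathrm{On}(y,x))$; $\mathrm{Rational}(a) \leftrightarrow \neg\exists y\,(\mathrm{Irrationality}(y) \wedge \mathrm{Has}(a,y))$, where the irrationalities include bias, circularity, dogmatism, and inconsistency.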
Peter Slezak and William Boos have independently advanced a novel interpretation of Descartes's "cogito". The interpretation portrays the "cogito" as a diagonal deduction and emphasizes its resemblance to Gödel's theorem and the Liar. I object that this approach is flawed by the fact that it assigns 'Buridan sentences' a legitimate role in Descartes's philosophy. The paradoxical nature of these sentences would have the peculiar result of undermining Descartes's "cogito" while enabling him to "disprove" God's existence.
Vagueness theorists tend to think that evolutionary theory dissolves the riddle "Which came first, the chicken or the egg?". After all, 'chicken' is vague. The idea is that Charles Darwin demonstrated that the chicken was preceded by borderline chickens and so it is simply indeterminate as to where the pre-chickens end and the chickens begin.
This paper is devoted to a solution to Moore's problem. After explaining what Moore's problem is and after considering the main approaches toward solving the problem, I provide a definition of Moorean sentences in terms of pure Moorean propositions. My solution to Moore's problem essentially involves a description of how one can contradict oneself without uttering a contradiction, and a set of definitions that exactly determines which sentences are Moorean and which are close relatives of Moorean sentences.
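For orientation, the standard doxastic-logic gloss on the problem (offered here as background, not as the paper's own definitions): the Moorean content $p \wedge \neg Bp$ is consistent, so the speaker utters no contradiction. But sincere assertion commits the speaker to believing what is asserted, $B(p \wedge \neg Bp)$, which distributes to $Bp \wedge B\neg Bp$. Given positive introspection ($Bp \rightarrow BBp$), the speaker believes $Bp$ and also believes $\neg Bp$, so the belief state is inconsistent even though the asserted proposition is not. This is one way of contradicting oneself without uttering a contradiction.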
The purpose of this article is to show how Hume's scepticism about miracles generates "epistemological" scepticism about time travel. So the primary question raised here is "Can one know that time travel has occurred?" rather than "Can time travel occur?" I argue that attempts to show the existence of time travel would face the same methodological problems as the ones confronting attempts to demonstrate the existence of paranormal events. Since Humean scepticism extends to the study of paranormal events (parapsychology), Humeans are committed to scepticism about the study of time travel ("parahistory").