How should you take into account the opinions of an advisor? When you completely defer to the advisor's judgment, you should treat the advisor as a guru. Roughly, that means you should believe what you expect she would believe, if supplied with your extra evidence. When the advisor is your own future self, the resulting principle amounts to a version of the Reflection Principle---a version amended to handle cases of information loss. When you count an advisor as an epistemic peer, you should give her conclusions the same weight as your own. Denying that view---call it the "equal weight view"---leads to absurdity: the absurdity that you could reasonably come to believe yourself to be an epistemic superior to an advisor simply by noting cases of disagreement with her, and taking it that she made most of the mistakes. Accepting the view seems to lead to another absurdity: that one should suspend judgment about everything that one's smart and well-informed friends disagree on, which means suspending judgment about almost everything interesting. But despite appearances, the equal weight view does not have this absurd consequence. Furthermore, the view can be generalized to handle cases involving not just epistemic peers, but also epistemic superiors and inferiors.
It would be good to have a Bayesian decision theory that assesses our decisions and thinking according to everyday standards of rationality — standards that do not require logical omniscience (Garber 1983, Hacking 1967). To that end we develop a “fragmented” decision theory in which a single state of mind is represented by a family of credence functions, each associated with a distinct choice condition (Lewis 1982, Stalnaker 1984). The theory imposes a local coherence assumption guaranteeing that as an agent's attention shifts, successive batches of "obvious" logical information become available to her. A rule of expected utility maximization can then be applied to the decision of what to attend to next during a train of thought. On the resulting theory, rationality requires ordinary agents to be logically competent and to often engage in trains of thought that increase the unification of their states of mind. But rationality does not require ordinary agents to be logically omniscient.
In addition to being uncertain about what the world is like, one can also be uncertain about one’s own spatial or temporal location in the world. My aim is to pose a problem arising from the interaction between these two sorts of uncertainty, solve the problem, and draw two lessons from the solution.
Many have claimed that unspecific evidence sometimes demands unsharp, indeterminate, imprecise, vague, or interval-valued probabilities. Against this, a variant of the diachronic Dutch Book argument shows that perfectly rational agents always have perfectly sharp probabilities.
The “puzzle of the unmarked clock” derives from a conflict between the following: (1) a plausible principle of epistemic modesty, and (2) “Rational Reflection”, a principle saying how one’s beliefs about what it is rational to believe constrain the rest of one’s beliefs. An independently motivated improvement to Rational Reflection preserves its spirit while resolving the conflict.
When one encounters disagreement about the truth of a factual claim from a trusted advisor who has access to all of one's evidence, should that move one in the direction of the advisor's view? Conciliatory views on disagreement say "yes, at least a little." Such views are extremely natural, but they can give incoherent advice when the issue under dispute is disagreement itself. So conciliatory views stand refuted. But despite first appearances, this makes no trouble for *partly* conciliatory views: views that recommend giving ground in the face of disagreement about many matters, but not about disagreement itself.
(1) Suppose that you care only about speaking the truth, and are confident that some particular deterministic theory is true. If someone asks you whether that theory is true, are you rationally required to answer "yes"?

(2) Suppose that you face a problem in which (as in Newcomb's problem) one of your options---call it "taking two boxes"---causally dominates your only other option. Are you rationally required to take two boxes?

Those of us attracted to causal decision theory are under pressure to answer "yes" to both questions. However, it has been shown that many existing decision theories are inconsistent with doing so (Ahmed 2014). A simple proof shows that the same goes for an even wider class of theories: all "suppositional" decision theories. The moral is that causal decision theorists must either answer "no" to one of the above questions, or else abandon suppositional decision theories.
We pose and resolve several vexing decision theoretic puzzles. Some are variants of existing puzzles, such as 'Trumped' (Arntzenius and McCarthy 1997), 'Rouble trouble' (Arntzenius and Barrett 1999), 'The airtight Dutch book' (McGee 1999), and 'The two envelopes puzzle' (Broome 1995). Others are new. A unified resolution of the puzzles shows that Dutch book arguments have no force in infinite cases. It thereby provides evidence that reasonable utility functions may be unbounded and that reasonable credence functions need not be countably additive. The resolution also shows that when infinitely many decisions are involved, the difference between making the decisions simultaneously and making them sequentially can be the difference between riches and ruin. Finally, the resolution reveals a new way in which the ability to make binding commitments can save perfectly rational agents from sure losses.
In "Counterfactual Dependence and Time's Arrow", David Lewis defends an analysis of counterfactuals intended to yield the asymmetry of counterfactual dependence: that later affairs depend counterfactually on earlier ones, and not the other way around. I argue that careful attention to the dynamical properties of thermodynamically irreversible processes shows that in many ordinary cases, Lewis's analysis fails to yield this asymmetry. Furthermore, the analysis fails in an instructive way: it teaches us something about the connection between the asymmetry of overdetermination and the asymmetry of entropy.
Dr. Evil learns that a duplicate of Dr. Evil has been created. Upon learning this, how seriously should he take the hypothesis that he himself is that duplicate? I answer: very seriously. I defend a principle of indifference for self-locating belief which entails that after Dr. Evil learns that a duplicate has been created, he ought to have exactly the same degree of belief that he is Dr. Evil as that he is the duplicate. More generally, the principle shows that there is a sharp distinction between ordinary skeptical hypotheses, and self-locating skeptical hypotheses.
The 'best-system' analysis of lawhood (Lewis 1994) faces the 'zero-fit problem': that many systems of laws say that the chance of history going as it actually goes--the degree to which the theory 'fits' the actual course of history--is zero. Neither an appeal to infinitesimal probabilities nor a patch using standard measure theory avoids the difficulty. But there is a way to avoid it: replace the notion of 'fit' with the notion of a world being typical with respect to a theory.
Fred comes to realize that if his parents had settled in a more conservative neighborhood, he would have—on the basis of essentially the same evidence—arrived at political views quite different from his actual views. Furthermore, his parents chose between liberal and conservative neighborhoods by tossing a coin. (Sher 2001).
In order to predict and explain behavior, one cannot specify the mental state of an agent merely by saying what information she possesses. Instead one must specify what information is available to an agent relative to various purposes. Specifying mental states in this way allows us to accommodate cases of imperfect recall, cognitive accomplishments involved in logical deduction, the mental states of confused or fragmented subjects, and the difference between propositional knowledge and know-how.
When it comes to evaluating our own abilities and prospects, most people are subject to a distorting bias. We think that we are better – friendlier, better liked, better leaders, and better drivers – than we really are. Once we learn about this bias, we should ratchet down our self-evaluations to correct for it. But we don’t. That leaves us with an uncomfortable tension in our beliefs: we knowingly allow our beliefs to differ from the ones that we think are supported by our evidence. We can mitigate the tension by waffling between two belief states: a reflective state that has been recalibrated to take into account our tendency to overrate ourselves, and a non-reflective state that has not.
It is bad news to find out that one's cognitive or perceptual faculties are defective. Furthermore, it’s not always transparent how one ought to revise one's beliefs in light of such news. Two sorts of news should be distinguished. On the one hand, there is news that a faculty is unreliable -- that it doesn't track the truth particularly well. On the other hand, there is news that a faculty is anti-reliable -- that it tends to go positively wrong. These two sorts of news call for extremely different responses. We provide accounts of these responses, and prove bounds on the degree to which one can reasonably count oneself as mistaken about a given subject matter.
Say that an agent is "epistemically humble" if she is less than certain that her opinions will converge to the truth, given an appropriate stream of evidence. Is such humility rationally permissible? According to the orgulity argument, the answer is "yes," but long-run convergence-to-the-truth theorems force Bayesians to answer "no." That argument has no force against Bayesians who reject countable additivity as a requirement of rationality. Such Bayesians are free to count even extreme humility as rationally permissible.
There are 1,000 of us and one victim. We each increase the level at which a "discomfort machine" operates on the victim---leading to great discomfort. Suppose that consecutive levels of the machine are so similar that the victim cannot distinguish them. Have we acted permissibly? According to the "no-difference argument" the answer is "yes" because each of our actions was guaranteed to make the victim no worse off. This argument is of interest because if it is sound, similar arguments threaten intuitive moral verdicts about many cases in which a large number of individual choices cumulatively make a great difference, while each choice seems to make no difference on its own. But the argument is not sound, as is shown by a simple objection based on a plausible dominance principle---an objection that avoids challenges that have been brought against previous criticisms of the no-difference argument.
There is a huge chasm between the notion of lawful determination that figures in fundamental physics, and the notion of causal determination that figures in the "folk physics" of everyday objects. In everyday life, we think of the behavior of an ordinary object as being determined by a small set of simple conditions. But in fundamental physics, no such conditions suffice to determine an ordinary object's behavior. What bridges the chasm is that fundamental physical laws make the folk picture of the world approximately true in certain domains. How? In part, by entailing that many objects are approximately isolated from most of their environments. Dynamical laws yield this result only in conjunction with appropriate statistical assumptions about initial conditions.
Head direction (HD) cells, abundant in the rat postsubiculum and anterior thalamic nuclei, fire maximally when the rat’s head is facing a particular direction. The activity of a population of these cells forms a distributed representation of the animal’s current heading. We describe a neural network model that creates a stable, distributed representation of head direction and updates that representation in response to angular velocity information. In contrast to earlier models, our model of the head direction system accurately tracks a series of actual rat head rotations, and, using biologically plausible neurons, it fits the single-cell tuning curves of real HD cells recorded from rats executing those same rotations. The model makes neurophysiological predictions that can be tested using current technologies.
Counter-intuitive consequences of both causal decision theory and evidential decision theory are dramatized. Each of those theories is thereby put under some pressure to supply an error theory to explain away intuitions that seem to favour the other. Because trouble is stirred up for both sides, complacency about Newcomb’s problem is discouraged.
1. A particle moves back and forth along a line, increasing in speed. Graph its motion.
2. How many equivalence classes in Galilean spacetime are there for a particle that is at rest? A particle that is moving at a constant speed? Why are the previous two questions trick questions?
3. In Galilean spacetime, there is no such thing as absolute velocity. Is there such a thing as absolute acceleration? If not, why not? If so, describe a spacetime in which there is no notion of absolute acceleration. Hint: to move from Aristotelian spacetime to Galilean spacetime, we got rid of the notion of absolute velocity by counting two graphs as equivalent if they differed by a shear transformation. Perhaps we can get rid of absolute acceleration with an analogous move?
4. Draw a two-dimensional Cartesian grid. Label the axes x and t, and mark a scale on these axes. Make the x axis the horizontal axis, and the t axis the vertical one. Pick two points that are not on the same vertical line. Name them Ann and Bob. Label each point with its x and t coordinates.
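The shear transformation mentioned in the hint to problem 3 can be written down explicitly. A Galilean boost by velocity $v$ acts on spacetime coordinates as

\[
x' = x - vt, \qquad t' = t,
\]

so a trajectory $x(t)$ becomes $x'(t) = x(t) - vt$. Differentiating twice gives $\ddot{x}'(t) = \ddot{x}(t)$: boosts shift velocities but leave accelerations untouched, which is why eliminating absolute velocity by this move does not by itself eliminate absolute acceleration.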
The ability to analyze arguments is critical for higher-level reasoning, yet previous research suggests that standard university education provides at best modest improvements in students’ analytical reasoning abilities. What techniques are most effective for cultivating these skills? Here we investigate the effectiveness of a 12-week undergraduate seminar in which students practice a software-based technique for visualizing the logical structures implicit in argumentative texts. Seminar students met weekly to analyze excerpts from contemporary analytic philosophy papers, completed argument visualization problem sets, and received individualized feedback on a weekly basis. We found that Seminar students improved substantially more on LSAT Logical Reasoning test forms than Control students (d = 0.71, p < .001), suggesting that learning how to visualize arguments in the seminar led to large generalized improvements in students’ analytical reasoning skills. Moreover, blind scoring of final essays from Seminar students and Control students, drawn from a parallel lecture course, revealed large differences in favor of Seminar students (d = 0.87, p = .005). Seminar students understood the arguments better, and their essays were more accurate and effectively structured. Taken together, these findings deepen our understanding of how visualizations support logical reasoning, and provide a model for improving analytical reasoning pedagogy.
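The effect sizes reported above (d = 0.71, d = 0.87) are Cohen's d values: standardized mean differences between the two groups. As a quick illustration of what that statistic measures (a minimal sketch, not code from the study itself), the computation is:

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d: difference of group means divided by the pooled
    sample standard deviation."""
    na, nb = len(group_a), len(group_b)
    va, vb = stdev(group_a) ** 2, stdev(group_b) ** 2  # sample variances
    pooled_sd = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
    return (mean(group_a) - mean(group_b)) / pooled_sd
```

Read this way, d = 0.71 says the seminar group's mean score sat about 0.71 pooled standard deviations above the control group's.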
According to a typical skeptical hypothesis, the evidence of your senses has been massively deceptive. Venerable skeptical hypotheses include the hypotheses that you have been deceived by a powerful evil demon, that you are now having an incredibly detailed dream, and that you are a brain in a vat. It is obviously reasonable for you now to be confident that none of the above hypotheses is true. Epistemologists have proposed many stories to explain why that is reasonable. One theory is that those hypotheses are inherently much less plausible than the hypothesis that your senses are basically reliable. Another theory is that you currently don’t believe any of those hypotheses, and that you need no justification to continue in that disbelief, given that your current beliefs cohere properly. A third theory is that—given that your senses are working properly—your sensory experiences themselves justify you in believing propositions such as the proposition that you have hands. Neo is given very good evidence that some skeptical hypothesis is true. He rightly becomes doubtful that his senses are or have been trustworthy. In fact, he becomes confident of a particular hypothesis: that AIs created the Matrix, etc. But this is just one of many types of hypotheses that might account for his experiences. These types include.
Turing machine: An idealized computing device attached to a tape, each square of which is capable of holding a symbol. We write a program (a finite binary string) on the tape, and start the machine. If the machine halts with string o written at a designated place on the tape, we say that the machine outputs o on that program.
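The definition above can be made concrete with a toy simulator. The representation below (a transition table mapping state/symbol pairs to actions, with the machine halting when no transition applies) is one standard convention, offered as an illustration rather than the exact formalism the text assumes:

```python
def run_tm(transitions, tape, state="start", blank="_", max_steps=10_000):
    """Simulate a one-tape Turing machine.

    `transitions` maps (state, symbol) -> (new_state, symbol_to_write, move),
    where move is -1 (left) or +1 (right). The machine halts when no
    transition applies to the current (state, symbol) pair.
    """
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    pos = 0
    for _ in range(max_steps):
        symbol = cells.get(pos, blank)
        if (state, symbol) not in transitions:
            break  # halt: no applicable transition
        state, write, move = transitions[(state, symbol)]
        cells[pos] = write
        pos += move
    # read the used portion of the tape back off, left to right
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# A machine that flips every bit, halting at the first blank square:
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
}
```

For example, `run_tm(flip, "0110")` returns `"1001"`: the machine sweeps right once, rewriting each square, and halts when it reads a blank.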
In addition to being uncertain about what the world is like, one can also be uncertain about one's own spatial or temporal location in the world. The aim of this article is to pose a problem arising from the combination of these two kinds of uncertainty, then to solve it and draw two lessons from the solution.
Explain how to represent claims about Turing machines (for example, claims of the form "machine m halts on input i") in the above language. The goal is a mechanical method for translating claims about TMs into arithmetical claims in a way that preserves truth value.
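One classical device behind such translations is Gödel numbering: a finite sequence of numbers (for instance, a flattened transition table of machine m) is packed into a single integer via prime-power exponents, so that claims about the machine become arithmetical claims about that integer. A minimal sketch of the encoding step only (illustrative; not the full translation the exercise asks for):

```python
def primes():
    """Yield primes 2, 3, 5, ... by trial division (fine for small examples)."""
    n, found = 2, []
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def godel_encode(seq):
    """Encode a sequence of natural numbers as the product p_i ** seq[i],
    where p_i is the i-th prime."""
    code, gen = 1, primes()
    for exponent in seq:
        code *= next(gen) ** exponent
    return code

def godel_decode(code, length):
    """Recover a sequence of the given length from its Gödel number.
    The length must be supplied, since trailing zero exponents leave
    no trace in the code."""
    seq, gen = [], primes()
    for _ in range(length):
        p, e = next(gen), 0
        while code % p == 0:
            code //= p
            e += 1
        seq.append(e)
    return seq
```

For example, the sequence (3, 1, 2) becomes 2³ · 3¹ · 5² = 600, and divisibility facts about 600 express facts about the original sequence.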