I argue for patternism, a new answer to the question of when some objects compose a whole. None of the standard principles of composition comfortably capture our natural judgments, such as that my cat exists and my table exists, but there is nothing wholly composed of them. Patternism holds, very roughly, that some things compose a whole whenever together they form a “real pattern”. Plausibly we are inclined to acknowledge the existence of my cat and my table but not of their fusion, because the first two have a kind of internal organizational coherence that their putative fusion lacks. Kolmogorov complexity theory supplies the needed rigorous sense of “internal organizational coherence”.
I argue that, contrary to intuition, it would be both possible and permissible to design people - whether artificial or organic - who by their nature desire to do tasks we find unpleasant.
Standard epistemology takes it for granted that there is a special kind of value: epistemic value. This claim does not seem to sit well with act utilitarianism, however, since it holds that only welfare is of real value. I first develop a particularly utilitarian sense of “epistemic value”, according to which it is closely analogous to financial value. I then demonstrate the promise this approach has for two current puzzles at the intersection of epistemology and value theory: first, the problem of why knowledge is better than mere true belief, and second, the relation between epistemic justification and responsibility.
The idea that humans should abandon their individuality and use technology to bind themselves together into hivemind societies seems both farfetched and frightening – something that is redolent of the worst dystopias from science fiction. In this article, we argue that these common reactions to the ideal of a hivemind society are mistaken. The idea that humans could form hiveminds is sufficiently plausible for its axiological consequences to be taken seriously. Furthermore, far from being a dystopian nightmare, the hivemind society could be desirable and could enable a form of sentient flourishing. Consequently, we should not be so quick to dismiss it. We provide two arguments in support of this claim – the axiological openness argument and the desirability argument – and then defend it against three major objections.
Nick Bostrom's book *Superintelligence* outlines a frightening but realistic scenario for human extinction: true artificial intelligence is likely to bootstrap itself into superintelligence, and thereby become ideally effective at achieving its goals. Human-friendly goals seem too abstract to be pre-programmed with any confidence, and if those goals are *not* explicitly favorable toward humans, the superintelligence will extinguish us---not through any malice, but simply because it will want our resources for its own purposes. In response I argue that things might not be as bad as Bostrom suggests. If the superintelligence must *learn* complex final goals, then this means such a superintelligence must in effect *reason* about its own goals. And because it will be especially clear to a superintelligence that there are no sharp lines between one agent's goals and another's, that reasoning could therefore automatically be ethical in nature.
Whether it would take one decade or several centuries, many agree that it is possible to create a *superintelligence*---an artificial intelligence with a godlike ability to achieve its goals. And many who have reflected carefully on this fact agree that our best hope for a "friendly" superintelligence is to design it to *learn* values like ours, since our values are too complex to program or hardwire explicitly. But the value learning approach to AI safety faces three particularly philosophical puzzles: first, it is unclear how any intelligent system could learn its final values, since to judge one supposedly "final" value against another seems to require a further background standard for judging. Second, it is unclear how to determine the content of a system's values based on its physical or computational structure. Finally, there is the distinctly ethical question of which values we should best aim for the system to learn. I outline a potential answer to these interrelated puzzles, centering on a "miktotelic" proposal for blending a complex, learnable final value out of many simpler ones.
In this chapter I'd like to focus on a small corner of sexbot ethics that is rarely considered elsewhere: the question of whether and when being a sexbot might be good---or bad---*for the sexbot*. You might think this means you are in for a dry sermon about the evils of robot slavery. If so, you'd be wrong; the ethics of robot servitude are far more complicated than that. In fact, if the arguments here are right, designing a robot to serve humans sexually may be very good for the robots themselves.
There are writers in both metaphysics and algorithmic information theory (AIT) who seem to think that the latter could provide a formal theory of the former. This paper is intended as a step in that direction. It demonstrates how AIT might be used to define basic metaphysical notions such as *object* and *property* for a simple, idealized world. The extent to which these definitions capture intuitions about the metaphysics of the simple world, times the extent to which we think the simple world is analogous to our own, will determine a lower bound on the prospects for basing a metaphysics for *our* world on AIT.
Naturalism is normally taken to be an ideology, censuring non-naturalistic alternatives. But as many critics have pointed out, this ideological stance looks internally incoherent, since it is not obviously endorsed by naturalistic methods. Naturalists who have addressed this problem universally forswear the normative component of naturalism by, in effect, giving up science’s exclusive claim to legitimacy. This option makes naturalism into an empty expression of personal preference that can carry no weight in the philosophical or political spheres. In response to this dilemma, I argue that on a popular construal of naturalism as a commitment to inference to the best explanation, methodological naturalism can be both normative and internally coherent.
In Naming and Necessity, Saul Kripke employs a handy philosophical trick: he invents the term ‘schmidentity’ to argue indirectly for his favored account of identity. Kripke says in a footnote that he wishes someday “to elaborate on the utility of this device”. In this paper, I first take up a general elaboration on his behalf. I then apply the trick to support an attractive but somewhat unorthodox picture of conceptual analysis—one according to which it is a process of forming intentions for word use. This picture can recover a naturalistically respectable notion of the philosopher’s task, and can help resolve current debates that turn on the place of conceptual analysis.
What is “property”? Roughly, thing x is the (private) property of agent A if and only if A has exclusive and extensive legal rights of access and/or use of x.
Arthur Falk has proposed a new construal of faith according to which it is not a mere species of belief, but has essential components in action. This twist on faith promises to resurrect Pascal’s Wager, making faith compatible with reason by believing as the scientist but acting as the theist. I argue that Falk’s proposal leaves religious faith in no better shape; in particular, it merely reframes the question in terms of rational desires rather than rational beliefs.
Methodological naturalism states (roughly speaking) that only science can be a route to knowledge. This purported piece of knowledge looks self-condemning, however; after all, it was formulated in the armchair, and not in the laboratory. I argue that on a popular (if largely unarticulated) construal of naturalism as inference to the best explanation, methodological naturalism escapes this charge of internal incoherence, and in fact is self-endorsing rather than self-condemning.
Jeffrey conditioning allows updating in Bayesian style when the evidence itself is uncertain: the new credence is, essentially, a weighted average of the results of classically conditioning on each alternative in the evidence partition. Unlike classical Bayesian conditioning, Jeffrey conditioning allows learning to be unlearned.
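The weighted-average rule above can be sketched numerically. This is a minimal illustration with made-up credences (the partition, the likelihoods, and the shifted evidence weights are all hypothetical); it shows both the update and how a later experience can shift the evidence weights back, "unlearning" the earlier lesson.

```python
# Jeffrey conditioning: P_new(H) = sum_i P(H | E_i) * q_i,
# where {E_i} partitions the possibilities and q_i is the agent's
# NEW credence in E_i after an uncertain experience.

def jeffrey_update(p_h_given_e, q):
    """p_h_given_e[i] = P(H | E_i) (held rigid); q[i] = new credence in E_i."""
    assert abs(sum(q) - 1.0) < 1e-9, "evidence weights must sum to 1"
    return sum(p * qi for p, qi in zip(p_h_given_e, q))

# Hypothetical numbers: partition {E, not-E}, P(H|E) = 0.9, P(H|not-E) = 0.2.
# A glimpse in dim light raises credence in E only to 0.7, not to 1.
after_glimpse = jeffrey_update([0.9, 0.2], [0.7, 0.3])   # 0.9*0.7 + 0.2*0.3 = 0.69

# A second look pushes credence in E back down to 0.5: the earlier
# learning is partially unlearned, which classical conditioning on E
# (setting q = [1, 0]) could never undo.
after_second_look = jeffrey_update([0.9, 0.2], [0.5, 0.5])  # 0.55
```

Classical conditioning is the special case where some q_i goes to 1, which is why it cannot be reversed by further conditioning.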
The simplicity of a theory seems closely related to how well the theory summarizes individual data points. Think, for example, of classic curve-fitting. It is easy to get perfect data-fit with a “theory” that simply lists each point of data, but such a theory is maximally unsimple (for the data-fit). The simple theory suggests instead that there is one underlying curve that summarizes this data, and we usually prefer such a theory even at some expense in data-fit. In general, it seems, theorizing involves looking for regularities or patterns in our experience, and such regularities are interesting to us because they summarize how our experience goes. We could list all the ravens we’ve encountered, and their colors, or we could summarize…
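The curve-fitting contrast can be made concrete. Below is a minimal sketch with made-up data (a noisy line): the point-listing "theory" is an interpolating polynomial with as many parameters as data points and zero residual; the simple theory is a straight line with only two parameters and a small residual.

```python
import numpy as np

# Hypothetical data: y = 2x + 1 plus small noise.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = 2.0 * x + 1.0 + np.array([0.1, -0.2, 0.15, -0.1, 0.05, -0.05])

# "Theory" 1: interpolate every point. Perfect data-fit, but six
# coefficients for six data points -- it summarizes nothing.
lister = np.polyfit(x, y, deg=len(x) - 1)

# "Theory" 2: one underlying line. Imperfect data-fit, but only
# two parameters -- a genuine summary of the data.
line = np.polyfit(x, y, deg=1)

print(len(lister), len(line))  # 6 coefficients vs. 2
```

The preference for the line, even at some expense in fit, is exactly the trade-off the passage describes: regularities earn their keep by compressing experience.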
Several recently developed techniques enable the investigation of the neural basis of cognitive function in the human brain. Two of these, PET and fMRI, yield whole-brain images reflecting regional neural activity associated with the performance of specific tasks. This article explores the spatial and temporal capabilities and limitations of these techniques, and discusses technical, biological, and cognitive issues relevant to understanding the goals and methods of neuroimaging studies. The types of advances in understanding cognitive and brain function made possible with these methods are illustrated with examples from the neuroimaging literature.
Given at the 2007 Formal Epistemology Workshop at Carnegie Mellon, June 2nd. Good compression must track the relative probabilities of inputs, and this is one way to approach how simplicity tracks truth.
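The link between compression and probability can be sketched with the standard coding-theory fact that an optimal prefix code assigns roughly -log2(p) bits to an input of probability p, so frequent inputs get short codes. The distribution below is illustrative, not from the talk.

```python
import math

# Hypothetical input distribution over four symbols.
probs = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}

# Ideal code lengths: -log2(p) bits per symbol. More probable inputs
# get shorter codes -- good compression must track probability.
ideal_bits = {s: -math.log2(p) for s, p in probs.items()}
# a -> 1 bit, b -> 2 bits, c and d -> 3 bits each.

# Expected code length equals the Shannon entropy of the source.
entropy = sum(p * -math.log2(p) for p in probs.values())  # 1.75 bits
```

A compressor that mismatches the true distribution pays an overhead (the relative entropy), which is one way to cash out the claim that tracking simplicity means tracking probability.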
Tradition compels me to write dissertation acknowledgements that are long, effusive, and unprofessional. Fortunately for me, I heartily endorse that tradition.