We propose that children employ specialized cognitive systems that allow them to recover an accurate “causal map” of the world: an abstract, coherent, learned representation of the causal relations among events. This kind of knowledge can be perspicuously understood in terms of the formalism of directed graphical causal models, or “Bayes nets”. Children’s causal learning and inference may involve computations similar to those for learning causal Bayes nets and for predicting with them. Experimental results suggest that 2- to 4-year-old children construct new causal maps and that their learning is consistent with the Bayes net formalism.
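A note on the formalism invoked above: a causal Bayes net pairs a directed acyclic graph over variables with the Markov factorization of their joint distribution. The factorization, standard in the Bayes net literature rather than specific to the experiments summarized here, is

$$P(X_1, \ldots, X_n) \;=\; \prod_{i=1}^{n} P\big(X_i \mid \mathrm{Pa}(X_i)\big),$$

where Pa(X_i) denotes the parents of X_i in the graph.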
Clark Glymour, Richard Scheines, Peter Spirtes and Kevin Kelly. Discovering Causal Structure: Artificial Intelligence, Philosophy of Science and Statistical Modeling.
Halvorson argues through a series of examples and a general result due to Myers that the “semantic view” of theories has no available account of formal theoretical equivalence. De Bouvere provides criteria, overlooked in Halvorson’s paper, that are immune to his counterexamples and to the theorem he cites. Those criteria accord with a modest version of the semantic view that rejects some of van Fraassen’s apparent claims while retaining the core of Patrick Suppes’s proposal. I do not endorse any version of the semantic view of theories.
We consider the dispute between causal decision theorists and evidential decision theorists over Newcomb-like problems. We introduce a framework relating causation and directed graphs developed by Spirtes et al. (1993) and evaluate several arguments in this context. We argue that much of the debate between the two camps is misplaced; the disputes turn on the distinction between conditioning on an event E as against conditioning on an event I which is an action to bring about E. We give the essential machinery for calculating the effect of an intervention and consider recent work which extends the basic account given here to the case where causal knowledge is incomplete.
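The “essential machinery” for computing the effect of an intervention is, in the Spirtes et al. framework, the manipulated or “truncated” factorization; stated here in now-standard notation (an editorial gloss, not a quotation from the paper), an intervention setting X to x yields

$$P\big(v_1, \ldots, v_n \mid \mathrm{do}(X = x)\big) \;=\; \prod_{i:\, V_i \neq X} P\big(v_i \mid \mathrm{pa}(V_i)\big)\Big|_{X = x}.$$

Conditioning on the event X = x, by contrast, is ordinary conditioning on the unmanipulated joint distribution, which is exactly the distinction the abstract says the Newcomb disputes turn on.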
We argue that current discussions of criteria for actual causation are ill-posed in several respects. (1) The methodology of current discussions is by induction from intuitions about an infinitesimal fraction of the possible examples and counterexamples; (2) cases with larger numbers of causes generate novel puzzles; (3) "neuron" and causal Bayes net diagrams are, as deployed in discussions of actual causation, almost always ambiguous; (4) actual causation is (intuitively) relative to an initial system state since state changes are relevant, but most current accounts ignore state changes through time; (5) more generally, there is no reason to think that philosophical judgements about these sorts of cases are normative; but (6) there is a dearth of relevant psychological research that bears on whether various philosophical accounts are descriptive. Our skepticism is not directed towards the possibility of a correct account of actual causation; rather, we argue that standard methods will not lead to such an account. A different approach is required.
Recent literature in philosophy of science has addressed purported notions of explanatory virtues—‘explanatory power’, ‘unification’, and ‘coherence’. In each case, a probabilistic relation between a theory and data is said to measure the power of an explanation, or degree of unification, or degree of coherence. This essay argues that the measures do not capture cases that are paradigms of scientific explanation, that the available psychological evidence indicates that the measures do not capture judgements of explanatory power, and, finally, that the measures do not provide useful methods for selecting hypotheses. 1. Introduction 2. Some Proposed Measures of Explanatory Virtues 3. Descriptive Inadequacy 3.1 Excellent but false explanations 3.2 Causal explanation 4. Psychological Inadequacy 5. Finding the Truth 6. Conclusion.
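For concreteness, one representative measure of the kind criticized, due to Schupbach and Sprenger (whether it is among those the essay examines is an assumption of this gloss), takes the explanatory power of hypothesis h with respect to evidence e to be

$$\mathcal{E}(e, h) \;=\; \frac{P(h \mid e) - P(h \mid \lnot e)}{P(h \mid e) + P(h \mid \lnot e)},$$

a quantity that ranges from -1 to 1 and depends only on the probabilistic relation between theory and data.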
I argue that psychologists interested in human causal judgment should understand and adopt a representation of causal mechanisms by directed graphs that encode conditional independence (screening off) relations. I illustrate the benefits of that representation, now widely used in computer science and increasingly in statistics, by (i) showing that a dispute in psychology between ‘mechanist’ and ‘associationist’ psychological theories of causation rests on a false and confused dichotomy; (ii) showing that a recent, much-cited experiment, purporting to show that human subjects incorrectly let large causes ‘overshadow’ small causes, misrepresents the most likely, and warranted, causal explanation available to the subjects, in the light of which their responses were normative; (iii) showing how a recent psychological theory (due to P. Cheng) of human judgment of causal power can be considerably generalized; and (iv) suggesting a range of possible experiments comparing human and computer abilities to extract causal information from associations.
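The screening-off relation such graphs encode can be stated compactly: a variable C screens off A from B just in case

$$P(A, B \mid C) \;=\; P(A \mid C)\, P(B \mid C),$$

that is, A and B are conditionally independent given C.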
Reverse inference in cognitive neuropsychology has been characterized as inference to ‘psychological processes’ from ‘patterns of activation’ revealed by functional magnetic resonance imaging or other scanning techniques. Several arguments have been provided against the possibility. Focusing on Machery’s presentation, we attempt to clarify the issues, rebut the impossibility arguments, and propose and illustrate a strategy for reverse inference. 1 The Problem of Reverse Inference in Cognitive Neuropsychology 2 The Arguments 2.1 The anti-Bayesian argument 3 Patterns of Activation 4 Reverse Inference Practiced 5 Seek and Ye Shall Find, Maybe 6 Conclusion.
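The Bayesian framing targeted by the ‘anti-Bayesian argument’ is, on its standard formulation (an assumption of this gloss, not a quotation from the paper), an application of Bayes’ theorem to infer a process Proc from an activation pattern Act:

$$P(\mathrm{Proc} \mid \mathrm{Act}) \;=\; \frac{P(\mathrm{Act} \mid \mathrm{Proc})\, P(\mathrm{Proc})}{P(\mathrm{Act} \mid \mathrm{Proc})\, P(\mathrm{Proc}) + P(\mathrm{Act} \mid \lnot \mathrm{Proc})\, P(\lnot \mathrm{Proc})},$$

which is informative only to the extent that the activation is selective, i.e. P(Act | Proc) substantially exceeds P(Act | ¬Proc).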
Scientists often claim that an experiment or observation tests certain hypotheses within a complex theory but not others. Relativity theorists, for example, are unanimous in the judgment that measurements of the gravitational red shift do not test the field equations of general relativity; psychoanalysts sometimes complain that experimental tests of Freudian theory are at best tests of rather peripheral hypotheses; astronomers do not regard observations of the positions of a single planet as a test of Kepler's third law, even though those observations may test Kepler's first and second laws. Observations are regarded as relevant to some hypotheses in a theory but not relevant to others in that same theory. There is another kind of scientific judgment that may or may not be related to such judgments of relevance: determinations of the accuracy of the predictions of some theories are not held to provide tests of those theories, or, at least, positive results are not held to support or confirm the theories in question. There are, for example, special relativistic theories of gravity that predict the same phenomena as does general relativity, yet the theories are regarded as...
"Goodness of Fit": Clinical Applications from Infancy through Adult Life. By Stella Chess & Alexander Thomas. Brunner/Mazel, Philadelphia, PA, 1999. pp. 229. pound24.95 (hb). Chess and Thomas's pioneering longitudinal studies of temperamental individuality started over 40 years ago (Thomas et al., 1963). Their publications soon became and remain classics. Their concept of "goodness of fit" emerges out of this monumental work but has had a long gestation period. In their new book, the authors distinguish between behaviour disorders that are reactive (...) to the child's life circumstances, including life events, and which are self-correcting or responsive to the relevant changes in their environment, and more serious disorders. (shrink)
Some Philosophical Prehistory of General Relativity. As history, my remarks will form rather a medley. If they can claim any sort of unity (apart from a ...
The ultimate focus of the current essay is on methods of “creative abduction” that have some guarantees as reliable guides to the truth, and those that do not. Emphasizing work by Ronald Inglehart using data from the World Values Survey, Gerhard Schurz has analyzed literature surrounding Samuel Huntington’s well-known claims that civilization is divided into eight contending traditions, some of which resist “modernization” – democracy, civil rights, equality of rights of women and minorities, secularism. Schurz suggests an evolutionary model of modernization and identifies opposing social forces. In a later essay, citing Inglehart’s work as an example, Schurz identifies factor analysis as an example of “creative abduction”. The theories of Inglehart and his collaborators are reviewed again in the current essay. Published simulations and standard statistical desiderata for causal inference show that the methods Inglehart used, factor analysis in particular, are not guides to truth for the kind of data Schurz recognizes as common in political science. Recent work in statistics, philosophy and computer science that makes advances towards such methods is briefly reviewed.
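A minimal sketch of the kind of point at issue (the generating model and variable names are hypothetical illustrations, not Inglehart's data or the paper's published simulations): data are generated from a known causal chain and an off-the-shelf factor analysis is fit; the output is always a latent common-cause summary, so nothing in it distinguishes the chain that actually produced the data from a common-cause structure.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical generating model: a causal chain x1 -> x2 -> x3.
# An illustration of the method's inputs and outputs only,
# not Inglehart's data or the paper's published simulations.
rng = np.random.default_rng(0)
n = 5000
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(scale=0.6, size=n)
x3 = 0.8 * x2 + rng.normal(scale=0.6, size=n)
X = np.column_stack([x1, x2, x3])

# Fit a one-factor model: factor analysis summarizes the covariance
# matrix with a single latent common cause, whatever structure in
# fact generated the data.
fa = FactorAnalysis(n_components=1, random_state=0)
fa.fit(X)
print(fa.components_)  # loadings: a common-cause summary of chain-generated data
```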
A reprint of the Prentice-Hall edition of 1992. Prepared by nine distinguished philosophers and historians of science, this thoughtful reader represents a cooperative effort to provide an introduction to the philosophy of science focused on cultivating an understanding of both the workings of science and its historical and social context. Selections range from discussions of topics in general methodology to a sampling of foundational problems in various physical, biological, behavioral, and social sciences. Each chapter contains a list of suggested readings and study questions.
The rationality of human causal judgments has been the focus of a great deal of recent research. We argue against two major trends in this research, and for a quite different way of thinking about causal mechanisms and probabilistic data. Our position rejects a false dichotomy between "mechanistic" and "probabilistic" analyses of causal inference -- a dichotomy that both overlooks the nature of the evidence that supports the induction of mechanisms and misses some important probabilistic implications of mechanisms. This dichotomy has obscured an alternative conception of causal learning: for discrete events, a central adaptive task is to induce causal mechanisms in the environment from probabilistic data and prior knowledge. Viewed from this perspective, it is apparent that the probabilistic norms assumed in the human causal judgment literature often do not map onto the mechanisms generating the probabilities. Our alternative conception of causal judgment is more congruent with both scientific uses of the notion of causation and observed causal judgments of untutored reasoners. We illustrate some of the relevant variables under this conception, using a framework for causal representation now widely adopted in computer science and, increasingly, in statistics. We also review the formulation and evidence for a theory of human causal induction (Cheng, 1997) that adopts this alternative conception.
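For reference, the Cheng (1997) theory mentioned, the power PC theory, estimates the power q of a candidate generative cause c of effect e from probabilistic contrast, assuming c occurs independently of the other causes of e:

$$q_c \;=\; \frac{P(e \mid c) - P(e \mid \lnot c)}{1 - P(e \mid \lnot c)}.$$

The numerator is the ordinary contrast ΔP; the denominator corrects for the rate at which e is produced by causes other than c.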
Contemporary cognitive neuropsychology attempts to infer unobserved features of normal human cognition, or ‘cognitive architecture’, from experiments with normals and with brain-damaged subjects in whom certain normal cognitive capacities are altered, diminished, or absent. Fundamental methodological issues about the enterprise of cognitive neuropsychology concern the characterization of methods by which features of normal cognitive architecture can be identified from such data, the assumptions upon which the reliability of such methods is premised, and the limits of such methods (even granting their assumptions) in resolving uncertainties about that architecture. With some idealization, the question of the capacities of various experimental designs in cognitive neuropsychology to uncover cognitive architecture can be reduced to comparatively simple questions about the prior assumptions investigators are willing to make. This paper presents some of the simplest of those reductions.
Few people have thought so hard about the nature of the quantum theory as has Jeff Bub, and so it seems appropriate to offer in his honor some reflections on that theory. My topic is an old one, the consistency of our microscopic theories with our macroscopic theories; my example, the Aspect experiments (Aspect et al., 1981, 1982, 1982a; Clauser and Shimony, 1978; Duncan and Kleinpoppen, 1998), is familiar, and my simplification of it is borrowed. All that is new here is a kind of diagonalization: an argument that the fundamental principles found to be violated by the quantum theory must be assumed to be true of the experimental apparatus used in the experiments.
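For background (stated in standard notation, not quoted from the paper): the principles at issue in the Aspect experiments are the Bell inequalities in their CHSH form, which require that, for any local hidden-variable account, the correlations E at analyzer settings a, a′, b, b′ satisfy

$$\big|\, E(a, b) + E(a, b') + E(a', b) - E(a', b') \,\big| \;\le\; 2,$$

whereas quantum mechanics predicts values up to 2√2.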
Halpern's Actual Causality is an extended development of an account of causal relations among individual events in the tradition that analyzes causation as difference making. The book is notable for its efforts at formal clarity, its exploration of "normality" conditions, and the wealth of examples it uses and whose provenance it traces. Unfortunately, the various normality conditions considered undermine the capacity of the basic theory to plausibly treat various cases Halpern considers, and the unalloyed basic theory yields implausible results in simple cases of overdetermination, which are not remedied by Halpern's probabilistic version of his theory or unambiguously by the variety of normality conditions Actual Causality entertains.
The notion of reduction in the natural sciences has been assimilated to the notion of inter-theoretical explanation. Many philosophers of science (following Nagel) have held that the apparently ontological issues involved in reduction should be replaced by analyses of the syntactic and semantic connections involved in explaining one theory on the basis of another. The replacement does not seem to have been especially successful, for we still lack a plausible account of inter-theoretical explanation. I attempt to provide one.
Twenty years ago, Nancy Cartwright wrote a perceptive essay in which she clearly distinguished causal relations from associations, introduced philosophers to Simpson’s paradox, articulated the difficulties for reductive probabilistic analyses of causation that flow from these observations, and connected causal relations with strategies of action (Cartwright 1979). Five years later, without appreciating her essay, I and my (then) students began to develop formal representations of causal and probabilistic relations, which, subsequently informed by the work of computer scientists and statisticians, led eventually to a practical theory of causal inference and prediction, a theory incorporating some of the sensibilities Cartwright had voiced (Glymour et al. 1987; Spirtes et al. 1993). That theory, and ideas related to it, have become a subfield of computer science with contributions far deeper than mine from many sources, and its inferential and predictive techniques have been successfully applied in biology, economics, educational research, geology and space physics.
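For readers new to it, Simpson’s paradox is the fact that an association can hold in every subpopulation and yet reverse in the aggregate: it is consistent that

$$P(E \mid C, F) > P(E \mid \lnot C, F) \quad \text{and} \quad P(E \mid C, \lnot F) > P(E \mid \lnot C, \lnot F), \quad \text{yet} \quad P(E \mid C) < P(E \mid \lnot C),$$

which is one reason associations alone cannot settle what a strategy of bringing about C would accomplish.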
Really statistical (RS) explanation is a hitherto neglected form of noncausal scientific explanation. Explanations in population biology that appeal to drift are RS explanations. An RS explanation supplies a kind of understanding that a causal explanation of the same result cannot supply. Roughly speaking, an RS explanation shows the result to be mere statistical fallout.
One construal of convergent realism is that for each clear question, scientific inquiry eventually answers it. In this paper we adapt the techniques of formal learning theory to determine in a precise manner the circumstances under which this ideal is achievable. In particular, we define two criteria of convergence to the truth on the basis of evidence. The first, which we call EA convergence, demands that the theorist converge to the complete truth "all at once". The second, which we call AE convergence, demands only that for every sentence in the theorist's language, there is a time at which the theorist settles the status of the sentence. The relative difficulties of these criteria are compared for effective and ineffective agents. We then examine in detail how the enrichment of an agent's hypothesis language makes the task of converging to the truth more difficult. In particular, we parametrize first-order languages by predicate and function symbol arity, presence or absence of identity, and quantifier prefix complexity. For nearly each choice of values of these parameters, we determine the senses in which effective and ineffective agents can converge to the complete truth on an arbitrary structure for the language. Finally, we sketch directions in which our learning theoretic setting can be generalized or made more realistic.
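In quantifier form (our notation; the definitions follow the abstract), with T_n the theorist’s conjecture after the n-th datum and T the complete true theory:

$$\text{EA convergence:}\quad \exists n\, \forall m \ge n:\ T_m = T$$
$$\text{AE convergence:}\quad \forall \varphi\, \exists n\, \forall m \ge n:\ T_m \text{ settles } \varphi \text{ correctly},$$

the names recording the order of the leading quantifiers, existential-universal as against universal-existential.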
Using Gebharter’s representation, we consider aspects of the problem of discovering the structure of submechanisms whose variables have not been measured. Exploiting an early insight of Sober’s, we provide a correct algorithm for identifying latent, endogenous structure (submechanisms) for a restricted class of structures. The algorithm can be merged with other methods for discovering causal relations among unmeasured variables, and feedback relations between measured variables and unobserved causes can sometimes be learned.
Taking seriously the arguments of Earman, Roberts and Smith that ceteris paribus laws have no semantics and cannot be tested, I suggest that ceteris paribus claims have a kind of formal pragmatics, and that at least some of them can be verified or refuted in the limit.
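On the formal learning-theoretic reading presumably intended (an assumption; the paper’s own definitions may differ in detail), a claim h is verifiable in the limit just in case there is a method M such that, on every admissible evidence stream e_1, e_2, ...,

$$h \text{ is true} \iff \exists n\, \forall m \ge n:\ M(e_1, \ldots, e_m) = 1,$$

so M eventually conjectures h forever exactly when h holds, though no finite stage certifies that the method has converged.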