Computer-based argument mapping greatly enhances student critical thinking, more than tripling absolute gains made by other methods. I describe the method and my experience as an outsider. Argument mapping often showed precisely how students were erring (for example: mistaking helping premises for separate reasons), making it much easier for them to fix their errors.
This paper presents an attempt to integrate theories of causal processes—of the kind developed by Wesley Salmon and Phil Dowe—into a theory of causal models using Bayesian networks. We suggest that arcs in causal models must correspond to possible causal processes. Moreover, we suggest that when processes are rendered physically impossible by what occurs on distinct paths, the original model must be restricted by removing the relevant arc. These two techniques suffice to explain cases of late preëmption and other cases that have proved problematic for causal models.
We present a probabilistic extension to active path analyses of token causation (Halpern & Pearl 2001, forthcoming; Hitchcock 2001). The extension uses the generalized notion of intervention presented by Korb et al. (2004): we allow an intervention to set any probability distribution over the intervention variables, not just a single value. The resulting account can handle a wide range of examples. We do not claim the account is complete, only that it fills an obvious gap in previous active-path approaches. It still succumbs to recent counterexamples by Hiddleston (2005), because it does not explicitly consider causal processes. We claim three benefits: a detailed comparison of three active-path approaches, a probabilistic extension for each, and an algorithmic formulation.
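The generalized intervention can be sketched in a few lines of Python. The two-node net C → E and the numbers below are illustrative, not drawn from the paper; a classical atomic intervention is then just the special case of a degenerate distribution over the intervention variable.

```python
# Sketch of a "stochastic intervention" on a two-node Bayes net C -> E.
# Variable names and parameter values are illustrative assumptions.

def p_effect(p_c, p_e_given_c):
    """Marginal P(E=1) for binary C with P(C=1) = p_c."""
    return p_c * p_e_given_c[1] + (1 - p_c) * p_e_given_c[0]

p_e_given_c = {0: 0.1, 1: 0.8}   # P(E=1 | C=c)

# A classical (atomic) intervention fixes C to a single value:
atomic = p_effect(1.0, p_e_given_c)        # do(C=1)

# A generalized intervention sets *any* distribution over C:
stochastic = p_effect(0.3, p_e_given_c)    # do(C ~ Bernoulli(0.3))
```

The only change from the atomic case is that the intervened variable keeps a full distribution rather than collapsing to a point mass.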
The investigation of probabilistic causality has been plagued by a variety of misconceptions and misunderstandings. One has been the thought that the aim of the probabilistic account of causality is the reduction of causal claims to probabilistic claims. Nancy Cartwright (1979) has clearly rebutted that idea. Another ill-conceived idea continues to haunt the debate, namely the idea that contextual unanimity can do the work of objective homogeneity. It cannot. We argue that only objective homogeneity in combination with a causal interpretation of Bayesian networks can provide the desired criterion of probabilistic causality.
James McAllister’s 2003 article, “Algorithmic randomness in empirical data”, claims that empirical data sets are algorithmically random, and hence incompressible. We show that this claim is mistaken. We present theoretical arguments and empirical evidence for compressibility, and discuss the matter in the framework of Minimum Message Length (MML) inference, which shows that the theory which best compresses the data is the one with highest posterior probability, and the best explanation of the data.
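As a toy illustration of the compressibility point (not the MML formalism itself, and not McAllister's data), one can compare how a general-purpose compressor fares on regular versus random byte sequences; the repeating pattern below is an assumed stand-in for structured empirical data.

```python
# Toy illustration (not from the paper): structured data compresses,
# pseudo-random noise does not.
import random
import zlib

random.seed(0)
structured = bytes(range(256)) * 40                         # highly regular, 10240 bytes
noise = bytes(random.randrange(256) for _ in range(10240))  # incompressible, 10240 bytes

len_structured = len(zlib.compress(structured, 9))
len_noise = len(zlib.compress(noise, 9))
# len_structured falls far below 10240; len_noise stays near (or above) it.
```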
We present a minimum message length (MML) framework for trajectory partitioning by point selection, and use it to automatically select the tolerance parameter ε for Douglas-Peucker partitioning, adapting to local trajectory complexity. Examining a range of ε for synthetic and real trajectories shows that the best ε does vary by trajectory, and that the MML encoding makes sensible choices and is robust against Gaussian noise. We use it to explore the identification of micro-activities within a longer trajectory. This MML metric is comparable to the TRACLUS metric – and shares the constraint of abstracting only by omission of points – but is a true lossless encoding. Such an encoding has several theoretical advantages – particularly with very small segments (high frame rates) – but actual performance interacts strongly with the search algorithm. Both differ from unconstrained piecewise linear approximations, including other MML formulations.
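For readers unfamiliar with the underlying partitioner, a standard Douglas-Peucker sketch follows. Here the tolerance eps is hand-set; the paper's contribution is choosing it automatically by MML, adapting to local trajectory complexity.

```python
# Standard Douglas-Peucker simplification: keep the point farthest from the
# chord; recurse if it exceeds eps, otherwise drop all interior points.
import math

def perp_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    norm = math.hypot(dx, dy)
    if norm == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dx * (ay - py) - dy * (ax - px)) / norm

def douglas_peucker(points, eps):
    if len(points) < 3:
        return list(points)
    dists = [perp_dist(p, points[0], points[-1]) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] > eps:
        left = douglas_peucker(points[:i + 1], eps)
        right = douglas_peucker(points[i:], eps)
        return left[:-1] + right   # drop duplicated split point
    return [points[0], points[-1]]
```

Note the simplification abstracts only by omitting points, the same constraint the paper's MML metric shares with TRACLUS.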
There is a need to rapidly assess the impact of new technology initiatives on the Counter Improvised Explosive Device battle in Iraq and Afghanistan. The immediate challenge is the need for rapid decisions, and a lack of engineering test data to support the assessment. The rapid assessment methodology exploits available information to build a probabilistic model that provides an explicit executable representation of the initiative’s likely impact. The model is used to provide a consistent, explicit explanation to decision makers of the likely impact of the initiative. Sensitivity analysis on the model provides analytic information to support development of informative test plans.
Paper presented to the Twenty-seventh Hume Society Conference, 26 July 2000, Williamsburg, Virginia. At the time I thought there was a stronger link between Maclaurin and Hume, but in discussions at and after the meeting I decided Hume was not taking his mechanics out of Maclaurin’s Account. Although I have still found Maclaurin useful in interpreting Hume -- see Sapadin 1997 for a discussion of popular Newtonianism in Hume's day -- I suspect my draft suffers somewhat from ambivalence. There are still similarities, and possible avenues of influence, arguing that Hume was not ignorant of the new mechanics; but it also becomes clear that he did not understand it: although he adopts the Newtonian measure of force, he misapplies it.
Artificial Intelligence (AI) and Philosophy of Science share a fundamental problem—that of understanding causality. Bayesian network techniques have recently been used by Judea Pearl in a new approach to understanding causality and causal processes (Pearl, 2000). Pearl’s approach has great promise, but needs to be supplemented with an explicit account of causal interaction. Thus far, despite considerable interest, philosophy has provided no useful account of causal interaction. Here we provide one, employing the concepts of Bayesian networks. With it we demonstrate the failure of one of philosophy’s more sophisticated attempts to deal with the concept of causal interaction, that of Ellery Eells’ Probabilistic Causality (1991).
Part of our fascination with the Maya can be attributed to the fact that they were literate . . . that is, the Classic Maya possessed a visible language that consisted of letters and a grammar, and one of the products of their literacy was the book. (Aveni 1992b, p.3).
Artificial Intelligence (AI) and Philosophy of Science share a fundamental problem—understanding causality. Bayesian networks have recently been used by Judea Pearl in a new approach to understanding causality (Pearl, 2000). Part of understanding causality is understanding causal interaction. Bayes nets can represent any degree of causal interaction, and researchers normally try to limit interactions, usually by replacing the full CPT with a noisy-OR function. But we show that noisy-OR and another common model are merely special cases of the general linear systems definition of noninteraction. However, they apply in different situations, and we can measure the degree of causal interaction relative to any such model.
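A minimal sketch of the noisy-OR model mentioned above (parameter values are illustrative): each active cause independently fails to bring about the effect with probability 1 − p_i, so the full 2^n-row CPT is generated from one strength parameter per parent.

```python
# Noisy-OR sketch: n per-cause strengths (plus an optional leak) generate
# the whole CPT, instead of 2^n free parameters. Values are illustrative.
from itertools import product

def noisy_or(p, leak=0.0):
    """Return the CPT P(E=1 | c1..cn) for noisy-OR with cause strengths p."""
    cpt = {}
    for states in product([0, 1], repeat=len(p)):
        fail = 1 - leak
        for active, pi in zip(states, p):
            if active:
                fail *= (1 - pi)      # each active cause independently fails
        cpt[states] = 1 - fail
    return cpt

cpt = noisy_or([0.9, 0.5])
# e.g. cpt[(1, 1)] is 1 - (1 - 0.9) * (1 - 0.5) = 0.95
```

This compactness is exactly the interaction-limiting assumption the abstract describes: the model forbids any synergy between causes beyond independent failure.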
Using Bayesian network causal models, we provide a simple general account of probabilistic causal interaction. We also detail problems in the leading account of Ellery Eells, and in any others that require valence reversals, contextual unanimity, or average effects.