The mesostriatal dopamine system is prominently implicated in model-free reinforcement learning, with fMRI BOLD signals in the ventral striatum notably covarying with model-free prediction errors. However, latent learning and devaluation studies show that behavior also bears hallmarks of model-based planning, and the interaction between model-based and model-free values, prediction errors, and preferences is underexplored. We designed a multistep decision task in which model-based and model-free influences on human choice behavior could be distinguished. By showing that choices reflected both influences, we could then test the purity of the ventral striatal BOLD signal as a model-free report. Contrary to expectations, the signal reflected both model-free and model-based predictions, in proportions matching those that best explained choice behavior. These results challenge the notion of a separate model-free learner and suggest a more integrated computational architecture for high-level human decision-making.
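The hybrid picture the abstract describes, in which choices (and, it turns out, striatal signals) reflect a weighted mixture of model-free and model-based value estimates, can be sketched as a toy learner for a two-step task. This is an illustrative sketch only: the class name, parameter values, and the fixed transition probability are assumptions for the example, not details taken from the study.

```python
class HybridAgent:
    """Toy hybrid model-free / model-based learner for a two-step task.

    Two first-stage actions each lead, with probability p_common, to a
    'common' second-stage state and otherwise to the other state.
    """

    def __init__(self, alpha=0.1, w=0.5):
        self.alpha = alpha          # learning rate
        self.w = w                  # model-based weight (0 = pure model-free)
        self.q_mf = [0.0, 0.0]      # model-free values of first-stage actions
        self.q_stage2 = [0.0, 0.0]  # learned values of the two second-stage states
        self.p_common = 0.7         # assumed-known transition probability

    def q_mb(self, action):
        """Model-based value: expectation over the known transition model."""
        common, rare = action, 1 - action
        return (self.p_common * self.q_stage2[common]
                + (1 - self.p_common) * self.q_stage2[rare])

    def q_net(self, action):
        """Weighted mixture of model-based and model-free values."""
        return self.w * self.q_mb(action) + (1 - self.w) * self.q_mf[action]

    def update(self, action, state2, reward):
        """Model-free temporal-difference updates at both stages."""
        delta2 = reward - self.q_stage2[state2]             # second-stage RPE
        self.q_stage2[state2] += self.alpha * delta2
        delta1 = self.q_stage2[state2] - self.q_mf[action]  # first-stage RPE
        self.q_mf[action] += self.alpha * delta1
```

A model-free-only agent corresponds to `w = 0`; fitting `w` to observed choices, as hybrid analyses do, asks how much each system contributes. Note how the model-based value of the unchosen action rises via the rare transition even without direct model-free credit, which is the signature the mixture exploits.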
Given its non-invasive nature, there is increasing interest in the use of transcutaneous vagus nerve stimulation (tVNS) across basic, translational, and clinical research. Currently, tVNS can be achieved by stimulating either the auricular branch or the cervical bundle of the vagus nerve, referred to as transcutaneous auricular VNS (taVNS) and transcutaneous cervical VNS (tcVNS), respectively. To advance the field in a systematic manner, studies using these technologies need to report sufficient methodological detail to enable comparison and replication of results across studies and to enhance study participant safety. We systematically reviewed the existing tVNS literature to evaluate current reporting practices. Based on this review, and on consensus among the participating authors, we propose a set of minimal reporting items to guide future tVNS studies. The suggested items address specific technical aspects of the device and stimulation parameters. We also offer general recommendations covering inclusion and exclusion criteria for participants, outcome parameters, and the detailed reporting of side effects. Furthermore, we review strategies used to identify the optimal stimulation parameters for a given research setting and summarize ongoing developments in animal research with potential implications for the application of tVNS in humans. Finally, we discuss the potential of tVNS in future research as well as the associated challenges across several disciplines in research and clinical practice.
Do philosophic views affect job performance? The authors found that belief in free will predicted better career attitudes and actual job performance. The effect of free will beliefs on job performance indicators was over and above that of well-established predictors such as conscientiousness, locus of control, and Protestant work ethic. In Study 1, stronger belief in free will corresponded to more positive attitudes about expected career success. In Study 2, job performance was evaluated objectively and independently by a supervisor. Results indicated that employees who espoused free will beliefs were given better work performance evaluations than those who disbelieved in free will, presumably because belief in free will facilitates exerting control over one's actions.
This paper addresses and proposes to resolve a longstanding problem in the philosophy of physics: whether and in what sense Albert Einstein's Chasing the Light thought experiment was significant in the development of the theory of special relativity. Although Einstein granted this thought experiment pride of place in his 1949 Autobiographical Notes, philosophers and physicists continue to debate what, if anything, the experiment establishes. I claim that we ought to think of Chasing the Light as Einstein's first attempt to problematize the very idea of the electromagnetic ether frame, and that it thereby contributed to his eventual adoption of one of special relativity's two foundational axioms: the "light postulate". This interpretation requires the assumption that Einstein had presupposed special relativity's other axiom, the "principle of relativity", when initially considering Chasing the Light. This argument is novel insofar as it provides evidence that such a presupposition by Einstein is both conceptually and historically plausible. Moreover, this paper directly challenges John D. Norton's compelling claim that Chasing the Light is best understood as a refutation of emission theories of light propagation; while both interpretations of the experiment are conceptually coherent, I argue that the interpretation found in this paper is supported more straightforwardly by historical evidence.
Humean promotionalists about reasons think that whether there is a reason for an agent to ϕ depends on whether her ϕ-ing promotes the satisfaction of at least one of her desires. Several authors have recently defended probabilistic accounts of promotion, according to which an agent's ϕ-ing promotes the satisfaction of one of her desires just in case her ϕ-ing makes the satisfaction of that desire more probable relative to some baseline. In this paper I do three things. First, I formalize an argument, due to Jeff Behrends and Joshua DiPaolo, to the effect that Mark Schroeder's and Stephen Finlay's probabilistic accounts of promotion cannot be correct. Next, I extend this argument to a recent alternative offered by D. Justin Coates and show how Coates' attempt to avoid the argument by introducing a distinction between 'intrinsic' and 'extrinsic' probability doesn't help. Finally, I suggest an alternative way of understanding promotion in terms of increase in degree of fit between the causal upshot of an action and the content of a desire. I show how this view, disjunctively paired with probabilism about promotion, solves the problems with previous accounts.
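The probabilistic accounts the abstract discusses share a common schematic form, which might be rendered roughly as follows (a sketch; the choice of baseline $b$ is precisely where Schroeder's and Finlay's versions differ):

\[
\phi\text{-ing promotes } D \quad \Longleftrightarrow \quad \Pr(D \mid \phi) > b,
\]

where $\Pr(D \mid \phi)$ is the probability that desire $D$ is satisfied given that the agent ϕs, and $b$ is a baseline probability, such as the probability of $D$'s satisfaction conditional on the agent's doing nothing or on her not ϕ-ing. The argument the paper formalizes targets the adequacy of any such choice of $b$.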
Abstract: The recent publication of Donald Davidson's third collection of articles, which is expected to be followed by two more, invites an examination of the themes that run through all his work. Among these themes is the principle of charity (PC). Given how much use Davidson has made of the PC, I propose to examine it closely. In the first part, I examine Davidson's various formulations of the PC. In the second part, I show that the formulation his epistemological work requires is untenable given what he says about the PC in his work on semantics. From this I conclude that Davidson can use the PC only in his work on semantics, or not at all.
Summary: This paper investigates the fate of Thomas Harriot's algebra after his death in 1621 and, in particular, the largely unsuccessful efforts of seventeenth-century mathematicians to promote it. The little-known surviving manuscripts of Nathaniel Torporley have been used to elucidate the roles of Torporley and Walter Warner in the preparation of the Praxis, and a partial translation of Torporley's important critique of the Praxis is offered here for the first time. The known whereabouts of Harriot's mathematical papers, both originals and copies, during the seventeenth century and later are summarised. John Wallis's controversial 1685 account of Harriot's algebra is examined in detail, and it is argued that John Pell's influence on Wallis was far more significant than has previously been realised. The paper ends with a reassessment of Harriot's underrated and important contribution to the development of modern algebra.
Prologue -- Some characteristics of the Cambridge Platonists -- Benjamin Whichcote (1609-1683) -- John Smith (1616-1652) -- Ralph Cudworth (1617-1685) -- Nathaniel Culverwel (1618?-1651) -- Henry More (1614-1687) -- Peter Sterry (d. 1672) -- Epilogue.
Between 1829 and 1839, the American mathematician Nathaniel Bowditch published the four volumes of the Mécanique Céleste by the Marquis de Laplace, translated with a commentary, a translation of Pierre-Simon de Laplace's mathematical astronomy, published in France between 1799 and 1825, accompanied by an explanatory commentary. The American work was then distributed in France as no other American scholarly production was during the nineteenth century. This article seeks to restore this text to its proper place within the Franco-American mathematical exchanges of the nineteenth century oriented from the United States toward France, a direction of knowledge transfer neglected or downplayed by the historiography. On the intellectual level, it studies the reception of the work among French scholars and emphasizes how the American author addressed the gaps and difficulties of the original text. On the material level, it shows how the "passeurs de sciences", non-scientific intermediaries essential to the transmission of the text between the scholarly worlds of Boston and Paris, were gradually replaced by publishing professionals.
In this article, we develop an approach for the moral assessment of research and development networks on the basis of the reflective equilibrium approach proposed by Rawls and Daniels. The reflective equilibrium approach aims at coherence between moral judgments, principles, and background theories. We use this approach because it takes seriously the moral judgments of the actors involved in R&D, while it also leaves room for critical reflection about these judgments. It is shown that two norms, namely reflective learning and openness and inclusiveness, which are used in the literature on policy and technological networks, contribute to achieving a justified overlapping consensus. We apply the approach to a case study about the development of an innovative sewage treatment technology and show how in this case the two norms are, or could be, instrumental in achieving a justified overlapping consensus on relevant moral issues.
Neuroeconomics illustrates our deepening descent into the details of individual cognition. This descent is guided by the implicit assumption that the "individual human" is the important "agent" of neoclassical economics. I argue here that this assumption is neither obviously correct, nor of primary importance to human economies. In particular, I suggest that the main genius of the human species lies in its ability to distribute cognition across individuals, and to incrementally accumulate physical and social cognitive artifacts that largely obviate the innate biological limitations of individuals. If this is largely why our economies grow, then we should be much more interested in distributed cognition in human groups, and correspondingly less interested in individual cognition. We should also be much more interested in the cultural accumulation of cognitive artifacts: computational devices and media, social structures, and economic institutions.
Epistemic feelings like tip-of-the-tongue experiences, feelings of knowing, and feelings of confidence tell us when a memory can be recalled and when a judgment was correct. Thus, they appear to be a form of metacognition, but a curious one: they tell us about content we cannot access, and the information is supplied by a feeling. Evaluativism is the claim that epistemic feelings are components of a distinct, primitive metacognitive mechanism that operates on its own set of inputs. These inputs are heuristics that correlate with the presence of mental content that can't be accessed directly. I will argue that evaluativism is unmotivated, unsupported, and ill-conceived. I will critique the philosophical and empirical arguments for evaluativism and conclude that there is no reason to posit a distinct mechanism to explain epistemic feelings. I will conclude, however, that epistemic feelings may constitute a nonconceptual form of metacognition, which if true is a significant claim.
The author details his relationship with Ayn Rand, illuminating the tremendous influence of Objectivism on his life and work and the twenty-five-year intimate relationship they shared.
An appropriate kind of curved Hilbert space is developed in such a manner that it admits operators of $\mathcal{C}$- and $\mathfrak{D}$-differentiation, which are the analogues of the familiar covariant and D-differentiation available in a manifold. These tools are then employed to shed light on the space-time structure of quantum mechanics, from the points of view of the Feynman 'path integral' and of canonical quantisation. (The latter contains, as a special case, quantisation in arbitrary curvilinear coordinates when space is flat.) The influence of curvature is emphasised throughout, with an illustration provided by the Aharonov-Bohm effect.
Twentieth-century developments in logic and mathematics have led many people to view Euclid's proofs as inherently informal, especially due to the use of diagrams in proofs. In _Euclid and His Twentieth-Century Rivals_, Nathaniel Miller discusses the history of diagrams in Euclidean geometry, develops a formal system for working with them, and concludes that they can indeed be used rigorously. Miller also introduces a diagrammatic computer proof system, based on this formal system. This volume will be of interest to mathematicians, computer scientists, and anyone interested in the use of diagrams in geometry.
The Morality of Spin explores the ethics of political rhetoric crafted to persuade and possibly manipulate potential voters. Based on extensive insider interviews with leaders of Focus on the Family, one of the most powerful Christian right organizations in America, Nathaniel Klemp asks whether the tactic of tailoring a message to a particular audience is politically legitimate or amounts to democratic malpractice. Klemp's nuanced assessment, highlighting both the democratic vices and virtues of political rhetoric, provides a welcome contribution to recent scholarship on deliberative democracy, rhetoric, and the growing empirical literature on the American Christian right.
The fact that someone is generous is a reason to admire them. The fact that someone will pay you to admire them is also a reason to admire them. But there is a difference in kind between these two reasons: the former seems to be the 'right' kind of reason to admire, whereas the latter seems to be the 'wrong' kind of reason to admire. The Wrong Kind of Reasons Problem is the problem of explaining the difference between the 'right' and the 'wrong' kind of reasons wherever it appears. In this article I argue that two recent proposals for solving the Wrong Kind of Reasons Problem do not work. I then offer an alternative solution that provides a unified, systematic explanation of the difference between the two kinds of reasons.
Recent defenders of metaethical constructivism (such as Christine Korsgaard, Sharon Street, Aaron James, and Carla Bagnoli) argue that this view represents a new, free-standing alternative to familiar approaches in metaethics. If they are correct, traditional discussions in metaethics have overlooked an important position, one that is supposed to adequately explain the nature of our ethical thinking and practice while avoiding the kinds of objections that traditional views struggle with. However, what form constructivism should take, and whether constructivists can make good on this ambitious claim, remains extremely controversial. This article starts out with a brief account of the origins of contemporary discussions of constructivism. It then moves on to canvass the main motivations and arguments for constructivism, along with the various ways in which the view has been interpreted. In the second half, the article introduces a serious challenge to the ambitious claim that constructivism represents a new, free-standing approach in metaethics and concludes by entertaining a very recent proposal that this challenge might overlook.
A revised version of Nathaniel Chipman's Sketches of the Principles of Government (1793), this early treatise on the underlying principles of American government addresses civil laws and obligations, the social state, rights of property, sovereignty, and political power. An important early contribution to American constitutional law, it is also interesting for its Federalist perspective on the evolution of political institutions from Washington to Jackson. Nathaniel Chipman [1752-1843] was a leading Vermont Federalist who was instrumental in that state's admission to the Union. He became Vermont's Chief Justice and went on to represent Vermont in the U.S. Senate. He was also one of America's first significant legal writers. One of his books, the Reports and Dissertations (1793), is included in Warren's list of "the four general works on the Common Law... [of] permanent value in American Legal Literature" (Warren, A History of the American Bar 335-336). See Cohen, Bibliography of Early American Law 5752; Sabin, A Dictionary of Books Relating to America 12824; Dictionary of American Biography II:73-74.
Mark Schroeder has recently offered a solution to the problem of distinguishing between the so-called 'right' and 'wrong' kinds of reasons for attitudes like belief and admiration. Schroeder tries out two different strategies for making his solution work: the alethic strategy and the background-facts strategy. In this paper I argue that neither of Schroeder's two strategies will do the trick. We are still left with the problem of distinguishing the right from the wrong kinds of reasons.