Citations of:
Vision as Bayesian inference: analysis by synthesis?
Trends in Cognitive Sciences 10 (7):301-308 (2006)
A theoretical pillar of vision science in the information-processing tradition is that perception involves unconscious inference. The classic support for this claim is that, since retinal inputs underdetermine their distal causes, visual perception must be the conclusion of a process that starts with premises representing both the sensory input and previous knowledge about the visible world. Focus on this “argument from underdetermination” gives the impression that, if it fails, there is little reason to think that visual processing involves unconscious inference. (...)
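For orientation, the inferential picture at issue is standardly written as Bayes' rule applied to perception, with a posterior over scene hypotheses combining a likelihood and a prior; the symbols S (scene) and I (image) below are generic notation, not taken from the abstract above.

```latex
% Bayes' rule for perception: the posterior over scene hypotheses S
% given image data I weighs how well S predicts I (likelihood)
% against how plausible S is a priori (prior over scenes).
P(S \mid I) = \frac{P(I \mid S)\,P(S)}{P(I)} \propto P(I \mid S)\,P(S)
```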
Philosophers interested in the theoretical consequences of predictive processing often assume that predictive processing is an inferentialist and representationalist theory of cognition. More specifically, they assume that predictive processing revolves around approximated Bayesian inferences drawn by inverting a generative model. Generative models, in turn, are said to be structural representations: representational vehicles that represent their targets by being structurally similar to them. Here, I challenge this assumption, claiming that, at present, it lacks an adequate justification. I examine the only argument (...)
In this paper we discuss the new Tweety puzzle. The original Tweety puzzle was addressed by approaches in non-monotonic logic, which aim to adequately represent the Tweety case, namely that Tweety is a penguin and, thus, an exceptional bird, which cannot fly, although in general birds can fly. The new Tweety puzzle is intended as a challenge for probabilistic theories of epistemic states. In the first part of the paper we argue against monistic Bayesians, who assume that epistemic states can (...) |
The Ouroboros Model features a biologically inspired cognitive architecture. At its core lies a self-referential recursive process with alternating phases of data acquisition and evaluation. Memory entries are organized in schemata. Activating part of a schema at a given time biases the whole structure and, in particular, its missing features, thus triggering expectations. An iterative recursive monitor process termed "consumption analysis" then checks how well such expectations fit with successive activations. Mismatches between anticipations based on previous experience and actual (...)
This commentary gives a personal perspective on modeling and modeling developments in cognitive science, starting in the 1950s, but focusing on the author’s personal views of modeling since training in the late 1960s, and particularly focusing on advances since the official founding of the Cognitive Science Society. The range and variety of modeling approaches in use today are remarkable, and for many, bewildering. Yet to come to anything approaching adequate insights into the infinitely complex fields of mind, brain, and intelligent (...) |
This paper presents a version of neurophenomenology based on generative modelling techniques developed in computational neuroscience and biology. Our approach can be described as _computational phenomenology_ because it applies methods originally developed in computational modelling to provide a formal model of the descriptions of lived experience in the phenomenological tradition of philosophy (e.g., the work of Edmund Husserl, Maurice Merleau-Ponty, etc.). The first section presents a brief review of the overall project to naturalize phenomenology. The second section presents and evaluates (...) |
There are issues in Reid scholarship as well as the primary texts that seem to suggest that Reid is not a direct realist about visual perception. In this paper, I examine two key issues, colour perception and visible figure, and attempt to defend the direct realism of Reid's theory through an interpretation of 'directness' as well as what Reid calls 'acquired perception', which is 'mediate' in that it requires prior perception of signs, but nonetheless constitutes direct perception.
We propose a Bayesian framework for the attribution of knowledge, and apply this framework to generate novel predictions about knowledge attribution for different types of “Gettier cases”, in which an agent is led to a justified true belief yet has made erroneous assumptions. We tested these predictions using a paradigm based on semantic integration. We coded the frequencies with which participants falsely recalled the word “thought” as “knew” (or a near synonym), yielding an implicit measure of conceptual activation. Our experiments (...) |
Severity of Test (SoT) is an alternative to Popper's logical falsification that solves a number of problems of the logical view. It was presented by Popper himself in 1963. SoT is a less sophisticated probabilistic model of hypothesis testing than Oaksford & Chater's (O&C's) information gain model, but it has a number of striking similarities. Moreover, it captures the intuition of everyday hypothesis testing. |
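For orientation, the measure Popper proposed in 1963 is usually rendered along the following lines (formulations and normalizations vary across presentations, so take this as a hedged reconstruction rather than the abstract's own notation): evidence e tests hypothesis h severely, relative to background knowledge b, to the extent that e is likely if h holds but unlikely on the background knowledge alone.

```latex
% Severity of test: e tests h severely when p(e|hb) is high while
% p(e|b) is low. A normalized variant divides this difference by
% p(e|hb) + p(e|b); exact formulations vary across presentations.
S(e, h, b) = p(e \mid hb) - p(e \mid b)
```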
Human cognition requires coping with a complex and uncertain world. This suggests that dealing with uncertainty may be the central challenge for human reasoning. In Bayesian Rationality we argue that probability theory, the calculus of uncertainty, is the right framework in which to understand everyday reasoning. We also argue that probability theory explains behavior, even on experimental tasks that have been designed to probe people's logical reasoning abilities. Most commentators agree on the centrality of uncertainty; some suggest that there is (...) |
According to Aristotle, humans are the rational animal. The borderline between rationality and irrationality is fundamental to many aspects of human life including the law, mental health, and language interpretation. But what is it to be rational? One answer, deeply embedded in the Western intellectual tradition since ancient Greece, is that rationality concerns reasoning according to the rules of logic – the formal theory that specifies the inferential connections that hold with certainty between propositions. Piaget viewed logical reasoning as defining (...) |
Judging similarities among objects, events, and experiences is one of the most basic cognitive abilities, allowing us to make predictions and generalizations. The main assumption in similarity judgment is that people selectively attend to salient features of stimuli and judge their similarities on the basis of the common and distinct features of the stimuli. However, it is unclear how people select features from stimuli and how they weigh features. Here, we present a computational method that helps address these questions. Our (...) |
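The abstract does not spell out the method itself, but the common/distinct-feature idea it builds on is classically captured by Tversky's contrast model; here is a minimal sketch of that model, with feature sets and weights invented purely for illustration.

```python
# Tversky-style contrast model of similarity:
#   sim(a, b) = theta * |common| - alpha * |a only| - beta * |b only|
# This illustrates the common/distinct-feature idea the abstract
# starts from; it is not the paper's own method, and the stimuli
# and weights below are invented for illustration.

def tversky_similarity(a, b, theta=1.0, alpha=0.5, beta=0.5):
    a, b = set(a), set(b)
    common = len(a & b)   # features shared by both stimuli
    a_only = len(a - b)   # distinctive features of the first stimulus
    b_only = len(b - a)   # distinctive features of the second stimulus
    return theta * common - alpha * a_only - beta * b_only

robin = {"feathers", "flies", "small", "sings"}
penguin = {"feathers", "swims", "black-and-white"}
print(tversky_similarity(robin, penguin))  # 1*1 - 0.5*3 - 0.5*2 = -1.5
```

The weights theta, alpha, and beta are exactly where the abstract's open question lives: how people select and weigh features is what the paper's computational method is meant to address.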
The main thesis of this paper is that two prevailing theories about cognitive penetration are too extreme, namely, the view that cognitive penetration is pervasive and the view that there is a sharp and fundamental distinction between cognition and perception, which precludes any type of cognitive penetration. These opposite views have clear merits and empirical support. To eliminate this puzzling situation, we present an alternative theoretical approach that incorporates the merits of these views into a broader and more nuanced explanatory (...) |
Perception purports to help you gain knowledge of the world even if the world is not the way you expected it to be. Perception also purports to be an independent tribunal against which you can test your beliefs. It is natural to think that in order to serve these and other central functions, perceptual representations must not causally depend on your prior beliefs and expectations. In this paper, I clarify and then argue against the natural thought above. All perceptual systems (...) |
The goal of perceptual systems is to allow organisms to adaptively respond to ecologically relevant stimuli. Because all perceptual inputs are ambiguous, perception needs to rely on prior knowledge accumulated over evolutionary and developmental time to turn sensory energy into information useful for guiding behavior. It remains controversial whether the guidance of perception extends to cognitive states or is locked up in a “cognitively impenetrable” part of perception. I argue that expectations, knowledge, and task demands can shape perception at multiple (...) |
Generalized anxiety disorder is among the world’s most prevalent psychiatric disorders and often manifests as persistent and difficult to control apprehension. Despite its prevalence, there is no integrative, formal model of how anxiety and anxiety disorders arise. Here, we offer a perspective derived from the free energy principle; one that shares similarities with established constructs such as learned helplessness. Our account is simple: anxiety can be formalized as learned uncertainty. A biological system, having had persistent uncertainty in its past, will (...) |
In a series of three behavioral experiments, we found a systematic distortion of probability judgments concerning elementary visual stimuli. Participants were briefly shown a set of figures that had two features (e.g., a geometric shape and a color) with two possible values each (e.g., triangle or circle and black or white). A figure was then drawn, and participants were informed about the value of one of its features (e.g., that the figure was a “circle”) and had to predict the value (...) |
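As a point of comparison for the reported distortion, the normatively correct judgment in a task like this is just a conditional relative frequency; a minimal sketch follows, with a stimulus set invented for illustration (the studies' actual sets and frequencies are not given in the abstract).

```python
from collections import Counter

# Hypothetical stimulus set: each figure has a shape and a color.
# Told that the drawn figure is a "circle", the normative prediction
# of its color is the conditional relative frequency P(color | circle).
figures = [
    ("circle", "black"), ("circle", "black"), ("circle", "white"),
    ("triangle", "black"), ("triangle", "white"), ("triangle", "white"),
]

circle_colors = [color for shape, color in figures if shape == "circle"]
counts = Counter(circle_colors)
for color, n in counts.items():
    print(f"P({color} | circle) = {n}/{len(circle_colors)} = {n / len(circle_colors):.2f}")
```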
How humans efficiently operate in a world with massive amounts of data that need to be processed, stored, and recalled has long been an unsettled question. Our physical and social environment needs to be represented in a structured way, which could be achieved by reducing input to latent variables in the form of probability distributions, as proposed by influential, probabilistic accounts of cognition and perception. However, few studies have investigated the neural processes underlying the brain’s potential ability to represent a (...) |
Bayesian models are often criticized for postulating computations that are computationally intractable (e.g., NP-hard) and therefore implausibly performed by our resource-bounded minds/brains. Our letter is motivated by the observation that Bayesian modelers have been claiming that they can counter this charge of “intractability” by proposing that Bayesian computations can be tractably approximated. We would like to make the cognitive science community aware of the problematic nature of such claims. We cite mathematical proofs from the computer science literature that show intractable (...) |
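To make the flavor of the worry concrete: even the naive exact computation sums over a hypothesis space that grows exponentially with problem size, and the point of the cited proofs is stronger still, namely that the approximations modelers appeal to can be intractable too. A toy sketch of the exponential blow-up, with an invented model:

```python
from itertools import product

# Brute-force computation of a normalizing constant sums over all
# 2**n joint assignments of n binary variables. This shows only the
# naive exponential cost; the letter's point is stronger: the cited
# proofs show that even approximating such posteriors is NP-hard in
# the worst case, so appeals to "tractable approximation" need care.

def unnormalized_score(assignment):
    # Invented joint: favors agreement between neighboring variables.
    return 2.0 ** sum(a == b for a, b in zip(assignment, assignment[1:]))

n = 20
z = sum(unnormalized_score(x) for x in product((0, 1), repeat=n))
print(f"summed {2**n:,} assignments; normalizing constant = {z:.3e}")
```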
The free energy principle says that any self-organising system that is at nonequilibrium steady-state with its environment must minimize its free energy. It is proposed as a grand unifying principle for cognitive science and biology. The principle can appear cryptic, esoteric, too ambitious, and unfalsifiable—suggesting it would be best to suspend any belief in the principle, and instead focus on individual, more concrete and falsifiable ‘process theories’ for particular biological processes and phenomena like perception, decision and action. Here, I explain (...)
There is surprising evidence that introspection of our phenomenal states varies greatly between individuals and within the same individual over time. This puts pressure on the notion that introspection gives reliable access to our own phenomenology: introspective unreliability would explain the variability, while assuming that the underlying phenomenology is stable. I appeal to a body of neurocomputational, Bayesian theory and neuroimaging findings to provide an alternative explanation of the evidence: though some limited testing conditions can cause introspection to be unreliable, (...) |
Does perceptual consciousness require cognitive access? Ned Block argues that it does not. Central to his case are visual memory experiments that employ post-stimulus cueing—in particular, Sperling's classic partial report studies, change-detection work by Lamme and colleagues, and a recent paper by Bronfman and colleagues that exploits our perception of ‘gist’ properties. We argue contra Block that these experiments do not support his claim. Our reinterpretations differ from previous critics' in also challenging a longstanding and common view of visual (...)
People are adept at inferring novel causal relations, even from only a few observations. Prior knowledge about the probability of encountering causal relations of various types and the nature of the mechanisms relating causes and effects plays a crucial role in these inferences. We test a formal account of how this knowledge can be used and acquired, based on analyzing causal induction as Bayesian inference. Five studies explored the predictions of this account with adults and 4-year-olds, using tasks in which (...) |
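A minimal sketch of the kind of computation this account describes: two causal hypotheses compared by Bayes' rule as observations accumulate. The hypotheses, priors, likelihoods, and data below are invented for illustration; they are not the parameters of the five studies.

```python
# Causal induction as Bayesian inference, in miniature.
# h1 = "the block causes the machine to activate"; h0 = "no causal
# link; the machine activates at a low background rate". All numbers
# are invented for illustration.

prior = {"h1": 0.3, "h0": 0.7}      # prior knowledge about causal relations
p_on = {"h1": 0.9, "h0": 0.1}       # P(machine on | block placed, hypothesis)

observations = ["on", "on", "on"]   # three trials with the block placed

posterior = dict(prior)
for outcome in observations:
    for h in posterior:
        posterior[h] *= p_on[h] if outcome == "on" else 1 - p_on[h]
    total = sum(posterior.values())
    posterior = {h: v / total for h, v in posterior.items()}

print(posterior)  # a few observations suffice: P(h1 | data) ≈ 0.997
```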
Recent work in rational probabilistic modeling suggests that a kind of propositional reasoning is ubiquitous in cognition and especially in cognitive development. However, there is no reason to believe that this type of computation is necessarily conscious or resource-intensive. |
We distinguish between three philosophical views on the neuroscience of predictive models: predictive coding, predictive processing and predictive engagement. We examine the concept of active inference under each model and then ask how this concept informs discussions of social cognition. In this context we consider Frith and Friston’s proposal for a neural hermeneutics, and we explore the alternative model of enactivist hermeneutics. |
Why do brains have so many connections? The principles exposed by Andy Clark provide answers to questions like this by appealing to the notion that brains distil causal regularities in the sensorium and embody them in models of their world. For example, connections embody the fact that causes have particular consequences. This commentary considers the imperatives for this form of embodiment. |
In this paper we argue that awareness comes in degrees, and we propose a novel multi-factor account that spans both subjective experiences and perceptual representations. At the subjective level, we argue that conscious experiences can be degraded by being fragmented, less salient, too generic, or flash-like. At the representational level, we identify corresponding features of perceptual representations—their availability for working memory, intensity, precision, and stability—and argue that the mechanisms that affect these features are what ultimately modulate the degree of awareness. (...)
Do participants bring their own priors to an experiment? If so, do they share the same priors as the researchers who design the experiment? In this article, we examine the extent to which self-generated priors conform to experimenters’ expectations by explicitly asking participants to indicate their own priors in estimating the probability of a variety of events. We find in Study 1 that despite being instructed to follow a uniform distribution, participants appear to have used their own priors, which deviated (...) |