Recognising and representing one’s self as distinct from others is a fundamental component of self-awareness. However, current theories of self-recognition are not embedded within global theories of cortical function and therefore fail to provide a compelling explanation of how the self is processed. We present a theoretical account of the neural and computational basis of self-recognition that is embedded within the free-energy account of cortical function. In this account, one’s body is represented in a Bayesian manner as that most likely to be “me”. Such a probabilistic representation arises through the integration of information from hierarchically organised unimodal systems in higher-level multimodal areas. This information takes the form of bottom-up “surprise” signals from unimodal sensory systems that are explained away by top-down processes that minimise the level of surprise across the brain. We present evidence that this theoretical perspective may account for the findings of psychological and neuroimaging investigations into self-recognition and, in particular, evidence that representations of the self are malleable, rather than fixed as previous accounts of self-recognition might suggest.
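To make the Bayesian self-attribution claim concrete, here is a minimal sketch of our own construction (not the authors' model): the posterior probability that a seen hand is "me" is computed from one piece of evidence, the lag between felt and seen touch, with the function names (`gauss`, `p_me`) and the variance values purely illustrative.

```python
# Hypothetical sketch of Bayesian self-attribution of a seen hand.
# Evidence: lag (in seconds) between felt and seen touch. Under "me",
# touch should be near-synchronous; under "not-me", the lag is unconstrained.
import math

def gauss(x, mu, sigma):
    """Gaussian probability density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def p_me(lag, prior_me=0.5):
    like_me = gauss(lag, 0.0, 0.05)      # narrow: synchrony expected if "me"
    like_other = gauss(lag, 0.0, 0.50)   # broad: weak constraint if "not-me"
    num = like_me * prior_me
    return num / (num + like_other * (1 - prior_me))

print(p_me(0.01))  # near-synchronous stroking -> high probability of "me"
print(p_me(0.40))  # large lag -> body part rejected as "not-me"
```

On this toy account, malleability of the self-representation falls out naturally: sufficiently synchronous evidence can pull an external object's posterior above that of one's own occluded hand.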
Constructive theories of brain function such as predictive coding posit that prior knowledge affects our experience of the world quickly and directly. However, it is as yet unknown how swiftly prior knowledge impacts the neural processes giving rise to conscious experience. Here we used an experimental paradigm where prior knowledge augmented perception and measured the timing of this effect with magnetoencephalography (MEG). By correlating the perceptual benefits of prior knowledge with the MEG activity, we found that prior knowledge took effect in the time-window 80–95 ms after stimulus onset, thus reflecting an early influence on conscious perception. The sources of this effect were localized to occipital and posterior parietal regions. These results are in line with the predictive coding framework.
Bruineberg J., Kiverstein J. & Rietveld E. (2018) The anticipating brain is not a scientist: The free-energy principle from an ecological-enactive perspective. Synthese 195(6): 2417–2444. https://cepa.info/4497
In this paper, we argue for a theoretical separation of the free-energy principle from Helmholtzian accounts of the predictive brain. The free-energy principle is a theoretical framework capturing the imperative for biological self-organization in information-theoretic terms. The free-energy principle has typically been connected with a Bayesian theory of predictive coding, and the latter is often taken to support a Helmholtzian theory of perception as unconscious inference. If our interpretation is right, however, a Helmholtzian view of perception is incompatible with Bayesian predictive coding under the free-energy principle. We argue that the free-energy principle and the ecological and enactive approach to mind and life make for a much happier marriage of ideas. We make our argument based on three points. First, we argue that the free-energy principle applies to the whole animal–environment system, and not only to the brain. Second, we show that active inference, as understood by the free-energy principle, is incompatible with unconscious inference understood as analogous to scientific hypothesis-testing, the main tenet of a Helmholtzian view of perception. Third, we argue that the notion of inference at work in Bayesian predictive coding under the free-energy principle is too weak to support a Helmholtzian theory of perception. Taken together these points imply that the free-energy principle is best understood in the ecological and enactive terms set out in this paper.
Brains, it has recently been argued, are essentially prediction machines. They are bundles of cells that support perception and action by constantly attempting to match incoming sensory inputs with top-down expectations or predictions. This is achieved using a hierarchical generative model that aims to minimize prediction error within a bidirectional cascade of cortical processing. Such accounts offer a unifying model of perception and action, illuminate the functional role of attention, and may neatly capture the special contribution of cortical processing to adaptive success. This target article critically examines this “hierarchical prediction machine” approach, concluding that it offers the best clue yet to the shape of a unified science of mind and action. Sections 1 and 2 lay out the key elements and implications of the approach. Section 3 explores a variety of pitfalls and challenges, spanning the evidential, the methodological, and the more properly conceptual. The paper ends (sections 4 and 5) by asking how such approaches might impact our more general vision of mind, experience, and agency.
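The core mechanism of the "hierarchical prediction machine" can be illustrated with a deliberately minimal sketch (our illustration, not Clark's): a higher level holds a belief `mu` about a latent cause, a generative mapping `g` predicts the input top-down, and the bottom-up residual ("prediction error") revises the belief until the input is explained away. The linear `g` and the learning rate are arbitrary choices for the example.

```python
# One-level caricature of prediction-error minimization.
def infer(x, g=lambda mu: 2.0 * mu, dg=2.0, lr=0.05, steps=200):
    mu = 0.0                  # prior belief about the hidden cause
    for _ in range(steps):
        err = x - g(mu)       # bottom-up prediction error
        mu += lr * dg * err   # belief revised to explain the error away
    return mu

print(infer(4.0))  # belief converges near 2.0, since g(mu) = 2*mu then predicts 4.0
```

A full hierarchical model stacks such units, with each level's belief serving as the input to be predicted by the level above; action enters by changing the input `x` itself rather than the belief.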
Perception and perceptual decision-making are strongly facilitated by prior knowledge about the probabilistic structure of the world. While the computational benefits of using prior expectation in perception are clear, there are myriad ways in which this computation can be realized. We review here recent advances in our understanding of the neural sources and targets of expectations in perception. Furthermore, we discuss Bayesian theories of perception that prescribe how an agent should integrate prior knowledge and sensory information, and investigate how current and future empirical data can inform and constrain computational frameworks that implement such probabilistic integration in perception.
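One standard way such probabilistic integration can be realized (a textbook Gaussian scheme, not a result specific to the studies reviewed here) is precision-weighted fusion of a prior with a sensory likelihood; the function name `integrate` and the example values are ours.

```python
# Optimal fusion of a Gaussian prior with a Gaussian likelihood:
# each source is weighted by its precision (inverse variance).
def integrate(prior_mu, prior_var, sens_mu, sens_var):
    w = (1 / prior_var) / (1 / prior_var + 1 / sens_var)  # weight on the prior
    post_mu = w * prior_mu + (1 - w) * sens_mu
    post_var = 1 / (1 / prior_var + 1 / sens_var)         # precisions add
    return post_mu, post_var

mu, var = integrate(prior_mu=0.0, prior_var=1.0, sens_mu=2.0, sens_var=1.0)
print(mu, var)  # posterior mean lies between prior and evidence; variance shrinks
</```python>```

With equal variances the posterior mean falls midway between prior and sensory estimates; as sensory noise grows, the percept is drawn increasingly toward the prior, which is the behavioural signature such empirical work looks for.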
This paper considers communication in terms of inference about the behaviour of others (and our own behaviour). It is based on the premise that our sensations are largely generated by other agents like ourselves. This means we are trying to infer how our sensations are caused by others, while they are trying to infer our behaviour: for example, in the dialogue between two speakers. We suggest that the infinite regress induced by modelling another agent – who is modelling you – can be finessed if you both possess the same model. In other words, the sensations caused by others and oneself are generated by the same process. This leads to a view of communication based upon a narrative that is shared by agents who are exchanging sensory signals. Crucially, this narrative transcends agency – and simply involves intermittently attending to and attenuating sensory input. Attending to sensations enables the shared narrative to predict the sensations generated by another (i.e. to listen), while attenuating sensory input enables one to articulate the narrative (i.e. to speak). This produces a reciprocal exchange of sensory signals that, formally, induces a generalised synchrony between internal (neuronal) brain states generating predictions in both agents. We develop the arguments behind this perspective, using an active (Bayesian) inference framework and offer some simulations (of birdsong) as proof of principle.
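The generalised-synchrony claim can be caricatured in a few lines (our construction; the paper's own simulations use active inference and birdsong, not this toy): two agents run the same generative dynamics, the current "speaker" attenuates its input and broadcasts its state, the "listener" attends and is pulled toward the signal, and their internal states converge even as roles alternate.

```python
def f(x, r=3.7):
    return r * x * (1 - x)  # shared generative dynamics (a logistic map)

def step(a, b, a_speaks, k=0.8):
    if a_speaks:  # A attenuates input and articulates; B attends
        return f(a), (1 - k) * f(b) + k * f(a)
    else:         # roles reversed: B speaks, A listens
        return (1 - k) * f(a) + k * f(b), f(b)

a, b = 0.3, 0.7
for t in range(100):
    a, b = step(a, b, a_speaks=(t // 10) % 2 == 0)  # swap speaker every 10 steps
print(abs(a - b))  # gap contracts toward zero: generalised synchrony
```

The crucial ingredient is the shared model `f`: because both agents generate predictions from the same process, coupling through the exchanged signal is enough to align their internal trajectories.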
This paper combines recent formulations of self-organization and neuronal processing to provide an account of cognitive dynamics from basic principles. We start by showing that inference (and autopoiesis) are emergent features of any (weakly mixing) ergodic random dynamical system. We then apply the emergent dynamics to action and perception in a way that casts action as the fulfillment of (Bayesian) beliefs about the causes of sensations. More formally, we formulate ergodic flows on global random attractors as a generalized descent on a free energy functional of the internal states of a system. This formulation rests on a partition of states based on a Markov blanket that separates internal states from hidden states in the external milieu. This separation means that the internal states effectively represent external states probabilistically. The generalized descent is then related to classical Bayesian (e.g., Kalman-Bucy) filtering and predictive coding, of the sort that might be implemented in the brain. Finally, we present two simulations. The first simulates a primordial soup to illustrate the emergence of a Markov blanket and (active) inference about hidden states. The second uses the same emergent dynamics to simulate action and action observation.
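A toy version of the generalized descent (our illustration, under Gaussian assumptions far simpler than the paper's generalized filtering): with Gaussian sensory and prior terms, the gradient of the free-energy functional F reduces to precision-weighted prediction errors, and descending it makes the internal state track a hidden cause, Kalman-filter style. The precisions `pi_s`, `pi_p` and the learning rate are arbitrary example values.

```python
# Gradient descent on a quadratic free-energy functional:
# F(mu) = 0.5*pi_s*(y - mu)**2 + 0.5*pi_p*(mu - 0)**2
# (sensory term + prior term, both Gaussian with fixed precisions).
import random
random.seed(0)

def track(T=500, lr=0.1, pi_s=1.0, pi_p=0.1):
    x, mu = 1.0, 0.0                      # hidden state x, internal state mu
    for _ in range(T):
        y = x + random.gauss(0, 0.3)      # noisy sensory sample of x
        dF = -pi_s * (y - mu) + pi_p * mu # gradient of F w.r.t. mu
        mu -= lr * dF                     # descend on free energy
    return mu

print(track())  # mu settles near the hidden value, shrunk slightly toward the prior
```

In the paper's fuller scheme the same descent also runs over active states, so the agent can reduce free energy by changing `y` itself (acting) rather than only by revising `mu` (perceiving).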
Gallagher S. & Allen M. (2018) Active inference, enactivism and the hermeneutics of social cognition. Synthese 195(6): 2627–2648. https://cepa.info/4222
We distinguish between three philosophical views on the neuroscience of predictive models: predictive coding (associated with internal Bayesian models and prediction error minimization), predictive processing (associated with radical connectionism and ‘simple’ embodiment) and predictive engagement (associated with enactivist approaches to cognition). We examine the concept of active inference under each model and then ask how this concept informs discussions of social cognition. In this context we consider Frith and Friston’s proposal for a neural hermeneutics, and we explore the alternative model of enactivist hermeneutics.
The full scope of enactivist approaches to cognition includes not only a focus on sensory-motor contingencies and physical affordances for action, but also an emphasis on affective factors of embodiment and intersubjective affordances for social interaction. This strong conception of embodied cognition calls for a new way to think about the role of the brain in the larger system of brain-body-environment. We ask whether recent work on predictive coding offers a way to think about brain function in an enactive system, and we suggest that a positive answer is possible if we interpret predictive coding in a more enactive way, i.e., as involved in the organism’s dynamic adjustments to its environment.
Greve P. F. (2015) The role of prediction in mental processing: A process approach. New Ideas in Psychology 39: 45–52. https://cepa.info/5876
Although prediction plays a prominent role in mental processing, we have only limited understanding of how the brain generates and employs predictions. This paper develops a theoretical framework in three steps. First, I propose a process model that describes how predictions are produced and are linked to behavior. Second, I describe a generative mechanism, consisting of the selective amplification of neural dynamics in the context of boundary conditions. I hypothesize that this mechanism is active as a process engine in every mental process, and that therefore each mental process proceeds in two stages: (i) the formation of process boundary conditions; (ii) the bringing about of the process function by the operation – within the boundary conditions – of a relatively ‘blind’ generative process. Third, from this hypothesis I derive a strategy for describing processes formally. The result is a multilevel framework that may also be useful for studying mental processes in general.