Recognising and representing one’s self as distinct from others is a fundamental component of self-awareness. However, current theories of self-recognition are not embedded within global theories of cortical function and therefore fail to provide a compelling explanation of how the self is processed. We present a theoretical account of the neural and computational basis of self-recognition that is embedded within the free-energy account of cortical function. On this account, one’s body is processed in a Bayesian manner as the object most likely to be “me”. Such a probabilistic representation arises through the integration of information from hierarchically organised unimodal systems in higher-level multimodal areas. This information takes the form of bottom-up “surprise” signals from unimodal sensory systems that are explained away by top-down processes that minimise the level of surprise across the brain. We present evidence that this theoretical perspective may account for the findings of psychological and neuroimaging investigations into self-recognition, and in particular evidence that representations of the self are malleable, rather than fixed as previous accounts of self-recognition might suggest.
Does the material basis of conscious experience extend beyond the boundaries of the brain and central nervous system? In Clark 2009 I reviewed a number of ‘enactivist’ arguments for such a view and found none of them compelling. Ward (2012) rejects my analysis on the grounds that the enactivist deploys an essentially world-involving concept of experience that transforms the argumentative landscape in a way that makes the enactivist conclusion inescapable. I present an alternative (prediction-and-generative-model-based) account that neatly accommodates all the positive evidence that Ward cites on behalf of this enactivist conception, and that (I argue) makes richer and more satisfying contact with the full sweep of human experience.
Clark A. (2013) Perceiving as predicting. In: Matthen M., Biggs S. & Stokes D. (eds.) Perception and its modalities. Oxford University Press, New York: 23–43. https://cepa.info/7286
Excerpt: The main purpose of this chapter has been to introduce the notion of sensory perception as a form of probabilistic prediction involving a hierarchy of generative models. This broad vision brings together frontline research in machine learning and a growing body of neuroscientific conjecture and evidence. It provides a simple and elegant account of multimodal and crossmodal effects in perception and has implications for the study of (the neural correlates of) conscious experience. It also suggests, or so I have argued, a deep unity between perceiving and imagining. For to perceive the world (at least as we do) is to deploy internal resources capable of endogenously generating those same sensory effects: capable, that is, of generating those same activation patterns via a top-down sweep involving multiple intermediate layers of processing. That suggests a fundamental linkage between ‘passive perception’ and active imagining, with each capacity being continuously bootstrapped by the other. Perceiving and imagining (if these models are on the right track) are simultaneous effects of a single underlying neural strategy.
Clark A. (2013) Whatever next? Predictive brains, situated agents, and the future of cognitive science. The Behavioral and Brain Sciences 36(3): 181–204. https://cepa.info/7285
Brains, it has recently been argued, are essentially prediction machines. They are bundles of cells that support perception and action by constantly attempting to match incoming sensory inputs with top-down expectations or predictions. This is achieved using a hierarchical generative model that aims to minimize prediction error within a bidirectional cascade of cortical processing. Such accounts offer a unifying model of perception and action, illuminate the functional role of attention, and may neatly capture the special contribution of cortical processing to adaptive success. This target article critically examines this “hierarchical prediction machine” approach, concluding that it offers the best clue yet to the shape of a unified science of mind and action. Sections 1 and 2 lay out the key elements and implications of the approach. Section 3 explores a variety of pitfalls and challenges, spanning the evidential, the methodological, and the more properly conceptual. The paper ends (sections 4 and 5) by asking how such approaches might impact our more general vision of mind, experience, and agency.
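The core mechanism these abstracts describe, matching top-down predictions against incoming input and revising them to reduce the mismatch, can be illustrated with a deliberately minimal sketch. This is a toy single-level illustration under my own assumptions (the function name `settle`, the learning rate, and the scalar input are invented for the example), not the hierarchical, bidirectional model any of the cited papers actually propose:

```python
def settle(sensory_input, prediction=0.0, learning_rate=0.1, steps=100):
    """Iteratively revise a top-down prediction to minimize prediction error.

    At each step the bottom-up error (input minus prediction) is computed,
    and the prediction is nudged toward the input in proportion to that error.
    """
    for _ in range(steps):
        error = sensory_input - prediction      # bottom-up prediction error
        prediction += learning_rate * error     # top-down revision
    return prediction


# The prediction converges on the sensory input, "explaining away" the error.
final = settle(sensory_input=2.0)
print(round(final, 3))  # approaches 2.0
```

A full predictive-processing model would stack many such levels, with each level's settled prediction serving as input to the level below, but the error-driven update above is the basic move being iterated.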
Recent work in computational and cognitive neuroscience depicts the brain as an ever-active prediction machine: an inner engine continuously striving to anticipate the incoming sensory barrage. I briefly introduce this class of models before contrasting two ways of understanding the implied vision of mind. One way (Conservative Predictive Processing) depicts the predictive mind as an insulated inner arena populated by representations so rich and reconstructive as to enable the organism to ‘throw away the world’. The other (Radical Predictive Processing) stresses the use of fast and frugal, action-involving solutions of the kind highlighted by much work in robotics and embodied cognition. But it goes further, by showing how predictive schemes can combine frugal and more knowledge-intensive strategies, switching between them fluently and continuously as task and context dictate. I end by exploring some parallels with work in enactivism, and by noting a certain ambivalence concerning internal representations and their role in the predictive mind.
Clark A. (2017) Busting out: Predictive brains, embodied minds, and the puzzle of the evidentiary veil. Noûs 51(4): 727–753. https://cepa.info/5067
Biological brains are increasingly cast as ‘prediction machines’: evolved organs whose core operating principle is to learn about the world by trying to predict their own patterns of sensory stimulation. This, some argue, should lead us to embrace a brain‐bound ‘neurocentric’ vision of the mind. The mind, such views suggest, consists entirely in the skull‐bound activity of the predictive brain. In this paper I reject the inference from predictive brains to skull‐bound minds. Predictive brains, I hope to show, can be apt participants in larger cognitive circuits. The path is thus cleared for a new synthesis in which predictive brains act as entry‐points for ‘extended minds’, and embodiment and action contribute constitutively to knowing contact with the world.
Clark A. (2018) A nice surprise? Predictive processing and the active pursuit of novelty. Phenomenology and the Cognitive Sciences 17: 521–534.
Recent work in cognitive and computational neuroscience depicts human brains as devices that minimize prediction error signals: signals that encode the difference between actual and expected sensory stimulations. This raises a series of puzzles whose common theme concerns a potential misfit between this bedrock information-theoretic vision and familiar facts about the attractions of the unexpected. We humans often seem to actively seek out surprising events, deliberately harvesting novel and exciting streams of sensory stimulation. Conversely, we often experience some well-expected sensations as unpleasant and to-be-avoided. In this paper, I explore several core and variant forms of this puzzle, using them to display multiple interacting elements that together deliver a satisfying solution. That solution requires us to go beyond the discussion of simple information-theoretic imperatives (such as ‘minimize long-term prediction error’) and to recognize the essential role of species-specific prestructuring, epistemic foraging, and cultural practices in shaping the restless, curious, novelty-seeking human mind.
Constructivist learning theory predicts that knowledge encoded from data by learners themselves will be more flexible, transferable, and useful than knowledge encoded for them by experts and transmitted to them by an instructor or other delivery agent. If this prediction is correct, then learners should be modeled as scientists and use the reasoning and technologies of scientists to construct their own knowledge. However, it cannot be taken for granted that the prediction is correct, or correct in every knowledge domain. The present study attempts to establish conditions in which the prediction can be operationalized and tested. It reports on the adaptation of constructivist principles to instructional design in a particular domain, second language vocabulary acquisition. Students learning English for academic purposes in the Sultanate of Oman followed one of two approaches to vocabulary expansion, learning pre-encoded dictionary definitions of words, or constructing definitions for themselves using an adapted version of the computational tools of lexicographers. After 12 weeks, both groups were equal in definitional knowledge of target words, but lexicography group students were more able to transfer their word knowledge to novel contexts.
de Bruin L. & Michael J. (2017) Prediction error minimization: Implications for embodied cognition and the extended mind hypothesis. Brain and Cognition 112: 58–63. https://cepa.info/5567
Over the past few years, the prediction error minimization (PEM) framework has increasingly been gaining ground throughout the cognitive sciences. A key issue dividing proponents of PEM is how we should conceptualize the relation between brain, body and environment. Clark advocates a version of PEM which retains, at least to a certain extent, his prior commitments to Embodied Cognition and to the Extended Mind Hypothesis. Hohwy, by contrast, presents a sustained argument that PEM actually rules out at least some versions of Embodied and Extended cognition. The aim of this paper is to facilitate a constructive debate between these two competing alternatives by explicating the different theoretical motivations underlying them, and by homing in on the relevant issues that may help to adjudicate between them.
Perception and perceptual decision-making are strongly facilitated by prior knowledge about the probabilistic structure of the world. While the computational benefits of using prior expectation in perception are clear, there are myriad ways in which this computation can be realized. We review here recent advances in our understanding of the neural sources and targets of expectations in perception. Furthermore, we discuss Bayesian theories of perception that prescribe how an agent should integrate prior knowledge and sensory information, and investigate how current and future empirical data can inform and constrain computational frameworks that implement such probabilistic integration in perception.
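The probabilistic integration this review discusses has a standard textbook form for the Gaussian case: prior and sensory likelihood are combined with weights proportional to their precisions (inverse variances). The sketch below shows that calculation; it is a generic worked example of Bayesian cue combination, not an implementation from the reviewed work, and the function name `integrate` is my own:

```python
def integrate(prior_mean, prior_var, obs, obs_var):
    """Combine a Gaussian prior with a Gaussian observation.

    The posterior mean is a precision-weighted average of prior mean and
    observation; the posterior variance is smaller than either input
    variance, reflecting the gain from combining the two sources.
    """
    prior_prec = 1.0 / prior_var
    obs_prec = 1.0 / obs_var
    post_var = 1.0 / (prior_prec + obs_prec)
    post_mean = post_var * (prior_prec * prior_mean + obs_prec * obs)
    return post_mean, post_var


# A vague prior (variance 4) and a reliable observation (variance 1):
# the posterior sits close to the observation.
mean, var = integrate(prior_mean=0.0, prior_var=4.0, obs=2.0, obs_var=1.0)
print(mean, var)  # 1.6 0.8
```

Note how the posterior mean (1.6) lies nearer the more precise cue, and the posterior variance (0.8) is below both inputs — the signature of optimal integration that empirical studies of perception test against.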