Predictive processing (PP) approaches to the mind are increasingly popular in the cognitive sciences. This surge of interest is accompanied by a proliferation of philosophical arguments, which seek to either extend or oppose various aspects of the emerging framework. In particular, the question of how to position predictive processing with respect to enactive and embodied cognition has become a topic of intense debate. While these arguments certainly have scientific and philosophical merit, they risk underestimating the variety of approaches gathered under the predictive label. Here, we first present a basic review of neuroscientific, cognitive, and philosophical approaches to PP, to illustrate how these range from solidly cognitivist applications – with a firm commitment to modular, internalistic mental representation – to more moderate views emphasizing the importance of ‘body-representations’, and finally to those which fit comfortably with radically enactive, embodied, and dynamic theories of mind. Any nascent predictive processing theory (e.g., of attention or consciousness) must take into account this continuum of views and the associated theoretical commitments. As a final point, we illustrate how the Free Energy Principle (FEP) attempts to dissolve tension between internalist and externalist accounts of cognition, by providing a formal synthetic account of how internal ‘representations’ arise from autopoietic self-organization. The FEP thus furnishes empirically productive process theories (e.g., predictive processing) by which to guide discovery through the formal modelling of the embodied mind.
This article presents a unifying theory of the embodied, situated human brain called the Hierarchically Mechanistic Mind (HMM). The HMM describes the brain as a complex adaptive system that actively minimises the decay of our sensory and physical states by producing self-fulfilling action-perception cycles via dynamical interactions between hierarchically organised neurocognitive mechanisms. This theory synthesises the free-energy principle (FEP) in neuroscience with an evolutionary systems theory of psychology that explains our brains, minds, and behaviour by appealing to Tinbergen’s four questions: adaptation, phylogeny, ontogeny, and mechanism. After leveraging the FEP to formally define the HMM across different spatiotemporal scales, we conclude by exploring its implications for theorising and research in the sciences of the mind and behaviour.
Open peer commentary on the article “A Moving Boundary, a Plastic Core: A Contribution to the Third Wave of Extended-Mind Research” by Timotej Prosen. Abstract: I sympathize with Prosen’s conviction in integrating enactivism, the free-energy principle, and the extended-mind hypothesis. However, I show that he uses the concept of “boundary” ambiguously. By disambiguating it, I suggest that we can keep both Markov blankets and operational closure as ways of drawing the boundaries of a cognitive system. Nevertheless, from an enactive perspective, neither of those boundaries is a “cognitive” boundary.
Bruineberg J., Kiverstein J. & Rietveld E. (2018) The anticipating brain is not a scientist: The free-energy principle from an ecological-enactive perspective. Synthese 195(6): 2417–2444. https://cepa.info/4497
In this paper, we argue for a theoretical separation of the free-energy principle from Helmholtzian accounts of the predictive brain. The free-energy principle is a theoretical framework capturing the imperative for biological self-organization in information-theoretic terms. The free-energy principle has typically been connected with a Bayesian theory of predictive coding, and the latter is often taken to support a Helmholtzian theory of perception as unconscious inference. If our interpretation is right, however, a Helmholtzian view of perception is incompatible with Bayesian predictive coding under the free-energy principle. We argue that the free-energy principle and the ecological and enactive approach to mind and life make for a much happier marriage of ideas. We make our argument based on three points. First, we argue that the free-energy principle applies to the whole animal–environment system, and not only to the brain. Second, we show that active inference, as understood by the free-energy principle, is incompatible with unconscious inference understood as analogous to scientific hypothesis-testing, the main tenet of a Helmholtzian view of perception. Third, we argue that the notion of inference at work in Bayesian predictive coding under the free-energy principle is too weak to support a Helmholtzian theory of perception. Taken together, these points imply that the free-energy principle is best understood in the ecological and enactive terms set out in this paper.
Over the last 30 years, representationalist and dynamicist positions in the philosophy of cognitive science have argued over whether neurocognitive processes should be viewed as representational or not. Major scientific and technological developments over the years have furnished both parties with ever more sophisticated conceptual weaponry. In recent years, an enactive generalization of predictive processing – known as active inference – has been proposed as a unifying theory of brain functions. Since then, active inference has fueled both representationalist and dynamicist campaigns. However, we believe that when diving into the formal details of active inference, one should be able to find a solution to the war; if not a peace treaty, surely an armistice of sorts. Based on an analysis of these formal details, this paper shows how both representationalist and dynamicist sensibilities can peacefully coexist within the new territory of active inference.
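The formal details alluded to here follow a standard pattern in the active-inference literature. As a sketch (the decomposition below is the generic textbook formulation of expected free energy, not reproduced from the paper itself), policy selection minimises a quantity that splits into an epistemic, exploratory term and a pragmatic, goal-directed term, which is one reason both camps can claim the formalism:

```latex
% Expected free energy of a policy \pi over future hidden states s and outcomes o,
% with approximate posterior q and generative model p
G(\pi) = \mathbb{E}_{q(o, s \mid \pi)}\!\left[\ln q(s \mid \pi) - \ln p(o, s \mid \pi)\right]
% Standard decomposition:
G(\pi) = \underbrace{-\,\mathbb{E}_{q(o \mid \pi)}\, D_{\mathrm{KL}}\!\left[q(s \mid o, \pi) \,\|\, q(s \mid \pi)\right]}_{\text{negative epistemic value (expected information gain)}}
\;\underbrace{-\;\mathbb{E}_{q(o \mid \pi)}\!\left[\ln p(o)\right]}_{\text{negative pragmatic value (prior preferences)}}
```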
Phantom perceptions arise almost universally in people who sustain sensory deafferentation, and in multiple sensory domains. The question arises: why does the brain create these false percepts in the absence of an external stimulus? The model proposed answers this question by stating that our brain works in a Bayesian way, and that its main function is to reduce environmental uncertainty, based on the free-energy principle, which has been proposed as a universal principle governing adaptive brain function and structure. The Bayesian brain can be conceptualized as a probability machine that constantly makes predictions about the world and then updates them based on what it receives from the senses. The free-energy principle states that the brain must minimize its Shannonian free-energy, i.e. must reduce, through the process of perception, its uncertainty (its prediction errors) about its environment. As completely predictable stimuli do not reduce uncertainty, they are not worthy of conscious processing. Unpredictable things, on the other hand, are not to be ignored, because it is crucial to experience them to update our understanding of the environment. Deafferentation leads to topographically restricted prediction errors based on temporal or spatial incongruity. This leads to an increase in topographically restricted uncertainty, which should be adaptively addressed by plastic repair mechanisms in the respective sensory cortex or via (para)hippocampal involvement. Neuroanatomically, filling in as a compensation for missing information also activates the anterior cingulate and insula, areas also involved in salience and stress and essential for stimulus detection. Associated with sensory cortex hyperactivity and decreased inhibition or map plasticity, this will result in the perception of the false information created by the deafferented sensory areas, as a way to reduce the increased topographically restricted uncertainty associated with the deafferentation.
In conclusion, the Bayesian updating of knowledge via active sensory exploration of the environment, driven by the Shannonian free-energy principle, provides an explanation for the generation of phantom percepts, as a way to reduce uncertainty, to make sense of the world.
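The "Shannonian free-energy" invoked here has a standard variational form. As a sketch (this is the generic FEP formulation, not specific to the phantom-percept model above), free energy upper-bounds surprise, so minimising it reduces the system's uncertainty about its sensory input:

```latex
% Variational free energy for observations o and hidden states s,
% with approximate posterior q(s) and generative model p(o, s)
F = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
  = \underbrace{D_{\mathrm{KL}}\!\left[q(s) \,\|\, p(s \mid o)\right]}_{\geq\, 0} \;-\; \ln p(o)
% Since the KL divergence is non-negative, F \geq -\ln p(o):
% minimising F minimises an upper bound on surprise (prediction error).
```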
Di Paolo E., Thompson E. & Beer R. (2022) Laying down a forking path: Tensions between enaction and the free energy principle. Philosophy and the Mind Sciences 3: 2. https://cepa.info/7833
Several authors have made claims about the compatibility between the Free Energy Principle (FEP) and theories of autopoiesis and enaction. Many see these theories as natural partners or as making similar statements about the nature of biological and cognitive systems. We critically examine these claims and identify a series of misreadings and misinterpretations of key enactive concepts. In particular, we notice a tendency to disregard the operational definition of autopoiesis and the distinction between a system’s structure and its organization. Other misreadings concern the conflation of processes of self-distinction in operationally closed systems and Markov blankets. Deeper theoretical tensions underlie some of these misinterpretations. FEP assumes systems that reach a non-equilibrium steady state and are enveloped by a Markov blanket. We argue that these assumptions contradict the historicity of sense-making that is explicit in the enactive approach. Enactive concepts such as adaptivity and agency are defined in terms of the modulation of parameters and constraints of the agent-environment coupling, which entail the possibility of changes in variable and parameter sets, constraints, and the dynamical laws affecting the system. This allows enaction to address the path-dependent diversity of human bodies and minds. We argue that these ideas are incompatible with the time invariance of non-equilibrium steady states assumed by the FEP. In addition, the enactive perspective foregrounds the enabling and constitutive roles played by the world in sense-making, agency, and development. We argue that this view of transactional and constitutive relations between organisms and environments is a challenge to the FEP.
Once we move beyond superficial similarities, identify misreadings, and examine the theoretical commitments of the two approaches, we reach the conclusion that far from being easily integrated, the FEP, as it stands formulated today, is in tension with the theories of autopoiesis and enaction.
Fabry R. E. (2017) Transcending the evidentiary boundary: Prediction error minimization, embodied interaction, and explanatory pluralism. Philosophical Psychology 30: 395–414. https://cepa.info/7848
In a recent paper, Jakob Hohwy argues that the emerging predictive processing (PP) perspective on cognition requires us to explain cognitive functioning in purely internalistic and neurocentric terms. The purpose of the present paper is to challenge the view that PP entails a wholesale rejection of positions that are interested in the embodied, embedded, extended, or enactive dimensions of cognitive processes. I will argue that Hohwy’s argument from analogy, which forces an evidentiary boundary into the picture, lacks the argumentative resources to make a convincing case for the conceptual necessity to interpret PP in solely internalistic terms. For this reason, I will reconsider the postulation and explanatory role of the evidentiary boundary. I will arrive at an account of prediction error minimization and its foundation on the free energy principle that is fully consistent with approaches to cognition that emphasize the embodied and interactive properties of cognitive processes. This gives rise to the suggestion that explanatory pluralism about the application of PP is to be preferred over Hohwy’s explanatory monism that follows from his internalistic and neurocentric view of predictive cognitive systems.
A free-energy principle has been proposed recently that accounts for action, perception and learning. This review looks at some key brain theories in the biological (for example, neural Darwinism) and physical (for example, information theory and optimal control theory) sciences from the free-energy perspective. Crucially, one key theme runs through each of these theories – optimization. Furthermore, if we look closely at what is optimized, the same quantity keeps emerging, namely value (expected reward, expected utility) or its complement, surprise (prediction error, expected cost). This is the quantity that is optimized under the free-energy principle, which suggests that several global brain theories might be unified within a free-energy framework.