De Ridder D., Vanneste S. & Freeman W. (2014) The Bayesian brain: Phantom percepts resolve sensory uncertainty. Neuroscience & Biobehavioral Reviews 44: 4–15.
Phantom perceptions arise almost universally in people who sustain sensory deafferentation, and they occur in multiple sensory domains. The question arises why the brain creates these false percepts in the absence of an external stimulus. The proposed model answers this question by stating that our brain works in a Bayesian way and that its main function is to reduce environmental uncertainty, based on the free-energy principle, which has been proposed as a universal principle governing adaptive brain function and structure. The Bayesian brain can be conceptualized as a probability machine that constantly makes predictions about the world and then updates them based on what it receives from the senses. The free-energy principle states that the brain must minimize its Shannonian free-energy, i.e., it must reduce, through the process of perception, its uncertainty (its prediction errors) about its environment. As completely predictable stimuli do not reduce uncertainty, they do not warrant conscious processing. Unpredictable things, on the other hand, are not to be ignored, because it is crucial to experience them in order to update our understanding of the environment. Deafferentation leads to topographically restricted prediction errors based on temporal or spatial incongruity. This leads to an increase in topographically restricted uncertainty, which should be adaptively addressed by plastic repair mechanisms in the respective sensory cortex or via (para)hippocampal involvement. Neuroanatomically, filling in as a compensation for missing information also activates the anterior cingulate and insula, areas involved in salience and stress and essential for stimulus detection. Associated with sensory cortex hyperactivity and decreased inhibition or map plasticity, this results in the perception of the false information created by the deafferented sensory areas, as a way to reduce the increased topographically restricted uncertainty associated with the deafferentation. In conclusion, the Bayesian updating of knowledge via active sensory exploration of the environment, driven by the Shannonian free-energy principle, provides an explanation for the generation of phantom percepts as a way to reduce uncertainty and make sense of the world.
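As an illustration only (not drawn from the paper), the kind of precision-weighted belief updating described above can be sketched as a one-dimensional Gaussian update, in which a prior belief is shifted by a prediction error weighted by relative precision; all names and numbers below are illustrative assumptions.

```python
# Illustrative sketch of Bayesian belief updating: a Gaussian prior over a
# sensory cause is revised by a precision-weighted prediction error, and the
# remaining uncertainty (variance) shrinks as the input becomes predictable.

def bayes_update(prior_mean, prior_var, obs, obs_var):
    """One-step posterior of a Gaussian prior after one Gaussian observation."""
    prediction_error = obs - prior_mean          # the surprise carried by the stimulus
    gain = prior_var / (prior_var + obs_var)     # precision weighting of the error
    post_mean = prior_mean + gain * prediction_error
    post_var = (1.0 - gain) * prior_var          # uncertainty is reduced by perception
    return post_mean, post_var

# Example: a prior belief meets an initially unexpected but repeated input.
mean, var = 0.0, 1.0
for sample in [2.0, 2.1, 1.9]:
    mean, var = bayes_update(mean, var, sample, obs_var=0.5)
    print(round(mean, 3), round(var, 3))         # prediction error and variance both shrink
```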
Drescher G. L. (1986) Genetic AI: Translating Piaget into Lisp. Instructional Science 14(3): 357–380. https://cepa.info/2296
This article presents a constructivist model of human cognitive development during infancy. According to constructivism, the elements of mental representation, even such basic elements as the concept of a physical object, are constructed afresh by each individual rather than being innately supplied. A (partially specified, as-yet-unimplemented) mechanism, the Schema Mechanism, is proposed here; this mechanism is intended to achieve a series of cognitive constructions characteristic of infants' sensorimotor-stage development, primarily as described by Piaget. In reference to Piaget's “genetic epistemology,” I call this approach genetic AI: “genetic” not in the sense of genes, but in the sense of genesis, development from the point of origin. The Schema Mechanism focuses on Piaget's concept of the activity and evolution of cognitive schemas. The schema is construed here as a context-sensitive prediction of what will follow a certain action. Schemas are used both as assertions about the world and as elements of plans to achieve goals. A mechanism of attribution causes a schema's assertion to be extended or revised according to the observed effects of the schema's action; because conjunctions of context conditions may be relevant, the attribution facility needs to be able to sort through a combinatorial explosion of hypotheses. Crucially, the mechanism constructs representations of new actions and state elements, in terms of which schemas are expressed. Included here is a sketch of the proposed Schema Mechanism and highlights of a hypothetical scenario of the mechanism's operation. The Schema Mechanism starts with a set of sensory and motor primitives as its sole units of representation. As with the Piagetian neonate, this leads to a “solipsist” conception: the world consists of sensory impressions transformed by motor actions. My scenario suggests how the mechanism might progress from there to conceiving of objects in space, representing an object independently of how it is currently perceived, or even whether it is currently perceived. The details of this progression parallel the Piagetian development of object conception from the first through the fifth sensorimotor stage.
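A toy rendering of the schema idea (context, action, predicted result, plus an attribution step that revises the assertion from observed effects) is sketched below. This is a hypothetical simplification for illustration; the class, field names, and reliability statistic are assumptions of this sketch, not Drescher's specification.

```python
# Hypothetical toy version of a context-sensitive schema: when the context
# holds and the action is taken, the result is predicted to follow. Attribution
# updates a simple reliability estimate from observed outcomes.

from dataclasses import dataclass

@dataclass
class Schema:
    context: frozenset      # state items that must hold before acting
    action: str             # primitive or constructed action
    result: frozenset       # state items predicted to follow the action
    successes: int = 0
    trials: int = 0

    def applicable(self, state: frozenset) -> bool:
        return self.context <= state

    def record(self, state_before: frozenset, state_after: frozenset) -> None:
        """Attribution step: revise the schema's assertion from observed effects."""
        if self.applicable(state_before):
            self.trials += 1
            if self.result <= state_after:
                self.successes += 1

    @property
    def reliability(self) -> float:
        return self.successes / self.trials if self.trials else 0.0

# Usage: the schema "if hand-near-object, grasp -> holding-object".
s = Schema(frozenset({"hand_near_object"}), "grasp", frozenset({"holding_object"}))
s.record(frozenset({"hand_near_object"}), frozenset({"holding_object"}))
s.record(frozenset({"hand_near_object"}), frozenset())   # the grasp failed this time
print(s.reliability)                                      # 0.5
```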
Esposito E., Sold K. & Zimmermann B. (2021) Systems Theory and Algorithmic Futures: Interview with Elena Esposito. Constructivist Foundations 16(3): 356–361. https://cepa.info/7180
By introducing us into core concepts of Niklas Luhmann’s theory of social systems, Elena Esposito shows their relevance for contemporary social sciences and the study of unsettled times. Contending that society is made not by people but by what connects them, as Luhmann does with his concept of communication, creates a fertile ground for addressing societal challenges as diverse as the Corona pandemic or the algorithmic revolution. Esposito more broadly sees in systems theory a relevant contribution to critical theory and a genuine alternative to its Frankfurt School version, while extending its reach to further conceptual refinement and new empirical issues. Fueling such refinement is her analysis of time and the complex intertwinement between past, present and future, a core issue that runs throughout her work. Her current study on the future as a prediction caught between science and divination offers a fascinating empirical case for it, drawing a thought-provoking parallel between the way algorithmic predictions are constructed today and how divinatory predictions were constructed in ancient times. Keywords: Algorithms, communication, critical theory, future, heterarchy, Luhmann, paradox, prediction, semantics, sociology, subsystems, systems theory, time.
Fabry R. E. (2017) Transcending the evidentiary boundary: Prediction error minimization, embodied interaction, and explanatory pluralism. Philosophical Psychology 30: 395–414. https://cepa.info/7848
In a recent paper, Jakob Hohwy argues that the emerging predictive processing (PP) perspective on cognition requires us to explain cognitive functioning in purely internalistic and neurocentric terms. The purpose of the present paper is to challenge the view that PP entails a wholesale rejection of positions that are interested in the embodied, embedded, extended, or enactive dimensions of cognitive processes. I will argue that Hohwy’s argument from analogy, which forces an evidentiary boundary into the picture, lacks the argumentative resources to make a convincing case for the conceptual necessity to interpret PP in solely internalistic terms. For this reason, I will reconsider the postulation and explanatory role of the evidentiary boundary. I will arrive at an account of prediction error minimization and its foundation on the free energy principle that is fully consistent with approaches to cognition that emphasize the embodied and interactive properties of cognitive processes. This gives rise to the suggestion that explanatory pluralism about the application of PP is to be preferred over Hohwy’s explanatory monism that follows from his internalistic and neurocentric view of predictive cognitive systems.
Fabry R. E. (2018) Betwixt and between: The enculturated predictive processing approach to cognition. Synthese 195(6): 2483–2518. https://cepa.info/5389
Many of our cognitive capacities are the result of enculturation. Enculturation is the temporally extended transformative acquisition of cognitive practices in the cognitive niche. Cognitive practices are embodied and normatively constrained ways to interact with epistemic resources in the cognitive niche in order to complete a cognitive task. The emerging predictive processing perspective offers new functional principles and conceptual tools to account for the cerebral and extra-cerebral bodily components that give rise to cognitive practices. According to this emerging perspective, many cases of perception, action, and cognition are realized by the on-going minimization of prediction error. Predictive processing provides us with a mechanistic perspective that helps investigate the functional details of the acquisition of cognitive practices. The argument of this paper is that research on enculturation and recent work on predictive processing are complementary. The main reason is that predictive processing operates at a sub-personal level and on a physiological time scale of explanation only. A complete account of enculturated cognition needs to take additional levels and temporal scales of explanation into account. This complementarity assumption leads to a new framework – enculturated predictive processing – that operates on multiple levels and temporal scales for the explanation of the enculturated predictive acquisition of cognitive practices. Enculturated predictive processing is committed to explanatory pluralism. That is, it subscribes to the idea that we need multiple perspectives and explanatory strategies to account for the complexity of enculturation. The upshot is that predictive processing needs to be complemented by additional considerations and conceptual tools to realize its full explanatory potential.
Friston K. (2010) The free-energy principle: A unified brain theory? Nature Reviews Neuroscience 11(2): 127–138.
A free-energy principle has been proposed recently that accounts for action, perception and learning. This review looks at some key brain theories in the biological (for example, neural Darwinism) and physical (for example, information theory and optimal control theory) sciences from the free-energy perspective. Crucially, one key theme runs through each of these theories – optimization. Furthermore, if we look closely at what is optimized, the same quantity keeps emerging, namely value (expected reward, expected utility) or its complement, surprise (prediction error, expected cost). This is the quantity that is optimized under the free-energy principle, which suggests that several global brain theories might be unified within a free-energy framework.
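For reference, the variational decomposition commonly used to state the free-energy principle can be written as follows; the notation is generic and not quoted from this review.

```latex
% Generic statement (not quoted from the review): variational free energy F
% upper-bounds surprise, the negative log evidence of sensory samples s.
\begin{align}
F(s,q) &= \mathbb{E}_{q(\vartheta)}\!\left[\ln q(\vartheta) - \ln p(s,\vartheta)\right] \\
       &= -\ln p(s) + \mathrm{KL}\!\left[q(\vartheta)\,\|\,p(\vartheta \mid s)\right] \;\ge\; -\ln p(s).
\end{align}
% Minimizing F with respect to internal states (the recognition density q) and
% with respect to action (which changes the samples s) therefore implicitly
% minimizes surprise, the complement of value (expected reward or utility).
```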
Gallagher S. & Allen M. (2018) Active inference, enactivism and the hermeneutics of social cognition. Synthese 195(6): 2627–2648. https://cepa.info/4222
We distinguish between three philosophical views on the neuroscience of predictive models: predictive coding (associated with internal Bayesian models and prediction error minimization), predictive processing (associated with radical connectionism and ‘simple’ embodiment) and predictive engagement (associated with enactivist approaches to cognition). We examine the concept of active inference under each model and then ask how this concept informs discussions of social cognition. In this context we consider Frith and Friston’s proposal for a neural hermeneutics, and we explore the alternative model of enactivist hermeneutics.
García de P. (2020) Ecological psychology and enactivism: A normative way out from ontological dilemmas. Frontiers in Psychology 11: 1637. https://cepa.info/7381
Two important issues of recent discussion in the philosophy of biology and of the cognitive sciences have been the ontological status of living, cognitive agents and whether cognition and action have a normative character per se. In this paper I will explore the following conditional in relation with both the notion of affordance and the idea of the living as self-creation: if we recognize the need to use normative vocabulary to make sense of life in general, we are better off avoiding taking sides on the ontological discussion between eliminativists, reductionists and emergentists. Looking at life through normative lenses is, at the very least, in tension with any kind of realism that aims at prediction and control. I will argue that this is so for two separate reasons. On the one hand, understanding the realm of biology in purely factualist, realist terms means to dispossess it of its dignity: there is more to life than something that we simply aim to manipulate to our own material convenience. On the other hand, a descriptivist view that is committed to the existence of biological and mental facts that are fully independent of our understanding of nature may be an invitation to make our ethical and normative judgments dependent on the discovery of such alleged facts, something I diagnose as a form of representationalism. This runs counter to what I take to be a central democratic ideal: while there are experts whose opinion could be considered the last word on purely factual matters, where value is concerned, there are no technocratic experts above the rest of us. I will rely on the ideas of some central figures of early analytic philosophy that, perhaps due to the reductionistic and eliminativist tendencies of contemporary philosophy of mind, have not been sufficiently discussed within post-cognitivist debates.
Although prediction plays a prominent role in mental processing, we have only limited understanding of how the brain generates and employs predictions. This paper develops a theoretical framework in three steps. First I propose a process model that describes how predictions are produced and are linked to behavior. Subsequently I describe a generative mechanism, consisting of the selective amplification of neural dynamics in the context of boundary conditions. I hypothesize that this mechanism is active as a process engine in every mental process, and that therefore each mental process proceeds in two stages: (i) the formation of process boundary conditions; (ii) the bringing about of the process function by the operation – within the boundary conditions – of a relatively ‘blind’ generative process. Thirdly, from this hypothesis I derive a strategy for describing processes formally. The result is a multilevel framework that may also be useful for studying mental processes in general.
Guimaraes R. C. (2011) Metabolic basis for the self-referential genetic code. Origins of Life and Evolution of Biospheres 41: 357–371. https://cepa.info/844
The chronology of encoding amino acids in the genetic code, described by the self-referential model, prompted a search for the supporting biosynthesis pathways since the list of abiotic amino acids was not in close coherence with it. The prediction from the chronology was adequately satisfied with the identification of the Glycine-Serine Cycle of assimilation of C1-units. The start of encoding from C1-derived amino acids sits nicely at the fuzzy borders between methylotrophy and autotrophy. It is not possible to envisage the construction of metabolism from simpler nutrients. These indications support the notion that protein synthesis would be at the crux of the metabolic sink, and that metabolism is sink-driven. Relevance: In spite of the many unknowns in the area of the origin of life, it seems that the self-referential model offers a starting point for experimental verification of the formation of genetic codes. In the metabolic aspect, there is no possibility of getting simpler with respect to nutrients in the routes for metabolic construction.