We introduce a spatial model of concentration dynamics that supports the emergence of spatiotemporal inhomogeneities that engage in metabolism–boundary co-construction. These configurations exhibit disintegration following some perturbations, and self-repair in response to others. We define robustness as a viable configuration’s tendency to return to its prior configuration in response to perturbations, and plasticity as a viable configuration’s tendency to change to other viable configurations. These properties are demonstrated and quantified in the model, allowing us to map a space of viable configurations and their possible transitions. Combining robustness and plasticity provides a measure of viability as the average expected survival time under ongoing perturbation, and allows us to measure how viability is affected as the configuration undergoes transitions. The framework introduced here is independent of the specific model we used, and is applicable for quantifying robustness, plasticity, and viability in any computational model of artificial life that demonstrates the conditions for viability that we promote.
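The viability measure sketched above – average expected survival time under ongoing perturbation – can be illustrated with a deliberately simple toy, which is not the article's model: treat viable configurations as states of a Markov chain whose perturbation outcomes are return to the same configuration (robustness), transition to another viable configuration (plasticity), or disintegration, and estimate survival time by Monte Carlo. The states and probabilities below are invented for illustration.

```python
import random

# Toy illustration only (invented states and probabilities, not the
# article's model): each perturbation of a viable configuration either
# returns it to itself (robustness), moves it to another viable
# configuration (plasticity), or disintegrates it.
P = {
    "A": {"A": 0.70, "B": 0.20, "dead": 0.10},  # robustness 0.7, plasticity 0.2
    "B": {"B": 0.50, "A": 0.30, "dead": 0.20},  # robustness 0.5, plasticity 0.3
}

def survival_time(start, rng, max_steps=10_000):
    """Number of perturbations survived before disintegration."""
    state, t = start, 0
    while t < max_steps:
        r, acc = rng.random(), 0.0
        for nxt, p in P[state].items():
            acc += p
            if r < acc:
                state = nxt
                break
        if state == "dead":
            return t
        t += 1
    return max_steps

# Viability of configuration A: average expected survival time
# under ongoing perturbation, estimated by Monte Carlo.
rng = random.Random(0)
times = [survival_time("A", rng) for _ in range(20_000)]
viability = sum(times) / len(times)
print(f"estimated viability of A: {viability:.2f}")
```

Solving the corresponding linear system for expected absorption time gives roughly 6.78 perturbations for configuration A, against which the Monte Carlo estimate can be checked.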
This article revisits the concept of autopoiesis and examines its relation to cognition and life. We present a mathematical model of a 3D tessellation automaton, considered as a minimal example of autopoiesis. This leads us to a thesis T1: “An autopoietic system can be described as a random dynamical system, which is defined only within its organized autopoietic domain.” We propose a modified definition of autopoiesis: “An autopoietic system is a network of processes that produces the components that reproduce the network, and that also regulates the boundary conditions necessary for its ongoing existence as a network.” We also propose a definition of cognition: “A system is cognitive if and only if sensory inputs serve to trigger actions in a specific way, so as to satisfy a viability constraint.” It follows from these definitions that the concepts of autopoiesis and cognition, although deeply related in their connection with the regulation of the boundary conditions of the system, are not immediately identical: a system can be autopoietic without being cognitive, and cognitive without being autopoietic. Finally, we propose a thesis T2: “A system that is both autopoietic and cognitive is a living system.”
Cosmelli D., Lachaux J.-P. & Thompson E. (2007) Neurodynamics of consciousness. In: Zelazo P. D., Moscovitch M. & Thompson E. (eds.) The Cambridge handbook of consciousness. Cambridge University Press, Cambridge: 731–774. https://cepa.info/2378
One of the outstanding problems in the cognitive sciences is to understand how ongoing conscious experience is related to the workings of the brain and nervous system. Neurodynamics offers a powerful approach to this problem because it provides a coherent framework for investigating change, variability, complex spatiotemporal patterns of activity, and multiscale processes (among others). In this chapter, we advocate a neurodynamical approach to consciousness that integrates mathematical tools of analysis and modeling, sophisticated physiological data recordings, and detailed phenomenological descriptions. We begin by stating the basic intuition: Consciousness is an intrinsically dynamic phenomenon and must therefore be studied within a framework that is capable of rendering its dynamics intelligible. We then discuss some of the formal, analytical features of dynamical systems theory, with particular reference to neurodynamics. We then review several neuroscientific proposals that make use of dynamical systems theory in characterizing the neurophysiological basis of consciousness. We continue by discussing the relation between spatiotemporal patterns of brain activity and consciousness, with particular attention to processes in the gamma frequency band. We then adopt a critical perspective and highlight a number of issues demanding further treatment. Finally, we close the chapter by discussing how phenomenological data can relate to and ultimately constrain neurodynamical descriptions, with the long-term aim being to go beyond a purely correlational strategy of research.
This paper combines recent formulations of self-organization and neuronal processing to provide an account of cognitive dynamics from basic principles. We start by showing that inference (and autopoiesis) are emergent features of any (weakly mixing) ergodic random dynamical system. We then apply the emergent dynamics to action and perception in a way that casts action as the fulfillment of (Bayesian) beliefs about the causes of sensations. More formally, we formulate ergodic flows on global random attractors as a generalized descent on a free energy functional of the internal states of a system. This formulation rests on a partition of states based on a Markov blanket that separates internal states from hidden states in the external milieu. This separation means that the internal states effectively represent external states probabilistically. The generalized descent is then related to classical Bayesian (e.g., Kalman-Bucy) filtering and predictive coding – of the sort that might be implemented in the brain. Finally, we present two simulations. The first simulates a primordial soup to illustrate the emergence of a Markov blanket and (active) inference about hidden states. The second uses the same emergent dynamics to simulate action and action observation.
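The “generalized descent on a free energy functional” can be caricatured in one dimension. The sketch below is a heavy simplification of this line of work, not the paper's formulation: a single internal state mu performs gradient descent on a precision-weighted sum of squared prediction errors, settling on a Bayes-optimal compromise between a sensory sample and a prior. The functional, precisions, and descent rate are all illustrative assumptions.

```python
# One-dimensional caricature (our simplification, not the paper's
# formulation): an internal state mu descends a free-energy-like
# functional F = precision-weighted squared prediction errors.
def free_energy(mu, s, prior=0.0, pi_s=1.0, pi_p=0.1):
    # sensory prediction error + prior prediction error
    return 0.5 * (pi_s * (s - mu) ** 2 + pi_p * (mu - prior) ** 2)

def dF_dmu(mu, s, prior=0.0, pi_s=1.0, pi_p=0.1):
    return -pi_s * (s - mu) + pi_p * (mu - prior)

hidden = 2.0        # hidden external state causing the sensation
mu, lr = 0.0, 0.1   # internal estimate and descent rate (both invented)
for _ in range(200):
    s = hidden      # noiseless sensory mapping, for clarity
    mu -= lr * dF_dmu(mu, s)

# mu settles at the precision-weighted compromise pi_s*s/(pi_s + pi_p)
print(round(mu, 3))  # → 1.818
```

In this toy, minimizing free energy just is probabilistic representation of the hidden state: the fixed point is the posterior mode under Gaussian assumptions, which is the connection to Kalman-Bucy filtering the abstract mentions.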
Lachaux J.-P., Pezard L., Pelt C., Garnero L., Renault B., Varela F. J. & Martinerie J. (1997) Spatial extension of brain activity fools the single-channel reconstruction of EEG dynamics. Human Brain Mapping 5(1): 26–47. https://cepa.info/2006
Context: Neurophenomenology lies at a rich intersection of neuroscience and lived human experience, as described by phenomenology. As a new discipline, it is open to many new questions, methods, and proposals. Problem: The best available scientific ontology for neurophenomenology is based in dynamical systems. However, dynamical systems afford myriad strategies for organizing and representing neurodynamics, just as phenomenology presents an array of aspects of experience to be captured. Here, the focus is on the pervasive experience of subjective time. There is a need for concepts that describe synchronic (parallel) features of experience as well as diachronic (dynamic) structures of temporal objects. Method: The paper includes an illustrative discussion of the role of temporality in the construction of the awareness of objects, in the tradition of Husserl, James, and most of 20th century phenomenology. Temporality illuminates desiderata for the dynamical concepts needed for experiment and explanation in neurophenomenology. Results: The structure of music – rather than language – is proposed as a source for descriptive and explanatory concepts in a neurophenomenology that encompasses the pervasive experience of duration, stability, passing time, and change. Implications: The toolbox of cognitive musicology suddenly becomes available for dynamical systems approaches to the neurophenomenology of subjective time. The paper includes an illustrative empirical study of consonance and dissonance in application to an fMRI study of schizophrenia. Dissonance, in a sense strongly analogous to its acoustic musical meaning, characterizes schizophrenia at all times, while emerging in healthy brains only during distracting and demanding tasks. Constructivist content: Our experience of the present is a continuous and elaborate construction of the retention of the immediate past and anticipation of the immediate future. 
Musical concepts are almost entirely temporal and constructivist in this temporal sense – almost every element of music is constructed from relations to non-present musical/temporal contexts. Musicology may offer many new constructivist concepts and a way of thinking about the dynamical system that is the human brain.
Psychologism is defined as “the doctrine that the laws of mathematics and logic can be reduced to or depend on the laws governing thinking” (Moran & Cohen 2012: 266). And for Husserl, the laws of logic include the laws of meaning: “logic evidently is the science of meanings as such [Wissenschaft von Bedeutungen als solchen]” (Husserl 1975: 98/2001: 225). I argue that, since it is sufficient for a theory to be psychologistic if the empiricistic theory of abstraction is employed, it follows that neural networks are psychologistic insofar as they use this theory of abstraction, which I demonstrate is the case (Husserl 1975: 191/2001: 120). It is sufficient for psychologism because, according to Husserl, the theory in question reduces one’s phenomenological ability to intend types (or universals) to one’s past history of intending tokens (or particulars), usually amalgamated in some fashion (classically via associations; recently via autoencoders) (ibid; Kelleher 2019). Similarly, dynamical systems theory entails psychologism, for it ties content to the temporal evolution of a system, which, according to Husserl, violates the fact that intentionality toward validities and objectivities does not pertain to “particular temporal experience[s]” (Husserl 1975: 194/2001: 121). It follows that neither the species (neural networks) nor the genus (dynamical systems) can avoid psychologism and intend objects “in specie” (ibid). After critiquing these two approaches, I proceed to give an account, based on the essentialist school of cognitive psychology, of how we may intend objects “in specie” while avoiding the empiricistic theory of abstraction (Keil 1989; Carey 2009; Marcus & Davis 2019). Such an account preserves the type-token distinction without psychologistic reduction to the temporal evolution of a dynamical system (Hinzen 2006).
This opens the way toward a truly unifying account of Husserlian phenomenology in league with cognitive science that avoids Yoshimi’s (2016) and neurophenomenology’s psychologistic foundation (herein demonstrated) and builds upon Sokolowski’s (2003) syntactic account of Husserlian phenomenology.
According to the organizational view, the autonomy of a living system should be approached from the perspective of processes that contribute to generating and constantly maintaining the internal organization of the living system, as well as to preserving the structural relation between the organism and its environment. However, a living system is both a biological organism and a certain type of complex system. Starting from this perspective, I will define the autonomy of a living system as the totality of the states it can access in response to the challenges of the environment, meaning the totality of the system’s degrees of freedom. However, understanding the autonomy of a living system also depends on the account of the controlling mechanisms, which contribute to generating and managing its degrees of freedom. In the case of basic living organisms, one can talk of an adaptive control involving the regulation of the internal processes in order to create a coherent pattern of action that would adjust the internal and external behavior of the organism to the environmental conditions. Regulation of the internal processes and the exchange of matter and energy with the environment determine the emergence of an incipient form of self, which is a consequence of existing correlations among the basic adaptive functions of any biological system. The nervous system provides the organism with an advanced form of control, which implies a flexible and multidimensional state space, whose level of complexity is higher than the one configured by the metabolic reactions. In this case, a sensorimotor self emerges, which is the result of integration of the body and environment into a systemic whole. Moreover, in advanced organisms, such as humans, a new metacognitive level emerges, i.e., consciousness.
Consciousness not only enhances the state space of an organism but also creates complex patterns of behavior with new and unpredictable trajectories, which entails multiple and complex degrees of freedom. Consciousness is at the origin of the emergence of a conscious self, which is capable of conscious selection of the constraints that would modulate its behavioral patterns.
Although dynamical systems had been used by cognitive scientists since at least 1980 (e.g. Kugler, Kelso, and Turvey, 1980), they first gained widespread attention in the mid-1990s (e.g. Kelso, 1995; Port and van Gelder, 1995; Thelen and Smith, 1994). Dynamical systems theory was then, and continues to be, a crucial tool for embodied cognitive science. The word dynamical simply means “changing over time,” and thus a dynamical system is simply a system whose behavior evolves or changes over time. The scientific study of dynamical systems is concerned with understanding, modeling, and predicting the ways in which the behavior of a system changes over time. In the last few decades, thanks to increasing computational power, researchers have begun to investigate and understand the dynamic behavior of complex biological, cognitive, and social systems, using the concepts and tools of non-linear dynamical systems. In the first section, we describe the key concepts of modern dynamical systems theory (complexity, self-organization, soft assembly, interaction dominance, and non-linearity). In the second section, we briefly discuss some dynamical analysis techniques used in the cognitive sciences. In the third, we give some examples of the application of complex dynamical systems theory and analysis in cognitive science. In the last, we sketch some consequences of the widespread applicability of dynamical approaches to understanding neural, cognitive, and social systems.
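A minimal concrete example of the non-linearity the abstract lists: the logistic map (a standard textbook system, not drawn from the works cited) shows how two trajectories from nearly identical initial conditions diverge – the sensitive dependence that distinguishes non-linear from linear dynamics.

```python
# Textbook illustration (logistic map; not from the works cited above):
# non-linearity produces sensitive dependence on initial conditions.
def logistic_trajectory(x0, r=3.99, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-9)  # nearly identical start
gap = [abs(x - y) for x, y in zip(a, b)]
# a 1e-9 initial difference is amplified by many orders of magnitude
print(f"initial gap: {gap[0]:.1e}, max gap: {max(gap):.3f}")
```

This amplification of tiny differences is one reason researchers study complex biological, cognitive, and social systems with non-linear tools rather than classical linear analysis.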
Schiavio A. (2016) Enactive affordances and the interplay of biological and phenomenological subjectivity. Constructivist Foundations 11(2): 315–317. https://cepa.info/2570
Open peer commentary on the article “Perception-Action Mutuality Obviates Mental Construction” by Martin Flament Fultot, Lin Nie & Claudia Carello. Upshot: Enactive approaches highlight the deep interdependency of brains, action, agency, and environment in shaping the world we inhabit. This perspective goes beyond input-output models of cognition, postulating instead closed loops of action and perception framed by the agent-environment complementarity. Within this unique dynamical system, no (internal) representational recovery is required for cognitive-behavioral experience to take place.