We present a tentative proposal for a quantitative measure of autonomy. This is something that, surprisingly, is rarely found in the literature, even though autonomy is considered to be a basic concept in many disciplines, including artificial life. We work in an information theoretic setting for which the distinction between system and environment is the starting point. As a first measure for autonomy, we propose the conditional mutual information between consecutive states of the system conditioned on the history of the environment. This works well when the system cannot influence the environment at all and the environment does not interact synergetically with the system. When, in contrast, the system has full control over its environment, we should instead neglect the environment history and simply take the mutual information between consecutive system states as a measure of autonomy. In the case of mutual interaction between system and environment there remains an ambiguity regarding whether system or environment has caused observed correlations. If the interaction structure of the system is known, we define a “causal” autonomy measure which allows this ambiguity to be resolved. Synergetic interactions still pose a problem since in this case causation cannot be attributed to the system or the environment alone. Moreover, our analysis reveals some subtle facets of the concept of autonomy, in particular with respect to the seemingly innocent system–environment distinction we took for granted, and raises the issue of the attribution of control, i.e. the responsibility for observed effects. To further explore these issues, we evaluate our autonomy measure for simple automata, an agent moving in space, gliders in the Game of Life, and the tessellation automaton for autopoiesis of Varela et al.
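As a rough rendering of the two measures described above (the symbols and labels are ours, not the paper's): writing S_t for the system state at time t and E_{1:t} for the environment history, the first measure conditions on that history and the second drops the conditioning,

\[
A_{\text{cond}} = I(S_{t+1} ; S_t \mid E_{1:t}), \qquad A_{\text{unc}} = I(S_{t+1} ; S_t),
\]

with the first appropriate when the system cannot influence its environment and the second when the system fully controls it.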
Kirchhoff M. D. & Robertson I. (2018) Enactivism and predictive processing: A non-representational view. Philosophical Explorations 21(2): 264–281. https://cepa.info/5840
This paper starts by considering an argument for thinking that predictive processing (PP) is representational. This argument suggests that the Kullback–Leibler (KL) divergence provides an accessible measure of misrepresentation, and therefore a measure of representational content, in hierarchical Bayesian inference. The paper then argues that while the KL-divergence is a measure of information, it does not establish a sufficient measure of representational content. We argue that this follows from the fact that the KL-divergence is a measure of relative entropy, which can be shown (through a set of additional steps) to be the same as covariance. It is well known that facts about covariance do not entail facts about representational content. So there is no reason to think that the KL-divergence is a measure of (mis-)representational content. This paper thus provides an enactive, non-representational account of Bayesian belief optimisation in hierarchical PP.
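For context, the quantity at issue is the standard relative entropy; writing q for the recognition density and p for the density it is compared against (our choice of symbols, not the paper's),

\[
D_{\mathrm{KL}}(q \,\|\, p) = \sum_x q(x) \log \frac{q(x)}{p(x)} \geq 0,
\]

with equality exactly when q = p. The paper's claim is that this purely statistical quantity, like covariance, does not by itself fix representational content.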
Le Van Quyen M., Foucher J., Lachaux J., Rodriguez E., Lutz A., Martinerie J. & Varela F. J. (2001) Comparison of Hilbert transform and wavelet methods for the analysis of neuronal synchrony. Journal of Neuroscience Methods 111(2): 83–98. https://cepa.info/2091
The quantification of phase synchrony between neuronal signals is of crucial importance for the study of large-scale interactions in the brain. Two methods have been used to date in neuroscience, based on two distinct approaches which permit a direct estimation of the instantaneous phase of a signal [Phys. Rev. Lett. 81 (1998) 3291; Human Brain Mapping 8 (1999) 194]. The phase is either estimated by using the analytic concept of Hilbert transform or, alternatively, by convolution with a complex wavelet. In both methods the stability of the instantaneous phase over a window of time requires quantification by means of various statistical dependence parameters (standard deviation, Shannon entropy or mutual information). The purpose of this paper is to conduct a direct comparison between these two methods on three signal sets: (1) neural models; (2) intracranial signals from epileptic patients; and (3) scalp EEG recordings. Levels of synchrony that can be considered as reliable are estimated by using the technique of surrogate data. Our results demonstrate that the differences between the methods are minor, and we conclude that they are fundamentally equivalent for the study of neuroelectrical signals. This offers a common language and framework that can be used for future research in the area of synchronization.
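As a sketch of the two phase estimates being compared (standard definitions; the notation is ours): the Hilbert approach takes the argument of the analytic signal, the wavelet approach takes the argument of the convolution with a complex wavelet \psi_{f_0} centred on the frequency of interest, and synchrony between two signals is then quantified from the stability of the phase difference, for instance via a phase-locking statistic (the entropy- and mutual-information-based indices mentioned above are alternatives):

\[
\phi_H(t) = \arg\big( x(t) + i\,\mathcal{H}[x](t) \big), \qquad
\phi_W(t) = \arg\big( (x \ast \psi_{f_0})(t) \big), \qquad
\mathrm{PLV} = \Big| \tfrac{1}{N} \sum_{k=1}^{N} e^{\,i(\phi_1(t_k) - \phi_2(t_k))} \Big|.
\]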
Maturana’s cognitive perspective on the living state, Dretske’s insight into how information theory constrains cognition, the Atlan/Cohen cognitive paradigm, and models of intelligence without representation permit construction of a spectrum of dynamic, necessary-conditions statistical models of signal transduction, regulation, and metabolism at and across the many scales and levels of organisation of an organism and its context. Nonequilibrium critical phenomena analogous to physical phase transitions, driven by crosstalk, will be ubiquitous, representing not only signal switching but also the recruitment of underlying cognitive modules into tunable dynamic coalitions that address changing patterns of need and opportunity at all scales and levels of organisation. The models proposed here, while certainly providing much conceptual insight, should be most useful in the analysis of empirical data, much as fitted regression equations are.