Froese T. & Di Paolo E. A. (2008) Can evolutionary robotics generate simulation models of autopoiesis? Cognitive Science Research Paper 598, University of Sussex. https://cepa.info/5231
There are some signs that a resurgence of interest in modeling constitutive autonomy is underway. This paper contributes to this recent development by exploring the possibility of using evolutionary robotics, traditionally only used as a generative mechanism for the study of embodied-embedded cognitive systems, to generate simulation models of constitutively autonomous systems. Such systems, which are autonomous in the sense that they self-constitute an identity under precarious conditions, have so far been elusive. The challenges and opportunities involved in such an endeavor are explicated in terms of a concrete model. While we conclude that this model fails to fully satisfy all the organizational criteria that are required for constitutive autonomy, it nevertheless serves to illustrate that evolutionary robotics at least has the potential to become a valuable tool for generating such models.
This manuscript was originally written for the Artificial Life XI conference, held in August 2008 in Winchester, UK, but was ultimately rejected both as a paper and as an abstract.
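For readers unfamiliar with the method, evolutionary robotics searches controller parameters with an evolutionary algorithm. The following is a minimal sketch of that bare loop, assuming a generational genetic algorithm with truncation selection; the genome encoding, parameters, and the placeholder fitness function are illustrative and do not reproduce the paper's model.

```python
import random

POP_SIZE, GENS, GENOME_LEN, MUT_STD = 30, 100, 20, 0.1

def fitness(genome):
    # Placeholder objective; the paper's actual fitness would score how
    # well the encoded controller sustains a self-constituted identity.
    return -sum(g * g for g in genome)

def mutate(genome):
    return [g + random.gauss(0.0, MUT_STD) for g in genome]

population = [[random.uniform(-1.0, 1.0) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for generation in range(GENS):
    ranked = sorted(population, key=fitness, reverse=True)
    elite = ranked[: POP_SIZE // 2]                 # truncation selection
    population = elite + [mutate(random.choice(elite)) for _ in elite]

best = max(population, key=fitness)
```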
Froese T. & Ziemke T. (2009) Enactive artificial intelligence: Investigating the systemic organization of life and mind. Artificial Intelligence 173: 466–500. https://cepa.info/279
Some concerns have recently been raised with regard to the sufficiency of current embodied AI for advancing our scientific understanding of intentional agency. While from an engineering or computer science perspective this limitation might not be relevant, it is of course highly relevant for AI researchers striving to build accurate models of natural cognition. We argue that the biological foundations of enactive cognitive science can provide the conceptual tools that are needed to diagnose more clearly the shortcomings of current embodied AI. In particular, taking an enactive perspective points to the need for AI to take seriously the organismic roots of autonomous agency and sense-making. We identify two necessary systemic requirements, namely constitutive autonomy and adaptivity, which lead us to introduce two design principles of enactive AI. It is argued that the development of such enactive AI poses a significant challenge to current methodologies. However, it also provides a promising way of eventually overcoming the current limitations of embodied AI, especially in terms of providing fuller models of natural embodied cognition. Finally, some practical implications and examples of the two design principles of enactive AI are also discussed.
Froese T., Virgo N. & Izquierdo E. (2007) Autonomy: A review and a reappraisal. In: Almeida e Costa F., Rocha L. M., Costa E., Harvey I. & Coutinho A. (eds.) Advances in Artificial Life. 9th European Conference, ECAL 2007. Springer, Berlin: 455–464. https://cepa.info/2678
In the field of artificial life there is no agreement on what defines ‘autonomy’. This makes it difficult to measure progress made towards understanding as well as engineering autonomous systems. Here, we review the diversity of approaches and categorize them by introducing a conceptual distinction between behavioral and constitutive autonomy. Differences in the autonomy of artificial and biological agents tend to be marginalized for the former and treated as absolute for the latter. We argue that with this distinction the apparent opposition can be resolved.
Georgeon O. L. & Aha D. (2013) The radical interactionism conceptual commitment. Journal of Artificial General Intelligence 4(2): 31–36. https://cepa.info/3787
We introduce Radical Interactionism (RI), which extends Franklin et al.’s (2013) Cognitive Cycles as Cognitive Atoms (CCCA) proposal in their discussion on conceptual commitments in cognitive models. Similar to the CCCA commitment, the RI commitment acknowledges the indivisibility of the perception-action cycle. However, it also reifies the perception-action cycle as sensorimotor interaction and uses it to replace the traditional notions of observation and action. This complies with constructivist epistemology, which suggests that knowledge of reality is constructed from regularities observed in sensorimotor experience. We use the LIDA cognitive architecture as an example to examine the implications of RI on cognitive models. We argue that RI permits self-programming and constitutive autonomy, which have been acknowledged as desirable cognitive capabilities in artificial agents.
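A minimal sketch of the RI commitment under illustrative names (none of these identifiers come from LIDA or the paper): the primitive datum handed to the learner is a whole sensorimotor interaction, not an observation followed by a separately chosen action.

```python
import random
from dataclasses import dataclass

@dataclass(frozen=True)
class Interaction:
    experiment: str   # what the agent tried (e.g., "touch")
    result: str       # what it experienced back (e.g., "felt_wall")

def enact(experiment, wall_ahead):
    # Toy world: touching reveals a wall; advancing into one bumps.
    if experiment == "touch":
        return "felt_wall" if wall_ahead else "felt_nothing"
    return "bumped" if wall_ahead else "moved"

history = []
for _ in range(10):
    wall_ahead = random.random() < 0.5
    experiment = random.choice(["touch", "advance"])
    history.append(Interaction(experiment, enact(experiment, wall_ahead)))
# Knowledge is built from `history` alone: regularities over whole
# interactions, never from observations that describe the world directly.
```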
Georgeon O. L., Marshall J. B. & Manzotti R. (2013) ECA: An enactivist cognitive architecture based on sensorimotor modeling. Biologically Inspired Cognitive Architectures 6: 46–57. https://cepa.info/1009
A novel way to model an agent interacting with an environment is introduced, called an Enactive Markov Decision Process (EMDP). An EMDP keeps perception and action embedded within sensorimotor schemes rather than dissociated, in compliance with theories of embodied cognition. Rather than seeking a goal associated with a reward, as in reinforcement learning, an EMDP agent learns to master the sensorimotor contingencies offered by its coupling with the environment. In doing so, the agent exhibits a form of intrinsic motivation related to the autotelic principle (Steels), and a value system attached to interactions called “interactional motivation.” This modeling approach allows the design of agents capable of autonomous self-programming, which provides rudimentary constitutive autonomy – a property that theoreticians of enaction consider necessary for autonomous sense-making (e.g., Froese & Ziemke). A cognitive architecture is presented that allows the agent to discover, memorize, and exploit spatio-sequential regularities of interaction, called Enactive Cognitive Architecture (ECA). In our experiments, behavioral analysis shows that ECA agents develop active perception and begin to construct their own ontological perspective on the environment. Relevance: This publication relates to constructivism by the fact that the agent learns from input data that does not convey ontological information about the environment. That is, the agent learns by actively experiencing its environment through interaction, as opposed to learning by registering observations directly characterizing the environment. This publication also relates to enactivism by the fact that the agent engages in self-programming through its experience from interacting with the environment, rather than executing pre-programmed behaviors.
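A hedged sketch of interactional motivation in an EMDP-style loop, assuming fixed valences attached to whole (experiment, result) interactions; the two-experiment toy world and the valence values are our illustration, not the ECA experiments.

```python
import random
from collections import defaultdict

# Fixed valences over whole interactions (experiment, result): assumed
# values for illustration only.
VALENCE = {("eat", "food"): 5.0, ("eat", "nothing"): -2.0,
           ("move", "food"): 1.0, ("move", "nothing"): -1.0}
EXPERIMENTS = ["eat", "move"]

counts = defaultdict(lambda: defaultdict(int))  # experiment -> result tally

def expected_valence(experiment):
    tally = counts[experiment]
    total = sum(tally.values())
    if total == 0:
        return 0.0                  # unexplored experiments stay neutral
    return sum(VALENCE[(experiment, r)] * n for r, n in tally.items()) / total

for _ in range(200):
    if random.random() < 0.1:       # occasional exploration
        experiment = random.choice(EXPERIMENTS)
    else:                           # otherwise chase expected valence
        experiment = max(EXPERIMENTS, key=expected_valence)
    result = "food" if random.random() < 0.3 else "nothing"   # toy world
    counts[experiment][result] += 1
```

Rather than maximizing an external reward, the agent's value system is attached to the interactions themselves, which is the sense in which the motivation is "interactional."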
Georgeon O. L., Mille A. & Gay S. L. (2016) Intelligence artificielle sans données ontologiques sur une réalité présupposée [Artificial intelligence without using ontological data about a presupposed reality]. Intellectica 65: 143–168. https://cepa.info/7341
This paper introduces an original model to provide software agents and robots with the capacity of learning by interpreting regularities in their stream of sensorimotor experience rather than by exploiting data that would give them ontological information about a predefined domain. Specifically, this model pulls inspiration from: a) the movement of embodied cognition, b) the philosophy of knowledge, c) constructivist epistemology, and d) the theory of enaction. Respectively to these four influences: a) Our agents discover their environment through their body’s active capacity of experimentation. b) They do not know their environment “as such” but only “as they can experience it.” c) They construct knowledge from regularities of sensorimotor experience. d) They have some level of constitutive autonomy. Technically, this model differs from the traditional perception/cognition/action model in that it rests upon atomic sensorimotor experiences rather than separating percepts from actions. We present algorithms that implement this model, and we describe experiments to validate these algorithms. These experiments show that the agents exhibit a certain form of intelligence through their behaviors, as they construct proto-ontological knowledge of the phenomena that appear to them when they observe persistent possibilities of sensorimotor experiences in time and space. These results promote a theory of artificial intelligence without ontological data about a presupposed reality. An application includes a more robust way of creating robots capable of constructing their own knowledge and goals in the real world, which could be initially unknown to them and un-modeled by their designers.
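As an illustration of how regularities of sensorimotor experience might be promoted to reusable structures, the following sketch counts recurring adjacent pairs of interactions; the stream, threshold, and promotion rule are assumptions for exposition, not the paper's algorithms.

```python
from collections import Counter

# A short toy stream of enacted (experiment, result) interactions.
stream = [("advance", "moved"), ("advance", "bumped"), ("turn", "turned"),
          ("advance", "moved"), ("advance", "bumped"), ("turn", "turned"),
          ("advance", "moved")]

pairs = Counter(zip(stream, stream[1:]))   # adjacent-pair frequencies
THRESHOLD = 2                              # assumed promotion threshold
composites = [pair for pair, n in pairs.items() if n >= THRESHOLD]
# e.g. (("advance", "bumped"), ("turn", "turned")) becomes a single
# composite structure the agent can later intend as one unit.
```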
Guckelsberger C. & Salge C. (2016) Does empowerment maximisation allow for enactive artificial agents? In: Gershenson C., Froese T., Siqueiros J. M., Aguilar W., Izquierdo E. J. & Sayama H. (eds.) Proceedings of the Fifteenth International Conference on the Synthesis and Simulation of Living Systems (Alife 2016). MIT Press, Cambridge MA: 704–711. https://cepa.info/4347
The enactive AI framework wants to overcome the sense-making limitations of embodied AI by drawing on the bio-systemic foundations of enactive cognitive science. While embodied AI tries to ground meaning in sensorimotor interaction, enactive AI adds further requirements by grounding sensorimotor interaction in autonomous agency. At the core of this shift is the requirement for a truly intrinsic value function. We suggest that empowerment, an information-theoretic quantity based on an agent’s embodiment, represents such a function. We highlight the role of empowerment maximisation in satisfying the requirements of enactive AI, i.e., establishing constitutive autonomy and adaptivity, in detail. We then argue that empowerment, grounded in a precarious existence, allows an agent to enact a world based on the relevance of environmental features in respect to its own identity.
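Empowerment, as used in this literature, is the channel capacity from an agent's action sequences to its future sensor states. A minimal sketch follows, assuming a deterministic toy gridworld, where that capacity reduces to the log of the number of reachable states; the grid, horizon, and move set are our assumptions, and the general stochastic case would need an iterative capacity estimator instead.

```python
from itertools import product
from math import log2

MOVES = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}
SIZE = 5  # 5x5 grid; moves into the border are absorbed

def step(state, action):
    x, y = state
    dx, dy = MOVES[action]
    nx, ny = x + dx, y + dy
    return (nx, ny) if 0 <= nx < SIZE and 0 <= ny < SIZE else state

def empowerment(state, n):
    # Deterministic dynamics: capacity = log2(#states reachable in n steps).
    reachable = set()
    for seq in product(MOVES, repeat=n):
        s = state
        for a in seq:
            s = step(s, a)
        reachable.add(s)
    return log2(len(reachable))

print(empowerment((2, 2), 2))  # grid center: most options, higher value
print(empowerment((0, 0), 2))  # corner: fewer reachable states, lower value
```

The sketch illustrates why empowerment can serve as an intrinsic value function: it is computed entirely from the agent's own action-perception coupling, with no externally supplied reward.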
Rocha V., Brandao L., Nogueira Y., Cavalcante-Neto J. & Vidal C. (2021) Autonomous foraging of virtual characters with a constructivist cognitive architecture. In: Proceedings of the Symposium on Virtual and Augmented Reality (SVR’21). Association for Computing Machinery, New York NY: 101–110.
Immersive experiences in virtual reality simulations require natural-looking virtual characters. Autonomy researchers argue that an agent’s behavior can only be modeled from its own experience. In this regard, the Constitutive Autonomy through Self-programming Hypothesis (CASH) is an effective approach to implement this model. In this paper, we contribute to the discussion of CASH within dynamic and continuous environments by developing mechanisms of memory decay, contradiction penalty, and relative valence. These improvements let the agent continuously reevaluate its learned schemas. The results show that our agents autonomously developed plausible behaviors despite the changing environment.
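A minimal sketch of how the three mechanisms named above might interact, under assumed update rules of our own devising (the paper's exact formulas and data structures are not reproduced here): decay fades unused schemas, contradiction penalises failed predictions, and valence is taken relative to a running average of outcomes.

```python
DECAY, PENALTY = 0.95, 2.0   # assumed constants, for illustration only

class Schema:
    def __init__(self, experiment, expected_result, valence=0.0):
        self.experiment = experiment
        self.expected_result = expected_result
        self.weight = 1.0
        self.valence = valence

def reevaluate(schemas, enacted_experiment, enacted_result,
               outcome_value, running_mean):
    for s in schemas:
        s.weight *= DECAY                       # memory decay
        if s.experiment == enacted_experiment:
            if s.expected_result == enacted_result:
                s.weight += 1.0                 # confirmed prediction
            else:
                s.weight -= PENALTY             # contradiction penalty
            # relative valence: outcome scored against the recent average
            s.valence = outcome_value - running_mean
    return [s for s in schemas if s.weight > 0.0]  # forget dead schemas
```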