de Carvalho L. L., Pereira D. J. & Coelho S. A. (2016) Origins and evolution of enactive cognitive science: Toward an enactive cognitive architecture. Biologically Inspired Cognitive Architectures 16: 169–178. https://cepa.info/7223
This paper presents a historical perspective on the origins of the enactive approach to cognitive science, starting chronologically from cybernetics, with the aim of clarifying its main concepts, such as enaction, autopoiesis, structural coupling and natural drift, and thus showing their influence on computational approaches and models of cognitive architecture. Works of renowned authors, as well as some of their main commentators, are addressed to trace the development of the enactive approach. We indicate that the enactive approach transcends its original context within biology, and later within connectionism, changing the understanding of the relationships so far established between the body and the environment, as well as of the conceptual relationships between the mind and the body. Its influence on computational theories is of great importance, leading to new artificial intelligence systems as well as to the proposition of complex, autopoietic and living machines. Finally, the article stresses the importance of the enactive approach in the design of agents, arguing that previous approaches have very different cognitive architectures and that a prototypical model of an enactive cognitive architecture is one of the largest challenges today.
Emery N. J. & Clayton N. S. (2008) Imaginative scrub-jays, causal rooks, and a liberal application of Occam’s aftershave. Behavioral and Brain Sciences 31(2): 134–135.
We address the claim that nonhuman animals do not represent unobservable states, based on studies of physical cognition by rooks and social cognition by scrub-jays. In both cases, the most parsimonious explanation for the results is counter to the reinterpretation hypothesis. We suggest that imagination and prospection can be investigated in animals and included in models of cognitive architecture.
Georgeon O. L. & Aha D. (2013) The Radical Interactionism Conceptual Commitment. Journal of Artificial General Intelligence 4(2): 31–36. https://cepa.info/3787
We introduce Radical Interactionism (RI), which extends Franklin et al.’s (2013) Cognitive Cycles as Cognitive Atoms (CCCA) proposal in their discussion on conceptual commitments in cognitive models. Similar to the CCCA commitment, the RI commitment acknowledges the indivisibility of the perception-action cycle. However, it also reifies the perception-action cycle as sensorimotor interaction and uses it to replace the traditional notions of observation and action. This complies with constructivist epistemology, which suggests that knowledge of reality is constructed from regularities observed in sensorimotor experience. We use the LIDA cognitive architecture as an example to examine the implications of RI on cognitive models. We argue that RI permits self-programming and constitutive autonomy, which have been acknowledged as desirable cognitive capabilities in artificial agents.
Georgeon O. L. & Ritter F. E. (2012) An intrinsically-motivated schema mechanism to model and simulate emergent cognition. Cognitive Systems Research 15–16: 73–92.
We introduce an approach to simulate the early mechanisms of emergent cognition based on theories of enactive cognition and on constructivist epistemology. The agent has intrinsic motivations implemented as inborn proclivities that drive the agent in a proactive way. Following these drives, the agent autonomously learns regularities afforded by the environment, and hierarchical sequences of behaviors adapted to these regularities. The agent represents its current situation in terms of perceived affordances that develop through the agent’s experience. This situational representation works as an emerging situation awareness that is grounded in the agent’s interaction with its environment and that in turn generates expectations and activates adapted behaviors. Through its activity and these aspects of behavior (behavioral proclivity, situation awareness, and hierarchical sequential learning), the agent starts to exhibit emergent sensibility, intrinsic motivation, and autonomous learning. Following theories of cognitive development, we argue that this initial autonomous mechanism provides a basis for implementing autonomously developing cognitive systems.
Georgeon O. L., Marshall J. B. & Manzotti R. (2013) ECA: An enactivist cognitive architecture based on sensorimotor modeling. Biologically Inspired Cognitive Architectures 6: 46–57. https://cepa.info/1009
A novel way to model an agent interacting with an environment is introduced, called an Enactive Markov Decision Process (EMDP). An EMDP keeps perception and action embedded within sensorimotor schemes rather than dissociated, in compliance with theories of embodied cognition. Rather than seeking a goal associated with a reward, as in reinforcement learning, an EMDP agent learns to master the sensorimotor contingencies offered by its coupling with the environment. In doing so, the agent exhibits a form of intrinsic motivation related to the autotelic principle (Steels), and a value system attached to interactions called “interactional motivation.” This modeling approach allows the design of agents capable of autonomous self-programming, which provides rudimentary constitutive autonomy – a property that theoreticians of enaction consider necessary for autonomous sense-making (e.g., Froese & Ziemke). A cognitive architecture is presented that allows the agent to discover, memorize, and exploit spatio-sequential regularities of interaction, called Enactive Cognitive Architecture (ECA). In our experiments, behavioral analysis shows that ECA agents develop active perception and begin to construct their own ontological perspective on the environment. Relevance: This publication relates to constructivism by the fact that the agent learns from input data that does not convey ontological information about the environment. That is, the agent learns by actively experiencing its environment through interaction, as opposed to learning by registering observations directly characterizing the environment. This publication also relates to enactivism by the fact that the agent engages in self-programming through its experience from interacting with the environment, rather than executing pre-programmed behaviors.
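The EMDP idea summarized above can be illustrated with a compact sketch. The following toy example is not from the paper; the environment, names, and the one-step expectation mechanism are invented for illustration only. It shows an agent that chooses among primitive sensorimotor experiments, each of whose outcomes carries a predefined valence ("interactional motivation"), and that learns from enacted interactions which results its coupling with the environment actually affords, rather than pursuing an external reward.

```python
# Illustrative EMDP-style loop (hypothetical sketch, not the authors' code).
# An "interaction" couples an experiment (intended act) with a result,
# and carries a valence that drives behavior (interactional motivation).

VALENCE = {("move", "ok"): 1, ("move", "bump"): -2,
           ("turn", "ok"): 0}

class ToyEnv:
    """Toy environment: moving fails on every third step (a wall)."""
    def __init__(self):
        self.step = 0

    def enact(self, experiment):
        self.step += 1
        if experiment == "move" and self.step % 3 == 0:
            return "bump"
        return "ok"

class EmdpAgent:
    def __init__(self):
        # Learned expectation: experiment -> last observed result.
        self.expectation = {}

    def choose(self):
        # Prefer the experiment whose *predicted* interaction has the
        # highest valence; untried experiments get an optimistic bonus.
        def predicted_valence(exp):
            result = self.expectation.get(exp)
            if result is None:
                return 0.5
            return VALENCE[(exp, result)]
        return max(["move", "turn"], key=predicted_valence)

    def learn(self, experiment, result):
        self.expectation[experiment] = result

env, agent = ToyEnv(), EmdpAgent()
trace = []
for _ in range(6):
    exp = agent.choose()
    res = env.enact(exp)
    agent.learn(exp, res)
    trace.append((exp, res))
print(trace)
```

In this sketch the agent abandons "move" after its first bump, because its one-step memory makes the negative interaction seem certain; the ECA architecture described in the paper avoids such degenerate behavior by learning hierarchical, spatio-sequential composite interactions rather than a single last-result expectation.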
Georgeon O., Mille A. & Gay S. (2016) Intelligence artificielle sans données ontologiques sur une réalité présupposée [Artificial intelligence without using ontological data about a presupposed reality]. Intellectica 65: 143–168. https://cepa.info/3662
This paper introduces an original model to provide software agents and robots with the capacity of learning by interpreting regularities in their stream of sensorimotor experience rather than by exploiting data that would give them ontological information about a predefined domain. Specifically, this model draws inspiration from: a) the movement of embodied cognition, b) the philosophy of knowledge, c) constructivist epistemology, and d) the theory of enaction. With respect to these four influences, respectively: a) Our agents discover their environment through their body’s active capacity of experimentation. b) They do not know their environment “as such” but only “as they can experience it.” c) They construct knowledge from regularities of sensorimotor experience. d) They have some level of constitutive autonomy. Technically, this model differs from the traditional perception/cognition/action model in that it rests upon atomic sensorimotor experiences rather than separating percepts from actions. We present algorithms that implement this model, and we describe experiments to validate these algorithms. These experiments show that the agents exhibit a certain form of intelligence through their behaviors, as they construct proto-ontological knowledge of the phenomena that appear to them when they observe persistent possibilities of sensorimotor experiences in time and space. These results promote a theory of artificial intelligence without ontological data about a presupposed reality. Applications include a more robust way of creating robots capable of constructing their own knowledge and goals in the real world, which could be initially unknown to them and un-modeled by their designers.
Kirschner P. A., Sweller J. & Clark R. E. (2006) Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist 41(2): 75–86. https://cepa.info/3773
Evidence for the superiority of guided instruction is explained in the context of our knowledge of human cognitive architecture, expert–novice differences, and cognitive load. Although unguided or minimally guided instructional approaches are very popular and intuitively appealing, the point is made that these approaches ignore both the structures that constitute human cognitive architecture and evidence from empirical studies over the past half-century that consistently indicates that minimally guided instruction is less effective and less efficient than instructional approaches that place a strong emphasis on guidance of the student learning process. The advantage of guidance begins to recede only when learners have sufficiently high prior knowledge to provide “internal” guidance. Recent developments in instructional research and instructional design models that support guidance during instruction are briefly described.
Lenartowicz M., Weinbaum D. & Braathen P. (2016) The individuation of social systems: A cognitive framework. Procedia Computer Science 88: 15–20. https://cepa.info/4759
We present a socio-human cognitive framework that radically deemphasizes the role of individual human agents required for both the formation of social systems and their ongoing operation thereafter. Our point of departure is Simondon’s (1992) theory of individuation, which we integrate with the enactive theory of cognition (Di Paolo et al., 2010) and Luhmann’s (1996) theory of social systems. This forges a novel view of social systems as complex, individuating sequences of communicative interactions that together constitute distributed yet distinct cognitive agencies, acquiring a capacity to exert influence over their human-constituted environment. We conclude that the resulting framework suggests several different paths of integrating AI agents into human society. One path suggests the emulation of a largely simplified version of the human mind, reduced in its functions to a specific triple selection-making that is necessary for sustaining social systems. Another conceives AI systems that follow the distributed, autonomous architecture of social systems rather than that of humans.
Sandini G., Metta G. & Vernon D. (2007) The iCub cognitive humanoid robot: An open-system research platform for enactive cognition. In: Lungarella M., Iida F., Bongard J. & Pfeifer R. (eds.) 50 Years of AI. Springer-Verlag, Berlin: 358–369. https://cepa.info/7235
This paper describes a multi-disciplinary initiative to promote collaborative research in enactive artificial cognitive systems by developing the iCub: an open-systems, 53 degree-of-freedom cognitive humanoid robot. At 94 cm tall, the iCub is the same size as a three-year-old child. It will be able to crawl on all fours and sit up, its hands will allow dexterous manipulation, and its head and eyes are fully articulated. It has visual, vestibular, auditory, and haptic sensory capabilities. As an open system, the design and documentation of all hardware and software is licensed under the Free Software Foundation GNU licences so that the system can be freely replicated and customized. We begin this paper by outlining the enactive approach to cognition, drawing out the implications for phylogenetic configuration, the necessity of ontogenetic development, and the importance of humanoid embodiment. This is followed by a short discussion of our motivation for adopting an open-systems approach. We proceed to describe the iCub’s mechanical and electronic specifications, its software architecture, and its cognitive architecture. We conclude by discussing the iCub phylogeny, i.e. the robot’s intended innate abilities, and a scenario for ontogenesis based on human neonatal development.