Olivier L. Georgeon is currently an associate researcher at the LIRIS Lab with a fellowship from the French government (ANR-RPDOC program). He received a master’s in computer engineering from the École Centrale de Marseille in 1988, and a PhD in psychology from the Université de Lyon in 2008.
Georgeon O. L. (2014) Learning by Experiencing versus Learning by Registering. Constructivist Foundations 9(2): 211–213. https://constructivist.info/9/2/211
Open peer commentary on the article “Subsystem Formation Driven by Double Contingency” by Bernd Porr & Paolo Di Prodi. Upshot: Agents that learn from perturbations of closed control loops are considered constructivist by virtue of the fact that their input (the perturbation) does not convey ontological information about the environment. That is, they learn by actively experiencing their environment through interaction, as opposed to learning by directly registering input data that characterizes the environment. Generalizing this idea, the notion of learning by experiencing provides a broader conceptual framework than cybernetic control theory for studying the double contingency problem, and may yield more progress in constructivist agent design.
Georgeon O. L. (2017) Little AI: Playing a constructivist robot. SoftwareX 6: 161–164. https://cepa.info/5866
Little AI is a pedagogical game aimed at presenting the founding concepts of constructivist learning and developmental Artificial Intelligence. It primarily targets students in computer science and cognitive science, but it may also interest members of the general public who are curious about these topics. It requires no particular scientific background; even children can find it entertaining. Professors can use it as a pedagogical resource in class or in online courses. The player presses buttons to control a simulated “baby robot”. The player cannot see the robot and its environment, and initially does not know the effects of the commands. The only information the player receives is feedback on the player’s commands. The player must simultaneously learn the functioning of the robot’s body and the structure of the environment from patterns in the stream of commands and feedback. We argue that this situation is analogous to how infants engage in early-stage developmental learning (e.g., Piaget 1937).
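To make the setup concrete, the following is a minimal sketch of the kind of interaction loop Little AI confronts the player with (hypothetical names and world, not the game’s actual code): the environment is hidden, and the only observable information is the feedback to each command.

```python
import random

# Minimal sketch of a Little AI-style session (hypothetical, not the
# game's actual API). The world is hidden from the player; the only
# observable information is the feedback to each command.

HIDDEN_WORLD = ["wall", "empty", "empty", "wall"]  # invisible to the player

class BabyRobot:
    def __init__(self):
        self.position = 1

    def execute(self, command: str) -> str:
        """Return feedback only; never expose the world or the position."""
        if command == "forward":
            if HIDDEN_WORLD[self.position + 1] == "wall":
                return "bump"
            self.position += 1
            return "moved"
        if command == "touch":
            return "hard" if HIDDEN_WORLD[self.position + 1] == "wall" else "soft"
        return "unknown"

robot = BabyRobot()
trace = []
for _ in range(5):
    command = random.choice(["forward", "touch"])
    feedback = robot.execute(command)
    trace.append((command, feedback))  # all the player ever gets to see
print(trace)
```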
Georgeon O. L. & Aha D. (2013) The Radical Interactionism Conceptual Commitment. Journal of Artificial General Intelligence 4(2): 31–36. https://cepa.info/3787
We introduce Radical Interactionism (RI), which extends Franklin et al.’s (2013) Cognitive Cycles as Cognitive Atoms (CCCA) proposal in their discussion of conceptual commitments in cognitive models. Similar to the CCCA commitment, the RI commitment acknowledges the indivisibility of the perception-action cycle. However, it also reifies the perception-action cycle as sensorimotor interaction and uses it to replace the traditional notions of observation and action. This complies with constructivist epistemology, which suggests that knowledge of reality is constructed from regularities observed in sensorimotor experience. We use the LIDA cognitive architecture as an example to examine the implications of RI for cognitive models. We argue that RI permits self-programming and constitutive autonomy, which have been acknowledged as desirable cognitive capabilities in artificial agents.
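As an illustration of this commitment, the sketch below (hypothetical, not LIDA’s actual code) reifies the sensorimotor interaction as the architecture’s primitive datum, in place of separate observation and action types.

```python
from dataclasses import dataclass

# Hypothetical sketch of the RI commitment: the primitive handled by the
# architecture is the sensorimotor interaction itself, not an observation
# paired with an action.

@dataclass(frozen=True)
class Interaction:
    """An indivisible perception-action cycle, e.g., 'step and feel a bump'."""
    experiment: str  # what the agent tried
    result: str      # how the attempt played out

# Knowledge is regularities over interactions, not a model built from
# standalone observations: the same experiment with a different result
# is a different interaction.
assert Interaction("step", "bump") != Interaction("step", "moved")
```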
Georgeon O. L. & Boltuc P. (2016) Circular Constitution of Observation in the Absence of Ontological Data. Constructivist Foundations 12(1): 17–19. https://cepa.info/3796
Open peer commentary on the article “Circularity and the Micro-Macro-Difference” by Manfred Füllsack. Upshot: We join Füllsack in his effort to untangle the concepts of circular causation, macro states, and observation by reanalyzing one of our own simulations in the light of these concepts. This simulation presents an example agent that keeps track of its own macro states. We examine how human observers (experimenters and readers of this commentary) can regard such an agent as an observing agent in its own right.
Georgeon O. L. & Cordier A. (2014) Inverting the interaction cycle to model embodied agents. Procedia Computer Science 41: 243–248. https://cepa.info/3785
Cognitive architectures should make explicit the conceptual beginning and end points of the agent/environment interaction cycle. Most architectures begin with the agent receiving input data representing the environment, and end with the agent sending output data. This paper suggests inverting this cycle: the agent sends output data that specifies an experiment, and receives input data that represents the result of this experiment. This complies with the embodiment paradigm because the input data does not directly represent the environment and does not amount to the agent’s perception. We illustrate this with an example and propose an assessment method based upon activity-trace analysis.
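A minimal sketch of the inverted cycle (hypothetical interface names): the agent opens each cycle by choosing an experiment and closes it by receiving the result, rather than opening it by receiving input data.

```python
# Hypothetical sketch of the inverted interaction cycle: the agent opens
# the cycle with an experiment; the environment closes it with a result.

class Environment:
    def enact(self, experiment: str) -> str:
        """Return the result of the experiment, not a description of the world."""
        return "success" if experiment == "e1" else "failure"

class Agent:
    def choose_experiment(self) -> str:
        return "e1"  # placeholder policy

    def record(self, experiment: str, result: str) -> None:
        print(f"experiment {experiment} yielded {result}")

env, agent = Environment(), Agent()
for _ in range(3):
    experiment = agent.choose_experiment()  # the cycle begins with output
    result = env.enact(experiment)          # and ends with input
    agent.record(experiment, result)
```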
Georgeon O. L. & Guillermin M. (2018) Mastering the Laws of Feedback Contingencies Is Essential to Constructivist Artificial Agents. Constructivist Foundations 13(2): 300–301. https://cepa.info/4629
Open peer commentary on the article “Plasticity, Granularity and Multiple Contingency – Essentials for Conceiving an Artificial Constructivist Agent” by Manfred Füllsack. Upshot: We support Füllsack’s claim that contingency is essential in the conception of artificial constructivist agents. Linking this claim to O’Regan and Noë’s theory of sensorimotor contingencies, we argue that artificial constructivist agents should master the laws of feedback contingencies. In particular, artificial constructivist agents should process input data as feedback from their actions rather than as percepts representing the environment.
Georgeon O. L. & Hassas S. (2013) Single Agents Can Be Constructivist too. Constructivist Foundations 9(1): 40–42. https://constructivist.info/9/1/040
Open peer commentary on the article “Exploration of the Functional Properties of Interaction: Computer Models and Pointers for Theory” by Etienne B. Roesch, Matthew Spencer, Slawomir J. Nasuto, Thomas Tanay & J. Mark Bishop. Upshot: We support Roesch and his co-authors’ theoretical stance on constructivist artificial agents, and wish to enrich their “exploration of the functional properties of interaction” with complementary results. By revisiting their experiments with an agent that we developed previously, we explore two issues that they deliberately left aside: autonomous intentionality and dynamic reutilization of knowledge by the agent. Our results reveal an alternative pathway to constructivism that addresses the central question of intentionality in a single agent from the very beginning of its design, suggesting that the property of distributed processing proposed by Roesch et al. is not essential to constructivism.
Georgeon O. L. & Marshall J. B. (2013) Demonstrating sensemaking emergence in artificial agents: A method and an example. International Journal of Machine Consciousness 5(2): 131–144. https://cepa.info/3786
We propose an experimental method to study the possible emergence of sensemaking in artificial agents. This method involves analyzing the agent’s behavior in a test bed environment that presents regularities in the possibilities of interaction afforded to the agent, while the agent has no presuppositions about the underlying functioning of the environment that explains such regularities. We propose a particular environment that permits such an experiment, called the Small Loop Problem. We argue that the agent’s behavior demonstrates sensemaking if the agent learns to exploit regularities of interaction to fulfill its self-motivation as if it understood (at least partially) the underlying functioning of the environment. As a corollary, we argue that sensemaking and self-motivation come together. We propose a new method, called interactional motivation, to generate self-motivation in an artificial agent. An interactionally motivated agent seeks to perform interactions with predefined positive values and to avoid interactions with predefined negative values. We applied the proposed demonstration method to a previously implemented agent, and produced example reports suggesting that this agent is capable of a rudimentary form of sensemaking.
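As a simplified illustration of interactional motivation (a hypothetical sketch, not the authors’ implementation), the agent below carries predefined valences attached to whole interactions and selects the experiment whose interactions have the highest learned expected valence.

```python
import random
from collections import defaultdict

# Simplified sketch of interactional motivation (hypothetical, not the
# paper's code). Valences attach to whole interactions (experiment,
# result), not to environment states or external rewards.

VALENCE = {("step", "moved"): 1, ("step", "bump"): -10,
           ("touch", "hard"): -1, ("touch", "soft"): -1}

counts = defaultdict(lambda: defaultdict(int))  # experiment -> result -> count

def expected_valence(experiment):
    observed = counts[experiment]
    total = sum(observed.values())
    if total == 0:
        return 0  # untried experiments are neutral, which encourages exploration
    return sum(VALENCE[(experiment, r)] * n for r, n in observed.items()) / total

def choose():
    return max(["step", "touch"], key=expected_valence)

def environment(experiment):
    # Toy environment: stepping bumps 30% of the time.
    if experiment == "step":
        return "bump" if random.random() < 0.3 else "moved"
    return random.choice(["hard", "soft"])

for _ in range(100):
    e = choose()
    counts[e][environment(e)] += 1
print({e: dict(r) for e, r in counts.items()})
```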
Georgeon O. L. & Ritter F. E. (2012) An intrinsically-motivated schema mechanism to model and simulate emergent cognition. Cognitive Systems Research 15–16: 73–92.
We introduce an approach to simulate the early mechanisms of emergent cognition based on theories of enactive cognition and on constructivist epistemology. The agent has intrinsic motivations implemented as inborn proclivities that drive the agent in a proactive way. Following these drives, the agent autonomously learns regularities afforded by the environment, and hierarchical sequences of behaviors adapted to these regularities. The agent represents its current situation in terms of perceived affordances that develop through the agent’s experience. This situational representation works as an emerging situation awareness that is grounded in the agent’s interaction with its environment and that in turn generates expectations and activates adapted behaviors. Through its activity and these aspects of behavior (behavioral proclivity, situation awareness, and hierarchical sequential learning), the agent starts to exhibit emergent sensibility, intrinsic motivation, and autonomous learning. Following theories of cognitive development, we argue that this initial autonomous mechanism provides a basis for implementing autonomously developing cognitive systems.
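A much-simplified sketch of how hierarchical sequential learning might proceed (hypothetical, not the paper’s schema mechanism): when two interactions are enacted in succession, the agent records a composite schema whose weight grows each time the sequence recurs, and the strongest composite matching the current context is proposed for enaction.

```python
from collections import defaultdict

# Much-simplified sketch of hierarchical sequence learning (hypothetical,
# not the paper's schema mechanism). Consecutive pairs of enacted
# interactions are reified as composite schemas with reinforcement weights.

composites = defaultdict(int)  # (interaction, interaction) -> weight

def learn(trace):
    """Record every consecutive pair in the enacted trace as a composite."""
    for first, second in zip(trace, trace[1:]):
        composites[(first, second)] += 1

def propose(context):
    """Activate the strongest composite that begins in the current context."""
    candidates = {c: w for c, w in composites.items() if c[0] == context}
    return max(candidates, key=candidates.get)[1] if candidates else None

learn(["touch-hard", "turn", "touch-soft", "step", "touch-hard", "turn"])
print(propose("touch-hard"))  # -> "turn", the learned continuation
```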
Georgeon O. L., Marshall J. B. & Manzotti R. (2013) ECA: An enactivist cognitive architecture based on sensorimotor modeling. Biologically Inspired Cognitive Architectures 6: 46–57. https://cepa.info/1009
A novel way to model an agent interacting with an environment is introduced, called an Enactive Markov Decision Process (EMDP). An EMDP keeps perception and action embedded within sensorimotor schemes rather than dissociated, in compliance with theories of embodied cognition. Rather than seeking a goal associated with a reward, as in reinforcement learning, an EMDP agent learns to master the sensorimotor contingencies offered by its coupling with the environment. In doing so, the agent exhibits a form of intrinsic motivation related to the autotelic principle (Steels), and a value system attached to interactions called “interactional motivation.” This modeling approach allows the design of agents capable of autonomous self-programming, which provides rudimentary constitutive autonomy – a property that theoreticians of enaction consider necessary for autonomous sense-making (e.g., Froese & Ziemke). A cognitive architecture is presented that allows the agent to discover, memorize, and exploit spatio-sequential regularities of interaction, called Enactive Cognitive Architecture (ECA). In our experiments, behavioral analysis shows that ECA agents develop active perception and begin to construct their own ontological perspective on the environment. Relevance: This publication relates to constructivism by the fact that the agent learns from input data that does not convey ontological information about the environment. That is, the agent learns by actively experiencing its environment through interaction, as opposed to learning by registering observations that directly characterize the environment. This publication also relates to enactivism by the fact that the agent engages in self-programming through its experience of interacting with the environment, rather than executing pre-programmed behaviors.
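Under our reading of the EMDP formalism, the coupling can be sketched as follows (hypothetical names, much simplified): the agent tries to enact an intended interaction, and the environment returns the interaction actually enacted, which may differ; value attaches to the enacted interaction itself.

```python
# Hypothetical sketch of an Enactive Markov Decision Process coupling.
# The agent receives no percepts: it intends an interaction and is told
# which interaction was actually enacted.

VALENCE = {"step-moved": 1, "step-bumped": -10, "feel-hard": -1, "feel-soft": -1}

def enact(intended: str, wall_ahead: bool) -> str:
    """The environment resolves the intended interaction into an enacted one."""
    if intended.startswith("step"):
        return "step-bumped" if wall_ahead else "step-moved"
    return "feel-hard" if wall_ahead else "feel-soft"

intended = "step-moved"                     # the agent intends to step and move
enacted = enact(intended, wall_ahead=True)  # but actually enacts "step-bumped"
print(enacted, VALENCE[enacted])            # value attaches to the enacted interaction
```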