Georgeon O. L. & Marshall J. B. (2013) Demonstrating sensemaking emergence in artificial agents: A method and an example. International Journal of Machine Consciousness 5(2): 131–144. https://cepa.info/3786
We propose an experimental method to study the possible emergence of sensemaking in artificial agents. This method involves analyzing the agent’s behavior in a test-bed environment that presents regularities in the possibilities of interaction afforded to the agent, while the agent has no presuppositions about the underlying functioning of the environment that explains such regularities. We propose a particular environment that permits such an experiment, called the Small Loop Problem. We argue that the agent’s behavior demonstrates sensemaking if the agent learns to exploit regularities of interaction to fulfill its self-motivation as if it understood (at least partially) the underlying functioning of the environment. As a corollary, we argue that sensemaking and self-motivation come together. We propose a new method to generate self-motivation in an artificial agent, called interactional motivation. An interactionally motivated agent seeks to perform interactions with predefined positive values and to avoid interactions with predefined negative values. We applied the proposed method for demonstrating sensemaking emergence to a previously implemented agent, and produced example reports suggesting that this agent is capable of a rudimentary form of sensemaking.
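The abstract's notion of interactional motivation can be illustrated with a minimal sketch: each possible interaction (an experiment paired with the result it yields) carries a predefined valence, and the agent prefers the experiment whose anticipated interaction has the highest valence. The names, the toy interactions, and the learning rule below are illustrative assumptions, not the authors' implementation.

```python
# Predefined valences attached to interactions, i.e., (experiment, result)
# pairs. The specific interactions here are a hypothetical example.
VALENCE = {
    ("move", "step"): 1,    # moving forward succeeds: satisfying
    ("move", "bump"): -10,  # bumping into a wall: dissatisfying
    ("turn", "turn"): -1,   # turning always succeeds, at a small cost
}

class InteractionallyMotivatedAgent:
    """Seeks interactions with positive valence, avoids negative ones."""

    def __init__(self):
        # Learned expectation: which result each experiment tends to yield.
        self.expected = {}

    def choose_experiment(self):
        # Pick the experiment whose anticipated interaction has the highest
        # predefined valence; unknown experiments default to neutral (0).
        experiments = sorted({e for e, _ in VALENCE})

        def anticipated_valence(e):
            r = self.expected.get(e)
            return VALENCE[(e, r)] if r is not None else 0

        return max(experiments, key=anticipated_valence)

    def learn(self, experiment, result):
        # Remember the most recent result of this experiment.
        self.expected[experiment] = result
```

Note that the agent has no goal state and receives no external reward: its motivation is defined entirely over interactions, in line with the abstract's claim that sensemaking and self-motivation come together.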
Georgeon O. L., Marshall J. B. & Manzotti R. (2013) ECA: An enactivist cognitive architecture based on sensorimotor modeling. Biologically Inspired Cognitive Architectures 6: 46–57. https://cepa.info/1009
A novel way to model an agent interacting with an environment is introduced, called an Enactive Markov Decision Process (EMDP). An EMDP keeps perception and action embedded within sensorimotor schemes rather than dissociated, in compliance with theories of embodied cognition. Rather than seeking a goal associated with a reward, as in reinforcement learning, an EMDP agent learns to master the sensorimotor contingencies offered by its coupling with the environment. In doing so, the agent exhibits a form of intrinsic motivation related to the autotelic principle (Steels), and a value system attached to interactions called “interactional motivation.” This modeling approach allows the design of agents capable of autonomous self-programming, which provides rudimentary constitutive autonomy – a property that theoreticians of enaction consider necessary for autonomous sense-making (e.g., Froese & Ziemke). A cognitive architecture is presented, called Enactive Cognitive Architecture (ECA), that allows the agent to discover, memorize, and exploit spatio-sequential regularities of interaction. In our experiments, behavioral analysis shows that ECA agents develop active perception and begin to construct their own ontological perspective on the environment. Relevance: This publication relates to constructivism in that the agent learns from input data that does not convey ontological information about the environment. That is, the agent learns by actively experiencing its environment through interaction, as opposed to learning by registering observations that directly characterize the environment. This publication also relates to enactivism in that the agent engages in self-programming through its experience of interacting with the environment, rather than executing pre-programmed behaviors.
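The EMDP coupling described in the abstract can be sketched as follows: the agent selects an *intended* interaction, and the environment returns the interaction actually *enacted* (which may differ), with no observation describing environment states ever reaching the agent. The toy one-dimensional world, the interaction names, and the one-step learning rule below are illustrative assumptions, not the paper's ECA implementation.

```python
from collections import defaultdict

# Predefined valences of enacted interactions (interactional motivation).
VALENCE = {"step": 1, "bump": -10, "turn": -1}

def environment(state, intended):
    """Toy world of 3 cells with a wall at the right end. Returns the new
    (hidden) state and the interaction actually enacted."""
    if intended == "step":
        if state < 2:
            return state + 1, "step"  # enacted as intended
        return state, "bump"          # intention fails against the wall
    return state, "turn"              # turning always succeeds in place

class EMDPAgent:
    """Learns which intended interaction yields the best enacted valence,
    conditioned on the previously enacted interaction -- a minimal
    sensorimotor regularity, with no access to environment states."""

    def __init__(self):
        # context (previous enacted) -> intended -> enacted last time
        self.outcome = defaultdict(dict)
        self.previous = None

    def choose(self):
        known = self.outcome[self.previous]
        candidates = ["step", "turn"]
        return max(candidates, key=lambda i: VALENCE.get(known.get(i), 0))

    def record(self, intended, enacted):
        self.outcome[self.previous][intended] = enacted
        self.previous = enacted

# Coupling loop: after a few bumps at the wall, the agent learns to turn
# instead, without ever having been told that a wall exists.
agent, state, history = EMDPAgent(), 0, []
for _ in range(20):
    intended = agent.choose()
    state, enacted = environment(state, intended)
    agent.record(intended, enacted)
    history.append(enacted)
```

In this sketch the agent's "ontology" is nothing but its table of context-dependent interaction outcomes, echoing the abstract's claim that the input data conveys no ontological information about the environment.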