Georgeon O. L. & Aha D. (2013) The Radical Interactionism Conceptual Commitment. Journal of Artificial General Intelligence 4(2): 31–36. https://cepa.info/3787
We introduce Radical Interactionism (RI), which extends Franklin et al.’s (2013) Cognitive Cycles as Cognitive Atoms (CCCA) proposal in their discussion on conceptual commitments in cognitive models. Similar to the CCCA commitment, the RI commitment acknowledges the indivisibility of the perception-action cycle. However, it also reifies the perception-action cycle as sensorimotor interaction and uses it to replace the traditional notions of observation and action. This complies with constructivist epistemology, which suggests that knowledge of reality is constructed from regularities observed in sensorimotor experience. We use the LIDA cognitive architecture as an example to examine the implications of RI on cognitive models. We argue that RI permits self-programming and constitutive autonomy, which have been acknowledged as desirable cognitive capabilities in artificial agents.
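The core of the RI commitment can be made concrete with a small data-structure sketch. The sketch below is illustrative only (the names Interaction, experiment, and result are our assumptions, not the paper's or LIDA's API): the primitive record is an indivisible sensorimotor interaction, and no standalone observation or action ever appears.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interaction:
    """An indivisible perception-action cycle, reified as one object."""
    experiment: str  # what the agent tried, e.g. "step"
    result: str      # what it experienced back, e.g. "bump"

# Under RI, the agent's history is a stream of enacted interactions;
# there is no point at which a standalone observation (or action) exists
# apart from the act that produced it.
stream: list[Interaction] = [Interaction("step", "bump")]
```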
Georgeon O. L. & Marshall J. B. (2013) Demonstrating sensemaking emergence in artificial agents: A method and an example. International Journal of Machine Consciousness 5(2): 131–144. https://cepa.info/3786
We propose an experimental method to study the possible emergence of sensemaking in artificial agents. This method involves analyzing the agent’s behavior in a test bed environment that presents regularities in the possibilities of interaction afforded to the agent, while the agent has no presuppositions about the underlying functioning of the environment that explains such regularities. We propose a particular environment that permits such an experiment, called the Small Loop Problem. We argue that the agent’s behavior demonstrates sensemaking if the agent learns to exploit regularities of interaction to fulfill its self-motivation as if it understood (at least partially) the underlying functioning of the environment. As a corollary, we argue that sensemaking and self-motivation come together. We propose a new method, called interactional motivation, to generate self-motivation in an artificial agent. An interactionally motivated agent seeks to perform interactions with predefined positive values and to avoid interactions with predefined negative values. We applied the proposed method for demonstrating sensemaking emergence to a previously implemented agent, and produced example reports that suggest that this agent is capable of a rudimentary form of sensemaking.
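The abstract’s description of interactional motivation suggests a simple selection rule: prefer interactions with predefined positive valence and avoid those with negative valence. Below is a minimal sketch under assumed names (VALENCE, choose_experiment, and the move/turn/bump vocabulary are illustrative, not the authors’ implementation):

```python
# Predefined valences over primitive (experiment, result) interactions.
# Values here are illustrative assumptions.
VALENCE = {
    ("move", "succeed"): 1.0,   # moving forward feels good
    ("move", "bump"): -5.0,     # bumping into a wall feels bad
    ("turn", "succeed"): -0.5,  # turning is mildly unpleasant
}

def choose_experiment(anticipated_result):
    """Pick the experiment whose anticipated interaction has the best valence.

    anticipated_result maps an experiment to the result the agent currently
    expects (learned from regularities; assumed given here).
    """
    experiments = {exp for exp, _ in VALENCE}
    return max(
        experiments,
        key=lambda exp: VALENCE.get((exp, anticipated_result(exp)), 0.0),
    )

# Example: an agent that anticipates bumping if it moves will turn instead.
anticipate = lambda exp: "bump" if exp == "move" else "succeed"
print(choose_experiment(anticipate))  # -> "turn"
```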
Georgeon O., Mille A. & Gay S. (2016) Intelligence artificielle sans données ontologiques sur une réalité présupposée [Artificial intelligence without using ontological data about a presupposed reality]. Intellectica 65: 143–168. https://cepa.info/3662
This paper introduces an original model to provide software agents and robots with the capacity of learning by interpreting regularities in their stream of sensorimotor experience rather than by exploiting data that would give them ontological information about a predefined domain. Specifically, this model draws inspiration from: a) the movement of embodied cognition, b) the philosophy of knowledge, c) constructivist epistemology, and d) the theory of enaction. Corresponding to these four influences: a) Our agents discover their environment through their body’s active capacity of experimentation. b) They do not know their environment “as such” but only “as they can experience it.” c) They construct knowledge from regularities of sensorimotor experience. d) They have some level of constitutive autonomy. Technically, this model differs from the traditional perception/cognition/action model in that it rests upon atomic sensorimotor experiences rather than separating percepts from actions. We present algorithms that implement this model, and we describe experiments to validate these algorithms. These experiments show that the agents exhibit a certain form of intelligence through their behaviors, as they construct proto-ontological knowledge of the phenomena that appear to them when they observe persistent possibilities of sensorimotor experiences in time and space. These results promote a theory of artificial intelligence without ontological data about a presupposed reality. Applications include a more robust way of creating robots capable of constructing their own knowledge and goals in a real world that may be initially unknown to them and un-modeled by their designers.
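To make “resting upon atomic sensorimotor experiences” concrete, here is a minimal sketch assuming a toy representation: each experience is an (experiment, result) pair, and the agent learns regularities as frequencies of which experience follows which. The names and the bigram-counting scheme are our assumptions, not the paper’s algorithms:

```python
from collections import Counter, defaultdict

# An atomic sensorimotor experience: (experiment, result), never split
# into a separate percept and action.
Experience = tuple[str, str]

# Count how often each experience has followed each other experience.
regularities: dict[Experience, Counter] = defaultdict(Counter)

def record(previous: Experience, current: Experience) -> None:
    """Record that `current` was enacted right after `previous`."""
    regularities[previous][current] += 1

def anticipate(previous: Experience):
    """Return the most frequent follow-up experience seen so far, if any."""
    following = regularities.get(previous)
    return following.most_common(1)[0][0] if following else None

record(("move", "bump"), ("turn", "succeed"))
record(("move", "bump"), ("turn", "succeed"))
print(anticipate(("move", "bump")))  # -> ("turn", "succeed")
```

Anticipations learned this way could then feed a valence-based selection rule like the interactional-motivation sketch above, linking regularity learning to self-motivated behavior.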