My main interests are in interactive artificial intelligence, virtual reality and their respective links with cognitive science. In conjunction with the development of autonomous models for participative simulation, I'm interested in the foundations of autonomy. This work led me to the embodied cognition field of cognitive science and to the open question of possible connections between this field and computer simulation, virtual reality or artificial intelligence. In this perspective I'm also interested in phenomenology, constructivism and art. I teach software engineering, computer science and artificial intelligence at ENIB (an engineering school) and at UBO (the University of Brest).
De Loor P. (2018) Three Other Challenges for Artificial Constructivist Agent from an Enactive Perspective. Constructivist Foundations 13(2): 298–300. https://cepa.info/4628
Open peer commentary on the article “Plasticity, Granularity and Multiple Contingency - Essentials for Conceiving an Artificial Constructivist Agent” by Manfred Füllsack. Upshot: This commentary aims to deepen the proposition made by Füllsack about the possibility of minimal constructivist agents from an enactive perspective. Even though I agree with the main purpose of his target article, I maintain that it could be interesting to go further and to introduce notions such as boundary, viability, multi-scale modelling and phenomenal domain separation. I also argue that the examples and models proposed in the target article are challenged by these concepts.
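As a purely illustrative sketch of the notions of boundary and viability invoked above (not a model proposed in the commentary or in the target article), one can picture a minimal agent whose single essential variable must stay inside a viability zone; the Python names, thresholds and update rules below are all assumptions.

```python
import random

class MinimalViabilityAgent:
    """Toy agent with one essential variable that must stay inside a viability zone.
    Crossing either bound of the zone stands in for the dissolution of the agent's boundary."""

    def __init__(self, low=0.2, high=0.8):
        self.low, self.high = low, high
        self.energy = 0.5                            # essential variable

    def viable(self):
        return self.low <= self.energy <= self.high

    def act(self, environment_yield):
        """One coupling step: metabolism consumes energy, the environment may contingently return some."""
        self.energy -= 0.05                          # metabolic cost
        if random.random() < environment_yield:      # contingent coupling with the environment
            self.energy += 0.1
        return self.viable()

# Run the agent until it loses viability or 100 steps have elapsed.
agent, steps = MinimalViabilityAgent(), 0
while agent.act(environment_yield=0.6) and steps < 100:
    steps += 1
```

Richer versions of such a sketch would add the multi-scale dynamics and the separation of phenomenal domains argued for in the commentary.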
De Loor P., Manac’h K. & Chevaillier P. (2014) The memorization of in-line sensorimotor invariants: Toward behavioral ontogeny and enactive agents. Artificial Life and Robotics 19(2): 127–135. https://cepa.info/4552
This paper presents a behavioral ontogeny for artificial agents based on the interactive memorization of sensorimotor invariants. The agents are controlled by continuous-time recurrent neural networks (CTRNNs), which bind their sensors and motors within a dynamical system. The behavioral ontogenesis relies on a phylogenetic approach: memorization occurs during the agent’s lifetime, while an evolutionary algorithm discovers the CTRNN parameters. We show that sensorimotor invariants can be durably modified through interaction with a guiding agent. Once this phase is over, agents are able to adopt new sensorimotor invariants relative to the environment with no further guidance. We obtained these kinds of behaviors for CTRNNs with 3–6 units, and the paper examines how those CTRNNs function. For instance, they are able to internally simulate guidance when it is externally absent, in line with simulation theories in neuroscience and the enactive field of cognitive science.
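As a hedged illustration of the kind of controller described above (not the authors' implementation), here is a minimal CTRNN with Euler integration; the unit count, parameter ranges and time step are assumptions, and in the paper the weights, biases and time constants would be the parameters discovered by the evolutionary algorithm rather than drawn at random.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class CTRNN:
    """Minimal continuous-time recurrent neural network, integrated with the Euler method."""

    def __init__(self, n_units=4, dt=0.01, rng=None):
        rng = np.random.default_rng(rng)
        self.dt = dt
        self.y = np.zeros(n_units)                            # neuron states
        self.tau = rng.uniform(0.5, 2.0, n_units)             # time constants
        self.bias = rng.uniform(-1.0, 1.0, n_units)
        self.w = rng.uniform(-2.0, 2.0, (n_units, n_units))   # recurrent weights

    def step(self, external_input):
        """Advance the network by one time step given the current sensor input."""
        activation = sigmoid(self.y + self.bias)
        dy = (-self.y + self.w @ activation + external_input) / self.tau
        self.y = self.y + self.dt * dy
        return sigmoid(self.y + self.bias)                    # motor read-out

# Example: drive a 4-unit network with a constant sensor signal.
net = CTRNN(n_units=4, rng=0)
sensor = np.array([0.5, 0.0, 0.0, 0.0])
for _ in range(100):
    motors = net.step(sensor)
```

In the approach summarised above, a guided interaction phase would then determine which sensorimotor invariants such a network stabilises during the agent's lifetime.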
De Loor P., Manac’h K. & Tisseau J. (2009) Enaction-based artificial intelligence: Toward co-evolution with humans in the loop. Minds & Machines 19(3): 319–343. https://cepa.info/4547
This article deals with the links between the enaction paradigm and artificial intelligence. Enaction is considered a metaphor for artificial intelligence, as a number of the notions with which it deals are deemed incompatible with the phenomenal field of the virtual. After explaining this stance, we review previous work on this issue in artificial life and robotics, focusing on the lack of recognition of co-evolution at the heart of these approaches. We propose to explicitly integrate the evolution of the environment into our approach in order to refine the ontogenesis of the artificial system, and to compare it with the enaction paradigm. The growing complexity of the ontogenetic mechanisms to be activated can therefore be compensated by interactive guidance emanating from the environment. This proposition does not, however, resolve the question of the relevance of the meaning created by the machine (sense-making). Such reflections lead us to integrate human interaction into this environment in order to construct relevant meaning, in terms of participative artificial intelligence. This raises a number of questions about how to set up an enactive interaction. The article concludes by exploring several issues that allow current approaches to be related to the principles of morphogenesis, guidance, the phenomenology of interactions and the use of minimal enactive interfaces, so as to set up experiments that address the problem of artificial intelligence in a variety of enaction-based ways.
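The following is a minimal, hypothetical sketch of the "interactive guidance with a human in the loop" idea discussed above, not the article's model: a toy one-dimensional environment, an agent that acts from whatever guidance it has interiorised, and a stand-in for the human who occasionally corrects it. Every name, rule and parameter here is an assumption.

```python
import random

class ToyEnvironment:
    """Minimal 1-D world: the agent is expected to move toward a target position."""
    def __init__(self, size=10, target=7):
        self.size, self.target = size, target
        self.position = 0

    def reset(self):
        self.position = random.randrange(self.size)
        return self.position

    def step(self, action):
        delta = 1 if action == "right" else -1
        self.position = max(0, min(self.size - 1, self.position + delta))
        return self.position

def agent_policy(state, memory):
    """The agent acts from its interiorised guidance; otherwise it explores at random."""
    return memory.get(state, random.choice(["left", "right"]))

def human_guidance(state, action, target=7):
    """Stand-in for the human in the loop: corrects the agent when it moves away from the target."""
    desired = "right" if state < target else "left"
    return desired if action != desired else None

def run_guided_episode(env, memory, steps=50, guide=True):
    """One interaction episode; when guide is False the agent relies only on what it has memorised."""
    state = env.reset()
    for _ in range(steps):
        action = agent_policy(state, memory)
        if guide:
            correction = human_guidance(state, action)
            if correction is not None:
                memory[state] = correction   # guidance becomes part of the agent's ontogeny
                action = correction
        state = env.step(action)
    return memory

# Guided phase, then an unguided phase where the agent acts from its own memory.
env, memory = ToyEnvironment(), {}
for _ in range(20):
    run_guided_episode(env, memory, guide=True)
run_guided_episode(env, memory, guide=False)
```

After the guided episodes, the unguided run relies only on the memorised corrections, loosely echoing the shift from external guidance to autonomous behavior described in the abstract.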