De Loor P., Manac’h K. & Tisseau J. (2009) Enaction-based artificial intelligence: Toward co-evolution with humans in the loop. Minds & Machines 19(3): 319–343. https://cepa.info/4547
This article deals with the links between the enaction paradigm and artificial intelligence. Enaction is considered a metaphor for artificial intelligence, as a number of the notions it deals with are deemed incompatible with the phenomenal field of the virtual. After explaining this stance, we review previous work on this issue in artificial life and robotics, focusing on the lack of recognition of co-evolution at the heart of these approaches. We propose to explicitly integrate the evolution of the environment into our approach in order to refine the ontogenesis of the artificial system, and to compare it with the enaction paradigm. The growing complexity of the ontogenetic mechanisms to be activated can therefore be compensated for by an interactive guidance system emanating from the environment. This proposition does not, however, resolve the question of the relevance of the meaning created by the machine (sense-making). Such reflections lead us to integrate human interaction into this environment in order to construct relevant meaning in terms of participative artificial intelligence. This raises a number of questions with regard to setting up an enactive interaction. The article concludes by exploring a number of issues, enabling us to associate current approaches with the principles of morphogenesis, guidance, the phenomenology of interactions and the use of minimal enactive interfaces in setting up experiments that will address the problem of artificial intelligence in a variety of enaction-based ways.
Synaptic communication, nonsynaptic diffusion neurotransmission and glial activity each update the morphology of the other two. These interactions lead to an endogenous structure of causal entailment whose internal ambiguities render it incomputable. The entailed effects are bizarre: they include the abduction of novelty in response to conflicting cues, a resolution of the seeming conflict between free will and determinism, and anticipatory behavior. Such inherent ambiguity of the causal entailment structure does not preclude the artificial implementation of brain-like activities. Although an algorithm is incapable of neuromimetically reproducing the self-referential character of the brain, there is a currently feasible strategy for wiring a “human in the loop” to use the cognitive powers of anticipation and unconscious integration, providing a dramatic improvement in the operation of large engineered systems.