Enactivism claims that sensory-motor activity and embodiment are crucial in perceiving the environment and that machine vision could be a much simpler business if considered in this context. However, computational models of enactive vision are very rare and often rely on handcrafted control systems. In this article, we argue that the apparent complexity of the environment and of the robot brain can be significantly simplified if perception, behavior, and learning are allowed to co-develop on the same timescale. In doing so, robots become sensitive to, and actively exploit, characteristics of the environment that they can tackle within their own computational and physical constraints. We describe the application of this methodology in three sets of experiments: shape discrimination, car driving, and wheeled robot navigation. A further set of experiments, where the visual system develops its receptive fields by means of unsupervised Hebbian learning, demonstrates that the receptive fields are consistently and significantly affected by the behavior of the system and differ from those predicted by most computational models of the visual cortex. Finally, we show that our robots can also replicate the performance deficiencies observed in motor-deprivation experiments with kittens.
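The abstract mentions receptive fields developed by unsupervised Hebbian learning. As a minimal sketch of that idea (not the paper's actual model), the snippet below trains a single linear neuron on toy input patches with Oja's rule, a normalized Hebbian rule; the learned weight vector, the neuron's "receptive field", converges toward the dominant direction of variation in the inputs, illustrating how receptive fields come to reflect the statistics of what the system experiences. All names and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy visual input: 4-dimensional patches with correlated structure,
# a hypothetical stand-in for the robot's camera input.
n_inputs, n_steps, lr = 4, 20000, 0.005
mix = rng.normal(size=(n_inputs, n_inputs))  # induces input correlations

w = rng.normal(scale=0.1, size=n_inputs)  # receptive field (synaptic weights)
for _ in range(n_steps):
    x = mix @ rng.normal(size=n_inputs)  # sample an input patch
    y = w @ x                            # neuron activation
    w += lr * y * (x - y * w)            # Oja's rule: Hebbian term + decay

# Oja's rule drives w toward the unit-norm leading eigenvector of the
# input covariance, so the receptive field mirrors the input statistics.
cov = mix @ mix.T
principal = np.linalg.eigh(cov)[1][:, -1]
alignment = abs(w @ principal) / np.linalg.norm(w)
print(alignment)
```

In an embodied setting, the input statistics are themselves shaped by the robot's behavior, which is why behavior and receptive-field development interact in the experiments described above.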