Bickhard M. H. (1997) Emergence of representation in autonomous agents. Special issue on epistemological aspects of embodied artificial intelligence. Cybernetics and Systems 28(6): 489–498.
A problem of action selection emerges in complex, and even not so complex, interactive agents: what to do next? The problem of action selection occurs equally for natural and for artificial agents, indeed for any embodied agent. The obvious solution to this problem constitutes a form of representation, interactive representation, that is arguably the fundamental form of representation. More carefully, interactive representation satisfies a criterion for representation that no other model of representation in the literature can satisfy or even attempts to address: the possibility of system-detectable representational error. It also resolves and avoids myriad other problematics of representation and integrates or opens the door to many additional mental processes and phenomena, such as motivation.
Bruineberg J. P. & Van den Herik J. C. (2021) Embodying mental affordances. Inquiry, Latest articles. https://cepa.info/7976
The concept of affordances is rapidly gaining traction in the philosophy of mind and cognitive sciences. Affordances are opportunities for action provided by the environment. An important open question is whether affordances can be used to explain mental action such as attention, counting, and imagination. In this paper, we critically discuss McClelland’s (‘The Mental Affordance Hypothesis’, 2020, Mind, 129(514), pp. 401–427) mental affordance hypothesis. While we agree that the affordance concept can be fruitfully employed to explain mental action, we argue that McClelland’s mental affordance hypothesis contains remnants of a Cartesian understanding of the mind. By discussing the theoretical framework of the affordance competition hypothesis, we sketch an alternative research program based on the principles of embodied cognition that evades the Cartesian worries. We show how paradigmatic mental acts, such as imagination, counting, and arithmetic, are dependent on sensorimotor interaction with an affording environment. Rather than drawing a clear distinction between bodily and mental action, mental affordances highlight the embodied nature of our mental actions. We think that in developing our alternative research program on mental affordances, we can maintain many of the excellent insights of McClelland’s account without reintroducing the very distinctions that affordances were supposed to overcome.
Buhrmann T. & Di Paolo E. (2014) Non-representational sensorimotor knowledge. In: Del Pobil A., Chinellato E., Martinez-Martin E., Hallam J., Cervera E. & Morales A. (eds.) From animals to animats 13. Springer, New York: 21–31. https://cepa.info/2521
The sensorimotor approach argues that in order to perceive one needs to first “master” the relevant sensorimotor contingencies, and then exercise the acquired practical know-how to become “attuned” to the actual and potential contingencies a particular situation entails. But the approach provides no further detail about how this mastery is achieved or what precisely it means to become attuned to a situation. We here present an agent-based model to show how sensorimotor attunement can be understood as a dynamic and non-representational process in which a particular sensorimotor coordination is enacted as a response to a given environmental context, without requiring deliberative action selection.
The aim of this paper is to study the role that anticipation plays in adaptive autonomous systems. The emphasis will be on epistemological consequences of adaptation in practical robotic systems as they are currently developed in the new field of embodied artificial intelligence. The autonomy of physical robots is peculiar, because it consists in behavioral autonomy as well as epistemic autonomy. While the former is a problem that is often addressed, the latter poses difficult foundational questions for the field. We study the role that anticipation plays in this context. It is argued that embodied systems are a particularly interesting case for the study of epistemic autonomy. This is due to the fact that the adaptation process in robots generates a special form of representation that indicates the outcome of interaction and thus can support action selection schemes. The role of these representations and their epistemic and ontological consequences for the system as well as epistemological consequences for system observers are investigated.
Riegler A. (2007) Superstition in the machine. In: Butz M. V., Sigaud O., Pezzulo G. & Baldassarre G. (eds.) Anticipatory behavior in adaptive learning systems: From brains to individual and social behavior. Lecture Notes in Artificial Intelligence. Springer, New York: 57–72. https://cepa.info/4214
It seems characteristic for humans to detect structural patterns in the world to anticipate future states. Therefore, scientific and common sense cognition could be described as information processing which infers rule-like laws from patterns in data-sets. Since information processing is the domain of computers, artificial cognitive systems are generally designed as pattern discoverers. This paper questions the validity of the information processing paradigm as an explanation for human cognition and a design principle for artificial cognitive systems. Firstly, it is known from the literature that people suffer from conditions such as information overload, superstition, and mental disorders. Secondly, cognitive limitations such as a small short-term memory, the set-effect, the illusion of explanatory depth, etc. raise doubts as to whether human information processing is able to cope with the enormous complexity of an infinitely rich (amorphous) world. It is suggested that, under normal conditions, humans construct information rather than process it. The constructed information contains anticipations which need to be met. This can hardly be called information processing, since patterns from the “outside” are not used to produce action but rather to either justify anticipations or restructure the cognitive apparatus. When this fails, cognition switches to pattern processing, which, given the amorphous nature of the experiential world, is a lost cause if these patterns and inferred rules do not lead to a (partial) reorganisation of internal structures such that constructed anticipations can be met again. In this scenario, superstition and mental disorders are the result of a profound and/or random restructuring of already existing cognitive components (e.g., action sequences). This means that whenever a genuinely cognitive system is exposed to pattern processing it may start to behave superstitiously. The closer we get to autonomous self-motivated artificial cognitive systems, the greater the danger becomes of superstitious information processing machines that “blow up” rather than behave usefully and effectively. Therefore, to avoid superstition in cognitive systems they should be designed as information constructing entities.