Abrahamson D. (2021) Grasp actually: An evolutionist argument for enactivist mathematics education. Human Development 65(2): 77–93. https://cepa.info/7084
What evolutionary account explains our capacity to reason mathematically? Identifying the biological provenance of mathematical thinking would bear on education, because we could then design learning environments that simulate ecologically authentic conditions for leveraging this universal phylogenetic inclination. The ancient mechanism coopted for mathematical activity, I propose, is our fundamental organismic capacity to improve our sensorimotor engagement with the environment by detecting, generating, and maintaining goal-oriented perceptual structures regulating action, whether actual or imaginary. As such, the phenomenology of grasping a mathematical notion is literally that – gripping the environment in a new way that promotes interaction. To argue for the plausibility of my thesis, I first survey embodiment literature to implicate cognition as constituted in perceptuomotor engagement. Then, I summarize findings from a design-based research project investigating relations between learning to move in new ways and learning to reason mathematically about these conceptual choreographies. As such, the project proposes educational implications of enactivist evolutionary biology.
Aguilera M. (2015) Interaction dynamics and autonomy in cognitive systems, from sensorimotor coordination to collective action. Universidad de Zaragoza, Zaragoza, Spain. https://cepa.info/4791
The concept of autonomy is of crucial importance for understanding life and cognition. Whereas cellular and organismic autonomy is based in the self-production of the material infrastructure sustaining the existence of living beings as such, we are interested in how biological autonomy can be expanded into forms of autonomous agency, where autonomy as a form of organization is extended into the behaviour of an agent in interaction with its environment (and not its material self-production). In this thesis, we focus on the development of operational models of sensorimotor agency, exploring the construction of a domain of interactions creating a dynamical interface between agent and environment. We present two main contributions to the study of autonomous agency. First, we contribute to the development of a modelling route for testing, comparing and validating hypotheses about neurocognitive autonomy. Through the design and analysis of specific neurodynamical models embedded in robotic agents, we explore how an agent is constituted in a sensorimotor space as an autonomous entity able to adaptively sustain its own organization. Using two simulation models and different dynamical analyses and measurements of complex patterns in their behaviour, we are able to tackle some theoretical obstacles preventing the understanding of sensorimotor autonomy, and to generate new predictions about the nature of autonomous agency in the neurocognitive domain. Second, we explore the extension of sensorimotor forms of autonomy into the social realm. We analyse two cases from an experimental perspective: the constitution of a collective subject in a sensorimotor social interactive task, and the emergence of an autonomous social identity in a large-scale technologically-mediated social system. Through the analysis of coordination mechanisms and emergent complex patterns, we are able to gather experimental evidence indicating that in some cases social autonomy might emerge based on mechanisms of coordinated sensorimotor activity and interaction, constituting forms of collective autonomous agency.
Andrew A. M. (2005) Artificial neural nets and BCL. Kybernetes 34(1/2): 33–39.
Purpose: Attention is drawn to a principle of “significance feedback” in neural nets that was devised in the encouraging ambience of the Biological Computer Laboratory and is arguably fundamental to much of the subsequent practical application of artificial neural nets. Design/methodology/approach: The background against which the innovation was made is reviewed, as well as subsequent developments. It is emphasised that Heinz von Foerster and BCL made important contributions prior to their focus on second-order cybernetics. Findings: The version of “significance feedback” denoted by “backpropagation of error” has found numerous applications, but in a restricted field, and the relevance to biology is uncertain. Practical implications: Ways in which the principle might be extended are discussed, including attention to structural changes in networks, and extension of the field of application to include conceptual processing. Originality/value: The original work was 40 years ago, but indications are given of questions that are still unanswered and avenues yet to be explored, some of them indicated by reference to intelligence as “fractal.”
Asaro P. (2007) Heinz von Foerster and the bio-computing movements of the 1960s. In: Müller A. & Müller K. H. (eds.) An unfinished revolution? Heinz von Foerster and the Biological Computer Laboratory, BCL, 1959–1976. Edition Echoraum, Vienna: 253–275. https://cepa.info/6625
Excerpt: As I read the cybernetic literature, I became intrigued that as an approach to the mind which was often described as a predecessor to AI, cybernetics had a much more sophisticated approach to mind than its purported successor. I was soon led to Prof. Herbert Brün’s seminar in experimental composition, and to the archives of the Biological Computer Laboratory (BCL) in the basement of the University of Illinois library. Since then, I have been trying to come to terms with what it was that was so special about the BCL, what allowed it to produce such interesting ideas and projects which seem alien and exotic in comparison to what mainstream AI and Cognitive Science produced in the same era. And yet, despite its appealing philosophical depth and technological novelty, it seems to have been largely ignored or forgotten by mainstream research in these areas. I believe that these are the same concerns that many of the authors of the recent issue of Cybernetics and Human Knowing (Brier & Glanville, 2003) express in regard to the legacy of von Foerster and the BCL. How could such an interesting place, full of interesting things and ideas, have just disappeared and been largely forgotten, even in its own home town?
Asaro P. (2008) Computer als Modelle des Geistes. Über Simulation und das Gehirn als Modell des Designs von Computern. Österreichische Zeitschrift für Geschichtswissenschaften 19(4): 41–72. https://cepa.info/2310
The article considers the complexities of thinking about the computer as a model of the mind. It examines the computer as being a model of the brain in several very different senses of “model”. On the one hand, the basic architecture of the first modern stored-program computers was “modeled on” the brain by John von Neumann. Von Neumann also sought to build a mathematical model of the biological brain as a complex system. A similar but different approach to modeling the brain was taken by Alan Turing, who on the one hand believed that the mind simply was a universal computer, and who sought to show how brain-like networks could self-organize into Universal Turing Machines. On the other hand, Turing saw the computer as the universal machine that could simulate any other machine, and thus any particular human skill, and thereby could simulate human intelligence. This leads to a discussion of the nature of “simulation” and its relation to models and modeling. The article applies this analysis to a written correspondence between Ashby and Turing in which Turing urges Ashby to simulate his cybernetic Homeostat device on the ACE computer, rather than build a special machine.
Asaro P. (2008) From mechanisms of adaptation to intelligence amplifiers: the philosophy of W. Ross Ashby. In: Husbands P., Holland O. & Wheeler M. (eds.) The mechanical mind in history. MIT Press, Cambridge MA: 149–184. https://cepa.info/2329
This chapter sketches an intellectual portrait of W. Ross Ashby’s thought from his earliest work on the mechanisms of intelligence in 1940 through the birth of what is now called artificial intelligence (AI), around 1956, and to the end of his career in 1972. It begins by examining his earliest published works on adaptation and equilibrium, and the conceptual structure of his notions of the mechanisms of control in biological systems. In particular, it assesses his conceptions of mechanism, equilibrium, stability, and the role of breakdown in achieving equilibrium. It then proceeds to his work on refining the concept of “intelligence,” on the possibility of the mechanical augmentation and amplification of human intelligence, and on how machines might be built that surpass human understanding in their capabilities. Finally, the chapter considers the significance of his philosophy and its role in cybernetic thought.
Ashby W. R. (1958) Requisite variety and its implications for the control of complex systems. Cybernetica 1(2): 83–99.
Recent work on the fundamental processes of regulation in biology (Ashby, 1956) has shown the importance of a certain quantitative relation called the law of requisite variety. After this relation had been found, we appreciated that it was related to a theorem in a world far removed from the biological – that of Shannon on the quantity of noise or error that could be removed through a correction-channel (Shannon and Weaver, 1949; theorem 10). In this paper I propose to show the relationship between the two theorems, and to indicate something of their implications for regulation, in the cybernetic sense, when the system to be regulated is extremely complex.
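As a gloss for the reader, and only in the standard textbook formulations (not necessarily the notation of Ashby’s paper): in entropy terms, the law of requisite variety bounds how far a regulator can reduce outcome variety, and Shannon’s Theorem 10 makes the parallel statement that a correction channel can remove errors only if its capacity covers the remaining equivocation.
\[
  H(O) \;\geq\; H(D) - H(R)
  \qquad \text{(law of requisite variety, entropy form: outcomes } O \text{, disturbances } D \text{, regulator } R\text{)}
\]
\[
  C_{\text{correction}} \;\geq\; H_y(x)
  \qquad \text{(Shannon's Theorem 10: correction-channel capacity must cover the equivocation } H_y(x)\text{)}
\]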
Bakken T., Hernes T. & Wiik E. (2009) Innovation and organization: An overview from the perspective of Luhmann’s autopoiesis. In: Magalhães R. & Sanchez R. (eds.) Autopoiesis in organization theory and practice. Emerald, Bingley UK: 69–88. https://cepa.info/7958
Excerpt: Can autopoietic systems not be creative and innovative? Or do the biological roots of the concept, and notions such as “structural determinism” and “structural states,” make it impossible to capture “the new” in the system’s dynamics? The aim of the following discussion is to outline the theory of autopoietic systems, as it pertains to action theory and the understanding of the phenomenon of innovation. This will be elucidated by examining how systems theory combines concepts of (1) the old and the new, (2) the real and the possible, and (3) the redundant and the variable.
Barandiaran X. (2017) Autonomy and enactivism: Towards a theory of sensorimotor autonomous agency. Topoi 36(3): 409–430. https://cepa.info/4149
The concept of “autonomy,” once at the core of the original enactivist proposal in The Embodied Mind (Varela et al. in The embodied mind: cognitive science and human experience. MIT Press, Cambridge, 1991), is nowadays ignored or neglected by some of the most prominent contemporary enactivist approaches. Theories of autonomy, however, come to fill a theoretical gap that sensorimotor accounts of cognition cannot ignore: they provide a naturalized account of normativity and the resources to ground the identity of a cognitive subject in its specific mode of organization. There are, however, good reasons for the contemporary neglect of autonomy as a relevant concept for enactivism. On the one hand, the concept of autonomy has too often been assimilated into autopoiesis (or basic autonomy in the molecular or biological realm) and the implications are not always clear for a dynamical sensorimotor approach to cognitive science. On the other hand, the foundational enactivist proposal displays a metaphysical tension between the concept of operational closure (autonomy), deployed as constitutive, and that of structural coupling (sensorimotor dynamics), making it hard to reconcile with the claim that experience is sensorimotorly constituted. This tension is particularly apparent when Varela et al. propose Bittorio (a 1D cellular automaton) as a model of the operational closure of the nervous system, as it fails to satisfy the required conditions for a sensorimotor constitution of experience. It is, however, possible to solve these problems by re-considering autonomy at the level of sensorimotor neurodynamics. Two recent robotic simulation models are used for this task, illustrating the notion of strong sensorimotor dependency of neurodynamic patterns, and their networked intertwinement. The concept of habit is proposed as an enactivist building block for cognitive theorizing, re-conceptualizing mental life as a habit ecology, tied within an agent’s behaviour-generating mechanism in coordination with its environment. Norms can be naturalized in terms of dynamic, interactively self-sustaining, coherentism. This conception of autonomous sensorimotor agency is put in contrast with those enactive approaches that reject autonomy or neglect the theoretical resources it has to offer for the project of naturalizing minds.
Barbieri M. (2009) A short history of biosemiotics. Biosemiotics 2(2): 221–245.
Biosemiotics is the synthesis of biology and semiotics, and its main purpose is to show that semiosis is a fundamental component of life, i.e., that signs and meaning exist in all living systems. This idea started circulating in the 1960s and was proposed independently from enquiries taking place at both ends of the Scala Naturae. At the molecular end it was expressed by Howard Pattee’s analysis of the genetic code, whereas at the human end it took the form of Thomas Sebeok’s investigation into the biological roots of culture. Other proposals appeared in the years that followed and gave origin to different theoretical frameworks, or different schools, of biosemiotics. They are: (1) the physical biosemiotics of Howard Pattee and its extension in Darwinian biosemiotics by Howard Pattee and by Terrence Deacon, (2) the zoosemiotics proposed by Thomas Sebeok and its extension in sign biosemiotics developed by Thomas Sebeok and by Jesper Hoffmeyer, (3) the code biosemiotics of Marcello Barbieri and (4) the hermeneutic biosemiotics of Anton Markoš. The differences that exist between the schools are a consequence of their different models of semiosis, but that is only the tip of the iceberg. In reality they go much deeper and concern the very nature of the new discipline. Is biosemiotics only a new way of looking at the known facts of biology or does it predict new facts? Does biosemiotics consist of testable hypotheses? Does it add anything to the history of life and to our understanding of evolution? These are the major issues of the young discipline, and the purpose of the present paper is to illustrate them by describing the origin and the historical development of its main schools.