Excerpt: We have chosen the calculus of indications as our basic ground for systemic descriptions. We wish to consider in this chapter the indicational forms of those systems exhibiting autonomy. When describing a system, we have seen that all indications are relative to one another, as they all stand in relation to some indicational space or domain. So far we have considered only the most fundamental of these relations: containment. That is, we have only been concerned with the inside/outside relationship between crosses. This gives rise to expressions which, if they were geometrical forms, would be like Chinese boxes. When considering autonomous systems, and because of the closure thesis, we have seen that their organization contains “bootstrapping” processes that exhibit indefinite recursion of their component elements. This would amount to a form that reenters its indicational space, that informs itself. In the geometrical analogy it would be like a Klein bottle, where inside and outside become hopelessly confused.
Varela F. J. (1979) Eigenbehavior: Some algebraic foundations of self-referential system processes. Chapter 13 in: Principles of biological autonomy. Elsevier North Holland, New York: 170–207.
Excerpt: This chapter is concerned with representing organizational closure in operational terms. To this end we shall go beyond what was presented in the last chapter to construct two key notions: infinite trees of operators and solutions of equations over them. The idea of a solution of an equation over the class of infinite trees is an appropriate way to give more precise meaning to the intuitive idea of coordinations and simultaneity of interactions. The self-referential and recursive nature of a network of processes, characteristic of the autonomy of natural systems, is captured by the invariant behavior proper to the way the component processes are interconnected. Thus the complementary descriptions behavior/recursion (cf. Chapter 10) are represented in a nondual form. The (fixed-point) invariance of a network can be related explicitly to the underlying recursive dynamics; the component processes are seen as unfoldment of the unit’s behavior.
Varela F. J. (1979) The immune network: Self and nonsense in the molecular domain. Chapter 14 in: Principles of biological autonomy. Elsevier North Holland, New York: 211–237.
Excerpt: The intention of this last part of the book is to show how the mechanisms of identity of an autonomous system correlate with the establishment of cognitive interactions with its environment. In other words, I shall argue that mechanisms of knowledge and mechanisms of identity are two sides of the same systemic coin. Instead of embarking at this point on a discussion of what this means (more explicitly than in Chapters 2–6), I shall adopt the strategy of discussing two cases in detail, the immune and the nervous systems. The central idea is to look at the organizational closure of these two systems, pointing to the invariances that permit their distinction and characterization as units. The discussion, however, relates the closure of these two systems to a complementary feature: their structural plasticity – that is to say, how the specific components that realize their closure can be modified and changed under perturbations from the environment.
Vaz N. M. (2011) Francisco Varela and the immunological self. Systems Research and Behavioral Science 28: 696–703. https://cepa.info/4220
Francisco Varela, a leading neurobiologist and cognitive scientist, made a 10-year-long incursion into immunology. His in-depth contributions aimed to develop a systemic description to replace the standard stimulus/response/regulation scaffold that has governed immunology since its inception in the 19th century. Many of these efforts involved expansions of the notions introduced by Niels Jerne in his idiotypic network theory (Jerne, 1974a, 1974b), with the added notion of organizational closure derived from autopoietic theory. However, today, just like yesterday, the immunological community remains inclined to neglect these efforts and instead rests satisfied with half-century-old clonal selection concepts (Burnet, 1959).
Verheggen T. & Baerveldt C. (2012) Mixed up perspectives: Reply to Chryssides et al. and Daanen and their critique of enactive cultural psychology. Culture & Psychology 18(2): 272–284.
In earlier contributions to Culture & Psychology we have put forward enactivism as an epistemological alternative to representationalist accounts of meaning in relation to action and experience. Critics continue to charge enactive cultural psychology with being a solipsistic and materialist-reductionist epistemology. We address that critique, arguing that it consistently follows from misunderstanding, in particular, the enactivist notion of “operational closure,” and from mixing up two observer viewpoints that must be analytically severed when describing living, cognitive systems. Moreover, Daanen (2009) argued that Heidegger’s phenomenology in particular can help to reconcile enactive cultural psychology and social representation theory. We reply that although enactivism is indeed close to phenomenology, Daanen fails to appreciate Heidegger’s much more radical break with a philosophy of consciousness to anchor meaningful Being. Consequently, representationalist accounts cannot be salvaged, least of all by invoking Heidegger.
Villalobos M. & Dewhurst J. (2016) Cognición, computación y sistemas dinámicos: Vías para una posible integración teórica [Cognition, computing and dynamic systems: Possible ways of theoretical integration]. Límite. Revista Interdisciplinaria de Filosofía y Psicología 11(36): 20–31. https://cepa.info/7534
Traditionally, computational theory (CT) and dynamical systems theory (DST) have presented themselves as opposed and incompatible paradigms in cognitive science. There have been some efforts to reconcile these paradigms, mainly by assimilating DST to CT at the expense of its anti-representationalist commitments. In this paper, building on Piccinini’s mechanistic account of computation and the notion of functional closure, we explore an alternative conciliatory strategy. We try to assimilate CT to DST by dropping its representationalist commitments, and by inviting CT to recognize the functionally closed nature of some computational systems.
Villalobos M. & Dewhurst J. (2016) Computationalism, enactivism, and cognition: Turing Machines as functionally closed systems. In: Lieto A., Bhatt M., Oltramari A. & Vernon D. (eds.) Proceedings of the 4th International Workshop on Artificial Intelligence and Cognition (AIC 2016), 16–17 July 2016, New York City, NY, USA. CEUR Workshop Proceedings: 138–147. https://cepa.info/7515
In cognitive science, computationalism is the thesis that natural cognitive systems are computing systems. Traditionally, computationalism has understood computing and cognitive systems as functionally open systems, i.e., as systems that have functional entries through which they receive inputs, and exits through which they emit outputs. In opposition to this view, enactive theory claims that natural cognitive systems, unlike computing systems, are autonomous systems whose functional organization does not have inputs and outputs. Computationalism and enactivism thus seem to share the assumption that computing systems are input-output functional systems. In this paper, that assumption is critically reviewed by appealing to the cybernetic notion of functional closure. The notion of functional closure, as elaborated in Maturana’s cybernetic neurophysiology, refers to a closed functional network in which, due to the circularity of the dynamics, we cannot distinguish inputs and outputs as intrinsic functional properties of the system. On the basis of this conceptualization, it is argued that some paradigmatic cases of computing systems (notably a physically realized Turing machine) are actually functionally closed systems, and therefore computing systems without inputs and outputs. If this analysis is right, then the incompatibility that enactivists see between computing systems and organizationally closed functional systems no longer holds, as it would not be true that computing systems must necessarily be understood as input-output systems.
In this paper we will demonstrate that a computational system can meet the criteria for autonomy laid down by classical enactivism. The two criteria that we will focus on are operational closure and structural determinism, and we will show that both can be applied to a basic example of a physically instantiated Turing machine. We will also address the question of precariousness, and briefly suggest that a precarious Turing machine could be designed. Our aim in this paper is to challenge the assumption that computational systems are necessarily heteronomous systems, to try and motivate in enactivism a more nuanced and less rigid conception of computational systems, and to demonstrate to computational theorists that they might find some interesting material within the enactivist tradition, despite its historical hostility towards computationalism.
The autopoietic theory and the enactive approach are two theoretical streams that, in spite of their historical link and conceptual affinities, offer very different views on the nature of living beings. In this paper, we compare these views and evaluate, in an exploratory way, their respective degrees of internal coherence. Focusing the analyses on certain key notions such as autonomy and organizational closure, we argue that while the autopoietic theory manages to elaborate an internally consistent conception of living beings, the enactive approach presents an internal tension regarding its characterization of living beings as intentional systems directed at the environment.
Virgo N., Egbert M. D. & Froese T. (2011) The role of the spatial boundary in autopoiesis. In: Kampis G., Karsai I. & Szathmáry E. (eds.) Advances in artificial life: Darwin meets von Neumann. 10th European Conference ECAL 2009. Springer, Berlin: 234–241. https://cepa.info/2254
Abstract: We argue that the significance of the spatial boundary in autopoiesis has been overstated. It has the important task of distinguishing a living system as a unity in space but should not be seen as playing the additional role of delimiting the processes that make up the autopoietic system. We demonstrate the relevance of this to a current debate about the compatibility of the extended mind hypothesis with the enactive approach and show that a radically extended interpretation of autopoiesis was intended in one of the original works on the subject. Additionally we argue that the definitions of basic terms in the autopoietic literature can and should be made more precise, and we make some progress towards such a goal.