Briscoe G. & Dini P. (2010) Towards autopoietic computing. In: Colugnati F. A. B., Lopes L. C. R. & Barretto S. F. A. (eds.) Digital ecosystems. Springer, New York: 199–212. https://cepa.info/2617
A key challenge in modern computing is to develop systems that address complex, dynamic problems in a scalable and efficient way, because the increasing complexity of software makes designing and maintaining efficient and flexible systems increasingly difficult. Biological systems are thought to possess robust, scalable processing paradigms that can automatically manage complex, dynamic problem spaces, and to exhibit several properties that may be useful in computer systems. The biological properties of self-organisation, self-replication, self-management, and scalability are addressed in an interesting way by autopoiesis, a descriptive theory of the cell founded on the concept of a system’s circular organisation, which defines its boundary with its environment. In this paper, therefore, we review the main concepts of autopoiesis and then discuss how they could be related to fundamental concepts and theories of computation. The paper is conceptual in nature, and the emphasis is on reviewing other people’s work in this area as part of a longer-term strategy to develop a formal theory of autopoietic computing.
Cariani P. (2012) Infinity and the Observer: Radical Constructivism and the Foundations of Mathematics. Constructivist Foundations 7(2): 116–125. https://cepa.info/254
Problem: There is currently a great deal of mysticism, uncritical hype, and blind adulation of imaginary mathematical and physical entities in popular culture. We seek to explore what a radical constructivist perspective on mathematical entities might entail, and to draw out the implications of this perspective for how we think about the nature of mathematical entities. Method: Conceptual analysis. Results: If we want to avoid the introduction of entities that are ill-defined and inaccessible to verification, then formal systems need to avoid introduction of potential and actual infinities. If decidability and consistency are desired, keep formal systems finite. Infinity is a useful heuristic concept, but has no place in proof theory. Implications: We attempt to debunk many of the mysticisms and uncritical adulations of Gödelian arguments and to ground mathematical foundations in intersubjectively verifiable operations of limited observers. We hope that these insights will be useful to anyone trying to make sense of claims about the nature of formal systems. If we return to the notion of formal systems as concrete, finite systems, then we can be clear about the nature of computations that can be physically realized. In practical terms, the answer is not to proscribe notions of the infinite, but to recognize that these concepts have a different status with respect to their verifiability. We need to demarcate clearly the realm of free creation and imagination, where platonic entities are useful heuristic devices, and the realm of verification, testing, and proof, where infinities introduce ill-defined entities that create ambiguities and undecidable, ill-posed sets of propositions. 
Constructivist content: The paper attempts to extend the scope of the radical constructivist perspective to mathematical systems, and to discuss the relationships between radical constructivism and other allied, yet distinct, perspectives in the debate over the foundations of mathematics, such as psychological constructivism and mathematical constructivism.
Cariani P. (2012) Mind, a Machine? Review of “The Search for a Theory of Cognition: Early Mechanisms and New Ideas” edited by Stefano Franchi and Francesco Bianchini. Constructivist Foundations 7(3): 222–227. https://cepa.info/509
Upshot: Written by recognized experts in their fields, the book is a set of essays that deals with the influences of early cybernetics, computational theory, artificial intelligence, and connectionist networks on the historical development of computational-representational theories of cognition. In this review, I question the relevance of computability arguments and Jonasian phenomenology, which has been extensively invoked in recent discussions of autopoiesis and Ashby’s homeostats. Although the book deals only indirectly with constructivist approaches to cognition, it is useful reading for those interested in machine-based models of mind.
Kampis G. (1995) Computability, self-reference, and self-amendment. Special Issue on Self-Reference in Biological and Cognitive Systems. Communication and Cognition – Artificial Intelligence 12(1–2): 91–109. https://cepa.info/3082
There exist theories of cognition that assume the importance of self-referentiality and/or self-modification. We argue for the necessity of such considerations. We discuss basic concepts of self-reference and self-amendment, as well as their relationship to each other. Self-modification will be suggested to involve non-algorithmic mechanisms, and it will be developed as a primary concept from which self-reference derives. A biologically motivated mechanism for achieving both phenomena is outlined. Problems of computability are briefly discussed in connection with the definability and describability of self-modifying systems. Finally, the relevance of these problems to applications in semantic problems of cognition is shown. We proceed in the following way. The paper starts with an outline of the evolutionary approach to cognition, as the context where the problems of circularity and recursiveness can be raised. Next, complete and incomplete forms of self-reference are discussed. The “causal” theory of self-referentiality is reviewed, and a thought experiment is presented which points out that no computable model for complete self-reference can exist. On the other hand, constructive definitions are shown to offer a framework in which “self-defining” and self-modifying systems, if such exist in reality, can be formulated. Studying the realization problem, a general abstract model is given, and a “biological computation” mechanism that corresponds to it is outlined. The underlying phenomenon, called “shifting reading frame,” is discussed in relation to how self-referentiality can be achieved through self-modification. The applicability of the approach to the autonomous definition of semantic relations in symbol systems, which may allow for a kind of autonomous “symbol grounding,” is discussed.
Mossio M., Longo G. & Stewart J. (2009) An expression of closure to efficient causation in terms of lambda-calculus. Journal of Theoretical Biology 257(3): 489–498. https://cepa.info/477
In this paper, we propose a mathematical expression of closure to efficient causation in terms of λ-calculus; we argue that this opens up the perspective of developing principled computer simulations of systems closed to efficient causation in an appropriate programming language. We conclude with a brief discussion of the question of whether closure to efficient causation captures all relevant properties of living systems. We suggest that it might not be the case, and that more complex definitions could indeed create some obstacles to computability.
Mossio M., Longo G. & Stewart J. (2009) A computable expression of closure to efficient causation. Journal of Theoretical Biology 257(3): 489–498. https://cepa.info/3630
In this paper, we propose a mathematical expression of closure to efficient causation in terms of λ-calculus; we argue that this opens up the perspective of developing principled computer simulations of systems closed to efficient causation in an appropriate programming language. An important implication of our formulation is that, by exhibiting an expression in λ-calculus, which is a paradigmatic formalism for computability and programming, we show that there are no conceptual or principled problems in realizing a computer simulation or model of closure to efficient causation. We conclude with a brief discussion of the question of whether closure to efficient causation captures all relevant properties of living systems. We suggest that it might not be the case, and that more complex definitions could indeed create some crucial obstacles to computability.
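The central device in such λ-calculus formulations is the fixed point: a term that reproduces itself under application. As an illustrative sketch only (not the authors’ actual construction), an applicative-order fixed-point combinator can be written directly in Python using nothing but lambda abstraction and application:

```python
# Z combinator: an applicative-order fixed-point combinator.
# For any f, Z(f) behaves as a fixed point of f, i.e. Z(f) == f(Z(f))
# extensionally. The eta-expansion (lambda v: x(x)(v)) delays
# evaluation so that Python's eager semantics do not loop forever.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# A function defined purely in terms of its own fixed point:
# no 'def', no named recursion, only abstraction and application.
factorial = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))

print(factorial(5))  # prints 120
```

The Z combinator is used here instead of the classical Y combinator because Python evaluates arguments eagerly; this sketch illustrates only the general fixed-point idea, not the specific λ-terms of the paper.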
Piccinini G. (2007) Computing mechanisms. Philosophy of Science 74(4): 501–526.
This paper offers an account of what it is for a physical system to be a computing mechanism – a system that performs computations. A computing mechanism is a mechanism whose function is to generate output strings from input strings and (possibly) internal states, in accordance with a general rule that applies to all relevant strings and depends on the input strings and (possibly) internal states for its application. This account is motivated by reasons endogenous to the philosophy of computing, namely, doing justice to the practices of computer scientists and computability theorists. It is also an application of recent literature on mechanisms, because it assimilates computational explanation to mechanistic explanation. The account can be used to individuate computing mechanisms and the functions they compute and to taxonomize computing mechanisms based on their computing power.
Piccinini G. (2008) Computation without representation. Philosophical Studies 137(2): 205–241. https://cepa.info/3921
The received view is that computational states are individuated at least in part by their semantic properties. I offer an alternative, according to which computational states are individuated by their functional properties. Functional properties are specified by a mechanistic explanation without appealing to any semantic properties. The primary purpose of this paper is to formulate the alternative view of computational individuation, point out that it supports a robust notion of computational explanation, and defend it on the grounds of how computational states are individuated within computability theory and computer science. A secondary purpose is to show that existing arguments for the semantic view are defective.
Rocha L. M. (1995) Artificial semantically closed objects. Special Issue on Self-Reference in Biological and Cognitive Systems. Communication and Cognition – Artificial Intelligence 12(1–2): 63–90. https://cepa.info/3083
The notion of computability and the Church–Turing Thesis are discussed in order to establish what is and what is not a computational process. Pattee’s Semantic Closure Principle is taken as the driving idea for building non-computational models of complex systems that avoid the reductionist descent into meaningless component analysis. A slight expansion of Von Foerster’s Cognitive Tiles is then presented as elements observing Semantic Closure and used to model processes at the level of the cell; as a result, a model of a rate-dependent and memory-empowered neuron is proposed for the construction of more complex Artificial Neural Networks, in which neurons are temporal pattern recognition processors rather than timeless and memoryless Boolean switches.
Van de Vijver G. (1992) The experimental epistemology of Walter S. McCulloch: A minimalistic interpretation. In: Van de Vijver G. (ed.) New perspectives on cybernetics: Self-organization, autonomy and connectionism. Kluwer, Dordrecht: 105–123. https://cepa.info/2740
When cybernetics entered the scene during the forties, high ambitions immediately arose and quite unusual claims were made as compared to those of traditional epistemology: cybernetics would do away with the distinction between ’mind’ and ’body’ (Papert, 1965); it would bring a new interpretation, through the artefact, of the Kantian synthetic a priori; cybernetics would give, following Turing’s approach to computability, a mechanistic sense to Kantian schematism (Dupuy, 1985, pp. 105–106). In this text we shall analyse the status of such claims. In other words, we shall show the relation between epistemological questions which we call classical or theoretical and those occurring within cybernetics, and more particularly in the work of McCulloch. McCulloch was clearly an advocate of experimental epistemology. He emphasized, in contrast with Kant for instance, that the experimental inquiry into the functioning, the emergence and the consequences of knowledge can be important for epistemology in general. Furthermore, we shall inquire how McCulloch’s experimental epistemology may be relevant to classical epistemology. Firstly, we shall discuss the position of McCulloch within cybernetics. Secondly, we shall deal with the meaning of his nets of formal neurons and with the meaning of the heterarchic nets. This will finally allow us to illustrate in what sense McCulloch is to be situated outside first-order cybernetics and how he anticipates second-order cybernetics. It will also allow us to show that the experimental epistemology which he defends can be given an interpretation beyond the frame of a traditional reductionism. What we are proposing here is a minimalistic interpretation: epistemology, insofar as it tends to be experimental, does not have to serve as a support for reductionism, but rather aims at pointing out concrete limits within which a theoretical epistemology can be developed.