Consideration is given to the relevance of recent discussions of autopoiesis to the study of self-organizing systems. Mechanisms that could underlie the physical realization of an autopoietic system are discussed. It is concluded that autopoiesis does not, by itself, provide the essential ingredient whose omission has prevented SOS studies from being more productive. Two other important missing ingredients are discussed.
Cariani P. (1993) To evolve an ear: Epistemological implications of Gordon Pask’s electrochemical devices. Systems Research 10(3): 19–33. https://cepa.info/2836
In the late 1950s Gordon Pask constructed several electrochemical devices having emergent sensory capabilities. These control systems possessed the ability to adaptively construct their own sensors, thereby choosing the relationship between their internal states and the world at large. Devices were built that evolved de novo sensitivity to sound or magnetic fields. Pask’s devices have far-reaching implications for artificial intelligence, self-constructing devices, theories of observers and epistemically autonomous agents, theories of functional emergence, machine creativity, and the limits of contemporary machine learning paradigms.
Foerster H. von (1960) On Self-Organizing Systems and Their Environments. In: Yovits M. C. & Cameron S. (eds.) Self-Organizing Systems. Pergamon Press, London: 31–50. https://cepa.info/1593
Excerpt: I open my paper by presenting the following thesis: “There are no such things as self-organizing systems!” In the face of the title of this conference I have to give a rather strong proof of this thesis, a task which may not be at all too difficult, if there is not a secret purpose behind this meeting to promote a conspiracy to dispose of the Second Law of Thermodynamics. I shall now prove the non-existence of self-organizing systems by reductio ad absurdum of the assumption that there is such a thing as a self-organizing system.
Joslyn C. (2000) Levels of control and closure in complex semiotic systems. In: Chandler J. & Van de Vijver G. (eds.) Closure: Emergent organizations and their dynamics. New York Academy of Sciences, New York: 67–74.
It is natural to advance closures as atomic processes of universal evolution, and to analyze this concept specifically. Real complex systems such as organisms and complex mechanisms cannot exist at either extreme of complete closure or complete lack of closure; nevertheless, we should consider the properties of closures in general: the introduction of boundaries, a corresponding stability, the establishment of system autonomy and identity, and thereby the introduction of emergent new systems of potentially new types. Our focus should move from the simple physical closure of common objects and classical self-organizing systems to semiotically closed systems that maintain cyclic relations of perception, interpretation, decision, and action with their environments. Thus, issues arise concerning the use and interpretation of symbols, representations, and/or internal models (whether explicit or implicit) by the system, and the syntactic, semantic, and pragmatic relations among the sign tokens, their interpretations, and their use or function for the systems in question. Primitive semiotic closures are hypothesized to be equivalent to simple control systems, and in turn to simple organisms. This leads us directly to the grand hierarchical control theories of Turchin, Powers, and Albus, which provide an explicit mechanism for the formation of new levels within complex semiotically closed systems.
Kenny V. (1989) Anticipating autopoiesis: Personal construct psychology and self-organizing systems. In: Goudsmit A. L. (ed.) Self-organization in psychotherapy: Demarcations of a new perspective. Springer, Berlin: 100–133. https://cepa.info/2795
George Kelly’s theory of personal construct psychology is introduced in the context of comparisons with the radical constructivism of Ernst von Glasersfeld and the autopoietic theory of Humberto Maturana. Personal construct theory, although written in the decade up to 1955, anticipates in detail many of the epistemological and praxis issues currently concerning practitioners of psychotherapy. Following the comparative introduction, the formal aspects of Kelly’s theory, namely the Fundamental Postulate and elaborative Corollaries, are explicated within the framework provided by Maturana’s theory. Here the chapter focuses on the themes of change and stability within self-organizing systems. Finally, specific comments are addressed to the implementation of these theoretical prescriptions in the therapeutic relationship.
Noe E. & Alrøe H. F. (2006) Combining Luhmann and Actor-Network Theory to See Farm Enterprises as Self-organizing Systems. Cybernetics & Human Knowing 13(1): 34–48. https://cepa.info/3360
From a rural-sociological point of view, no social theories have so far been able to grasp the ontological complexity and special character of a farm enterprise as an entity in a really satisfying way. The contention of this paper is that a combination of Luhmann’s theory of social systems and the actor-network theory (ANT) of Latour, Callon, and Law offers a new and radical framework for understanding a farm as a self-organizing, heterogeneous system. Luhmann’s theory offers an approach to understanding a farm as a self-organizing system (operating in meaning) that must produce and reproduce itself through demarcation from the surrounding world by selection of meaning. The meaning of the system is expressed through the goals, values, and logic of the farming processes. This theory is, however, less useful when studying the heterogeneous character of a farm as a mixture of biology, sociology, technology, and economy. ANT offers an approach that focuses on the heterogeneous network of interactions of human and non-human actors, such as knowledge, technology, money, farmland, animals, plants, etc., and on how these interactions depend on both the quality of the actors and the network context of interaction. But the theory is weak when it comes to explaining the self-organizing character of a farm enterprise. Using Peirce’s general semiotics as a platform, the two theories in combination open a new and radical framework for multidisciplinary studies of farm enterprises that may serve as a platform for communication between the different disciplines and approaches.
Prem E. (1995) Understanding complex systems: What can the speaking lion tell us? In: Steels L. (ed.) The biology and technology of intelligent autonomous agents. Springer, Berlin: 459–474. https://cepa.info/7738
The rebirth of complex systems in several distinct domains of research has posed new epistemic questions. Self-organizing systems as well as autonomous robots not only have a tendency to behave in an unpredictable way, they are also extremely difficult to analyse. In this paper we discuss three problems with neural networks that are important for complex systems in general. They are related to the proper design of a self-organizing system, to the role of the system engineer, and to the proper explanation of system behavior. We present a generally applicable solution, which is based on a “symbol grounding” neural network architecture. We also take a look at an implemented network which hints at the fact that grounding in this case must involve teleological terms. We then discuss the relation of this approach to the measurement problem in physics and point out similarities to existing positions in philosophy. However, it should be noted that our “solution” of the explanation problem may be judged as being a very sceptical one.
Probst G. J. B. (1985) Some cybernetic principles for the design, control, and development of social systems. Cybernetics and Systems 16(2–3): 171–180. https://cepa.info/3845
Excerpt: If a firm is to survive and be efficient in a complex environment that is constantly changing in unforeseeable ways, it is necessary for it to constantly adjust and adapt to such a large number of factors that the adaptation can only be carried out by taking into account self-organizing systems processes. We are therefore very much interested in the principles of self-organization that may be found in any viable system. In talking about a system-based approach to management of purposeful social systems, this is probably one of the most important concepts, though it is not the only one.
Reckmeyer W. J. (2016) Reflections on Constructing a Reality: The American Society for Cybernetics in the 1980s. Cybernetics & Human Knowing 23(1): 28–41.
The current focus and form of the American Society for Cybernetics (ASC) emerged during the 1980s, when the revitalized Society became a major organizing force for the field of cybernetics. As is the case with all self-organizing systems, the ASC did not spring forth fully formed like Athena from the brow of Zeus. Rather, it grew out of interactions among a small group of like-minded people who were interested in the scientific and practical aspects of cybernetics in general and of second-order cybernetics in particular. This paper summarizes the author’s reflections on those critical years, especially with respect to both the personal and professional connections that coalesced through various meetings and activities. Many of those undertakings were sponsored by the ASC, but many others were not. Yet they all helped generate a broader ASC-based community of cyberneticians. The paper closes with a look to the future, in terms of encouraging greater use of cybernetics to address significant societal challenges and improve the human condition in a global world.
Riegler A. (2008) The paradox of autonomy: The interaction between humans and autonomous cognitive artifacts. In: Dodig-Crnkovic G. & Stuart S. (eds.) Computing, philosophy, and cognitive science: The nexus and the liminal. Cambridge Scholars Publishing, Cambridge: 292–301. https://cepa.info/292
According to Thrun and others, personal service robots need increasingly more autonomy in order to function in the highly unpredictable company of humans. At the same time, the cognitive processes in artifacts will become increasingly alien to us. There are several reasons for this: 1. Maturana’s concept of structural determinism questions conventional forms of interaction. 2. Considerably different ways of embodiment result in incompatible referential frameworks (worldviews). 3. Engineers focus on the output of artifacts, whereas autonomous cognitive systems seek to control their input state. As a result, instructional interaction – the basic ingredient of conventional man-machine relationships – with genuinely autonomous systems will become impossible. Therefore the increase of autonomy will eventually lead to a paradox. Today we are still in a position to anthropomorphically trivialize the behavioral patterns of current robots (von Foerster). Eventually, however, when self-organizing systems have reached the high levels of autonomy we wished for, interacting with them may become impossible, since their goals will be completely independent of ours.