al-Rifaie M. M., Leymarie F. F., Latham W. & Bishop M. J. (2017) Swarmic autopoiesis and computational creativity. Connection Science 29(4): 276–294. https://cepa.info/5027
In this paper two swarm intelligence algorithms are used: the first guides the “attention” of the swarm, and the second is responsible for the tracing mechanism. The attention mechanism is coordinated by agents of Stochastic Diffusion Search, which selectively attend to areas of a digital canvas (with line drawings) that contain sharper corners. Once the swarm’s attention is drawn to a line of interest with a sharp corner, the corresponding line segment is fed into the tracing algorithm, Dispersive Flies Optimisation, which “consumes” the input in order to generate a “swarmic sketch” of the input line. The sketching process is the result of the “flies” leaving traces of their movements on the digital canvas, which they then revisit repeatedly in an attempt to re-sketch the traces they left. This cyclic process is then introduced in the context of autopoiesis, and the philosophical aspects of the autopoietic artist are discussed. The autopoietic artist is described in two modalities: gluttonous and contented. In the Gluttonous Autopoietic Artist mode, the iterative focus on areas-of-rich-complexity means that, as the decoding of the input sketch unfolds, it leads to an ever less complex structure and ultimately to an empty canvas, thereby reifying the artwork’s “death”. In the Contented Autopoietic Artist mode, refocussing the autopoietic artist’s reflections on “meaning” onto different constitutive elements, and modifying her reconstitution, can induce different behaviours of autopoietic creativity; the autopoietic processes thus become less likely to fade away and more open-ended in their creative endeavour.
Open peer commentary on the article “Systems Theory and Algorithmic Futures: Interview with Elena Esposito” by Elena Esposito, Katrin Sold & Bénédicte Zimmermann. Abstract: Elena Esposito’s interview is a welcome opportunity to ponder the applications of Niklas Luhmann’s systems theory to current social themes. Among these, I consider the meaning of a theory of society in sociology, the value of critical theory, and the use of algorithms in communication. My views on these topics differ from Esposito’s in that she seems hesitant to state clearly the constructivist stance of systems theory’s proposals, in contrast with other approaches.
Bell T. & Lodi M. (2019) Constructing Computational Thinking Without Using Computers. Constructivist Foundations 14(3): 342–351. https://cepa.info/6049
Context: The meaning and implications of “computational thinking” (CT) are only now starting to be clarified, and the applications of the Computer Science (CS) Unplugged approach are becoming clearer as research is appearing. Now is a good time to consider how these relate, and what the opportunities and issues are for teachers using this approach. Problem: The goal here is to connect computational thinking explicitly to the CS Unplugged pedagogical approach, and to identify the context where Unplugged can be used effectively. Method: We take a theoretical approach, selecting a representative sample of CS Unplugged activities and mapping them to CT concepts. Results: The CS Unplugged activities map well onto commonly accepted CT concepts, although caution must be taken not to regard CS Unplugged as being a complete approach to CT education. Implications: There is evidence that CS Unplugged activities have a useful role to help students and teachers engage with CT, and to support hands-on activities with digital devices. Constructivist content: A constructivist approach to teaching computer science concepts can be particularly valuable at present because the public (and many teachers who are likely to have to become engaged with the subject) do not see CS as something they are likely to understand. Providing a clear way for anyone to construct this knowledge for themselves gives an opportunity to empower them when it might otherwise have been regarded as a domain that is open to only a select few.
This article investigates how a motivational module can drive an animat to learn a sensorimotor cognitive map and use it to generate flexible goal-directed behavior. Inspired by the rat’s hippocampus and neighboring areas, the time growing neural gas (TGNG) algorithm is used, which iteratively builds such a map by means of temporal Hebbian learning. The algorithm is combined with a motivation module, which activates goals, priorities, and consequent activity gradients in the developing cognitive map for the self-motivated control of behavior. The resulting motivated TGNG thus combines a neural cognitive map learning process with top-down, self-motivated, anticipatory behavior control mechanisms. While the algorithms involved are kept rather simple, motivated TGNG displays several emergent behavioral patterns, self-sustainment, and reliable latent learning. We conclude that motivated TGNG constitutes a solid basis for future studies on self-motivated cognitive map learning, on the design of further enhanced systems with additional cognitive modules, and on the realization of highly adaptive, interactive, goal-directed, cognitive systems. The system essentially constructs a spatial reality. At the same time it learns to interact with this reality, driven by its internal motivations (Hullian drives).
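The map-building step this abstract describes can be illustrated with a toy sketch. This is a hypothetical simplification, not the authors’ TGNG (the class name, threshold parameter, and goal-gradient method are invented): prototype nodes are inserted when an observation is far from all existing nodes, temporally successive winners are linked (temporal Hebbian learning), and a goal node spreads an activity gradient over the resulting graph for goal-directed control.

```python
import math

class CognitiveMap:
    """Toy growing sensorimotor map: prototype nodes plus temporal Hebbian edges."""

    def __init__(self, insert_threshold=1.0):
        self.nodes = []      # prototype state vectors
        self.edges = {}      # node index -> set of neighbour indices
        self.prev = None     # winning node at the previous time step
        self.theta = insert_threshold

    def _nearest(self, x):
        return min(range(len(self.nodes)),
                   key=lambda i: math.dist(self.nodes[i], x))

    def observe(self, x):
        """Integrate one state observation into the map."""
        if not self.nodes or math.dist(self.nodes[self._nearest(x)], x) > self.theta:
            self.nodes.append(list(x))               # grow: insert a new prototype
            self.edges[len(self.nodes) - 1] = set()
        w = self._nearest(x)
        if self.prev is not None and self.prev != w:
            self.edges[self.prev].add(w)             # temporal Hebbian link
            self.edges[w].add(self.prev)
        self.prev = w
        return w

    def gradient_to(self, goal):
        """Activity gradient as hop distance to the goal node (breadth-first)."""
        dist, frontier = {goal: 0}, [goal]
        while frontier:
            nxt = []
            for n in frontier:
                for m in self.edges[n]:
                    if m not in dist:
                        dist[m] = dist[n] + 1
                        nxt.append(m)
            frontier = nxt
        return dist

# Example: observing states along a line builds a chain of prototypes,
# over which a goal gradient can then be computed.
cm = CognitiveMap(insert_threshold=1.0)
for t in range(6):
    cm.observe((2.0 * t, 0.0))
grad = cm.gradient_to(0)
```

In the motivated TGNG described above, the motivation module would select the goal node and the agent would descend the resulting gradient; here the gradient is simply returned for inspection.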
Clarke B. (2021) How Can Algorithms Participate in Communication? Constructivist Foundations 16(3): 366–368. https://cepa.info/7183
Open peer commentary on the article “Systems Theory and Algorithmic Futures: Interview with Elena Esposito” by Elena Esposito, Katrin Sold & Bénédicte Zimmermann. Abstract: Esposito’s theoretical approach indicates the fertility, first, of transplanting social systems theory into other fields, and next, of bringing classical cybernetic topics such as computation by algorithms back into Luhmann’s multi-modal constructivist framework of differentiated system operations.
Esposito E. (2017) Artificial communication? The production of contingency by algorithms. Zeitschrift für Soziologie 46(4): 249–265. https://cepa.info/7142
Discourse about smart algorithms and digital social agents still refers primarily to the construction of artificial intelligence that reproduces the faculties of individuals. Recent developments, however, show that algorithms are more efficient when they abandon this goal and try instead to reproduce the ability to communicate. Algorithms that do not “think” like people can affect the ability to obtain and process information in society. Referring to the concept of communication in Niklas Luhmann’s theory of social systems, this paper critically reconstructs the debate on the computational turn of big data as the artificial reproduction not of intelligence but of communication. Self-learning algorithms parasitically exploit the contribution of web users (whether the users are aware of it or not) to a “virtual double contingency.” This provides society with information that is part of no one’s thoughts but nevertheless enters the communication circuit and raises its complexity. The concept of communication should be reconsidered to take account of these developments, including (or not) the possibility of communicating with algorithms.
Esposito E. (2021) Author’s Response: Opacity and Complexity of Learning Black Boxes. Constructivist Foundations 16(3): 377–380. https://cepa.info/7187
Abstract: Non-transparent machine learning algorithms can be described as non-trivial machines that need not be understood but can be controlled as communication partners. From the perspective of sociological systems theory, the normative component of control should be addressed with a critical attitude, observing what is normal as improbable.
Esposito E., Sold K. & Zimmermann B. (2021) Systems Theory and Algorithmic Futures: Interview with Elena Esposito. Constructivist Foundations 16(3): 356–361. https://cepa.info/7180
Abstract: By introducing core concepts of Niklas Luhmann’s theory of social systems, Elena Esposito shows their relevance for contemporary social sciences and the study of unsettled times. Contending that society is made not by people but by what connects them, as Luhmann does with his concept of communication, creates a fertile ground for addressing societal challenges as diverse as the Corona pandemic or the algorithmic revolution. Esposito more broadly sees in systems theory a relevant contribution to critical theory and a genuine alternative to its Frankfurt School version, while extending its reach to further conceptual refinement and new empirical issues. Fueling such refinement is her analysis of time and the complex intertwinement between past, present and future, a core issue that runs throughout her work. Her current study on the future as a prediction caught between science and divination offers a fascinating empirical case for it, drawing a thought-provoking parallel between the way algorithmic predictions are constructed today and how divinatory predictions were constructed in ancient times. Keywords: Algorithms, communication, critical theory, future, heterarchy, Luhmann, paradox, prediction, semantics, sociology, subsystems, systems theory, time.
Georgeon O. L., Mille A. & Gay S. L. (2016) Intelligence artificielle sans données ontologiques sur une réalité présupposée [Artificial intelligence without ontological data about a presupposed reality]. Intellectica 65: 143–168. https://cepa.info/7341
This paper introduces an original model to provide software agents and robots with the capacity of learning by interpreting regularities in their stream of sensorimotor experience rather than by exploiting data that would give them ontological information about a predefined domain. Specifically, this model pulls inspiration from: a) the movement of embodied cognition, b) the philosophy of knowledge, c) constructivist epistemology, and d) the theory of enaction. Respectively to these four influences: a) Our agents discover their environment through their body’s active capacity of experimentation. b) They do not know their environment “as such” but only “as they can experience it.” c) They construct knowledge from regularities of sensorimotor experience. d) They have some level of constitutive autonomy. Technically, this model differs from the traditional perception/cognition/action model in that it rests upon atomic sensorimotor experiences rather than separating percepts from actions. We present algorithms that implement this model, and we describe experiments to validate these algorithms. These experiments show that the agents exhibit a certain form of intelligence through their behaviors, as they construct proto-ontological knowledge of the phenomena that appear to them when they observe persistent possibilities of sensorimotor experiences in time and space. These results promote a theory of artificial intelligence without ontological data about a presupposed reality. An application includes a more robust way of creating robots capable of constructing their own knowledge and goals in the real world, which could be initially unknown to them and un-modeled by their designers.
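The idea of learning from regularities in sensorimotor experience, rather than from ontological data, can be given a rough illustration with a toy interaction-driven agent. This is a hypothetical simplification, not the paper’s algorithm: the environment, the experiments `e1`/`e2`, and the valence table are invented. The agent perceives only (experiment, result) pairs with innate valences, learns which interaction tends to follow which, and chooses experiments whose predicted interaction has the best valence.

```python
import random
from collections import defaultdict

# Invented toy environment: repeating an experiment yields result 'r1',
# switching yields 'r2'. This is the regularity the agent must discover
# without any model of the world.
def environment(prev_experiment, experiment):
    return 'r1' if experiment == prev_experiment else 'r2'

EXPERIMENTS = ['e1', 'e2']
# Innate valences of interactions (experiment, result): the agent's only
# built-in preferences, not knowledge about the environment.
VALENCE = {('e1', 'r1'): 1, ('e1', 'r2'): -1,
           ('e2', 'r1'): 1, ('e2', 'r2'): -1}

def run_agent(steps=50, seed=0):
    rng = random.Random(seed)
    counts = defaultdict(lambda: defaultdict(int))  # context -> interaction -> count
    prev_exp, context = None, None                  # context = previous interaction
    valences = []
    for _ in range(steps):
        def expected(e):
            # Predict e's outcome in this context from past regularities;
            # an untried experiment gets a neutral expectation of 0.
            seen = {i: n for i, n in counts[context].items() if i[0] == e}
            if not seen:
                return 0
            return VALENCE[max(seen, key=seen.get)]
        # Pick the experiment with the best expected valence (random tie-break).
        choice = max(EXPERIMENTS, key=lambda e: (expected(e), rng.random()))
        result = environment(prev_exp, choice)
        interaction = (choice, result)
        counts[context][interaction] += 1           # learn the regularity
        valences.append(VALENCE[interaction])
        prev_exp, context = choice, interaction
    return valences

valences = run_agent()
```

After a short exploratory phase the agent settles into repeating an experiment, enacting the positive-valence interaction it has learned to anticipate; it has constructed a usable regularity without ever being given a model of the environment.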