In the early 1990s, pioneering experiments on chemical autopoiesis (self-production) led, on the one hand, to the discovery of lipidic micro-compartments and their dynamics as useful models for origins-of-life research and, on the other, to the adoption of a systemic perspective in experimental research on minimal living cells. Moreover, the underlying idea of constructing cell models by assembling chemical components (the constructive, or synthetic, approach) has provided an operational field now recognized as bottom-up synthetic biology. This article discusses the origin of chemical autopoiesis and recapitulates the very early experiments, then presents examples of current developments that aim at assembling protocells and artificial/synthetic cells for both basic and applied science.
Steffe L. P. & Thompson P. W. (2000) Teaching experiment methodology: Underlying principles and essential elements. In: Lesh R. & Kelly A. E. (eds.) Research design in mathematics and science education. Lawrence Erlbaum, Hillsdale NJ: 267–307. https://cepa.info/2110
A primary purpose for using teaching experiment methodology is for researchers to experience, firsthand, students’ mathematical learning and reasoning. Without the experiences afforded by teaching, there would be no basis for coming to understand the powerful mathematical concepts and operations students construct or even for suspecting that these concepts and operations may be distinctly different from those of researchers. The constraints that researchers experience in teaching constitute a basis for understanding students’ mathematical constructions. As we, the authors, use it, “constraint” has a dual meaning. Researchers’ imputations to students of mathematical understandings and operations are constrained by the language and actions they are able to bring forth in students. They also are constrained by students’ mistakes, especially those mistakes that are essential; that is, mistakes that persist despite researchers’ best efforts to eliminate them. Sources of essential mistakes reside in students’ current mathematical knowledge. To experience constraints in these two senses is our primary reason for doing teaching experiments. The first type of constraint serves in building up a “mathematics of students” and the second type serves in circumscribing such a mathematics within conceptual boundaries.
Stern J. (2008) Decoupling, Sparsity, Randomization, and Objective Bayesian Inference. Cybernetics & Human Knowing 15(2): 49–68. https://cepa.info/3367
Decoupling is a general principle that allows us to separate simple components in a complex system. In statistics, decoupling is often expressed as independence, no association, or zero covariance relations. These relations are sharp statistical hypotheses that can be tested using the FBST (Full Bayesian Significance Test). Decoupling relations can also be introduced by some techniques of Design of Statistical Experiments (DSE), such as randomization. This article discusses the concepts of decoupling, randomization, and sparsely connected statistical models in the epistemological framework of cognitive constructivism.
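Since the abstract frames decoupling as a sharp zero-correlation hypothesis testable with the FBST, a toy sketch may help fix the idea. The sketch below is not the procedure from the article: it assumes a one-parameter bivariate-normal model (zero means, unit variances, flat prior on the correlation) and reproduces only the FBST e-value logic on a grid; the sample size, true correlation, and grid are illustrative assumptions.

# Toy illustration of the FBST e-value logic for a sharp "decoupling"
# hypothesis H: rho = 0 (zero correlation) in a bivariate normal model.
# Simplifying assumptions (not from the paper): zero means, unit variances,
# flat prior on rho over (-1, 1), grid-based posterior.
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: a weakly coupled pair of variables.
n, true_rho = 200, 0.15
cov = np.array([[1.0, true_rho], [true_rho, 1.0]])
x = rng.multivariate_normal([0.0, 0.0], cov, size=n)

def loglik(rho, data):
    # Log-likelihood of a bivariate normal with zero means, unit variances
    # (additive constants dropped).
    s11 = np.sum(data[:, 0] ** 2)
    s22 = np.sum(data[:, 1] ** 2)
    s12 = np.sum(data[:, 0] * data[:, 1])
    m = data.shape[0]
    return (-m * 0.5 * np.log(1 - rho ** 2)
            - (s11 - 2 * rho * s12 + s22) / (2 * (1 - rho ** 2)))

# Grid posterior for rho under a flat prior.
grid = np.linspace(-0.99, 0.99, 1999)
logpost = np.array([loglik(r, x) for r in grid])
post = np.exp(logpost - logpost.max())
post /= post.sum()

# FBST-style e-value: posterior mass whose density does not exceed the
# posterior density at the null point rho = 0.
dens_at_null = post[np.argmin(np.abs(grid))]
ev = post[post <= dens_at_null].sum()
print(f"posterior mode rho = {grid[post.argmax()]:.3f}, e-value for H: rho=0 -> {ev:.3f}")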
Enactivism claims that sensory-motor activity and embodiment are crucial in perceiving the environment and that machine vision could be a much simpler business if considered in this context. However, computational models of enactive vision are very rare and often rely on handcrafted control systems. In this article, we argue that the apparent complexity of the environment and of the robot brain can be significantly simplified if perception, behavior, and learning are allowed to co-develop on the same timescale. In doing so, robots become sensitive to, and actively exploit, characteristics of the environment that they can tackle within their own computational and physical constraints. We describe the application of this methodology in three sets of experiments: shape discrimination, car driving, and wheeled robot navigation. A further set of experiments, where the visual system can develop the receptive fields by means of unsupervised Hebbian learning, demonstrates that the receptive fields are consistently and significantly affected by the behavior of the system and differ from those predicted by most computational models of the visual cortex. Finally, we show that our robots can also replicate the performance deficiencies observed in experiments of motor deprivation with kittens.
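The abstract's final claim, that receptive fields developed by unsupervised Hebbian learning are shaped by the system's behavior, can be illustrated with a generic Hebbian sketch. The code below uses Oja's rule on toy input patches; the patch size, input statistics, and learning rate are assumptions chosen for illustration and do not reproduce the robot-vision setup of the experiments.

# Minimal sketch of unsupervised Hebbian (Oja-rule) receptive-field
# development of the kind the abstract refers to; inputs here are synthetic
# oriented patterns standing in for what a behaving robot would sample.
import numpy as np

rng = np.random.default_rng(1)

patch_dim = 16                               # flattened "retinal" patch
w = rng.normal(scale=0.1, size=patch_dim)    # receptive-field weights
eta = 0.01                                   # learning rate

for step in range(5000):
    # Toy input: oriented structure plus noise.
    x = np.sin(np.linspace(0, np.pi, patch_dim) + rng.uniform(0, np.pi))
    x += 0.3 * rng.normal(size=patch_dim)

    y = w @ x                       # unit activation
    w += eta * y * (x - y * w)      # Oja's rule: Hebbian term with decay

print("receptive field norm:", np.linalg.norm(w))  # converges toward 1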
Ulrich C., Tillema E. S., Hackenberg A. J. & Norton A. (2014) Constructivist Model Building: Empirical Examples From Mathematics Education. Constructivist Foundations 9(3): 328–339. https://constructivist.info/9/3/328
Context: This paper outlines how radical constructivist theory has led to a particular methodological technique, developing second-order models of student thinking, that has helped mathematics educators to be more effective teachers of their students. Problem: The paper addresses the problem of how radical constructivist theory has been used to explain and engender more viable adaptations to the complexities of teaching and learning. Method: The paper presents empirical data from teaching experiments that illustrate the process of second-order model building. Results: The result of the paper is an illustration of how second-order models are developed and how this process, as it progresses, supports teachers to be more effective. Implications: The paper implies that radical constructivism has the potential to impact practice.
Umpleby S. A. (2005) A history of the cybernetics movement in the United States. Journal of the Washington Academy of Sciences 91(2): 54–66. https://cepa.info/2763
Key events in the history of cybernetics and the American Society for Cybernetics are discussed: The origin of cybernetics in the Macy Foundation conferences held in the late 1940s and early 1950s; the pursuit of different interpretations of cybernetics by several professional societies; the reasons why the U. S. government supported or did not support cybernetics in the 1950s, 60s, and 70s; early experiments in cyberspace in the 1970s; conversations with Soviet scientists in the 1980s; the development of second order cybernetics in the 1990s; and increased interest in cybernetics in Europe and the U. S. in the 2000s due at least in part to improved understanding of the assumptions underlying the cybernetics movement. The history of cybernetics in the U. S. is viewed from the perspective of the American Society for Cybernetics (ASC). Several questions are addressed. Why was the ASC founded rather late, in 1964, about 10 years after the Macy Conferences ended? Why has the ASC remained small (300 or 400 members at its peak)? Why are there currently no departments or institutes of cybernetics in the US? How has thinking about cybernetics changed during the sixty-year history of cybernetics in the US? Since most professionals in the US now spend a few hours a day in “cyberspace,” why do most of them know nothing about cybernetics?
Umpleby S. A. (2008) A Short History of Cybernetics in the United States. Österreichische Zeitschrift für Geschichtswissenschaften 19(4): 28–40. https://cepa.info/2309
Key events in the history of cybernetics and the American Society for Cybernetics are discussed, among them the origin of cybernetics in the Macy Foundation conferences in the late 1940s and early 1950s; different interpretations of cybernetics by several professional societies; reasons why the U. S. government did or did not support cybernetics in the 1950s, 1960s, and 1970s; early experiments in cyberspace in the 1970s; conversations with Soviet scientists in the 1980s; the development of “second order” cybernetics; and increased interest in cybernetics in Europe and the United States in the 2000s, due at least in part to improved understanding of the assumptions underlying the cybernetics movement. The history of cybernetics in the United States is viewed from the perspective of the American Society for Cybernetics (ASC) and several questions are addressed as to its future.
Van Orden G. C., Holden J. G. & Turvey M. T. (2003) Self-organization of cognitive performance. Journal of Experimental Psychology: General 132(3): 331–350. https://cepa.info/4694
Background noise is the irregular variation across repeated measurements of human performance. Background noise remains after task and treatment effects are minimized. Background noise refers to intrinsic sources of variability, the intrinsic dynamics of mind and body, and the internal workings of a living being. Two experiments demonstrate 1/f scaling (pink noise) in simple reaction times and speeded word naming times, which round out a catalog of laboratory task demonstrations that background noise is pink noise. Ubiquitous pink noise suggests processes of mind and body that change each other’s dynamics. Such interaction-dominant dynamics are found in systems that self-organize their behavior. Self-organization provides an unconventional perspective on cognition, but this perspective closely parallels a contemporary interdisciplinary view of living systems.
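For readers unfamiliar with the 1/f criterion the abstract relies on, the following sketch shows the standard spectral check: estimate the slope of log power against log frequency, with a slope near -1 indicating pink noise. The synthetic series below stands in for a real reaction-time series and is an assumption of this illustration, not data from the experiments.

# Sketch of the standard spectral check for 1/f ("pink noise") scaling in a
# trial series: fit the slope of log power vs. log frequency.
import numpy as np

rng = np.random.default_rng(2)
n = 1024

# Generate an approximately 1/f series by shaping white noise in the
# frequency domain (toy surrogate for a reaction-time series).
freqs = np.fft.rfftfreq(n, d=1.0)
spectrum = rng.normal(size=freqs.size) + 1j * rng.normal(size=freqs.size)
spectrum[1:] /= np.sqrt(freqs[1:])      # amplitude ~ f^(-1/2)  =>  power ~ 1/f
spectrum[0] = 0.0
series = np.fft.irfft(spectrum, n)

# Periodogram and log-log regression over the nonzero frequencies.
power = np.abs(np.fft.rfft(series)) ** 2
slope, _ = np.polyfit(np.log10(freqs[1:]), np.log10(power[1:]), 1)
print(f"estimated spectral slope: {slope:.2f} (about -1 for pink noise)")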
Vörös S. (2017) Enacting Enaction: Conceptual Nest or Existential Mutation? Constructivist Foundations 12(2): 148–150. https://cepa.info/4064
Open peer commentary on the article “Enaction as a Lived Experience: Towards a Radical Neurophenomenology” by Claire Petitmengin. Upshot: I reflect on and expand upon three aspects of Petitmengin’s illuminating article. After (a) contrasting existential (Petitmengin) and theoretical (Kirchhoff and Hutto) views of neurophenomenology, I (b) embed Petitmengin’s account of the experiential dissolution of the hard problem of consciousness into a larger framework by drawing parallels with previous experiments on unitive/non-dual experiences (Deikman). Finally, I (c) raise the question of how seriously we are willing to take the pragmatics of investigating and cultivating lived experience both in phenomenological research and in education and science in general.
Walde P., Wick R., Fresta M., Mangone A. & Luisi P. (1994) Autopoietic self-reproduction of fatty-acid vesicles. Journal of the American Chemical Society 116(26): 11649–11654. https://cepa.info/5527
Conditions are described under which vesicles formed by caprylic acid and oleic acid in water are able to undergo autopoietic self-reproduction, namely an increase of their population number due to a reaction that takes place within the spherical boundary of the vesicles themselves. This is achieved by letting a certain amount of the neat water-insoluble caprylic or oleic anhydride hydrolyze at alkaline pH: the initial increase of the concentration of the released acid/carboxylate is extremely slow (several days to reach the conditions for spontaneous vesicle formation), but afterwards, the presence of vesicles brings about a rapid second phase leading to more and more vesicles being formed in an overall autocatalytic process. The catalytic power of the caprylic acid and oleic acid vesicles toward the hydrolysis of the corresponding anhydride is documented in a set of independent experiments. In these experiments, the hydrolysis was carried out in the presence of vesicles at a pH corresponding approximately to the pK of the acid in the vesicles. The process of autopoietic self-reproduction of caprylic acid and oleic acid vesicles is studied as a function of temperature: by increasing the temperature (up to 70 °C), the exponential time progress of vesicle formation tends to become steeper while the long initial slow phase is significantly shortened. The caprylic acid and oleic acid vesicles are characterized by electron microscopy and by determining their internal volume. The question of whether, and to what extent, these vesicles form a classical chemical equilibrium system (in which the free surfactant is in equilibrium with the aggregates) is also investigated.
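The two-phase kinetics described in the abstract (a long slow phase of uncatalyzed anhydride hydrolysis, followed by rapid autocatalytic growth once vesicles appear) can be caricatured with a minimal rate model. The rate constants, stoichiometry, and critical-concentration threshold below are illustrative assumptions, not values reported in the paper.

# Toy kinetic sketch of the lag phase followed by vesicle-catalyzed,
# autocatalytic hydrolysis; all parameters are illustrative only.
import numpy as np

k_slow = 1e-4      # uncatalyzed hydrolysis rate (per hour)
k_cat = 5e-2       # vesicle-catalyzed rate (per hour per unit of vesicle surfactant)
cac = 0.05         # toy critical concentration for spontaneous vesicle formation

anhydride = 1.0    # relative amount of neat anhydride
acid = 0.0         # released acid/carboxylate
vesicles = 0.0     # surfactant organized in vesicles
dt, t_end = 0.1, 2000.0

for step in range(int(t_end / dt)):
    rate = (k_slow + k_cat * vesicles) * anhydride   # catalysis by existing vesicles
    anhydride -= rate * dt
    acid += 2 * rate * dt                            # each anhydride yields two acids
    if acid > cac:                                   # above the threshold, excess acid joins vesicles
        vesicles += acid - cac
        acid = cac

print(f"anhydride left: {anhydride:.3f}, vesicle surfactant: {vesicles:.3f}")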