Agnew N. M. & Brown J. L. (1989) Foundations for a model of knowing II. Fallible but functional knowledge. Canadian Psychology 30(2): 168–183. https://cepa.info/7560
An evolving theory known as “constructivism” challenges the traditional view of how we generate and revise knowledge. Constructivism helps address a major issue raised by modern scholars of the history and philosophy of science and of decision theory. The question is: How do we reduce the search and solution space of complex and changing environments to “mind size” (i.e., to fit our limited memory and computational capacity)? One emerging answer is that we rely heavily upon robust presuppositions and simplified representations of environmental structure. However, such constructed knowledge is likely to be highly fallible, relying as it must on impoverished databases in the service of strong expectations or paradigms. In this paper we address two issues: Under what conditions can knowledge be highly fallible and at the same time highly functional? And can we make a plausible case, within this constructivist frame of reference, for realism, for knowledge that approximates “reality”?
Aufenvenne P., Egner H. & Elverfeldt K. (2014) On Climate Change Research, the Crisis of Science and Second-order Science. Constructivist Foundations 10(1): 120–129. https://cepa.info/1179
Context: This conceptual paper tries to tackle the advantages and the limitations that might arise from including second-order science in global climate change sciences, a research area that traditionally focuses on first-order approaches and that is currently attracting a lot of media and public attention. Problem: The high profile of climate change research seems to provoke a certain dilemma for scientists: despite the slowly increasing realization within the sciences that our knowledge is temporary, tentative, uncertain, and far from stable, the public expectations towards science and scientific knowledge are still the opposite: that scientific results should prove to be objective, reliable, and authoritative. As a way to handle the uncertainty, scientists tend to produce “varieties of scenarios” instead of clear statements, as well as reports that articulate different scientific opinions about the causes and dynamics of change (e.g., the IPCC). This might leave the impression of vague and indecisive results. As a result, esteem for the sciences seems to be decreasing within public perception. Method: This paper applies second-order observation to climate change research in particular and the sciences in general. Results: Within most sciences, it is still quite unusual to disclose and discuss the epistemological foundations of the respective research questions, methods and ways to interpret data, as research proceeds mainly from some version of realistic epistemological positions. A shift towards self-reflexive second-order science might offer possibilities for a return to a “less polarized” scientific and public debate on climate change because it points to knowledge that is in principle tentative, uncertain and fragmented, as well as to the theory- and observation-dependence of scientific work. Implications: The paper addresses the differences between first-order and second-order science as well as some challenges of science in general, which second-order science might address and disclose. Constructivist content: Second-order science used as observation praxis (second-order observation) for this specific field of research.
Bakken T., Hernes T. & Wiik E. (2009) An autopoietic understanding of “innovative organization”. In: Magalhães R. & Sanchez R. (eds.) Autopoiesis in organizations and information systems. Emerald, Bingley: 169–182.
Excerpt: We would argue that an autopoietic theory of organization is in fact also a theory of innovation. Without the possibility of novelty, autopoietic organization is hardly possible. A second argument we make in this chapter is that, contrary to much of the literature on organization and innovation, an autopoietic view does not consider the degree to which innovation takes place. Instead it considers how the nature of communication shapes expectations, thus influencing the search for novelty. If we assume that different functions within an organization operate according to different modes of communication, we may come to a different understanding of how the organization engages with novelty. Key to this understanding is that different organizational functions operate with different degrees of redundancy in their communication.
Balsemão Pires E. (2016) Second order ethics as therapy. Lambert Academic Publishing, Saarbrücken. https://cepa.info/4578
The classical formulation of the object of ethics refers to knowledge of the rules of adaptation of the human species to its natural environments, to the normative expectations supposed in others, and to the biographical evolution of the self. Accordingly, a doctrine of duties was built on three pillars, embracing duties towards nature, towards others and towards oneself. Notwithstanding the fact that human action obeys a variety of factors, including bio-physiological conditions and the dimensions of the social environment, ancient and modern metaphysical models of ethics favored the commendatory discourse about the predicates “right” and “wrong,” concurring toward ultimate goals. Ethical discussions consisted chiefly in the investigation of the adequacy of subordinate goals to the final ends of human action, or in the treatment of metaphysical questions related to free will and determinism, and the opposition between the intentionality of the voluntary conduct of man and the mechanical or quasi-mechanical responses of inferior organisms or machines. From a “second-order” approach to ethical action and imperatives, I propose with this book a critical analysis of metaphysical and Kantian ethics. Relevance: In “Ethics and Second-Order Cybernetics” (1992), Heinz von Foerster referred to the importance of applying his notion of “second-order cybernetics” to ethics and moral reasoning. Initially, second-order cybernetics was intended as an epistemological discussion of recursive operations in non-trivial machines, which were able to include their own self-awareness as observers in their evolving states. The application of his views to ethics entails new challenges. Following von Foerster’s essay, what I mean by “second-order ethics” is an attempt to identify the advantages of adopting his proposal, some of its consequences in the therapeutic field, and lines for new developments.
Social Systems Theory has a long and distinguished history. It has progressed from a mechanical model of social processes, to a biological model, to a process model, to models that encompass chaos, complexity, evolution and autopoiesis. Social systems design methodology is ready for the twenty-first century. From General Systems Theory’s early days of glory and hubris, through its days of decline and disparagement, through its diaspora into different disciplines, systems theory is today living up to its early expectations.
Purpose: To illuminate how systems tend to produce an output nobody expected. It is in these moments that observers may learn something about their own expectations. Design/methodology/approach: The paper discusses two cases in the history of art: faked Vermeer paintings and a test Heinz von Foerster conducted in the art department at the University of Illinois. Findings: McLuhan’s notion of the “collide-oscope” is applied to the way Heinz von Foerster (ab)uses images in his own texts; furthermore, it is applied to the way the BCL was organized. The formal structure of the “collide-oscope” offers a model of perception. Originality/value: Provides a discussion of a fundamental message of cybernetics: that we cannot escape collisions and disturbances. They are its essence. Relevance: This paper relates to the second-order cybernetics of Heinz von Foerster.
Bitbol M. (2019) Neurophenomenology of surprise. In: Depraz N. & Celle A. (eds.) Surprise at the intersection of phenomenology and linguistics. John Benjamins, Amsterdam: 9–21. https://cepa.info/6662
A theory of the central nervous system was recently formulated in general thermodynamical terms. According to it, the function of a central nervous system, and more generally of living autopoietic units, is to minimize “surprise.” The nervous system fulfills its task, and the animal maintains its viability, by changing their inner organization or their ecological niche so as to maximize the predictability of what happens to them and to minimize the correlative production of entropy. But what is the first-person correlate of this third-person description of the adaptation of living beings? What is the phenomenological counterpart of this state of minimal surprise? A plausible answer is that it amounts to a state of “déjà vu,” or to the monotony of habit. By contrast, says Henri Maldiney, surprise is lived as a sudden encounter with reality, a reality that is recognized as such because it is radically unexpected. Surprise is a concussion for the brain and a risk for a living being, but it can be lived in the first person as an awakening to what there is.
Context: There is a movement to change education so that it meets social expectations and uses the full potential of technology. However, there has been no significant breakthrough in this area and there is no clear evidence why. Problem: A potential issue explaining why education falls behind is the way educators focus on education. There is a possibility that a significant step in the learning process is routinely neglected. Method: Two different approaches to using IT in education are tested in two different environments: a university-level course based on constructionism and IBL projects for secondary school students. Results: It is possible to apply constructionism in education, but there are still problems. They are not related to how students construct knowledge, but to how they deconstruct knowledge. Implications: The most significant problem of deconstruction is that it requires creative skills. This makes it very difficult to formalize and to provide effective recommendations for its application. Constructivist content: Deconstruction is a prerequisite of construction, thus deconstructionism deserves more attention and study. A proper application of deconstructionism will make it possible to reconstruct education in a way that is impossible with the current approaches.
Temporal codes and neural temporal processing architectures (neural timing nets) that potentially subserve the perception of pitch and rhythm are discussed. We address (1) properties of neural interspike interval representations that may underlie basic aspects of musical tonality (e.g., octave similarities), (2) implementation of pattern-similarity comparisons between interval representations using feedforward timing nets, and (3) representation of rhythmic patterns in recurrent timing nets. Computer-simulated interval patterns produced by harmonic complex tones whose fundamentals are related through simple ratios showed higher correlations than those related through more complex ratios. Similarities between interval patterns produced by notes and chords resemble similarity judgements made by human listeners in probe-tone studies. Feedforward timing nets extract common temporal patterns from their inputs, so as to extract common pitch irrespective of timbre and vice versa. Recurrent timing nets build up complex temporal expectations over time through repetition, providing a means of representing rhythmic patterns. They constitute alternatives to oscillators and clocks, with which they share many common functional properties.
Clark A. (2013) Whatever next? Predictive brains, situated agents, and the future of cognitive science. The Behavioral and Brain Sciences 36(3): 181–204. https://cepa.info/7285
Brains, it has recently been argued, are essentially prediction machines. They are bundles of cells that support perception and action by constantly attempting to match incoming sensory inputs with top-down expectations or predictions. This is achieved using a hierarchical generative model that aims to minimize prediction error within a bidirectional cascade of cortical processing. Such accounts offer a unifying model of perception and action, illuminate the functional role of attention, and may neatly capture the special contribution of cortical processing to adaptive success. This target article critically examines this “hierarchical prediction machine” approach, concluding that it offers the best clue yet to the shape of a unified science of mind and action. Sections 1 and 2 lay out the key elements and implications of the approach. Section 3 explores a variety of pitfalls and challenges, spanning the evidential, the methodological, and the more properly conceptual. The paper ends (sections 4 and 5) by asking how such approaches might impact our more general vision of mind, experience, and agency.
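The core dynamic that this abstract describes, an internal estimate revised to reduce the mismatch with incoming sensory input, can be sketched in a toy form. The following is a minimal single-level illustration only, not Clark's hierarchical generative model; the function name, learning rate, and input stream are invented for the sketch.

```python
# Toy sketch of prediction-error minimization: a single "prediction unit"
# holds an estimate mu and revises it, sample by sample, to shrink the
# error between its top-down prediction and the bottom-up input.
# Hierarchical models stack many such units bidirectionally; this shows
# only the elementary error-reduction step.

def minimize_prediction_error(samples, mu=0.0, lr=0.1):
    """Take a gradient step on squared prediction error for each sample."""
    errors = []
    for x in samples:
        e = x - mu          # bottom-up prediction error
        mu += lr * e        # top-down estimate revised to reduce error
        errors.append(abs(e))
    return mu, errors

# With a stable input stream, the estimate converges on the input and
# the prediction error shrinks toward zero over repeated exposures.
mu, errors = minimize_prediction_error([1.0] * 50)
```

In this caricature, "perception" is the settled estimate and "surprise" is the residual error, which is why such accounts treat perception and learning as two timescales of the same error-minimizing process.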