Context: The enactivist tradition, out of which neurophenomenology arose, rejects various internalisms – including the representationalist and information-processing metaphors – but remains wedded to one further internalism: the claim that the structure of perceptual experience is directly, constitutively linked only to internal, brain-based dynamics. Problem: I aim to reject this internalism and defend an alternative analysis. Method: The paper presents a direct-realist, externalist, sensorimotor account of perceptual experience. It uses the concept of counterfactual meaningful action to defend this view against various objections. Results: This account of experience matches certain first-person features of experience better than an internalist account could. It is fully tractable as “normal science.” Implications: The neuroscientific conception of brain function should change from that of internal representation or modelling to that of enabling meaningful, embodied action in ways that constitutively involve the world. Neurophenomenology should aim to match the structure of first-person experience with the structure of meaningful agent-world interactions, not with that of brain dynamics. Constructivist content: The sensorimotor approach shows us what external objects are, such that we may enact them, and what experience is, such that it may present us with those enacted objects.
Brier S. (1997) What is a possible ontological and epistemological framework for a true universal “information science”? The suggestion of a cybersemiotics. World Futures: The Journal of New Paradigm Research 49(3–4): 287–308.
The subject of the article is what the paradigmatic framework for a universal information science or informatics should be and what kind of science we can expect it to be. The mechanistic and rationalistic information-processing paradigm of cognitive science is the dominant research program in this area, which is heavily shaped by computer science and informatics. It is pointed out that this logical and mechanistic approach is unable to give an understanding of human signification and its basis in the connotations of biological and social relationships. The paper then discusses – on the basis of previous work – the ontological and epistemological problems of the idea of “information science” and constructs an alternative non-mechanistic view based on the idea of autopoiesis from second-order cybernetics and Peirce’s concepts of chaos and evolution. These ideas are related to the cybersemiotic framework for transdisciplinary research on information and communication which the author has developed in other papers. Cybersemiotics is an integration of second-order cybernetics, Peirce’s triadic semiosis and Wittgenstein’s theory of language games into a non-Cartesian cognitive science.
Brito C. F. & Marques V. X. (2016) Is there a role for computation in the enactive paradigm? In: Müller V. C. (ed.) Fundamental issues of artificial intelligence. Springer, Cham: 79–94. https://cepa.info/5719
The main contribution of this paper is a naturalized account of the phenomenon of computation. The key idea for the development of this account is the identification of the notion of syntactical processing (or information processing) with the dynamical evolution of a constrained physical process, based on the observation that both evolve according to an arbitrary set of rules. This identification, in turn, revealed that, from the physical point of view, computation could be understood in terms of the operation of a component subdivided into two parts, (a) the constrained process and (b) the constraints that control its dynamics, where the interactions with the rest of the system are mediated by configurational changes of the constrained process. The immediate consequence of this analysis is the observation that this notion of computation can be readily integrated into the enactive paradigm of cognition.
Brooks R. A. (1991) Intelligence without representation. Artificial Intelligence 47(1–3): 139–159.
Artificial intelligence research has foundered on the issue of representation. When intelligence is approached in an incremental manner, with strict reliance on interfacing to the real world through perception and action, reliance on representation disappears. In this paper we outline our approach to incrementally building complete intelligent Creatures. The fundamental decomposition of the intelligent system is not into independent information processing units which must interface with each other via representations. Instead, the intelligent system is decomposed into independent and parallel activity producers which all interface directly to the world through perception and action, rather than interface to each other particularly much. The notions of central and peripheral systems evaporate – everything is both central and peripheral. Based on these principles we have built a very successful series of mobile robots which operate without supervision as Creatures in standard office environments.
Bruni J. (2014) Expanding the self-referential paradox: The Macy conferences and the second wave of cybernetic thinking. In: Arnold D. P. (ed.) Traditions of systems theory: Major figures and contemporary developments. Routledge, New York: 78–83. https://cepa.info/2327
According to the American Society for Cybernetics (2012), there is no unified comprehensive account of a far-reaching narrative that takes into account all of the Macy Conferences and what was discussed and accomplished at these meetings. This chapter will thus propose how group dialogues on concepts such as information and feedback allowed the Macy Conferences to act as a catalyst for second-order systems theory, when first-order, steady-state models of homeostasis became supplanted by those of self-reference in observing systems. I will trace how such a development transpired through a conferences-wide interdisciplinary mindset that promoted the idea of reflexivity. According to N. Katherine Hayles, the conferences’ singular achievement was to create a “new paradigm” for “looking at human beings … as information-processing entities who are essentially similar to intelligent machines,” by routing Claude Shannon’s information theory through Warren McCulloch’s “model of neural functioning” and John von Neumann’s work in “biological systems” and then capitalizing on Norbert Wiener’s “visionary” talent for disseminating the “larger implications” of such a paradigm shift. From this perspective, the most crucial work would achieve its fruition after the end of the Macy conferences. Yet the foundations for such work were, perforce, cast during the discussions at the conferences that epitomize science in the making and, as such, warrant our careful attention.
Cariani P. (1991) Some epistemological implications of devices which construct their own sensors and effectors. In: Varela F. J. & Bourgine P. (eds.) Towards a Practice of Autonomous Systems. MIT Press, Cambridge MA: 484–493. https://cepa.info/6407
Various classes of physical devices having adaptive sensors, coordinative parts, and/or effectors are considered with respect to the kinds of informational relations they permit the device to have with its environment. Devices which can evolve their own physical hardware can expand their repertoires of measurements, computations, and controls in a manner analogous to the structural evolution of sensory, coordinative, and effector organs over phylogeny. In particular, those devices which have the capacity to adaptively construct new sensors and effectors gain the ability to modify the relationship between their internal states and the world at large. Such devices in effect adaptively create their own (semantic) categories rather than having them explicitly specified by an external designer. An electrochemical device built in the 1950s which evolved the capacity to sense sound is discussed as a rudimentary exemplar of a class of adaptive, sensor-evolving devices. Such devices could potentially serve as semantically-adaptive front-ends for computationally-adaptive classifiers, by altering the feature primitives (primitive categories) that the classifier operates with. Networks composed of elements capable of evolving new sensors and effectors could evolve new channels for inter-element signalling by creating new effector-sensor combinations between elements. The new channels could be formed orthogonal to pre-existing ones, in effect increasing the dimensionality of the signal space. Such variable-dimension signalling networks might potentially lead to more flexible modes of information processing and storage. Devices having the means to both choose their sensors (primitive perceptual categories) and effectors (primitive action categories) as well as the coordinative mappings between the two sets would acquire a degree of epistemic autonomy not yet found in contemporary devices.
Cariani P. (2013) Self-organization in Brains. Constructivist Foundations 9(1): 35–38. https://constructivist.info/9/1/035
Open peer commentary on the article “Exploration of the Functional Properties of Interaction: Computer Models and Pointers for Theory” by Etienne B. Roesch, Matthew Spencer, Slawomir J. Nasuto, Thomas Tanay & J. Mark Bishop. Upshot: Artificial life computer simulations hold the potential for demonstrating the kinds of bottom-up, cooperative, self-organizing processes that underlie the self-construction of observer-actors. This is a worthwhile, if limited, attempt to use such simulations to address this set of core constructivist concerns. Although we concur with much of the philosophical perspective in the target article, we take issue with some of the implied positions related to dynamical systems, sensorimotor contingency theory, and neural information processing. Ideally, we would like to see computational approaches more directly address adaptive, constructive processes and mechanisms operant in minds and brains. This would entail using tasks that are more relevant to the psychology of human and animal learning than performing digit sums or sorts. It also could involve relating the dynamics of agents more explicitly to ensembles of communicating neural assemblies.
Cheung K. C. (1993) On meaningful measurement: Issues of reliability and validity from a humanistic constructivist information-processing perspective. In: Proceedings of the Third International Seminar on Misconceptions and Educational Strategies in Science and Mathematics. Cornell University, Ithaca, 1–4 August 1993. Misconceptions Trust, Ithaca NY. https://cepa.info/7243
In the past decade, there has been ample interest in the assessment of cognitive and affective processes and products for the purposes of meaningful learning. Meaningful measurement (MM) has been proposed which is in accordance with a humanistic constructivist information-processing perspective. Students’ responses to the assessment tasks are evaluated according to an item response measurement model, together with a hypothesized model detailing the progressive forms of knowing/competence under examination. There is a possibility of incorporating student errors and alternative frameworks into these evaluation procedures. Meaningful measurement drives us to examine the composite concepts of “ability” and “difficulty.” Under the rubric of meaningful measurement, validity assessment (i.e. internal and external validities) is essentially the same as an inquiry into the meanings afforded by the measurements. Reliability, measured in terms of standard errors of measurement, is guaranteed within acceptable limits if testing validity is secured. Further evidence of validity may be provided by in-depth analyses of how “epistemic subjects” of different levels of competence and proficiency engage in different types of assessment tasks, where affective and metacognitive behaviors may be examined as well. These ways of undertaking MM can be codified by proposing a three-level conceptualization of MM, where reliability and validity are central issues for an explication of this conceptualization.
Christensen W. D. & Hooker C. A. (2000) An interactivist-constructivist approach to intelligence: Self-directed anticipative learning. Philosophical Psychology 13(1): 5–45. https://cepa.info/4156
This paper outlines an original interactivist–constructivist (I-C) approach to modelling intelligence and learning as a dynamical embodied form of adaptiveness and explores some applications of I-C to understanding the way cognitive learning is realized in the brain. Two key ideas for conceptualizing intelligence within this framework are developed. These are: (1) intelligence is centrally concerned with the capacity for coherent, context-sensitive, self-directed management of interaction; and (2) the primary model for cognitive learning is anticipative skill construction. Self-directedness is a capacity for integrative process modulation which allows a system to “steer” itself through its world by anticipatively matching its own viability requirements to interaction with its environment. Because the adaptive interaction processes required of intelligent systems are too complex for effective action to be prespecified (e.g. genetically) learning is an important component of intelligence. A model of self-directed anticipative learning (SDAL) is formulated based on interactive skill construction, and argued to constitute a central constructivist process involved in cognitive development. SDAL illuminates the capacity of intelligent learners to start with the vague, poorly defined problems typically posed in realistic learning situations and progressively refine them, transforming them into problems with sufficient structure to guide the construction of a solution. Finally, some of the implications of I-C for modelling of the neuronal basis of intelligence and learning are explored; in particular, Quartz and Sejnowski’s recent neural constructivism paradigm, enriched by Montague and Sejnowski’s dopaminergic model of anticipative–predictive neural learning, is assessed as a promising, but incomplete, contribution to this approach. The paper concludes with a fourfold reflection on the divergence in cognitive modelling philosophy between the I-C and the traditional computational information processing approaches.
Cobb P. (1987) Information-processing psychology and mathematics education: A constructivist perspective. Journal of Mathematical Behavior 6(1): 3–40. https://cepa.info/2968
Discusses the implications of information-processing psychology for mathematics education, with a focus on the works of schema theorists such as D. E. Rumelhart and D. A. Norman and R. Glaser and production system theorists such as J. H. Larkin, J. G. Greeno, and J. R. Anderson. Learning is considered in terms of the actor’s and the observer’s perspective and the distinction between declarative and procedural knowledge. Comprehension and meaning in mathematics also are considered. The role of abstraction and generalization in the acquisition of mathematical knowledge is discussed, and the difference between helping children to “see,” as opposed to construct, abstract relationships is elucidated. The goal of teaching is to help students modify or restructure their existing schema in predetermined ways by finding instructional representations that enable students to construct their own expert representations.