Brito C. F. & Marques V. X. (2016) Is there a role for computation in the enactive paradigm? In: Müller V. C. (ed.) Fundamental issues of artificial intelligence. Springer, Cham: 79–94. https://cepa.info/5719
The main contribution of this paper is a naturalized account of the phenomenon of computation. The key idea for the development of this account is the identification of the notion of syntactical processing (or information processing) with the dynamical evolution of a constrained physical process, based on the observation that both evolve according to an arbitrary set of rules. This identification, in turn, revealed that, from the physical point of view, computation could be understood in terms of the operation of a component subdivided into two parts, (a) the constrained process and (b) the constraints that control its dynamics, where the interactions with the rest of the system are mediated by configurational changes of the constrained process. The immediate consequence of this analysis is the observation that this notion of computation can be readily integrated into the enactive paradigm of cognition.
Chandler J. L. R. (2000) Complexity IX: Closure over the organization of a scientific truth. In: Chandler J. & Van de Vijver G. (eds.) Closure: Emergent organizations and their dynamics. New York Academy of Sciences, New York: 75–90.
The specificity of human knowledge allows one to construct specific truths about human behavior. A structural notation and language for describing a complex, hierarchically organized biological system was developed for the explicit purpose of analyzing the origins of health and disease. A specific application of these concepts to a specific patient (such as an individual suffering from the heritable disease sickle cell anemia) requires a systematic formulation of a scientific truth. No universal law is applicable. The value of a clinical truth for the patient, as well as for the physician and society, is substantial. This value has moral, ethical, and legal weight. Both physics and chemistry use a universal external invariant reference system. Human beings and other living organisms function by an internal reference system that is neither invariant nor universal. In order to address the complexity of scientific truths within living systems, a mathematical graph is constructed from observations, descriptions, and symbolizations of the relevant human scientific activities and is placed in mutual coreference with three philosophical theories of truth. Consistency within the referencing relations of the graphic object creates an image of complex truths. When mapped over degrees of internal organization, the structural consistencies can form hierarchically transitive (many-to-one) relations, creating redundancies that confirm one another. Both structural and dynamic information can be composed within the graphic framework. The redundancies intrinsic to the degrees-of-organization notation, semantics, and syntax augment one another in the search for scientific truth. The degree of certitude emerging from structural implications increases with the number of hierarchical degrees of organization invoked to represent the various behaviors of complex systems (as illustrated by the sickle cell anemia example). The successful synthesis of the complex image (or complex simple) of a scientific truth approaches the Heideggerian notion of identity in the sense that A = A and A is A.
Diettrich O. (1997) Sprache als Theorie: Von der Rolle der Sprache im Lichte einer konstruktivistischen Erkenntnistheorie [Language as theory: On the role of language in the light of a constructivist epistemology]. Papiere zur Linguistik 56(1): 77–106. https://cepa.info/5340
Theories and languages have in common that they aim at describing the world and the experiences made in the world. The specificity of theories is based on the fact that they code certain laws of nature. The specificity of languages is based on the fact that they code our worldview by means of their syntax. Mathematics, too, can be considered a theory insofar as it codes its constituting axioms. Language can achieve the objectivity postulated by analytical philosophy only if it can refer to a mathematics and logic that are objective in the sense of Platonism and based on a definitive set of axioms, or if the worldview concerned is definitive and based upon an objective (and therefore definitive) set of laws of nature. The first way is blocked by Gödel’s incompleteness theorem. The objectivity of the laws of nature required for the second way is questioned as well by what is called constructivist evolutionary epistemology (CEE): the perceived patterns and regularities from which we derive the laws of nature are considered by the CEE to be invariants of inborn cognitive (sensory) operators. The so-called laws of nature are thus the result of cognitive evolution and therefore are human-specific. Whether, for example, we would identify the law of energy conservation, which in physics results from the homogeneity of time, depends on the mental time-metric generator defining what is homogeneous in time. If cognitive operators are extended by means of experimental operators, the result can be expressed in classical terms if both commute in the sense of operator algebra (quantitative extensions). Otherwise the results would be inconsistent with the classical worldview and would require non-classical approaches such as quantum mechanics (qualitative extensions). As qualitative extensions can never be excluded from future experimental research, it follows that the development of theories cannot converge towards a definitive set of laws of nature or a definitive ‘theory of everything’ describing the structure of reality. The structures of mathematics and logic we use also have to be considered invariants of mental operators. It turns out that Gödel’s incompleteness theorem has to be seen as an analogue of the incompleteness of physical theories due to possible qualitative experimental extensions. Language, therefore, cannot be considered an objective depiction of independently existing facts and matters, but only a theory generating propositions consistent with our worldview. The competence of language is based on the fact that the mental mechanisms generating the ontology we use in our syntax are related to those generating our perceptions. Something similar applies to the relationship between the operators generating perceived and mathematical structures, which enable us to compress empirical data algorithmically (i.e., to transform them into mathematically articulated theories) and then to extrapolate them by means of the theory concerned (inductive inference). An analogous mechanism establishes our ability to compress verbal texts semantically (i.e., to reduce them to their meaning) and then to extrapolate them (i.e., to draw correct conclusions within the framework of the meaning concerned). This suggests a modified notion of meaning, seeing it as a linguistic analogue of theories. Like physical and mathematical theories, languages can be extended qualitatively, particularly by means of metaphorical combinations of semantically incompatible elements. The development of languages towards their present richness can be seen as a process of ongoing metaphorization. This leads to some parallels between verbal, cultural and genetic communication.
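Diettrich’s commutation criterion for distinguishing quantitative from qualitative extensions can be stated compactly. In the sketch below, C and E are placeholder symbols of our own for a cognitive and an experimental operator; they are not notation taken from the paper.

```latex
% C: cognitive (perceptual) operator, E: experimental operator.
% Placeholder symbols for illustration, not Diettrich's own notation.
\[
  [C,E] = CE - EC, \qquad
  \begin{cases}
    [C,E] = 0    & \text{quantitative extension: results remain expressible in classical terms,} \\
    [C,E] \neq 0 & \text{qualitative extension: a non-classical description is required.}
  \end{cases}
\]
```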
Gallagher S. (2009) The key to the Chinese Room. In: Leidlmair K. (ed.) After cognitivism: A reassessment of cognitive science and philosophy. Springer, Dordrecht: 87–96. https://cepa.info/4208
Excerpt: The “systems reply,” for example, states that it may not be the syntax alone, but the whole system – the syntax and the physics (the person, but also the room, the Chinese characters, the rule ledger, etc.) – that generates the semantics. My intention in this paper is not to champion the systems reply or to use it to defend Strong AI. But I’ll take the systems reply as my point of departure, and I’ll begin by asking: What precisely are the elements of the system, or what other elements need to be added to the system if we are to explain semantics? I’ll develop this view along lines that also incorporate aspects of the “robot reply,” which argues that the system has to be embodied in some way, and exposed to the world outside of the CR. This kind of approach has already been outlined by others (Rey 1986; Harnad 1989, 2002; and especially Crane 2003), but I don’t follow these lines back to the position of an enhanced and strengthened AI. Properly constructed, this hybrid systems/robot reply – or what I’ll call more generally, the systems approach – doesn’t lead us back to the tenets of Strong AI, but can actually serve Searle’s critique. Indeed, I’ll suggest that the best systems approach is already to be found in Searle’s own work, although Searle misses something important in his rejection of the systems reply and in framing his answer to the question of semantics in terms of the biological nature of the brain.
Glasersfeld E. von (1969) Semantics and the syntactic classification of words. In: Proceedings of the third international conference on computational linguistics. ICCL, Sånga Säby. https://cepa.info/1308
Traditional grammars classify words according to generic syntactic functions or morphological characteristics. For teaching humans and for descriptive linguistics this seemed sufficient. The advent of computers has changed the situation. Since machines are devoid of experiential knowledge, they need a more explicit grammar to handle natural language. Correlational Grammar is an attempt in that direction. The paper describes parts of correlational syntax and shows how a highly differentiated syntax can be used to establish word classes for which an intensional semantic definition can then be found. It exemplifies this approach in two areas of grammar: predicative adjectives and transitive verbs. The classification serves to eliminate ambiguity and spurious computer interpretations of natural language sentences.
Glasersfeld E. von & Notarmarco B. (1968) Some adjective classes derived from correlational grammar. The Georgia Institute for Research, Athens GA. https://cepa.info/1307
The paper demonstrates the possibility of deriving, from the Correlational Grammar developed solely for the purpose of automatic sentence analysis, a classification of words that could be useful in language analysis and language teaching. A group of some 90 frequent English adjectives serves as example; they are sorted into ten classes according to their behavior in strings of the type “John is easy to please,” “John is eager to please,” “John is likely to please,” etc. It is suggested that the members of at least some of these classes show common semantic features that could be used to obtain intensional definitions which would theoretically confirm the empirically derived extensional definitions supplied by correlational grammar.
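The classification procedure described above can be illustrated with a minimal sketch: adjectives that accept the same set of syntactic frames end up in the same class. The frames, adjectives and acceptability judgements below are invented for illustration and do not reproduce the paper’s data or its ten classes.

```python
# Illustrative sketch only: the frames, adjectives and acceptability
# judgements below are invented, not the paper's data or its ten classes.

FRAMES = [
    "It is ADJ to please John.",             # "easy"-type frame
    "John is ADJ to please us.",             # "eager"/"likely"-type frame
    "It is ADJ that John will please them.", # finite-clause frame
]

# One (hypothetical) acceptability judgement per frame, in order.
JUDGEMENTS = {
    "easy":    (True,  False, False),
    "tough":   (True,  False, False),
    "eager":   (False, True,  False),
    "anxious": (False, True,  False),
    "likely":  (False, True,  True),
    "certain": (False, True,  True),
}

def classify(judgements):
    """Group adjectives that share the same acceptability profile."""
    classes = {}
    for adjective, profile in judgements.items():
        classes.setdefault(profile, []).append(adjective)
    return classes

if __name__ == "__main__":
    for profile, members in classify(JUDGEMENTS).items():
        accepted = [frame for frame, ok in zip(FRAMES, profile) if ok]
        print(members, "->", accepted)
```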
Glasersfeld E. von & Pisani P. P. (1970) The multistore parser for hierarchical syntactic structures. Communications of the ACM 13(2): 74–82.
The second version of the Multistore Sentence Analysis System, implemented on an IBM 360/65, uses a correlational grammar to parse English sentences and displays the parsings as hierarchical syntactic structures comparable to tree diagrams. Since correlational syntax comprises much that is usually considered semantic information, the system demonstrates ways and means of resolving certain types of ambiguity that are frequent obstacles to univocal sentence analysis. Particular emphasis is given to the “significant address” method of programming which was developed to speed up the procedure (processing times, at present, are 0.5–1.5 seconds for sentences up to 16 words). By structuring an area of the central core in such a way that the individual location of bytes becomes significant, the shifting of information is avoided; the use of binary masks further simplifies the many operations of comparison required by the procedure. Samples of print-out illustrate some salient features of the system.
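The “significant address” idea, as far as it can be reconstructed from this description, amounts to letting the position of a byte or bit carry meaning, so that compatibility tests become mask operations rather than data movement. The following minimal sketch illustrates that general idea only; the property names, layout and masks are invented and are not the Multistore system’s actual tables or correlators.

```python
# Rough illustration of a fixed-layout, mask-based representation: each bit
# position stands for one (invented) syntactic property, so a compatibility
# test is a bitwise AND instead of a search or a copy.  This does not
# reproduce the Multistore system's actual core layout or correlators.

NOUN, VERB, ADJ, PLURAL, TRANSITIVE = (1 << i for i in range(5))

# Because the meaning of each bit is fixed by its position, entries never
# need to be shifted or rearranged when new words are examined.
LEXICON = {
    "dogs":  NOUN | PLURAL,
    "bites": VERB | TRANSITIVE,
    "big":   ADJ,
}

def satisfies(word_bits: int, mask: int) -> bool:
    """True if every property required by the mask is present in the word."""
    return word_bits & mask == mask

if __name__ == "__main__":
    needs_plural_noun = NOUN | PLURAL   # e.g. a slot requiring a plural noun
    for word, bits in LEXICON.items():
        print(word, satisfies(bits, needs_plural_noun))
```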
Gunji Y. & Nakamura T. (1991) Time reverse automata patterns generated by Spencer-Brown’s modulator: Invertibility based on autopoiesis. Biosystems 25(3): 151–177.
In the present paper the self-consistency or operational closure of autopoiesis is described by introducing time explicitly. It is, however, an extension of Spencer-Brown’s idea of time. The definition of time is segregated into two parts, corresponding to the syntax and semantics of language, respectively. In this context, time reversibility is defined by the formalization of the relationship between time and self-consistency. This idea has also been discussed in the context of designation and/or naming. Here we will discuss it in the context of cellular automata and explain the structure of one-to-many type mappings. Our approach is the first attempt to extend autopoietic systems in terms of dynamics. It illustrates how to introduce an autopoietic time which looks irreversible, but without the concept of entropy.
Nasuto S. J. & Bishop J. M. (2013) Of (zombie) mice and animats. In: Müller V. C. (ed.) Philosophy and theory of artificial intelligence. Springer, Berlin: 85–106. https://cepa.info/4829
The Chinese Room Argument (CRA) purports to show that ‘syntax is not sufficient for semantics’; an argument which led John Searle to conclude that ‘programs are not minds’ and hence that no computational device can ever exhibit true understanding. Yet, although this controversial argument has received a series of criticisms, it has withstood all attempts at decisive rebuttal so far. One of the classical responses to the CRA has been to equip a purely computational device with a physical robot body. This response, although partially addressed in one of Searle’s original counter-arguments – the ‘robot reply’ – has more recently gained traction with the development of embodiment and enactivism, two novel approaches to cognitive science that have been exciting roboticists and philosophers alike. Furthermore, recent technological advances – blending biological beings with computational systems – have started to be developed which superficially suggest that mind may be instantiated in computing devices after all. This paper will argue (a) that embodiment alone, when based on a weak form of embodiment, does not provide any leverage for cognitive robotics with respect to the CRA, and (b) that unless they take the body into account seriously, hybrid bio-computer devices will also share the fate of their disembodied or robotic predecessors in failing to escape from Searle’s Chinese room.
In this paper I sketch a rough taxonomy of self-organization which may be of relevance in the study of cognitive and biological systems. I frame the problem both in terms of the language Heinz von Foerster used to formulate much of second-order cybernetics and in terms of the language of current theories of self-organization and complexity. In particular, I defend the position that, on the one hand, self-organization alone is not rich enough for our intended simulations and, on the other, that genetic selection in biology and symbolic representation in cognitive science alone leave out the very important (self-organizing) characteristics of particular embodiments of evolving and learning systems. I propose the acceptance of the full concept of symbol with its syntactic, semantic, and pragmatic dimensions. I argue that the syntax should be treated operationally in second-order cybernetics.