Asaro P. (2008) Computer als Modelle des Geistes. Über Simulation und das Gehirn als Modell des Designs von Computern. Österreichische Zeitschrift für Geschichtswissenschaften 19(4): 41–72. https://cepa.info/2310
The article considers the complexities of thinking about the computer as a model of the mind. It examines the computer as a model of the brain in several very different senses of “model.” On the one hand, the basic architecture of the first modern stored-program computers was “modeled on” the brain by John von Neumann, who also sought to build a mathematical model of the biological brain as a complex system. A similar but distinct approach to modeling the brain was taken by Alan Turing, who believed that the mind simply was a universal computer and who sought to show how brain-like networks could self-organize into Universal Turing Machines. On the other hand, Turing saw the computer as the universal machine that could simulate any other machine, and thus any particular human skill, and thereby could simulate human intelligence. This leads to a discussion of the nature of “simulation” and its relation to models and modeling. The article applies this analysis to a written correspondence between Ashby and Turing in which Turing urges Ashby to simulate his cybernetic Homeostat device on the ACE computer rather than build a special machine.
Asaro P. (2008) From mechanisms of adaptation to intelligence amplifiers: the philosophy of W. Ross Ashby. In: Husbands P., Holland O. & Wheeler M. (eds.) The mechanical mind in history. MIT Press, Cambridge MA: 149–184. https://cepa.info/2329
This chapter sketches an intellectual portrait of W. Ross Ashby’s thought from his earliest work on the mechanisms of intelligence in 1940 through the birth of what is now called artificial intelligence (AI), around 1956, and to the end of his career in 1972. It begins by examining his earliest published works on adaptation and equilibrium, and the conceptual structure of his notions of the mechanisms of control in biological systems. In particular, it assesses his conceptions of mechanism, equilibrium, stability, and the role of breakdown in achieving equilibrium. It then proceeds to his work on refining the concept of “intelligence,” on the possibility of the mechanical augmentation and amplification of human intelligence, and on how machines might be built that surpass human understanding in their capabilities. Finally, the chapter considers the significance of his philosophy and its role in cybernetic thought.
Taking its orientation from Peter Winch, this article critiques from a Wittgensteinian point of view some “theoreticist” tendencies within constructivism. At the heart of constructivism is the deeply Wittgensteinian idea that the world as we know and understand it is the product of human intelligence and interests. The usefulness of this idea can be vitiated by a failure to distinguish conceptual from empirical questions. I argue that such a failure characterises two influential constructivist theories, those of Ernst von Glasersfeld and David Bloor. These are considered in turn. Both theories seek to give a general, causal account of knowledge: von Glasersfeld’s in terms of cognitive subjectivity, Bloor’s in terms of social agreement. Ironically, given that both writers cite Wittgenstein as a source of theoretical inspiration, assumptions of both theories run counter to key Wittgensteinian arguments. To show that Wittgenstein’s views offer no solace to the realist, the article closes with a brief consideration of John Searle’s theory of knowledge.
Piaget J. (1978) What is psychology? American Psychologist 33: 648–652. https://cepa.info/5558
Five points are made: (1) Psychology is the science not only of the individual but also of humans in general. For example, mathematics and physics have been created by human beings, and this creation can be understood only in terms of human intelligence in its totality. (2) Psychology is a natural science, and, like every other science, it is built not only with what comes from the object but also with the structures constructed by the subject. (3) Psychology occupies a key position among the sciences because it explains the notions and operations used in the development of all the sciences. (4) It is impossible to dissociate psychology from epistemology. (5) Psychology, like all other sciences, can thrive only on interdisciplinary cooperation.
Ratcliffe M. (2012) There can be no cognitive science of dasein. In: Kiverstein J. & Wheeler M. (eds.) Heidegger and cognitive science. Palgrave Macmillan, Basingstoke: 135–156.
Excerpt: In this chapter, I will consider the prospects for a positive Heideggerian approach to cognitive science and artificial intelligence (AI). Hubert Dreyfus and others have of course developed and refined a highly effective Heideggerian critique of cognitive science over a number of years. However, this critique addresses the limitations of a particular way of doing cognitive science rather than cognitive science per se. The orthodox cognitive science of the 1960s and 1970s, aspects of which still linger on today, is charged by Dreyfus with misconstruing the nature of human intelligence in several interrelated ways. For instance, he argues that symbol manipulation cannot facilitate the pervasive practical know-how that is implicated in most, if not all, human thought and activity. Furthermore, classical AI fails to explain how human cognition succeeds in organising a vast, holistic body of knowledge so as to facilitate a grasp of relevance. Somehow, we manage to identify information that is relevant to our negotiation of novel and open-ended situations without running through absolutely everything we know in order to determine what is applicable and what is not. Dreyfus and his brother, Stuart Dreyfus, have also formulated a model of skill acquisition which challenges assumptions that are central to classical AI. According to their account, although we might begin to learn a new skill by employing an explicit rule, skill maturation does not consist in that rule gradually becoming implicit. Instead, explicit rules operate as a kind of scaffolding that is ultimately discarded and replaced by practical, bodily know-how. So possessing a skill is not, first and foremost, a matter of being able to explicitly or implicitly manipulate rules. Associated with this account of skill-learning is an emphasis on the role of the body in cognition. Much that we accomplish is not a matter of internal computation but of bodily activities embedded in appropriate kinds of environment.
Schneider S., Abdel-Fattah A., Angerer B. & Weber F. (2013) Model construction in general intelligence. In: Kühnberger K.-U., Rudolph S. & Wang P. (eds.) Proceedings of the AGI 2013. Springer, Berlin: 109–118. https://cepa.info/937
In this conceptual paper we propose a shift of perspective for parts of AI – from search to model construction. We claim that humans construct a model of a problem situation consisting of only a few, hierarchically structured elements. A model allows selective exploration of possible continuations and solutions, and for the flexible instantiation and adaptation of concepts. We underpin our claims with results from two protocol studies on problem-solving imagery and on the inductive learning of an algorithmic structure. We suggest that a fresh look into the small-scale construction processes humans execute would further ideas in categorization, analogy, concept formation, conceptual blending, and related fields of AI. Relevance: In accordance with von Glasersfeld’s postulate that knowledge is actively built up by the cognizing subject, the paper emphasizes the constructive nature of human intelligence. While problem space models in AI also partly reflect this, the metaphorical language of problem space search lends itself to epistemological misinterpretations.
Warwick K. (2009) The philosophy of W. Ross Ashby and its relationship to ‘The Matrix’. International Journal of General Systems 38(2): 239–253. https://cepa.info/4667
Ashby was a keen observer of the world around him, as reflected in his technological and psychiatric work. Over the years, he drew numerous philosophical conclusions on the nature of human intelligence and the operation of the brain, on artificial intelligence and the thinking ability of computers, and even on science in general. In this paper, the quite profound philosophy espoused by Ashby is considered as a whole, in particular in terms of its relationship with the world as it stands now and even in terms of scientific predictions of where things might lead. A meaningful comparison is made between Ashby’s comments and the science fiction concept of ‘The Matrix’, and serious consideration is given to how much Ashby’s ideas lay open the possibility of the matrix becoming a real-world eventuality.
Yoshikawa Y., Asada M. & Hosoda K. (2004) Towards imitation learning from a viewpoint of an internal observer. In: Iida F., Pfeifer R., Steels L. & Kuniyoshi Y. (eds.) Embodied artificial intelligence. Springer-Verlag, Heidelberg: 278–283.
How an internal observer, one not given any a priori knowledge or interpretation of what its sensors receive, can learn to imitate seems a formidable issue from the viewpoint of a constructivist approach, both for establishing the design principles for an intelligent robot and for understanding human intelligence. This paper addresses two issues concerning imitation by an internal observer: one is how to construct the robot’s self-body representation from vision and proprioception, and the other is how to construct a mapping of vocalization between agents with different articulation systems. Preliminary results with real robots are given.