Abraham T. H. (2003) Integrating Mind and Brain: Warren S. McCulloch, Cerebral Localization, and Experimental Epistemology. Endeavour 27(1): 32–38. https://cepa.info/2927
Recently, historians have focused on Warren S. McCulloch’s role in the cybernetics movement during the 1940s and 1950s, and his contributions to the development of computer science and communication theory. What has received less attention is McCulloch’s early work in neurophysiology, and its relationship to his philosophical quest for an ‘experimental epistemology’ – a physiological theory of knowledge. McCulloch’s early laboratory work during the 1930s addressed the problem of cerebral localization: localizing aspects of behaviour in the cerebral cortex of the brain. Most of this research was done with the Dutch neurophysiologist J. G. Dusser de Barenne at Yale University. The connection between McCulloch’s philosophical interests and his experimental work can be expressed as a search for a physiological a priori, an integrated mechanism of sensation.
Andrew A. M. (2005) Artificial neural nets and BCL. Kybernetes 34(1/2): 33–39.
Purpose: Attention is drawn to a principle of “significance feedback” in neural nets that was devised in the encouraging ambience of the Biological Computer Laboratory and is arguably fundamental to much of the subsequent practical application of artificial neural nets. Design/methodology/approach: The background against which the innovation was made is reviewed, as well as subsequent developments. It is emphasised that Heinz von Foerster and BCL made important contributions prior to their focus on second-order cybernetics. Findings: The version of “significance feedback” denoted by “backpropagation of error” has found numerous applications, but in a restricted field, and the relevance to biology is uncertain. Practical implications: Ways in which the principle might be extended are discussed, including attention to structural changes in networks, and extension of the field of application to include conceptual processing. Originality/value: The original work was 40 years ago, but indications are given of questions that are still unanswered and avenues yet to be explored, some of them indicated by reference to intelligence as “fractal.”
Asaro P. (2007) Heinz von Foerster and the bio-computing movements of the 1960s. In: Müller A. & Müller K. H. (eds.) An unfinished revolution? Heinz von Foerster and the Biological Computer Laboratory, BCL, 1959–1976. Edition Echoraum, Vienna: 253–275. https://cepa.info/6625
Excerpt: As I read the cybernetic literature, I became intrigued that, as an approach to the mind often described as a predecessor to AI, cybernetics had a much more sophisticated approach to mind than its purported successor. I was soon led to Prof. Herbert Brün’s seminar in experimental composition, and to the archives of the Biological Computer Laboratory (BCL) in the basement of the University of Illinois library. Since then, I have been trying to come to terms with what it was that was so special about the BCL, what allowed it to produce such interesting ideas and projects, which seem alien and exotic in comparison to what mainstream AI and Cognitive Science produced in the same era. And yet, despite its appealing philosophical depth and technological novelty, it seems to have been largely ignored or forgotten by mainstream research in these areas. I believe that these are the same concerns that many of the authors of the recent issue of Cybernetics and Human Knowing (Brier & Glanville, 2003) express in regard to the legacy of von Foerster and the BCL. How could such an interesting place, full of interesting things and ideas, have just disappeared and been largely forgotten, even in its own home town?
Asaro P. (2008) Computer als Modelle des Geistes. Über Simulation und das Gehirn als Modell des Designs von Computern. Österreichische Zeitschrift für Geschichtswissenschaften 19(4): 41–72. https://cepa.info/2310
The article considers the complexities of thinking about the computer as a model of the mind. It examines the computer as being a model of the brain in several very different senses of “model”. On the one hand, the basic architecture of the first modern stored-program computers was “modeled on” the brain by John von Neumann. Von Neumann also sought to build a mathematical model of the biological brain as a complex system. A similar but different approach to modeling the brain was taken by Alan Turing, who on the one hand believed that the mind simply was a universal computer, and who sought to show how brain-like networks could self-organize into Universal Turing Machines. On the other hand, Turing saw the computer as the universal machine that could simulate any other machine, and thus any particular human skill, and thereby could simulate human intelligence. This leads to a discussion of the nature of “simulation” and its relation to models and modeling. The article applies this analysis to a written correspondence between Ashby and Turing in which Turing urges Ashby to simulate his cybernetic Homeostat device on the ACE computer, rather than build a special machine.
Asaro P. (2008) From mechanisms of adaptation to intelligence amplifiers: the philosophy of W. Ross Ashby. In: Husbands P., Holland O. & Wheeler M. (eds.) The mechanical mind in history. MIT Press, Cambridge MA: 149–184. https://cepa.info/2329
This chapter sketches an intellectual portrait of W. Ross Ashby’s thought from his earliest work on the mechanisms of intelligence in 1940 through the birth of what is now called artificial intelligence (AI), around 1956, and to the end of his career in 1972. It begins by examining his earliest published works on adaptation and equilibrium, and the conceptual structure of his notions of the mechanisms of control in biological systems. In particular, it assesses his conceptions of mechanism, equilibrium, stability, and the role of breakdown in achieving equilibrium. It then proceeds to his work on refining the concept of “intelligence,” on the possibility of the mechanical augmentation and amplification of human intelligence, and on how machines might be built that surpass human understanding in their capabilities. Finally, the chapter considers the significance of his philosophy and its role in cybernetic thought.
This dissertation reconsiders the nature of scientific models through an historical study of the development of electronic models of the brain by Cybernetics researchers in the 1940s. By examining how these unique models were used in the brain sciences, it develops the concept of a “working model” for the brain sciences. Working models differ from theoretical models in that they are subject to manipulation and interactive experimentation, i.e., they are themselves objects of study and part of material culture. While these electronic brains are often disparaged by historians as toys and publicity stunts, I argue that they mediated between physiological theories of neurons and psychological theories of behavior so as to leverage their compelling material performances against the lack of observational data and sparse theoretical connections between neurology and psychology. I further argue that working models might be used by cognitive science to better understand how the brain develops performative representations of the world.
Asaro P. M. (2009) Information and regulation in robots, perception and consciousness: Ashby’s embodied minds. International Journal of General Systems 38(2): 111–128. https://cepa.info/348
This article considers W. Ross Ashby’s ideas on the nature of embodied minds, as articulated in the last five years of his career. In particular, it attempts to connect his ideas to later work by others in robotics, perception and consciousness. While it is difficult to measure his direct influence on this work, the conceptual links are deep. Moreover, Ashby provides a comprehensive view of the embodied mind, which connects these areas. It concludes that the contemporary fields of situated robotics, ecological perception, and the neural mechanisms of consciousness might all benefit from a reconsideration of Ashby’s later writings.
Ashby W. R. (1962) Principles of the self-organizing system. In: Foerster H. von & Zopf Jr. G. W. (eds.) Principles of self-organization. Pergamon Press, New York: 255–278. https://cepa.info/4372
Questions of principle are sometimes regarded as too unpractical to be important, but I suggest that that is certainly not the case in our subject. The range of phenomena that we have to deal with is so broad that, were it to be dealt with wholly at the technological or practical level, we would be defeated by the sheer quantity and complexity of it. The total range can be handled only piecemeal; among the pieces are those homomorphisms of the complex whole that we call “abstract theory” or “general principles.” They alone give the bird’s-eye view that enables us to move about in this vast field without losing our bearings. I propose, then, to attempt such a bird’s-eye survey.
Ashby W. R. (1967) The place of the brain in the natural world. Currents in Modern Biology 1: 95–104.
A great deal is known already about the brain, but most of our knowledge of it is still in the form of experimental and observational facts. With the growing interest in the brain’s more general properties, however, such as in “artificial intelligence” in its various forms, the time has come for an abstract formulation of the nature of “brain,” a formulation suitable for a direct translation to the computer or hardware. The paper gives such a formulation on the basis of set theory and the concept of the state-determined system.
Ashby W. R. (1972) Setting goals in cybernetic systems. In: Robinson H. W. & Knight D. E. (eds.) Cybernetics, artificial intelligence and ecology. Spartan Books, New York NY: 33–44.