

Building the Second Mind: 1956 and

the Origins of Artificial Intelligence Computing

Rebecca E. Skinner

Smashwords Edition

Copyright 2012 by Rebecca E. Skinner

This ebook is licensed for your personal enjoyment only. This ebook may not be re-sold or given away to other people. If you would like to share this book with another person, please purchase an additional copy for each recipient. If you’re reading this book and did not purchase it, or it was not purchased for your use only, then please return to Smashwords.com and purchase your own copy. Thank you for respecting the hard work of this author.

This is the Beta edition of the first of three books of the Building the Second Mind history of AI. It does not contain illustrations, live links, or an index, and footnotes are referenced in parentheses in the text.

The author expressly requests feedback from readers, for the revised edition to follow. Please respond to reskinnerbuildingthesecondmind@.

Table of Contents

Chapter .5. Preface

Chapter 1. Introduction

Chapter 2. The Ether of Ideas in the Thirties and the War Years

Chapter 3. The New World and the New Generation in the Forties

Chapter 4. The Practice of Physical Automata

Chapter 5. Von Neumann, Turing, and Abstract Automata

Chapter 6. Chess, Checkers, and Games

Chapter 7. The Impasse at End of Cybernetic Interlude

Chapter 8. Newell, Shaw, and Simon at the Rand Corporation

Chapter 9. The Declaration of AI at the Dartmouth Conference

Chapter 10. The Inexorable Path of Newell and Simon

Chapter 11. McCarthy and Minsky begin Research at MIT

Chapter 12. AI’s Detractors Before its Time

Chapter 13. Conclusion: Another Pregnant Pause

Chapter 14. Acknowledgements

Chapter 15. Bibliography

Chapter 16. Endnotes

Chapter .5. Preface

Introduction

Building the Second Mind: 1956 and the Origins of Artificial Intelligence Computing is a history of the origins of AI. AI, the field that seeks to do things that would be considered intelligent if a human being did them, embodies a universal of human thought, developed over centuries. Various efforts to carry this out have appeared in the forms of robotic machinery and of more abstract tools and systems of symbols intended to artificially contrive knowledge. The latter sounds like alchemy, and in a sense it certainly is. There is no gold more precious than knowledge. That this is a constant historical dream, deeply rooted in the human experience, is not in doubt. However, it was not more than a dream until the machinery that could put it into effect was relatively cheap, robust, and available for ongoing experimentation.

The digital computer was invented during the years leading to and including the Second World War, and AI became a tangible possibility. Software that used symbols to enact the steps of problem-solving could be designed and executed. However, envisioning our possibilities when they are in front of us is often a more formidable challenge than bringing about their material reality. AI in the general sense of intelligence cultivated through computing had also been discussed with increasing confidence through the early 1950s. As we will see, bringing it into reality as a concept took repeated hints, fits, and starts until it finally appeared as such in 1956.

Our story is an intellectual saga with several supporting leads, a large peripheral cast, and the giant sweep of Postwar history in the backdrop. There is no single ‘great man’ in this opus. As far as the foundation of AI is concerned, all of the founders were great. Even the peripheral cast was composed of people who were major figures in other fields. Nor, frankly, is there a villain either.

Themes and Thesis

The book tells the story of the development of the cognitive approach to psychology, of computer science (software), and of software that undertook to do ‘intelligent’ things during mid-century. To this end, I study the early development of computing and psychology in the middle decades of the century, ideas about ‘Giant Brains’, and the formation of the field of study known as AI.

Why did this particular culture spring out of this petri dish, at this time? In addition to ‘why’, I consider the accompanying where, how, and who. This work is expository: I am concerned with the enrichment of the historical record. Notwithstanding the focus on the story, the author of necessity participates in the thematic concerns of historians of computer science. Several themes draw our attention.

The role of the military in the initial birth and later development of the computer and its ancillary technologies should not be erased, eroded, or diminished. Make no mistake: war is abhorrent. But sustained nation-building and military drives can yield staggering technological advances. War is a powerful driver of technological innovations (1). This is particularly the case with the development of ‘general-purpose technologies’, that is, those which present an entirely new way of processing material or information (2). These technologies of necessity create new industries and destroy old ones, transforming the means of locomotion, the generation of energy, and the processing of information (steel rather than iron, the book, the electric generator, the automobile, the digital computer). In the process, these fundamental technologies bring about new forms of communication, cultural activities, and numerous ancillary industries. We repeat, for effect: AI is the progeny of the Second World War, as is the digital computer, the microwave oven, the transistor radio and portable music devices, desktop and laptop computers, cellular telephones, the iPod, iPad, computer graphics, and thousands of software applications.

The theory of the Cold War’s creative power and fell hand in shaping the Computer Revolution is prevalent in the current academic discourse on this topic. Much of the debate is subtle, and I will not comment except in general agreement (3).

The role of the Counterculture in creating the Computer Revolution is affectively appealing. In its strongest form, this theory holds that revolutionary hackers created most, if not all, of the astonishing inventions in computer applications (4). For many of the computer applications of the Sixties and Seventies, including games, software systems, security and vision, this theory holds a good deal of force. However, the period under discussion in this book refutes the larger statement. The thesis has its chronology backwards. The appearance of the culturally revolutionary ‘t-shirts’ was preceded by a decade and a half of hardware, systems, and software language work by the culturally conservative ‘white-shirts’. (Throughout the Fifties, IBM insisted on a dress code of white shirts, worn by white Protestant men) (5).

Yet there is one way in which those who lean heavily on the cultural aspect of the Computer Revolution are absolutely correct. An appropriate and encouraging organizational culture was also essential to the development of the computer in every aspect, along the course of the entire chronology. This study emphasizes instead the odd mélange of a number of different institutional contexts in computing, and how they came together in one general endeavor. AI in its origins started with individual insights and projects, rather than being cultivated in any single research laboratory. The establishment of AI preceded its social construction. We could say that AI’s initial phase as revolutionary science (or exogenous shock, in the economist’s terms) preceded its institutionalization and the establishment of an overall “ecology of knowledge” (6). However, once AI was started, it too relied heavily on its institutional settings. In turn, the founders of AI established and cultivated research environments that would continue to foster innovation. The cultivation of such an environment is more evident in the later history of AI than in the tentative movements of the 1950s.

Yet another salient theme of this work is the sheer audacity of the paradigm transition that AI itself entailed. The larger manner of thinking about intelligence as a highly tangible quality, and about thinking as something that could have qualitative aspects to it, required a vast change between the late 1930s and the mid-1950s. As with any such alteration of focus, this required the new agenda to be made visible- and envisioned while it still seemed like an extreme and far-fetched concept (7).

A final theme is the larger role of AI in the history of the Twentieth century. It is certainly true that the Cold War’s scientific and research environment was a ‘Closed World’ in which an intense, intellectually charged, politically obsessive culture thrived. The stakes involved in the Cold War itself were the highest possible ones; the intensity of its inner circles is thus no surprise. However, in this case, the larger cultural themes of the Twentieth century had created their own “closed world”. Between them, Marxian political philosophy and Freudian influence on culture had robbed the arts, politics and literature of their vitality. This elite literary ‘Closed World’ saw science and technology as aesthetically unappealing and inevitably hijacked by the political forces that funded research. The resolution of the Cold War, and the transformation of the economy and ultimately of culture by the popularization of computing, would not take place for decades. Yet the overcoming of the cultural impasse of the Twentieth century would be a long-latent theme in which AI and computing would later play a part (8).

Outline of the Text

In Building the Second Mind: 1956 and the Origins of Artificial Intelligence Computing, we examine the way in which AI was formed at its start, its originators, the world they lived in and how they chose this unique path, the computers that they used and the larger cultural beliefs about those machines, and the context in which they managed to find both the will and the way to achieve this. 1956 was the tipping point, rather than the turning point, for this entry into an alternative way of seeing the world. Our chapter outline indicates the line the book follows.

The Introduction and Conclusion chapters frame the book, discuss history outside of the time frame covered in BTSM, and establish AI as a constant in world intellectual history (Chapter One). The other chapters are narrative, historical, and written within the context of their time.

The intellectual environment of the Thirties and the war years is foreign to us. It lacked computers, and likewise lacked any sort of computational metaphor for intelligence. Studies of intelligence without any real reference to psychology nevertheless abounded (Chapter Two). Cognitive psychology was not a topic of academic study through the first two quarters of the twentieth century. Yet intelligent processes and learning were discussed in multifold ways. Formal logic developed symbolic languages for the representation of the real world. These languages were not yet computational. Meanwhile, information theory, which concerned the integrity of transmission of electrical signals, was invented by Claude Shannon, Warren Weaver, and other engineers. The two had not yet been joined in the implementation of computer languages- but this was a development that could have been predicted.

Revisiting the late 1940s, one is struck by the sheer foreignness of the environment (Chapter Three). The political environment, with the ominous Cold War between the Soviet Union and the United States, and practically every other country damaged by the Second World War, seems firmly in another century. The financial and academic anemia of the 1930s gave way to the wartime research programs of the 1940s, which brought with them opportunities for a new generation. The universities expanded, and many new research institutions were established. Moreover, the ongoing development of the computer and other technologies that benefitted from the Cold War offered opportunities unimaginable before 1945. The generation that benefitted from these new circumstances, too young to have served in the Second World War or to have had their personal histories and careers marred by the Depression, was indeed fortunate in the timing of their lives. The leaders of AI for its first decades- John McCarthy, Marvin Lee Minsky, Allen Newell, and Herbert Alexander Simon- were uniquely favored by history. These four, and their larger cohort, benefitted as well from the increasingly open intellectual environment of the Cold War, as professional societies, the universities, research institutes, and even popular media all encouraged the discussion of computing, intelligence and its emulation (Chapter Four).

Continually repressed by the state of academic psychology, the study of intelligence further made its appearance in the design of intelligent beings other than robotic artifacts (Chapter Five). Singular minds such as John Von Neumann and Alan Turing proposed intelligent automata, which would be binary coded programs that could carry out computable functions (i.e., equations). Von Neumann’s discussion of automata, and the work of Warren McCulloch and Walter Pitts, suggested that these proposed creations be made of biological matter- essentially A-Life before its time. Turing also designed the eponymous Turing Test, a means of determining whether a given machine was actually intelligent. Both Turing and Von Neumann died before AI had advanced very far; however, they greatly influenced their generation in general and Minsky and McCarthy in particular.

If electric robotic automata were one prevalent form of the emulation of intelligence throughout the 20th century, games were another, and often represented the higher ground for such representation (Chapter Six). Chess presents too large a search space for undirected ‘blind’ search, so it immediately pushed the early users of computers toward strategy. Claude Shannon used the gedankenexperiment of chess as a backdrop for the visualization of problem-solving as an upside-down tree, the branching points and branches of which can be portrayed as positions and moves respectively. Other computer scientists- at Los Alamos and at the National Bureau of Standards- as well as the ever-busy Newell and Simon and their colleague Clifford Shaw, began to work on games, often as a lark when the workday at the computer was over. Finally, IBM programmer Arthur Samuel developed a checkers-playing computer program that was so popular with the media that IBM sent him to Europe to try to hush the attention.
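
Shannon’s image translates directly into a data structure and a backing-up procedure. What follows is a minimal sketch in Python, not Shannon’s own chess program: the position labels and leaf scores are invented for illustration. Positions sit at the branching points, moves label the branches, and leaf evaluations are backed up through the tree by minimax.

# A branching point holds a position; each branch out of it is a move.
class Node:
    def __init__(self, position, score=None):
        self.position = position   # the board position (here, just a label)
        self.score = score         # static evaluation, for leaves only
        self.children = {}         # move label -> resulting Node

    def add(self, move, child):
        self.children[move] = child
        return child

def minimax(node, maximizing=True):
    # Back leaf evaluations up the tree, alternating max and min.
    if not node.children:
        return node.score
    values = [minimax(child, not maximizing) for child in node.children.values()]
    return max(values) if maximizing else min(values)

# A tiny two-ply example with invented positions and scores.
root = Node("start")
a = root.add("move-a", Node("pos-a"))
b = root.add("move-b", Node("pos-b"))
a.add("reply-1", Node("pos-a1", score=3))
a.add("reply-2", Node("pos-a2", score=-1))
b.add("reply-1", Node("pos-b1", score=0))

print(minimax(root))   # prints 0 for this toy tree

Run as written, the sketch prints 0, the value backed up to the starting position of this toy tree; a real chess program differs chiefly in the scale of the tree and the quality of the evaluations.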

Early in the Fifties, ideas began to advance far ahead of technological expression. The grand automata envisioned by Von Neumann, Turing, Warren Weaver and others could not be realized practically. Computers worked very poorly; there was no operating software, and all programs had to be written by hand and executed slowly during limited hours. The earliest digital computers were initially characterized in the popular media as dangerous and preposterous machines that were visually akin to ‘giant brains’ on legs. There was scarcely any terminology for any aspect of human intelligence. Cognitive psychology and its close studies of conventions in thought processes were finally initiated, but existed on a very small scale. Cybernetics was stalled, dominated by Norbert Wiener’s increasingly maudlin statements as to the fear of nuclear war. Technological advances were required to dispel the impasse through ongoing progress on all fronts (Chapter Seven).

In Chapter Eight, we will bring in the essential role of the Rand Corporation in providing an amenable petri dish for some of the earliest AI programs. The Rand Corporation in Santa Monica, the think tank spun off from Air Force research after the Second World War, became the location of choice for Cold Warriors discussing nuclear war strategies during the Fifties and Sixties. Rand boasted a rare digital computer, used for calculations of war games. Herbert Simon, later joined by Allen Newell, began to consult there in 1952. Clifford Shaw, a programmer at Rand, worked with them to develop the Logic Theorist. Using Rand’s Johnniac computer, they devised this program, which was given logic theorems to prove. Newell and Simon presented the program’s results at the Dartmouth Conference in 1956.

The Dartmouth Summer Conference, held in 1956 at Dartmouth College in Hanover, New Hampshire, brought together the significant participants in AI during its first two decades (Chapter Nine). This was the tipping point at which AI was established as such; at which the name ‘artificial intelligence’ was first widely used; and at which the general trends and differences were clarified. It is also the moment at which Newell, Shaw, and Simon’s prospectively more felicitous term of ‘complex information processing’ was rejected. However, the clarification of AI as a common effort established the founders as a group with common beliefs about the possibility of AI itself.

The Dartmouth Conference did not change the research orientation of any of its participants, but it did establish the concept and name of AI, and present it to a distinguished cohort of the computer and Cybernetics milieu. Over the next several years, the Cybernetic agenda, with its biological metaphors for intelligence, remained prevalent. While it would eventually dwindle simply due to its lack of connection to a scientific research paradigm, its dissipation would take a number of years. The next two chapters examine the progress of AI during the remainder of the 1950s and into the first year of the 1960s. At Carnegie Tech, Newell and Simon continued on their inexorable path, using chess-playing and the ambitiously named General Problem Solver program as a way to establish a vocabulary for cogitation (Chapter Ten). The latter was impressive but certainly not what it claimed to be; still, AI has done well when it has aimed for big things, even if those things were not achieved immediately.

The next chapter follows McCarthy and Minsky (Chapter Eleven). Working at Dartmouth, then at MIT for several years, and finally settling at Stanford, John McCarthy made foundational contributions to the field in the form of the LISP computer language and the invention of timesharing. At MIT, Marvin Minsky worked at the Research Laboratory of Electronics, and then joined the Mathematics department in 1958. He and McCarthy established the Artificial Intelligence Project the same year, and began to gather an eager undergraduate following of student hackers, who initiated research in visual display, computer games, and graphics.

Like any audacious idea, AI attracted detractors who asserted that it was impossible, lacking in sensitivity toward intelligence itself, and overly audacious (Chapter Twelve). Once AI was actually an extant aspiration, it garnered much more bile and publicity than the detractors desired. During the first several years, the most prominent and persistent of the detractors appeared and began criticizing the field on the grounds that AI could not grasp the phenomenological nature of human perception. This is entirely true of early AI- even truncated cogitation is difficult to embody- but the extremely negative tone meant that it was not constructive criticism.

As our story ends at the conclusion of the 1950s, AI had gained a foothold in the major institutions which would nurture it over the next decades (Chapter Thirteen). Its founders had their research funded, and the early major achievements were extant. The intellectual orientation that demands the use of computer programs to explore cognition was well-established. Gradually but surely, this paradigm was supplanting a cybernetic orientation which took its cues from engineering.

Chapter 1. Introduction: The Prehistory of AI

The Conceptual Watershed of the Dartmouth Conference

In the summer of 1956, roughly two dozen men gathered at the bucolic campus of Dartmouth College in Hanover, New Hampshire for a conference designated ‘the Dartmouth Summer Research Project on Artificial Intelligence.’ Dartmouth professor John McCarthy, who had initially suggested the conference, had explained in the conference’s original proposal:

" A two-month, ten-man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth in Hanover, N.H. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it" (9).

This was a strange and novel concept at this time- on a par with the proposal of a round Earth, a united nations, universal or female suffrage, the abolition of slavery, the separation of church and state, evolution through genetic mutation and natural selection, or plate tectonics, in their respective times. The possibility of computers simulating thinking was outside the boundaries of their stipulated tasks. These consisted primarily of adding large rows of numbers for ballistic, actuarial, and meteorological equations and the like. The computer of the future had even been portrayed as a giant brain on legs- a monstrous and weird image with no practical import. The idea of computers that engaged in anthropomorphically appealing, ‘intelligent’ activities was as immediately probable and appealing to common wisdom as suggesting a portable or household nuclear reactor to fill one’s heating and electrical needs. One of the world’s foremost computing authorities had stated just a few years earlier that the world market for computers would never exceed a half-dozen (10). In this context, it is easy to see why AI lacked not only the respect of many computer professionals, but also obvious, immediate clues as to how to proceed.

Today, computing applications number in the hundreds of thousands, instantaneous wireless connectivity is taken for granted, and computers and PDAs are ubiquitous in the lives of every middle-class person on the planet. But in 1956, the world was unimaginably different- computationally speaking. Computing was slow, ponderous, and involved input of both programs and data through paper cards or even paper tape, and weak and fallible magnetic core memory. The number of computers in the world could almost be counted precisely, because they were all owned by governments or major corporations or research centers (11). Storage took place on magnetic tape (a rare computing and storage medium that persists even today). There were no operating systems, no systems or commercial applications software, no computing application stores, wireless connectivity, tech support (online or on the phone), Apple Stores, Geek Squad, no online world at all.

A plethora of weaknesses, including lack of choice of vendor or end product, inflexibility of use, lack of universality, and slow progress in consumer product development, characterized information technology itself. Clunky, rotary-dialed telephones were attached to the wall by wires. Phone calls, especially long-distance, were expensive and often had to be arranged with a live operator. Poor-quality mimeographs producing smudged purple copies proliferated, as did telegraphy for terse, important, long-distance messages. Even robust electric typewriters did not exist: IBM did not introduce its iconic Selectric model until 1961. A “computer” was sometimes still understood in its archaic sense- as a person, typically a woman, who conducted mathematical operations using pencil and paper. Machine-readable (MICR) numbers still had not been invented; penmanship was an important topic in primary education. Handwriting forgery was a serious forensic concern. In an environment in which data flowed over electronic networks with such expense and difficulty, the fluid movement of data in digital form was barely conceivable.

The introduction and actual implementation of an idea that was practically speaking before its time- and which always had been- required a will and a way. The way was present in the reality of the general-purpose digital computer, invented barely a decade earlier and under increasingly intense development during the entire second half of the Twentieth century. The will required more historical serendipity, in the form of several brilliant- but extremely different- men who became the field’s founders.

John McCarthy invented and developed the first widely-used list-processing language, which was essential to the clear expression of AI’s concepts. Marvin Minsky is known for the concept of the society of mind, which sees intelligence as the product of numerous agencies, or independent capacities. He is also the AI founder best-known to the general public, and the most involved with robotics and artificial vision. Allen Newell and Herbert Simon generally approached AI with a concern for its contributions to cognitive psychology. Newell and Simon, working with programmer Cliff Shaw, also produced the earliest AI programs.

Nothing in the academic environment or the world at large suggested that this enterprise would realize any of its goals soon. No matter; the conceptual watershed reached by this group’s simple insistence on inquiry into the idea itself mattered enormously.

This book tells the story of the establishment of AI by a handful of people during the 1950s. Both the larger historical environment and the staggering ingenuity of several people led to this idea being one that could finally be realized, after centuries as a dream.

The Present in the Past: Origins in AI’s Antiquity

“ Like the old woman in the story who described the world as resting on a rock, and then explained that rock to be supported by another rock, and finally when pushed with questions said it was rocks all the way down- he who believes this to be a radically moral universe must hold the moral order to rest either on an absolute and ultimate should, or on a series of shoulds all the way down” (12).

In looking for the origins of AI, it is tempting to say that it has been with humanity forever- or rocks or turtles or elephants “all the way down”. The idea of understanding intelligence systematically and emulating it in machinery- or in biology, in the form of human-like automata- is ancient rather than recent. AI is a new science with an ancient heritage in philosophy and automata. Establishing the timelessness of its origins rebuts the charge that its aspirations were merely the delusions of their time.

The Ancient Greeks and the Earliest Cognitive Sciences

AI’s first conceptual precedents are found in Athenian Greece in the sixth century B.C.E. Attic Greece was not much concerned with numbers per se, but was fertile with other aspects of understanding intelligence and ideas of the mind. This society produced the earliest statements of the reality of abstract ideas; the first geometric proofs and efforts at logical forms for argumentation; and a rudimentary theory of the mind. Plato (429-347 B.C.E.) drew out the idea of Platonic absolutes, or absolute forms of all material objects, in The Dialogues, a series of confrontational conversations with his teacher, Socrates. This was the first effort at knowledge representation, or clear ways to speak about different sorts of ideas. Moreover, Plato’s early proofs seem to be efforts at formalized problem-solving. The work of Plato’s student, Aristotle (384-322 B.C.E.), features a systematic search for answers to questions of natural science. This approaches an early anticipation of the initial state and the goal state, or what has come to be known in AI as search through all sorts of problem spaces.

If the reality of abstractions is central, so is the necessity of establishing rules or some other format for approaching problem-solving or the representation of concepts. The concept of protocols was first touched upon with the Attic Greek concept of heuristics- defined by the mid-20th century mathematician George Polya as “an adjective, [which] means serving to discover” (13). Finally, the mathematician Euclid’s representation of geometrical figures in imagined space became foundational to the field of geometry (14).

Automata, Mechanical and Biological

Automaton: “a mechanism that is relatively self-operating, especially robot; a machine or control mechanism designed to follow automatically a predetermined sequence of operations or respond to encoded instructions.” (Webster’s Ninth New Collegiate Dictionary).

Recreating a human being has been one of history’s most persistent nostalgias. The word is derived from the Latin, in turn based on the Ancient Greek automatos, referring to a machine with the ability to move by its own force. Biological automata are universal. Purported recreation of the human body and intelligence is prefigured in the Babylonian Epic of Gilgamesh, and in the Hebrew conceptualization of humans as made by God from the earth. Mechanical automata, and prosthetics with automata-like features, abound in ancient and especially Greek myth and practice. According to legend, the Olympian god and blacksmith Hephaestus fashioned automata, as well as Achilles’ shield. The Greek inventor and engineer Heron of Alexandria wrote a treatise on automata; such devices were often cleverly set in motion by falling water, heat, or atmospheric pressure (15). According to Greek mythology, the inventor Daedalus crafted wings of feathers and wax to enable himself and his son Icarus to try to escape Crete, after Daedalus had built the Minotaur’s labyrinth there. Icarus’ wings melted when he flew too close to the sun. Daedalus fashioned automata as well as prostheses; he is said to have built a bronze machine, Talos, to guard Crete.

The Golem

The medieval legend of the Golem, a man-like figure formed of earth and called into animation by the invocation of the name of the Lord (i.e., a schem), was a late-medieval essay in alchemy. According to mystical tradition, the Golem was created by Rabbi Loew of seventeenth-century Prague to protect the Jews during periods of persecution. As the Hebrew word Adam (‘from the earth’) itself states, man is made of earth, as is the Golem. But the Golem legend emphasizes the singularity of the Divine in the ability to create life, as the Golem is mis-shapen where mankind is not. The idea has echoed in every subsequent portrayal of anthropomorphic robots, most notably in Mary Shelley’s Frankenstein in the 19th century. The concept of the Golem has apparently been highly attractive to AI’s founders as well: John Von Neumann, Norbert Wiener, and Marvin Minsky all asserted their direct descent from Rabbi Loew (16).

The Ars Magna

The Golem was a legend, albeit one with considerable and lasting psychic reality. During the Enlightenment, efforts to conduct mathematical operations with machines would appear, as would the idea of conducting such operations to systematically produce ideas. Curiously, the earliest effort to generate logical statements systematically using a machine appeared far earlier.

The Ars Magna (or Great Art) was invented in the Thirteenth century by the pre-Reconquista Catalan theologian Raymond Lull (1232-1315). It was a tool which could generate all combinations of a limited number of axiomatic principles, or concepts which were “true.” These ‘true’ axioms were the patently desirable Catholic virtues or divine attributes- goodness, greatness, eternity, and so on. Lull’s invention was a wheel made up of two or more concentric circles. Each circle contained the fourteen accepted divine attributes. By rotating the circles, every potential combination of factors- that is, one hundred and ninety-six twofold combinations- could be generated. Each of these elements could be combined with every other element to produce an exhaustive inventory of all true statements. The generation of combinations was syntactic rather than heuristic. All of the combinations, and not some selected or constrained result, were presented. The device avoided the eternal computational problem of a combinatorial explosion simply because of the very small number of inputs. Lull constructed similar tools for studying the seven deadly sins and other theological artifacts (17).
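
The arithmetic here is simple enough to spell out: two circles of fourteen attributes each yield fourteen times fourteen, or one hundred and ninety-six, ordered pairings, produced exhaustively rather than selectively. A brief sketch in Python- the attribute list is a placeholder, since only the first three names appear in the text- makes the purely syntactic character of the device plain.

from itertools import product

# Placeholder list of fourteen attributes; only the first three names
# come from the text, the rest are stand-ins for Lull's remaining terms.
attributes = ["goodness", "greatness", "eternity"] + [
    "attribute-%d" % i for i in range(4, 15)
]
assert len(attributes) == 14

# Rotating one circle against the other enumerates every ordered pair.
# There is no selection, weighting, or heuristic pruning of any kind.
pairs = list(product(attributes, repeat=2))
print(len(pairs))   # 196 twofold combinations, as the text notes

With fourteen inputs the full enumeration is trivial; the combinatorial explosion mentioned above arrives only when the number of elements, or the depth of their combination, grows.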

Because the Ars Magna presented different permutations rather than any novel knowledge, it was a pseudo-computational device. Thus it epitomized rather than transcended the Medieval dovetailing of theology and science. Notwithstanding this, it was the very first effort to think systematically using technology.

The Concept of Symbolic Languages and Thinking as Symbol Processing

The first person to envision symbolic computing, the ultimate sine qua non for AI, was Gottfried Wilhelm Leibniz (1646-1716), inventor of calculus (along with Newton) and of the Stepped Reckoner calculator. One of Leibniz’ most intriguing theories is that of monads, atomic bits which express tiny aspects of given philosophical principles. This concept offers the idea of manipulating symbols systematically. Like the English philosopher Thomas Hobbes decades earlier, who had declared much more briefly that ‘all thinking is but ratiocination’, Leibniz lacked practical digital computing and computer languages to demonstrate his obvious grasp of the concept.

In the absence of any objective economic need for a computer, the hints that both Leibniz and Hobbes proffered as to symbol processing remained as philosophy rather than computing. AI needed general-purpose digital computing, which was obviously far from existence before the Twentieth century.

Computing, Practical rather than Symbolic

The digital computing lineage, involving the manipulation of information through binary codes made up of rudimentary items such as punch cards, was intermittent. The practical lineage of computing, which did not intersect with Enlightenment philosophy, is just that- practical. Computing in the sense of processing large volumes of information mechanically was actually born of a need to weave cloth and to figure out rows of numbers for businesses and governments- that is, economic need, not intellectual curiosity. If we think of computers as tools to handle the raw manipulation of data itself, then we see that the history of computing originates much earlier than anyone might anticipate. Weaving patterns are the primeval form of programming. Paper or cloth markers provided the first mechanical means for processing (relatively) large volumes of information in the form of rows of thread. Punched cards started with the need for “easy storage of large amounts of information to be read not visually but mechanically” (18). Punched card usage was perfected by Joseph Marie Jacquard (1752-1834), who created an automatic version of the process for storing woven textile patterns on punched paper cards. Jacquard was further inspired by another French inventor, Gaspard de Prony, who found the idea of dividing large calculations amongst groups of less-educated workers in Adam Smith’s division of labor (19).

The punched card form of storage was so successful that both Charles Babbage, designer of the first general-purpose computer, and Herman Hollerith, the American engineer whose tabulating machine company eventually became part of IBM, used programs and data stored on lightweight cardboard pieces. On the cards used by Hollerith, punched and unpunched sections conveyed differing light signals (20).

We can trace a direct, if broken, line from Babbage to the first digital computer in the mid-1940s. British inventor Charles Babbage (1791-1871) was the first person who tried to build a general-purpose digital computer- that is, one in which information is held as discrete digits rather than as continuously varying quantities, and which is therefore immensely flexible. He nearly succeeded. His first computer, the Difference Engine, was a design for calculations to solve large polynomial equations. It was not built, for reasons which are best summarized as political and managerial, rather than technical. Working with Lady Ada Lovelace, who envisioned computer programs to run on computers, he designed a further machine that would sort cards according to coded signals. Babbage's second design for a computing machine, to be called the Analytical Engine, was also never built, again more for want of management skills than because of technical impediments (21).

If the idea of digital computers continued to fascinate, so did the concept of automata. The art of automata had fallen latent in the European Middle Ages, and revived during the Renaissance. The scholar Roger Bacon (1214-1292) is said to have fashioned automata as well. Leonardo da Vinci thought of animals as complicated mechanical systems, that is, essentially automatons. The field of mechanical automata flourished in the Early Modern Era, as fantastic figures intended as parlor games for the aristocracy were much in demand (22). These were, like Babbage’s work, luxuries and experiments. Despite Babbage's vision, there was no vital economic need for digital computers in the mid-19th century. They had been glimpsed, albeit repeatedly, before their time. It was not so for the analogue side of computing. Analogue computers measure continuous quantities rather than manipulating abstractions, and are therefore well-suited to practical mathematical work; measurements of continuous functions (as in the mercury thermometer, for instance) fit within this line of activity. In the Twentieth century, computers turned from aristocratic parlor games into working accounting devices indispensable for industry, as the digital line was finally taken up again with the ENIAC project during the Second World War (23).

Conclusion

The thematic effort of AI in its formation was “the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." Having said this, we hasten to our story, which will start with the state of the art of thinking about computing, intelligence and machinery in the Thirties and during the Second World War.

Chapter 2. The Ether of Ideas in the Thirties and the War Years

Introduction

“ Where there is a will, there is a way”.

So it is said, but it’s more complex than that. The will for AI’s existence was present in the early part of the 20th century. Rossum’s Universal Robots, Karel Čapek’s famous fictional work, was not an isolated opus. Science fiction stories featuring spaceships, talking robots, and voyages to the Moon, then becoming popular, were fanciful rather than real. The way did not yet exist, though.

AI as it would be realized in the statements of the Dartmouth Conference required working general-purpose digital computers; software programs; and an idea, however basic, of how human intelligence worked. The state of the art in the years leading up to the Second World War lacked these things. However, this decade did provide the antecedents which would become practical once the computer had been developed. A variety of disciplines studied intelligence in machinery and in the abstract, without ever discussing intelligence in people. These include Cybernetics, a theory of intelligence in machines couched abstractly enough that it could be applied to humans as well; neurology; information theory; formal logic; automata (primitive robotics); and theoretical automata, or machines consisting of formal language or logical programs that could emulate life functions.

1. Before the Computer: Intelligence as a Railroad Routing Station

As we saw in the first chapter, thinking about the human mind, and about human intelligence embodied in machinery, is a constant in human history. In the absence of raw materials, people will create stories about automata. Given scant raw materials, it seems, people will build automata. In the Thirties, practical automata were a popular hobby if not academic study (24). These electromechanical automata emulated biology rather than cognition. Science fiction typically predicted mechanized intelligence, in which highly anthropomorphic robots would carry out intelligent activities. This was presented in a ‘black box’ form, without any explanation of the basis, computational or otherwise, which would be used to implement it (25). There was no general-purpose machine that did ‘intelligent’ things. The shell of the idea of a computer existed, but there was no reality to substantiate it.

The idea of intelligence involving information processing or a program- that is, the computational metaphor for intelligence- did not and could not exist. It would not for years after the computer itself was extant and functional. Even as late as 1949, electrical engineer Edmund Berkeley does not venture far from the popular misconception when he suggests that the reader think of a mechanical brain as a sort of railroad station, in which information and sensory data are mostly processed by being routed to the correct location (26). It is not surprising that Berkeley uses this terminology- which concerns the routing of people and materiel- rather than the processing of information or goods. The latter would be an appropriate way to think metaphorically of computing, but metaphors of intelligence at the time did not support such a computational understanding. Intellectual historian Murray Eden discusses the weak metaphors used to describe intelligence:

“ In their introductory psychology text (1921, 1949), Robert Woodworth and Donald Marquis make no use of such concepts as message, information feedback, computation or control. A discussion of neurology is included but its relation to psychology does not go beyond the description given by Dunlap at least 35 years earlier. For these authors, the stimulus/response paradigm is preeminent... Woodworth and Marquis recognize the importance of organization but offer no means of investigating this property of the brain... We note again the analogic use of telephone cables and explosives. I find no reference to the analogy of the computer. Furthermore in the chapter on perception, there is no bibliographic entry later than 1943. It is fair to conclude that the authors did not believe that perceptual research during the twenty-four years prior to this edition was of particular import to beginning undergraduates.” (27)

Eden explains that the authors lacked the terminology with which to describe mental functioning apart from the transmission of neural impulses- in both 1921 and 1949.

2. Warring Camps in the Field of Psychology

Psychology concerning thinking was deficient during the entire first half of the 20th century, especially during the arid interwar period. The intellectual state of the art was characterized by Freudianism, early Behaviorism and its compatriot scientific management, and progress in neurology. Cognitive psychology, the systematic study of thinking, simply did not exist. Both Freud and the school of academic psychology known as Introspectionism studied the mind, but they did so mostly as regards emotions. Freudian psychology, studied in psychoanalytic institutes rather than in universities, held that the mind was all affect, and the academic discipline of Introspectionism that the mind was hardly knowable (28). A little Freud is a good thing, but this was carried much too far in both academic and intellectual circles and in the culture of the intelligentsia. Whatever contributions Freudian psychology made to the nascent state of the art in developmental and affect psychology, it unfortunately detracted from efforts to understand the nature of thinking as cogitation. Despite Freud’s arduous training in physiology and his practice as the most dogged of scientists, in this respect his work played into the hands of anti-technological and even anti-intellectual sentiments (29).

Behaviorism at its Most Radical

Behaviorism, which triumphed and was indeed ubiquitous in academic psychology for half a century, smothered the study of thinking. Behaviorism insisted that human beings’ behavior could be predicted in a highly regular manner without reference to thinking (30). John Broadus Watson (1878-1958), the field’s founder, declared that psychology was as cut-and-dried as wrapping a gift:

“...it is a purely objective experimental branch of natural science.” (31)

This school discarded the evidence of cogitation, consciousness, emotion, and deliberation- the ‘internal life’ of human beings- in favor of animals as research subjects, and of fixed and predictable inventories of responses to given stimuli as the scientific evidence to which all psychology should be reduced. Harvard professor B.F. Skinner maintained immense sway over the research agenda (32). Much academic psychology consisted of quantifications of the reflex arc, or idealized process by which people respond to a given stimulus. The experimenter tried to find clear stimuli which would elicit, consistently and measurably, a particular behavior. Experimenters typically used Classical or passive conditioning, as in Pavlovian stimulus and response tests, which elicit salivation. However, Radical Behaviorism, Skinner’s forte, relied on operant conditioning, in which the test subject must act on its environment to obtain reinforcement. Operant conditioning was used increasingly as Skinner became more influential over the decades of mid-century. Because human and even much animal behavior necessarily incorporates far more complex information processing, this line of research was doomed to stall in its own limitations.

The two polar opposites of Behaviorism and psychoanalytic psychology, locked together in eternal disdain, slowed work in cognitive and physiological psychology through at least 1950. The severe differences between these fields showed no sign of abating in the intellectual world of the late 1940s. Psychologist Howard Gardner’s authoritative history of the development of cognitive science clarifies the chilling and limiting effects of the astringencies of Behaviorism:

“ So long as Behaviorism held sway- that is during the 1920s, 1930s, and 1940s- questions about the nature of human language, planning, problem solving, imagination and the like could only be approached stealthily and with difficulty if they were tolerated at all.” (33)

3. Formal Logic and the Universal Turing Machine

Cognitive psychology was essential to AI. Until it was pursued, even the basic premise that mental phenomena could be studied could not take hold. Yet while it languished, formal logic moved forward. Formal Logic, especially productive during the Thirties, was an essential infrastructure for the development of computer languages. Its creators were philosophers, who worked independently of the electrical engineers. However, these strains would merge, very quickly, with the appearance of theories of intelligent automata and with computers. Logical positivism, a German and Austrian philosophical school, carried forward the advances of formal logic from the 19th century. It stated that real objects in the world were verifiable and could be clearly stated in a formal language. Formal logic attempted to describe items in the world in relation to each other. During the course of the 19th century, and into the early 20th century, the work of Frege, Boole and Bertrand Russell clarified formal symbolic notation for membership in a group, for the elements shared in common by two groups, for the existence of objects, and for some of their semantic (descriptive) properties. This level of formal logic would need to be elaborated greatly to allow for further description. Later, AI would incorporate formal logic in the Lisp language, invented by John McCarthy, as one of its major early intellectual advances.
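
The kinds of statement this notation could express- membership in a group, the elements shared by two groups, the existence of objects- translate naturally into elementary set operations. The small sketch below, in Python, is an editorial illustration with invented classes and members, not an example drawn from Frege, Boole, or Russell.

# Invented example classes; the point is the kind of statement expressed.
philosophers = {"Socrates", "Plato", "Aristotle"}
mortals = {"Socrates", "Plato", "Aristotle", "Hypatia"}

# Membership in a group: Socrates is a philosopher.
print("Socrates" in philosophers)        # True

# Elements shared in common by two groups: their intersection.
print(philosophers & mortals)            # the common members

# Inclusion: every philosopher is a mortal.
print(philosophers <= mortals)           # True

# Existence: there is at least one mortal who is not a philosopher.
print(bool(mortals - philosophers))      # True

First order predicate logic goes well beyond such finite, set-membership statements, but the flavor of formal symbolic reference is already visible here.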

The progress of the field of logic toward binary formal languages went as far as the Principia Mathematica, published by Bertrand Russell and Alfred North Whitehead between 1910 and 1913. Russell and Whitehead developed a formal language for the expression of objects, but did not develop a rich semantics for their description. This achievement hung in the air with little progress until the 1930s. At that time, other logicians set a cap on the further achievements of consistency in logic. But at the same time, logic also affirmed the possibility of computation.

David Hilbert, Schroeder, Russell and Whitehead, and the other mathematicians and logicians of the ‘logical positivist’ movement, had aimed to render formal logic, specifically first order predicate logic, capable of expressing and indeed reiterating the objects in the real world (34). The caveat to that initiative was imposed by the Czech-German Kurt Gödel (1906-1978), who trained in Vienna, moved to the USA in 1940, and settled to research at the Institute for Advanced Study. Gödel's 1932 Habilitationsschrift (35) at the University of Vienna stated his most lasting contribution- the famous ‘yes, but’ which is known as the incompleteness theorem.

This work discovered that consistent logical systems may not be able to prove their own consistency, nor to prove every true sentence. These are Gödel’s Theorems, the Second and the First Incompleteness Theorems of 1931, respectively (36). In addition to the finding that there are some- albeit mostly hypothetical- limits to formal logic’s conquests, Gödel’s work also proved the completeness of first order predicate calculus. Several years later, the American logician Alonzo Church demonstrated that FOPC is undecidable (37). Like Gödel’s thesis, Church’s theorem (1936) does not hamper the application of formal logic, specifically FOPC, to computer languages for use in workable technology (38).


