The First AI Winter (1974–1980)


It’s difficult to be rigorous about whether a machine really ‘knows’, ‘thinks’, etc., because we’re hard put to define these things. We understand human mental processes only slightly better than a fish understands swimming.

John McCarthy*

The First AI Winter began when funding dried up after many of the field’s early promises failed to pan out. The most famous idea to emerge from this era was the Chinese room argument, which holds that artificial intelligence systems can never achieve human-level intelligence, a position I personally disagree with.

Lack of Funding

From 1974 to 1980, AI funding declined drastically, a period now known as the First AI Winter. The term AI winter deliberately echoes nuclear winter, the prolonged period of cold and darkness predicted to follow a nuclear war. In the same way, AI research entered a long freeze during which it received little funding for years.

This era was brought on by critiques and financial setbacks that followed the many unfulfilled promises of the early AI boom. From the beginning, AI researchers were not shy about predicting their future successes. The following statement by Herbert Simon in 1957 is often quoted: “It is not my aim to surprise or shock you … but the simplest way I can summarize is to say that there are now in the world machines that think, that can learn and that can create. Moreover, their ability to do these things is going to increase rapidly until—in a visible future—the range of problems they can handle will be coextensive with the range to which the human mind has been applied.”*

Terms such as “visible future” can be interpreted in various ways, but Simon also made more concrete predictions: within 10 years, a computer would be a chess champion and a machine would prove a significant mathematical theorem. With Deep Blue’s victory over Kasparov in 1997 and the formal proof of the Four Color Theorem in 2005 using a general-purpose theorem prover, these predictions came true within about 40 years, 30 years later than predicted. Simon’s overconfidence stemmed from the promising performance of early AI systems on simple examples. In almost every case, however, these early systems failed miserably when applied to broader or more difficult problems.

The first type of complication arose because most early programs knew nothing of their subject matter; they succeeded by means of simple syntactic manipulations. A typical story occurred in early machine translation efforts, which were generously funded by the US National Research Council in an attempt to speed up the translation of Russian scientific papers in the wake of the 1957 Sputnik launch. It was thought initially that simple syntactic transformations, based on the grammars of Russian and English, together with word replacement from an electronic dictionary, would suffice to preserve the exact meanings of sentences. In fact, accurate translation requires background knowledge to resolve ambiguity and establish the content of a sentence. A 1966 report by the Automatic Language Processing Advisory Committee (ALPAC) criticizing machine translation efforts caused another setback: after some $20 million had been spent, the National Research Council ended its support on the strength of this report.
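
To see concretely why word-for-word substitution fails, here is a minimal sketch in Python (my own illustration: the toy dictionary and sentence are invented, and the original systems of course predate modern programming languages):

```python
# A minimal sketch of dictionary-based, word-for-word translation, the
# approach early machine translation projects assumed would suffice.
# The toy English-to-French dictionary below is invented for illustration.

toy_dictionary = {
    "the": "le",
    "bank": "banque",   # only the financial sense is recorded
    "by": "près de",
    "river": "rivière",
    "is": "est",
    "steep": "escarpée",
}

def translate(sentence: str) -> str:
    """Replace each word with its dictionary entry, ignoring all context."""
    return " ".join(toy_dictionary.get(word, word)
                    for word in sentence.lower().split())

# Here "bank" means the river's edge (French "rive"), but a purely
# syntactic substitution picks the financial sense and garbles the meaning.
print(translate("The bank by the river is steep"))
# -> "le banque près de le rivière est escarpée"
```

No amount of grammar rules fixes this; choosing the right sense of “bank” requires knowing what the sentence is about.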

Much criticism also came from AI researchers themselves. In 1969, Minsky and Papert published a book-length critique of perceptrons, the basis of early neural networks.* They proved that a single-layer perceptron could not represent simple functions such as XOR, and they conjectured that multilayer versions would suffer similar limitations. Ironically, multilayer neural networks, also known as deep neural networks (DNNs), would eventually revolutionize multiple tasks, including language translation and image recognition, and become the go-to machine learning technique for researchers.
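
The limitation is easy to demonstrate. The sketch below (my own illustration, not code from their book) trains a single-layer perceptron on XOR, which never succeeds because XOR is not linearly separable, and then computes XOR with a two-layer network using hand-picked weights:

```python
import numpy as np

# XOR truth table: not linearly separable, so no single-layer perceptron
# (one weight vector plus a bias) can classify all four points correctly.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

def perceptron(x, w, b):
    return int(np.dot(w, x) + b > 0)

# Train with the classic perceptron learning rule.
w, b = np.zeros(2), 0.0
for _ in range(1000):
    for xi, yi in zip(X, y):
        err = yi - perceptron(xi, w, b)
        w, b = w + err * xi, b + err
print([perceptron(xi, w, b) for xi in X])  # never matches [0, 1, 1, 0]

# A two-layer network, by contrast, solves XOR with hand-picked weights:
def two_layer(x):
    h1 = int(x[0] + x[1] - 0.5 > 0)   # hidden unit 1: OR
    h2 = int(x[0] + x[1] - 1.5 > 0)   # hidden unit 2: AND
    return int(h1 - h2 - 0.5 > 0)     # output: OR and not AND = XOR
print([two_layer(xi) for xi in X])    # [0, 1, 1, 0]
```

The extra layer lets the network carve out a region no single line can, which is exactly the capability Minsky and Papert doubted could be trained in practice.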

In 1973, following the same pattern of criticism, a report written by James Lighthill for the British Science Research Council, known as the Lighthill Report, gave a very pessimistic forecast for the field.* It stated, “In no part of the field have discoveries made so far produced the major impact that was then promised.” Following this report and others, DARPA withdrew its funding from the Speech Understanding Research program at CMU, canceling $3 million of annual grants. Another significant setback came from the Mansfield Amendment, passed by Congress, which barred military funding for research lacking a direct and apparent relationship to a specific military function.* As a result, DARPA funding dried up for many AI projects.


Critiques

Criticism came from everywhere, including philosophers. Hubert Dreyfus, a philosophy professor at MIT, criticized what he called the two assumptions of AI: the biological and the psychological.*

The biological assumption holds that the brain is analogous to computer hardware and the mind to computer software. The psychological assumption holds that the mind performs discrete computations on discrete representations, or symbols. Unfortunately, these concerns were not taken seriously by AI researchers. Dreyfus was given the cold shoulder and later claimed that AI researchers “dared not be seen having lunch with me.”

The Chinese Room Argument

The end of the First AI Winter coincided with one of the strongest and best-known arguments against machines ever having real intelligence. In 1980, John Searle, a philosophy professor at the University of California, Berkeley, introduced the Chinese room argument as a response to the Turing test. It holds that a computer program can never give a computer a mind, understanding, or consciousness.

Searle compared a machine’s understanding of Chinese to that of a person who does not know Chinese but can use a dictionary to translate every word between English and Chinese. In the same way, he argued, a machine has no real intelligence: even if a device passed the Turing test, that would not mean the computer literally understood anything. A computer that translates English to Chinese does not necessarily understand Chinese; it may merely be simulating the intelligence needed to understand it.

Searle used the term Strong AI for a machine with real intelligence, the equivalent of understanding Chinese rather than merely translating it word by word. A machine without real intelligence, one that translates Chinese words without grasping their meanings, has only Weak AI.

Figure: Representation of the Chinese room argument.

In the Chinese room argument, a person inside a room who uses a dictionary to translate English sentences and passes out Chinese sentences does not really understand Chinese.

Searle said that, from the outside, there would be no observable difference between using a device that translates directly from a dictionary and using one that understands Chinese. Each simply follows a program, step by step, producing behavior we describe as intelligent. Yet the dictionary translator does not understand Chinese, just as a computer can present the appearance of intelligence without actually possessing it. Some argue that Strong AI and Weak AI are therefore the same, with no real theoretical difference between the two.
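
A tiny sketch makes the intuition concrete. The “rulebook” below is invented for illustration; the program produces plausible Chinese replies without any step that involves knowing what the symbols mean:

```python
# A minimal sketch of the Chinese room: replies are produced by pure
# symbol matching against a rulebook. The entries are invented examples.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",           # "How are you?" -> "I'm fine, thanks."
    "今天天气好吗？": "是的，天气很好。",   # "Nice weather today?" -> "Yes, very nice."
}

def chinese_room(message: str) -> str:
    """Look the incoming symbols up in the rulebook and return the
    prescribed reply; no step involves understanding the symbols."""
    return RULEBOOK.get(message, "对不起，我不明白。")  # "Sorry, I don't understand."

# To an outside observer the replies look fluent; inside the room,
# only string matching has taken place.
print(chinese_room("你好吗？"))   # -> 我很好，谢谢。
```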

The problem with this argument is that there is no clear boundary between Weak AI and Strong AI. How can you determine whether someone really understands Chinese or is merely mimicking the behavior? If a machine can translate every possible sentence correctly, is that understanding or mimicry?

The AI Boom (1980–1987)

It’s fair to say that we have advanced further in duplicating human thought than human movement.

Garry Kasparov*

This era was marked by expert systems and increased funding in the 1980s. Rodney Brooks’s development of Cog and his founding of iRobot, maker of the Roomba, took place during this period, as did the creation of Gammonoid, the first program to defeat a world champion at backgammon. The era’s story ends with Deep Blue, the first computer program to defeat a world-champion chess player.

After the First AI Winter, research picked up with new techniques that showed great results, heating up investment in research and development in the area and sparking the creation of new AI applications in enterprises. Simply put, the 1980s saw the rebirth of AI.
