The Second AI Winter (1987–1993)

I am inclined to doubt that anything very resembling formal logic could be a good model for human reasoning.
–Marvin Minsky*

Beginning in 1987, funding for AI once again dried up for several years. The era was marked by the qualification problem, which many AI systems of the time ran up against.

An expert system requires a lot of data to create the knowledge base used by its inference engine, and unfortunately, storage was expensive in the 1980s. Even though personal computers grew in use during the decade, they had at most 44MB of storage in 1986. For comparison, a 3-minute MP3 music file is around 3MB. So, you couldn’t store much on these PCs.

Not only that, but the cost of developing these systems was difficult for each company to justify; many corporations simply could not afford them. Added to that were problems with limited computing power. Some AI startups, such as Lisp Machines and Symbolics, built specialized hardware to run AI languages like Lisp, but the cost of this AI-specific equipment outweighed the promised business returns. Companies realized that they could use far cheaper hardware with less-intelligent systems and still obtain similar business outcomes.

A warning sign for the new wave of interest in AI was that expert systems were unable to solve specific, computationally hard logic problems, like predicting customer demand or determining the impact of resources from multiple, highly variable inputs. Newly introduced enterprise resource planning (ERP) applications started replacing expert systems. ERP systems dealt with problems like customer relationship management and supplier relationship management, and they proved very valuable to large enterprises.

The Qualification Problem

The qualification problem states that there is no way to predict all the possible outcomes and circumstances that prevent the successful execution of an action, yet a system must still recover from these unexpected failures. Reasoning agents in real-world environments rely on a solution to the qualification problem to make useful predictions.

For example, imagine that a program needs to drive a car using only if-then rules. A multitude of unexpected cases makes it impossible to handwrite all the rules for the application. Identifying cars and pedestrians is already extremely hard, and a self-driving car not only needs to identify objects but also must drive around things (and people) based on what it detects. Most people do not think of all the possible cases before they start writing the program. For example, if the program detects a human, is that human a pedestrian, a reflection, or someone riding in the bed of a pickup truck? The program also needs to be able to tell when vehicles are towing other vehicles. These examples only scratch the surface of possible exceptions to the rules.
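
To make the brittleness concrete, here is a minimal, hypothetical sketch of such hand-written if-then driving logic. The function name, labels, and actions are invented for illustration and are not from any real system:

```python
# A toy rule-based "driver" built only from if-then rules. The point is not
# the rules themselves but the exceptions they silently ignore.

def plan_action(detection: dict) -> str:
    """Pick a driving action from a single object detection using only if-then rules."""
    if detection["label"] == "human":
        # Exception the rule ignores: is this a pedestrian, a reflection in a
        # shop window, or someone riding in the bed of a pickup truck?
        return "brake"
    if detection["label"] == "vehicle":
        # Exception the rule ignores: a towed vehicle moves with its tow truck,
        # not on its own.
        return "keep_distance"
    # Anything not listed falls through to a default the rules never reasoned about.
    return "proceed"

# A reflection of a person triggers the same rule as a real pedestrian.
print(plan_action({"label": "human", "context": "reflection"}))  # -> "brake"
```

Every rule invites exceptions its author never anticipated, and each patch ("unless it is a reflection," "unless the vehicle is being towed") only adds more rules with exceptions of their own.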

Unmet Promises

Expert systems fell prey to the qualification problem, and that caused a collapse in AI funding because the systems could not achieve much of what they promised. The Second AI Winter began with the sudden collapse of the market for specialized AI hardware in 1987.* Desktop computers from IBM and Apple were steadily gaining market share, and 1987 became the turning point for the AI hardware manufacturers when Apple’s and IBM’s computers became more powerful and cheaper than the specialized Lisp machines. Not only that, but Reagan’s Star Wars missile defense program, in which DARPA had invested heavily in AI solutions, experienced a huge slowdown. This, in turn, severely damaged Symbolics, one of the main Lisp machine makers, creating a cascading effect.

Figure: One of the computers developed by the Fifth Generation Computer Systems program.

In addition to that, Fifth Generation Computer Systems, which was an initiative by the Japanese government to create a computer using massively parallel computing, was shut down during this period. The name of the project came from the fact that up until this time, there had been four generations of computing hardware:

  1. Vacuum tubes,

  2. Transistors and diodes,

  3. Integrated circuits, and

  4. Microprocessors.

The Japanese initiative represented a new generation of computers. Previously, computers focused on increasing the number of logic components in a single central processing unit (CPU), but Japan’s project, and others of its time, focused on boosting the number of CPUs for better performance. This enormous computer was intended to be a platform for future development in artificial intelligence. Its goal was to respond to natural language input and be capable of learning. But general-purpose Intel x86 machines and Sun workstations had begun surpassing specialized computer hardware. Because of that and the high cost of the project (around $500M in total), the Japanese government cut the initiative after a decade. The project’s end marked a failure of the massively parallel processing approach to AI.

In the United States, most of the projects of this era were also not working as expected. Eventually, the first successful expert system, XCON, proved too expensive to maintain. The system was complicated to update, could not learn, and suffered from the qualification problem.

The Strategic Computing Initiative (SCI),* another large program developed by the US government from 1983 to 1993, was inspired by Japan’s Fifth Generation Computer Systems project. It focused on chip design, manufacturing, and computer architecture for AI systems. The integrated program included projects at companies and universities that were designed to eventually come together. Funded by DARPA, the effort “was supposed to develop a machine that would run ten billion instructions per second to see, hear, speak, and think like a human.”* By the late 1980s, however, it was apparent that the initiative would not succeed in its AI goals, leading DARPA to cut funding “deeply and brutally.” This event, in addition to the numerous companies that had gone out of business, led to the Second AI Winter. The beginning of probabilistic reasoning marked the end of this winter and provided an altogether new approach to AI.

Probabilistic Reasoning (1993–2011)

Probability is orderly opinion and inference from data is nothing other than the revision of such opinion in the light of relevant new information.*

Probabilistic reasoning was a fundamental shift from the way problems had been addressed previously. Instead of adding facts, researchers started assigning probabilities to facts and events, building networks of how the probability of each event occurring affects the probability of others. Each event has a probability associated with it, as does each sequence of events. These probabilities, combined with observations of the world, are used to determine, for example, what the state of the world is and which actions are appropriate to take.
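
As a toy illustration of the idea (my example, not one from the book): a prior belief over hidden states, plus the probability of an observation under each state, yields an updated belief via Bayes’ rule. The states, observation, and numbers below are all invented.

```python
# Toy Bayesian update: belief about a hidden state ("rain" vs. "no_rain")
# is revised after observing a wet sidewalk. All probabilities are made up.

prior = {"rain": 0.3, "no_rain": 0.7}                          # P(state)
likelihood = {"wet_sidewalk": {"rain": 0.9, "no_rain": 0.2}}   # P(observation | state)

def update(prior, likelihood, observation):
    """Return the posterior P(state | observation) via Bayes' rule."""
    unnormalized = {s: prior[s] * likelihood[observation][s] for s in prior}
    total = sum(unnormalized.values())
    return {s: p / total for s, p in unnormalized.items()}

posterior = update(prior, likelihood, "wet_sidewalk")
print(posterior)  # belief in "rain" rises from 0.30 to about 0.66
```

Chaining many such conditional probabilities together is what gives these networks their power: a single observation shifts the belief about every connected event.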

Probabilistic reasoning involves techniques that leverage the probability that events will occur. Judea Pearl’s influential work, in particular on Bayesian networks, gave new life to AI research and was central to this period. Maximum likelihood estimation was another important technique used in probabilistic reasoning. IBM Watson, the last major success of this era to use probabilistic reasoning, built on these foundations to beat the best humans at Jeopardy!
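
Maximum likelihood estimation can be sketched just as briefly. In this hypothetical example (not from the book), the probability of a yes/no event is estimated as the value that makes the observed data most likely, which for independent observations is simply the observed frequency.

```python
# Toy maximum likelihood estimation for a Bernoulli (yes/no) event.
# The observations below are invented for illustration.

observations = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]    # 1 = event occurred, 0 = it did not
p_hat = sum(observations) / len(observations)     # MLE of the event probability
print(f"Maximum likelihood estimate: {p_hat:.2f}")  # 0.70
```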
