The Golden Years of AI (1956–1974)

From edition e1.0.2, updated November 2, 2022


The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.
Edsger Dijkstra

The Golden Years of AI started with Marvin Minsky’s development of Micro-Worlds and John McCarthy’s creation of Lisp, the first programming language optimized for artificial intelligence. This era was also marked by the creation of the first chatbot, ELIZA, and of Shakey, the first robot to move around on its own.

The years after the Dartmouth Conference were an era of discovery. The programs developed during this time were, to most people, simply astonishing. The next 18 years, from 1956 to 1974, were known as the Golden Years.* Most of the work developed in this era was done inside laboratories in universities across the United States. These years marked the development of the important AI labs at the Massachusetts Institute of Technology (MIT), Stanford, Carnegie Mellon University, and Yale. DARPA funded most of this research.*

MIT and Project MAC

MIT housed not a laboratory per se but what would be called Project MAC.* MAC was an acronym for Mathematics and Computation. The choice of creating a project instead of a lab stemmed from internal politics. Started by Robert Fano in July 1963, Project MAC would eventually turn into the Computer Science and Artificial Intelligence Lab (CSAIL) inside MIT. This project was responsible for research in the areas of artificial intelligence, operating systems, and theory of computation. DARPA provided a $2M grant for MIT’s Project MAC.

Marvin Minsky directed the AI Group inside Project MAC. John McCarthy was also a member of the group, and while at MIT he created the high-level language Lisp in 1958, which became the dominant AI programming language for the next 30 years. At the time, credentialed computer scientists did not exist because universities did not have computer science programs yet, so everyone involved in the project was a mathematician, physicist, electrical engineer, or dropout.

Figure: John McCarthy, Lisp language inventor.*

Project MAC was responsible for many inventions,* including the creation of the first computer-controlled robotic arm by Marvin Minsky and one of the first chess-playing* programs. The program, developed by McCarthy’s students, beat beginner chess players and used the same main techniques as Deep Blue, the computer-chess program that would beat Grandmaster Garry Kasparov years later.

Micro-Worlds

The world is composed of many environments, each with different rules and knowledge. Russian grammar rules differ from those of English, which in turn are entirely different from the rules of geometry. In 1970, Minsky and Seymour Papert suggested constraining their research to isolated areas; that is, they would focus on Micro-Worlds.* They concentrated on specific domains to see if programs could understand language in an artificially limited context. Most of the computer programs developed during the Golden Years focused on these Micro-Worlds.

One such program was SHRDLU, which was written by Terry Winograd at the MIT AI Lab to understand natural language.* In this experiment, the computer worked with colored blocks using a robotic arm and a video camera. SHRDLU responded to commands typed in English, such as “Grasp the pyramid.” The goal of this process was to build one or more vertical stacks of blocks. Some blocks could not be placed on top of others, making the problem more complex.

But the tasks involved more than merely following commands. SHRDLU performed actions in order to answer questions correctly. For example, when a person typed, “Can a pyramid be supported by a pyramid?”, SHRDLU tried to stack two pyramids and failed. It then responded, “I can’t.” Although SHRDLU was widely considered a breakthrough and a wildly successful demonstration of AI, Winograd realized that extending the approach beyond its Micro-World to broader applications was impossible.
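To make the blocks Micro-World concrete, here is a toy sketch in Python with a single invented rule, that nothing can be placed on a pyramid; the object names and the rule are illustrative assumptions, not Winograd’s actual SHRDLU code.

```python
# A toy blocks Micro-World in the spirit of SHRDLU (purely illustrative, not Winograd's code).
# The world tracks what rests on what, and one simple rule forbids placing
# anything on a pyramid, which is why stacking two pyramids fails.
SHAPES = {"b1": "block", "b2": "block", "p1": "pyramid", "p2": "pyramid"}
on = {}  # maps each object to the object it rests on

def put_on(obj, support):
    if SHAPES[support] == "pyramid":
        return "I can't."              # a pyramid has no flat top to build on
    on[obj] = support
    return "OK."

print(put_on("b2", "b1"))  # -> OK.
print(put_on("p1", "b2"))  # -> OK.
print(put_on("p2", "p1"))  # -> I can't.
```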

Figure: Marvin Minsky and his SHRDLU-controlled robotic arm.

Stanford

After McCarthy left MIT in 1962,* he became a professor at Stanford, where he started a lab called the Artificial Intelligence Center.* The laboratory focused most of its energy on speech recognition, and some of its work became the foundation for Siri, Apple’s virtual assistant.* The laboratory also worked on robotics and created one of the first mobile robots, Shakey. Developed from 1966 to 1972, it was the first robot that could break a large task down into smaller ones and execute them without a human directing each smaller job.

Shakey’s actions included traveling from one location to another, opening and closing doors, turning light switches on and off, and pushing movable objects.* The robot occupied a custom-built Micro-World consisting of walls, doors, and a few simple wooden blocks. The team painted the baseboards on each wall so that Shakey could “see” where the walls met the floor.

Lisp was the language used for the planning system, and STRIPS, the computer program responsible for planning Shakey’s actions, would become the basis for most automated planners. The robot included a radio antenna, television camera, processors, and collision-detection sensors. The robot’s tall structure and its tendency to shake resulted in its name. Shakey worked in an extremely limited environment, something critics pointed out, but even with these simplifications, Shakey still operated disturbingly slowly.
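To give a flavor of how a STRIPS-style planner represents the world, here is a minimal sketch in Python. The Shakey-like facts and actions are invented for illustration, and the breadth-first planner is a simplification, not the original SRI system: each action lists its preconditions, the facts it adds, and the facts it deletes, and the planner searches for a sequence of actions that satisfies the goal.

```python
from collections import deque

# A minimal, illustrative STRIPS-style planner (not the original SRI code).
# A state is a frozenset of facts; each action lists its preconditions,
# the facts it adds, and the facts it deletes.
ACTIONS = {
    "go(room1, room2)": {
        "pre": {"at(robot, room1)", "connected(room1, room2)"},
        "add": {"at(robot, room2)"},
        "del": {"at(robot, room1)"},
    },
    "push(box, room1, room2)": {
        "pre": {"at(robot, room1)", "at(box, room1)", "connected(room1, room2)"},
        "add": {"at(robot, room2)", "at(box, room2)"},
        "del": {"at(robot, room1)", "at(box, room1)"},
    },
}

def plan(start, goal):
    """Breadth-first search over states for a sequence of actions reaching the goal."""
    frontier = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                      # every goal fact holds in this state
            return steps
        for name, act in ACTIONS.items():
            if act["pre"] <= state:            # the action is applicable
                new_state = (state - act["del"]) | act["add"]
                if new_state not in seen:
                    seen.add(new_state)
                    frontier.append((new_state, steps + [name]))
    return None

start = {"at(robot, room1)", "at(box, room1)", "connected(room1, room2)"}
print(plan(start, {"at(box, room2)"}))  # -> ['push(box, room1, room2)']
```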

Figure: Shakey, the first self-driving robot.

Carnegie Mellon University

Another prominent laboratory working on artificial intelligence was inside Carnegie Mellon University. At CMU, Bruce T. Lowerre developed Harpy, a speech recognition system.* This work started around 1971, and DARPA funded five years of the research. Harpy was a breakthrough at the time because it recognized complete sentences. One difficulty in speech is knowing when one word ends and another begins; for example, “euthanasia” can be misheard as “youth in Asia.” By 1976, Harpy could recognize speech from different speakers, drawing on a vocabulary of 1,011 words, and transcribe it into text with a 90% accuracy rate.*

The Automatic Language Processing Advisory Committee (ALPAC) was created in 1964 by the US government “to evaluate the progress in computational linguistics in machine translation.”* By 1966, the committee reported it was “very skeptical of research done in machine translation so far, and emphasiz[ed] the need for basic research in computational linguistics” instead of AI systems. Because of this negative view, the government greatly reduced its funding.

Yale

At Yale, Roger Schank and his team used Micro-Worlds to explore language processing. In 1975, the group began work on a program called SAM, an acronym for Script Applier Mechanism, which was developed to answer questions about simple stories concerning stereotypical situations such as dining in a restaurant or traveling on the subway.

The program could infer information that was implicit in the story. For example, when asked, “What did John order?” SAM replied, “John ordered lasagna,” even though the story stated only that John went to a restaurant and ate lasagna.* Schank’s team worked on several other projects, and in 1977 it built another computer program, FRUMP, which summarized wire-service news reports in three different languages.

Geometry Theorem Prover

At IBM, Nathaniel Rochester and his colleagues produced some of the first AI programs. In 1959, Herbert Gelernter constructed the Geometry Theorem Prover, a program capable of proving theorems that many students of mathematics found quite tricky. His program “exploited two important ideas. One was the explicit use of subgoals (sometimes called ‘reasoning backward’ or ‘divide and conquer’), and the other was the use of a diagram to close off futile search paths.”* Gelernter’s program created a list of goals, subgoals, sub-subgoals, and so on, expanding more broadly and deeply until the goals were solvable. The program then traversed this chain to prove the theorem true or false.
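The subgoaling idea can be illustrated with a tiny backward-chaining sketch in Python; the congruence rules below are invented stand-ins, far simpler than the geometric knowledge Gelernter’s program actually encoded.

```python
# A minimal, illustrative backward-chaining prover (not Gelernter's program).
# Each rule pairs a goal with the subgoals needed to establish it;
# a rule with no subgoals acts as a given fact of the problem.
RULES = [
    ("triangles_congruent(ABC, DEF)", ["sides_equal(ABC, DEF)", "angles_equal(ABC, DEF)"]),
    ("sides_equal(ABC, DEF)", []),    # given as a premise of the problem
    ("angles_equal(ABC, DEF)", []),   # given as a premise of the problem
]

def prove(goal, depth=0):
    """Try to prove `goal` by recursively proving the subgoals of a matching rule."""
    for head, subgoals in RULES:
        if head == goal and all(prove(g, depth + 1) for g in subgoals):
            print("  " * depth + f"proved {goal}")
            return True
    print("  " * depth + f"failed {goal}")
    return False

prove("triangles_congruent(ABC, DEF)")
```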

SAINT

Figure: SAINT.

A heuristic is a rule of thumb that helps find a solution to a problem by making an educated guess about the best strategy to use given the current state.*

In 1961, James Slagle wrote the program SAINT, the Symbolic Automatic Integrator, which solved elementary symbolic integration problems of the kind found in first-year calculus. The SAINT system performed integration through a “heuristic” processing system.

SAINT divided the problem into subproblems, searched those for possible solutions, and then tested them. As soon as these subproblems were solved, SAINT could resolve the main one as well.
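As a rough illustration of that divide-and-conquer style, here is a toy symbolic integrator in Python; the tuple-based expression format and the handful of rules are assumptions made for this sketch, not Slagle’s original Lisp implementation.

```python
# A toy sketch of SAINT's divide-and-conquer style (not Slagle's original code).
# Expressions are nested tuples; integrate(expr, x) splits a problem into
# subproblems (e.g., the terms of a sum) and solves each with a simple rule.
def integrate(expr, x):
    if expr == x:                                  # integral of x is x^2 / 2
        return ("*", 0.5, ("^", x, 2))
    if isinstance(expr, (int, float)):             # integral of a constant c is c*x
        return ("*", expr, x)
    op = expr[0]
    if op == "+":                                  # split a sum into subproblems
        return ("+",) + tuple(integrate(term, x) for term in expr[1:])
    if op == "*" and isinstance(expr[1], (int, float)):
        return ("*", expr[1], integrate(expr[2], x))   # pull the constant out
    if op == "^" and expr[1] == x:                 # power rule: x^n -> x^(n+1)/(n+1)
        n = expr[2]
        return ("*", 1 / (n + 1), ("^", x, n + 1))
    raise ValueError(f"no heuristic applies to {expr}")

# Integrate 3x^2 + 5; the result is an unsimplified tuple equivalent to x^3 + 5x.
print(integrate(("+", ("*", 3, ("^", "x", 2)), 5), "x"))
```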

SAINT paved the way for later symbolic mathematics systems such as Wolfram Mathematica, a valuable tool widely used today in the scientific, engineering, and computational fields. SAINT, however, was not the only program that addressed school problems. Others, such as Daniel Bobrow’s STUDENT program, solved algebra word problems described in simple sentences like, “The consumption of my car is 15 miles per gallon.”*

The First Chatbot, ELIZA

Figure: ELIZA software running on a computer.

Created by Joseph Weizenbaum in 1964, ELIZA was the first version of a chatbot.* It mostly parroted people’s own words back at them and did not pass the Turing test, but it was an early natural language processing program that demonstrated where AI could head in the future. It talked to anyone who typed sentences into a computer terminal where it was installed.

ELIZA simply followed a few rules to try to identify the most important keywords in a sentence. With that information, the program attempted to reply based on that content. ELIZA disassembled the input and then reassembled it, creating a response out of the words the user had typed. For example, if the user entered, “You are very helpful,” ELIZA would first create the fragment “What makes you think I am,” then append the rest of the deconstructed input, producing the final sentence, “What makes you think I am very helpful?” If the program could not find such keywords, ELIZA responded with a content-free remark like “Please go on” or “I see.” ELIZA and today’s Alexa would not be too different from each other.
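A minimal Python sketch of this keyword-and-template mechanism might look like the following; the patterns and responses are invented for illustration and are not taken from Weizenbaum’s actual script.

```python
import re

# A minimal sketch of ELIZA's keyword-and-template idea (not Weizenbaum's script).
# Each rule pairs a pattern that captures part of the input with a response
# template that reassembles the captured text.
RULES = [
    (re.compile(r"you are (.*)", re.IGNORECASE), "What makes you think I am {0}?"),
    (re.compile(r"i feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
]
DEFAULTS = ["Please go on.", "I see."]

def respond(text, turn=0):
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return DEFAULTS[turn % len(DEFAULTS)]   # content-free fallback

print(respond("You are very helpful."))  # -> What makes you think I am very helpful?
print(respond("The weather is nice."))   # -> Please go on.
```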

The First AI Winter (1974–1980)

It’s difficult to be rigorous about whether a machine really ‘knows’, ‘thinks’, etc., because we’re hard put to define these things. We understand human mental processes only slightly better than a fish understands swimming.
John McCarthy*

The First AI Winter started when funding dried up after many of the early promises did not pan out as expected. The most famous idea to come out of this era was the Chinese room argument, which holds that artificial intelligence systems can never achieve human-level intelligence, a position I personally disagree with.

Lack of Funding
