You’re reading an excerpt of Making Things Think: How AI and Deep Learning Power the Products We Use, by Giuliano Giacaglia. Purchase the book to support the author and the ad-free Holloway reading experience. You get instant digital access, plus future updates.
It’s fair to say that we have advanced further in duplicating human thought than human movement.
Garry Kasparov*
This era was marked by expert systems and increased funding in the 1980s. The development of Cog, iRobot, and Roomba by Rodney Brooks and the creation of Gammonoid, the first software to beat a world champion at backgammon, both took place during this period. This era ended with Deep Blue, the first computer program to win against a world-champion chess player.
After the First AI Winter, research picked up with new techniques that showed great results, heating up investment in research and development in the area and sparking the creation of new AI applications in enterprises. Simply put, the 1980s saw the rebirth of AI.
The research up until 1980 focused on general-purpose search mechanisms trying to string together elementary reasoning steps to find complete solutions. Such approaches were called weak methods because, although they applied to general problems, they did not scale up to larger or more difficult situations.
The alternative to these weak methods was to use more powerful domain-specific knowledge in expert systems that allowed more reasoning steps and could better handle typically occurring cases in narrow areas of expertise.* One might say that to solve a hard problem, you almost have to know the answer already. This philosophy was the leading principle for the AI boom from around 1980 to 1987.
The basic components of an expert system are the knowledge base and the inference engine. The knowledge base consists of all the important data for the domain-specific task. For example, in chess, this knowledge includes all the game’s rules and the points that each piece represents when playing. The information in the knowledge base is typically obtained by surveying experts in the area in question.
The inference engine enables the expert system to draw conclusions through simple rules like: If something happens, then something else happens. Using the knowledge base with these simple rules, the inference engine figures out what the system should do based on what it observes. In a chess game, the system can decide which piece to move by analyzing which moves are possible and which are the best ones based on the pieces remaining on the board. The inference engine makes a decision based on this knowledge.
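These two components can be sketched in a few lines of code. The following is a minimal illustration of the idea, not any production expert system: the knowledge base is a set of facts, the rules are simple if-then pairs, and the inference engine fires rules until no new conclusions can be drawn. The chess-flavored fact names are invented for the example.

```python
# Knowledge base: facts the system currently believes (hypothetical names).
knowledge_base = {"king_vs_king_and_queen", "opponent_king_on_edge"}

# Rules: if all conditions hold, conclude a new fact.
rules = [
    ({"king_vs_king_and_queen", "opponent_king_on_edge"}, "mate_possible"),
    ({"mate_possible"}, "play_for_checkmate"),
]

def infer(facts, rules):
    """Forward-chaining inference: apply rules until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer(knowledge_base, rules))
```

Note how the second rule fires only because the first one added `mate_possible` to the fact set; chaining simple if-then rules like this is what lets an inference engine reach conclusions that no single rule states directly.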
To understand the upsurge in AI, we must look at the state of computer hardware. Personal computers made huge strides from the early to mid-1980s. While the Tandem/16 was available, the Apple II, TRS-80 Model I, and Commodore PET were marketed to single users at a much lower price and with better sound and graphics, although with less memory. These computers were targeted at word processing, video games, and schoolwork. As the 1980s progressed, so did computers, with the release of Lotus 1-2-3 in 1983 and the introduction of the Apple Macintosh and the modern graphical user interface (GUI) in 1984. Personal computing hardware advanced by leaps and bounds during this time. And the same was true for AI.
Domain-specific expert systems also proliferated. Large corporations around the world began adopting these systems because they leveraged desktop computers rather than expensive mainframes. For example, expert systems helped Wall Street firms automate and simplify decision making in their electronic trading systems. Suddenly, people once again began to believe that computers could be intelligent.
AI research and development projects received over $1B in funding, and universities around the world with AI departments cheered. Companies developed not only new software but also specialized hardware to meet the needs of these new AI applications. New computer languages, including Prolog and Lisp, were developed to address the needs of these applications. Prolog, for example, was built around a rule-based system with primitives, such as a “rule,” that define and build on if-then statements.
Many AI companies were created, including hardware-specific companies like Symbolics and Lisp Machines and software-specific companies like Intellicorp and Aion. Venture capital firms, which invest in early-stage companies, emerged to fund these new tech startups with visions of billion-dollar exits. For the first time, technology firms received a dedicated pool of money, which sped up the development of AI.
In part, the explosion of expert systems was due to the success of XCON (for eXpert CONfigurer), which was written by John P. McDermott at CMU in 1978.* An example of a rule that XCON had in its repertoire was:
If: the current context is assigning devices to unibus modules, and there is an unassigned dual-port disk drive, and the type of controller it requires is known, and there are two such controllers, neither of which has any devices assigned to it, and the number of devices that these controllers can support is known,
Then: assign the disk drive to each of the controllers, and note that the two controllers have been associated and that each supports one device*
Digital Equipment Corporation (DEC) used the system to automatically select computer components that met user requirements, even though internal experts sometimes disagreed regarding the best configuration. Before XCON, individual components, such as cables and connections, had to be manually selected, resulting in mismatched components and missing or extra parts. By 1986, XCON had achieved 95% to 98% accuracy and saved $25M annually, arguably saving DEC from bankruptcy.
Figure: Rodney Brooks, with his two robots, Sawyer and Baxter.*
Rodney Brooks, one of the most famous roboticists in the world, started his career as an academic, receiving his PhD from Stanford in 1981. Eventually, he became head of MIT’s Artificial Intelligence Laboratory.
At MIT, Rodney Brooks* defined the subsumption architecture, a reactive robotic architecture.* Robots that followed this architecture guided themselves and moved around based on reactive behaviors. That meant that robots had rules on how to act based on the state of the world and would react to the world as it was at that moment. This was different from the standard way of programming robots, in which they created a model of the world and made decisions based on that potentially stale information.
For Brooks, robots had a set of sub-behaviors that were organized in a hierarchy. Robots interacted with the world and reacted to it based on those behaviors. This theory and the demonstration of his work made him famous worldwide. At the time, Japanese research focused on a different software architecture for robotics, and with the success of this new theory, investment in other types of robotics research dwindled, especially in Japan.
Based on the subsumption architecture, Brooks developed a robot that could grab a coke bottle, something that was unimaginable before. He believed that the key to achieving intelligence was to build a machine that experienced the world in the same way a human does. Brooks was considered a maverick in his field. He used to say, “Most of my colleagues here in the lab do very different things and have only contempt for my work,”* saying that most of the other researchers were pursuing GOFAI, or Good Old-Fashioned Artificial Intelligence, or “brain in the box.” His robots, by contrast, drew their knowledge directly from the real world and interacted with it. He used to say that GOFAI is like intelligence that only has access to the Korean dictionary: it can have self-consistent definitions, but they are unconnected to the world in any way.
In his 1990 paper, “Elephants Don’t Play Chess,”* he took direct aim at the physical symbol system hypothesis,* which states that a system based on rules for managing symbols has the necessary and sufficient means for general intelligent actions. It implies that human thinking can be reduced to symbol processing and that machines can achieve artificial general intelligence with a symbolic system.
For example, a chess game can be reduced to a symbolic mathematical system where pieces become symbols that are manipulated in each turn. The physical symbol system hypothesis stated that all types of thinking boiled down to mathematical formulas that can be manipulated. Brooks disagreed with the theory, arguing that symbols are not always necessary since “the world is its own best model. It is always exactly up to date. It always has every detail there is to be known. The trick is to sense it appropriately and often enough.” Therefore, you could create intelligent machines without having to create a model of the world.
Brooks started his research working on robotic insects that could move around their environment. At the time, researchers could not build robotic insects that moved quickly, even though that seems trivial. Brooks argued that the reason why robots from other researchers were slow is that they were GOFAI robots that relied on a central computer with a three-dimensional map of the terrain. He argued that such a system was not necessary and was even cumbersome for achieving the task of making robots move around their environment.
Brooks’s robots were different. Instead of having a central processor, each of his insects’ legs contained a circuit with a few simple rules, such as telling the leg to swing if it was in the air or move backward if it was on the ground. With these rules tied together, plus the interaction of the body with the circuits, the robots would walk similarly to how actual insects walk. The computation that the robots performed was always coupled physically with their body.
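The per-leg rules described above can be sketched as a toy simulation. This is only an illustration of the reactive idea, not Brooks's actual circuitry: each leg runs its own trivial rule with no central world model, and a walking gait emerges from the legs alternating out of phase.

```python
def step_leg(leg_state):
    """One reactive update: swing forward if airborne, push back if grounded."""
    if leg_state == "in_air":
        return "on_ground"   # swing the leg forward and plant the foot
    else:
        return "in_air"      # push the body backward, then lift the leg

# Six legs in an alternating tripod gait: three legs support the body
# while the other three swing. Coordination comes from the legs' phases
# and the body coupling them, not from a shared plan or terrain map.
legs = ["in_air", "on_ground", "in_air", "on_ground", "in_air", "on_ground"]
for _ in range(4):                       # four time steps
    legs = [step_leg(s) for s in legs]
    print(legs)
```

Each leg's controller sees only its own state; no component ever computes where the robot is or what the terrain looks like, which is exactly the contrast Brooks drew with map-building GOFAI robots.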
Brooks’s initial plan was to move up the biological ladder of the animal kingdom. First, he would work on robots that looked like insects, then an iguana, a simple mammal, a cat, a monkey, and eventually a human. But he realized that his plan would take too long, so he jumped directly from insects to building a humanoid robot named Cog.*
Cog was composed of black and chrome metal beams, motors, wires, and a six-foot-tall rack of boards containing many processing chips, each as powerful as a Mac II. It had two eyes, each consisting of a pair of cameras: one fisheye lens to give a broad view of what was going on, and one normal lens that gave this humanoid a higher-resolution representation of what was directly in front of it.
Cog had only the basics from the software standpoint: some primitive vision, some rudimentary hearing, some sound generation, and rough motor control. Instead of having its behavior programmed into it, Cog developed its behavior on its own by reacting to the environment. But it could do little besides wiggle its body and wave its arm. Ultimately, it ended up in a history museum in Boston.
In 1990, after his stint developing Cog, Professor Brooks started iRobot. The name is a tribute to Isaac Asimov’s science fiction book I, Robot. In the years ahead, the company built robots for the military to perform tasks like disarming bombs. Eventually, iRobot became well-known for creating home robots. One of its most famous robots is a vacuum cleaner named Roomba.
The way ants and bees search for food inspired the software behind Roomba.* When the robot starts, it moves in a spiral pattern, spiraling out over a larger and larger area until it hits an object. When it encounters something, it follows the edge of that object for a period of time. Then, it crisscrosses the room, trying to determine the longest distance it can go without hitting something else, which helps Roomba work out how large the space is. But if it goes too long without hitting a wall, the vacuum cleaner starts spiraling again because it figures it is in an open space. It constantly calculates how wide the area is.
Figure: A visualization of how the Roomba algorithm works.
Roomba combines this strategy with another one based on the dirt sensors on its underside. When it detects dirt, it changes its behavior to cover the immediate area. It then searches for another dirty area on a straight path. According to iRobot, these combined patterns create the most effective way to traverse a room. At least 10 million Roombas have been sold worldwide.
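The behavior switching described above can be sketched as a simple priority rule. This is a loose illustration of the idea, not iRobot's actual algorithm; the behavior names and the 30-step open-space threshold are invented for the example.

```python
def next_behavior(current, bumped, dirt_detected, time_since_bump):
    """Pick the next cleaning behavior, highest-priority condition first."""
    if dirt_detected:
        return "spot_clean"      # cover the immediate dirty area
    if bumped:
        return "wall_follow"     # trace the edge of the obstacle for a while
    if time_since_bump > 30:
        return "spiral"          # long time without hitting anything:
                                 # assume open space and spiral outward
    return "crisscross"          # otherwise cross the room in straight lines

print(next_behavior("spiral", bumped=True, dirt_detected=False,
                    time_since_bump=0))  # "wall_follow"
```

Like the subsumption architecture, this is purely reactive: the robot never builds a floor plan, it just keeps re-deciding based on what its bump and dirt sensors report right now.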
Brooks left iRobot to start a new company, Rethink Robotics, which specializes in making robots for manufacturing. In 2012, they developed their first robot, named Baxter, to help small manufacturers pack objects. Rethink Robotics introduced Sawyer in 2015 to perform more detailed tasks.
Professor Hans Berliner from CMU created a program called BKG 9.9 that in July 1979 played the world backgammon champion in Monte Carlo.* The program controlled a robot called Gammonoid, whose software was one of the largest examples of an expert system. Gammonoid’s software was running on a giant computer at CMU in Pittsburgh, Pennsylvania, 4,000 miles away, and gave the robot instructions via satellite communication. The winner of the human-versus-robot games would take home $5K. Not much was expected of the programmed robot because the players knew that the existing microprocessors could not play backgammon well. Why would a robot be any different?
The opening ceremony reinforced the view that the robot would not win. When appearing on the stage in front of everyone, the robot entangled itself in the curtains, delaying its appearance. Despite this, Gammonoid became the first computer program to win the world championship of a board or card game.* It won seven games to one. Luigi Villa, the human opponent, could hardly believe it and thought the program was lucky to have won two of the games, the third and the final one. But Professor Berliner and some of his researchers had been working on this backgammon machine for years.
The former world champion, Paul Magriel, commented on the game, “Look at this play. I didn’t even consider this play … This is certainly not a human play. This machine is relentless. Oh, it’s aggressive. It’s a really courageous machine.” Artificial intelligence systems were starting to show their power.
With this new power came the birth of systems like HiTech and Deep Thought, which would eventually defeat chess masters. These were the precursors of Deep Blue, the system that would become the world-champion chess software, and they were all developed in laboratories at Carnegie Mellon University.
Deep Thought, initially developed by researcher Feng-hsiung Hsu in 1985, was sponsored and eventually bought by IBM. Deep Thought went on to win against all other computer programs in the 1988 North American Computer Chess Championship. The AI then competed against Bent Larsen, a Danish chess Grandmaster, and became the first computer program to win a chess game against a Grandmaster. The results impressed IBM, which decided to bring development in-house, acquiring the program in 1995 and renaming it Deep Blue.
Figure: Garry Kasparov playing against IBM’s Deep Blue.
The following year, in 1996, it competed against the world chess champion, Garry Kasparov. Deep Blue became the first machine to win a chess game against a reigning world champion under regular time controls, although it ultimately lost the match. They played six games; Deep Blue won one, drew two, and lost three.
The next year in New York City, Deep Blue—now unofficially called Deeper Blue—improved and won the rematch against Garry Kasparov, winning two games, drawing three, and losing one. It was the first computer program to defeat a reigning world chess champion in a match.
Deep Blue’s Brain
The software behind Deep Blue, which won against Kasparov, ran an algorithm called min-max search with a few additional tricks. It searched through millions or billions of possibilities to find the best move, examining an average of 100 million chess positions per second while computing a move. It analyzed the current state and figured out the potential next moves based on the game rules. The program also calculated the possible “value” of the next play based on each player’s best moves after that move and what those would mean to the game. This process used an inference system on a knowledge base—an expert system.
Deep Blue’s algorithm minimized the possible loss for a worst-case scenario while maximizing the minimum gain for a potential win. It maximized the points that the software would get in the next moves and minimized the points that the opponent could get given a certain play. Hence the name min-max search. In chess, the state of a board can be evaluated by looking at which pieces remain on the board for each player. Each piece has a value attached to it. For example, a knight is worth three points, a rook five, and a queen ten. So, in a position where a player has lost a rook and a knight, that position is worth eight points less for that player.
Figure: An example of the min-max algorithm. We can calculate the “value” of each board by looking at the value at the bottom positions in the tree. For example, when there is one white knight and one black knight and a black rook, the total value of the board for the white player is 3-(3+5)=-5. So, we can calculate the value of each board at the bottom level. The value of the board at the next level up is the minimum value of all the ones below it, giving -8 and -5 as the board values for the second row. And at the top row, we use the maximum value, which is -5.
In another example, pawns could be worth one point, bishops and knights three, and a queen nine. The figure above depicts a simplified version of a game and demonstrates how the min-max algorithm works. On the first play, the white bishop can take either a black bishop or a black knight, and either move would cost the opponent three points. So, both moves would be equivalent if they were the only factors involved in the search for the best move. But the software also analyzes the moves that the opponent could make in response and determines the opponent’s best countermove. If the white bishop takes the opponent’s knight (the board on the left), the best countermove for the opponent is to take the bishop with its rook. That is not so good for the white player, the computer, because black would take its bishop, removing the three points it gained. The other possible first move for the white player is to take the black bishop (the board on the right). If it does that, the other player cannot take any white piece, which is very good for the white player because it stays up three points. Therefore, looking only at the possible next two moves, taking the bishop is the better of the two options.
Deep Blue, however, examined more than two moves ahead. It looked at all the possible future moves that it could, based on its time limit and the information it had, and chose the best one. This is why it needed to analyze so many moves per second; Deep Blue was the first such system to analyze that many possible future scenarios. This system and many later developments in artificial intelligence required a lot of computing power. That is no surprise, because the human brain also has very powerful computing capability. When playing chess, human players also examine future moves, but they rely on their memory of past games and of the best moves in a given scenario. Software for more complex games like Go would later rely on memory in this way, but for chess it was not necessary, since Deep Blue could analyze most of the relevant future scenarios during play. For more complex games, software simply cannot look at all possible scenarios in a timely manner.
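The search described above can be written down compactly. The sketch below is a textbook min-max search matching the figure's logic, not Deep Blue's actual code: move generation and board evaluation are abstracted into functions, and the tiny hand-built game tree and its leaf values are invented for the example.

```python
def minimax(state, depth, maximizing, moves_fn, value_fn):
    """Return the best value reachable from `state`, looking `depth` plies ahead.

    The maximizing player picks the child with the highest value; the
    minimizing player (the opponent) picks the child with the lowest.
    """
    moves = moves_fn(state)
    if depth == 0 or not moves:
        return value_fn(state)       # evaluate the board, e.g. by material
    values = [minimax(m, depth - 1, not maximizing, moves_fn, value_fn)
              for m in moves]
    return max(values) if maximizing else min(values)

# A tiny illustrative tree: white chooses a first move ("a" or "b"),
# then black replies. Leaf values are material scores for white.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1"]}
leaf_values = {"a1": -8, "a2": -3, "b1": -5}

best = minimax("root", 2, True,
               moves_fn=lambda s: tree.get(s, []),
               value_fn=lambda s: leaf_values.get(s, 0))
print(best)  # max(min(-8, -3), min(-5)) = -5
```

Note how the opponent's minimizing choice drags position "a" down to -8, so the maximizing player prefers "b" at -5, just as in the figure. Deep Blue added many refinements on top of this skeleton (pruning, specialized hardware, tuned evaluation), but the alternation of max and min is the core.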
The Second AI Winter (1987–1993)
I am inclined to doubt that anything very resembling formal logic could be a good model for human reasoning.
Marvin Minsky*
Beginning in 1987, funds once again dried up for several years. The era was marked by the qualification problem, which many AI systems encountered at the time.
An expert system requires a lot of data to create the knowledge base used by its inference engine, and unfortunately, storage was expensive in the 1980s. Even though personal computers grew in use during the decade, they had at most around 44MB of storage in 1986. For comparison, a 3-minute MP3 music file is around 3MB, and the same recording uncompressed is around 30MB. So, you couldn’t store much on these PCs.