Making Things Think

How AI and Deep Learning Power the Products We Use

Giuliano Pezzolo Giacaglia

AI now shapes our lives, yet few people know how machine learning algorithms work. It’s time for a detailed and approachable explanation for the intelligent reader.
240 pages and 481 links

Edition e1.0.2

Updated November 2, 2022
Wes Cowley, Editor

Introduction (15 minutes, 2 links)

It is the obvious which is so difficult to see most of the time. People say ‘It’s as plain as the nose on your face.’ But how much of the nose on your face can you see, unless someone holds a mirror up to you?
Isaac Asimov, I, Robot*

In a blue room with three judges, the best Go player of the last decade, Lee Sedol, plays against an amateur, Aja Huang, who is assisted by an artificial intelligence (AI) system. Via a computer screen on Huang’s left, AlphaGo instructs him on where to place each piece on the board. The match is a landmark in the history of artificial intelligence. If Huang wins, it will be the first time an AI system has beaten the highest-ranked Go player.

Many photographers and videographers stand in the room to stream the match to the millions watching, both live and on replay. Lee Sedol chooses the black stones, giving him the first move and his opponent seven and a half points as compensation.

The match between Sedol and AlphaGo started intensely. AlphaGo used strategies that only top professional players use, and the commentators were surprised at how human its play looked. But AlphaGo was far from human. At every turn it evaluated its options and placed each stone where its calculations gave it the best chance of winning. As the match went on, Sedol grew more nervous. After a surprising move by the AI, Sedol looked at Huang’s face to try to understand what his opponent was feeling, a technique Go players often use. But this time he could not read his opponent, because AlphaGo had no expression.

Then, Huang placed a white stone in a location that seemed to be an error. Commentators did not understand why AlphaGo would make such a rookie mistake. But in fact, AlphaGo had done the calculations and was on its way to winning the game. Almost four hours after the match started, Sedol, unable to beat this superhuman opponent, resigned, defeated. It was the first time that a computer had beaten the world champion of Go, an extraordinary achievement in the development of artificial intelligence. By the end of the March 2016 tournament, AlphaGo had beaten Sedol in four of the five games.

While the exact origins are unknown, Go dates from around 3,000 years ago. The rules are simple, but the board is a 19-by-19 grid with 361 points to place pieces, meaning the game has more possible positions than atoms in the universe. Therefore, the game is extremely hard to master. Go players tend to look down on chess players because of the exponential difference in complexity.

Chess is a game where Grandmasters already know the openings and strategies and, in a way, play not to make mistakes. Go, however, has many more options and requires thinking about the correct strategy and making the correct moves early on. Throughout Go’s history, three momentous shifts have taken place regarding how to play the game. Each of these eras represented a total change in the strategies used by Go’s best players.

Warlord Tokugawa led the first revolution in the 1600s, increasing the popularity of Go as well as raising the level of skill required to compete.* The second transformation occurred in the 1930s, when Go Seigen, one of the greatest Go players of the 20th century, and Kitani Minoru departed from the traditional opening moves and introduced Shinfuseki, making a profound impact on the game.*

The latest revolution happened in front of a global audience watching Sedol play AlphaGo. Unlike the first transformations, the third shift was brought about not by a human but rather by a computer program. AlphaGo not only beat Sedol, but it played in ways that humans had never seen or played before. It used strategies that would shape the way Go was played from then on.

It was not a coincidence that a computer program beat the best human player: it was due to the development of AI and, specifically, Go engines over the preceding 60 years. It was bound to happen.

Figure: Elo ratings of the most important Go AI programs.

This figure shows the Elo ratings—a way to measure the skill level of players in head-to-head competitions—of different Go software systems. The colored ovals indicate the type of software used. Each technical advance in Go engines produced a jump in the performance of the best of them. But even with the same Go engine, the best AI players improved over time, showing the likely effect of better hardware on how the engines performed: the faster the computer, the better it played.
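For readers unfamiliar with how Elo works, here is a minimal sketch in Python of the standard Elo update after a single game. The ratings and K-factor below are invented for illustration; they are not values taken from the figure.

```python
def expected_score(rating_a, rating_b):
    """Probability that player A beats player B under the Elo model."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def updated_rating(rating, expected, actual, k=32):
    """Move the rating toward the actual result; K controls the step size."""
    return rating + k * (actual - expected)

# Illustrative numbers: a 2800-rated engine beats a 2500-rated engine.
exp = expected_score(2800, 2500)               # ~0.85 chance of winning
print(round(updated_rating(2800, exp, 1), 1))  # small gain for the heavy favorite
```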

Go engines are only one example of the development of artificial intelligence systems. Some argue that the reason why AI works well in games is that game engines are their own simulations of the world. That means that AI systems can practice and learn in these virtual environments.

People often misunderstand the term artificial intelligence for several reasons. Some associate AI with how it is presented in TV shows and movies, like The Jetsons or 2001: A Space Odyssey. Others link it with puppets that look like humans but do not exhibit any intelligence. These are inaccurate representations of AI. In reality, artificial intelligence encompasses much of the software we interact with daily, from the Spotify playlist generator to voice assistants like Alexa and Siri. People especially associate AI with robots, often those that walk and talk like humans. But AI is like a brain, and a robot is just one possible container for it. The vast majority of robots today don’t look or act like humans. Think of a mechanical arm in an assembly line: an AI may control the arm to, among other things, recognize parts and orient them correctly.

Computer science defines artificial intelligence (or AI) as the capability of a machine to exhibit behaviors commonly associated with human intelligence. It is called that to contrast with the natural intelligence found in humans and other animals. Computer scientists also use the term to refer to the study of how to build systems with artificial intelligence.

For example, an AI system can trade stocks or answer your requests, and it can run on a computer or phone. When people think of artificial intelligence systems, they typically compare them to human intelligence. Because computer science defines AI as an approach to solving particular tasks, one way to compare it to human intelligence is to measure how well it performs a specific task relative to the best humans.

Software and hardware improvements are making AI systems perform much better in specific tasks. Below, you see some of the milestones at which artificial intelligence has outperformed humans and how these accomplishments are becoming more frequent over time.

Figure: Artificial intelligence systems continue to do better and better at different tasks, eventually outperforming humans.

Artificial general intelligence (AGI) is an AI system that outperforms humans across the board, that is, a computer system that can execute any given intellectual task better than a human.

For some tasks, like Go, AI systems now perform better than humans. And the trend shows AI systems outperforming humans at harder and harder tasks, and doing so more often. That is, the trend suggests that artificial general intelligence is within reach.

What to Expect in This Book

In the first section of Making Things Think, I talk about the history and evolution of AI, explaining in layperson’s terms how these systems work. I cover the critical technical developments that caused big jumps in performance but not some of the topics that are either too technical or not as relevant, such as k-NN regression, identification trees, and boosting. Following that, I talk about deep learning, the most active area of AI research today, and cover the development trends of those methods and the players involved. But more importantly, I explain why Big Data is vital to the field of AI. Without this data, artificial intelligence cannot succeed.

The next section describes how the human brain works and compares it to the latest AI systems. Understanding how biological brains work and how animals and humans learn with them can shed light on possible paths for AI research.

We then turn to robotics, with examples of how industry is using AI to push automation further into supply chains, households, and self-driving cars.

The next section contains examples of artificial intelligence systems in industries such as space, real estate, and even our judicial system. I describe the use of AI in specific real-world situations, linking it back to the information presented earlier in the book. The final section covers the risks and impacts of AI systems, starting with how they can be used for surveillance, continuing with the economic impact of AI, and ending with a discussion of the possibility of AGI.

Why Another Book about AI?

I was born and raised in São Paulo, Brazil. I was lucky enough to be one of the two Brazilians selected for the undergraduate program at MIT. Coming to the US to study was a dream come true. I was really into mathematics and published some articles in the field,* but I ended up loving computer science, specifically artificial intelligence.

I’ve since spent almost a decade in the field of artificial intelligence, from my Masters in machine learning to my time working at a company that personalizes emails and ads for the largest e-commerce brands in the world. Over these years, I’ve realized how much these systems affect people’s everyday lives, from self-driving car software to video recommendations. One credible prediction is that artificial intelligence could scale from about $2.5 trillion to $87 trillion in enterprise value by 2030; for comparison, the internet generated around $12 trillion of enterprise value from 1997 to 2021.*

Even though these systems are everywhere and will become more important over time as they become more capable, few people have a concrete idea of how they work. On top of that, we see both fanfare and fear in the news about the capability of these systems. But the headlines often dramatize certain problems, focus on unrealistic scenarios, and neglect important facts about how AI and recent developments in machine learning work—and are likely to affect our lives.

As an engineer, I believe you should start with the facts. This book aims to explain how these systems work so you can have an informed opinion, and assess for yourself what is reality and what is not.

But to do that, you also need context. Looking to the past (including inaccurate predictions from the past) informs our view of the future. So the book covers the history of artificial intelligence, going over its evolution and how the systems have been developed. I hope this work can give an intelligent reader a practical and realistic understanding of the past, present, and possible future of artificial intelligence.

Acknowledgments

Some people say that “it takes a village” to raise a child. I believe that everything that we build “takes a village.” This book is no exception.

I want to thank my late grandmother Leticia for giving me the aid that I needed. Without her help, I wouldn’t have been able to graduate from MIT.

My mom made me believe in dreams, and my dad taught me the value of a strong work ethic. Thank you for raising me.

I want to thank my brother and his family for always supporting me when I need it, and my whole family for always helping me out. Thanks also go to my friends, especially two close friends, Aldo Pacchiano and Beneah Kombe, for being my extended family.

I would also like to thank Paul English for being an amazing mentor and an incredible person overall, and Homero Esmeraldo, Petr Kaplunovich, Faraz Khan, James Wang, Adam Cheyer, and Samantha Mason for revising this book.

Finally, I want to thank my wife, Juliana Santos, for being with me on the ups and downs, and especially on the downs. Thank you for being with me on this journey!

A Brief History of AI (2 hours, 51 links)

The advancement of artificial intelligence (AI) has not been a straight path—there have been periods of booms and busts. This first section discusses each of these eras in detail, starting with Alan Turing and the initial development of artificial intelligence at Bletchley Park in England, and continuing to the rise of deep learning.

The 1930s to the early 1950s saw the development of the Turing machine and the Turing test, which were fundamental in the early history of AI. The official birth of artificial intelligence was in the mid-1950s with the onset of the field of computer science and the creation of machine learning. The year 1956 ushered in the golden years of AI with Marvin Minsky’s Micro-Worlds.

For eight years, AI experienced a boom in funding and growth in university labs. Unfortunately, the government, as well as the public, became disenchanted with the lack of progress. While producing solid work, those in the field had overpromised and underdelivered. From 1974 to 1980, funding almost completely dried up, especially from the government. There was much criticism during this period, and some of the negative press came from AI researchers themselves.

In the 1980s, computer hardware was transitioning from mainframes to personal computers, and with this change, companies around the world adopted expert systems. Money flooded back into AI. The downside to expert systems was that they required a lot of data, and in the 1980s, storage was expensive. As a result, most corporations could not afford the cost of AI systems, and the field experienced its second bust. The first section of Making Things Think ends with the rise of probabilistic reasoning, at around the year 2001.

The Early Beginnings of AI (1932–1952)

I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.
Alan Turing*

This chapter covers Alan Turing, the initial developments of artificial intelligence at Bletchley Park in England, and how they helped break Germany’s codes and win World War II. I also describe the development of the Turing machine and the Turing test, which became the gold standard for evaluating artificial intelligence systems for decades. We’ll also meet Arthur Samuel and Donald Michie, who made some of the earliest advances in artificial intelligence and built engines that played games.

Alan Turing

During the Second World War, the British and the Allies had the help of thousands of codebreakers located at Bletchley Park in the UK. In 1939, one of these sleuths, Alan Turing, a young mathematician and computer scientist, was responsible for the design of the electromechanical machine named the Bombe. The British used this device to break the German Enigma Cipher.* At the same location in 1943, Tommy Flowers, with contributions from Turing, designed the Colossus, a set of computers built with vacuum tubes, to help the Allies crack the Lorenz Cipher.* These two devices helped break the German codes and anticipate Germany’s strategy. According to US General Dwight D. Eisenhower, cracking the enemy’s codes was decisive to the Allied victory in the war.

The Birth of Artificial Intelligence (1952–1956)

If after I die, people want to write my biography, there is nothing simpler. They only need two dates: the date of my birth and the date of my death. Between one and another, every day is mine.
Fernando Pessoa*

The birth of artificial intelligence came with the initial development of neural networks, including Frank Rosenblatt’s creation of the perceptron model and the first demonstration of supervised learning. This period also saw the Georgetown-IBM experiment, an early language translation system. Finally, the end of the beginning was marked by the Dartmouth Conference, at which artificial intelligence was officially launched as a field of computer science, leading to the first government funding of AI.

Neural Networks

In 1943, Warren S. McCulloch, a neurophysiologist, and Walter Pitts, a mathematical prodigy, created the concept of artificial neural networks. They designed their system based on how our brains work, patterning it after the biological model of how neurons—brain cells—work with each other. Neurons interact at their extremities, firing signals via their axons across synapses to neighboring neurons’ dendrites. Depending on the strength of the incoming signals, the receiving neuron either fires its own electrical pulse on to the next set of neurons or stays silent.
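As a rough illustration of that idea (not McCulloch and Pitts’s actual formulation), here is a toy artificial neuron in Python: it sums its weighted inputs and fires only if the total crosses a threshold, mirroring the all-or-nothing behavior described above. The weights and threshold are invented.

```python
def artificial_neuron(inputs, weights, threshold):
    """Fire (return 1) only if the weighted sum of inputs reaches the threshold."""
    total = sum(signal * weight for signal, weight in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Example: with these weights and threshold, the neuron acts like a logical AND.
print(artificial_neuron([1, 1], [1, 1], threshold=2))  # fires -> 1
print(artificial_neuron([1, 0], [1, 1], threshold=2))  # stays silent -> 0
```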

The Golden Years of AI (1956–1974)

The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.
Edsger Dijkstra

The Golden Years of AI started with the development of Micro-Worlds by Marvin Minsky as well as John McCarthy’s development of Lisp, the first programming language optimized for artificial intelligence. This era was marked by the creation of ELIZA, the first chatbot, and Shakey, the first robot able to move around and reason about its own actions.

The years after the Dartmouth Conference were an era of discovery. The programs developed during this time were, to most people, simply astonishing. The next 18 years, from 1956 to 1974, were known as the Golden Years.* Most of the work developed in this era was done inside laboratories in universities across the United States. These years marked the development of the important AI labs at the Massachusetts Institute of Technology (MIT), Stanford, Carnegie Mellon University, and Yale. DARPA funded most of this research.*

MIT and Project MAC

The First AI Winter (1974–1980)

It’s difficult to be rigorous about whether a machine really ‘knows’, ‘thinks’, etc., because we’re hard put to define these things. We understand human mental processes only slightly better than a fish understands swimming.
John McCarthy*

The First AI Winter started when funds dried up after many of the early promises did not pan out as expected. The most famous idea to come out of this era was the Chinese room argument, which holds that artificial intelligence systems can never achieve human-level intelligence, an argument I personally disagree with.

Lack of Funding

From 1974 to 1980, AI funding declined drastically, making this period known as the First AI Winter. The term AI winter was an explicit reference to nuclear winter, the long stretch of cold and darkness predicted to follow a nuclear war, during which little can grow. In the same way, AI research went through a long, bleak stretch in which it received little funding.

The AI Boom (1980–1987)

It’s fair to say that we have advanced further in duplicating human thought than human movement.
Garry Kasparov*

This era was marked by expert systems and increased funding in the 1980s. The development of Cog, iRobot, and the Roomba by Rodney Brooks, and the creation of Gammonoid, the first program to beat a world champion at backgammon, both took place during this period. The era ended with Deep Blue, the first computer program to beat a world-champion chess player.

After the First AI Winter, research picked up with new techniques that showed great results, heating up investment in research and development in the area and sparking the creation of new AI applications in enterprises. Simply put, the 1980s saw the rebirth of AI.

The research up until 1980 focused on general-purpose search mechanisms trying to string together elementary reasoning steps to find complete solutions. Such approaches were called weak methods because, although they applied to general problems, they did not scale up to larger or more difficult situations.

The Second AI Winter (1987–1993)

I am inclined to doubt that anything very resembling formal logic could be a good model for human reasoning.
Marvin Minsky*

Beginning in 1987, funds once again dried up for several years. The era was marked by the qualification problem, which many AI systems encountered at the time.

An expert system requires a lot of data to create the knowledge base used by its inference engine, and unfortunately, storage was expensive in the 1980s. Even though personal computers grew in use during the decade, they had at most 44MB of storage in 1986. For comparison, three minutes of uncompressed CD-quality audio takes around 30MB. So, you couldn’t store much on these PCs.

Not only that, but the cost to develop these systems for each company was difficult to justify. Many corporations simply could not afford the costs of AI systems. Added to that were problems with limited computing power. Some AI startups, such as Lisp Machines and Symbolics, developed specialized computing hardware that could process specialized AI languages like Lisp, but the cost of the AI-specific equipment outweighed the promised business returns. Companies realized that they could use far cheaper hardware with less-intelligent systems but still obtain similar business outcomes.

Probabilistic Reasoning (1993–2011)

Probability is orderly opinion and inference from data is nothing other than the revision of such opinion in the light of relevant new information.*

Probabilistic reasoning was a fundamental shift from the way problems had been addressed previously. Instead of encoding hard facts, researchers started assigning probabilities to facts and events, building networks that describe how the probability of one event occurring affects the probability of others. Each event has a probability associated with it, as does each sequence of events. These probabilities, combined with observations of the world, are used to determine, for example, the likely state of the world and which actions are appropriate to take.

Probabilistic reasoning involves techniques that leverage the probability that events will occur. Judea Pearl’s influential work, in particular on Bayesian networks, gave new life to AI research and was central to this period. Maximum likelihood estimation was another important technique used in probabilistic reasoning. IBM Watson, one of the last major systems built on probabilistic reasoning, used these foundations to beat the best humans at Jeopardy!
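To make the idea concrete, here is a minimal sketch of probabilistic reasoning using Bayes’ rule, the building block of Bayesian networks. The two events and all the numbers are invented for illustration; a real network like the ones Pearl studied chains many such conditional probabilities together.

```python
# Prior belief and conditional probabilities (invented for illustration).
p_rain = 0.2                  # how likely rain is before seeing any evidence
p_wet_if_rain = 0.9           # grass is usually wet when it rains
p_wet_if_no_rain = 0.1        # sprinklers occasionally wet the grass anyway

# Total probability of the observation "the grass is wet."
p_wet = p_wet_if_rain * p_rain + p_wet_if_no_rain * (1 - p_rain)

# Bayes' rule: the observation revises the belief about rain.
p_rain_if_wet = p_wet_if_rain * p_rain / p_wet
print(round(p_rain_if_wet, 2))  # ~0.69, up from the 0.2 prior
```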

Judea Pearl

IBM Watson

Watson was a project developed from 2004 to 2011 by IBM to beat the best humans at the television game show Jeopardy! The project was one of the last successful systems to use probabilistic reasoning before deep learning became the go-to solution for most machine learning problems.

Since Deep Blue’s victory over Garry Kasparov in 1997, IBM had been searching for a new challenge. In 2004, Charles Lickel, an IBM Research manager at the time, identified the challenge after a dinner with co-workers. Lickel noticed that most people in the restaurant were staring at the bar’s television. Jeopardy! was airing. As it turned out, Ken Jennings was playing his 74th match, the last game he won.

Figure: The computer that IBM used for IBM Watson’s Jeopardy! competition.

A Brief Overview of Deep Learning (2 hours, 79 links)

The fundamental shift in solving problems that probabilistic reasoning brought to AI from 1993 to 2011 was a big step forward, but probability and statistics only took developers so far. Geoffrey Hinton championed a breakthrough technique called backpropagation to usher in the next era of artificial intelligence: deep learning. His work with multilayer neural networks is the basis of modern-day AI development.

Deep learning is a class of machine learning methods that uses multilayer neural networks that are trained through techniques such as supervised, unsupervised, and reinforcement learning.

In 2012, Geoffrey Hinton and students in his lab showed that deep neural networks, trained using backpropagation, beat the best algorithms in image recognition by a wide margin.

Right after that, deep learning took off, unlocking a ton of potential. Its first large-scale use was Google Brain. Led by Andrew Ng, the project fed 10 million YouTube videos to 1,000 computers, and the system learned to recognize cats and detect faces without hard-coded rules.

The Main Approaches to Machine Learning

I learned very early the difference between knowing the name of something and knowing something.
Richard Feynman*

Machine learning algorithms usually learn either by analyzing data and inferring what model, or what parameters of a model, best fit it, or by interacting with an environment and getting feedback from it. Humans may or may not annotate the data, and the environment can be simulated or the real world.

The three main categories of machine learning algorithms are supervised learning, unsupervised learning, and reinforcement learning. Other techniques exist, such as evolution strategies and semi-supervised learning, but they are not as widely used or as successful as these three.
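As a toy illustration of the supervised case, the sketch below uses human-provided labels to “train” the simplest possible classifier: it memorizes the average point of each class and assigns new points to the nearest one. The data, labels, and method are invented for illustration and are far simpler than anything discussed later in the book.

```python
import numpy as np

# Toy labeled dataset: two features per example, labels supplied by a human.
features = np.array([[1.0, 1.2], [0.9, 1.1], [3.0, 3.2], [3.1, 2.9]])
labels = np.array([0, 0, 1, 1])

# "Training": compute the average point (centroid) of each labeled class.
centroids = {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def predict(point):
    """Assign a new point to the class whose centroid is closest."""
    return min(centroids, key=lambda c: np.linalg.norm(point - centroids[c]))

print(predict(np.array([1.1, 1.0])))  # -> 0
print(predict(np.array([2.8, 3.0])))  # -> 1
```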

Supervised Learning

Deep Neural Networks

I have always been convinced that the only way to get artificial intelligence to work is to do the computation in a way similar to the human brain. That is the goal I have been pursuing. We are making progress, though we still have lots to learn about how the brain actually works.
Geoffrey Hinton*

The Breakthrough

Deep learning is a type of machine learning that uses multilayer neural networks, with backpropagation as the technique to train them. The field was pioneered by Geoffrey Hinton, the great-great-grandson of George Boole, whose Boolean algebra is a keystone of digital computing.*
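To show what “multilayer neural networks and backpropagation” means in practice, here is a minimal sketch in plain Python and NumPy: a tiny two-layer network learns the XOR function by repeatedly running a forward pass, measuring its error, and propagating that error backward to adjust its weights. The network size, learning rate, and task are arbitrary choices for illustration, not an example from the book.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output layer

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for step in range(10_000):
    # Forward pass: compute the network's prediction.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: propagate the prediction error through each layer.
    d_output = (output - y) * output * (1 - output)
    d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)

    # Gradient step: nudge every weight in the direction that reduces error.
    W2 -= 0.5 * hidden.T @ d_output; b2 -= 0.5 * d_output.sum(axis=0)
    W1 -= 0.5 * X.T @ d_hidden;      b1 -= 0.5 * d_hidden.sum(axis=0)

print(output.round(2).ravel())  # approaches [0, 1, 1, 0] after training
```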

The evolution of deep learning was a long process, so we must go back in time to understand it. The technique first arose in the field of control theory in the 1950s. One of the first applications involved optimizing the thrusts of the Apollo spaceships as they headed to the moon.

Convolutional Neural Networks

There are many types of deep neural networks, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and long short-term memory networks (LSTMs), and each has different properties. For example, recurrent neural networks are deep neural networks in which neurons in higher layers connect back to the neurons in lower layers. Here, we’ll focus on convolutional neural networks, which are computationally more efficient and faster than most other architectures.* They are extremely relevant as they are used for state-of-the-art text translation, image recognition, and many other tasks.
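The word convolutional refers to an operation the text does not spell out: a small filter of shared weights slides across the image, producing a strong response wherever the pattern it encodes appears. Here is a minimal NumPy sketch of that operation with an invented image and filter; real convolutional layers stack many such filters and learn their weights from data.

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide the kernel over the image, reusing the same weights at every position."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    output = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i:i + kh, j:j + kw]
            output[i, j] = np.sum(patch * kernel)   # weighted sum of the patch
    return output

image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
edge_filter = np.array([[-1.0, 1.0]])   # responds strongly to vertical edges
print(convolve2d(image, edge_filter))   # lights up where the dark/bright edge sits
```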

Figure: A recurrent neural network, where one of the neurons feeds back into a previous layer.

The first time Yann LeCun revolutionized artificial intelligence, it was a false start.* By 1995, he had dedicated almost a decade to what many computer scientists considered a bad idea: that mimicking some features of the brain would be the best way to improve artificial intelligence algorithms. But LeCun eventually demonstrated that his approach could produce something strikingly smart and useful.

Google Brain: The First Large-Scale Neural Network

The brain sure as hell doesn’t work by somebody programming in rules.
Geoffrey Hinton*

Google Brain started as a research project between Google employees Jeff Dean and Greg Corrado and Stanford Professor Andrew Ng in 2011.* But Google Brain turned into much more than simply a project. By acquiring companies such as DeepMind and key AI personnel like Geoffrey Hinton, Google has become a formidable player in advancing this field.

One of the early key milestones for deep neural networks came from research led by Ng, who decided to feed YouTube videos to a deep neural network.* Over the course of three days, he fed 10 million YouTube videos* to 1,000 computers with 16 cores each, using the 16,000 processor cores to train a neural network to learn the common features in these videos. After being presented with a list of 20,000 different objects, the system recognized pictures of cats and around 3,000 other objects, identifying 16% of the objects without any input from humans.

The same software that recognized cats was able to detect faces with 81.7% accuracy and human body parts with 76.7% accuracy.* With only the data, the neural network learned to recognize images. It was the first time that such a massive amount of data was used to train a neural network, and this would become standard practice in the years to come. The researchers made an interesting observation: “It is worth noting that our network is still tiny compared to the human visual cortex, which is 10^6 times larger in terms of the number of neurons and synapses.”*

DeepMind: Learning from Experience

Demis Hassabis was a child prodigy in chess, reaching the Master standard at age 13, when he was the second highest-rated player in the world under-14 category, and he also “cashed at the World Series of Poker six times including in the Main Event.”* In 1994, at 18, he began his computer games career, co-designing and programming the classic game Theme Park, which sold millions of copies.* He then became the head of AI development for an iconic game called Black & White at Lionhead Studios. Hassabis earned his PhD in cognitive neuroscience from University College London in 2009.

Figure: Demis Hassabis, CEO of DeepMind.

In 2010, Hassabis co-founded DeepMind in London with the mission of “solving intelligence” and then using that intelligence to “solve everything else.” Early in its development, DeepMind focused on algorithms that mastered games, starting with games developed for Atari.* Google acquired DeepMind in 2014 for $525M.

AlphaGo: Defeating the Best Go Players

In the Introduction, we discussed the Go competition between Lee Sedol and AlphaGo. DeepMind developed AlphaGo with the goal of playing Go against the Grandmasters. October 2015 was the first time that software beat a professional human player at Go, a game with around 10^170 possible positions, more possible positions than the number of moves in chess or even the total number of atoms in the universe (around 10^80). In fact, if every atom in the universe were itself a universe of atoms, there would still be fewer atoms than positions in a Go game.
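The comparison above can be checked with a few lines of arithmetic, using the commonly cited rough figures of about 10^170 legal Go positions and about 10^80 atoms in the observable universe.

```python
go_positions = 10 ** 170       # commonly cited rough count of legal Go positions
atoms_in_universe = 10 ** 80   # commonly cited rough estimate

# If every atom held a whole universe's worth of atoms, that would give
# 10^160 atoms in total, which still falls short of the number of Go positions.
atoms_of_atoms = atoms_in_universe * atoms_in_universe
print(atoms_of_atoms < go_positions)  # True
```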

In many countries such as South Korea and China, Go is considered a national game, like football and basketball are in the US, and these countries have many professional Go players, who train from the age of 6.* If these players show promise in the game, they switch from a normal school to a special Go school where they play and study Go for 12 hours a day, 7 days a week. They live with their Go Master and other prodigy children. So, it is a serious matter for a computer program to challenge these players.

There are around 2,000 professional Go players in the world, along with roughly 40 million casual players. In an interview at the Google Campus,* Hassabis shared, “We knew that Go was much harder than chess.” He describes how he initially thought of building AlphaGo the same way that Deep Blue was built, that is, by building a system that did a brute-force search with a handcrafted set of rules.

But he realized that this technique would never work since the game is very contextual, meaning there was no way to create a program that could determine how one part of the board would affect other parts because of the huge number of possible states. At the same time, he realized that if he created an algorithm to beat the Master players in Go, then he would probably have made a significant advance in AI, more meaningful than Deep Blue.

OpenAI

OpenAI, a research institute started by tech billionaires including Elon Musk, Sam Altman, Peter Thiel, and Reid Hoffman, had a new challenge. OpenAI was started to advance artificial intelligence and prevent the technology from turning dangerous. In 2016, led by CTO and co-founder Greg Brockman, the team started looking on Twitch, an internet gaming community, for the most popular games that had an interface a software program could interact with and that ran on the Linux operating system. They selected Dota 2. The idea was to build a software program that could beat the best human players as well as the best teams in the world—a lofty goal. By a wide margin, Dota 2 would be the hardest game yet at which AI would beat humans.

At first glance, Dota 2 may look less cerebral than Go and chess because of its orcs and creatures. The game, however, is much more difficult than those strategy games because the playing field and the number of possible moves are much greater. Not only that, but there are around 110 heroes, each with at least four abilities. The average number of possible moves per turn in chess is around 20, and in Go, about 200. Dota 2 has on average 1,000 possible moves for every eighth of a second, and the average match lasts around 45 minutes. Dota 2 is no joke.
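A bit of back-of-the-envelope arithmetic with the figures quoted above shows why the comparison is so lopsided (these are the rough averages from the paragraph, not exact game statistics).

```python
decisions_per_second = 8      # Dota 2 presents a choice roughly every eighth of a second
match_minutes = 45

decisions_per_match = match_minutes * 60 * decisions_per_second
options_seen_per_match = decisions_per_match * 1000   # ~1,000 options per decision

print(decisions_per_match)       # 21,600 decision points in an average match
print(options_seen_per_match)    # ~21.6 million move options encountered along the way
```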

Figure: OpenAI team gathered at its headquarters in San Francisco, California.

Software 2.0

Computer languages of the future will be more concerned with goals and less with procedures specified by the programmer.
Marvin Minsky*

The Software 2.0 paradigm started with the development of the first deep learning language, TensorFlow.

When creating deep neural networks, programmers write only a few lines of code and make the neural network learn the program itself instead of writing code by hand. This new coding style is called Software 2.0 because before the rise of deep learning, most of the AI programs were handwritten in programming languages such as Python and JavaScript. Humans wrote each line of code and also determined all of the program’s rules. In contrast, with the emergence of deep learning techniques, the new way that coders program these systems is by either specifying the goal of the program, like winning a Go game, or by providing data with the proper input and output, such as feeding the algorithm pictures of cats with “cat” tags and other pictures without cats with “not cat” tags.

Based on the goal, the programmer writes the skeleton of the program by defining the neural network architecture(s). Then, the programmer uses computer hardware to find the exact neural network that best performs the specified goal, feeding it data to train the neural network. With traditional software, Software 1.0, most programs are stored as programmer-written code that can span thousands to billions of lines. For example, Google’s entire codebase has around two billion lines of code.* In the new paradigm, the program is stored in memory as the weights of the neural architecture, with only a few lines of code written by programmers. There are disadvantages to this new approach: software developers sometimes have to choose between software that they understand but that only works 90% of the time and a program that performs well in 99% of cases but is not as well understood.
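A toy contrast may help make the paradigm concrete. In the Software 1.0 style, a human writes the decision rule directly; in the Software 2.0 style, the programmer writes only a skeleton and a goal, and the actual “program”, here a single threshold standing in for millions of neural network weights, is found from labeled data. Everything in this sketch (the task, the data, the update rule) is invented for illustration.

```python
import numpy as np

# Software 1.0: a human writes the rule explicitly.
def is_spam_v1(message):
    return "win a prize" in message.lower()

# Software 2.0 (toy version): only the skeleton is written by hand;
# the threshold itself is learned from labeled examples.
examples = np.array([1.0, 3.0, 8.0, 9.0])   # e.g., counts of suspicious words
labels   = np.array([0,   0,   1,   1  ])   # human-provided tags

threshold = 0.0
for _ in range(100):
    predictions = (examples > threshold).astype(int)
    # Nudge the threshold in whichever direction reduces the mistakes.
    threshold += 0.1 * np.sum(predictions - labels)

print(threshold)                                         # learned, not hand-coded
print((np.array([2.0, 7.0]) > threshold).astype(int))    # classifies new inputs: [0 1]
```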

Dueling Neural Networks

What I cannot create, I do not understand.
Richard Feynman*

One of the past decade’s most important developments in deep learning is generative adversarial networks, developed by Ian Goodfellow. This new technology can also be used for ill intent, such as for generating fake images and videos.

Generative Adversarial Networks

Data Is the New Oil

Data is the new oil. It’s valuable, but if unrefined it cannot really be used. It has to be changed into gas, plastic, chemicals, etc. to create a valuable entity that drives profitable activity; so must data be broken down, analyzed for it to have value.
Clive Humby*

Data is key to deep learning, and one of the most important datasets, ImageNet, created by Fei-Fei Li, marked the beginning of the field. It is used for training neural networks as well as for benchmarking them against one another.

Deep learning is a revolutionary field, but for it to work as intended, it requires data.* The term for these large datasets and the work around them is Big Data, which refers to the abundance of digital data. Data is as important for deep learning algorithms as the architecture of the network itself, the software. Acquiring and cleaning the data is one of the most valuable aspects of the work. Without data, neural networks cannot learn.*

Most of the time, researchers can use the data given to them directly, but in many cases the data is not clean. That means it cannot be used directly to train the neural network because it contains data that is not representative of what the algorithm is meant to classify. Perhaps it contains bad data, like black-and-white images when you want to create a neural network to locate cats in colored images. Another problem is data that is not appropriate for the task. For example, if you want to classify images of people as male or female, there might be pictures without the needed tag, or pictures whose labels are corrupted with misspellings like “ale” instead of “male.” Even though these might seem like crazy scenarios, they happen all the time. Handling these problems and cleaning up the data is known as data wrangling.
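A toy sketch of what data wrangling can look like in practice, using the kinds of problems described above; the records, the known-misspellings table, and the cleanup rules are all invented for illustration.

```python
# Invented raw records: some labels are misspelled, some are missing entirely.
records = [
    {"image": "img_001.jpg", "label": "male"},
    {"image": "img_002.jpg", "label": "ale"},      # misspelled label
    {"image": "img_003.jpg", "label": None},       # missing tag
    {"image": "img_004.jpg", "label": "female"},
]

corrections = {"ale": "male", "emale": "female"}   # known misspellings
valid_labels = {"male", "female"}

clean = []
for record in records:
    label = corrections.get(record["label"], record["label"])
    if label in valid_labels:                      # drop records we cannot fix
        clean.append({"image": record["image"], "label": label})

print(len(clean), [r["label"] for r in clean])     # 3 usable examples remain
```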

Data Privacy

Arguing that you don’t care about the right to privacy because you have nothing to hide is no different than saying you don’t care about free speech because you have nothing to say.
Edward Snowden*

In 2014, Tim received a request on his Facebook app to take a personality quiz called “This Is Your Digital Life.” He was offered a small amount of money and had to answer just a few questions about his personality. Tim was very excited to get money for this seemingly easy and harmless task, so he quickly accepted the invitation. Within five minutes of receiving the request on his phone, Tim logged in to the app, giving the company in charge of the quiz access to his public profile and all his friends’ public profiles. He completed the quiz within 10 minutes. A UK research facility collected the data, and Tim continued with his mundane day as a law clerk in one of the biggest law firms in Pennsylvania.

What Tim did not know was that he had just shared his and all of his friends’ data with Cambridge Analytica. This company used Tim’s data and data from 50 million other people to target political ads based on their psychographic profiles. Unlike demographic information such as age, income, and gender, psychographic profiles explain why people make purchases. The use of personal data on such a scale made this scheme, which Tim passively participated in, one of the biggest political scandals to date.

Data has become an essential part of deep learning algorithms.* Large corporations now store a lot of data from their users because that has become such a central part of building better models for their algorithms and, in turn, improving their products. For Google, it is essential to have users’ data in order to develop the best search algorithms. But as companies gather and keep all this data, it becomes a liability for them. If a person has pictures on their phone that they do not want anyone else to see, and if Apple or Google collects those pictures, their employees could have access to them and abuse the data. Even if these companies protect against their own employees having access to the data, a privacy breach could occur, allowing hackers access to people’s private data.

Deep Learning Processors

What’s not fully realized is that Moore’s Law was not the first but the fifth paradigm to bring exponential growth to computers. We had electromechanical calculators, relay-based computers, vacuum tubes, and transistors. Every time one paradigm ran out of steam, another took over.
Ray Kurzweil*

The power of deep learning depends on the design as well as the training of the underlying neural networks. In recent years, neural networks have become complicated, often containing hundreds of layers. This imposes higher computational requirements, causing an investment boom in new microprocessors specialized for the field. The industry leader, Nvidia, earns at least $600M per quarter from selling its processors to data centers and companies like Amazon, Facebook, and Microsoft.

Facebook alone runs convolutional neural networks at least 2 billion times each day. That is just one example of how intensive the computing needs are for these processors. Tesla cars with Autopilot enabled also need enough computational power to run their software. To do so, Tesla cars need a super processor: a graphics processing unit (GPU).

Most of the computers that people use today, including smartphones, contain a central processing unit (CPU). This is the part of the machine where all the computation happens, that is, where the brain of the computer resides. A GPU is similar to a CPU because it is also made of electronic circuits, but it specializes in accelerating the creation of images in video games and other applications. But the same operations that games need in order to appear on people’s screens are also used to train neural networks and run them in the real world. So, GPUs are much more efficient for these tasks than CPUs. Because most of the computation needed is in the form of neural networks, Tesla added GPUs to its cars so that they can drive themselves through the streets.
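As a minimal sketch (using PyTorch, which is one common choice; the tensor sizes are arbitrary), the same matrix multiplication at the heart of a neural network can run on a CPU or, when one is available, on a GPU, simply by moving the tensors to that device.

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"   # fall back to CPU if no GPU

# Matrix multiplication is the core workload of a neural network layer.
activations = torch.randn(4096, 4096)
weights = torch.randn(4096, 4096)

result_cpu = activations @ weights                           # runs on the CPU
result_dev = activations.to(device) @ weights.to(device)     # runs on the GPU if present

print(device, result_cpu.shape, result_dev.shape)
```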

What Can AI Learn from Animal Brains? (44 minutes, 38 links)

To understand the development of AI algorithms and the way they improve as they learn over time, it is really important to take a step back from artificial intelligence systems and focus on how brains function. As it turns out, AI systems work much the same way as human brains. So, I must first explain, at least at a high level, how animal, and specifically human, brains work.

The most important piece is the theory of Learning to Learn, which describes how the brain learns how to learn new topics. The human brain learns and encodes information during sleep, or at least in restful awake moments—converting short-term memory to long-term—through hippocampal, visual cortex, and amygdala replay. The brain also uses the same circuitry that decodes the information stored in the hippocampus, visual cortex, and amygdala to predict the future. Much like the human brain, AI systems decode previous information to create future scenes, like what may happen next in a video.

Biological Inspirations for Deep Learning

The truth is that a human is just a brief algorithm—10,247 lines. They are deceptively simple. Once you know them, their behavior is quite predictable.
Westworld, season two finale (2018)

Animal Brains

We humans have long deemed ourselves the pinnacle of cognitive ability among animals. Something unique about our brains lets us question our existence and, at the same time, believe that we are kings of the animal kingdom. We build roads, the internet, and even spaceships, and we sit at the top of the food chain, so our brains must have something that no other brain has.* Our cognitive abilities allow us to stay at the top even though we are not the fastest, strongest, or largest animals.

The human brain is special, but sheer mass is not the reason humans have more cognitive ability than other animals. If it were, elephants would be at the top of the pyramid because of their larger brains. But not all brains are the same.* Primates have a clear advantage over other mammals: evolution found an economical way to add neurons to their brains without the massive increase in average cell size seen in other animals.

Learning to Learn Theory

The algorithms that are winning at games like Go and Dota 2 use reinforcement learning to train multilayer neural networks. The animal brain also uses reinforcement learning, via dopamine. But research shows that the human brain performs two types of reinforcement learning on top of each other. This newer theory describes a technique called Learning to Learn, also known as meta-reinforcement learning, which may benefit machine learning algorithms.

The Standard Model of Learning

Dopamine is the neurotransmitter associated with the feeling of desire and motivation.

Neurons release dopamine when a reward for an action is surprising. For example, when a dog receives a treat unexpectedly, dopamine is released in the brain. The reverse is also true. When the brain predicts a reward and the animal does not get it, then a dip in dopamine occurs. Simply put, dopamine serves as a way for the brain to learn through reinforcement learning.
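That description of dopamine maps onto the reward prediction error used in reinforcement learning algorithms. Here is a minimal sketch, with an invented learning rate and reward sequence: the “surprise” shrinks as the reward becomes predicted, and turns negative (a dopamine dip) when an expected reward fails to arrive.

```python
expected_reward = 0.0
learning_rate = 0.5   # how quickly the prediction adapts (invented value)

for actual_reward in [1.0, 1.0, 1.0, 0.0]:        # a treat arrives, then stops
    surprise = actual_reward - expected_reward     # reward prediction error:
                                                   # "burst" if positive, "dip" if negative
    expected_reward += learning_rate * surprise    # the prediction is updated
    print(round(surprise, 2), round(expected_reward, 2))
```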

Sleeping and Learning

And it is only after seeing man as his unconscious, revealed by his dreams, presents him to us that we shall understand him fully. For as Freud said to Putnam: ‘We are what we are because we have been what we have been.’
André Tridon*

It is well known that memory formation and learning are related to sleep. A rested mind is more capable of learning new concepts, and the human brain does not keep as detailed a memory of yesterday as it does of the present day. In this chapter, I detail how the brain learns during sleep, describing hippocampal replay, visual cortex replay, and amygdala replay. These are all mechanisms the brain uses to convert short-term memory into long-term memory, encoding the knowledge gathered throughout the day. The same circuitry responsible for decoding information from the neocortex to support memory recall is also used for imagination, which indicates that the brain does not record every moment and instead spends time learning during the night.

Complementary Learning Systems Theory

In 1995, the complementary learning systems (CLS) theory was introduced,* an idea that had its roots in earlier work by David Marr.* According to this theory, learning requires two complementary systems. The first, found in the hippocampus, allows for rapid learning of the specifics of individual items and experiences. The second, located in the neocortex, serves as the basis for the gradual acquisition of structured knowledge about the environment.

Predicting the Future

John Anderton: Why’d you catch that?

Danny Witwer: Because it was going to fall.

John Anderton: You’re certain?

Danny Witwer: Yeah.

John Anderton: But it didn’t fall. You caught it. The fact that you prevented it from happening doesn’t change the fact that it was going to happen.

Minority Report (2002)

Predictive Coding

A study in 1981 by James McClelland and David Rumelhart at the University of California, San Diego, showed that the human brain processes information by generating a hypothesis about the input and then updating it as the brain receives data from its senses.* They demonstrated that people identify letters more easily when the letters appear in the context of words than when they appear without that semantic setting.

Robotics (an hour, 70 links)

Robots in the Industry

The Master created humans first as the lowest type, most easily formed. Gradually, he replaced them by robots, the next higher step, and finally he created me to take the place of the last humans.
Isaac Asimov, I, Robot*

When people talk about artificial intelligence, they often think of mobile robots. But in computer science, AI is the field focused on developing the brain, not only of such robots but of any computer system built to achieve certain goals. These robots do not use any of the deep learning models that we talked about previously; instead, they run encoded, handwritten software.

Boston Dynamics

In Florida, a small crowd watches a competition in which robots race to reach a specific goal, completing different objectives faster and more precisely than their opponents. One robot looks at a door with its sensors—cameras and lasers—to decide what to do next in order to open it. Using its robotic arm, it slowly pushes the door open and moves to the other side. The team responsible for the robot cheers as it completes one of the tasks.

Machine Learning and Robotics

Will robots inherit the earth? Yes, but they will be our children.
Marvin Minsky*

Robots cannot yet operate reliably in people’s homes and labs, nor can they manipulate and pick up objects dependably.* If we are to have robots in our day-to-day lives, it is essential to create robots that can robustly detect, localize, handle, and move objects, changing the environment the way we want. We need robots that can pick up coffee cups, serve us, peel bananas, or simply walk around without tripping or hitting walls. The problem is that human surroundings are complex, and robots today cannot pick up most objects. If you ask a robot to pick up something it has never seen before, it almost always fails. To accomplish that goal, it must solve several difficult problems.

For example, if you ask a robot to pick up a ruler, the robot first needs to determine which object is the ruler and where it is, and finally calculate where to put its gripper based on that information. Or, if you want a robot to pick up a cup of coffee, it must decide where to grasp the cup. If the gripper picks up the cup by the bottom edge, the cup might tip over and spill. So, robots need to grasp different objects at different points.
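Those steps, identify the object, localize it, and choose where to place the gripper, can be sketched as a pipeline. Everything below is a self-contained toy: the detections and the grasp rule are invented, whereas a real robot would rely on learned perception models and grasp planners.

```python
# Invented perception output: what the robot's cameras and lasers "detected."
detections = [
    {"name": "cup",   "center": (0.42, 0.10, 0.05), "height": 0.09},
    {"name": "ruler", "center": (0.30, -0.20, 0.01), "height": 0.02},
]

def plan_pick(object_name, detections):
    # Step 1: decide which detected object is the one we were asked to pick up.
    matches = [d for d in detections if d["name"] == object_name]
    if not matches:
        return None                      # an object the robot has never seen
    target = matches[0]

    # Steps 2 and 3: use the object's estimated position and approach from above
    # its top so the grasp is less likely to tip it over.
    x, y, z = target["center"]
    return {"approach_from": (x, y, z + target["height"]), "grasp_at": (x, y, z)}

print(plan_pick("ruler", detections))
```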

Amazon Picking Challenge

Self-Driving Cars

Whether you think you can, or you think you can’t, you’re right.*

Many companies currently build technology for autonomous cars, and others are just entering the field. The three most transformative players in the space are Tesla, Google’s Waymo, and George Hotz’s Comma.ai, and each tackles the problem with a very different approach. In some ways, self-driving cars are robots that require solving both hardware and software problems. A self-driving car needs to identify its surrounding environment with cameras, radar, or other instruments. Its software needs to understand what is around the car, know its physical location, and plan the next steps it needs to take to reach its destination.

Tesla

Tesla, founded by Martin Eberhard and Marc Tarpenning in 2003, is known as the Apple of cars because of its revolutionary car design and outside-the-box thinking.* Tesla develops its cars from first principles, from the air conditioning system that uses perpendicular vents to how it forms its chassis and suspension. With this innovation and work, the Tesla Model 3 is the safest car in the world,* followed by the Tesla Model S and Model X.* But Tesla is not only innovative with its hardware; it also invests heavily in its Autopilot technology.

The Future of Self-Driving Cars

But the worries about operatorless elevators were quite similar to the concerns we hear today about driverless cars.
Garry Kasparov*

There is a lot of talk about self-driving cars and how they will one day replace truck drivers, and some say the transition will happen all of a sudden. In fact, the change will happen in steps, starting in a few locations and then expanding rapidly. For example, Tesla keeps releasing software updates that make its cars more and more autonomous. It first released software that let its cars drive on highways, and with a later update, its cars were able to merge into traffic and change lanes. Waymo is now testing its self-driving cars in downtown Phoenix, and it would not be surprising if Waymo starts rolling out its service in other areas.

The industry talks about five levels of autonomy to compare different cars’ systems and their capabilities. Level 0 is when the driver is completely in control, and Level 5 is when the car drives itself and does not need driver assistance. The other levels range between these two. I am not going to delve into the details of each level because the boundaries are blurry at best, and I prefer to use other ways to compare them, such as disengagements per mile. However they are measured, as the systems improve, autonomous cars can prevent humans from making mistakes and help avoid accidents caused by other drivers.

Self-driving cars will reduce and nearly eliminate car accidents, which kill around 1 million people globally every year. The number of annual deaths per billion miles has already decreased due to safety features and improvements in vehicle design, like the introduction of seatbelts and airbags. Cars are now more likely to absorb the impact of an accident, reducing injuries to passengers.

Industry Applications (an hour, 40 links)

Voice Assistants (Siri)

Samantha: You know what’s interesting? I used to be so worried about not having a body, but now I truly love it. I’m growing in a way I couldn’t if I had a physical form. I mean, I’m not limited—I can be anywhere and everywhere simultaneously. I’m not tethered to time and space in a way that I would be if I was stuck in a body that’s inevitably going to die.
Her (2013)

Voice assistants are becoming more and more ubiquitous. Smart speakers became popular after Amazon introduced the Echo, a speaker with Alexa as its voice assistant, in November 2014. By 2017, tens of millions of smart speakers were in people’s homes, and every single one of them had voice as its main interface. Voice assistants are not only present in smart speakers but also in every smartphone. The best-known one, Siri, powers the iPhone.

Apple’s Siri, the first voice assistant deployed to the mass market, made its debut at a media event on October 4, 2011. Phil Schiller, Apple’s Senior Vice President of Marketing, introduced Siri by showing off its capabilities, such as checking the weather forecast, setting an alarm, and checking the stock market. That event was actually Siri’s second introduction. When first launched, Siri was a standalone app created by Siri, Inc. Apple bought the technology for $200M in April 2010.*

Siri was an offshoot from an SRI International Artificial Intelligence Center project. In 2003, DARPA led a 5-year, 500-person effort to build a virtual assistant, investing a total of $150M. At that time, CALO, Cognitive Assistant that Learns and Organizes, was the largest AI program in history. Adam Cheyer was a researcher at SRI for the CALO project, assembling all the pieces produced by the different research labs into a single assistant. The version Cheyer helped build, also called CALO at the time, was still in the prototype stage and was not ready for installation on people’s devices. Cheyer was in a privileged position to understand how CALO worked from end to end.


AI in Medicine

I will use treatment to help the sick according to my ability and judgment, but never with a view to injury and wrongdoing.
Hippocratic Oath

Sebastian Thrun

Sebastian Thrun, who grew up in Germany, was internationally known for his work on robotic systems and his contributions to probabilistic techniques. In 2005, Thrun, then a Stanford professor, led the team that won the DARPA Grand Challenge for self-driving cars. During a sabbatical, he joined Google, where he co-developed Google Street View and started Google X. He also co-founded Udacity, an online for-profit school, and is the current CEO of Kitty Hawk Corporation. But in 2017, he was drawn to medicine. He was 49, the same age his mother, Kristin (Grüner) Thrun, was when she died. Kristin, like most cancer patients, had no symptoms at first. By the time she went to the doctor, her cancer had already metastasized, spreading to her other organs. Thrun became obsessed with the idea of detecting cancer in its earliest stages, when doctors can still remove it.

Early efforts to automate diagnosis encoded textbook knowledge as rules. In the case of electrocardiograms (ECG or EKG), which display the heart’s electrical activity as lines on a screen, these programs tried to identify the characteristic waveforms associated with conditions like atrial fibrillation or a blocked blood vessel. The technique followed the path of the domain-specific expert systems of the 1980s.
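To give a flavor of how those early rule-based systems worked, here is a toy sketch of a hand-coded rule for one rhythm disturbance. The features and thresholds are simplified assumptions for illustration, not a real clinical ruleset.

```python
# Illustrative sketch of an early rule-based approach to ECG interpretation.
# The features (RR-interval variability, presence of P waves) and the
# threshold are simplified assumptions, not clinical guidance.

def classify_rhythm(rr_intervals_ms, p_wave_present):
    """Toy rule: highly irregular RR intervals with absent P waves
    suggest atrial fibrillation; otherwise report sinus rhythm."""
    mean_rr = sum(rr_intervals_ms) / len(rr_intervals_ms)
    variability = max(rr_intervals_ms) - min(rr_intervals_ms)
    if not p_wave_present and variability > 0.2 * mean_rr:
        return "possible atrial fibrillation"
    return "sinus rhythm (no rule fired)"

print(classify_rhythm([810, 620, 1040, 700], p_wave_present=False))
```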


AI and Space

Imagination will often carry us to worlds that never were. But without it we go nowhere.
Carl Sagan*

Crop Prediction

To analyze these images, however, the data needs proper classification. To solve this problem, Descartes Labs, a data analysis company, stitches daily satellite images into a live map of the planet’s surface and automatically edits out any cloud cover.* With these cleaned-up images, it uses deep learning to predict, more accurately than the government, the percentage of US farmland that will grow soy or corn.* Since corn production is a business worth around $67B, this information is extremely valuable to economic forecasters at agribusiness companies who need to predict seasonal output. The US Department of Agriculture (USDA) provided the prior benchmark for land use, but its estimates relied on year-old data by the time they were released.
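As a rough illustration of the classification step, and not Descartes Labs’ actual pipeline, the sketch below shows how cleaned-up satellite tiles might be classified as corn, soy, or other land use with a small convolutional network. The tile size, band count, and class names are assumptions.

```python
# A minimal sketch (not Descartes Labs' pipeline) of classifying satellite
# tiles as corn, soy, or other land use with a small convolutional network.
import torch
import torch.nn as nn

class CropClassifier(nn.Module):
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# One batch of 64x64 RGB tiles; a real system would likely use
# multispectral bands stitched from daily imagery.
tiles = torch.randn(8, 3, 64, 64)
logits = CropClassifier()(tiles)
print(logits.shape)  # torch.Size([8, 3]) -> one score per class per tile
```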


AI in E-Commerce

If you double the number of experiments you do per year, you’re going to double your inventiveness.
Jeff Bezos

Stitch Fix, an online clothing retailer started in 2011, offers a glimpse of how some businesses already use machine learning to work more effectively. The company’s success in e-commerce shows how AI and people can work together, with each side focused on its unique strengths.

Stitch Fix believes its algorithms are the future of designing garments,* and it already uses that technology to bring products to market. Customers create an account on Stitch Fix’s website and answer detailed questions about their size, style preferences, and preferred colors.* The company then sends a clothing shipment to their home and records which items customers keep and which they return.

The significant difference from a traditional e-commerce company is that customers do not choose the items that are shipped. Like a conventional retailer, Stitch Fix buys and holds its own inventory so that it has a wide stock. Using the stored customer information, a personal stylist selects five items to ship to the customer, who tries them on at home, keeps them for a few days, and returns any unwanted items. The company’s entire objective is to excel at personal styling and send people things they love. It seems to be succeeding: Stitch Fix has more than 2 million active customers and a market capitalization of more than $2B.
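A hedged sketch of the kind of scoring an algorithm might perform before the human stylist makes the final pick is shown below. The features, weights, and item data are hypothetical, not Stitch Fix’s actual model.

```python
# Hypothetical sketch: rank inventory for a client using stated preferences
# and past returns, then hand the top items to a human stylist.

def score_item(item, client):
    score = 0.0
    if item["color"] in client["preferred_colors"]:
        score += 1.0
    if item["style"] in client["preferred_styles"]:
        score += 1.5
    if item["style"] in client["previously_returned_styles"]:
        score -= 2.0  # penalize styles the client has sent back before
    return score

inventory = [
    {"sku": "A1", "color": "navy", "style": "casual"},
    {"sku": "B2", "color": "red", "style": "formal"},
    {"sku": "C3", "color": "navy", "style": "formal"},
]
client = {
    "preferred_colors": {"navy"},
    "preferred_styles": {"casual"},
    "previously_returned_styles": {"formal"},
}

ranked = sorted(inventory, key=lambda i: score_item(i, client), reverse=True)
print([i["sku"] for i in ranked])  # the stylist reviews the top-ranked items
```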


AI and the Law

Justice cannot be for one side alone, but must be for both.
Eleanor Roosevelt

Law seems like a field unlikely to make use of artificial intelligence, but that is far from the truth. In this chapter, I want to show how machine learning affects even the most unlikely of fields. Judicata builds tools that help attorneys draft legal briefs and improve their odds of winning their cases.

Judges presiding over court cases should rule fairly in disputes between plaintiffs and defendants. A California study, however, showed that judges have a pro-prosecutor bias, meaning they typically rule in favor of the party bringing the case. But no two people are alike, and that is, of course, true of judges.* While the bias holds in aggregate, it is not necessarily true of any individual judge.

For example, consider how differently California Justices Paul Halvonik and Charles Poochigian rule. Justice Halvonik was six times more likely than Justice Poochigian to decide in favor of an appellant. That might be surprising, but it becomes more understandable given their backgrounds.
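Comparisons like this come down to simple rates computed over case records. The sketch below shows the arithmetic with made-up judges and made-up data; it is not drawn from the actual California study.

```python
# Sketch: comparing judges by how often they rule for the appellant.
# The case records below are invented for illustration only.
from collections import defaultdict

cases = [
    {"judge": "Judge A", "ruled_for_appellant": True},
    {"judge": "Judge A", "ruled_for_appellant": False},
    {"judge": "Judge A", "ruled_for_appellant": True},
    {"judge": "Judge B", "ruled_for_appellant": False},
    {"judge": "Judge B", "ruled_for_appellant": False},
    {"judge": "Judge B", "ruled_for_appellant": True},
]

totals = defaultdict(lambda: [0, 0])  # judge -> [rulings for appellant, total]
for case in cases:
    counts = totals[case["judge"]]
    counts[0] += case["ruled_for_appellant"]
    counts[1] += 1

for judge, (wins, total) in totals.items():
    print(f"{judge}: ruled for the appellant in {wins / total:.0%} of cases")
```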


AI and Real Estate

It amazes me how people are often more willing to act based on little or no data than to use data that is a challenge to assemble.
Robert Shiller*

Homes are the most expensive possession the average American owns, but they are also the hardest to trade.* It is difficult to sell a house quickly when someone needs the cash, and machine learning could help solve that. Keith Rabois, a tech veteran who served in executive roles at PayPal, LinkedIn, and Square, founded Opendoor to address this problem. His premise is that hundreds of thousands of Americans value the certainty of a sale over obtaining the highest price. Opendoor charges a higher fee than a traditional real estate agent, but in return it makes offers on houses extremely quickly. Opendoor’s motto is, “Get an offer on your home with the press of a button.”

Opendoor buys a home, fixes the issues flagged by inspectors, and tries to sell it for a small profit.* To succeed, Opendoor must price the homes it buys quickly and accurately. If it prices a home too low, sellers have no incentive to sell through the platform; if it prices a home too high, it may lose money on the resale. Opendoor needs to find the fair market price for each home.

Real estate is the largest asset class in the United States, worth around $25 trillion, so Opendoor’s potential is huge. But for Opendoor to make an appropriate offer, it must use all the information it has about a house to determine the right price. The company focuses on the middle of the market and does not make offers on distressed or luxury houses, because their prices are not predictable.
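As a rough illustration of the pricing problem, and not Opendoor’s actual model, the sketch below fits a plain least-squares regression to a handful of made-up comparable sales and uses it to estimate an offer price.

```python
# Minimal sketch (not Opendoor's model): estimate a home's fair market price
# from a few features using ordinary least squares. Data is invented.
import numpy as np

# Features: square feet, bedrooms, age in years. Target: sale price.
X = np.array([
    [1500, 3, 20],
    [2000, 4, 15],
    [1200, 2, 40],
    [1800, 3, 10],
], dtype=float)
y = np.array([310_000, 420_000, 250_000, 390_000], dtype=float)

# Add an intercept column and solve the least-squares problem.
X_design = np.hstack([X, np.ones((len(X), 1))])
coefs, *_ = np.linalg.lstsq(X_design, y, rcond=None)

new_home = np.array([1600, 3, 25, 1], dtype=float)
print(f"Estimated offer price: ${new_home @ coefs:,.0f}")
```

A production system would of course use far richer features (location, recent comparable sales, renovation state) and a more flexible model, but the core task is the same: predict a fair price with tight error bars.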


Risks and Impact of AI (42 minutes, 39 links)

Surveillance

If you want to keep a secret, you must also hide it from yourself.
George Orwell, 1984*

On a Saturday evening, Ehmet headed out, as on any other day, to the grocery store near his home. But on the way, he was stopped by a police patrol. Using an app with face recognition, the police identified him as one of the few thousand Uyghurs living in the region. Ehmet was sent to one of the “re-education camps,” along with more than a million other Uyghur Muslims.*

Even though this seems like a dystopian future, in which people are identified by an omnipresent surveillance state, it is already happening under the Chinese Communist Party. George Orwell’s novel 1984 couldn’t be closer to reality. This particular scenario is unlikely to play out in other countries, but in this chapter I go over some companies that are using the power of AI to surveil citizens elsewhere.

One of the companies turning this dystopian version of the future into reality is Clearview AI. Police departments across the United States have been using Clearview AI’s facial recognition tool to identify citizens. In fact, the main immigration enforcement agency in the US, the Department of Justice, and retailers including Best Buy and Macy’s are among the thousands of government entities and companies around the world that have used Clearview AI’s database of billions of photos to identify people.*
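Conceptually, systems like this map each face image to an embedding vector and match a query against the most similar vector in a database. The sketch below illustrates only that matching step, with random vectors standing in for the output of a real face-recognition model; it is not Clearview AI’s implementation.

```python
# Conceptual sketch of face matching: each face image is mapped to an
# embedding vector, and a query is matched to the most similar vector in
# a database. Random vectors stand in for a real model's embeddings.
import numpy as np

rng = np.random.default_rng(0)
database = {f"person_{i}": rng.normal(size=128) for i in range(1000)}

def best_match(query, db):
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    # Return the database entry with the highest cosine similarity.
    return max(((name, cosine(query, vec)) for name, vec in db.items()),
               key=lambda pair: pair[1])

# A noisy version of a stored embedding plays the role of a new photo.
query_embedding = database["person_42"] + rng.normal(scale=0.1, size=128)
print(best_match(query_embedding, database))
```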


Recommendation Algorithms

We’ve all been there. You start watching a video on YouTube. Before you realize it, it’s 1 a.m., and you are watching videos about Greek philosophers and their influence on the modern world.

This is known as the “YouTube rabbit hole”: watching YouTube videos nonstop. Most of these videos are served up by YouTube’s recommendation algorithm, which decides what to suggest based on your watch history and those of other users.

TikTok, Netflix, Twitter, Facebook, Instagram, Snapchat, and every other service that presents content all rely on an underlying algorithm that determines and distributes the material shown to users. This is what drives YouTube’s rabbit hole.

For TikTok, a Wall Street Journal investigation found that the app needs only one key piece of information to figure out what a user wants: the total amount of time a user lingers on a piece of content.* Through that powerful signal, TikTok learns people’s interests and can drive users into rabbit holes of content. YouTube’s and TikTok’s algorithms are both engagement-based, but according to Guillaume Chaslot, TikTok’s algorithm learns much faster.*
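The core idea of an engagement-based ranker can be sketched in a few lines: score candidate videos by how long the user has lingered on similar content, then surface the highest-scoring ones. This is only an illustration of the principle, not TikTok’s or YouTube’s actual algorithm, and the topics and numbers are made up.

```python
# Illustrative engagement-based ranker: use dwell time on past content as
# the signal for what to recommend next. Data is invented.
from collections import defaultdict

watch_history = [
    {"topic": "dogs", "seconds_watched": 28},
    {"topic": "dogs", "seconds_watched": 35},
    {"topic": "philosophy", "seconds_watched": 4},
]

# Average dwell time per topic becomes the topic's score.
dwell = defaultdict(list)
for event in watch_history:
    dwell[event["topic"]].append(event["seconds_watched"])
topic_score = {topic: sum(s) / len(s) for topic, s in dwell.items()}

candidates = [{"id": 1, "topic": "dogs"}, {"id": 2, "topic": "philosophy"}]
ranked = sorted(candidates,
                key=lambda v: topic_score.get(v["topic"], 0.0),
                reverse=True)
print([v["id"] for v in ranked])  # content the user lingered on ranks first
```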


Interpretability of Neural Networks

By the help of microscopes, there is nothing so small, as to escape our inquiry; hence there is a new visible world discovered to the understanding.
Robert Hooke*

Mary spent the whole morning on TikTok watching videos about how lamps work. Her feed is mostly that, plus cute videos of dogs. Like many people who use TikTok or other social media apps, she has never noticed that most of what appears in her feed is determined by algorithms that decide what she watches next.

That isn’t a problem when she is watching videos of dogs. But one day she was browsing around and started watching depressing videos, and the algorithm simply reinforced that.

A neural network is behind the videos she watches, recommending 70% of them.* And the algorithm is mostly a black box: even the humans who built the neural network don’t know its exact inner workings. What they mostly know is that using these algorithms increases engagement. But is that enough?
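Researchers do have partial tools for peeking inside the black box. One common technique is gradient-based saliency: measure how much each input feature moves the model’s output. The sketch below applies the idea to a tiny, made-up scoring model; the feature names are hypothetical and the model is untrained.

```python
# Gradient-based saliency on a toy scoring model: larger absolute gradients
# indicate inputs the model is most sensitive to. Purely illustrative.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))

# Hypothetical features for one video: [dwell_time, likes, shares, recency]
features = torch.tensor([[0.8, 0.1, 0.05, 0.3]], requires_grad=True)
score = model(features)   # the model's recommendation score
score.backward()          # propagate gradients back to the inputs

print(features.grad.abs())  # per-feature sensitivity of the score
```

Techniques like this explain individual predictions only locally; they do not make the overall system transparent, which is why interpretability remains an open research problem.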


Economic Impact of AI

We wanted flying cars, instead we got 140 characters.
Peter Thiel*

Jennifer woke up early on Monday morning. Before leaving for work, she received a personalized message distilling all the information she needed for the day. She walked out of her house and hailed an autonomous car that was waiting for her. As the car drove from her home to her office, Jennifer’s AI assistant briefed her on her day and helped her make some decisions. She arrived at her office in just under ten minutes, traveling through an underground tunnel.

That future seems far off, but it might be closer than we think. Deep learning might make most of these predictions reality. It is starting to change the economy and could have a significant economic impact. ARK Invest, an investment firm based in New York, predicts that in 20 years deep learning will create a $17 trillion market opportunity.* That is bigger than the economic impact the internet had.

Even though those projections are long-range, deep learning is already having an impact on the world and is revolutionizing fields within artificial intelligence. In the past seven years, machine learning models for vision and language have been completely overtaken by deep learning models, which outperform the “old” artificial intelligence techniques. And every few months, a bigger, newer model beats the previous state-of-the-art results.*


Artificial General Intelligence

Detective Del Spooner: Human beings have dreams. Even dogs have dreams, but not you, you are just a machine. An imitation of life. Can a robot write a symphony? Can a robot turn a … canvas into a beautiful masterpiece?

Robot Sonny: Can you?

I, Robot (2004)

Using the past as an indicator of the future, this final chapter addresses how artificial intelligence systems might evolve into artificial general intelligence. It explains the difference between knowing that and knowing how. And because the brain is a good indicator of how AI systems may evolve, it is worth noting that across the animal kingdom, intelligence correlates strongly with the number of pallial and cortical neurons. The same has held for deep learning: the more neurons a multilayer neural network has, the better it tends to perform. While artificial neural networks still have a few orders of magnitude fewer neurons than the human brain, we are marching toward that milestone. Finally, we’ll talk about the Singularity, a point at which artificial intelligence might become hard to control.

The Past as an Indicator of the Future


Emerging Developments in Artificial Intelligence (16 minutes, 67 links)

A Landscape of Top Artificial Intelligence Teams

This chapter reflects recent developments and was last updated in October of 2022.

This landscape of top artificial intelligence teams tracks the most prominent teams developing products and tools in each of several areas. Following these teams gives a good sense of where future development is likely to happen.

2022 has seen remarkable tools developed by top teams, including some, especially DALL-E 2, so impressive that they have gone viral. This builds on high-profile tools released to the public over the past two years, including GPT-3 in 2020 and GitHub Copilot (based on GPT-3) in June 2021, which is now used by almost 2 million developers.



Notable Developments in AI in 2022

In 2022, neural networks have continued to grow in size, even though there hasn’t been a development as game-changing as the Transformer was in 2017. In 2021, Microsoft released a 135 billion parameter model, and at the end of that year Nvidia and Microsoft together released an even larger one, Megatron-Turing NLG, with 530 billion parameters. There is no reason to believe the growth will stop any time soon. We haven’t seen a model of headline-grabbing size as of July 2022, but that could change by the end of the year.
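A back-of-the-envelope way to see where these parameter counts come from is the common rule of thumb that each transformer layer contributes roughly 12 × d_model² parameters (attention plus feed-forward, ignoring embeddings). The configurations below are illustrative; only the GPT-3-scale layer count and width follow the published paper, and the larger configuration is hypothetical.

```python
# Rough rule of thumb: a transformer layer has about 12 * d_model^2
# parameters, so total parameters scale with depth * width^2.

def approx_params(n_layers: int, d_model: int) -> int:
    return 12 * n_layers * d_model ** 2

for name, layers, width in [
    ("GPT-3-scale (96 layers, d_model 12288)", 96, 12288),
    ("hypothetical larger model", 128, 20480),
]:
    print(f"{name}: ~{approx_params(layers, width) / 1e9:.0f}B parameters")
```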

[Image credit: Generated with DALL-E 2.]

2022 has seen remarkable tools developed by top teams. In April 2022, DALL-E 2, an AI system that creates realistic images and art from natural-language descriptions, was released and took the world by storm. This builds on high-profile tools released to the public over the past two years, including GPT-3 in 2020 and GitHub Copilot (based on GPT-3) in June 2021, which is now used by almost 2 million developers.


Further Readings in AI (21 links)

The resources here are a small subset of the full set of resources available on the web, selected for their breadth, notability, and depth on specific issues.

Machine Learning Blogs

Books

Technical Books

Courses and Programs

Year in Review Reports

Appendix (15 minutes, 74 links)

Short Biographies of Key Figures in AI

This list is far from complete but offers information about some of the people mentioned in this book.

Greg Brockman is the co-founder and CTO of OpenAI. He was formerly the CTO of Stripe.

Rodney Brooks is the co-founder of iRobot and Rethink Robotics, and former director of the MIT Computer Science and Artificial Intelligence Laboratory.

Adam Cheyer is an AI researcher and the developer of Siri. He has co-founded several startups, including Siri, Inc.


Notable AI Tools and Efforts

Amazon’s Alexa is the virtual assistant that powers the company’s Echo smart speakers.

AlphaFold is a machine learning system developed by DeepMind that predicts a protein’s 3D structure from its amino acid sequence alone.

The DARPA Grand Challenge was a competition for autonomous vehicles funded by the Defense Advanced Research Projects Agency.

Google X is a research and development organization inside Google focused on what it calls moonshots.
