Artificial General Intelligence


Detective Del Spooner: Human beings have dreams. Even dogs have dreams, but not you, you are just a machine. An imitation of life. Can a robot write a symphony? Can a robot turn a … canvas into a beautiful masterpiece?

Robot Sonny: Can you?

I, Robot (2004)

Using the past as an indicator of the future, this final chapter addresses how artificial intelligence systems might evolve into artificial general intelligence. It explains the difference between “knowing that” and “knowing how.” Given that the brain is a good indicator of how AI systems evolve, it is worth noting that across the animal kingdom intelligence correlates strongly with the number of pallial and cortical neurons. The same has been true for deep learning: the more neurons a multilayer neural network has, the better it tends to perform. While artificial neural networks still have a few orders of magnitude fewer neurons than the human brain, we are marching toward that milestone. Finally, we’ll talk about the Singularity, a point at which artificial intelligence might become hard to control.

The Past as an Indicator of the Future

Arthur C. Clarke famously observed that “any sufficiently advanced technology is indistinguishable from magic.”* If you were to go back to the 1800s, it would be unthinkable to imagine cars traveling at 100 mph on the highway or handheld devices that connect us with people on the other side of the planet.

Since the Dartmouth Conference and the creation of the artificial intelligence field, great strides have been made. The original dream many had for computers, to perform any intellectual task better than humans, is much closer than before, though some argue that this may never happen or is still in the very distant future.

The past, however, may be a good indication of the future. Software is better than the best humans at playing checkers, chess, Jeopardy!, Atari, Go, and Dota 2. It already performs text translation for a few languages better than the average human. Today, these systems improve the lives of millions of people in areas like transportation, e-commerce, music, media, and many others. Adaptive systems help people drive on highways and streets, preventing accidents.

At first, it may be hard to imagine computer systems performing what once were cerebral tasks like designing and engineering systems or writing a legal brief. But at one time, it was also hard to imagine systems triumphing over the best humans at chess. People claim that robots do not have imagination or will never accomplish tasks that only humans can perform. Others say that computers cannot explain why something happens and will never be able to.

Knowing That versus Knowing How

The problem is that for many tasks humans cannot explain why or how something happens, even though they might know how to do it. A child knows that a bicycle has two wheels, its tires have air, and you ride it by pushing the pedals forward in circles. But this information is different from knowing how to ride a bicycle. The first kind of knowledge is usually called “knowing that,” while the skill of riding the bike is “knowing how.”

These two kinds of knowledge are independent of each other, but they can help each other. Knowing that you need to push the pedals forward can help a person ride a bike. But “knowing how” cannot be reduced to “knowing that”: knowing how to ride a bike does not imply that you understand how it works. In the same way, computers and humans perform many tasks that require knowing how without knowing that. Many rules govern the pronunciation of English words; people know how to pronounce the words, but they cannot explain why. A person with access to a Chinese dictionary may actually come to understand Chinese with the help of that resource. Computers, in the same way, perform tasks without being able to explain the details. Asking why computers do what they do might be the same as asking why someone swings a bat the way they do when playing baseball.


It is hard to predict how everything will play out in the future and what will come next. But looking at the advances of the different subfields of artificial intelligence and their performance over time may be the best predictor of what might be possible in the future. Given that, let’s look at the advances in the different fields of AI and how they stack up. From natural language processing and speech recognition to computer vision, systems are improving steadily, with no signs of stopping.

Figure: AI advances on different benchmarks over time.* First image: Top-5 accuracy asks whether the correct label is among the classifier’s top five predictions; it improved from around 85% in 2013 to almost 99% in 2020. Second image: CityScapes Challenge. Cityscapes is a large-scale dataset of diverse urban street scenes across 50 different cities recorded during the daytime; the task requires an algorithm to predict the per-pixel semantic labeling of an image. Third image: SuperGLUE benchmark. SuperGLUE is a single-metric benchmark that evaluates the performance of a model on a series of language understanding tasks over established datasets. Fourth image: Visual Question Answering (VQA) Challenge accuracy. The VQA challenge, introduced in 2015, requires machines to provide an accurate natural language answer to a natural language question about an image, based on a public dataset.
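As a concrete illustration of the first metric, here is a minimal sketch (not from the book) of how top-5 accuracy could be computed for a batch of classifier outputs; the array shapes and values are hypothetical.

```python
import numpy as np

def top5_accuracy(scores, labels):
    """Fraction of examples whose true label is among the five
    highest-scoring classes. `scores` is (n_examples, n_classes),
    `labels` is (n_examples,)."""
    # Indices of the five highest-scoring classes per example.
    top5 = np.argsort(scores, axis=1)[:, -5:]
    hits = np.any(top5 == labels[:, None], axis=1)
    return hits.mean()

# Hypothetical batch: 3 examples, 10 classes.
rng = np.random.default_rng(0)
scores = rng.random((3, 10))
labels = np.array([2, 7, 4])
print(f"top-5 accuracy: {top5_accuracy(scores, labels):.2f}")
```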

Data Growth in AI

Algorithms can only solve problems like driving a car or winning a game of Go if they have the right data. For these algorithms to exist, it is essential to have properly labeled data. In research circles, significant efforts are underway to reduce the size of the datasets needed to train effective models, but even with this work, there is still a need for large datasets.

Figure: Dataset size comparison with the number of seconds that a human lives from birth to college graduation.

Datasets are already comparable in size to what humans capture during their lifetime. The figure above compares, on a logarithmic scale, the size of the datasets used to train computers to the number of seconds from a human’s birth to college graduation. One of the datasets in the figure is Fei-Fei Li’s ImageNet, described earlier in this book. The last dataset in the picture is used by Google to create its model for reading street numbers on the façades of houses and buildings.

An entire field of research studies how machine learning models can be combined with humans who fix and refine labeled data. But it is clear that the amount of data we can capture in our datasets is already comparable to what humans take in over a lifetime.
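A back-of-the-envelope sketch of the comparison in the figure above, assuming roughly 22 years from birth to college graduation and the commonly cited figure of about 14 million images for the full ImageNet collection; both numbers are approximations used only to illustrate the order-of-magnitude comparison on a log scale.

```python
import math

# Rough assumptions for illustration only.
years_to_graduation = 22
seconds_lived = years_to_graduation * 365.25 * 24 * 3600  # ~6.9e8 seconds
imagenet_images = 14_000_000                               # commonly cited ImageNet size

print(f"seconds from birth to graduation: {seconds_lived:.1e} "
      f"(10^{math.log10(seconds_lived):.1f})")
print(f"ImageNet images:                  {imagenet_images:.1e} "
      f"(10^{math.log10(imagenet_images):.1f})")
```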

Computation Growth

But machine learning software does not depend solely on data. Another piece of the puzzle is computational power. One way of comparing the computational power of neural networks deployed today with that of human brains is to look at the size of the neural networks in these models. The figure below compares them on a logarithmic scale.

Figure: Comparison of the model size of neural networks and the number of neurons and connections of animals and humans.

The neural networks shown in this figure were used to detect and transcribe images for self-driving cars. The figure below compares the scale of both the number of neurons and the number of connections per neuron; both are important factors for neural network performance. Artificial neural networks are still orders of magnitude away from the size of the human brain, but they are starting to become competitive with some mammals.*
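For a rough sense of the gap described above, the sketch below compares commonly cited estimates for the human brain (about 86 billion neurons and on the order of 100 trillion synapses) with a hypothetical artificial network of 10 billion parameters; all figures are approximations chosen only to show orders of magnitude, not measurements from the book.

```python
import math

# Commonly cited rough estimates (orders of magnitude only).
human_neurons = 8.6e10   # ~86 billion neurons
human_synapses = 1.0e14  # on the order of 100 trillion synaptic connections

# Hypothetical large artificial network, counted by trainable parameters,
# which play a role loosely analogous to connections.
artificial_params = 1.0e10

gap = math.log10(human_synapses / artificial_params)
print(f"human synapses / network parameters ~ 10^{gap:.0f}")
```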

Figure: 122 years of Moore’s Law: calculations per second per constant dollar. The y-axis is logarithmic, so a straight line represents exponential growth; each y-axis tick marks a 100x increase. The graph covers a 10,000,000,000,000,000,000x improvement in computation per dollar.

The price of computation has declined over time, and the computing power available to society has increased: the amount of computing power one can get for every dollar spent has been growing exponentially. In fact, in an earlier section, I showed that the amount of compute used in the largest AI training runs has been doubling every 3.5 months. Some argue that computing power cannot continue this trend because of physical constraints, but past trends do not support this theory. Money and resources in the area have increased over time as well, and more and more people work in the field, developing better algorithms and hardware. And we know that the computational power of the human brain is physically achievable, because the brain itself operates within the constraints of physics.
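To make that growth rate concrete: if the compute used in the largest training runs doubles every 3.5 months, it grows by a factor of 2^(12/3.5), roughly 10x, each year. A quick sketch of the arithmetic:

```python
doubling_period_months = 3.5
doublings_per_year = 12 / doubling_period_months  # ~3.4 doublings per year
growth_per_year = 2 ** doublings_per_year         # ~10.8x per year

print(f"doublings per year: {doublings_per_year:.1f}")
print(f"yearly growth factor: {growth_per_year:.1f}x")
print(f"growth over 5 years: {2 ** (doublings_per_year * 5):.0f}x")
```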

The Singularity

With more computing power and improved software, it may be that AI systems eventually surpass human intelligence. The point at which these systems become smarter and more capable than humans is called the Singularity. For every task, these systems will be better than humans. When computers outperform humans, some people argue that they can then continue to become better and better. In other words, if we make them as smart as us, there is no reason to believe that they cannot make themselves better, in a spiral of ever-improving machines, resulting in superintelligence.

Some predict that the Singularity will come as soon as 2045. Nick Bostrom and Vincent C. Müller surveyed hundreds of AI experts at a series of conferences, asking by what year the Singularity (or human-level machine intelligence) would happen with a 10% chance, 50% chance, and 90% chance. The responses were the following:

  • Median optimistic year (10% likelihood): 2022

  • Median realistic year (50% likelihood): 2040

  • Median pessimistic year (90% likelihood): 2075*

In other words, the surveyed experts believe there is a good chance that machines will be as smart as humans within roughly 20 years.

This is a controversial topic: some experts, including John Carmack, believe that we will start to see signs of AGI within a decade,* while others, such as Kevin Kelly, argue that the belief in an “Artificial General Intelligence” is a myth.* Either way, if the pessimistic timetable for achieving it is any indication, we will know by the end of the century whether it is starting to materialize.

The Singularity and Society

If the Singularity is as near as many predict, and it results in artificial general intelligence that surpasses human intelligence, the consequences are unimaginable for society as we know it. Imagine that dogs had created humans. Would dogs understand the consequences of creating such creatures? I doubt it. In the same way, humans are unlikely to understand this level of intelligence, even if we created it in the first place.

Optimists argue that with the arrival of the Singularity, solutions to problems previously deemed impossible will become obvious, and this superintelligence will solve many societal problems, such as mortality. Pessimists, however, say that as soon as we achieve superintelligence, human society as we know it will become extinct; there would be no reason for humans to exist. The truth is that it is hard to predict what will come after the creation of such technology, though many agree that it is near.
