What Can AI Learn from Animal Brains?


To understand the development of AI algorithms and the way they improve as they learn over time, it is important to step back from artificial intelligence systems and focus on how brains function. As it turns out, AI systems work in much the same way as human brains. So, I must first explain, at least at a high level, how animal, and specifically human, brains work.

The most important piece is the theory of Learning to Learn, which describes how the brain learns the technique to learn new topics. The human brain learns and encodes information during sleep, or at least in restful awake moments, converting short-term memory to long-term memory through hippocampal, visual cortex, and amygdala replay. The brain also uses the same circuitry that decodes the information stored in the hippocampus, visual cortex, and amygdala to predict the future. Much like the human brain, AI systems decode previous information to create future scenes, such as what may happen next in a video.

Biological Inspirations for Deep Learning

The truth is that a human is just a brief algorithm—10,247 lines. They are deceptively simple. Once you know them, their behavior is quite predictable.

—Westworld, season two finale (2018)

Animal Brains

We humans have long deemed ourselves the pinnacle of cognitive ability among animals. Something unique about our brains makes us able to question our existence and, at the same time, believe that we are kings of the animal kingdom. We build roads, the internet, and even spaceships, and we are at the top of the food chain, so our brains must have something that no other brain has.* Our cognitive abilities allow us to stay at the top even though we are not the fastest, strongest, or largest animals.

The human brain is special, but sheer mass is not the reason humans have more cognitive ability than other animals. If it were, elephants would sit at the top of the pyramid because of their larger brains. But not all brains are the same.* Primates have a clear advantage over other mammals: evolution found an economical way to add neurons to their brains without the massive increase in average cell size seen in other animals.

Primates also have another advantage over other mammals: the ability to use complex tools. Humans aren’t the only primates who can do this; chimpanzees, for example, use sprigs for many tasks, from scratching their backs to digging for termites. Tool use isn’t restricted to primates, either. Crows also use sticks to extract prey from their hiding spaces, and they can even improve their sticks, such as by carving a hook at the end of a twig to better reach their prey.*

Other animals have cognitive abilities similar to humans’. Chimpanzees and gorillas, which cannot vocalize for anatomical reasons, learn to communicate with sign language. A chimpanzee in Japan named Ai (meaning “love” in Japanese) plays games on a computer better than the average human.* With her extensive chimpanzee research, Jane Goodall showed that they could understand other chimpanzees’ and humans’ mental states and deceive others based on their behavior.* Even birds seem to know other individuals’ mental states. For example, magpies fetch food in the presence of onlookers and then move it to a secret location as soon as the onlookers are gone. Birds can also learn language. Alex,* an African gray parrot owned by psychologist Irene Pepperberg,* learned to produce words that symbolize objects.* Chimpanzees, elephants,* dolphins,* and even magpies* appear to recognize themselves in the mirror.*

So, what makes humans smarter than chimpanzees that are, in turn, smarter than elephants? Professor Suzana Herculano-Houzel’s research showed that the number of neurons in the mammalian cerebral cortex and the bird pallium has a high correlation with their cognitive capability.*


The cerebral cortex and bird pallium are the outermost parts of the brain and more evolutionarily advanced than other brain regions. The more neurons in these specific regions, regardless of brain or body size, the better a species performs at the same task. For example, birds pack a large number of neurons into their brains compared to mammals, even though their brains are smaller.

Not only that, but the size of the neocortex, the largest and most modern part of the cortex, is also a constraint on group size in animals, that is, on the number of social relationships they can maintain.

Robin Dunbar suggests that a cognitive limit exists for the number of people you can maintain a relationship with. His work led to what is called Dunbar’s number, and he posits that the answer is 150 based on the size of the human brain and the number of cortical neurons.*

Figure: Animals’ cognitive ability and the respective number of cortical and pallial neurons in their brains.* This image shows a clear correlation between cognitive performance and the number of cortical or pallial neurons. The performance percentage on the y-axis is the completion rate of a simple task.

There is a simple answer for how our brains can be similar to others in their evolutionary constraints and yet advanced enough to create language and develop tools as complex as ours: being primates bestows upon humans the advantage of a large number of neurons packed into a small cerebral cortex.*

The Connection between Brains and Artificial Intelligence

What do animal brains have to do with AI systems and humans? First, the cognitive capacity of some animals suggests that we are not as unique as some think. While some argue that certain capabilities belong only to humans, they have been proven wrong time and again. Second, the correlation between cognitive ability and the number of neurons might indicate that neural networks will perform better as the number of artificial neurons increases. These artificial neural networks, of course, need the correct data and the right type of software, as discussed in the previous section.

While the number of neurons affects animals’ cognitive ability, their brains have many more neurons than most deep learning models. Today’s neural networks have around 1 million neurons, about the same number as a honeybee. It might not be a coincidence that neural networks perform better at different tasks as they increase in size. As they approach the number of neurons in a human brain, around 100 billion, it could be that they will perform all human tasks with the same capability.

Application of Animal Brains to AI

A clear correlation exists between the cognitive capacity of animals and the number of pallial or cortical neurons. Therefore, it follows that the number of neurons in an artificial neural network should affect the performance of these models since neural networks were designed based on how neurons interact with each other.

A neural network can represent any kind of program, and neural networks with more neurons and layers can represent more complex programs. Because more complex problems require more complicated programs, larger neural networks are the solution. As machine learning evolved toward more capable algorithms, neural networks needed more layers and neurons. But with that advancement came the problem of figuring out the weights of all these neurons.
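
To make the scale concrete, here is a minimal sketch (the layer sizes are hypothetical, chosen only for illustration) that counts the weights in fully connected networks of increasing depth and width. Every added layer or neuron multiplies the number of weights that must be determined.

```python
# Count the weights (parameters) in fully connected networks of
# increasing size. The layer sizes are illustrative, not from the text.
def num_weights(layer_sizes):
    # Each pair of adjacent layers is fully connected:
    # weights = neurons_in * neurons_out (biases omitted for simplicity).
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

for sizes in [[784, 10], [784, 128, 10], [784, 512, 512, 10]]:
    print(sizes, "->", num_weights(sizes), "weights")
```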

With 1,000 connections, at least 2^1000 configurations are possible, assuming that each weight can be either 0 or 1. Since the weights are usually real numbers between 0 and 1, the number of configurations is actually infinite. So, figuring out the weights seemed intractable, but backpropagation solved this problem. The technique helped researchers determine the weights by adjusting them on the last layer first and then working backward through the layers until reaching the first one. This made the problem tractable and allowed developers and researchers to use multilayer neural networks for different algorithms. Notably, this work was conducted independently from research in neuroscience.
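
As a rough sketch of the idea (a two-layer network on a toy problem; the sizes, data, and learning rate are made up for illustration), backpropagation computes the error at the output layer and propagates it backward to update every weight:

```python
import numpy as np

# A minimal two-layer network trained with backpropagation on XOR.
# Sizes, learning rate, and iteration count are illustrative only.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for step in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: compute the error at the last layer first...
    d_out = (out - y) * out * (1 - out)
    # ...then propagate it down to the earlier layer.
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient step on each layer's weights and biases.
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # typically converges toward [[0], [1], [1], [0]]
```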

Years of research suggest that something like the backpropagation technique used in computer science also happens in the brain. Neuroscientists have models indicating that the human brain could employ a similar method for learning, performing much the same learning algorithm that researchers created to update their artificial neural networks. Short pulses of dopamine* are released onto many dendrites, driving synaptic learning in the human brain and signaling a prediction error, the failure to predict what was expected. In deep learning, backpropagation works by updating the neural network weights based on the prediction error of the model’s output compared to the expected output. Both the brain and artificial neural networks use these errors to update their synapses or weights. Research on the brain and in computer science seem to converge. It is as if mechanical engineers developed airplanes only to discover that birds use the same technique. In this case, computer scientists developed artificial neural networks that demonstrate how brains work.

Human brains* and AI algorithms developed separately and over time, but they still perform in similar ways. It might not be a coincidence that billions of years of evolution led to better-performing algorithms as well as improved techniques to learn and interact with the environment. Therefore, it is valuable to understand how the brain operates and compare it to the software that computer scientists develop.

Learning to Learn Theory

The algorithms that are winning at games like Go or Dota 2 use reinforcement learning to train multilayer neural networks. The animal brain also uses reinforcement learning, via dopamine. But research shows that the human brain performs two types of reinforcement learning on top of each other. This theory describes a technique called Learning to Learn, also known as meta-reinforcement learning, which may benefit machine learning algorithms.

The Standard Model of Learning

Dopamine is the neurotransmitter associated with the feeling of desire and motivation.

Neurons release dopamine when a reward for an action is surprising. For example, when a dog receives a treat unexpectedly, dopamine is released in the brain. The reverse is also true. When the brain predicts a reward and the animal does not get it, then a dip in dopamine occurs. Simply put, dopamine serves as a way for the brain to learn through reinforcement learning.

These dopamine fluctuations are what scientists call signaling a reward prediction error. There is a burst of dopamine when things are better than expected and a dip when things are worse. Dozens of studies show that the burst of dopamine, when it reaches the striatum, adjusts the strength of synaptic connections. How does that drive behavior? When you execute an action in a particular situation, if an unexpected reward occurs, then you strengthen the association between that situation and action. Intuition says that if you do something and are pleasantly surprised, then you should do that thing more often in the future. And if you do something and are unpleasantly surprised, then you should do it less often.

Inside people’s brains, dopamine levels increase when there is a difference between the predicted reward and the actual reward for a task. But dopamine also rises when the brain predicts that a reward is about to happen. So, it tricks people’s brains into doing work even if the reward does not come. For example, when you train a dog to come when you blow a whistle, dopamine is what drives the synaptic change. You teach your dog to come when called by rewarding him, like giving him a treat, when he does what you want. After a while, you no longer need to reward the dog because his brain releases dopamine in expectation of the reward (treat). Dopamine is part of what is known as model-free reinforcement learning.
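
A minimal sketch in the style of temporal-difference learning captures this (the numbers and setup are illustrative, not from the text): early in training, the prediction error is large when the treat arrives; with repetition, the value estimate, and with it the dopamine-like signal, migrates to the whistle, the earliest reliable predictor of the reward.

```python
# Temporal-difference sketch of the whistle-then-treat example.
# Moments: 0 = whistle sounds, 1 = dog arrives and the treat follows.
# All numbers are illustrative.
V = [0.0, 0.0]          # learned value of each moment
alpha, reward = 0.1, 1.0

for trial in range(200):
    # Prediction error at the treat: reward minus what was predicted.
    delta_treat = reward - V[1]
    V[1] += alpha * delta_treat
    # Prediction error at the whistle: the next moment's value minus
    # the current prediction (no reward arrives at the whistle itself).
    delta_whistle = V[1] - V[0]
    V[0] += alpha * delta_whistle
    if trial in (0, 50, 199):
        print(f"trial {trial}: V={[round(v, 2) for v in V]}, "
              f"error at treat={delta_treat:.2f}")
```

As training progresses, the error at the treat shrinks toward zero while the whistle’s value rises, which is why the trained dog’s dopamine responds to the whistle rather than the treat.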

Model-Free versus Model-Based Reinforcement Learning

But that is not the only system in people’s brains benefiting from reinforcement learning. The prefrontal cortex, the part of the cortex at the very front of the brain, also uses reinforcement learning rewards in its activity dynamics.

Together with the rest of the brain, the prefrontal cortex has two circuits that create what is called Learning to Learn. Model-free learning occurs via dopamine, and model-based learning acts on top of that circuit in the prefrontal cortex.

One way to describe the difference between model-free and model-based reinforcement learning is that the latter uses a model of the task, meaning an internal representation of task contingencies: if I do this, then this will happen; if I do that, then the other thing will happen. Model-free learning does not do that. It only responds to the strengthening or weakening of stimulus-response associations. Model-free learning does not know what is going to happen next and simply reacts to what is happening now. That is why a dog can learn, with dopamine, to come when called even if you stop giving it treats. It has no model of the event but has learned that the stimulus, like whistling, is a good thing.

Inferred Value

If the dopamine learning mechanism is model-free, then it should not reflect something called inferred value. The following experiment helps explain this concept.

A monkey looks at a central fixed point and sees targets to the left and right. If the monkey moves its eyes to a target, it either receives a reward or not, depending on which side it was asked to look toward. Sometimes the left is rewarded and other times the right. These reward contingencies remain the same for a while and then reverse, in a way not signaled to the animal except by the rewards themselves. So, let’s say that the left is rewarded all the time and the right is not, but suddenly, the right is rewarded all the time, and that continues for a while.

Initially, the monkey receives a reward for looking left, and the brain immediately releases dopamine. If the monkey looks right, dopamine is not released because the monkey will not get a reward. But at the moment of reversal, the monkey expects a reward for looking left and receives nothing. When the rewarded target changes to the right, the monkey receives a reward for that new task. Once the animal understands the new task, looking to the left no longer triggers the dopamine response, because the animal has experience and evidence of the reversal. The target that used to excite the dopamine system now disappoints it, and the target that did not previously stimulate the dopamine system now does. The animal has experienced a stimulus-reward association, and the dopamine system adjusts to it.

But consider a different scenario. The animal was rewarded for looking left, but in the next trial, the right is the target. It has no experience with the right in this new regime. Yet if the right was not rewarded before and the animal infers that the right should now be rewarded, then dopamine is released. Since the monkey knows that there has been a reversal, it can tell that the other target should now be rewarded. This is a model-based inference, since it draws on knowledge of the task, and that presumed reward is called inferred value.
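
A small sketch makes the contrast concrete (the values and update rule are illustrative): a model-free learner must re-experience rewards on the right before its estimate moves, while a model-based learner that knows the task structure can infer the right’s value the moment it detects the reversal.

```python
# Reversal-task sketch: model-free vs. model-based value estimates.
# All values are illustrative.
alpha = 0.3
mf = {"left": 1.0, "right": 0.0}   # model-free values learned pre-reversal

# Model-free: after the reversal it still prefers "left" until
# repeated prediction errors slowly overwrite the old values.
mf["left"] += alpha * (0.0 - mf["left"])    # looked left, got nothing
print("model-free after one post-reversal trial:", mf)

# Model-based: it knows the task structure ("exactly one side pays,
# and the sides flip at a reversal"), so observing that left now fails
# lets it infer the right's value with no direct experience of it.
pre_reversal = {"left": 1.0, "right": 0.0}
reversal_detected = True
if reversal_detected:
    mb = {"left": pre_reversal["right"], "right": pre_reversal["left"]}
print("model-based after inferring the reversal:", mb)
```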

Two-Step Tasks

Given the concept of inferred value, it is possible to determine which parts of the brain learn via model-free and which via model-based reinforcement learning. The dopamine response clearly does not show inferred value, because it is not based on a model of the task, but the brain still performs model-based reinforcement learning in its prefrontal cortex circuitry. The technique used to show this is called a two-step task and works as follows.

Let’s say you play a game where you drive a car. The only two actions are turning left or right. If you turn left, then you die and lose the game. But if you turn right, then you continue playing the game.

If the driver plays the game again, a model-free system says, “If I turned right and did not die last time, then I should turn right again. Turning right is ‘good.’” A model-based system understands the task at hand and will turn right when the road bends right and left when the road bends left. Therefore, someone who learns to drive using a model-free reinforcement learning algorithm will never learn to drive these roads properly, but a driver who learns with a model-based algorithm will do just fine.
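
A toy simulation of that game (entirely illustrative) shows the difference: the model-free driver repeats whatever action was rewarded last, ignoring the road, while the model-based driver consults its model of the road ahead.

```python
import random

# Toy driving game: each step the road bends left or right; matching
# the bend is rewarded, mismatching "crashes." Purely illustrative.
def model_free_driver(last_rewarded_action):
    # Repeats whatever worked last time, blind to the current road.
    return last_rewarded_action

def model_based_driver(road_bend):
    # Uses its model of the task: steer the way the road bends.
    return road_bend

last = "right"
mf_score = mb_score = 0
for step in range(1000):
    road = random.choice(["left", "right"])
    if model_free_driver(last) == road:
        mf_score += 1
        last = road            # that action was just rewarded
    mb_score += model_based_driver(road) == road

print(f"model-free matches the road ~{mf_score / 10:.0f}% of the time")  # ~50%
print(f"model-based matches the road {mb_score / 10:.0f}% of the time")  # 100%
```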

This simple task gives us a way of teasing apart model-free and model-based action selection. If you plot the behavior at the beginning of each trial, you can show whether the system is running a model-free or a model-based reinforcement learning algorithm. The two-step task shows the fingerprint of the algorithm.

Studies with humans and even animals, including rats, that measure brain signals during the two-step task show that the prefrontal cortex presents the model-based pattern. In 2015, Nathaniel Daw demonstrated this behavior in the human prefrontal circuit via brain signals and the two-step task.* This implies that the prefrontal circuit learns from its own autonomous reinforcement learning procedure, distinct from the reinforcement learning algorithm used to set the neural network weights: the dopamine-based, model-free one.

Model-Free and Model-Based Learning, Working Together

These two types of circuits work together to form what is known as Learning to Learn. Dopamine works on top of the prefrontal cortex as part of a model-free reinforcement learning system to update the circuit connections, while the prefrontal cortex circuit learns via model-based reinforcement learning.

The type of reinforcement learning implemented in the prefrontal circuit can be executed even when the synaptic weights are frozen. That means the neural circuitry in the brain does not need to update synaptic weights to implement this form of reinforcement learning; it runs in the circuit’s activity instead.

This is different from the reinforcement learning algorithm implemented by dopamine, which trains the synaptic weights in the prefrontal cortex. In the prefrontal circuit, the task structure sculpts the learned reinforcement learning algorithm, which means that each task produces a different model-based reinforcement learning algorithm running in the prefrontal circuit.

In a different type of experiment, monkeys have two targets, A and B, in front of them, and the reward probabilities of the two targets change over time.* The monkey looks at the center point between the targets, then chooses to stare at one target or the other and receives a reward after a minute or so. This experiment showed that the brain has the two types of reinforcement learning algorithms working together: a model-free, dopamine-based one on top of a model-based algorithm.

With that in mind, Matthew Botvinick designed a deep learning neural network with the same characteristics as the brains of monkeys, that is, one that learned to learn.

The results showed that if you train a deep learning system on this task using a reinforcement learning algorithm and no additional assumptions, the network itself instantiates a separate reinforcement learning algorithm; that is, the network imitates what was found in the brain.*
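
A minimal sketch of that kind of setup (the architecture and hyperparameters are my own illustrative choices, not the paper’s): a recurrent policy is trained across many bandit episodes and receives its previous action and reward as input. After training, adaptation to each new bandit happens in the recurrent state alone, even with the weights frozen, which is the separate, inner reinforcement learning algorithm.

```python
import torch
import torch.nn as nn

# Meta-RL sketch: an LSTM policy for two-armed bandits that sees its
# previous action and reward. It is trained across episodes that each
# use new random reward probabilities. Sizes are illustrative.
policy = nn.LSTMCell(3, 48)    # input: [prev_arm0, prev_arm1, prev_reward]
head = nn.Linear(48, 2)        # action logits
opt = torch.optim.Adam(
    list(policy.parameters()) + list(head.parameters()), lr=1e-3)

def episode(train=True, steps=50):
    p = torch.rand(1).item()               # new bandit: P(arm 0 pays)
    h = c = torch.zeros(1, 48)
    x = torch.zeros(1, 3)
    logps, rewards = [], []
    for _ in range(steps):
        h, c = policy(x, (h, c))
        dist = torch.distributions.Categorical(logits=head(h))
        a = dist.sample()
        r = float(torch.rand(1).item() < (p if a.item() == 0 else 1 - p))
        logps.append(dist.log_prob(a))
        rewards.append(r)
        x = torch.zeros(1, 3)
        x[0, a.item()] = 1.0               # feed back the previous action...
        x[0, 2] = r                        # ...and the previous reward
    if train:
        # REINFORCE: reinforce each action by its reward relative to
        # the episode's mean reward (a simple baseline).
        ret = torch.tensor(rewards)
        loss = -(torch.stack(logps).squeeze() * (ret - ret.mean())).sum()
        opt.zero_grad(); loss.backward(); opt.step()
    return sum(rewards) / steps

for _ in range(2000):                      # outer loop: trains the weights
    episode(train=True)
# Inner loop: with the weights now fixed, the hidden state alone adapts.
print(sum(episode(train=False) for _ in range(100)) / 100)
```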

Sleeping and Learning

And it is only after seeing man as his unconscious, revealed by his dreams, presents him to us that we shall understand him fully. For as Freud said to Putnam: ‘We are what we are because we have been what we have been.’

—André Tridon*

It is a well-known fact that memory formation and learning are related to sleep. A rested mind is more capable of learning concepts, and the human brain does not keep as detailed a memory of yesterday as it keeps of the present day. In this chapter, I detail how the brain learns during sleep, describing hippocampal replay, visual cortex replay, and amygdala replay. These are all mechanisms the brain uses to convert short-term memory into long-term memory, encoding the knowledge gathered throughout the day. The same circuitry responsible for decoding information from the neocortex to support memory recall is also used for imagination, which indicates that the brain does not record every moment and instead spends time learning during the night.

Complementary Learning Systems Theory

In 1995, the complementary learning systems (CLS) theory was introduced,* an idea with roots in earlier work by David Marr.* According to this theory, learning requires two complementary systems. The first, found in the hippocampus, allows for rapid learning of the specifics of individual items and experiences. The second, located in the neocortex, serves as the basis for the gradual acquisition of structured knowledge about the environment.

The neocortex gradually acquires structured knowledge,* and the hippocampus quickly learns the particulars. The fact that bilateral damage to the hippocampus profoundly affects memory for new information but leaves language, general knowledge, and acquired cognitive skills intact supports this theory. Episodic memory, that is, the memory of collections of past personal experiences occurring at a particular time and place, is widely accepted to depend on the hippocampus.

Figure: Hippocampus location inside the human brain.

Hippocampal Replay

The hippocampus is responsible for spatial memory (where am I?), declarative memory (knowing what), explicit memory (recalling last night’s dinner), and recollection (retrieval of additional information about a particular item like the color of your mother’s phone).

Hippocampal replay is the process by which, during sleep or awake rest, the same cells in the hippocampus that were activated during an initial activity are reactivated in the same order, or in the completely reverse order, but at a much faster speed. Hippocampal replay has been shown to have a causal role in memory consolidation.

Howard Eichenbaum and Neal J. Cohen captured this view in 1988 with their suggestion that these hippocampal neurons should be called relational cells rather than the narrower term β€œplace cells.”*

The hippocampus is an essential part of how memories form.* When a human experiences a new situation, the information is encoded and registered in both the hippocampus and cortical regions. Memory is retained in the hippocampus for up to a week after the initial learning. During this stage, the hippocampus teaches the neocortex more and more about the information, a process called hippocampal replay. For example, during the day, a mouse trapped in a labyrinth learns the path to get out. That night, the hippocampus replays the same firing sequence and encodes the spatial information into the neocortex. The next time the mouse is in the same labyrinth, it will know where to go based on the encoded information.
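
Deep reinforcement learning borrows this idea in a mechanism called experience replay: transitions are stored in a buffer and replayed to the network in random batches to consolidate learning, loosely analogous to the hippocampus replaying the day’s experiences to the neocortex. A minimal sketch (the capacity, batch size, and example transitions are arbitrary):

```python
import random
from collections import deque

# Experience replay sketch: store experiences, then "replay" random
# batches to the learner. Capacity and batch size are arbitrary.
buffer = deque(maxlen=10_000)

def store(state, action, reward, next_state):
    buffer.append((state, action, reward, next_state))

def sample_batch(batch_size=32):
    # Replayed experiences keep training the network long after they
    # occurred, instead of being seen once and forgotten.
    return random.sample(buffer, min(batch_size, len(buffer)))

# Usage: store transitions as the agent explores the labyrinth...
store("maze_entry", "go_left", 0.0, "corridor")
store("corridor", "go_right", 1.0, "exit")
# ...then periodically replay them to update the model.
for state, action, reward, next_state in sample_batch():
    pass  # the network update step would go here
```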

In this theory, the hippocampus, where synapses change quickly, is in charge of storing memories temporarily, whereas neocortical synapses change slowly over time. Lesions to the hippocampus and associated structures in animals produce deficits in spatial working memory and a failure to recognize familiar environments. Hence, consolidation may be an active process by which new memory traces are selected and incorporated into the existing corpus of knowledge at variable rates and with differential success according to their content.

Visual Cortex Replay

The visual cortex presents the same kind of replay and acts in synchrony with the hippocampus.* Experiments show that temporally structured replay occurs in the visual cortex and hippocampus in an organized way, in units called frames. The multicell firing sequences evoked by awake experience replay during these frames in both regions. Not only that, but replay events in the sensory cortex and hippocampus are coordinated to reflect the same experience.

Amygdala Replay

Frightening awake rats reactivates their brain’s fear center, the amygdala, when they next go to sleep.* In 2017, scientists at New York University (NYU), György Buzsáki and Gabrielle Girardeau, demonstrated this by placing rats in a maze and giving them an unpleasant but harmless experience, such as a puff of air.* From then on, the rats feared that place. “They slowed down before the location of the air puff, then [ran] super fast away from it.” The team also recorded the activity of amygdala cells, which showed the same pattern of firing as the hippocampus. The rats’ amygdalae became more active when they mentally revisited the fearsome spot.* These events may happen in order to store retained information in a different, lower-level part of the brain as well as in the neocortex, a more evolutionarily advanced part of the brain.

Buzsáki noted that it is unclear whether the rats experienced this as a dream or whether the experience led to nightmares: “We can’t ask them.” He went on to say, “It has been fairly well documented that trauma leads to bad dreams. People are scared to go to sleep.”

Memory Recall versus Memory Formation

When people have new experiences, the memories they form are stored in different parts of the hippocampus and other brain structures. Different areas of the brain store different parts of a memory, like the location where the event happened and the emotions associated with it.*

For a long time, neuroscientists believed that when we recall memories, our brains activate the same hippocampal circuit as when the memories initially formed. But a 2017 study,* conducted by neuroscientists at MIT, showed that recalling a memory requires a detour circuit through the subiculum, which branches off from the original memory circuit.*

“This study addresses one of the most fundamental questions in brain research—namely how episodic memories are formed and retrieved—and provides evidence for an unexpected answer: differential circuits for retrieval and formation,” says Susumu Tonegawa, the Picower Professor of Biology and Neuroscience.*

The study also offers potential insights regarding Alzheimer’s and the subiculum circuit. While the researchers did not specifically study the disease, they found that mice with early-stage Alzheimer’s had difficulty recalling memories although they continued to create new ones.

In 2007, a study published by Demis Hassabis showed that patients with damage to their hippocampus could not imagine themselves in new experiences.* The finding shows that there is a clear link between the constructive process of imagination and episodic memory recall. We’ll discuss that further in the next chapter.

Sleep’s Relation to Deep Learning

All the low-level parts of the brain, including the hippocampus, visual cortex, and amygdala, replay during sleep to encode information. That is why it is easy to remember what you had for lunch on the same day but hard to remember what you ate yesterday. Short-term memories stay in the lower levels until your brain stores them and encodes the knowledge during sleep. The neocortex stores the relevant information in encoded, compact form.

Deep neural networks also serve as a way of encoding information. For example, when a deep neural network classifies an image, it encodes it into the classified objects, because the image contains far more bits of data than a mere tag. An apple can look a thousand different ways, but they are all called apples. Turning short-term memory into long-term memory involves compressing all the information, including visual, tactile, and other sensory material, into compact data. So, someone can say that they ate a juicy apple yesterday but not remember all the details of how the apple looked or tasted.

Memory recall and imagination serve as a way of decoding information from the higher parts of the brain, including the neocortex, into the lower parts of the brain, including the amygdala, visual cortex, and hippocampus. Memory recall and imagination may be only decoding the information that is stored in the neocortex.
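
An autoencoder makes this encode/decode picture concrete (a sketch with illustrative layer sizes): the encoder compresses an input into a small code, the way a rich experience is reduced to a compact memory, and the decoder reconstructs an approximation from that code, the way recall or imagination regenerates a scene.

```python
import torch
import torch.nn as nn

# Autoencoder sketch: compress an "experience" to a small code
# (encoding), then regenerate an approximation of it (decoding).
# Layer sizes are illustrative.
encoder = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 16))
decoder = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 784))

x = torch.rand(1, 784)    # stand-in for a 28x28 sensory input
code = encoder(x)         # 784 numbers compressed down to 16
x_hat = decoder(code)     # reconstruction from the compact code

# Training would minimize the reconstruction error, i.e., how much
# detail is lost in the compressed "memory."
loss = nn.functional.mse_loss(x_hat, x)
print(code.shape, loss.item())
```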

Predicting the Future

John Anderton: Why’d you catch that?

Danny Witwer: Because it was going to fall.

John Anderton: You’re certain?

Danny Witwer: Yeah.

John Anderton: But it didn’t fall. You caught it. The fact that you prevented it from happening doesn’t change the fact that it was going to happen.

—Minority Report (2002)

Predictive Coding

A study in 1981 by James McClelland and David Rumelhart at the University of California, San Diego, showed that the human brain processes information by generating a hypothesis about the input and then updating it as the brain receives data from its senses.* They demonstrated that people identify letters more easily when the letters appear in the context of words than when they appear without that semantic setting.

In 1999, neuroscientists Rajesh Rao and Dana Ballard created a computational model of vision that replicated many well-established receptive field effects.* The paper demonstrated that there could be a generative model of a scene (top-down processing) that received feedback via error signals (how much the visual input varied from the prediction), which in turn led to updating the prediction. This process of maintaining a generative model of the scene is called predictive coding, whereby the brain creates higher-level information and fills in the gaps of what the sensory input generates.
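
A stripped-down sketch of that loop (one hidden cause, one sensory channel, and a made-up generative model, all purely illustrative): the system repeatedly predicts its input, measures the prediction error, and nudges its internal estimate to shrink that error.

```python
# Predictive coding sketch: a top-down estimate predicts the sensory
# input; the bottom-up error signal updates the estimate. The
# generative model here ("prediction = 2 * estimate") is made up.
sensory_input = 6.0
estimate = 0.0              # internal belief about the hidden cause
rate = 0.1

for step in range(30):
    prediction = 2.0 * estimate          # top-down prediction
    error = sensory_input - prediction   # bottom-up prediction error
    estimate += rate * 2.0 * error       # update the belief to shrink it

print(round(estimate, 3))   # approaches 3.0, since 2 * 3 = 6
```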

Figure: An example of a sentence that has flipped words. The brain uses predictive coding to correct them.

An example of predictive coding is when you read a sentence that contains a word that is reversed or has a letter in the middle that should not be there, as in the image above. The brain erases the error, and the sentence seems correct. This happens because the brain expects the wording to be correct when it first encounters the sentence. As our brain processes the sentence, it predicts what should be written and sends that information downstream to the lower levels of the brain. Predictive coding works not only on sentences but also in many different systems inside the brain.

The Blind Spot

Figure: Predictive coding works in the brain, predicting which images are in the blind spot in people’s eyes.

The human eye has a blind spot, caused by the lack of visual receptors at the point in the retina where the optic nerve, which transmits information to the visual cortex, exits the eye. This blind spot does not produce an image in people’s brains, but they do not notice the gap because the human brain fills it in, the same way it corrects a misspelled word in a sentence. The human brain expects the missing part of the image even though it is not there. The brain fills in images and corrects words subconsciously.

Figure: Demonstration of the blind spot. Close one eye and focus the other on the letter R. Place your eye a distance from the screen approximately equal to three times the distance between the R and the L. Move your eye towards or away from the screen until you notice the letter L disappear.

To demonstrate that the blind spot is present in your eyes, position yourself at a distance from the figure above equal to about three times the distance between the R and the L. Close one eye and focus the other on the appropriate letter: if the right eye is open, focus on the R, and vice versa. Move closer to or farther from the screen until the other letter disappears. The letter disappears because of the eye’s blind spot.*

Predictive Coding in Software

Yann LeCun, the Chief Artificial Intelligence Scientist at Facebook AI Research and a creator of convolutional neural networks (CNNs), is working on making predictive coding work in computers.*

In computer science, predictive coding is a model of neural networks that generates and updates a model of the environment, predicting what will happen next.

LeCun’s technique is called predictive learning, which alludes to the fact that it tries to predict what is going to happen in the near future as well as fill in the gaps when information is incomplete or incorrect.* He developed the technique using generative adversarial networks to create video of what is most likely to happen next. To achieve that, LeCun’s software analyzed video frames and, based on those, generated the next frames of the video. The technique minimizes how different the generated frames are from the actual video frames, a measurement known as distance. For example, if the generated frames contain an image of a cat and the original frames do not, then the distance between the frames is high. If they contain very similar elements, the distance is small. Currently, the technique can predict up to eight frames into the future, but it is not unthinkable that machines will one day predict future outcomes better than humans.
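
As a sketch of the distance idea (the frame size, predictor, and metric here are illustrative; real frame-prediction systems use more elaborate models and distances): the training objective is simply to make the predicted next frame close to the frame that actually occurred.

```python
import torch
import torch.nn as nn

# Distance sketch: how far is a predicted video frame from the real
# next frame? Mean squared error is one simple choice; the tiny
# linear predictor and frame size are purely illustrative.
predictor = nn.Linear(64 * 64, 64 * 64)

current_frame = torch.rand(1, 64 * 64)   # stand-in for a video frame
real_next_frame = torch.rand(1, 64 * 64)

predicted_next = predictor(current_frame)
# Small distance = the prediction resembles what actually happened.
distance = nn.functional.mse_loss(predicted_next, real_next_frame)
distance.backward()   # training pushes the predictor to shrink it
print(distance.item())
```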

Figure: The first frame comes from a real video, and a machine predicts the next step of the video in the second frame.

Planning and Future Thinking

The hippocampus is responsible not only for remembering but also for planning and future thinking, that is, constructing potential scenarios. Patients with hippocampal damage have difficulty imagining the future and are unable to describe fictitious scenes. Moreover, functional magnetic resonance imaging (fMRI) shows that multiple brain areas, including the hippocampus, are engaged during remembering as well as during imagining events.

Research shows that reversed hippocampal replay more frequently represents novel as opposed to familiar environments. This effect, measured by coactivations of cell pairs, was more pronounced on the first day of exposure to a novel environment than on subsequent days.

A Theory of Encoding and Decoding

Generative adversarial networks serve as a way to construct images and scenarios; in a sense, GANs decode information. Techniques exist to generate images from just a few parameters, for example, images of a smiling woman.

Similarly, the process of remembering or imagining the future, carried out by the hippocampus and sometimes triggered by the prefrontal cortex, can be seen as decoding information from a set of parameters. A GAN consists of two neural networks, a generator that decodes latent parameters into data and a discriminator that evaluates the result; related architectures such as autoencoders explicitly pair an encoder with a decoder. In the same way, the human brain has two circuits: one that encodes information from the hippocampus to the prefrontal cortex and one that decodes information in the other direction. It would be no surprise if a mechanism like the one that trains GANs (and autoencoders) were at work in the human brain.
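
A minimal sketch of the two GAN networks (sizes illustrative; a real image GAN would use convolutional layers): the generator decodes a few random parameters into a sample, and the discriminator scores how real the sample looks; training pits the two against each other.

```python
import torch
import torch.nn as nn

# GAN sketch: a generator decodes a small parameter vector into a
# sample; a discriminator judges real vs. generated. Sizes are
# illustrative.
generator = nn.Sequential(
    nn.Linear(16, 128), nn.ReLU(), nn.Linear(128, 784), nn.Sigmoid())
discriminator = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 1))

z = torch.randn(1, 16)       # a few parameters describing the sample
fake = generator(z)          # decoded into a 28x28-sized image vector
score = discriminator(fake)  # how "real" the sample looks

# The generator's loss rewards fooling the discriminator; in full
# training, the discriminator is updated in turn on real and fake data.
g_loss = nn.functional.binary_cross_entropy_with_logits(
    score, torch.ones(1, 1))
print(fake.shape, g_loss.item())
```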

GANs could serve to simulate the real world and are already used to create realistic images and videos. The problem is that most of the best AI systems are built for game engines. Some argue that the reason AI systems work so well in games is that game engines are their own version of the world, which means AI systems can practice and learn in a virtual environment.

In the real world, for example, a self-driving system cannot drive a car off a cliff thousands of times in order to learn. Driving off a cliff even once is fatal, and a system that does so cannot work in the real world. Some say that to train an artificial intelligence system, it is necessary to train it in a simulated world. For supervised and unsupervised learning algorithms, the system must see at least on the order of 1,000 examples of whatever it is trying to learn, and reinforcement learning algorithms likewise must practice and learn through many trials. Either researchers must create more efficient algorithms that can learn from fewer examples, or they must reproduce many situations in which the system can acquire experience.

For games, you can use the game engine itself to train the system, since all the constraints are defined there and the engine already simulates many of the possible scenarios. So, if you design an AI agent to perform in a game, the agent can play out the different variations it wants to test and figure out the best move to make in the future.

The problem with AI agents in the real world is that the real world is much more challenging to simulate than a game. No clear way exists of recreating the real world to test a few hypotheses. GANs might help solve this problem. LeCun is already using them to create predictions of future video frames, and they may end up being used for longer-term predictions of the future. And it would not be a coincidence if the brain also used the same system for imagination and memory recall.

Humans may run simulations in their minds of possible scenarios and learn from those scenes. For example, they can imagine driving a car and the different situations that would arise based on the actions they take. What would happen if they turned left instead of right? Some people argue that for computers to function as well as humans, they need to do something similar: given a few variables, like turning left or right, simulate the scenario and play it out to figure out the best action to take in the future.

Robotics

Robots in the Industry

The Master created humans first as the lowest type, most easily formed. Gradually, he replaced them by robots, the next higher step, and finally he created me to take the place of the last humans.

—Isaac Asimov, I, Robot*

When people talk about artificial intelligence, they often think of mobile robots. But in computer science, AI is the field focused on developing the brains not only of such robots but of any computer that aims to achieve certain goals. These robots do not use any of the deep learning models discussed previously; instead, they run explicitly encoded, handwritten software.
