AI in Medicine


I will use treatment to help the sick according to my ability and judgment, but never with a view to injury and wrongdoing.
Hippocratic Oath

Sebastian Thrun

Sebastian Thrun, who grew up in Germany, is internationally known for his work on robotic systems and his contributions to probabilistic techniques. In 2005, Thrun, then a Stanford professor, led the team that won the DARPA Grand Challenge for self-driving cars. During a sabbatical, he joined Google, where he co-developed Google Street View and started Google X. He co-founded Udacity, an online for-profit school, and is the current CEO of Kitty Hawk Corporation. But in 2017, he was drawn to the field of medicine. He was 49, the age his mother, Kristin (Grüner) Thrun, had been at her death. Kristin, like most cancer patients, had no symptoms at first. By the time she went to the doctor, her cancer had already metastasized, spreading to her other organs. After that, Thrun became obsessed with the idea of detecting cancer in its earliest stages, when doctors can still remove it.

Early efforts to automate diagnosis encoded textbook knowledge as explicit rules. In the case of electrocardiograms (ECG or EKG), which show the heart’s electrical activity as lines on a screen, these programs tried to identify the characteristic waveforms associated with different conditions like atrial fibrillation or a blockage of a blood vessel. The technique followed the path of the domain-specific expert systems of the 1980s.
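A hand-written rule of that era might look like the following sketch; the threshold and inputs are illustrative assumptions, not a real clinical criterion.

```python
# Illustrative sketch of a 1980s-style hand-written rule (not a real clinical
# criterion): flag possible atrial fibrillation when the intervals between
# heartbeats are highly irregular. The 0.15 cutoff is an assumed threshold.
import numpy as np

def flag_possible_afib(rr_intervals_ms):
    """Expert-system-style rule: high beat-to-beat variability suggests AF."""
    rr = np.asarray(rr_intervals_ms, dtype=float)
    variability = np.std(np.diff(rr)) / rr.mean()
    return variability > 0.15

print(flag_possible_afib([800, 650, 910, 700, 1020, 640]))  # True (irregular)
```

Rules like this are easy to read but never improve with more data, which is exactly the limitation described next.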

In mammography, doctors used the same method for breast cancer detection. The software flagged an area that fit a certain condition and marked it as suspicious so that radiologists would review it. These systems did not learn over time: after seeing thousands of x-rays, the software was no better at classifying them. In 2007, a study compared the accuracy of mammography before and after the implementation of this technology. The results showed that after computer-aided mammography was introduced, the rate of biopsies increased and the detection of small, invasive breast cancers decreased.

Thrun knew he could outperform these first-generation diagnostic algorithms by using deep learning instead of rule-based algorithms. With two former Stanford students, he began exploring keratinocyte carcinoma, the most common class of skin cancer, and melanoma, the most dangerous type. First, they had to gather a large number of images to identify the disease. They found 18 online repositories of skin lesion images that had already been classified by dermatologists. The dataset contained around 130,000 images of acne, rashes, insect bites, and cancers. Of those images, 2,000 showed lesions that had been biopsied and identified with the cancer types he was looking for, meaning they had been diagnosed with near certainty.

Figure: Sebastian Thrun.

Thrun’s team ran their deep learning software to classify the data and then checked whether it actually classified the images correctly. They used three categories: benign lesions, malignant lesions, and non-cancerous growths. The team began with an untrained network, which did not perform well. So, they started from a neural network already trained on other images, and it learned faster and better. The system was correct 77% of the time. As a comparison, two board-certified dermatologists tested the same samples and were successful only 66% of the time.
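A minimal sketch of that transfer-learning recipe appears below. The ResNet backbone, the hypothetical `lesions/` folder (one subfolder per class), and all hyperparameters are assumptions for the sketch, not the team’s actual setup.

```python
# Minimal transfer-learning sketch: fine-tune a CNN pretrained on ImageNet to
# classify lesion images into three classes. Dataset path, backbone choice,
# and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),                     # pretrained input size
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("lesions/", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 3)  # new head: 3 lesion classes

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:                  # one epoch of fine-tuning
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

Starting from pretrained weights means the network already knows generic visual features like edges and textures, which is why it learned faster and better than the untrained one.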

Then, they widened the study to 25 dermatologists and used a gold standard test set with around 2,000 images. In almost every test, the computer program outperformed the doctors. Thrun showed that deep learning techniques could diagnose skin cancer better than most doctors.


Machine Learning in Radiology

Thrun is not the only one using deep learning to help advance the medical field. Andrew Ng, an adjunct professor at Stanford University and founder of Google Brain, leads DeepLearning.AI, a company that teaches online AI courses. His company has also shown that deep learning algorithms can identify arrhythmias from an electrocardiogram better than experts.* Along the same lines, the Apple Watch Series 4 introduced a feature that performs an EKG scan. Previously, this was an expensive exam, so providing millions of people with a free test is significant for society.
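As an illustration of how such a model can consume a raw single-lead signal, here is a toy 1D convolutional classifier; the input length, layer sizes, and four rhythm classes are assumptions for this sketch, and the published model was a far deeper network trained on real recordings.

```python
# Toy 1D convolutional classifier over a raw single-lead ECG signal. Layer
# sizes, the 200 Hz / 30-second input, and the four rhythm classes are
# illustrative assumptions.
import torch
import torch.nn as nn

class ECGClassifier(nn.Module):
    def __init__(self, num_classes: int = 4):  # e.g., normal, AF, other, noise
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=16, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=16, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),            # collapse the time dimension
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):                       # x: (batch, 1, samples)
        return self.head(self.features(x).squeeze(-1))

signal = torch.randn(1, 1, 6000)  # 30 s at 200 Hz -> 6,000 samples
logits = ECGClassifier()(signal)  # one score per rhythm class
```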

Ng also created software using deep learning to diagnose pneumonia better than the average radiologist.* Early detection of pneumonia can prevent some of the 50,000 deaths the disease causes in the US each year. Pneumonia is the single largest infectious cause of death for children worldwide, killing almost a million children under the age of five in 2015.*

Computer-aided systems for breast and heart imaging are commercially available,* but they are not running deep learning algorithms, which could improve detection greatly. Geoffrey Hinton, one of the creators of deep learning, said in an interview with The New Yorker, “It’s just completely obvious that in five years deep learning is going to do better than radiologists. … It might be ten years. I said this at a hospital. It did not go down too well.”* He believes that deep learning algorithms will also be used to help, and possibly even replace, radiologists reading x-rays, CT scans, and MRIs. Hinton is passionate about using deep learning to help diagnose patients because his wife was diagnosed with advanced pancreatic cancer. His son was later diagnosed with melanoma, but after a biopsy, it turned out to be basal cell carcinoma, a far less serious cancer.

Fighting Cancer with Deep Learning

Cancer is still a major problem for society. In 2018, around 1.7 million people in the US were diagnosed with cancer, and 600,000 people died of it. Drugs exist for many types of cancer, and some cancers even have more than one treatment option. The five-year survival rate for many cancers has increased dramatically in recent years, reaching 80% to 100% in some cases with surgery and drug treatments. But the earlier cancer is detected, the higher the likelihood of survival; preventing cancer from spreading to other organs and areas of the body is key. The problem is that diagnosing cancer remains difficult. Many screening methods do not have high accuracy, and some young women avoid mammograms because of the many false positives, which create unnecessary worry and stress.

To increase survival rates, it is extremely important to detect cancer as early as possible, but finding an affordable method is difficult. Today’s process usually involves doctors screening patients with different techniques, from visually checking the skin for suspicious patterns to tests like the digital rectal exam. Depending on the symptoms and type of cancer, the next step may involve a biopsy of the affected area, extracting tumor tissue. Unfortunately, patients may have cancerous cells that have not yet spread, making detection even harder. And a biopsy is typically a dangerous and expensive procedure: around 14% of patients who have a lung biopsy suffer a collapsed lung.*

Freenome, a Silicon Valley startup, is trying to detect cancer early using a new technique called liquid biopsy.* This test sequences DNA from a few drops of blood. Freenome uses cell-free DNA, the DNA fragments that float freely in people’s blood, to help diagnose cancer patients; Freenome’s name comes from shortening “cell-free genome.” Cell-free DNA mutates every 20 minutes, making it unique. People’s genomes change over time, and non-inherited cancer comes from mutations and genomic instabilities that accumulate over time. Cell-free DNA flows through the bloodstream, and fragments from cancerous cells in one area may indicate cancer in another region of the body.*

Freenome’s approach is to look for various changes in cell-free DNA. Instead of only looking at the DNA of tumor cells, Freenome has learned to decode complex signals coming from other cells in the immune system that change because of a tumor elsewhere. Their technology looks for changes in DNA over time to see if there is a significant difference compared to a baseline. It is hard, however, to detect cancer based on changes coded in someone’s DNA. There are around 3 billion bases in DNA, and each position can hold one of four bases, leading to on the order of 4^3,000,000,000 possible genomes. So, figuring out whether a mutation in one of these genes is caused by another cell that has cancer is extremely hard. Using deep learning, Freenome’s system identifies the relevant parts of the DNA that a doctor or researcher would not be able to recognize. Who could have imagined that deep learning would play such an integral role in identifying cancer? My hope is that this technology will eventually lead to curing cancer.
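Purely as an illustration of this kind of pipeline, and not Freenome’s actual method, one could featurize cell-free DNA fragments as k-mer frequencies and train a small classifier to separate samples carrying a cancer signal from baseline samples:

```python
# Purely illustrative pipeline (not Freenome's actual method): represent each
# blood sample by the k-mer frequencies of its cell-free DNA fragments, then
# train a small neural network to separate cancer-signal samples from baseline.
from itertools import product
import numpy as np
from sklearn.neural_network import MLPClassifier

K = 4
KMERS = {"".join(p): i for i, p in enumerate(product("ACGT", repeat=K))}

def kmer_profile(fragments):
    """Normalized k-mer counts across all cfDNA fragments in one sample."""
    counts = np.zeros(len(KMERS))
    for frag in fragments:
        for i in range(len(frag) - K + 1):
            counts[KMERS[frag[i:i + K]]] += 1
    return counts / max(counts.sum(), 1)

# Hypothetical data: each sample is a list of sequenced fragments and a label
# (0 = baseline, 1 = cancer signal). Real datasets hold millions of fragments.
samples = [(["ACGTACGTACGT", "TTGACCAA"], 0), (["GGGTACCA", "ACGTTTTT"], 1)]
X = np.array([kmer_profile(frags) for frags, _ in samples])
y = np.array([label for _, label in samples])

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)
```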

Figure: Cost per genome over time versus how the price would be if it followed Moore’s Law.*

The first part of the problem involves checking people’s DNA with a simple blood test. While drawing the blood is simple, the test has been extremely expensive to carry out. But over time, genome sequencing has become cheaper and cheaper. In 2001, the cost per genome sequenced was on the order of $100M; by 2020, the price had decreased to only around $1K.* This trend shows no sign of slowing. If the price continues to follow the curve, it will become commonplace for patients to sequence their genomes for a few dollars.* It may seem like science fiction now, but in a few years, we could detect cancer early on with only a few drops of blood.
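A quick back-of-the-envelope calculation makes the figure’s comparison concrete: had sequencing costs merely followed Moore’s Law, a genome would still cost over $100K today.

```python
# If sequencing costs had merely followed Moore's Law (halving roughly every
# two years) from ~$100M in 2001, what would a genome have cost in 2020?
years = 2020 - 2001
moores_law_cost = 100e6 * 0.5 ** (years / 2)
print(f"Moore's Law prediction: ${moores_law_cost:,.0f}")  # ~$138,000
print("Actual 2020 cost: ~$1,000")                         # >100x cheaper
```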

Protein Folding

Proteins are large, complex molecules that are essential for sustaining life. Humans require them for everything from sensing light to turning food into energy. Genes are translated into amino acid sequences, which fold into proteins. But each protein has a different 3D structure, which determines what it can do: some have a Y shape, while others have a circular form. Identifying the 3D structure of a protein from its genetic sequence is therefore of extreme importance to scientists because it can help them ascertain what each protein does. Determining what the 3D structure of a protein looks like, which is set by how the forces between its amino acids act, is an immensely complex problem known as the protein folding problem. Enumerating all possible configurations of a protein would take longer than the age of the universe.
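A rough version of that calculation, assuming for illustration three possible conformations per residue, a 100-residue protein, and an impossibly fast sampler:

```python
# Levinthal-style estimate: assume (illustratively) 3 conformations per
# residue, a 100-residue protein, and 10^13 conformations sampled per second.
conformations = 3 ** 100          # ~5.2e47 possible configurations
seconds = conformations / 1e13    # ~5.2e34 seconds to try them all
years = seconds / 3.15e7          # ~1.6e27 years
print(f"{years:.1e} years, vs. ~1.4e10 years since the Big Bang")
```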

But DeepMind tackled this problem with AlphaFold* by submitting it to CASP, a biennial assessment of protein structure prediction methods (CASP stands for Critical Assessment of Techniques for Protein Structure Prediction).* DeepMind trained its deep learning system using publicly available data that maps genomic sequences to the corresponding proteins and their 3D structures.

Given a gene sequence, it is easy to map it to the sequence of amino acids in the resulting protein. From that sequence, DeepMind created two multilayer neural networks. One predicted the distance between every pair of amino acids in the protein; the second predicted the angles of the chemical bonds connecting those amino acids. Together, these two networks predicted which 3D structures would be closest to the one the gene sequence would generate. Starting from the closest structure, the system used an iterative process, replacing parts of the protein structure with new ones created by a generative adversarial network based on the gene sequence.* If a newly created protein structure scored higher than the former one, that part of the protein was replaced. With this technique, AlphaFold determined protein structures far better than the next best contestant in the competition, as well as all previous algorithms.
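A highly simplified sketch of the core idea: treat the predicted pairwise distances as a potential and optimize 3D coordinates by gradient descent. The real system predicts distance distributions and bond angles and uses far more machinery; the random target matrix here is a stand-in for the network’s output.

```python
# Simplified sketch: optimize 3D coordinates so pairwise distances match the
# network's predictions. The random symmetric matrix below is a stand-in for
# real predicted distances; AlphaFold predicts full distance distributions.
import torch

n = 10                                          # residues (toy size)
target = torch.rand(n, n) * 10
target = (target + target.T) / 2                # symmetric "predicted" distances

coords = torch.randn(n, 3, requires_grad=True)  # initial 3D structure
optimizer = torch.optim.Adam([coords], lr=0.05)

for step in range(500):
    optimizer.zero_grad()
    diff = coords.unsqueeze(0) - coords.unsqueeze(1)
    dist = (diff.pow(2).sum(-1) + 1e-8).sqrt()  # current pairwise distances
    mask = ~torch.eye(n, dtype=torch.bool)      # ignore self-distances
    loss = ((dist - target)[mask] ** 2).mean()
    loss.backward()
    optimizer.step()
```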

The Prolongation of Life

But with this new technology, we must return to a discussion of ethics. As long as humans have inhabited this Earth, we have searched for the fountain of youth: immortality. While some people see quality of life as most important, others see longevity as key. Elizabeth Holmes and her role at Theranos clearly demonstrate the risk of blindly accepting a technology before it has been scientifically proven. Personally, I believe that AI has a vital role to play in increasing both longevity and quality of life, but we must maintain strict testing and adherence to scientific principles.*

AI and Space

Imagination will often carry us to worlds that never were. But without it we go nowhere.
Carl Sagan*

Crop Prediction

To analyze satellite images, however, the data needs proper classification. To solve this problem, Descartes Labs, a data analysis company, stitches together daily satellite images into a live map of the planet’s surface and automatically edits out any cloud cover.* With these cleaned-up images, they use deep learning to predict the percentage of farms in the United States that will grow soy or corn more accurately than the government does.* Since corn production is a business worth around $67B, this information is extremely useful to economic forecasters at agribusiness companies who need to predict seasonal outputs. The US Department of Agriculture (USDA) provided the prior benchmark for land use, but that technique relied on year-old data by the time it was released.
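One common trick for removing transient cloud cover, though not necessarily Descartes Labs’ exact method, is a per-pixel median composite over a stack of daily images:

```python
# Per-pixel median composite: with enough daily revisits, transient clouds
# fall out of the median. Array shapes are illustrative.
import numpy as np

daily_images = np.random.rand(30, 512, 512, 4)  # 30 days, 512x512 px, 4 bands
cloud_free = np.median(daily_images, axis=0)    # (512, 512, 4) composite
```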
