Industry Applications


Voice Assistants (Siri)

Samantha: You know what’s interesting? I used to be so worried about not having a body, but now I truly love it. I’m growing in a way I couldn’t if I had a physical form. I mean, I’m not limited—I can be anywhere and everywhere simultaneously. I’m not tethered to time and space in a way that I would be if I was stuck in a body that’s inevitably going to die.

Her (2013)

Voice assistants are becoming more and more ubiquitous. Smart speakers became popular after Amazon introduced Echo, a speaker with Alexa as the voice assistant, in November 2014. By 2017, tens of millions of smart speakers were in people’s homes, and every single one of them used voice as its main interface. Voice assistants are not only present in smart speakers but also in every smartphone. The best-known, Siri, powers the iPhone.

Apple’s Siri, the first voice assistant deployed to the mass market, debuted at a media event on October 4, 2011. Phil Schiller, Apple’s Senior Vice President of Marketing, introduced Siri by demonstrating capabilities such as checking the weather forecast, setting an alarm, and looking up stock prices. That event was actually Siri’s second introduction. When first launched, Siri was a standalone app created by Siri, Inc. Apple bought the technology for $200M in April 2010.*

Siri was an offshoot of a project at SRI International’s Artificial Intelligence Center. In 2003, DARPA led a five-year, 500-person effort to build a virtual assistant, investing a total of $150M. At the time, CALO (Cognitive Assistant that Learns and Organizes) was the largest AI program in history. Adam Cheyer was a researcher at SRI on the CALO project, assembling the pieces produced by the different research labs into a single assistant. The version Cheyer helped build, also called CALO at the time, was still a prototype and not ready for installation on people’s devices. But the work put Cheyer in a privileged position to understand how CALO worked from end to end.

Cheyer split his time between working at SRI as a researcher and helping SRI’s Vanguard program. Vanguard helped companies like Motorola and Deutsche Telekom test the future of a new gadget called the smartphone. Cheyer developed his own prototype of a virtual assistant, more limited than CALO but better suited to Vanguard’s needs. The prototype impressed Motorola general manager Dag Kittlaus, who unsuccessfully tried to persuade Motorola to use Vanguard’s technology. He quit and joined SRI as an entrepreneur-in-residence. Soon after, Cheyer, Kittlaus, and Tom Gruber started Siri, Inc. Their company had the advantage of being able to use CALO’s technology: under a law passed by Congress in 1980, the non-profit SRI could grant Siri, Inc. those rights in return for a share of the profits. So SRI licensed the technology in exchange for a stake in the new company.

Broadly, Siri’s technology had four parts. Speech recognition took place when you talked to Siri. The natural language component grasped what you said. Executing the request was the next part of the equation. The final element was for Siri to respond.*

For speech recognition, Siri used an entirely different approach from other technology at the time. The traditional method, as used in IBM Watson, identified the linguistic components of a sentence, like the subject, verb, and object, and based on those, tried to understand what the pieces meant together.

Instead, the Siri team modeled real-world objects. When told, “I want to see a thriller,” Siri recognized the word “thriller” as a film genre and summoned movies rather than analyzing how the subject connected to the verb or object. Siri mapped each question to a domain of potential actions and then chose the one that seemed most probable based on the relationship between real-world concepts. For example, if I said, “What time does the closest McDonald’s close?” Siri mapped the question to the domain of local businesses, found the McDonald’s closest to the current location, and queried its closing time. Siri then responded with the answer.


Siri also employed some additional tricks. In a noisy lobby, a request for the “closest coffee shop” might sound like “closest call Felicia,” but Siri knew that “closest” characterizes a place rather than a person, so it inferred that the question was probably about a place and tried to get the gist of the sentence without understanding every word. Early on, the Siri creators saw virtually no limits on the routine tasks the assistant could automate, but they also knew that their assistant would only succeed if it was both smart and fun to interact with. So, they programmed funny answers to offbeat questions. For example, if you ask Siri, “Tell me a joke,” one of the responses is, “The past, present, and future walked into a bar. It was tense.”

Three weeks after Siri launched on the App Store, Kittlaus received a personal call from Steve Jobs, Apple’s late CEO, who wanted to buy the company and integrate Siri directly into the iPhone. Creating a voice interface was an area of interest for Jobs, and Kittlaus’s team had cracked the code. Siri, Inc. and Apple joined forces and launched Siri exclusively on the iPhone. As a result, almost every consumer device connected to the internet today integrates a voice assistant or can interface with one.

Although Apple was the first major tech company to integrate a smart assistant into its phone operating system, other systems quickly caught up and surpassed Siri’s capabilities.* Amazon’s Alexa first appeared in 2014, and the Google Assistant followed in 2016. These newcomers offer more features and better voice recognition software. For example, the newer Google Home speakers can recognize different people by the sound of their voices. If a person says, “Ok Google, call my dad,” the device knows to fetch the contacts of the person summoning it. Google and Amazon have also done more to let outside developers build on their platforms. Developers have built more than 25,000 Alexa skills, and the Amazon assistant is being integrated into cars, televisions, and home appliances.

More recently, Apple has been catching up with its competitors. It transitioned the model behind its voice recognition system to a neural network in 2014.* Siri also now interprets commands more flexibly. For example, if I say to Siri, “Send Jane $20 with Square Cash,” the screen displays text reflecting that request; if someone says, “Shoot 20 bucks to my wife,” the same thing happens. In 2017, Apple introduced a way for Siri to learn from its mistakes by adding a layer of reinforcement learning.* And in 2018, it created a platform for users to define shortcuts, allowing a customized set of commands.* For example, a user can create the command, “Turn the romantic mood on,” and configure Siri to turn smart lights on in a certain color and play romantic music. There are still gaps, but Siri’s capabilities continue to increase.

The Brain of a Voice Assistant

At a high level, a voice assistant’s brain is divided into a few main tasks (a simplified code sketch follows the list):*

  1. [Optional] Trigger command detection to recognize phrases like “Hey Siri” or “Hey Google” so that the device listens to the speech following it;

  2. Automatic speech recognition to transcribe human speech into text;

  3. Natural language processing to parse the text using part-of-speech tagging and noun-phrase chunking;

  4. Question-and-intent analysis to analyze the parsed text, detecting user commands and actions such as “schedule a meeting” or “set my alarm”;

  5. Data mashup technologies to interface with third-party web services, like OpenTable or Wolfram|Alpha, to perform actions, execute searches, and answer questions;

  6. Data transformations to convert the output of third-party web services back into natural language text, like “today’s weather report” into “The weather will be sunny today”; and

  7. Finally, text-to-speech techniques to convert the text into synthesized speech that the voice assistant speaks back to the user.
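
To make the hand-offs between these stages concrete, here is a toy Python sketch of the seven steps. Every function is a stand-in (keyword matching instead of neural networks, canned data instead of real web services), so it only illustrates how the stages connect, not how Siri actually implements them.

```python
# Toy end-to-end sketch of the seven-step pipeline above. All functions are
# illustrative placeholders, not a real assistant's internals.

def detect_trigger(text: str) -> bool:
    # Step 1: a real assistant runs this on audio; here we just match text.
    return text.lower().startswith("hey siri")

def transcribe(audio: str) -> str:
    # Step 2: stand-in for a speech-to-text neural network.
    return audio  # pretend the "audio" is already transcribed text

def parse(text: str) -> list[str]:
    # Step 3: stand-in for part-of-speech tagging and noun-phrase chunking.
    return text.lower().replace("?", "").split()

def classify_intent(tokens: list[str]) -> str:
    # Step 4: stand-in for question-and-intent analysis.
    if "weather" in tokens:
        return "weather_query"
    if "alarm" in tokens:
        return "set_alarm"
    return "unknown"

def call_service(intent: str) -> dict:
    # Step 5: stand-in for a third-party web service (e.g., a weather API).
    if intent == "weather_query":
        return {"condition": "sunny", "high_f": 72}
    return {}

def to_natural_language(intent: str, data: dict) -> str:
    # Step 6: turn structured service output back into a sentence.
    if intent == "weather_query":
        return f"The weather will be {data['condition']} with a high of {data['high_f']} degrees."
    return "Sorry, I can't help with that yet."

def synthesize_speech(text: str) -> str:
    # Step 7: stand-in for text-to-speech synthesis.
    return f"[spoken] {text}"

def handle_utterance(audio: str) -> str:
    if not detect_trigger(audio):
        return ""
    tokens = parse(transcribe(audio))
    intent = classify_intent(tokens)
    data = call_service(intent)
    return synthesize_speech(to_natural_language(intent, data))

print(handle_utterance("Hey Siri what is the weather today?"))
```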

The first step on the iPhone uses a neural network that detects the phrase “Hey Siri.”* Detection is a two-pass process. The first pass runs on a small, low-power auxiliary processor in the phone or speaker: the audio goes through a simple neural network that tries to identify whether the sound is in fact “Hey Siri.” Only after passing this check does the audio go to the main processor, which runs a more complex neural network. The second step translates the speech to text. Speech is a waveform encoded as a series of numbers. To translate it to text, Apple trained a neural network on data that pairs speech with the text corresponding to that speech.
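
Here is a minimal sketch of that two-pass idea: a cheap, always-on check gates a more expensive one. The scoring functions and thresholds are invented placeholders, not Apple’s actual networks.

```python
import numpy as np

def small_model_score(audio: np.ndarray) -> float:
    # Stand-in for the low-power, always-on network on the auxiliary processor.
    return float(np.clip(audio.mean() + 0.5, 0.0, 1.0))

def large_model_score(audio: np.ndarray) -> float:
    # Stand-in for the larger network on the main processor.
    return float(np.clip(audio.std() * 2.0, 0.0, 1.0))

def heard_hey_siri(audio: np.ndarray,
                   first_threshold: float = 0.4,
                   second_threshold: float = 0.8) -> bool:
    # First pass: cheap check; most audio is rejected here, so the main
    # processor rarely wakes up.
    if small_model_score(audio) < first_threshold:
        return False
    # Second pass: expensive check, run only on candidates.
    return large_model_score(audio) >= second_threshold

audio_chunk = np.random.randn(16000)  # one second of fake 16 kHz audio
print(heard_hey_siri(audio_chunk))
```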

With the exception of the third-party services, all the steps in the process use a neural network. The rules for interacting with these external applications, however, require handwritten code because each service provides a specific interface and certain information. For example, Foursquare provides data about businesses like restaurants, bars, and coffee shops; it can only return information about those businesses. If the voice assistant needs to figure out something else, like the weather for today or tomorrow, it must fetch information from weather.com or a similar service. By combining these steps, Siri and other voice assistants help people every day with tasks like setting an alarm for the next day and getting weather forecasts.

AI in Medicine

I will use treatment to help the sick according to my ability and judgment, but never with a view to injury and wrongdoing.

Hippocratic Oath

Sebastian Thrun

Sebastian Thrun, who grew up in Germany, was internationally known for his work with robotic systems and his contributions to probabilistic techniques. In 2005, Thrun, a Stanford professor, led the team that won the DARPA Grand Challenge for self-driving cars. During a sabbatical, he joined Google, where he co-developed Google Street View and started Google X. He co-founded Udacity, an online for-profit school, and is the current CEO of Kitty Hawk Corporation. But in 2017, he was drawn to the field of medicine. He was 49, the same age his mother, Kristin (Grüner) Thrun, had been at her death. Kristin, like most cancer patients, had no symptoms at first. By the time she went to the doctor, her cancer had already metastasized, spreading to her other organs. Thrun became obsessed with the idea of detecting cancer in its earliest stages, when doctors can still remove it.

Early efforts to automate diagnosis encoded textbook knowledge as rules. In the case of electrocardiograms (ECG or EKG), which show the heart’s electrical activity as lines on a screen, these programs tried to identify the characteristic waveforms associated with conditions like atrial fibrillation or a blockage of a blood vessel. The technique followed the path of the domain-specific expert systems of the 1980s.

In mammography, doctors used the same method for breast cancer detection. The software flagged an area that fit a certain condition and marked it as suspicious so that radiologists would review it. These systems did not learn over time: after seeing thousands of x-rays, the system was no better at classifying them. In 2007, a study compared the accuracy of mammography before and after the introduction of this technology. The results showed that after computer-aided mammography was introduced, the rate of biopsies increased while the detection of small, invasive breast cancers decreased.

Thrun knew he could outperform these first-generation diagnostic algorithms by using deep learning instead of rule-based algorithms. With two former Stanford students, he began exploring keratinocyte carcinoma, the most common class of skin cancer, and melanoma, the most dangerous type. First, they had to gather a large number of images of the diseases. They found 18 online repositories of skin lesion images that had already been classified by dermatologists. This data contained around 130,000 photos of acne, rashes, insect bites, and cancers. Of those images, 2,000 lesions had been biopsied and identified as the cancer types they were looking for, meaning they had been diagnosed with near certainty.

Figure: Sebastian Thrun.

Thrun’s team ran their deep learning software to classify the data and then checked whether it actually classified the images correctly. They used three categories: benign lesions, malignant lesions, and non-cancerous growths. The team began with an untrained network, but it did not perform well. So, they started from a neural network that had already been trained to classify general images and fine-tuned it on the skin lesion data, and it learned faster and performed better. The system was correct 77% of the time. As a comparison, two certified dermatologists tested the same samples, and they were only correct 66% of the time.
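
Starting from a network that has already learned from other images is commonly called transfer learning. Below is a minimal sketch of that idea in PyTorch, assuming torchvision’s pretrained ResNet-18 as a stand-in; it is not the actual model or data from Thrun’s study.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a network pretrained on general images (downloads weights on first run).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Replace the final layer to predict the three categories used in the study:
# benign lesions, malignant lesions, and non-cancerous growths.
model.fc = nn.Linear(model.fc.in_features, 3)

# Freeze the pretrained layers and train only the new classification head.
for name, param in model.named_parameters():
    if not name.startswith("fc"):
        param.requires_grad = False

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on a fake batch of 224x224 RGB "lesion" images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 3, (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```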

Then, they widened the study to 25 dermatologists and used a gold standard test set with around 2,000 images. In almost every test, the computer program outperformed the doctors. Thrun showed that deep learning techniques could diagnose skin cancer better than most doctors.

Machine Learning in Radiology

Thrun is not the only one using deep learning to help advance medicine. Andrew Ng, an adjunct professor at Stanford University and a founder of Google Brain, leads DeepLearning.AI, a company that teaches online AI courses. His research group has also shown that deep learning algorithms can identify arrhythmias from an electrocardiogram better than experts.* Along the same lines, the Apple Watch Series 4 introduced a feature that performs an EKG scan. Previously, this was an expensive exam, so providing millions of people with a free test is significant for society.

Ng also created software using deep learning to diagnose pneumonia better than the average radiologist.* Early detection of pneumonia can prevent some of the 50,000 deaths the disease causes in the US each year. Pneumonia is the single largest infectious cause of death for children worldwide, killing almost a million children under the age of five in 2015.*

Computer-aided detection systems for breast and heart imaging are commercially available,* but they do not run deep learning algorithms, which could improve detection greatly. Geoffrey Hinton, one of the creators of deep learning, said in an interview with The New Yorker, “It’s just completely obvious that in five years deep learning is going to do better than radiologists. … It might be ten years. I said this at a hospital. It did not go down too well.”* He believes that deep learning algorithms will be used to help—and possibly even replace—radiologists reading x-rays, CT scans, and MRIs. Hinton is passionate about using deep learning to help diagnose patients because his wife was diagnosed with advanced pancreatic cancer. His son was later diagnosed with melanoma, but after a biopsy, it turned out to be basal cell carcinoma, a far less serious cancer.

Fighting Cancer with Deep Learning

Cancer is still a major problem for society. In 2018, around 1.7 million people in the US were diagnosed with cancer, and 600,000 people died of it. Many drugs exist for every type of cancer, and some cancers even have more than one. The five-year survival rate for many cancers has increased dramatically in recent years, reaching 80% to 100% in some cases, with surgery and drug treatments. But the earlier cancer is detected, the higher the likelihood of survival. Preventing cancer from spreading to other organs and areas of the body is key. The problem is that diagnosing cancer is hard. Many screening methods do not have high accuracy, and some young women are wary of mammograms because of the many false positives, which create unnecessary worry and stress.

To increase survival rates, it is extremely important to detect cancer as early as possible, but finding an affordable method is difficult. Today’s process usually involves doctors screening patients with different techniques, from checking their skin for suspicious patterns to tests like the digital rectal exam. Depending on the symptoms and type of cancer, the next step may involve a biopsy of the affected area, extracting tumor tissue. Unfortunately, patients may have cancerous cells that have not yet spread, making detection even harder. And a biopsy is typically a dangerous and expensive procedure: around 14% of patients who have a lung biopsy suffer a collapsed lung.*

Freenome, a startup founded in Silicon Valley, is trying to detect cancer early on using a new technique called liquid biopsy.* The test sequences DNA from a few drops of blood. Freenome uses cell-free DNA, DNA fragments that float freely in the bloodstream, to help diagnose cancer patients (Freenome’s name comes from shortening “cell-free genome”). Cell-free DNA mutates every 20 minutes, making it unique. People’s genomes change over time, and non-inherited cancer comes from mutations and genomic instabilities that accumulate over time. Cell-free DNA flows through the bloodstream, and fragments of cancerous cells in one area may indicate cancer in another region of the body.*

Freenome’s approach is to look for various changes in cell-free DNA. Instead of only looking at the DNA of tumor cells, Freenome has learned to decode complex signals coming from other cells in the immune system that change because of a tumor elsewhere. Its technology looks for changes in DNA over time to see if there is a significant shift compared to a baseline. It is hard, however, to detect cancer based on changes coded in someone’s DNA. There are around 3 billion bases in DNA, leading to an astronomically large number of possible genomes, so figuring out whether a mutation in one of these genes is caused by another cell that has cancer is extremely hard. Using deep learning, Freenome’s system identifies relevant parts of the DNA that a doctor or researcher would not be able to recognize. Who could have imagined that deep learning would play such an integral role in identifying cancer? My hope is that this technology will eventually lead to curing cancer.
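
As a purely illustrative sketch of the “compare against a baseline” idea, the toy code below represents each blood draw as fragment counts per genomic region, subtracts an earlier baseline, and scores the change with made-up model weights. Freenome’s actual features and models are far more sophisticated and are not public.

```python
import numpy as np

rng = np.random.default_rng(0)
n_regions = 1000

baseline_sample = rng.poisson(lam=50, size=n_regions)   # earlier blood draw
current_sample = rng.poisson(lam=50, size=n_regions)    # new blood draw
current_sample[10:20] += 15                             # pretend tumor-derived signal

change = (current_sample - baseline_sample).astype(float)

# Stand-in for a trained model's weights highlighting informative regions.
weights = np.zeros(n_regions)
weights[10:20] = 0.05

risk_score = 1.0 / (1.0 + np.exp(-(change @ weights - 5.0)))  # sigmoid
print(f"Cancer risk score: {risk_score:.2f}")
```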

Figure: Cost per genome over time versus how the price would be if it followed Moore’s Law.*

The first part of the problem involves checking people’s DNA with a simple blood test. While drawing the blood is simple, the test has historically been extremely expensive to carry out. But over time, genome sequencing has become cheaper and cheaper. In 2001, the cost per genome sequenced was on the order of $100M; by 2020, the price had decreased to only about $1K.* This trend shows no sign of slowing. If the price continues to follow the curve, it will be commonplace for patients to sequence their genome for a few dollars.* It may seem like science fiction now, but in a few years, we could detect cancer early on with only a few drops of blood.

Protein Folding

Proteins are large, complex molecules that are essential for sustaining life. Humans require them for everything from sensing light to turning food into energy. Genes encode sequences of amino acids, which fold into proteins. But each protein has a different 3D structure, which determines what it can do: some have a Y shape, while others have a circular form. Therefore, identifying the 3D structure of a protein from its genetic sequence is of extreme importance for scientists because it can help them ascertain what each protein does. Determining a protein’s 3D structure, which is set by how the forces between its amino acids act, is an immensely complex problem known as the protein folding problem. Counting all possible configurations of a protein would take longer than the age of the universe.

But DeepMind tackled this problem with AlphaFold,* submitting it to CASP (Critical Assessment of Techniques for Protein Structure Prediction), a biennial assessment of protein structure prediction methods.* DeepMind trained its deep learning system using publicly available data that maps genomic sequences to proteins with known 3D structures.

Given a gene sequence, it is easy to map it to the sequence of amino acids in the resulting protein. With that sequence, DeepMind created two multilayer neural networks. One predicted the distance between every pair of amino acids in the protein. The second predicted the angles between the chemical bonds connecting those amino acids. Together, these two networks predicted which 3D structures would be closest to the one the gene would actually generate. Starting from the closest structure, AlphaFold then used an iterative process, proposing replacement structures created with a generative adversarial network conditioned on the gene sequence.* If a newly created structure scored higher than the current one, that part of the structure was replaced. With this technique, AlphaFold determined protein structures much better than the next best contestant in the competition, as well as all previous algorithms.
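
The accept-if-better loop described above can be sketched as follows. Here, a candidate structure is scored by how well its pairwise distances match the (here, invented) network-predicted distances, and random perturbations stand in for the generative network’s proposals; this illustrates the iterative idea, not AlphaFold’s actual code.

```python
import numpy as np

rng = np.random.default_rng(1)
n_residues = 30

# Pretend these came from the first neural network (predicted pairwise
# distances between amino acids).
true_coords = rng.normal(size=(n_residues, 3))
predicted_distances = np.linalg.norm(
    true_coords[:, None, :] - true_coords[None, :, :], axis=-1)

def score(coords: np.ndarray) -> float:
    # Higher is better: negative mean squared error between the candidate's
    # pairwise distances and the predicted distances.
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return -float(np.mean((dists - predicted_distances) ** 2))

coords = rng.normal(size=(n_residues, 3))  # initial guess
best = score(coords)
for step in range(5000):
    proposal = coords + rng.normal(scale=0.05, size=coords.shape)
    s = score(proposal)
    if s > best:                 # keep the proposal only if it scores higher
        coords, best = proposal, s

print(f"final score: {best:.4f}")
```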

The Prolongation of Life

But with this new technology, we must return to a discussion of ethics. As long as humans have inhabited this Earth, we have searched for the fountain of youth, for immortality. While some people see quality of life as most important, others see longevity as key. Elizabeth Holmes and her role at the Theranos lab clearly demonstrate the risk of blindly accepting technology before it has been scientifically proven. Personally, I believe that AI plays a vital role in increasing both longevity and quality of life, but we must maintain strict testing and adherence to scientific principles.*

AI and Space

Imagination will often carry us to worlds that never were. But without it we go nowhere.

Carl Sagan*

Crop Prediction

To analyze satellite images of farmland, however, the data needs proper classification. To solve this problem, Descartes Labs, a data analysis company, stitches together daily satellite images into a live map of the planet’s surface and automatically edits out any cloud cover.* With these cleaned-up images, it uses deep learning to predict, more accurately than the government, the percentage of farms in the United States that will grow soy or corn.* Since corn production is a business worth around $67B, this information is extremely useful to economic forecasters at agribusiness companies who need to predict seasonal outputs. The US Department of Agriculture (USDA) provided the prior benchmark for land use, but that technique relied on year-old data by the time it was released.

Figure: A picture of the yield forecast of different areas of the United States.

In 2015, for example, the USDA predicted a domestic production of 13.53 billion bushels of corn. Descartes Labs, however, forecasted 13.34 billion bushels, as seen in the picture above. Descartes Labs used an almost-live view to visualize and measure developments such as floods or changes in crop condition. Using deep learning, the company exploited data from NASA and other sources and analyzed it faster than the government, predicting future yields from the data collected.

The government spent enormous resources surveying farmers across the country to identify the existing crops for each commodity in order to predict future yield. Descartes Labs eliminated this burden, reducing the cost of predicting the harvest. It trained its algorithm, which extracts valuable information from satellite imagery, to predict future corn crops based on the color and appearance of the plants in the field.

And, this is just the beginning of extracting information from satellite images. Other startups are looking at different use cases. For example, Orbital Insight uses deep learning to scrutinize infrastructures, such as parking lots and oil storage containers, to predict and reveal important economic data.

Finding Planets

Deep learning has been helpful not only in analyzing Earth but also in discovering what is in the universe. With eight planets orbiting the Sun, our solar system held the record for the most known planets around a single star in the Milky Way galaxy. But in December 2017, NASA and Google discovered a new planet orbiting a distant star, Kepler-90, bringing the total number of known planets around that star to eight as well. That discovery was no easy feat considering that the star is located over 2,500 light-years away from us.

Using NASA’s Kepler telescope, which has been searching for planets since 2009, scientists have discovered thousands of planets. The difference today is that instead of astrophysicists manually combing through the data, neural networks do the work.

Figure: Brightness* drop of the star.*

NASA’s Kepler telescope records the brightness of a star over time, based on images taken by the telescope. A planet can be spotted from changes in the star’s brightness: when a planet circles a star and passes between the star and the telescope, it blocks some of the light the star emits. Based on the drop in brightness, it is possible to determine whether a planet is circling the star. A planet shows up as a pattern that repeats every orbit as the telescope’s view of the star is obscured. With that in mind, researchers trained a neural network to identify planets around a star.
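
A toy sketch of the transit signal: fake a light curve with a small, periodic brightness dip and flag the dips with a simple threshold. The NASA and Google work trained a neural network on real Kepler light curves instead, but the underlying signal it looks for is the same repeating drop.

```python
import numpy as np

rng = np.random.default_rng(2)
hours = np.arange(0, 24 * 90)                    # 90 days of hourly readings
flux = 1.0 + rng.normal(scale=0.0005, size=hours.size)

period_hours, transit_hours, depth = 24 * 12, 3, 0.002
in_transit = (hours % period_hours) < transit_hours
flux[in_transit] -= depth                        # periodic brightness drop

dips = flux < (np.median(flux) - 3 * flux.std())
print(f"hours flagged as transits: {int(dips.sum())}")
```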

This technique found two new planets around two separate star systems. The researchers plan to use the same method to explore all 150,000 stars that the Kepler telescope has data on. This frees astrophysicists to research other areas, since they no longer need to look for a needle in a haystack by manually inspecting every image for patterns; a neural network does the work for them. “Machine learning really shines in situations where there is so much data that humans can’t search it for themselves,” stated Christopher Shallue.*

Additional Developments

But these developments only scratch the surface. Deep learning broadens the horizon of what is possible in space exploration. For example, at the International Space Station, Airbus’s small robot CIMON (Crew Interactive Mobile Companion) talked with German astronaut Alexander Gerst for 90 minutes on November 15, 2018.* Gerst used language much like that used with voice assistants: “Wake up, CIMON.” CIMON is a very early demonstration of AI being used in space.*

Most people today rely on GPS to locate themselves, but GPS does not exist in space. So, NASA and Intel teamed up to tackle problems of space travel and colonization using AI.* Intel hosted an eight-week program focused on this effort. One of the nine teams developed a tool to find one’s location in space by training a neural network to identify the position from which a photo was taken, using millions of actual images as training data.

So, today we concentrate on Earth and areas that we are familiar with, but space is vast with unlimited possibilities. Currently, over 57 startups exist in the space industry, focusing on areas such as communication and tracking, spacecraft design and launch providers, and satellite constellation operation.* This represents an enormous upsurge from 2012, which saw little funding and few dedicated companies.

AI in E-Commerce

If you double the number of experiments you do per year, you’re going to double your inventiveness.

Jeff Bezos

Stitch Fix, an online clothing retailer started in 2011, provides a glimpse of how some businesses already use machine learning to create more effective solutions in the workplace. The company’s success in e-commerce reveals how AI and people can work together, with each side focused on its unique strengths.

Stitch Fix believes its algorithms are the future of designing garments,* and it has used that technology to bring its products to market. Customers create an account on Stitch Fix’s website and answer detailed questions about things like their size, style preferences, and preferred colors.* The company then sends a clothing shipment to their home. Stitch Fix stores information about what customers keep and what they return.

The significant difference from a traditional e-commerce company is that customers do not choose the shipped items. Like a conventional retailer, Stitch Fix buys and holds its own inventory so that it has a wide selection in stock. Drawing on the stored customer information, a personal stylist selects five items to ship to the customer. The customer tries them on in the comfort of their home, keeps them for a few days, and returns any unwanted items. The entire objective of the company is to excel at personal styling and send people things they love. It seems to be succeeding: Stitch Fix has more than 2 million active customers and a market capitalization of more than $2B.

The problem Stitch Fix has to solve is selecting inventory that matches its customers’ preferences. It does this in a two-stage process. The first stage is to gather customer data, information about its inventory, and the feedback clients leave. Stitch Fix then uses AI software to turn this knowledge into a set of recommendations for what it should send each client.
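
A simplified sketch of that first stage might look like the following: score every inventory item against a customer’s preference profile and pass the top candidates to a human stylist. The style features and weights are invented for illustration; Stitch Fix’s actual models are not public.

```python
import numpy as np

styles = ["casual", "formal", "bohemian", "athletic"]

# Customer profile built from the sign-up questionnaire and past feedback.
customer_pref = np.array([0.8, 0.1, 0.6, 0.2])   # affinity per style

# Each inventory item described by the same style features.
inventory = {
    "linen shirt":   np.array([0.9, 0.2, 0.5, 0.0]),
    "blazer":        np.array([0.1, 0.9, 0.0, 0.0]),
    "maxi dress":    np.array([0.4, 0.1, 0.9, 0.0]),
    "track jacket":  np.array([0.3, 0.0, 0.0, 0.9]),
}

scores = {name: float(feat @ customer_pref) for name, feat in inventory.items()}
shortlist = sorted(scores, key=scores.get, reverse=True)[:3]
print("Candidates for the stylist:", shortlist)
```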

The second stage involves personal stylists determining which recommended items to actually send to customers. They also offer styling suggestions, like how to accessorize or wear the pieces, before boxing the items up and shipping them. It is the creative combination of algorithmic prediction and human selection that makes Stitch Fix’s offering successful.

The reason the combination of humans and computers excels at personal styling is that humans are better at using unstructured data, which computers do not easily understand, while computers work best with structured data. Structured data includes details such as house prices and features like the number of rooms and bathrooms. The designs of clothes that are popular in a given year are an example of unstructured data.

Not stopping there, Stitch Fix integrated Pinterest into the process by allowing customers to create boards of images that suit their style. Stitch Fix feeds that information into the customer’s profile, and the algorithm uses it to more closely match pieces from the inventory. At this point, this information is more useful to the human stylists, but I do not doubt that the algorithms will continue to learn.

Other companies also use machine learning algorithms to recommend what clothes or products users want to see. For example, Bluecore focuses on helping e-commerce companies recommend what is best for their clients’ customers. If a customer visits the website of Express, a clothing company, and signs up with an email address, Bluecore sees that the customer likes a specific shirt and that customers who like that shirt also like a particular pair of pants. Bluecore lets Express send personalized emails and ads containing the best set of products for that customer. The results are striking: customers end up buying much more because the type of clothing they want to buy is offered directly to them through these personalized results.
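
The “customers who like that shirt also like these pants” logic can be sketched with simple co-occurrence counts, as below. This is generic item-to-item reasoning, not Bluecore’s actual system.

```python
from collections import Counter
from itertools import combinations

past_orders = [
    {"oxford shirt", "chinos"},
    {"oxford shirt", "chinos", "belt"},
    {"oxford shirt", "jeans"},
    {"hoodie", "jeans"},
]

# Count how often each pair of items appears in the same order.
co_counts: Counter = Counter()
for order in past_orders:
    for a, b in combinations(sorted(order), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(item: str, k: int = 2) -> list[str]:
    related = Counter({b: n for (a, b), n in co_counts.items() if a == item})
    return [name for name, _ in related.most_common(k)]

print(recommend("oxford shirt"))  # -> ['chinos', 'belt']
```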

Do you ever wonder how Amazon is so good at recommending products that appeal to you or how Facebook ads are (mostly) relevant? Well, they use machine learning to analyze your patterns based on your history as well as what others who looked at the same item as you ultimately bought. Data is continuously captured to make the buying experience better.

AI and the Law

Justice cannot be for one side alone, but must be for both.

Eleanor Roosevelt

Law seems like a field unlikely to make use of artificial intelligence, but that is far from the truth. In this chapter, I want to show how machine learning impacts even the most unlikely of fields. Judicata creates tools that help attorneys draft legal briefs and improve their chances of winning their cases.

Judges presiding over court cases should rule fairly in disputes between plaintiffs and defendants. A California study, however, showed that judges have a pro-prosecutor bias, meaning they typically rule in favor of the plaintiff. But no two people are alike, and that is, of course, true of judges.* While this bias holds as a general rule, it is not necessarily true of judges individually.

For example, let’s use California Justices Paul Halvonik and Charles Poochigian to show how different judges are. Justice Halvonik was six times more likely to decide in favor of an appellant than Justice Poochigian. This might be surprising, but it is more understandable given their backgrounds.

Justice Halvonik, California’s first state public defender, was slated for the state Supreme Court. Unfortunately, a drug charge for possessing 300 marijuana plants curtailed that dream and ended his judicial career. Justice Poochigian, on the other hand, was a Republican State Assemblyman from 1994 to 1998. Republican Governor Arnold Schwarzenegger appointed him to the California Courts of Appeal in 2009.

While we should not boil their behavior down to stereotypes, we can look at the facts of each of their rulings and any trends that may develop from them. To do this, we must examine the context of the type of case and the procedural posture, meaning how similar cases were ruled on before.*

Judicata

Judicata, a startup focused on using artificial intelligence to help lawyers, compiles statistics for each judge and uses them to see how that judge is likely to rule on a case. It takes into account how the judge has ruled for plaintiffs and defendants and offers a glimpse of what other aspects, like the cause of action or type of appeal, might change how the judge will rule.

Judicata’s application, Clerk, was the first software to read and analyze legal briefs, the written documents used in court to argue why one party should win against another.* Clerk’s purpose was to increase lawyers’ chances of winning a motion, that is, a request for the judge to make a ruling in the case.

Figure: An example of a score that Clerk generates.

A lawyer can improve their chances of winning a motion in three ways:

  1. “Relying on strategic and favorable arguments.”

  2. “Reinforcing those arguments with good drafting.”

  3. “Presenting the context in which the brief arises in a favorable way.”*

Analyzing Arguments

The lawyer’s execution along these three dimensions can be judged by objective measures:

  1. “Winning briefs perform better than losing briefs along each of these dimensions.”

  2. “Higher scoring briefs have a better chance of winning compared to lower scoring briefs.”

The ability to grade a brief is crucial because whatever you can measure, you can improve. Given a brief, Judicata’s program analyzes the arguments inside it and evaluates whether they are logically favorable. It analyzes all the legal cases, legal principles, and arguments cited in the document and determines which ones are most prone to being attacked, based on previous data.
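
As a purely hypothetical sketch of this kind of objective grading, the code below scores a brief by how often each case it cites has previously appeared on the winning side. The data and scoring rule are invented; Judicata’s actual analysis is far richer.

```python
# (citation) -> (times cited by the winning side, times cited by the losing side)
history = {
    "Smith v. Jones": (12, 3),
    "Doe v. Acme":    (2, 9),
    "Roe v. Lane":    (7, 6),
}

def brief_score(citations: list[str]) -> float:
    # Average the historical win rate of each cited case.
    rates = []
    for case in citations:
        wins, losses = history.get(case, (0, 0))
        total = wins + losses
        if total:
            rates.append(wins / total)
    return sum(rates) / len(rates) if rates else 0.0

print(f"{brief_score(['Smith v. Jones', 'Roe v. Lane']):.2f}")  # ~0.67
```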

Figure: Analysis of different arguments used.

Based on that information, it creates a snapshot of which arguments were used in which contexts. Some arguments are used for the defendant and others for the plaintiff, the party that initiated the lawsuit.

Figure: Cases that reference similar arguments as this brief.

Surprisingly enough, relying on arguments that were previously used by the same side works better. So, if you are a lawyer defending a case, it is better to use arguments that have historically been made on the defendant’s side. Clerk also suggests arguments that have historically worked well for the party in question, which benefits lawyers who want to build stronger, more favorable arguments.

Figure: Suggestions of cases that can help the lawyer win the brief.

Improving Briefs

Whenever lawyers write a legal brief, they need to include precedents: previous cases that support their argument. Judicata found that the best cases to include are ones that match the outcome the brief is trying to achieve. Clerk analyzes previous legal cases and suggests precedents that were used in winning cases, identifying better cases to support the brief. The goal is to help lawyers present better-drafted briefs.

Figure: Analysis of the draft.

Preparing Fair and Balanced Cases

Lawyers not only have to present good arguments and precedents, but they also need to address the opposition’s side. Clerk determines how many arguments and precedents need to be addressed on both sides and suggests ones to add or remove. With that, lawyers present a stronger, fairer, and more balanced legal case.

Figure: Analysis of the arguments used by the opponent side.

Analyzing the Context of the Case

Finally, Clerk analyzes what the outcome might be for a certain judge. Different judges analyze cases differently. So, depending on their historical decisions, Clerk gives a probability that the brief will succeed in each of the possible scenarios.

Figure: Probability of how a side of the case might win the case.

Even if the context a lawyer finds themselves in is not favorable, that does not mean all hope is lost. The lawyer merely needs to find historical cases that tilt this trend in their favor. And even if the lawyer does not have more than a 50% chance of winning the case, the ruling may still go in their favor. With Clerk, lawyers can better argue their case. Justice is said to be blind, but when it is not, machine learning can help lawyers make their case.

AI and Real Estate

It amazes me how people are often more willing to act based on little or no data than to use data that is a challenge to assemble.

Robert Shiller*

Homes are the most expensive possession the average American has, but they are also the hardest to trade.* It is difficult to sell a house in a hurry when someone needs the cash, but machine learning could help solve that. Keith Rabois, a tech veteran who served in executive roles at PayPal, LinkedIn, and Square, founded Opendoor to solve this problem. His premise is that hundreds of thousands of Americans value the certainty of a sale over obtaining the highest price. Opendoor charges a higher fee than a traditional real estate agent, but in return, it provides offers for houses extremely quickly. Opendoor’s motto is, “Get an offer on your home with the press of a button.”

Opendoor buys a home, fixes issues recommended by inspectors, and tries to sell it for a small profit.* To succeed, Opendoor must accurately and quickly price the homes it buys. If Opendoor prices the home too low, the sellers have no incentive to sell their house through the platform. If it prices the home too high, then it might lose money when selling the house. Opendoor needs to find the fair market price for each home.

Real estate is the largest asset class in the United States, worth about $25 trillion, so Opendoor’s potential is huge. But for Opendoor to make the appropriate offer, it must use all the information it has about a house to determine a fair price. Opendoor focuses on the middle of the market and does not make offers on distressed or luxury houses because their prices are not predictable.

Opendoor builds programs that predict a house’s price.* It does that by analyzing the features a buyer in the market would think about and teaching its models to look at those features. Opendoor analyzes three main factors (a simplified pricing sketch follows the list):

  • the qualities of the home,

  • the home’s neighborhood, and

  • the prices of neighboring homes over time.
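
As a simplified sketch of pricing from structured features, the code below fits an ordinary least squares model on synthetic homes described by square footage, bedrooms, bathrooms, and a neighborhood score. The data and coefficients are invented; Opendoor’s real models combine far more signals, including photos and market conditions over time.

```python
import numpy as np

rng = np.random.default_rng(3)

# columns: sqft, bedrooms, bathrooms, neighborhood score (0-10)
X = rng.uniform([1000, 2, 1, 3], [3000, 5, 4, 9], size=(200, 4))

# Synthetic "sold" prices produced by a hidden rule plus noise.
true_coef = np.array([130.0, 10_000.0, 15_000.0, 8_000.0])
prices = X @ true_coef + 20_000 + rng.normal(scale=10_000, size=200)

# Fit price ~ features with a bias term.
X1 = np.hstack([X, np.ones((X.shape[0], 1))])
coef, *_ = np.linalg.lstsq(X1, prices, rcond=None)

new_home = np.array([2000, 4, 2, 6, 1.0])  # a hypothetical Phoenix listing
print(f"estimated price: ${new_home @ coef:,.0f}")
```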

If you were to tell someone that you are selling a 2,000-square-foot home in Phoenix with two bathrooms and four bedrooms, could the buyer give a price? No, they could not. The buyer has to see the home. Similarly, the Opendoor model needs to determine a house price from hard data that has been turned into something machine-readable that algorithms can analyze. So, Opendoor also takes pictures of the house so that it can analyze more than the number of bedrooms and other listed features. Pictures convey qualitative and quantitative information that a room count cannot.

Pictures inform Opendoor about quantitative details like whether there is a pool in the backyard, the type of flooring, and the style of cabinetry. But other features are also important to pricing a home, and they are much harder to identify. For example, is the look and feel of the house good, and does it have curb appeal? Pictures fill in the details around the raw facts. While these characteristics are present in pictures, not all of them are easily identifiable by algorithms. Opendoor extracts these characteristics using both deep learning, to turn some of the visual information into machine-readable data, and crowdsourcing, that is, large numbers of people doing some of the work. Opendoor needs crowdsourcing for the less quantifiable qualities in order to turn these visual signals into structured data.

After that, Opendoor takes the data and analyzes it, adding other factors, like which neighborhood the house is in and its location in that area. But that is not easy either because even if houses are close to each other, their prices vary depending on many other factors. For example, if a house is too close to a big, noisy highway, then the price of the house might be lower than a house in the same neighborhood but farther from the highway. Being located next to a football field or strip mall can affect the price. Many things impact a home price.

The next stage is determining the price of a home across time. The same home has a different price depending on when it is sold. So, Opendoor needs to identify how prices change over time. For example, before the bubble of 2008, home prices were extremely high, but they plummeted after the bubble burst. Opendoor must figure out what the price of a home should be, depending on the market at the time it’s being sold.

Figure: Price changes over time. The redder the dots are, the more expensive the houses.

The first image here shows the price of the homes in a normal market. The second image presents the prices of homes in Phoenix right before the housing bubble exploded. And, the third image depicts the prices of homes right after the housing bubble exploded.

Opendoor not only needs to think about price but also market liquidity: how long it takes on average for a home to sell in a certain market. How willing is the market to accept a home that Opendoor is about to buy and resell? Opendoor has to price the risk it takes when making an offer. Liquidity affects how many houses the company can buy in a certain period and how much risk it is taking on. The longer it takes for a house to sell, the higher the risk. The more the price can vary, the worse it is for Opendoor because it wants to pay a fair price for every single house.

Other competitors are catching up and offering similar services, which benefits customers. For example, in 2018, Zillow started offering a service to buy homes with an “all-cash offer,” requiring the customer to only enter information about the home, including pictures.* Zillow predicts the price of these houses with the help of machine learning.*

Artificial intelligence is also being used to predict which customers are likely to fail a credit check or default on their mortgage. This goes hand in hand with customer relationship management (CRM) systems, which track when customers are likely to want to move. The same technology applies to property management, predicting trends like property prices, maintenance requirements, and crime statistics.*

And finally, just as AI is impacting the job markets of truck and taxi drivers, the technology could mean fewer jobs for real estate agents.* I, however, predict collaboration between AI and humans, as with Stitch Fix. There is a personal, subjective component to real estate, so this field is the perfect opportunity to elevate the market and provide a better experience for home buyers and sellers with AI.

Risks and Impact of AI

Surveillance

If you want to keep a secret, you must also hide it from yourself.

George Orwell, 1984*

On a Saturday evening, Ehmet woke up as on any other day and decided to go to the grocery store near his home. But on the way to the store, he was stopped by a police patrol. Through an app that uses face recognition, the police identified him as one of the few thousand Uyghurs living in the region. Ehmet was sent to one of the “re-education camps” holding more than a million other Uyghur Muslims.*
