Robots in the Industry


The Master created humans first as the lowest type, most easily formed. Gradually, he replaced them by robots, the next higher step, and finally he created me to take the place of the last humans.

Isaac Asimov, I, Robot*

When people talk about artificial intelligence, they often picture mobile robots. But in computer science, AI is the field focused on developing the brains not only of such robots but of any computer system that works to achieve goals. The robots described in this section do not use the deep learning models we discussed previously; instead, they run explicitly encoded, handwritten software.

Boston Dynamics

In Florida, a small crowd watches robots compete to complete a series of tasks faster and more precisely than their opponents. One robot examines a door with its sensors (cameras and lasers) to decide what to do next in order to open it. Using its robotic arm, it slowly pushes the door open and moves through to the other side. The team responsible for the robot cheers as it completes one of the tasks.

This story might sound like science fiction from a distant future, but the US Defense Advanced Research Projects Agency (DARPA) organized that competition, the DARPA Robotics Challenge (DRC), in December 2013. Boston Dynamics created the robot that opened the door, Atlas, but many other robots also attempted these tasks, and the development teams that programmed them watched eagerly.* The DRC's goal was for robots to perform independent jobs in scenarios inspired by situations dangerous to humans, such as a nuclear power plant failure. The competition tested the robots' agility, sensing, and manipulation capabilities. At first glance, tasks like walking over terrain and opening doors seem straightforward, but they are difficult for robots. The most challenging assignment was walking over an uneven surface, because it is hard for robots to keep their balance. Most of the robots in the competition failed to complete many of the tasks, either because they malfunctioned or because the task was simply too hard. Atlas completed more tasks than any other competitor.

DARPA program manager Gill Pratt said of the prototype, "A 1-year-old child can barely walk, a 1-year-old child falls down a lot, this is where we are right now."* Boston Dynamics revealed Atlas on July 11, 2013. At its first public appearance, The New York Times called the robot "a striking example of how computers are beginning to grow legs and move around in the physical world," describing it as "a giant—though shaky—step toward the long-anticipated age of humanoid robots."*

Boston Dynamics has the bold goal of building robots that surpass animals in mobility, dexterity, and perception. By building machines with dynamic movement and balance, it aims for robots that can go almost anywhere, on any terrain on Earth. It also wants its robots to manipulate objects, hold them steady, and walk around without dropping them. And the company is steadily approaching those goals: Atlas continues to improve, with lighter hardware, more capabilities, and better software.

Figure: The second version of Atlas.

Atlas was far more advanced than the first robots of the 1960s, like SRI's Shakey. But Boston Dynamics wanted to improve its robot, so it designed a second version, Atlas, The Next Generation. The company first released a YouTube video of it in February 2016, in which the robot walked on snow. Subsequent videos showed Atlas doing a backflip and jumping over a log lying in the grass.*


To build this updated version, Boston Dynamics used 3D printing to make parts of the robot more animal-like. For example, its upper leg, with hydraulic pathways, actuators, and filters all embedded, is printed as a single piece, something that was not possible before 3D printing. The team designed the structure using knowledge of Atlas's loads and behaviors, based on data from the original Atlas robots' interactions with the environment, supplemented by software simulations. With this 3D-printing technique, Boston Dynamics transformed what was once a big, bulky, slow robot weighing around 375 pounds into a much slimmer version at 165 pounds.*

Boston Dynamics is not focused solely on humanoid robots; it is developing other kinds of robots as well. It has two robotic dogs, Spot and SpotMini.* Like Atlas, the dogs can enter areas unsafe for humans in order to clear out the space. Using cameras, the dogs scan the terrain, assess the elevation of the floor, and figure out where they can step and how to climb from one region to another.* These machines continue to become more agile and less clunky; the latest version dances to Bruno Mars's hit song "Uptown Funk." I believe this is only the beginning of the robotics revolution, and Spot and other robots may end up in our everyday lives.
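
To make that terrain reasoning concrete, here is a deliberately simplified sketch (my own illustration, not Boston Dynamics' software, whose details are proprietary): given an elevation map of the kind a depth camera can produce, it keeps only the neighboring footholds whose height difference is small enough to step onto. The max_step threshold is an assumed value.

```python
import numpy as np

MAX_STEP_HEIGHT = 0.15  # meters; an assumed limit for a single step


def reachable_footholds(elevation, current, max_step=MAX_STEP_HEIGHT):
    """Return neighboring cells whose height difference is climbable.

    elevation: 2D array of terrain heights in meters (e.g., built from
    depth-camera data); current: (row, col) of the foot's current cell.
    """
    rows, cols = elevation.shape
    r, c = current
    candidates = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue  # skip the cell the foot is already on
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                if abs(elevation[nr, nc] - elevation[r, c]) <= max_step:
                    candidates.append((nr, nc))
    return candidates


# Toy elevation map: a 0.5 m ledge the robot must walk around.
terrain = np.array([
    [0.0, 0.0, 0.5],
    [0.0, 0.1, 0.5],
    [0.0, 0.1, 0.2],
])
print(reachable_footholds(terrain, (1, 1)))  # excludes the ledge cells
```

A real quadruped plans whole sequences of such footholds while keeping its body balanced, but the core idea is the same: the elevation map turns camera data into a yes/no answer about where a foot may land.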

Kiva Systems

Giants like Amazon have been working on robots to increase productivity. At an Amazon warehouse, small robots help packers for the online retail giant.* These automated machines cruise around the warehouse floor, delivering shelves full of items to humans, who then pick, pack, and ship the items without taking more than a couple of steps.

Figure: A Kiva robot in an Amazon warehouse.

This automation is a considerable change for Amazon, where humans used to select and pack items themselves with only the help of conveyor belts and forklifts. With the introduction of Kiva Systems' robots, Amazon's warehouse processes changed completely. Now, humans stand in a set location, and robots move around the warehouse, taking over most of the manual labor.

This change came about when Amazon acquired Mick Mountz's Kiva Systems for $775M in 2012.* After years working on business processes at Webvan, a now-defunct e-commerce startup, Mountz realized that one of the reasons for its downfall was the high cost of order fulfillment.* In 2001, after the dot-com bubble burst, Webvan filed for bankruptcy and later became part of Amazon. Mountz believed there was a better way to handle orders inside warehouses and started Kiva Systems with the help of robotics experts.

In a typical warehouse, humans fill orders by wandering through rows of shelves, often carrying portable radio-frequency scanners to locate products. Computer systems and conveyor belts sped things up, but only to a point. With the help of robots, however, workers at Amazon process items three times faster and do not need to search for products. When an order comes into Amazon.com, a robot drives around a grid of shelves, locates the correct shelf, lifts it onto its back, and delivers it to a human worker.* The person completes the process by picking the item, packing it, and shipping it. The pace leaves workers little rest, so to avoid human error, a red laser flashes on the item the worker needs to pick up. The robot then returns the shelf to the grid, and as soon as it carries the shelf away, another robot arrives, so the human is always working.
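
This choreography is easy to picture in code. Below is a minimal, hypothetical sketch of the fetch-and-return loop; the class and method names (WarehouseRobot, PickStation, fulfill) are my own invention for illustration, not Kiva's actual software:

```python
from dataclasses import dataclass


@dataclass
class Shelf:
    location: tuple  # (row, col) slot in the storage grid
    items: set       # SKUs currently stored on this shelf


class PickStation:
    """Stand-in for the human worker's station."""

    def pick(self, shelf, sku):
        # In the real system, a laser marks the item to pick.
        print(f"pick {sku} from shelf at {shelf.location}")


class WarehouseRobot:
    """Toy model of a Kiva-style drive unit."""

    def __init__(self, grid):
        self.grid = grid  # maps (row, col) -> Shelf or None

    def find_shelf(self, sku):
        # Locate the shelf holding the requested item.
        for slot, shelf in self.grid.items():
            if shelf and sku in shelf.items:
                return shelf
        raise LookupError(f"no shelf holds {sku!r}")

    def fulfill(self, sku, pick_station):
        shelf = self.find_shelf(sku)
        slot = shelf.location
        self.grid[slot] = None          # lift the shelf onto the robot's back
        pick_station.pick(shelf, sku)   # human picks the marked item
        shelf.items.discard(sku)
        self.grid[slot] = shelf         # return the shelf to the grid


grid = {(0, 0): Shelf((0, 0), {"book", "mug"}), (0, 1): None}
WarehouseRobot(grid).fulfill("mug", PickStation())
```

A real deployment adds fleet coordination and path planning across hundreds of robots sharing the grid; the sketch shows only the single-robot loop of locating, lifting, delivering, and returning a shelf.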

The Robot Operating System

To function, robots need an operating system that can distill high-level instructions down to the hardware, just as standard computers need to communicate with their hard drives and displays. Robots must pass information to their components, such as arms, cameras, and wheels. In 2007, Scott Hassan, an early Google engineer who previously worked with Larry Page and Sergey Brin, started Willow Garage to advance robotics. The team developed the Robot Operating System (ROS) for its own robots, one of which was the Personal Robot 2 (PR2). Ultimately, Willow Garage shared the open-source operating system with other companies before closing its doors in 2014.*

The PR2 had two strong arms that could perform delicate tasks like turning the page of a book. It contained pressure sensors in the arms, as well as stereo cameras, a light detection and ranging (LIDAR) sensor, and inertial measurement sensors.* These sensors provided the data the robot needed to navigate complex environments, and Willow Garage developed ROS to interpret the signals from these sensors and to control the robot's hardware.

Figure: Personal Robot 2.

ROS included a middleware layer that handled communication between the software written by developers and the hardware, as well as software for object recognition and many other tasks.* It provided a standard platform for programming different hardware and a growing array of packages that gave robots new capabilities, including libraries and algorithms for vision, navigation, and manipulation.

ROS enabled hobbyists and researchers to develop applications on top of hardware far more easily. With ROS, robots have played instruments, controlled high-flying acrobatic machines, walked, and folded laundry.* Today, ROS is developed and used by many other hardware businesses, including self-driving car companies. The newest version, ROS 2.0, adds many capabilities, including real-time control and the ability to manage multiple robots. As these systems improve, we may eventually have robots doing our household chores.
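
The publish/subscribe model at the heart of ROS is easy to see in code. Below is a minimal sketch of a node written with ROS 2's Python client library, rclpy; the topic name ("chatter") and the one-second rate are arbitrary choices for illustration:

```python
import rclpy
from rclpy.node import Node
from std_msgs.msg import String


class Talker(Node):
    """Publishes a greeting once per second on the 'chatter' topic."""

    def __init__(self):
        super().__init__('talker')
        # Queue depth of 10: messages buffer if subscribers lag behind.
        self.publisher = self.create_publisher(String, 'chatter', 10)
        self.timer = self.create_timer(1.0, self.tick)
        self.count = 0

    def tick(self):
        msg = String()
        msg.data = f'hello #{self.count}'
        self.count += 1
        self.publisher.publish(msg)


def main():
    rclpy.init()
    node = Talker()
    try:
        rclpy.spin(node)  # hand control to the ROS event loop
    finally:
        node.destroy_node()
        rclpy.shutdown()


if __name__ == '__main__':
    main()
```

Any node that subscribes to chatter, whether a logger or a motion planner, receives these messages without either program knowing the other's internals. That decoupling is what lets ROS packages written by different authors work together on the same robot.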

Machine Learning and Robotics

Will robots inherit the earth? Yes, but they will be our children.

Marvin Minsky*

Robots cannot yet operate reliably in people's homes and labs, nor can they dependably manipulate and pick up objects.* If we are to have robots in our day-to-day lives, we must create robots that can robustly detect and localize objects, handle and move them, and change the environment the way we want. We need robots that can pick up coffee cups, serve us, peel bananas, or simply walk around without tripping or hitting walls. The problem is that human surroundings are complex, and today's robots cannot pick up most objects. If you ask a robot to pick up something it has never seen before, it almost always fails. To reach that goal, it must solve several difficult problems.

For example, if you ask a robot to pick up a ruler, it first needs to determine which object is the ruler and where it is, and then calculate where to place its gripper based on that information. Or, if you want a robot to pick up a cup of coffee, it must decide where to grip the cup: picked up by its bottom edge, the cup might tip over and spill. So robots need to pick up different objects at different points.
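
Sketched as a pipeline, the problem has three stages: detect, localize, and plan the grasp. In the toy Python below, every function is a hypothetical stand-in (a trained detector, a pose estimator, a grasp planner), precisely because no off-the-shelf implementation handles arbitrary objects reliably:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Pose:
    centroid: tuple                  # (x, y, z) center of the object, meters
    handle: Optional[tuple] = None   # grasp feature, if the object has one


@dataclass
class Grasp:
    position: tuple  # where to place the gripper
    approach: tuple  # unit vector: direction to close from


def detect(image, label):
    """Stand-in for an object detector: returns a 2D bounding box."""
    return (120, 80, 60, 40)  # toy (x, y, width, height) in pixels


def localize(box, depth):
    """Stand-in for pose estimation: 2D box + depth image -> 3D pose."""
    return Pose(centroid=(0.4, 0.1, 0.9), handle=(0.45, 0.1, 0.95))


def plan_grasp(pose, label):
    """Choose WHERE to grip; the right point depends on the object."""
    if label == "coffee cup" and pose.handle:
        return Grasp(pose.handle, (0, 1, 0))  # grip the handle, from the side
    return Grasp(pose.centroid, (0, 0, -1))   # default: grip from above


def pick_up(label):
    box = detect(image=None, label=label)     # toy inputs stand in for sensors
    pose = localize(box, depth=None)
    grasp = plan_grasp(pose, label)
    print(f"move gripper to {grasp.position}, approach {grasp.approach}")


pick_up("coffee cup")  # grips the handle so the cup does not tip
pick_up("ruler")       # grips the centroid from above
```

Each stage is an open research problem in its own right, and errors compound: a detector that is slightly wrong hands the pose estimator a bad box, and a bad pose makes even a well-chosen grasp strategy miss.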
