Contributed by Bob Chabot
Onboard Supercomputers Accelerate Automobile Development
Deep learning and artificial intelligence move autonomous driving forward
The auto industry is on the threshold of introducing advanced autonomous driving, much sooner than many expected just a few years ago. While advanced driver assistance systems, software and algorithms have been capable of managing "structured" data — the traditional bits and bytes — for some time, onboard vehicle computers lacked sufficient processing speed to manage "unstructured" data (such as images, speech and video) quickly enough to ensure safety and reliability. In addition, they lacked the artificial intelligence (AI) to learn and adapt "on the go" in ever-changing driving environments. That's changed significantly in the past year.
When NVIDIA Corp. introduced its vehicle supercomputer — DRIVE PX 2 — at the 2016 Consumer Electronics Show last January, few realized the industry was on the cusp of change. Priced at under $10,000, the PX 2 combines 12 central processing unit (CPU) cores, four graphics processing units (GPUs) and other technology that together can perform between 8 trillion and 24 trillion operations per second inside a single vehicle. The supercomputer also features AI, which allows it to dynamically self-train in image recognition and dynamic mapping — a process also known as "deep learning" — that boosts safety, speeds up traffic flow and makes fully autonomous driving possible sooner.
NVIDIA's end-to-end solution consists of NVIDIA DIGITS, DRIVE PX 2 and DRIVENET. Combined, they facilitate self-training by the deep neural network, as well as deployment of that network's output both inside the vehicle and via the cloud. Of note:
- DIGITS is a tool for developing, training and enabling neural networks.
- DRIVE PX 2 deploys the output of those networks in a vehicle.
- DRIVENET is NVIDIA's proprietary cloud-based deep neural network. It has the equivalent of 37 million neurons and has learned to recognize more than 120 million objects so far.
- NVIDIA says automakers and other companies can develop and control their own neural network and/or leverage DRIVENET as well.
Huang explained that DRIVE PX 2 uses between one and four onboard GPUs (depending on the number of operations an automaker needs) and an external cloud-based data center to facilitate deep learning in two ways. First, the external data center can process, interpret and instantly communicate (i.e., train on) massive volumes of structured and unstructured data. Second, the onboard supercomputer senses and interprets real-time data in conjunction with previous training to quickly and accurately combine imagery and other data. (All images — NVIDIA Corp.)
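The division of labor Huang describes — train in the data center, deploy and infer onboard — can be sketched in a few lines. The Python below is purely illustrative: the function names, the perceptron stand-in for a deep network, and the toy "obstacle" features are all assumptions, not NVIDIA's API.

```python
# Minimal sketch of the cloud-train / onboard-infer split.
# "Cloud" side: fit a model to labeled data. "Vehicle" side: run the
# frozen model against live sensor readings. All names are illustrative.

def cloud_train(samples, labels, epochs=50, lr=0.1):
    """Perceptron-style training, standing in for data-center deep learning."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0
            err = y - pred
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def onboard_infer(model, sensor_reading):
    """In-vehicle inference: apply the trained weights to real-time data."""
    w, b = model
    return 1 if (w[0] * sensor_reading[0] + w[1] * sensor_reading[1] + b) > 0 else 0

# Train offline on toy "clear road vs. obstacle" features, deploy onboard.
data = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
labels = [0, 0, 1, 1]  # 1 = obstacle present
model = cloud_train(data, labels)
print(onboard_infer(model, (0.85, 0.9)))  # prints 1 (obstacle)
```

In a real deployment the heavy lifting (training on massive fleets of driving data) stays in the data center, while only the compact trained model ships to the vehicle — which is why an onboard supercomputer can stay lunchbox-sized.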
GPUs are Changing the Auto Industry
"DRIVE PX 2 is the world's first onboard artificial intelligence supercomputer that requires less space than a school lunchbox, yet has computational processing power equivalent to 150 13-inch MacBook Pro laptops, but at a fraction of the cost," explained NVIDIA CEO Jen-Hsun Huang. "Since its introduction, more than 50 early access development partners have adopted this AI platform for autonomous driving applications. These partners include automakers (Audi, BMW, Daimler, Ford and Volvo), Tier One suppliers, intelligent transportation firms, software developers and research institutions."
"There's no possible way to write conventional software that can handle the infinite number of things that can happen while driving," added Danny Shapiro, NVIDIA's Senior Director of Automotive. "For example, no ECU from an ADAS system can have code written for it to fully enable autonomous vehicle functionality. Instead of traditional computer vision algorithms, the supercomputer leverages artificial intelligence as the way forward.
"The combination of a supercomputer AI platform in the vehicle (as opposed to distributed computing at each sensor) plus deep learning software (which enables the onboard artificial intelligence network to be trained) puts fully autonomous driving within our grasp."
Understanding the World the Way Human Drivers Do
Low-cost cameras and sensors are giving cars the ability to take in huge amounts of information. DRIVE PX 2's advanced computer vision technology and 'stackable' GPUs form an artificial neural network that learns to see patterns and to recognize and identify objects across a broad array of situations. In essence, these technologies transform data into 3D digital maps that vehicles can use to navigate the world around them.
Deep Learning Software: What is it and How Does it Work?
In its list of the 10 most important breakthrough technologies, MIT Technology Review describes deep learning as follows: "Deep-learning software attempts to mimic the activity in layers of neurons in the human neocortex, the wrinkly 80 percent of the brain where thinking occurs. The software learns, in a very real sense, to recognize and remember sounds, images, and other data patterns as digital representations within an artificial neural network."
"The new DRIVE PX 2 supercomputer, when paired with vision technologies, gives vehicles an uncanny level of self-awareness," Huang noted. Specifically, the deep learning AI capabilities enable it to:
- Quickly learn how to address the challenges of everyday driving, such as unexpected road debris, erratic drivers and construction zones.
- Address problem areas where traditional computer vision techniques are insufficient, such as poor weather conditions (e.g. rain, snow and fog) or difficult lighting conditions (e.g. sunrise, sunset and extreme darkness).
- Simultaneously process multiple inputs (12 video cameras, lidar, radar, ultrasonic and other sensors) to accurately detect and identify objects, determine where the vehicle is relative to the world around it, and then calculate its optimal path for safe travel.
- Address the full breadth of autonomous driving algorithms, including sensor fusion, localization and path planning using DRIVE PX 2's multi-precision GPU architecture capable of up to 24 trillion operations per second.
- Provide drivers with complete 360-degree situational awareness. NVIDIA demonstrated its new dashboard component, which leverages data to inform drivers about their surrounding driving environment, whether or not in fully automated mode.
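The detection, localization and path-planning steps above form a sense → fuse → localize → plan loop. A heavily simplified sketch of that flow follows; the function names and toy data are invented for illustration, standing in for the deep networks a real stack would run at each stage.

```python
# Hedged sketch of a sense -> fuse -> localize -> plan loop.
# Data shapes and logic are illustrative assumptions, not production code.

def fuse(camera_hits, radar_hits):
    """Sensor fusion: keep objects that camera and radar agree on."""
    return sorted(set(camera_hits) & set(radar_hits))

def localize(gps_estimate, map_correction):
    """Refine a coarse GPS fix with a map-relative correction (meters)."""
    return (gps_estimate[0] + map_correction[0],
            gps_estimate[1] + map_correction[1])

def plan_path(obstacles, lanes=(-1, 0, 1)):
    """Path planning: choose the first lane not blocked by an obstacle."""
    blocked = {lane for _, lane in obstacles}
    return next(lane for lane in lanes if lane not in blocked)

camera = [("car", 0), ("sign", 1)]       # (object, lane) detections
radar = [("car", 0), ("truck", -1)]
obstacles = fuse(camera, radar)           # only ("car", 0) is confirmed
pos = localize((100.0, 50.0), (0.4, -0.2))
lane = plan_path(obstacles)               # lane -1 is the first clear lane
```

The point of the sketch is the architecture, not the logic: each stage consumes the previous stage's output, which is why a single centralized supercomputer (rather than distributed computing at each sensor) simplifies the pipeline.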
"The increasing processing power enables self-driving cars to know enough about their environment — and interpret it — to safely drive in traffic," Huang continued. "Deep learning enables a neural network to discern many levels of abstraction, ranging from simple concepts to complex ones. Each layer categorizes some kind of information. Deep learning stacks these layers. It refines information and passes it along to the next level. This enables the supercomputer to learn what computer scientists call a 'hierarchical representation.'"
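Huang's "hierarchical representation" — layers that each abstract their input a little more and pass the result upward — can be illustrated with a toy stack of hand-written layers. This is an assumption-laden illustration only: a real deep network learns these transformations from data rather than hard-coding them.

```python
# Toy illustration of stacked layers, each more abstract than the last.
# All thresholds and layer rules are invented for illustration.

def layer_edges(pixels):
    """Lowest layer: turn raw intensities into edge strengths (differences)."""
    return [abs(b - a) for a, b in zip(pixels, pixels[1:])]

def layer_shapes(edges, threshold=0.5):
    """Middle layer: group strong edges into candidate shape boundaries."""
    return [i for i, e in enumerate(edges) if e > threshold]

def layer_object(boundaries):
    """Top layer: a crude concept — 'object' if enough boundaries found."""
    return "object" if len(boundaries) >= 2 else "background"

def hierarchy(pixels):
    # Deep learning stacks layers: each output becomes the next input.
    return layer_object(layer_shapes(layer_edges(pixels)))

print(hierarchy([0.0, 0.9, 0.1, 0.8, 0.2]))  # prints "object"
print(hierarchy([0.1, 0.1, 0.1]))            # prints "background"
```

Each function refines the previous level's output — pixels to edges, edges to shapes, shapes to a concept — which is the "hierarchical representation" the quote describes.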
DRIVE PX 2 is built to turn the information captured by the lidar, radar and cameras mounted all around a vehicle into self-awareness. Currently, Audi, BMW, Daimler, Ford, and Volvo are all using NVIDIA's DRIVE PX to develop autonomous driving systems.
Watch the video to see NVIDIA's Huang explain the technologies underlying DRIVE PX 2. "The vehicle of the future is going to be both software defined and remotely updateable instantaneously. It will leverage radar, ultrasound, cameras and other technologies to build a dynamic environment model around the car to keep it situationally aware and capable of planning a safe path."
An Open Platform for the Automotive Industry to Build Upon
"I am not aware of any other company that has a supercomputer built specifically for the automobile that also has an end-to-end deep learning training and implementation," Huang advised. "Each of our early access partners has the flexibility to design their vehicle around a single centralized supercomputer, combining different blends of sensors. Our partners are doing research and experimenting; they are free to do development and fine tuning that they can't do with a black box."
"We've already received a lot of feedback from automakers, Tier One suppliers and university researchers," Shapiro shared. "They've wanted more onboard computing power for some time, but did not want to allocate limited and valuable space to house four or five different PCs inside the vehicle. DRIVE PX 2 is a scalable system that gives automakers and other users flexibility: The base level unit has one GPU that is capable of handling up to 8 trillion deep learning operations per second, while the four-GPU system is capable of 24 trillion deep learning operations a second, in a unit smaller than a lunchbox."
According to Huang and Shapiro, in addition to fully autonomous driving, there are other automotive applications that an onboard supercomputer featuring AI and deep learning neural networks can fulfill. They cited three examples:
- In combination, smart cities and vehicle-to-infrastructure communications could resolve traffic congestion problems. Instead of hard-coding signal timings, a deep learning system could learn, in real time, how to optimize those timings to reduce congestion and re-route traffic.
- Design simulation is another application. A deep learning neural network could also be used to optimize engineering designs. For instance, it could create models simulating different engine designs and combustion events to arrive at more efficient engines.
- Another example is cyber security, as people are increasingly concerned about vehicles being hacked. The system could use deep learning to monitor network traffic and detect hacking anomalies. Should a breach be detected, the system would be able to shut it down.
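The cyber-security idea in the last example — learn what normal traffic looks like, then flag sharp deviations — reduces, in its most basic form, to statistical anomaly detection. The sketch below is a hedged illustration (the message rates and the three-sigma rule are assumptions for demonstration, not a real intrusion-detection product):

```python
# Illustrative anomaly detection on in-vehicle network traffic.
# "Training" here is just summarizing normal message rates; a deep
# learning system would model far richer traffic features.
import statistics

def learn_baseline(message_rates):
    """Summarize normal traffic as its mean and spread."""
    return statistics.mean(message_rates), statistics.stdev(message_rates)

def is_anomaly(rate, baseline, sigmas=3.0):
    """Flag traffic more than `sigmas` standard deviations from normal."""
    mean, stdev = baseline
    return abs(rate - mean) > sigmas * stdev

normal = [100, 102, 98, 101, 99, 100, 103, 97]  # messages/sec while driving
baseline = learn_baseline(normal)
print(is_anomaly(101, baseline))  # prints False: typical rate
print(is_anomaly(500, baseline))  # prints True: flood of messages
```

Once an anomaly is flagged, the response the article describes — shutting the breach down — would be a separate policy layer on top of the detector.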
"We cannot realize the full potential of our vision unless we can solve city driving," Huang cautioned. "It doesn't matter how well we program a vehicle's maps and sensors, if the vehicle doesn't know how to deal with changes on the fly, such as when a child jumps out on the road, or when people don't follow traffic rules. The automotive partners we are working with are making this potential a reality, by developing self-driving cars that train themselves over time for all sorts of unexpected scenarios. It's the making of a brave new world."
[Editor's note: Read MOTOR Magazine's August 2016 issue for the latest automotive diagnostic and service insights.]