Gaming

Nvidia Once Ruled Video Games, Now It May Rule Self-Driving Cars

From 3D worlds to IRL ones.

by Mike Brown

Nvidia is a name that has cropped up a lot in the world of autonomous cars, and rightfully so. While the chipmaker is better known for producing high-end graphics hardware to power pixel-pushing gameplay, it has also started providing the hardware and software that could help bring about the future of driving.

“It’s a cool story how we ended up there,” Bea Longworth, head of corporate, enterprise and automotive PR at Nvidia, tells Inverse. “Our CEO likes to describe it as ‘serendipity meets destiny’.”

Although Nvidia was founded in 1993, it was the release of the GeForce 256 in 1999 that marked the company’s first step toward autonomous driving. The company described the GeForce 256 as the world’s first “graphics processing unit.” Nvidia defines the term as “a single chip processor with integrated transform, lighting, triangle setup/clipping, and rendering engines, that is capable of processing a minimum of 10 million polygons per second.” The chip was lauded by industry professionals and used in 3D video games — think Quake III Arena — up until 2006:

Quake III Arena

“The sheer power of Nvidia’s next generation GPU gives us greater freedom when designing characters and worlds,” Darren Falcus, vice president and general manager of Acclaim Studios Teesside, said at the time of the chip’s introduction. “The technology not only allows us to incorporate unbelievably detailed visuals, but it also offers the ability to add more robust artificial intelligence, level design, and more.”

The computing problems Nvidia was solving with the GPU turned out to be notably similar to those involved in artificial intelligence. Producing 3D graphics requires a processor that can perform many calculations at once, an approach known as parallel processing. “Deep learning,” a form of A.I. useful for teaching computers how to drive, has the same requirement.
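
To see why the two workloads overlap, here is a minimal Python sketch (assuming only NumPy) of the shared arithmetic: transforming game vertices and running a neural-network layer both boil down to applying one matrix operation to millions of inputs at once, exactly the kind of work a GPU spreads across its cores.

```python
# A minimal sketch of why GPUs suit both graphics and deep learning:
# both workloads apply the same arithmetic to huge batches of data at once.
# Assumes only NumPy; on a real GPU, a library such as PyTorch or CUDA
# would run these operations across thousands of cores in parallel.
import numpy as np

# Graphics-style workload: transform a million 3D vertices with one matrix.
vertices = np.random.rand(1_000_000, 3)          # x, y, z positions
rotation = np.array([[0.0, -1.0, 0.0],
                     [1.0,  0.0, 0.0],
                     [0.0,  0.0, 1.0]])           # 90-degree rotation about z
transformed = vertices @ rotation.T               # one multiply, all vertices

# Deep-learning-style workload: push a batch of inputs through one layer.
batch = np.random.rand(1_000_000, 3)              # e.g. sensor readings
weights = np.random.rand(3, 16)                   # a 3-in, 16-out layer
activations = np.maximum(batch @ weights, 0.0)    # matrix multiply + ReLU

print(transformed.shape, activations.shape)       # (1000000, 3) (1000000, 16)
```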

An autonomous car needs to take in information from several different cameras (Teslas have eight) and, sometimes, lidar, radar, and ultrasonic sensors. It needs to make split-second decisions based on all this information, and it needs to learn from its experiences to improve. Over the years, Nvidia had been developing chips for one use — navigating 3D worlds in video games — that turned out to be very useful for a totally different scenario.
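
As a rough illustration of that decision problem, here is a toy Python sketch, not any carmaker’s actual pipeline; the sensor fields, thresholds, and function names are invented for the example.

```python
# An illustrative sketch of the fusion problem described above: several
# sensors report at once, and the car must reconcile them into a single
# split-second decision. All names and thresholds here are hypothetical.
from dataclasses import dataclass

@dataclass
class SensorFrame:
    camera_sees_obstacle: bool   # from one of, e.g., eight cameras
    radar_distance_m: float      # distance to nearest object ahead
    ultrasonic_near_field: bool  # close-range, parking-style sensor

def decide(frame: SensorFrame) -> str:
    """Combine independent sensor readings into one driving action."""
    # Agreement between sensors raises confidence; proximity favors caution.
    if frame.ultrasonic_near_field or frame.radar_distance_m < 5.0:
        return "brake"
    if frame.camera_sees_obstacle and frame.radar_distance_m < 30.0:
        return "slow_down"
    return "maintain_speed"

print(decide(SensorFrame(camera_sees_obstacle=True,
                         radar_distance_m=20.0,
                         ultrasonic_near_field=False)))  # slow_down
```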

At the Consumer Electronics Show in 2015, Nvidia entered the fray. The Drive PX computer used two Tegra X1-based chips to deliver one teraflop of power. By comparison, the Xbox One has 1.31 teraflops of graphics power. Nvidia didn’t stand still, though: at the following year’s conference, it announced the Drive PX 2 with a staggering eight teraflops of power.

The Drive PX 2. (Nvidia)

The Drive PX 2 is a performance beast. The company claims it has enough processing power to deliver so-called level 3 autonomy — meaning the car can make actual driving decisions rather than just following the road, though the driver must remain ready to take back control when the system requests it.

In October, Tesla switched to using the Drive PX 2 in its vehicles, pairing it with Tesla’s own software to power the semi-autonomous Autopilot mode. In the future, Tesla wants to upgrade that software to support fully autonomous driving from A to B. Currently shipping Teslas only include a single Drive PX 2, though, and Tesla CEO Elon Musk has hinted in the past that the company may need to upgrade the onboard computer to enable the feature.

A Tesla Model S with Autopilot driving the car.

Prior to this rollout, Tesla was using chips from a company called Mobileye. Intel announced plans to buy Tesla’s former partner in March for $15.3 billion. Since breaking up with Tesla, Mobileye has struck deals with BMW and Volkswagen to collaborate on map technology.

“Mobileye is really focused on computer vision,” Longworth says. “The way that our approach differs is that we’re really focused on using artificial intelligence to solve the whole problem, because we believe that that’s a more flexible approach.”

Computer vision is all about teaching a computer to understand pictures and video. To explain how this differs from Nvidia’s approach, Longworth gives the example of a car learning to understand street signs. Under a computer vision system, developers hand-code the rules for recognizing street signs and acting on them. If the car were taken to a country with unfamiliar street signs, that coding would have to start over from scratch before deployment. With Nvidia’s deep learning approach, the developers just retrain the system on the new signs.
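
Here is a toy Python sketch of that contrast, assuming scikit-learn is available; the features, sign types, and functions are invented for illustration and are not Nvidia’s or Mobileye’s actual code. The rule-based recognizer needs new code for every new country, while the learned one only needs new training examples.

```python
# Toy contrast between a hand-coded recognizer and a retrainable model.
# Everything here is hypothetical; real systems learn from raw pixels.
from sklearn.neighbors import KNeighborsClassifier

# Hand-coded approach: explicit rules keyed to known sign appearances.
def rule_based_sign(color: str, shape: str) -> str:
    if color == "red" and shape == "octagon":
        return "stop"            # works in the US...
    if color == "red" and shape == "triangle":
        return "yield"
    return "unknown"             # ...but fails on unfamiliar foreign signs

# Learned approach: the same model, retrained on new examples.
# Features here are made up (hue, edge count) for the sketch.
us_signs = [[0.9, 8], [0.9, 3]]
us_labels = ["stop", "yield"]
model = KNeighborsClassifier(n_neighbors=1).fit(us_signs, us_labels)

# Deploying abroad: no new code, just new training data.
foreign_signs = us_signs + [[0.2, 4]]            # hypothetical new sign type
foreign_labels = us_labels + ["no_entry"]
model = KNeighborsClassifier(n_neighbors=1).fit(foreign_signs, foreign_labels)
print(model.predict([[0.2, 4]]))                 # ['no_entry']
```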

Nvidia's illustration of the system in action.

Mobileye disagrees. When Nvidia unveiled the Drive PX 2 in 2016, Mobileye co-founder Amnon Shashua dismissed the $10,000 computer as overpriced and lacking real-world benefits.

“I think it is not relevant to our space,” Shashua said at the time. “It is nice if you are a professor with a few students.”

The progress on both sides sounds exciting, but with the fully autonomous car still in development, Nvidia’s work is far from done. Computer chips are always improving, but the company would not reveal when we can expect a Drive PX 3.

“We don’t comment on unannounced products,” Longworth says. “However, I would say that one of the things we brought from the gaming part of our business is we don’t sit around for long. With the graphics cards that go in your PC, we have a pretty aggressive schedule of new products coming out, and that’s a little bit different from the way the automotive industry traditionally works.”

Nvidia is just one of the many forces changing how the car industry works.
