Building driverless cars is a slow and expensive business. After years of effort and billions of dollars of investment, the technology is still stuck in the pilot phase. Raquel Urtasun thinks she can do better. 

Last year, frustrated by the pace of the industry, Urtasun left Uber, where she led the ride-hailing firm’s self-driving research for four years, to set up her own company, called Waabi. “Right now most approaches to self-driving are just too slow to make progress,” says Urtasun, who divides her time between the driverless-car industry and the University of Toronto. “We need a radically different one.”

Waabi has now revealed the controversial new shortcut to autonomous vehicles that Urtasun is betting on. The big idea? Ditch the cars.

For the last six months Waabi has been building a super-realistic virtual environment, called Waabi World. Instead of training an AI driver in real vehicles, Waabi plans to do it almost entirely inside the simulation. The plan is that the AI won’t be tested in real vehicles on real roads until a final round of fine-tuning. 

The problem is that for an AI to learn to handle the chaos of real roads, it has to be exposed to the full range of events that it might encounter. That’s why driverless-car firms have spent the last decade driving millions of miles on streets around the world. A few, like Cruise and Waymo, have begun testing vehicles without human drivers in a handful of quiet urban environments in the US. But progress is still slow. “Why haven’t we seen an expansion of these small pilots? Why aren’t those vehicles everywhere?” asks Urtasun.

Urtasun makes bold claims for the head of a company that not only hasn’t road-tested its tech, but doesn’t even have any real vehicles. But by avoiding most of the costs of road-testing the software in real vehicles, she hopes to build an AI driver more quickly and cheaply than her competitors, giving the whole industry a much-needed boost. 

Virtual drivers

Waabi is not the first company to develop realistic virtual worlds to test self-driving software. In the last few years, simulation has become a mainstay for driverless-car firms. But the question is whether simulation alone will be enough to help the industry overcome the final technical barriers that have stopped it from becoming a viable proposition. “No one has yet built the Matrix for self-driving cars,” says Jesse Levinson, cofounder and CTO of Zoox, an autonomous-vehicle startup bought by Amazon in 2020.

In fact, nearly all autonomous-vehicle companies now use simulation in some form. It speeds up testing, exposing the AI to a wider range of scenarios than it would see on real roads, and it cuts costs. But most companies combine simulation with real-world testing, typically looping back and forth between real and virtual roads. 

With Waabi World, Urtasun plans to take the use of simulation to another level. The world itself is generated and controlled by AI, which acts as both driving instructor and stage manager—identifying the AI driver's weaknesses and then rearranging the virtual environment to test them. Waabi World teaches multiple AI drivers different abilities at the same time before combining those abilities into a single skill set. It all happens nonstop and without human input, says Urtasun. 

Rare events

Driverless-car companies use simulation to help them test how the neural networks controlling the vehicles handle rare events—a bike courier cutting in front, a truck the color of the sky blocking the way, or a chicken crossing the road—and then tweak them accordingly.

“When you have an event that happens rarely, it takes thousands of road miles to test it properly,” says Sid Gandhi, who works on simulation at Cruise, a company that’s begun testing fully autonomous vehicles on a limited number of roads in San Francisco. That’s because rare—or long-tail—events might happen only one time in a thousand. “As we work on solving the long tail, we’ll rely less and less on real-world testing,” he says.

Each time Cruise upgrades its software, it runs hundreds of thousands of simulations to test it. According to Gandhi, the firm will generate thousands of scenarios based on specific real-world situations that its cars have trouble with, tweaking details to cover a range of potential scenarios. It can also use real-world camera data from its cars to make the simulations more realistic.

Engineers can then change the road layouts, swap in different types of vehicles, or change the number of pedestrians. Finally, Cruise uses its own self-driving algorithms to control other vehicles in the simulation so that they react realistically. Testing with this kind of synthetic data is 180 times faster and millions of dollars cheaper than using real data, says Gandhi.
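The parameter sweep Gandhi describes—taking one troublesome real-world situation and tweaking its details to cover a range of nearby scenarios—can be sketched roughly as follows. The scenario fields and value ranges here are invented for illustration; they are not Cruise's actual schema.

```python
import random
from dataclasses import dataclass, replace

# Hypothetical scenario description. Field names and ranges are
# illustrative assumptions, not any company's real format.
@dataclass(frozen=True)
class Scenario:
    num_pedestrians: int
    lead_vehicle: str      # type of vehicle ahead
    cut_in_gap_m: float    # gap (metres) at which another road user cuts in
    road_layout: str

def generate_variants(base: Scenario, n: int, seed: int = 0) -> list:
    """Sweep the parameters of one troublesome base scenario to
    cover a range of nearby situations."""
    rng = random.Random(seed)
    variants = []
    for _ in range(n):
        variants.append(replace(
            base,
            num_pedestrians=rng.randint(0, 20),
            lead_vehicle=rng.choice(["truck", "bus", "sedan", "motorcycle"]),
            cut_in_gap_m=rng.uniform(1.0, 8.0),
        ))
    return variants

base = Scenario(num_pedestrians=3, lead_vehicle="truck",
                cut_in_gap_m=4.0, road_layout="4-way intersection")
variants = generate_variants(base, n=1000)
```

Each variant keeps the structure of the original hard case (here, the road layout) while randomizing the details around it, which is the cheap part of testing that simulation makes possible.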

Cruise is also experimenting with virtual replicas of US cities other than San Francisco, says Gandhi, to test its self-driving software on simulated streets long before its real cars hit the road in those places.

Other firms agree that simulation is a crucial part of training and testing AI for autonomous driving. “In many ways simulation is actually more useful than real driving,” says Levinson.

Wayve, a UK-based autonomous-vehicle firm, also alternates between testing in simulation and testing on real roads. It has been testing its cars on busy streets in London, but with a human in the car at all times. Simulation not only accelerates the development of autonomous vehicles by reducing the cost of testing but can make that testing more reliable, says Jamie Shotton, chief scientist at Wayve. That’s because simulations make it easier to repeat tests many times. “The key to a successful simulation is continually working to increase both its realism and its diversity,” he says.

Even so, Waabi outstrips others in how far it claims it can go with simulation alone. Like Cruise, Waabi bases its virtual world on data taken from real sensors, including lidar and cameras, which it uses to create digital twins of real-world settings. Waabi then simulates the sensor data seen by the AI driver—including reflections on shiny surfaces, which can confuse cameras, and exhaust fumes or fog, which can throw off lidar—to make the virtual world as realistic as possible.
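To make one of those sensor effects concrete: fog scatters a lidar pulse on both the outbound and return trip, weakening the signal. The sketch below models that with simple Beer-Lambert attenuation and an assumed extinction coefficient; Waabi's actual sensor simulation is unpublished and far more detailed than this.

```python
import math
import random

def foggy_lidar_return(intensity, range_m, extinction=0.05, rng=None):
    """Attenuate a lidar return travelling through fog and add noise.

    extinction: fog density coefficient per metre (an assumed value,
    chosen for illustration only).
    """
    rng = rng or random.Random(0)
    # Two-way path through the fog: out to the target and back.
    attenuated = intensity * math.exp(-2 * extinction * range_m)
    noise = rng.gauss(0, 0.01)  # simple sensor noise
    return max(0.0, attenuated + noise)

# A target 30 m away in clear air versus in fog.
clear = foggy_lidar_return(1.0, 30.0, extinction=0.0)
fog = foggy_lidar_return(1.0, 30.0, extinction=0.05)
```

An AI driver trained only on clean returns would never learn that the faint `fog` signal still marks a real obstacle, which is why simulating these degradations matters.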

But the key player in Waabi World is its god-like driving instructor. As the AI driver learns to navigate a range of environments, another AI learns to spot its weaknesses and generates specific scenarios to test them.

In effect, Waabi World plays one AI against another, with the instructor learning how to make the driver fail by throwing tailor-made challenges at it, and the driver learning how to beat them. As the driver AI gets better, it gets harder to find cases where it will fail, says Urtasun: “You will need to expose it to millions, perhaps billions, of scenarios in order to find your flaws.”
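The adversarial loop Urtasun describes can be sketched in miniature. In this toy version—scenario names, skill numbers, and update rules all invented for illustration—the instructor keeps targeting whichever scenario it estimates the driver fails most often, and the driver improves a little each time it is exposed to a failure.

```python
import random

# Invented scenario types for the toy example.
SCENARIOS = ["cut-in", "fog", "jaywalker", "glare", "merge"]

def run_episode(driver_skill, scenario, rng):
    """The driver succeeds with probability equal to its skill."""
    return rng.random() < driver_skill[scenario]  # True = success

def training_loop(steps=2000, seed=0):
    rng = random.Random(seed)
    driver_skill = {s: 0.2 for s in SCENARIOS}   # success probabilities
    failure_rate = {s: 0.5 for s in SCENARIOS}   # instructor's estimates
    for _ in range(steps):
        # Instructor: target the scenario it believes the driver fails most.
        scenario = max(failure_rate, key=failure_rate.get)
        success = run_episode(driver_skill, scenario, rng)
        # Instructor updates its failure estimate from the outcome.
        outcome = 0.0 if success else 1.0
        failure_rate[scenario] += 0.05 * (outcome - failure_rate[scenario])
        # Driver: each failure it is shown teaches it a little.
        if not success:
            driver_skill[scenario] = min(0.99, driver_skill[scenario] + 0.01)
    return driver_skill

skills = training_loop()
```

As each weakness is patched, the instructor's attention rotates to the next-worst scenario—the dynamic Urtasun describes, where finding new failures gets progressively harder as the driver improves.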

Urtasun thinks that training the driver in a rich simulation more closely replicates the way people learn new skills. “Every time that we experience something,” she says, “we rewire our brain.”

Training AI in a simulation by pitting it against itself or an adversary—millions and millions of times—has become a very powerful technique. It is how DeepMind trained its AI to play Go and StarCraft; it is also how AI bots learn in virtual playgrounds like DeepMind's XLand and OpenAI's Hide & Seek, which teach basic but general skills via trial and error.

But a downside of giving AI free rein in a simulation is that it can learn to exploit loopholes not seen in the real world. OpenAI’s Hide & Seek bots learned to cooperate in teams to hide from—or find—others. But they also found glitches in the simulation that let them defy physics by launching themselves into the air or pushing objects through walls.

Waabi will need to make sure its simulation is accurate enough to stop its AI driver from learning such bad habits. Neural networks will always learn to exploit discrepancies between virtual and real worlds, says Urtasun: “They know how to cheat.” 

Urtasun says the company has developed ways to measure differences between real and virtual driving environments and keep them as small as possible. She won’t yet give details about this tech but says Waabi has plans to publish its work.
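Waabi has not said how it measures that difference, but a generic illustration of the idea is to compare the distribution of some sensor statistic—say, lidar return intensity—between real and simulated data. The sketch below uses a total-variation distance between histograms; the metric choice and the toy data are assumptions for illustration only.

```python
def histogram(values, bins=10, lo=0.0, hi=1.0):
    """Bin values into a normalized histogram over [lo, hi]."""
    counts = [0] * bins
    for v in values:
        idx = min(int((v - lo) / (hi - lo) * bins), bins - 1)
        counts[idx] += 1
    total = len(values)
    return [c / total for c in counts]

def total_variation(p, q):
    """Total-variation distance between two histograms:
    0 means identical distributions, 1 means completely disjoint."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

# Toy intensity readings standing in for real vs. simulated sensor data.
real = histogram([0.1, 0.2, 0.2, 0.8, 0.9])
sim = histogram([0.1, 0.2, 0.3, 0.7, 0.9])
gap = total_variation(real, sim)
```

Driving this kind of gap toward zero is one plausible way to make it harder for a neural network to "cheat" by exploiting differences between the virtual and real worlds.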

How far Waabi can go using simulation alone will depend on how realistic Waabi World really is. “Simulations are getting better and better, so there are fewer and fewer things that you can learn in real life that you can’t learn in simulation,” says Levinson. “But I think it’s going to be a long time before it’s nothing.”

“It’s important to maintain a healthy balance between simulation and real-world testing,” says Shotton. “The ultimate test for any autonomous-driving company is to get its technology safely deployed on the road, with all the complexities of real hardware.”

Urtasun agrees in principle. “There’s still a need for real-world testing,” she says. “But it’s much, much less.”

Whatever happens, Urtasun is adamant that the status quo cannot continue. “Everybody keeps doing the same thing, even though we haven’t solved the problem,” she says. “We need something that speeds up the process. We need to go all the way with this new way of thinking.”
