Wayve, a driverless-car startup based in London, has made a machine-learning model that can drive two different types of vehicle: a passenger car and a delivery van. It is the first time the same AI driver has learned to drive multiple vehicles.

The news comes less than a year after Wayve showed that it could take AI trained on the streets of London and use it to drive cars in four other cities across the UK, a challenge that would typically require significant re-engineering. “It’s like when you go somewhere new and get a rental car, you can still drive,” says Jeff Hawke, Wayve’s vice president of technology.  

The advance suggests that Wayve’s approach to autonomous vehicles (AVs), in which a deep-learning model is trained to drive from scratch, could help it scale up faster than leading companies like Cruise, Waymo, and Tesla.

Wayve is far smaller than its better-funded competitors. But it is part of a new generation of startups, including Waabi and Ghost, sometimes known as AV2.0, that is ditching the robotics mindset embraced by the first wave of driverless-car firms, in which vehicles rely on super-detailed 3D maps and separate modules for sensing and planning. Instead, these startups rely entirely on AI to drive their vehicles.

The robotics approach has brought robotaxis to a handful of streets in Phoenix and San Francisco, but at enormous cost and with few signs that these services will spread beyond the pilot phase any time soon. Wayve and others hope to change that by repeating with self-driving vehicles what deep learning did for computer vision and natural-language processing: letting the vehicles adapt to unfamiliar streets and scenarios without complex maps to keep up to date or hand-crafted software systems to maintain.

I visited Wayve’s headquarters in London to check out the company’s new Maxus e9 van parked beside its existing fleet of Jaguar I-PACE cars. The van is fitted with the same seven webcam-sized sensors as the cars, but they are positioned higher and at different angles. This means that the input to the model—a video feed from each camera that it monitors around 30 times a second—differs between vehicles, but the AI has learned to control them from either viewpoint. The AI also had to adapt to the van’s larger size and mass. It has a different turning circle, and it takes longer to stop.
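That input contract can be sketched roughly as follows. This is a toy illustration, not Wayve's actual code: every class name and value here (camera heights, angles, the stub outputs) is hypothetical. The point is only that the policy sees a list of camera frames and emits one control command, so the same weights can drive either vehicle even though the cameras sit at different heights and angles.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CameraFrame:
    """One frame from one of the seven cameras (pixel data elided)."""
    camera_id: int
    height_m: float   # mounted higher on the van than on the car
    yaw_deg: float    # cameras also point at different angles

@dataclass
class Control:
    steering: float   # normalized, -1 (full left) to 1 (full right)
    throttle: float   # normalized, 0 to 1
    brake: float      # normalized, 0 to 1

class DrivingPolicy:
    """Vehicle-agnostic policy: same weights for car and van.

    A real model would run a neural network on the pixels around 30
    times a second; this stub only shows the input/output contract.
    """
    def act(self, frames: List[CameraFrame]) -> Control:
        assert len(frames) == 7  # both vehicles carry seven cameras
        return Control(steering=0.0, throttle=0.1, brake=0.0)

policy = DrivingPolicy()
# Illustrative van rig: higher mounting point, spread-out angles.
van_frames = [CameraFrame(camera_id=i, height_m=2.1, yaw_deg=15.0 * i)
              for i in range(7)]
command = policy.act(van_frames)
```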


The car and van may have the same decision-maker behind the wheel, but those decisions need to be carried out in different ways. Under the van’s hood, a jumble of wires and custom-built computer parts translate the model’s commands to the particular vehicle it is controlling.

Wayve’s AI model is trained using a combination of reinforcement learning, in which it learns by trial and error, and imitation learning, in which it copies the actions of human drivers. Training the model to drive a car took thousands of hours of driving data. Teaching it to drive the van, starting in simulation, took just 80 more hours.
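As a rough illustration of how those two training signals can combine, here is a toy one-parameter "policy" on a 1-D lane-keeping task. Nothing in it is Wayve's actual training code: the task, the expert, and both update rules are invented for the sketch. Imitation learning pulls the policy toward a human demonstrator's action, while reinforcement learning perturbs the policy and keeps changes that score better.

```python
import random

# Toy 1-D "driving" task: the policy outputs a steering value and is
# scored on how close the vehicle stays to the lane center (0.0).

def expert_steering(offset: float) -> float:
    """Stand-in for a human driver: steer back toward the center."""
    return -offset

def policy_steering(weight: float, offset: float) -> float:
    """A one-parameter linear policy; a real system uses a deep network."""
    return weight * offset

def imitation_update(weight: float, offset: float, lr: float = 0.1) -> float:
    """Imitation learning: nudge the policy toward the expert's action."""
    error = policy_steering(weight, offset) - expert_steering(offset)
    return weight - lr * error * offset  # gradient step on squared error

def reinforcement_update(weight: float) -> float:
    """Trial and error: perturb the weight, keep it only if it scores better."""
    candidate = weight + random.gauss(0.0, 0.1)
    def score(w: float) -> float:
        # Reward: negative squared distance from center after one step.
        offsets = [-1.0, -0.5, 0.5, 1.0]
        return -sum((o + policy_steering(w, o)) ** 2 for o in offsets)
    return candidate if score(candidate) > score(weight) else weight

random.seed(0)
w = 0.0
for step in range(200):
    offset = random.uniform(-1.0, 1.0)
    w = imitation_update(w, offset)   # copy human drivers
    w = reinforcement_update(w)       # refine by trial and error
# w converges near -1.0: always steer back toward the lane center.
```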

That surprised the team. “When we started this project, we did not know how much data would be required to get the system to generalize,” says Becky Goldman, a scientist at Wayve. But the result suggests that the model can adapt to new vehicles more quickly than expected. Wayve also found that learning to drive a van improved its performance in the car.

Once the model could drive the van as well as the car in simulation, Wayve took it out on the road. Naomi Standard, a safety operator at Wayve, sits in the vehicles while they drive themselves. She admits to being scared during the van’s first run: “I used to feel the same way as a driving instructor when I took a driver out for the first time.” But the van coped well with London’s narrow streets, navigating roadwork, pedestrian crossings, buses, and double-parked cars.  

Jay Gierak at Ghost, which is based in Mountain View, California, is impressed by Wayve’s demonstrations and agrees with the company’s overall viewpoint. “The robotics approach is not the right way to do this,” says Gierak.


But he’s not sold on Wayve’s total commitment to deep learning. Instead of a single large model, Ghost trains many hundreds of smaller models, each with a specialism. It then hand-codes simple rules that tell the self-driving system which models to use in which situations. (Ghost’s approach is similar to that taken by another AV2.0 firm, Autobrains, based in Israel. But Autobrains uses yet another layer of neural networks to learn the rules.)
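A minimal sketch of that architecture, with hypothetical model names and scene flags rather than Ghost's actual code: in place of hundreds of specialists, three stubs stand in for learned models, and a hand-coded rule decides which one drives at any moment.

```python
def highway_model(scene: dict) -> dict:
    """Specialist for highway cruising (stub; illustrative m/s values)."""
    return {"steering": 0.0, "speed": 29.0}

def merge_model(scene: dict) -> dict:
    """Specialist for merging into traffic (stub)."""
    return {"steering": 0.1, "speed": 24.0}

def construction_model(scene: dict) -> dict:
    """Specialist for construction zones (stub)."""
    return {"steering": 0.0, "speed": 15.0}

def choose_model(scene: dict):
    """Hand-coded rules pick which specialist drives right now."""
    if scene.get("construction"):
        return construction_model
    if scene.get("merging_traffic"):
        return merge_model
    return highway_model

scene = {"construction": False, "merging_traffic": True}
command = choose_model(scene)(scene)
```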

According to Volkmar Uhlig, Ghost’s co-founder and CTO, splitting the AI into many smaller pieces, each with specific functions, makes it easier to establish that an autonomous vehicle is safe. “At some point, something will happen,” he says. “And a judge will ask you to point to the code that says: ‘If there’s a person in front of you, you have to brake.’ That piece of code needs to exist.” The code can still be learned, but in a large model like Wayve’s it would be hard to find, says Uhlig.
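What such an inspectable rule might look like, as a hedged sketch with hypothetical names (not Ghost's code): an explicit check layered over whatever the learned model outputs, so the braking behavior lives in a line of code that can be pointed to.

```python
def learned_controls(scene: dict) -> dict:
    """Stand-in for a learned driving model's output."""
    return {"steering": 0.0, "throttle": 0.4, "brake": 0.0}

def apply_safety_rules(scene: dict, controls: dict) -> dict:
    """An explicit rule over the learned model: the code a judge
    could ask to see -- if there's a person in front, brake."""
    if scene.get("person_ahead"):
        return {"steering": controls["steering"],
                "throttle": 0.0,
                "brake": 1.0}
    return controls

scene = {"person_ahead": True}
final = apply_safety_rules(scene, learned_controls(scene))
```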

Still, the two companies are chasing complementary goals: Ghost wants to make consumer vehicles that can drive themselves on freeways; Wayve wants to be the first company to put driverless cars in 100 cities. Wayve is now working with UK grocery giants Asda and Ocado, collecting data from their urban delivery vehicles.

Yet, by many measures, both firms are far behind the market leaders. Cruise and Waymo have racked up hundreds of hours of driving without a human in their cars and already offer robotaxi services to the public in a small number of locations.


“I don’t want to diminish the scale of the challenge ahead of us,” says Hawke. “The AV industry teaches you humility.”
