MIT Professor Jonathan How’s research interests run the gamut of autonomous vehicles, from airplanes and spacecraft to unpiloted aerial vehicles (UAVs, or drones) and cars. He is particularly focused on the design and implementation of distributed robust planning algorithms that coordinate multiple autonomous vehicles navigating dynamic environments.

For the past year or so, the Richard Cockburn Maclaurin Professor of Aeronautics and Astronautics and a team of researchers from the Aerospace Controls Laboratory at MIT have been developing a trajectory planning system that allows a fleet of drones to operate in the same airspace without colliding with each other. Put another way, it is a multi-vehicle collision avoidance project, and it has real-world implications for cost savings and efficiency in a variety of industries, including agriculture and defense.

The test facility for the project is the Kresa Center for Autonomous Systems, an 80-by-40-foot space with 25-foot ceilings, custom designed for MIT’s work with autonomous vehicles — including How’s swarm of UAVs regularly buzzing around the center’s high bay. To avoid collisions, each UAV must compute its path-planning trajectory onboard and share it with the rest of the machines over a wireless communication network.

But, according to How, one of the key challenges in multi-vehicle work is the communication delay associated with exchanging this information. To address the issue, How and his researchers embedded a “perception-aware” function in their system that allows a vehicle to use its onboard sensors to gather fresh information about the other vehicles and then alter its own planned trajectory accordingly. In testing, their algorithmic fix achieved a 100 percent success rate, with collision-free flights across their group of drones. The next step, says How, is to scale up the algorithms, test in bigger spaces, and eventually fly outside.
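The idea above — falling back on onboard sensing when a neighbor’s broadcast trajectory has gone stale — can be sketched in a few lines. This is a minimal hypothetical illustration, not the lab’s actual algorithm; the function names, the interpolation of broadcast trajectories, and the tolerance values are all assumptions made for the sketch.

```python
import numpy as np

def predict_position(traj, t):
    """Interpolate a broadcast trajectory (list of (time, (x, y, z))) at time t."""
    times = np.array([p[0] for p in traj])
    pts = np.array([p[1] for p in traj])
    return np.array([np.interp(t, times, pts[:, i]) for i in range(3)])

def perception_aware_check(own_path, other_traj, sensed_pos, t,
                           stale_tol=0.5, safe_dist=1.0):
    """If the sensed position of another vehicle disagrees with its broadcast
    trajectory by more than stale_tol (e.g., due to communication delay),
    trust the sensor instead, then flag a conflict if any waypoint on our
    own planned path comes within safe_dist of that vehicle."""
    expected = predict_position(other_traj, t)
    stale = np.linalg.norm(np.asarray(sensed_pos) - expected) > stale_tol
    other = np.asarray(sensed_pos) if stale else expected
    return any(np.linalg.norm(np.asarray(p) - other) < safe_dist
               for p in own_path)
```

A conflict flagged here would trigger replanning of the vehicle’s own trajectory; the tolerances would in practice depend on vehicle speed and sensor noise.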

Born in England, Jonathan How’s fascination with airplanes started at a young age, thanks to ample time spent at airbases with his father, who, for many years, served in the Royal Air Force. However, as How recalls, while other children wanted to be astronauts, his curiosity had more to do with the engineering and mechanics of flight. Years later, as an undergraduate at the University of Toronto, he developed an interest in applied mathematics and multi-vehicle research as it applied to aeronautical and astronautical engineering. He went on to do his graduate and postdoctoral work at MIT, where he contributed to a NASA-funded experiment on advanced control techniques for high-precision pointing and vibration control on spacecraft. And, after working on distributed space telescopes as a junior faculty member at Stanford University, he returned to Cambridge, Massachusetts, to join the faculty at MIT in 2000.

“One of the key challenges for any autonomous vehicle is how to address what else is in the environment around it,” he says. For autonomous cars that means, among other things, identifying and tracking pedestrians. That is why How and his team have been collecting real-time data from autonomous cars equipped with sensors designed to track pedestrians. They use that information to build models of pedestrian behavior, at an intersection, for example, which lets the autonomous vehicle make short-term predictions and better decisions about how to proceed. “It’s a very noisy prediction process, given the uncertainty of the world,” How admits. “The real goal is to improve knowledge. You’re never going to get perfect predictions. You’re just trying to understand the uncertainty and reduce it as much as you can.”
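The point about noisy, uncertainty-aware prediction can be made concrete with a toy example. The sketch below is not the team’s model; it assumes a simple constant-velocity pedestrian motion model with white acceleration noise, under which the 1-sigma position uncertainty grows with the prediction horizon — exactly the “you can quantify the uncertainty but never eliminate it” behavior How describes.

```python
import numpy as np

def predict_pedestrian(pos, vel, horizon, dt=0.1, sigma_a=0.5):
    """Short-term constant-velocity prediction of a pedestrian's 2D position.
    Returns predicted positions and a 1-sigma radius per step. Under a
    white-acceleration-noise model, position variance grows like t^3/3,
    so the radius widens the further ahead we look (an assumption of this
    toy model, not a measured quantity)."""
    steps = int(round(horizon / dt))
    preds, radii = [], []
    for k in range(1, steps + 1):
        t = k * dt
        preds.append(pos + vel * t)
        radii.append(sigma_a * np.sqrt(t**3 / 3.0))
    return np.array(preds), np.array(radii)
```

A planner would treat the growing radius as a keep-out region, which naturally makes the vehicle more conservative about events further in the future.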

On another project, How is pushing the boundaries of real-time decision-making for aircraft. In these scenarios, the vehicles have to determine where they are located in the environment, what else is around them, and then plan an optimal path forward. To ensure sufficient agility, it is typically necessary to regenerate these solutions about 10 to 50 times per second, incorporating new information from the aircraft’s sensors as soon as it becomes available. Powerful computers exist, but their cost, size, weight, and power requirements make them impractical to deploy on small, agile aircraft. So how do you quickly perform all the necessary computation — without sacrificing performance — on computers that easily fit on an agile flying vehicle?

How’s solution is to employ, on board the aircraft, fast-to-query neural networks that are trained to “imitate” the response of the computationally expensive optimizers. Training is performed during an offline (pre-mission) phase, in which he and his researchers run an optimizer repeatedly (thousands of times) to “demonstrate” how to solve a task, and then embed that knowledge into a neural network. Once the network has been trained, they run it (instead of the optimizer) on the aircraft. In flight, the neural network makes the same decisions that the optimizer would have made, but much faster, significantly reducing the time required to make new decisions. The approach has proven successful with UAVs of all sizes, and it can also be used to generate neural networks capable of directly processing noisy sensory signals (called end-to-end learning), such as the images from an onboard camera, enabling the aircraft to quickly locate its position or avoid an obstacle. The exciting innovations here are the new techniques developed to enable the flying agents to be trained very efficiently, often using only a single task demonstration. One of the important next steps in this project is to ensure that these learned controllers can be certified as safe.
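The offline “demonstrate, then imitate” pipeline described above can be sketched end to end in a toy setting. Everything here is a stand-in: the “expensive optimizer” is a simple saturated feedback law rather than a real trajectory optimizer, and to keep the sketch self-contained the network is a small random-feature model whose output layer is fit by least squares, standing in for full gradient-based training of the lab’s networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_optimizer(x):
    """Stand-in for a costly onboard optimizer: a saturated feedback law
    u*(x) = clip(-2x, -1, 1). (Hypothetical toy problem, not the actual
    trajectory optimizer used in the lab.)"""
    return np.clip(-2.0 * x, -1.0, 1.0)

# Offline (pre-mission) phase: query the optimizer thousands of times to
# "demonstrate" the mapping from state to optimal control.
X = rng.uniform(-2.0, 2.0, size=(2000, 1))
Y = expensive_optimizer(X)

# Small network: fixed random tanh features, output layer solved in
# closed form by least squares (a simplification for this sketch).
W1 = rng.normal(0.0, 2.0, size=(1, 64))
b1 = rng.normal(0.0, 2.0, size=64)

def features(x):
    return np.tanh(np.atleast_2d(x) @ W1 + b1)

A = np.hstack([features(X), np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, Y, rcond=None)

def fast_policy(x):
    """Onboard replacement: a few matrix multiplies instead of an
    optimization solve, cheap enough to rerun 10-50 times per second."""
    x = np.atleast_2d(x)
    phi = np.hstack([features(x), np.ones((x.shape[0], 1))])
    return phi @ coef
```

In flight, only `fast_policy` runs; the expensive optimizer never leaves the ground station, which is what makes the approach viable on small, power-limited aircraft.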

Over the years, How has worked closely with companies like Boeing, Lockheed Martin, Northrop Grumman, Ford, and Amazon. He says working with industry helps focus his research on solving real-world problems. “We take industry’s hard problems, condense them down to the core issues, create solutions to specific aspects of the problem, demonstrate those algorithms in our experimental facilities, and then transition them back to the industry. It tends to be a very natural and synergistic feedback loop,” says How.
