Imagine that a robot is helping you clean the dishes. You ask it to grab a soapy bowl out of the sink, but its gripper slightly misses the mark.

Using a new framework developed by MIT and NVIDIA researchers, you could correct that robot’s behavior with simple interactions. The method would allow you to point to the bowl or trace a trajectory to it on a screen, or simply give the robot’s arm a nudge in the right direction.

Unlike other methods for correcting robot behavior, this technique does not require users to collect new data and retrain the machine-learning model that powers the robot’s brain. It enables a robot to use intuitive, real-time human feedback to choose a feasible action sequence that gets as close as possible to satisfying the user’s intent.

When the researchers tested their framework, its success rate was 21 percent higher than an alternative method that did not leverage human interventions.

In the long run, this framework could enable a user to more easily guide a factory-trained robot to perform a wide variety of household tasks even though the robot has never seen their home or the objects in it.

“We can’t expect laypeople to perform data collection and fine-tune a neural network model. The consumer will expect the robot to work right out of the box, and if it doesn’t, they would want an intuitive mechanism to customize it. That is the challenge we tackled in this work,” says Felix Yanwei Wang, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this method.


His co-authors include Lirui Wang PhD ’24 and Yilun Du PhD ’24; senior author Julie Shah, an MIT professor of aeronautics and astronautics and the director of the Interactive Robotics Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL); as well as Balakumar Sundaralingam, Xuning Yang, Yu-Wei Chao, Claudia Perez-D’Arpino PhD ’19, and Dieter Fox of NVIDIA. The research will be presented at the International Conference on Robotics and Automation.

Mitigating misalignment

Recently, researchers have begun using pre-trained generative AI models to learn a “policy,” or a set of rules, that a robot follows to complete an action. Generative models can solve multiple complex tasks.

During training, the model only sees feasible robot motions, so it learns to generate valid trajectories for the robot to follow.
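To make that interface concrete, here is a minimal, hypothetical sketch of what such a learned policy might expose: an object that, given an observation, samples a candidate trajectory. The class name, the placeholder sampling, and the waypoint representation are assumptions for illustration, not the researchers’ model.

```python
import numpy as np

# Hypothetical sketch of a pretrained generative policy: given an observation,
# it samples a candidate motion trajectory, having only ever seen feasible
# robot motions during training.
class GenerativePolicy:
    def __init__(self, rng_seed: int = 0):
        self.rng = np.random.default_rng(rng_seed)

    def sample_trajectory(self, observation: np.ndarray, horizon: int = 16) -> np.ndarray:
        # A real model (e.g., a diffusion or transformer policy) would condition
        # on the full observation; this placeholder just returns a short sequence
        # of 3D end-effector waypoints starting from the current position.
        start = observation[:3]
        steps = self.rng.normal(scale=0.01, size=(horizon, 3)).cumsum(axis=0)
        return start + steps
```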

While these trajectories are valid, that doesn’t mean they always align with a user’s intent in the real world. The robot might have been trained to grab boxes off a shelf without knocking them over, but it could fail to reach the box on top of someone’s bookshelf if the shelf is oriented differently than those it saw in training.

To overcome these failures, engineers typically collect data demonstrating the new task and re-train the generative model, a costly and time-consuming process that requires machine-learning expertise.

Instead, the MIT researchers wanted to allow users to steer the robot’s behavior during deployment when it makes a mistake.

But if a human interacts with the robot to correct its behavior, that could inadvertently cause the generative model to choose an invalid action. It might reach the box the user wants, but knock books off the shelf in the process.


“We want to allow the user to interact with the robot without introducing those kinds of mistakes, so we get a behavior that is much more aligned with user intent during deployment, but that is also valid and feasible,” Wang says.

Their framework accomplishes this by providing the user with three intuitive ways to correct the robot’s behavior, each of which offers certain advantages.

First, the user can point to the object they want the robot to manipulate in an interface that shows its camera view. Second, they can trace a trajectory in that interface, allowing them to specify how they want the robot to reach the object. Third, they can physically move the robot’s arm in the direction they want it to follow.

“When you are mapping a 2D image of the environment to actions in a 3D space, some information is lost. Physically nudging the robot is the most direct way of specifying user intent without losing any of that information,” says Wang.
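To illustrate how these three feedback modes could be reduced to a common signal for the robot to follow, here is a hedged sketch. The data structure, the simple pixel-to-3D back-projection, and the function names are assumptions for illustration only, not the framework’s actual interface.

```python
from dataclasses import dataclass
from typing import Optional
import numpy as np

# Hypothetical "intent" signal that any of the three feedback modes can produce.
@dataclass
class UserIntent:
    target_point: Optional[np.ndarray] = None    # 3D goal from a click or nudge
    reference_path: Optional[np.ndarray] = None  # (T, 3) path traced on screen

def intent_from_click(pixel_uv, depth, camera_intrinsics) -> UserIntent:
    # Back-project a 2D click into a 3D point using a depth image; this is where
    # some information can be lost compared with a physical nudge.
    u, v = pixel_uv
    fx, fy, cx, cy = camera_intrinsics
    z = depth[v, u]
    point = np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])
    return UserIntent(target_point=point)

def intent_from_nudge(end_effector_pose_after_nudge) -> UserIntent:
    # A physical nudge already lives in 3D, so the target is read off directly.
    return UserIntent(target_point=np.asarray(end_effector_pose_after_nudge)[:3])
```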

Sampling for success

To ensure these interactions don’t cause the robot to choose an invalid action, such as colliding with other objects, the researchers use a specific sampling procedure. This technique lets the model choose an action from the set of valid actions that most closely aligns with the user’s goal.

“Rather than just imposing the user’s will, we give the robot an idea of what the user intends but let the sampling procedure oscillate around its own set of learned behaviors,” Wang explains.
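A minimal sketch of this idea, assuming a candidate-and-filter (rejection-style) scheme rather than the authors’ exact sampling procedure: draw many trajectories from the learned policy (such as the GenerativePolicy sketch above), discard any that fail a validity check, and return the valid trajectory whose endpoint lands closest to the user’s intended target. The policy, collision checker, and distance metric are all placeholders.

```python
import numpy as np

def steer_with_intent(policy, observation, intent_point, is_valid, n_samples=64):
    # Draw candidate trajectories from the policy's learned distribution,
    # so every candidate stays within behaviors the model knows are feasible.
    candidates = [policy.sample_trajectory(observation) for _ in range(n_samples)]

    # Keep only candidates that pass a validity check (e.g., a collision test).
    valid = [traj for traj in candidates if is_valid(traj)]
    if not valid:
        return None  # fall back to the policy's unbiased behavior

    # Among the valid candidates, pick the one whose endpoint is closest to
    # the user's intended target, rather than imposing the intent directly.
    def distance_to_intent(traj):
        return np.linalg.norm(traj[-1] - intent_point)

    return min(valid, key=distance_to_intent)
```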

This sampling method enabled the researchers’ framework to outperform the other methods they compared it to during simulations and experiments with a real robot arm in a toy kitchen.


While their method might not always complete the task right away, it offers users the advantage of being able to immediately correct the robot if they see it doing something wrong, rather than waiting for it to finish and then giving it new instructions.

Moreover, after a user nudges the robot a few times until it picks up the correct bowl, it could log that corrective action and incorporate it into its behavior through future training. Then, the next day, the robot could pick up the correct bowl without needing a nudge.
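As one illustration of how such corrections might be logged for later training, here is a small hypothetical helper; the file name and record schema are assumptions, not part of the researchers’ system.

```python
import json
import time

def log_correction(observation, corrected_trajectory, path="corrections.jsonl"):
    # Append one corrective episode to a JSON-lines log so it could later be
    # folded back into the policy's training data.
    record = {
        "timestamp": time.time(),
        "observation": observation.tolist(),
        "corrected_trajectory": corrected_trajectory.tolist(),
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```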

“But the key to that continuous improvement is having a way for the user to interact with the robot, which is what we have shown here,” Wang says.

In the future, the researchers want to boost the speed of the sampling procedure while maintaining or improving its performance. They also want to experiment with robot policy generation in novel environments.
