This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Welcome back to The Algorithm! 

I have a chair of shame at home. By that I mean a chair in my bedroom onto which I pile used clothes that aren’t quite dirty enough to wash. For some inexplicable reason, folding and putting away those clothes feels like an overwhelming task when I go to bed at night, so I dump them on the chair for “later.” I would pay good money to automate that job before the chair is buried under a mountain of clothes.

Thanks to AI, we’re slowly inching toward household robots that can do our chores. Truly useful robots we can easily offload tasks to have been a science fiction fantasy for decades, and they remain the ultimate goal of many roboticists. But robots are clumsy and struggle to do things we find easy. The sorts of robots that can do very complex things, like surgery, often cost hundreds of thousands of dollars, which makes them prohibitively expensive.

I just published a story on a new robotics system from Stanford called Mobile ALOHA, which researchers used to get a cheap, off-the-shelf wheeled robot to do some incredibly complex things on its own, such as cooking shrimp, wiping stains off surfaces and moving chairs. They even managed to get it to cook a three-course meal—though that was with human supervision. Read more about it here.

Robotics is at an inflection point, says Chelsea Finn, an assistant professor at Stanford University, who was an advisor for the project. In the past, researchers have been constrained by the amount of data they can train robots on. Now there is a lot more data available, and work like Mobile ALOHA shows that with neural networks and more data, robots can learn complex tasks fairly quickly and easily, she says. 

While AI models, such as the large language models that power chatbots, are trained on huge datasets that have been hoovered up from the internet, robots need to be trained on data that has been physically collected. This makes it a lot harder to build vast datasets. A team of researchers at NYU and Meta recently came up with a simple and clever way to work around this problem. They used an iPhone attached to a reacher-grabber stick to record volunteers doing tasks at home. They were then able to train a system called Dobb-E (10 points to Ravenclaw for that name) to complete over 100 household tasks in around 20 minutes. (Read more from Rhiannon Williams here.)

Mobile ALOHA also debunks a belief held in the robotics community that hardware shortcomings were the main thing holding robots back from doing such tasks, says Deepak Pathak, an assistant professor at Carnegie Mellon University, who was not part of the research team.

“The missing piece is AI,” he says. 

AI has also shown promise in getting robots to respond to verbal commands and in helping them adapt to the often messy environments of the real world. For example, Google’s RT-2 system combines a vision-language-action model with a robot, which allows the robot to “see” and analyze the world around it and move in response to verbal instructions. And a new system from DeepMind called AutoRT uses a similar vision-language model to help robots adapt to unseen environments, and a large language model to come up with instructions for a fleet of robots.

And now for the bad news: even the most cutting-edge robots still cannot do laundry. It’s a chore that is significantly harder for robots than for humans. Crumpled clothes form weird shapes, which makes them hard for robots to perceive and handle.

But it might just be a matter of time, says Tony Zhao, one of the researchers from Stanford. He is optimistic that robots will one day be able to master even this trickiest of tasks using AI. They just need to collect the data first. Maybe there is hope for me and my chair after all!

A Birthday Special

How MIT Technology Review got its start

We are turning 125 this year! Thank you for sticking with us all these years. Here’s how it all began—and how the fledgling magazine helped rally alumni to oppose a merger with Harvard.

Did you know? When the publication was founded in 1899, The Technology Review, as it was first titled, didn’t focus on the application of scientific knowledge to practical purposes. It was a magazine about MIT itself—or “Technology,” as its earliest alumni fondly called it. Read more from Simson Garfinkel here.

Bits and Bytes

Meet the woman who transformed Sam Altman into the avatar of AI
A great profile of Anna Makanju, OpenAI’s vice president of global affairs. She is the woman who orchestrated Sam Altman’s global tour to meet world leaders, transforming him into the AI sector’s ambassador in the process. (The Washington Post)

It’s “impossible” to create AI models without copyrighted material, OpenAI says
In a submission to a committee in the UK’s House of Lords, the AI company said it could not train its large AI models, such as GPT-4 and ChatGPT, without access to copyrighted work. The company also argued that excluding copyrighted content would lead to inadequate systems. Critics, such as NYU professor emeritus Gary Marcus, called this “self-serving nonsense” and an attempt to avoid paying licensing fees. (The Guardian)

US companies and Chinese experts engaged in secret diplomacy on AI safety
With the blessing of government officials, OpenAI, Anthropic and Cohere met with top Chinese AI experts last year. The meetings focused on the risks posed by the technology and on encouraging investment in AI safety research. The “ultimate goal was to find a scientific path forward to safely develop more sophisticated AI technology,” writes the FT. (The Financial Times)

Duolingo has cut 10% of its contractors as it creates more content with AI
The language-learning app company has fired some of its contractors and has started using more generative AI to create content. The company says it isn’t directly replacing workers with AI, but rather that its employees are using more AI tools. It will be interesting to see how well this serves Duolingo in the long term, given how flawed and biased generative AI can be. (Bloomberg)
