Do you fancy your chances of beating a robot at a game of table tennis? Google DeepMind has trained a robot to play the game at the equivalent of amateur-level competitive performance, the company has announced. It claims it’s the first time a robot has been taught to play a sport with humans at a human level.

Researchers managed to get a robotic arm wielding a 3D-printed paddle to win 13 of 29 full games of competitive table tennis against human opponents of varying abilities. The research was published in an arXiv paper.

The system is far from perfect. Although the table tennis bot beat all the beginner-level human opponents it faced and 55% of those playing at amateur level, it lost every game against advanced players. Still, it’s an impressive advance.

“Even a few months back, we projected that realistically the robot may not be able to win against people it had not played before. The system certainly exceeded our expectations,” says Pannag Sanketi, a senior staff software engineer at Google DeepMind who led the project. “The way the robot outmaneuvered even strong opponents was mind-blowing.”

And the research is not all fun and games. In fact, it represents a step toward creating robots that can perform useful tasks skillfully and safely in real environments like homes and warehouses, a long-standing goal of the robotics community. Google DeepMind’s approach to training machines is applicable to many other areas of the field, says Lerrel Pinto, a computer science researcher at New York University who did not work on the project.


“I’m a big fan of seeing robot systems actually working with and around real humans, and this is a fantastic example of this,” he says. “It may not be a strong player, but the raw ingredients are there to keep improving and eventually get there.”

To become a proficient table tennis player, a human needs excellent hand-eye coordination, rapid movement, and the ability to make quick decisions in reaction to an opponent, all of which are significant challenges for robots. Google DeepMind’s researchers took a two-part approach to training the system to mimic these abilities: they used computer simulations to teach it to master its hitting skills, then fine-tuned it using real-world data, which allows it to improve over time.
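For the technically curious, here is a minimal Python sketch of that sim-then-real training pattern. Everything in it is illustrative: the class names, the toy reward, and the update rule are stand-ins of my own, not DeepMind’s code, which uses a learned policy and a full physics simulator.

```python
import random


class Policy:
    """Stand-in for the learned hitting policy (in reality, a neural network)."""

    def __init__(self):
        self.skill = 0.0

    def act(self, ball_speed):
        # A real policy maps a full ball state to paddle motion; this toy
        # version just scales its swing with its current skill level.
        return 1.0 + self.skill

    def update(self, reward):
        # Placeholder for a gradient step on the policy's parameters.
        self.skill += 0.001 * reward


def simulated_rally(policy):
    """Stage 1: cheap, fast rollouts inside a physics simulator."""
    ball_speed = random.uniform(4.0, 10.0)            # m/s, a sampled ball state
    swing = policy.act(ball_speed)
    return 1.0 if swing * 5.0 >= ball_speed else -1.0  # toy reward signal


def finetune_on_real_data(policy, logged_outcomes):
    """Stage 2: refine the pretrained policy on data from human matches."""
    for outcome in logged_outcomes:
        policy.update(reward=outcome)


policy = Policy()
for _ in range(10_000):                   # pretrain in simulation
    policy.update(simulated_rally(policy))
finetune_on_real_data(policy, logged_outcomes=[1.0, -1.0, 1.0])
```

The key design idea this illustrates is that simulation supplies cheap, abundant practice, while the much scarcer real-world data corrects for whatever the simulator gets wrong.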

The researchers compiled a dataset of table tennis ball states, including data on position, spin, and speed. The system drew from this library in a simulated environment designed to accurately reflect the physics of table tennis matches, learning skills such as returning a serve, hitting a forehand topspin, and playing a backhand shot. Because the robot cannot serve the ball, the real-world games were modified to accommodate this limitation.
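One entry in such a dataset might look like the sketch below. The schema is a guess based only on the fields named above (position, spin, and speed); the paper’s actual format is not reproduced here.

```python
from dataclasses import dataclass


@dataclass
class BallState:
    """Hypothetical record for one observed ball state."""
    position: tuple  # (x, y, z) in meters, relative to the table
    velocity: tuple  # (vx, vy, vz) in m/s; its magnitude is the ball's speed
    spin: tuple      # angular velocity in rad/s around each axis


dataset = [
    BallState(position=(0.3, 1.2, 0.25),
              velocity=(-2.0, -6.5, 1.1),
              spin=(0.0, 90.0, 5.0)),
    # ...thousands more states sampled from recorded rallies
]
```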

During its matches against humans, the robot collects data on its performance to help refine its skills. It tracks the ball’s position using data captured by a pair of cameras, and follows its human opponent’s playing style through a motion capture system that uses LEDs on its opponent’s paddle. The ball data is fed back into the simulation for training, creating a continuous feedback loop.
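A rough outline of that perceive-act-log-retrain loop is sketched below, with invented interfaces standing in for the cameras, motion-capture rig, and simulator; the real system’s internals are not public in this form.

```python
class StereoCameras:
    def ball_position(self):
        return (0.3, 1.2, 0.25)       # stub: triangulated (x, y, z) in meters


class MotionCapture:
    def paddle_pose(self):
        return (1.4, 0.9, 0.3)        # stub: pose from LEDs on the opponent's paddle


def play_match(policy, cameras, mocap, num_hits=50):
    """Play while logging ball and opponent data for later retraining."""
    log = []
    for _ in range(num_hits):
        ball = cameras.ball_position()
        opponent = mocap.paddle_pose()
        policy(ball, opponent)         # choose and execute a swing
        log.append((ball, opponent))
    return log


def feedback_loop(policy, cameras, mocap, retrain_in_sim, rounds=3):
    """Each match's data is fed back into simulation, closing the loop."""
    for _ in range(rounds):
        real_data = play_match(policy, cameras, mocap)
        policy = retrain_in_sim(policy, real_data)
    return policy


# Usage with trivial stand-ins for the policy and the simulator retraining step:
policy = feedback_loop(lambda ball, opp: None, StereoCameras(), MotionCapture(),
                       retrain_in_sim=lambda p, data: p)
```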

This feedback allows the robot to test out new skills as it tries to beat its opponent, meaning it can adjust its tactics and behavior much as a human would. As a result, it gets progressively better both within a given match and, over time, the more games it plays.


The system struggled to return balls that were hit very fast, balls that flew beyond its field of vision (more than six feet above the table), and very low balls, which it avoids because of a protocol instructing it to dodge collisions that could damage its paddle. Spinning balls also proved a challenge, because the system cannot directly measure spin, a limitation that advanced players were quick to exploit.
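That low-ball behavior suggests a simple safety gate sitting in front of the swing controller. The sketch below is a guess at what such a rule could look like; both the threshold and the check itself are invented for illustration.

```python
TABLE_HEIGHT = 0.76     # standard table height in meters
MIN_CLEARANCE = 0.05    # hypothetical safety margin above the surface

def safe_to_swing(ball_height_m):
    """Refuse swings at balls so low the paddle could strike the table."""
    return ball_height_m - TABLE_HEIGHT >= MIN_CLEARANCE

# A ball skimming 2 cm above the table would be declined:
assert not safe_to_swing(TABLE_HEIGHT + 0.02)
```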

Training a robot for all eventualities in a simulated environment is a real challenge, says Chris Walti, founder of robotics company Mytra and previously head of Tesla’s robotics team, who was not involved in the project.

“It’s very, very difficult to actually simulate the real world because there’s so many variables, like a gust of wind, or even dust [on the table],” he says. “Unless you have very realistic simulations, a robot’s performance is going to be capped.”

Google DeepMind believes these limitations could be addressed in a number of ways, including by developing predictive AI models designed to anticipate the ball’s trajectory and by introducing better collision-detection algorithms.
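As a toy illustration of what a trajectory predictor buys you, here is a plain ballistic extrapolation. DeepMind’s proposed models would almost certainly be learned rather than this simple; drag and spin are ignored here.

```python
G = 9.81  # gravitational acceleration, m/s^2

def predict_position(pos, vel, t):
    """Extrapolate the ball's position t seconds ahead (no drag, no spin)."""
    x, y, z = pos
    vx, vy, vz = vel
    return (x + vx * t, y + vy * t, z + vz * t - 0.5 * G * t * t)

# Where will a ball at (0, 1.5, 0.4) m moving at (-2, -6, 1) m/s be in 0.1 s?
print(predict_position((0.0, 1.5, 0.4), (-2.0, -6.0, 1.0), 0.1))
```

Even a crude forecast like this gives the controller extra milliseconds to position the arm before the ball arrives, which is the point of the proposed predictive models.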

Crucially, the human players enjoyed their matches against the robotic arm. Even the advanced competitors who beat it found the experience fun and engaging, and felt it had potential as a dynamic practice partner to help them hone their skills.

“I would definitely love to have it as a training partner, someone to play some matches from time to time,” one of the study participants said.
