Google DeepMind has launched a new watermarking tool that labels whether images have been generated with AI.

The tool, called SynthID, will initially be available only to users of Google’s AI image generator Imagen, which is hosted on Google Cloud’s machine learning platform Vertex. Users will be able to generate images using Imagen and then choose whether to add a watermark or not. The hope is that it could help people tell when AI-generated content is being passed off as real, or help protect copyright. 

In the past year, the huge popularity of generative AI models has also brought with it the proliferation of AI-generated deepfakes, nonconsensual porn, and copyright infringements. Watermarking—a technique where you hide a signal in a piece of text or an image to identify it as AI-generated—has become one of the most popular ideas proposed to curb such harms. 

In July, the White House announced it had secured voluntary commitments from leading AI companies such as OpenAI, Google, and Meta to develop watermarking tools in an effort to combat misinformation and misuse of AI-generated content. 

At Google’s annual I/O conference in May, CEO Sundar Pichai said the company is building its models to include watermarking and other techniques from the start. Google DeepMind is now the first Big Tech company to publicly launch such a tool.

Traditionally, images have been watermarked by adding a visible overlay or by embedding information in their metadata. But these methods are “brittle”: the watermark can be lost when images are cropped, resized, or edited, says Pushmeet Kohli, vice president of research at Google DeepMind.


SynthID is created using two neural networks. One takes the original image and produces another image that looks almost identical, but with some pixels subtly modified. This creates an embedded pattern that is invisible to the human eye. The second neural network looks for that pattern and tells users whether it detects a watermark, suspects the image has a watermark, or finds no watermark at all. Kohli says SynthID is designed so that the watermark can still be detected even if the image is screenshotted or edited, for example by rotating or resizing it. 
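SynthID’s networks are proprietary and Google DeepMind has not published how they work. As a rough illustration of the general embed-and-detect idea, the toy sketch below uses a classical spread-spectrum watermark instead of neural networks: a pseudo-random pixel pattern derived from a secret key is added to the image at low amplitude, and detection correlates the image against that same pattern, returning the same three verdicts SynthID reports. The function names, strength value, and thresholds here are invented for this sketch.

```python
import numpy as np

def embed_watermark(image, key, strength=8.0):
    """Add a key-derived pseudo-random +/- pattern to the pixel values.
    At low strength the change is hard to see; this toy uses a fairly
    high strength so the correlation test is unambiguous."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    return np.clip(image + strength * pattern, 0, 255)

def detect_watermark(image, key, strength=8.0):
    """Correlate the (mean-centered) image with the key's pattern.
    A watermarked image scores near 1; an unrelated image near 0."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    score = float(np.mean((image - image.mean()) * pattern)) / strength
    if score > 0.5:
        return "watermark detected"
    if score > 0.2:
        return "watermark suspected"
    return "no watermark found"

# Example: watermark a random 256x256 grayscale image, then check it.
original = np.random.default_rng(0).integers(0, 256, size=(256, 256)).astype(float)
marked = embed_watermark(original, key=42)
print(detect_watermark(marked, key=42))    # the pattern correlates strongly
print(detect_watermark(original, key=42))  # an unmarked image does not
```

Because the pattern is spread across every pixel rather than stored in metadata, it survives screenshots and mild edits better than an overlay would, which is the property Kohli describes, though real systems like SynthID learn far more robust patterns than this hand-built one.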

Google DeepMind is not the only group working on these sorts of watermarking methods, says Ben Zhao, a professor at the University of Chicago who has worked on systems to prevent artists’ images from being scraped by AI systems. Similar techniques already exist and are used in the open-source AI image generator Stable Diffusion. Meta has also conducted research on watermarks, although it has yet to launch any public watermarking tools. 

Kohli claims Google DeepMind’s watermark is more resistant to tampering than previous image watermarks, though it is still not completely immune.

But Zhao is skeptical. “There are few or no watermarks that have proven robust over time,” he says. Early work on watermarks for text has found that they are easily broken, usually within a few months. 

Bad actors have a vested interest in disrupting watermarks, he adds—for example, to claim that deepfaked content is genuine photographic evidence of a nonexistent crime or event. 


“An attacker seeking to promote deepfake imagery as real, or discredit a real photo as fake, will have a lot to gain, and will not stop at cropping, lossy compression, or changing colors,” Zhao says. 

Nevertheless, Google DeepMind’s launch is a good first step and could lead to better information-sharing in the field about which techniques work and which don’t, says Claire Leibowicz, the head of the AI and Media Integrity Program at the Partnership on AI. 

“The fact that this is really complicated shouldn’t paralyze us into doing nothing,” she says. 

Kohli told MIT Technology Review that the watermarking tool is “experimental” and that the company wants to see how people use it and learn about its strengths and weaknesses before rolling it out more widely. He declined to say whether Google DeepMind might make the tool available for images other than those generated by Imagen, or whether Google will add the watermark to its other AI image-generation systems.

This limits its usefulness, says Sasha Luccioni, an AI researcher at startup Hugging Face. Google’s decision to keep the tool proprietary means only Google will be able to both embed and detect these watermarks, she adds. 

“If you add a watermarking component to image generation systems across the board, there will be less risk of harms like deepfake pornography,” Luccioni says. 
