Adobe has announced a new tool to help creators watermark their artwork and opt out of having it used to train generative AI models.

The web app, called Adobe Content Authenticity, allows artists to signal that they do not consent to having their work used to train AI models, which are generally trained on vast databases of content scraped from the internet. It also gives creators the opportunity to add what Adobe is calling “content credentials,” including their verified identity, social media handles, or other online domains, to their work.

Content credentials are based on C2PA, an open technical standard that uses cryptography to securely label images, video, and audio with information clarifying where they came from, a 21st-century equivalent of an artist’s signature.
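To make the mechanism concrete, here is a minimal sketch in Python of how cryptographically signed provenance metadata works, using the widely available `cryptography` package. It illustrates the general approach only, not the actual C2PA specification; the manifest fields, the `do-not-train` flag, and the key handling are all hypothetical simplifications.

```python
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical creator key; a real system would bind it to a verified
# identity through a certificate chain.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()


def sign_manifest(content: bytes, creator: str) -> dict:
    """Build and sign a provenance manifest for a piece of content."""
    manifest = {
        "creator": creator,
        # The hash binds the manifest to these exact content bytes.
        "content_hash": hashlib.sha256(content).hexdigest(),
        "ai_training": "do-not-train",  # the opt-out signal
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = private_key.sign(payload).hex()
    return manifest


def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check the signature and that the content is unmodified."""
    claims = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claims, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(manifest["signature"]), payload)
    except InvalidSignature:
        return False
    return claims["content_hash"] == hashlib.sha256(content).hexdigest()


image = b"...raw image bytes..."
credentials = sign_manifest(image, creator="Jane Artist")
assert verify_manifest(image, credentials)             # intact: passes
assert not verify_manifest(image + b"x", credentials)  # altered: fails
```

Anyone holding the creator’s public key can check both that the claims were really made by that creator and that the content has not been altered since it was signed.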

Although Adobe had already integrated the credentials into several of its products, including Photoshop and its own generative AI model Firefly, Adobe Content Authenticity allows creators to apply them to content regardless of whether it was created using Adobe tools. The company is launching a public beta in early 2025.

The new app is a step in the right direction toward making C2PA more ubiquitous and could make it easier for creators to start adding content credentials to their work, says Claire Leibowicz, head of AI and media integrity at the nonprofit Partnership on AI.

“I think Adobe is at least chipping away at starting a cultural conversation, allowing creators to have some ability to communicate more and feel more empowered,” she says. “But whether or not people actually respond to the ‘Do not train’ warning is a different question.”

The app joins a burgeoning field of AI tools designed to help artists fight back against tech companies by making it harder for them to scrape copyrighted work without consent or compensation. Last year, researchers from the University of Chicago released Nightshade and Glaze, two tools that alter images in ways invisible to the human eye: Nightshade “poisons” AI models that train on the protected content, causing them to break, while Glaze masks an artist’s personal style so models can’t learn to mimic it. Adobe has also created a Chrome browser extension that allows users to check website content for existing credentials.

Users of Adobe Content Authenticity will be able to attach as much or as little information as they like to the content they upload. Because it’s relatively easy to accidentally strip a piece of content of its metadata while preparing it for upload to a website, Adobe is using a combination of methods: digital fingerprinting and invisible watermarking alongside the cryptographic metadata.

This means the content credentials will follow the image, audio, or video file across the web, so the data won’t be lost if it’s uploaded to different platforms. Even if someone takes a screenshot of a piece of content, Adobe claims, the credentials can still be recovered.
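Adobe has not published the details of its recovery methods, but the general idea behind fingerprint-based recovery can be shown with a simple perceptual hash: a compact signature computed from the pixels themselves rather than from metadata, so it survives metadata stripping, re-encoding, and even screenshots well enough to look credentials back up. The sketch below, a basic average hash built with Pillow, is purely illustrative; the file names and registry are hypothetical, and Adobe’s actual fingerprinting is certainly more sophisticated.

```python
from PIL import Image


def average_hash(path: str, size: int = 8) -> int:
    """64-bit perceptual fingerprint derived from pixels, not metadata."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for px in pixels:
        bits = (bits << 1) | (1 if px > mean else 0)
    return bits


def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")


# Hypothetical registry mapping known fingerprints to stored credentials.
registry = {
    average_hash("original.png"): {
        "creator": "Jane Artist",
        "ai_training": "do-not-train",
    },
}


def lookup(path: str, threshold: int = 10) -> dict | None:
    """Recover credentials for an image whose metadata was stripped."""
    fingerprint = average_hash(path)
    for known, credentials in registry.items():
        if hamming(fingerprint, known) <= threshold:
            return credentials
    return None


# A screenshot changes the file's bytes and drops its metadata, but its
# perceptual hash stays close to the original's, so the lookup still works.
print(lookup("screenshot_of_original.png"))
```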

However, the company acknowledges that the tool is far from infallible. “Anybody who tells you that their watermark is 100% defensible is lying,” says Ely Greenfield, Adobe’s CTO of digital media. “This is defending against accidental or unintentional stripping, as opposed to some nefarious actor.”

The company’s relationship with the artistic community is complicated. In February, Adobe updated its terms of service to give it access to users’ content “through both automated and manual methods,” and to say it uses techniques such as machine learning in order to improve its vaguely worded “services and software.” The update was met with a major backlash from artists who took it to mean the company planned to use their work to train Firefly. Adobe later clarified that the language referred to features not based on generative AI, including a Photoshop tool that removes objects from images. 

While Adobe says that it doesn’t (and won’t) train its AI on user content, many artists have argued that the company doesn’t actually obtain consent or own the rights to individual contributors’ images, says Neil Turkewitz, an artists’ rights activist and former executive vice president of the Recording Industry Association of America.

“It wouldn’t take a huge shift for Adobe to actually become a truly ethical actor in this space and to demonstrate leadership,” he says. “But it’s great that companies are dealing with provenance and improving tools for metadata, which are all part of an ultimate solution for addressing these problems.”
