Getty Images is so confident that its new generative AI model was trained without any unlicensed copyrighted content that it will cover any potential intellectual-property disputes for its customers.

The generative AI system, announced today, was built by Nvidia and is trained solely on images in Getty’s image library. Its training data does not include logos or images scraped from the internet without consent.

“Fundamentally, it’s trained; it’s clean. It’s viable for businesses to use. We’ll stand behind that claim,” says Craig Peters, the CEO of Getty Images. Peters says companies looking to adopt generative AI want total legal certainty that they won’t face expensive copyright lawsuits.

The past year has seen a boom in generative AI systems that produce images and text. But AI companies are embroiled in numerous legal battles over copyrighted content. Prominent artists and authors—most recently John Grisham, Jodi Picoult, and George R.R. Martin—have sued AI companies such as OpenAI and Stability AI for copyright infringement. Earlier this year, Getty Images announced it was suing Stability AI for using millions of its images without permission to train Stable Diffusion, its open-source image-generation AI.

The legal challenges have sparked many attempts by others to benefit from generative AI while also protecting intellectual property. Adobe recently launched Firefly, which it says is trained only on licensed and public-domain content. Shutterstock has said it plans to reimburse artists whose works are sold to AI companies to train models. Microsoft recently announced that it will also foot the copyright legal bills of customers using its text-based generative models.

Peters says that the creators of the images—and any people who appear in them—have consented to having their work used to train the AI model. Getty is also offering creatives a Spotify-style compensation model for the use of their work.

The fact that creatives will be compensated in this way is good news, says Jia Wang, an assistant professor at Durham University in the UK who specializes in AI and intellectual-property law. But it might be tricky to determine which source images contributed to a particular AI-generated image, and therefore who should be compensated for what, she adds.

Getty’s model is trained only on the firm’s creative content, so it does not include images of real people or places that could be manipulated into deepfakes.

“The service doesn’t know who the pope is and it doesn’t know what Balenciaga is, and they can’t combine the two. It doesn’t know what the Pentagon is, and [that] you’re not gonna be able to blow it up,” says Peters, referring to recent viral images created by generative AI models. 

As an example, Peters types in a prompt asking for the president of the United States, and the AI model generates images of men and women of various ethnicities wearing suits and standing in front of the American flag.

Tech companies claim that AI models are complex and cannot be built without copyrighted content, and they point out that artists can opt out of having their work used to train models. Peters calls those arguments “bullshit.”

“I think there are some really sincere people that are actually being thoughtful about this,” he says. “But I also think there’s some hooligans that just want to go for that gold rush.”
