This story originally appeared in The Algorithm, our weekly newsletter on AI.

It’s the Taylor Swifts of the world that are going to save us. In January, nude deepfakes of Taylor Swift went viral on X, which caused public outrage. Nonconsensual explicit deepfakes are one of the most common and severe types of harm posed by AI. The generative AI boom of the past few years has only made the problem worse, and we’ve seen high-profile cases of children and female politicians being abused with these technologies. 

Terrible as they were, Swift’s deepfakes may have done more than anything else to raise awareness of the risks, and they seem to have galvanized tech companies and lawmakers into action. 

“The screw has been turned,” says Henry Ajder, a generative AI expert who has studied deepfakes for nearly a decade. We are at an inflection point where the pressure from lawmakers and awareness among consumers is so great that tech companies can’t ignore the problem anymore, he says. 

First, the good news. Last week Google said it is taking steps to keep explicit deepfakes from appearing in search results. The tech giant is making it easier for victims to request that nonconsensual fake explicit imagery be removed. It will also filter all explicit results on similar searches and remove duplicate images. This will prevent the images from popping back up in the future. Google is also downranking search results that lead to explicit fake content. When someone searches for deepfakes and includes someone’s name in the search, Google will aim to surface high-quality, non-explicit content, such as relevant news articles.
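One building block for the duplicate-removal piece is perceptual hashing, which maps visually similar images to nearby bit strings so that re-uploads of a removed image can be flagged automatically. Google hasn’t described how its pipeline actually works, so what follows is only a minimal sketch of the general technique in Python, using the open-source Pillow and imagehash libraries; the file paths and threshold are hypothetical.

```python
# Minimal sketch of duplicate-image detection via perceptual hashing.
# Not Google's actual pipeline -- an illustration of the general technique.
# Requires: pip install Pillow imagehash. File paths are hypothetical.
from PIL import Image
import imagehash

def is_duplicate(path_a: str, path_b: str, threshold: int = 5) -> bool:
    """Treat two images as duplicates if their perceptual hashes differ
    by at most `threshold` bits (Hamming distance)."""
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return hash_a - hash_b <= threshold  # imagehash's '-' is Hamming distance

# Once an image is taken down at a victim's request, candidate re-uploads
# can be compared against the stored hash and suppressed automatically.
if is_duplicate("removed_original.jpg", "suspected_reupload.jpg"):
    print("Likely re-upload of a removed image; filter it from results.")
```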

This is a positive move, says Ajder. Google’s changes remove a huge amount of visibility for nonconsensual, pornographic deepfake content. “That means that people are going to have to work a lot harder to find it if they want to access it,” he says. 


In January, I wrote about three ways we can fight nonconsensual explicit deepfakes. These included regulation; watermarks, which would help us detect whether something is AI-generated; and protective shields, which make it harder for attackers to use our images. 
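To make the watermark idea concrete: the simplest (and least robust) version hides a bit pattern in pixel values that ordinary viewing won’t reveal but a detector can read back. The toy Python sketch below embeds bits in the least significant bit of each pixel’s red channel; it is purely illustrative, and production schemes such as Google DeepMind’s SynthID are far more sophisticated and survive cropping and compression, which this toy does not.

```python
# Toy invisible watermark: hide a bit string in the least significant bit
# (LSB) of each pixel's red channel. Purely illustrative -- real AI-image
# watermarks are far more robust to cropping, resizing, and re-encoding.
import numpy as np

def embed(pixels: np.ndarray, bits: list[int]) -> np.ndarray:
    """Write `bits` into the red-channel LSBs, one bit per pixel."""
    out = pixels.copy()
    red = out.reshape(-1, 3)[:, 0]       # view into the red channel
    for i, bit in enumerate(bits):
        red[i] = (red[i] & 0xFE) | bit   # clear the LSB, then set it
    return out

def extract(pixels: np.ndarray, n_bits: int) -> list[int]:
    """Read the first `n_bits` red-channel LSBs back out."""
    red = pixels.reshape(-1, 3)[:, 0]
    return [int(b & 1) for b in red[:n_bits]]

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
payload = [1, 0, 1, 1, 0, 0, 1, 0]       # hypothetical watermark bits
assert extract(embed(image, payload), len(payload)) == payload
```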

Eight months on, watermarks and protective shields remain experimental and unreliable, but the good news is that regulation has caught up a bit. For example, the UK has banned both the creation and the distribution of nonconsensual explicit deepfakes. That decision led Mr DeepFakes, a popular site that distributes this kind of content, to block access for UK users, says Ajder. 

The EU’s AI Act is now officially in force and could usher in some important changes around transparency. The law requires deepfake creators to clearly disclose that the material was created by AI. And in late July, the US Senate passed the Defiance Act, which gives victims a way to seek civil remedies for sexually explicit deepfakes. (This legislation still needs to clear many hurdles in the House to become law.) 

But a lot more needs to be done. Google can clearly identify which websites are getting traffic and tries to remove deepfake sites from the top of search results, but it could go further. “Why aren’t they treating this like child pornography websites and just removing them entirely from searches where possible?” Ajder says. He also found it a weird omission that Google’s announcement didn’t mention deepfake videos, only images. 

Looking back at my story about combating deepfakes, I can see that I should have included more things companies can do. Google’s changes to search are an important first step. But app stores are still full of apps that allow users to create nude deepfakes, and payment facilitators and providers still supply the infrastructure that lets people use these apps. 


Ajder calls for us to radically reframe the way we think about nonconsensual deepfakes, and to pressure companies into changes that make such content harder to create and access. 

“This stuff should be seen and treated online in the same way that we think about child pornography—something which is reflexively disgusting, awful, and outrageous,” he says. “That requires all of the platforms … to take action.” 

Now read the rest of The Algorithm

Deeper Learning

End-of-life decisions are difficult and distressing. Could AI help?

A few months ago, a woman in her mid-50s—let’s call her Sophie—experienced a hemorrhagic stroke, which left her with significant brain damage. Where should her medical care go from there? This difficult question was left, as it usually is in these kinds of situations, to Sophie’s family members, but they couldn’t agree. The situation was distressing for everyone involved, including Sophie’s doctors.

Enter AI: End-of-life decisions can be extremely upsetting for surrogates tasked with making calls on behalf of another person, says David Wendler, a bioethicist at the US National Institutes of Health. Wendler and his colleagues are working on something that could make things easier: an artificial-intelligence-based tool that can help surrogates predict what patients themselves would want. Read more from Jessica Hamzelou here.

Bits and Bytes

OpenAI has released a new ChatGPT bot that you can talk to
The new chatbot represents OpenAI’s push into a new generation of AI-powered voice assistants in the vein of Siri and Alexa, but with far more capabilities to enable more natural, fluent conversations. (MIT Technology Review)


Meta has scrapped celebrity AI chatbots after they fell flat with users
Less than a year after announcing it was rolling out AI chatbots based on celebrities such as Paris Hilton, the company is scrapping the feature. Turns out nobody wanted to chat with a random AI celebrity after all! Instead, Meta is rolling out a new feature called AI Studio, which allows creators to make AI avatars of themselves that can chat with fans. (The Information)

OpenAI has a watermarking tool to catch students cheating with ChatGPT but won’t release it
The tool can detect text written by artificial intelligence with 99.9% certainty, but the company hasn’t launched it for fear it might put people off using its AI products. (The Wall Street Journal)
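The Journal reports that the tool works by subtly skewing ChatGPT’s word choices to leave a statistical fingerprint. OpenAI hasn’t published details, but published academic schemes, such as the “green list” watermark of Kirchenbauer et al., convey the flavor: bias generation toward a secret pseudorandom subset of the vocabulary, then test whether a suspect text overuses that subset. The Python sketch below shows only the detection side of that academic scheme, not OpenAI’s method; the key and sample text are hypothetical.

```python
# Hypothetical sketch of "green list" text-watermark detection, after
# academic schemes like Kirchenbauer et al. (2023) -- NOT OpenAI's
# unreleased tool. A secret key pseudorandomly marks ~half the vocabulary
# "green"; a watermarking generator favors green tokens, so watermarked
# text shows a green-token rate well above the 50% chance baseline.
import hashlib
import math

def is_green(token: str, key: str = "hypothetical-secret-key") -> bool:
    """Deterministically assign roughly half of all tokens to the green list."""
    digest = hashlib.sha256((key + token).encode()).digest()
    return digest[0] % 2 == 0

def watermark_z_score(tokens: list[str]) -> float:
    """Z-score of the observed green count against the 0.5 chance baseline."""
    n = len(tokens)
    greens = sum(is_green(t) for t in tokens)
    return (greens - 0.5 * n) / math.sqrt(0.25 * n)

# A z-score near 0 is consistent with unwatermarked text; only very long
# texts can reach the extreme scores behind claims like "99.9% certainty."
sample = "the quick brown fox jumps over the lazy dog".split()
print(f"z = {watermark_z_score(sample):.2f}")
```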

The AI Act has entered into force
At last! Companies now need to start complying with one of the world’s first sweeping AI laws, which aims to curb the worst harms. It will usher in much-needed changes to how AI is built and used in the European Union and beyond. I wrote about what will change with this new law, and what won’t, in March. (The European Commission)

How TikTok bots and AI have powered a resurgence in UK far-right violence
Following the tragic stabbing of three girls in the UK, the country has seen a surge of far-right riots and vandalism. The rioters have created AI-generated images that incite hatred and spread harmful stereotypes. Far-right groups have also used AI music generators to create songs with xenophobic content. These have spread like wildfire online thanks to powerful recommendation algorithms. (The Guardian)
