Welcome to The Algorithm 2.0! 

I’m Melissa Heikkilä, MIT Technology Review’s senior reporter for AI. I’m so happy you’re here. Every week I will demystify the latest AI breakthroughs and cut through the hype. This week, I want to talk to you about some of the unforeseen consequences that might come from one of the hottest areas of AI: text-to-image generation. 

Text-to-image AI models are a lot of fun. Enter any random text prompt, and they will generate an image in that vein. Sometimes the results are really silly. But increasingly, they’re impressive, and can pass for high-quality art drawn by a human being. 

I just published a story about a Polish artist called Greg Rutkowski, who paints fantasy landscapes and who has become a sudden hit in this new world. 

Thanks to his distinctive style, Rutkowski is now one of the most commonly used prompts in the new open-source AI art generator Stable Diffusion, which was launched late last month—far more popular than some of the world’s most famous artists, like Picasso. His name has been used as a prompt around 93,000 times.
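
To give you a sense of how low the barrier is, here is a minimal sketch of how someone might generate a Rutkowski-style image from the openly released Stable Diffusion weights. It assumes the Hugging Face diffusers library, PyTorch, and a GPU; the checkpoint name and the prompt are illustrative examples, not drawn from any real user.

    # Illustrative sketch only: assumes the Hugging Face diffusers library,
    # PyTorch, and a CUDA-capable GPU. Checkpoint name and prompt are examples.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-4",  # an openly released Stable Diffusion checkpoint
        torch_dtype=torch.float16,
    ).to("cuda")

    # Appending an artist's name is all it takes to nudge the model toward their style.
    prompt = "a dragon circling a mountain fortress, fantasy landscape, in the style of Greg Rutkowski"
    image = pipe(prompt).images[0]
    image.save("fantasy_landscape.png")

The whole “commission” is one line of text and a few seconds of GPU time, which goes a long way toward explaining how a single artist’s name ended up in tens of thousands of prompts.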

But he’s not happy about it. He thinks it could threaten his livelihood—and he was never given the choice of whether to opt in or out of having his work used this way. 

The story is yet another example of AI developers rushing to roll out something cool without thinking about the humans who will be affected by it. 

Stable Diffusion is free for anyone to use, providing a great resource for AI developers who want to use a powerful model to build products. But because these open-source programs are built by scraping images from the internet, often without permission or proper attribution to artists, they raise tricky questions about ethics, copyright, and security. 

Artists like Rutkowski have had enough. It’s still early days, but a growing coalition of artists is figuring out how to tackle the problem. In the future, we might see the art sector shifting toward pay-per-play or subscription models like those used in the film and music industries. If you’re curious and want to learn more, read my story. 

And it’s not just artists: We should all be concerned about what’s included in the training data sets of AI models, especially as these technologies become a more crucial part of the internet’s infrastructure.

In a paper that came out last year, AI researchers Abeba Birhane, Vinay Uday Prabhu, and Emmanuel Kahembwe analyzed a smaller data set similar to the one used to build Stable Diffusion. Their findings are distressing. Because the data is scraped from the internet, and the internet is a horrible place, the data set is filled with explicit rape images, pornography, malign stereotypes, and racist and ethnic slurs. 

A website called Have I Been Trained lets people search for images used to train the latest batch of popular AI art models. Even innocent search terms get lots of disturbing results. I tried searching the database for my ethnicity, and all I got back was porn. Lots of porn. It’s a depressing thought that the only thing the AI seems to associate with the word “Asian” is naked East Asian women. 

Not everyone sees this as a problem for the AI sector to fix. Emad Mostaque, the founder of Stability.AI, which built Stable Diffusion, said on Twitter that he considered the ethics debate around these models to be “paternalistic silliness that doesn’t trust people or society.”  

But there’s a big safety question. Free open-source models like Stable Diffusion and the large language model BLOOM give malicious actors tools to generate harmful content at scale with minimal resources, argues Abhishek Gupta, the founder of the Montreal AI Ethics Institute and a responsible-AI expert at Boston Consulting Group.

The sheer scale of the havoc these systems enable will constrain the effectiveness of traditional controls, such as limiting how many images people can generate and blocking dodgy content from being produced, Gupta says. Think deepfakes or disinformation on steroids. When a powerful AI system “gets into the wild,” Gupta says, “that can cause real trauma … for example, by creating objectionable content in [someone’s] likeness.” 

We can’t put the cat back in the bag, so we really ought to be thinking about how to deal with these AI models in the wild, Gupta says. This includes monitoring how the AI systems are used after they have been launched, and thinking about controls that “can minimize harms even in worst-case scenarios.” 

Deeper Learning

There’s no Tiananmen Square in the new Chinese image-making AI

My colleague Zeyi Yang wrote this piece about Chinese tech company Baidu’s new AI system called ERNIE-ViLG, which allows people to generate images that capture the cultural specificity of China. It also makes better anime art than DALL-E 2 or other Western image-making AIs.

However, it also refuses to show people results about politically sensitive topics, such as Tiananmen Square, the site of bloody protests in 1989 against the Chinese government.

TL;DR: “When a demo of the software was released in late August, users quickly found that certain words—both explicit mentions of political leaders’ names and words that are potentially controversial only in political contexts—were labeled as ‘sensitive’ and blocked from generating any result. China’s sophisticated system of online censorship, it seems, has extended to the latest trend in AI.” 
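
For what it’s worth, the mechanism described here doesn’t have to be sophisticated. A naive keyword blocklist applied to prompts before they ever reach the model is enough to produce exactly the behavior users reported. The sketch below is purely illustrative and is not Baidu’s implementation; the terms and the function name are invented for this example.

    # Purely illustrative: a naive prompt filter, not Baidu's actual system.
    # The blocklist contents and function name are invented for this example.
    SENSITIVE_TERMS = {"tiananmen", "placeholder political term"}

    def filter_prompt(prompt):
        """Return the prompt if it passes the blocklist, or None if it is blocked."""
        lowered = prompt.lower()
        if any(term in lowered for term in SENSITIVE_TERMS):
            return None  # the image model never sees the request
        return prompt

    print(filter_prompt("a watercolor painting of Tiananmen Square"))    # None: blocked
    print(filter_prompt("a watercolor painting of a mountain village"))  # passes through

The hard part isn’t the code; it’s deciding who writes the list and on what grounds, which is exactly the question Giada Pistilli takes up below.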

Whose values: Giada Pistilli, principal ethicist at AI startup Hugging Face, says the difficulty of identifying a clear line between censorship and moderation is a result of differences between cultures and legal regimes. “When it comes to religious symbols, in France nothing is allowed in public, and that’s their expression of secularism,” says Pistilli. “When you go to the US, secularism means that everything, like every religious symbol, is allowed.”

As AI matures, we need to be having continuous conversations about the power relations and societal priorities that underpin its development. We need to make difficult choices. Are we okay with using Chinese AI systems that have been censored in this way? Or with another AI model that has been trained to conclude that Asian women are sex objects and people of color are gang members?

AI development happens at breakneck speed. It feels as if there is a new breakthrough every few months, and researchers are scrambling to publish papers before their competition. Often, when I talk to AI developers, these ethical considerations seem to be an afterthought, if they have thought about them at all. But whether they want to or not, they should think about these issues—the backlash we’ve seen against companies such as Clearview AI should act as a warning that moving fast and breaking things doesn’t work. 

Bits and Bytes

An AI that can design new proteins could help unlock new cures and materials. 
Machine learning is revolutionizing protein design by offering scientists new research tools. One tool, developed by a group of researchers at the University of Washington, could open up an entirely new universe of possible proteins to design from scratch, potentially paving the way for better vaccines, novel cancer treatments, or completely new materials. (MIT Technology Review)

An AI used medical notes to teach itself to spot disease on chest x-rays. 
The model can diagnose problems as accurately as a human specialist, and it doesn’t need lots of labor-intensive training data. (MIT Technology Review)

A surveillance artist shows how Instagram magic is made.
An artist is using AI and open cameras to show behind-the-scenes footage of how influencers’ Instagram pictures were taken. Fascinating and creepy! (Input mag)

Scientists tried to teach a robot called ERICA to laugh at their jokes.
The team say they hope to improve conversations between humans and AI systems. The humanoid robot is designed to look like a woman, and the system was trained on data from speed-dating dialogues between male university students at Kyoto University and the robot, which was initially operated remotely by female actors. You can draw your own conclusions. (The Guardian)
 

That’s it from me. Thanks for joining me for this first edition, and I hope to see you again next week! 

Melissa
