Around 100,000 people have played with OpenAI’s latest image-making program DALL-E 2 since its invite-only launch in April. Today the San Francisco-based company opens the door to a million more, MIT Technology Review can reveal.

OpenAI is turning its research project into a commercial product, launching the DALL-E Beta, which will be available as a paid-for service to everybody on the DALL-E 2 waiting list. “We’ve seen much more interest than we had anticipated, much bigger than it was for GPT-3,” says Peter Welinder, vice president of product and partnerships at OpenAI.

Paying customers will now be able to use the images they create with DALL-E in commercial projects, such as illustrations in children’s books, concept art for games and movies, and marketing brochures. But the product launch will also be the biggest test yet for the company’s preferred approach to rolling out its powerful AI, which is to release it to customers in stages and address problems as they arise.

A DALL-E Beta subscription won’t break the bank. $15 buys you 115 credits, with one credit letting you submit a text prompt to the AI, which returns four images at a time. In other words, that’s $15 for 460 images. On top of this, users get 50 free credits in their first month and 15 free credits a month after that. Still, with users typically generating dozens of images at a time and keeping only the best, power users could soon burn through that quota.
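To make the pricing concrete, here is a minimal back-of-the-envelope sketch of the arithmetic above, written in Python. The figures ($15 for 115 credits, four images per credit, plus the free monthly credits) come from OpenAI's announcement; the constant and function names are purely illustrative.

```python
# Rough cost math for the DALL-E Beta pricing described above.
# Assumed figures (from the announcement): $15 buys 115 credits,
# each prompt costs one credit and returns four images.

IMAGES_PER_CREDIT = 4
CREDITS_PER_PACK = 115
PACK_PRICE_USD = 15.00

def cost_per_image(pack_price: float = PACK_PRICE_USD,
                   credits: int = CREDITS_PER_PACK,
                   images_per_credit: int = IMAGES_PER_CREDIT) -> float:
    """Dollar cost of a single generated image from one paid credit pack."""
    return pack_price / (credits * images_per_credit)

if __name__ == "__main__":
    total_images = CREDITS_PER_PACK * IMAGES_PER_CREDIT  # 460 images per $15 pack
    print(f"${PACK_PRICE_USD:.2f} buys {total_images} images "
          f"(~${cost_per_image():.3f} per image)")
```

At roughly three cents an image, a casual user is unlikely to notice the cost, which is why the quota matters mainly to the power users described above.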

Experiments I conducted with DALL·E 2 from @OpenAI replicating styles of well known portrait photographers using photo-realistic AI.
1. Dorothea Lange pic.twitter.com/845AzE51xu

— Michael Green (@triplux) June 30, 2022

In the lead-up to this launch, OpenAI has been working with early adopters to troubleshoot the tool. The first wave of users has produced a steady stream of surreal and striking images, from mash-ups of cute animals, to pictures that imitate the style of real photographers with eerie accuracy, to mood boards for restaurants and sneaker designs. This has allowed OpenAI to explore the strengths and weaknesses of its tool. “They’ve been giving us a ton of really great feedback,” says Joanne Jang, product manager at OpenAI.

OpenAI has already taken steps to control what kind of images users can produce. For example, people cannot generate images that show well-known individuals. In preparation for this commercial launch, OpenAI has addressed another serious problem that early users flagged. The version of DALL-E released in April often produced images containing clear gender and racial bias, such as images of CEOs and firefighters who were all white men, and teachers and nurses who were all white women.

On July 18, OpenAI announced a fix. When users ask DALL-E 2 to generate an image that includes a group of people, the AI now draws on a dataset of samples that OpenAI claims is more representative of global diversity. In its own testing, OpenAI found that users were 12 times more likely to report that DALL-E 2’s output included people of diverse backgrounds.

It’s a necessary fix, but a superficial one. OpenAI addresses most of the problems that its users flag by filtering what people can ask for or censoring what the underlying model produces. But it is not fixing problems in the model itself, or in the data it is trained on. This approach lets OpenAI move quickly, but some say it’s simply putting on a band-aid.

“The issue of social biases in algorithms is huge,” says Judy Wajcman at the London School of Economics, who also studies gender in data science and AI at the Turing Institute. “A lot of energy goes into technical fixes, and I laud all those efforts, but they’re not long-term solutions to the problem.”

Still, OpenAI says that its work addressing DALL-E 2’s gender and racial bias gave it the confidence to go ahead with the full launch. It won’t be the final word, however. Bias in AI is a pernicious and intractable problem, and the company will have to carry on its game of whack-a-mole as new examples arise. OpenAI says it will pause the rollout whenever the product needs tweaking.

It’s a balancing act, says Welinder. Tweaks can sometimes curb what users create in unexpected ways. For example, when OpenAI first released its fix for gender bias, some users complained that they were now getting too many female Super Marios. That kind of case is hard to predict ahead of time, says Welinder: “Seeing what people were trying to create from it lets us fine-tune and calibrate.”

But monitoring hundreds of millions of images produced by a million or more users will be a vast undertaking. Welinder won’t be drawn on how many human moderators it will take, but they will be in-house staff, he says. The company takes a hybrid approach to moderation, mixing human judgment with automated inspection. Welinder says that the make-up of the team can be adapted as required by adding more moderators or adjusting the balance between human and machine intervention.

In May, Google showed off its own image-making AI, called Imagen. Unlike OpenAI, Google has said very little about its plans for the technology. “We still don’t have anything new to share re Imagen yet,” says Google spokesperson Brian Gabriel.

When OpenAI was founded in 2015, it presented itself as a pure research lab, with a belief in artificial general intelligence and a commitment to making sure that the technology, if it ever arrived, would benefit humanity. But in the last few years, it has pivoted to become a product company, offering its powerful AI to paying customers.

It’s still all part of the same vision, says Welinder. “Deploying our technology as a product and at scale is a critical part of our mission. It’s important to iterate on the usefulness and safety around the technology early, while the stakes are lower.”
