This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

How was your break? I spent mine back home in snowy Finland, extremely offline. Bliss! I hope you’re well-rested, because this year is going to be even wilder than 2022 for AI. 

Last year was a big one for so-called generative AI, like the text-to-image model Stable Diffusion and the text generator ChatGPT. It was the first time many non-techy people got hands-on experience with an AI system. 

Despite my best efforts not to think about AI during the holidays, everyone I met seemed to want to talk about it. I met a friend’s cousin who admitted to using ChatGPT to write a college essay (and went pale when he heard I had just written a story about how to detect AI-generated text); random people at a bar who, unprompted, started telling me about their experiments with the viral Lensa app; and a graphic designer who was nervous about AI image generators.

This year we are going to see AI models with more tricks up their metaphorical sleeves. My colleague Will Douglas Heaven and I have taken a stab at predicting what’s likely to arrive in the field of AI in 2023.

One of my predictions is that we will see the AI regulatory landscape move from vague, high-level ethical guidelines to concrete regulatory red lines, as regulators in the EU finalize rules for the technology and US government agencies such as the Federal Trade Commission mull rules of their own.

Lawmakers in Europe are working on rules for the image- and text-producing generative AI models that have generated so much excitement recently, such as Stable Diffusion, LaMDA, and ChatGPT. Those rules could spell the end of the era of companies releasing their AI models into the wild with little to no safeguards or accountability. 


These models increasingly form the backbone of many AI applications, yet the companies that make them are fiercely secretive about how they are built and trained. We don’t know much about how they work, and that makes it difficult to understand how the models generate harmful content or biased outcomes, or how to mitigate those problems. 

The European Union is planning to update its upcoming sweeping AI regulation, called the AI Act, with rules that force these companies to shed some light on the inner workings of their AI models. The law will likely be passed in the second half of the year, and after that, companies that want to sell or use AI products in the EU will have to comply or face fines of up to 6% of their total worldwide annual turnover. 

The EU calls these generative models “general-purpose AI” systems, because they can be used for many different things (not to be confused with artificial general intelligence, the much-hyped idea of AI superintelligence). For example, large language models such as GPT-3 can be used in customer service chatbots or to create disinformation at scale, and Stable Diffusion can be used to make images for greeting cards or nonconsensual deepfake porn. 

While the exact way in which these models will be regulated in the AI Act is still under heated debate, creators of general-purpose AI models, such as OpenAI, Google, and DeepMind, will likely need to be more open about how their models are built and trained, says Dragoș Tudorache, a liberal member of the European Parliament who is part of the team negotiating the AI Act. 

Regulating these technologies is tricky, because there are two different sets of problems associated with generative models, and those have very different policy solutions, says Alex Engler, an AI governance researcher at the Brookings Institution. One is the dissemination of harmful AI-generated content, such as hate speech and nonconsensual pornography, and the other is the prospect of biased outcomes when companies integrate these AI models into hiring processes or use them to review legal documents. 


Sharing more information on models might help third parties who are building products on top of them. But when it comes to the spread of harmful AI-generated content, more stringent rules are required. Engler suggests that creators of generative models should be required to build in restraints on what the models will produce, monitor their outputs, and ban users who abuse the technology. But even that won’t necessarily stop a determined person from spreading toxic content.

While tech companies have traditionally been loath to reveal their secret sauce, the current push from regulators for more transparency and corporate accountability might usher in an era in which AI development is less exploitative and more respectful of rights such as privacy. That gives me hope for this year. 

Deeper Learning

Generative AI is changing everything. But what’s left when the hype is gone?

Each year, MIT Technology Review’s reporters and editors select 10 breakthrough technologies that are likely to shape the future. Generative AI, the hottest thing in AI right now, is one of this year’s picks. (But you can, and should, read about the other nine technologies.)

What’s going on: Text-to-image AI models such as OpenAI’s DALL-E took the world by storm. Their popularity surprised even their creators. And while we will have to wait to see exactly what lasting impact these tools will have on creative industries, and on the entire field of AI, it’s clear this is just the beginning. 

What’s coming: This year is likely to introduce us to AI models that can do many different things, from generating images from text in multiple languages to controlling robots. Generative AI could eventually be used to produce designs for everything from new buildings to new drugs. “I think that’s the legacy,” Sam Altman, a cofounder of OpenAI, told Will Douglas Heaven. “Images, video, audio—eventually, everything will be generated. I think it is just going to seep everywhere.” Read Will’s story.


Bits and Bytes

Microsoft and OpenAI want to use ChatGPT to power Bing searches 
Microsoft is hoping to use the powerful language model to compete with Google Search; it could launch the new feature as early as March. Microsoft also wants to use ChatGPT in its word processing software Word and in Outlook emails. But the company will have to work overtime to ensure that the results are accurate, or it risks alienating users. (The Information)

Apple unveils a catalogue of AI-voiced audiobooks
Apple has quietly launched a suite of audiobooks completely narrated by an AI. While the move may be smart for Apple—the company will be able to roll out audiobooks quickly and at a fraction of the cost involved in hiring human actors—it will likely spark backlash from a growing coalition of artists who are worried about AI taking their jobs. (The Guardian)

Meet the 72-year-old congressman who is pursuing a degree in AI
Tech companies often criticize lawmakers for not understanding the technology they are trying to regulate. Don Beyer, a Democrat from Virginia, hopes to change that. He is pursuing a master’s degree in machine learning at George Mason University, hoping to use the knowledge he gains to steer regulation and promote more ethical uses of AI in mental health. (The Washington Post)
