Dragoș Tudorache is feeling pretty damn good. We’re sitting in a conference room in a chateau overlooking a lake outside Brussels, sipping glasses of cava. The Romanian liberal member of the European Parliament has spent the day hosting a conference on AI, defense and geopolitics attended by nearly 400 VIP guests. The day is almost over, and Tudorache has promised to squeeze in an interview with me during cocktail hour. 

A former interior minister, Tudorache is one of the most important players in European AI policy. He is one of the two lead negotiators of the AI Act in the European Parliament. The bill is the first sweeping AI law of its kind in the world, and will enter into force this year. We first met two years ago, when Tudorache was appointed as the lead negotiator for the bill. 

But Tudorache’s interest in AI started much earlier, in 2015. He says reading Nick Bostrom’s book Superintelligence, which explores how an AI superintelligence could be created and what its implications would be, made him realize both the potential and the dangers of AI, and the need to regulate it. (Bostrom has recently been embroiled in a scandal over racist views he expressed in emails unearthed from the ’90s. Tudorache says he has not followed Bostrom’s career since the book’s publication, and did not comment.) 

When he was elected to the European Parliament in 2019, he says he arrived determined to work on AI regulation if the opportunity presented itself. 

“When I heard [Ursula] von der Leyen [the European Commission President] say in her first speech in front of Parliament that there will be AI regulation, I said ‘Whoo ha, this is my moment,’” Tudorache says. 

Since then, Tudorache has chaired a special committee on AI, and shepherded the AI Act through the European Parliament and into its final form following negotiations with other EU institutions. 

It’s been a wild ride, with intense negotiations, the rise of ChatGPT, lobbying from tech companies, and flip-flopping by some of Europe’s largest economies. But now that the AI Act has passed into law, Tudorache’s job on it is done and dusted, and he says he has no regrets. Although the AI Act has been criticized both by civil society, for not doing enough to protect human rights, and by industry, for being too restrictive, Tudorache says the bill’s final form was the sort of compromise he expected. Politics is the art of compromise, after all. 


“There’s going to be a lot of building the plane while flying and there’s going to be a lot of learning while doing,” he says. “But if the true spirit of what we meant with legislation is well understood by all concerned, I do think that the outcome can be a positive one,” he adds. 

It is still early days—the law only comes fully into force two years from now. But Tudorache believes it will change the tech industry for the better, and will kick off a process in which companies take responsible AI seriously, thanks to the Act’s legally binding obligations for AI companies to be more transparent about how their models are built. (I wrote about the five things you need to know about the AI Act a couple of months ago here.)

“The fact that we now have a blueprint for how you put the right boundaries, while also leaving room for innovation is something that will serve society,” says Tudorache. It will also serve businesses, he says, because it offers a predictable path forward on what you can and cannot do with AI. 

But the AI Act is just the beginning, and there is still plenty keeping Tudorache up at night. AI is ushering in big changes across every industry and society, and will change everything from healthcare to education, labor, defense and even our creativity. Most countries have not grasped what AI will mean for them, he says, and the responsibility now lies with governments to ensure citizens and society more broadly are ready for the AI age. 


“The crunch time… starts now,” he says. 

Join Dragoș Tudorache and me at Emtech Digital London on April 16-17! Tudorache will walk you through what companies need to take into account with the AI Act right now. See you in a couple of weeks! 

Now read the rest of The Algorithm

Deeper Learning

A conversation with OpenAI’s first artist in residence

Alex Reben’s work is often absurd, sometimes surreal: a mash-up of giant ears imagined by DALL-E and sculpted by hand out of marble; critical burns generated by ChatGPT that thumb the nose at AI art. But its message is relevant to everyone. Reben is interested in the roles humans play in a world filled with machines, and how those roles are changing. He is also OpenAI’s first artist in residence. 
Meet the artist: Officially, the appointment started in January and lasts three months. But he’s been working with OpenAI for years already. Our senior editor for AI Will Douglas Heaven sat down with Reben to talk about the role AI can play in art, and the backlash by artists against AI. Read more here.

Bits and Bytes

It’s easy to tamper with watermarks from AI-generated text

Watermarks for AI-generated text are easy to remove and can be stolen and copied, rendering them useless, researchers have found. They say these kinds of attacks discredit watermarks, and can fool people into trusting text they shouldn’t. It’s an especially significant finding as many regulations around the world, including the AI Act, are betting heavily on the development of watermarks to trace AI-generated content. (MIT Technology Review)

How three filmmakers created Sora’s latest stunning videos

In the last month, a handful of filmmakers have taken OpenAI’s new generative AI model Sora for a test drive. The results are amazing. The short films are a big jump up even from the cherry-picked demo videos that OpenAI used to tease Sora just six weeks ago. Here’s how three of the filmmakers did it. (MIT Technology Review)


What’s next for generative video

Generative video will probably upend a wide range of businesses and change the roles of many professionals, from animators to advertisers. Fears of misuse are also growing. The widespread ability to generate fake video will make it easier than ever to flood the internet with propaganda and nonconsensual porn. We can see it coming. The problem is, nobody has a good fix. (MIT Technology Review)

Google is considering charging for AI-powered search

In a major potential shakeup to Google’s business model, the tech giant is considering putting AI-powered search features behind a paywall. But, considering how untrustworthy AI search results are, it’s unclear if people will want to pay for them. (Financial Times)

The fight for AI talent heats up 

As layoffs sweep through the tech sector, AI jobs are still super hot. Tech giants are fighting each other for top talent, even offering seven-figure salaries, and poaching entire engineering teams with experience in generative AI. (The Wall Street Journal)

Inside Big Tech’s underground race to buy AI training data

AI models need to be trained on massive datasets, and big tech companies are quietly paying for datasets, chat logs and personal photos hidden behind paywalls and login screens. (Reuters)

How tech giants cut corners to harvest data for AI

AI companies are running out of quality training data for their huge AI models. In order to harvest more data, tech companies such as OpenAI, Google and Meta have cut corners, ignored corporate policies and debated bending the law, the New York Times found. (The New York Times)
