This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

There’s an AI revolution brewing. Last week, Hollywood’s union for actors went on strike, joining a writers’ strike already in progress—the first time these unions have been on strike simultaneously in six decades. Artificial intelligence has become a big bone of contention for creatives. 

Writers are protesting against studios’ use of AI language models to write scripts. Actors are on strike after rejecting a proposal from companies seeking to use AI technology to scan people’s faces and bodies, and own the right to use these deepfake-style digital copies without consent or compensation in perpetuity. 

What connects these cases is a fear that humans will be replaced by computer programs, and a feeling that there’s very little we can do about it. No wonder. Our lax approach to regulating the excesses of the previous tech boom means AI companies have felt safe building and launching products that are exploitative and harmful.

But that is about to change. The generative AI boom has revived American politicians’ enthusiasm for passing AI-specific laws. Though it’ll take a while until that has any effect, existing laws already provide plenty of ammunition for those who say their rights have been harmed by AI companies. 

I just published a story looking at the flood of lawsuits and investigations that have hit those companies recently. These lawsuits are likely to be very influential in ensuring that the way AI is developed and used in the future is more equitable and fair. Read it here.

The gist is that last week, the Federal Trade Commission opened an investigation into whether OpenAI violated consumer protection laws by scraping people’s online data to train its popular AI chatbot ChatGPT. 


Meanwhile, artists, authors, and the image company Getty are suing AI companies such as OpenAI, Stability AI, and Meta, alleging that they broke copyright laws by training their models on their work without providing any recognition or payment. Last week comedian and author Sarah Silverman joined the authors’ copyright fight against AI companies. 

Both the FTC investigation and the slew of lawsuits revolve around AI’s data practices, which rely on hoovering up data from the internet to train models. This inevitably includes personal data as well as copyrighted works.

These cases will essentially determine how AI companies are legally allowed to behave, says Matthew Butterick, a lawyer who represents artists and authors, including Silverman, in class actions against GitHub, Microsoft, OpenAI, Stability AI, and Meta.

The reality is that AI companies have a ton of choices when it comes to how they build their models and what data they use. (Whether they care is another thing.) Courts could force the companies to share how they’ve built their models and what kind of data has gone into their data sets. Increasing the transparency around AI models is a welcome move and would help burst the myth that AI is somehow magical. 

Strikes, investigations, and court cases could also help pave the way for artists, actors, authors, and others to be compensated, through a system of licensing and royalties, for the use of their work as training data for AI models.

But to me, these court cases are a sign of a bigger fight we are starting as a society. They will help determine how much power we are comfortable giving private companies, and how much agency we are going to have in this brave new AI-powered world. 


I think that’s something worth fighting for. 

Deeper Learning

Bill Gates isn’t too scared about AI

Bill Gates has joined the chorus of big names in tech who have weighed in on the question of risk around artificial intelligence. TL;DR? He’s not too worried—we’ve been here before.

No fearmongering here: On the AI risk hyperbole spectrum, Gates lands squarely in the middle. He frames the debate as one pitting “longer-term” against “immediate” risk, and chooses to focus on “the risks that are already present, or soon will be.” He also urges fast but cautious action to address all the harms on his list. The problem is that he doesn’t offer anything new. Many of his suggestions are tired; some are frankly facile. Read more from Will Douglas Heaven here.

Bits and Bytes

ChatGPT can turn bad writers into better ones
What if ChatGPT doesn’t replace human writers, but makes less skilled ones better? A new study from MIT, published in Science, suggests it could help reduce gaps in writing ability between employees. The researchers found that AI could enable less experienced workers who lack writing skills to produce work similar in quality to that of more skilled colleagues. It’s an intriguing glimpse at how AI could change the workplace. (MIT Technology Review)

Mustafa Suleyman’s new Turing test would see if AI can make $1 million 
In this op-ed, the cofounder of DeepMind proposes a new way to measure the intelligence of modern AI systems. His test would have people ask an AI model to make $1 million on a retail web platform in a few months with just a $100,000 investment. This, he argues, would exhibit a level of planning and skill in machines that could be a “seismic moment for the world economy.” (MIT Technology Review)


AI’s data annotators in the spotlight
Three new stories look at the often thankless and low-paid human labor that goes into making AI systems seem smart. Rest of World spoke with outsourced workers from Manila to Cairo about how generative AI is changing their work and income. Bloomberg got a look at internal Google documents instructing annotators on how to label data for its new chatbot Bard. It found that annotators encountered bestiality, war footage, child pornography, and hate speech. And finally, the Wall Street Journal has a new podcast episode dedicated to Kenyan data annotators for ChatGPT, who share their difficult work experiences on the record. 

Inside the white-hot center of AI doomerism
AI startup Anthropic launched Claude 2, its rival to ChatGPT. Anthropic is one of the poster companies for preventing existential AI doom. This piece has some hilarious details about the anxiety inside the company and looks at why tech companies keep building AI technologies while simultaneously saying they fear these technologies will kill us all. (New York Times)

AI is making politics easier, cheaper, and more dangerous
As America gets ready for a chaotic election season, this great piece looks at how generative AI will change political campaigning and communication, and the risks associated with that. (Bloomberg)
