This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

It feels as though a switch has been flipped in AI policy. For years, US legislators and American tech companies were reluctant to introduce strict technology regulation, if not outright opposed to it. Now both have started begging for it.

Last week, OpenAI CEO Sam Altman appeared before a US Senate committee to talk about the risks and potential of AI language models. Altman, along with many senators, called for international standards for artificial intelligence. He also urged the US to regulate the technology, suggesting a new agency, much like the Food and Drug Administration, to oversee AI. 

For an AI policy nerd like me, the Senate hearing was both encouraging and frustrating. Encouraging because the conversation seems to have moved past promoting wishy-washy self-regulation and on to rules that could actually hold companies accountable. Frustrating because the debate seems to have forgotten the past five-plus years of AI policy. I just published a story looking at all the existing international efforts to regulate AI technology. You can read it here.

I’m not the only one who feels this way. 

“To suggest that Congress starts from zero just plays into the industry’s favorite narrative, which is that Congress is so far behind and doesn’t understand technology—how could they ever regulate us?” says Anna Lenhart, a policy fellow at the Institute for Data Democracy and Policy at George Washington University, and a former Hill staffer. 

In fact, politicians in the last Congress, which ran from January 2021 to January 2023, introduced a ton of legislation around AI. Lenhart put together this neat list of all the AI-related bills proposed during that time. They cover everything from risk assessments to transparency to data protection. None of them made it to the president’s desk, but given that buzzy (or, to many, scary) new generative AI tools have captured Washington’s attention, Lenhart expects some of them to be revamped and make a reappearance in one form or another. 


Here are a few to keep an eye on. 

Algorithmic Accountability Act

This bill was introduced by Democrats in both the House and the Senate in 2022, pre-ChatGPT, to address the tangible harms of automated decision-making systems, such as ones that denied people pain medication or rejected their mortgage applications. 

The bill would require companies to do algorithmic impact and risk assessments, says Lenhart. It would also put the Federal Trade Commission in charge of regulating and enforcing rules around AI, and boost its staff numbers.

American Data Privacy Protection Act

This bipartisan bill was an attempt to regulate how companies collect and process data. It gained lots of momentum as a way to help women keep their personal health data safe after Roe v. Wade was overturned, but it failed to pass in time. The debate around the risks of generative AI could give it the added urgency to go further than last time. ADPPA would ban generative AI companies from collecting, processing, or transferring data in a discriminatory way. It would also give users more control over how companies use their data. 

An AI agency

During the hearing, Altman and several senators suggested we need a new US agency to regulate AI. But I think this is a bit of a red herring. The US government needs more technical expertise and resources to regulate the tech, whether it be in a new agency or in a revamped existing one, Lenhart says. And more importantly, any regulator, new or old, needs the power to enforce the laws. 

“It’s easy to create an agency and not give it any powers,” Lenhart says. 


Democrats have tried to set up new protections with the Digital Platform Commission Act, the Data Protection Act, and the Online Privacy Act. But these attempts failed, as bills without bipartisan support in the US usually do. 

What’s next? 

Another tech-focused agency is likely on the way. Senators Lindsey Graham, a Republican, and Elizabeth Warren, a Democrat, are working together to create a new digital regulator that might also have the power to police and perhaps license social media companies. 

Democrat Chuck Schumer is also rallying the troops in the Senate to introduce a new bill that would tackle AI harms specifically. He has gathered bipartisan support for a comprehensive AI bill that would set up guardrails for responsible AI development. For example, companies might be required to let external experts audit their tech before it is released, and to give users and the government more information about their AI systems. 

And while Altman seems to have won the Senate Judiciary Committee over, leaders from the commerce committees in both the House and Senate need to be on board for a comprehensive approach to AI regulation to become law, Lenhart says. 

And it needs to happen fast, before people lose interest in generative AI. 

“It’s gonna be tricky, but anything’s possible,” Lenhart says.

Deeper Learning

Meta’s new AI models can recognize and produce speech for more than 1,000 languages

Meta has built AI models that can recognize and produce speech for more than 1,000 languages, a tenfold increase over what’s currently available.

Why this matters: It’s a significant step towards preserving languages that are at risk of disappearing, the company says. There are around 7,000 languages in the world, but existing speech recognition models only cover approximately 100 languages comprehensively. This is because these kinds of models tend to require huge amounts of labeled training data, which is only available for a small number of languages, including English, Spanish, and Chinese. Read more from Rhiannon Williams here.


Bits and Bytes

Google and Apple’s photo apps still can’t find gorillas
Eight years ago, Google’s photo app mislabeled pictures of Black people as gorillas. As a temporary fix, the company prevented any pictures from being labeled as apes. But years later, tech companies haven’t found a solution to the problem, despite big advancements in computer vision. (The New York Times)

Apple bans employees from using ChatGPT
It’s worried the chatbot might leak confidential company information. This is not an unreasonable concern, given that just a couple of months ago OpenAI had to pull ChatGPT offline because of a bug that leaked user chat history. (The Wall Street Journal)

Here’s how AI will ruin work for everyone
Big Tech’s push to integrate AI into office tools will not spell the end of human labor. It’s the opposite: the easier work becomes, the more we will be expected to do. Or as Charlie Warzel writes, this AI boom is going to be less Skynet, more Bain & Company. (The Atlantic)

Does Bard know how many times “e” appears in “ketchup”?
This was a fun piece with a serious purpose: lifting the lid on how large language models work. Google’s chatbot Bard doesn’t know how many letters different words have, because instead of recognizing individual letters, these models break text into chunks called “tokens.” So, for example, Bard might treat “ket,” not “k,” as the first unit of the word “ketchup.” (The Verge)
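You can see tokenization in action yourself. Here is a minimal sketch in Python using tiktoken, OpenAI’s open-source tokenizer. Bard’s own tokenizer isn’t public, so this illustrates the general mechanism rather than Bard’s exact behavior, and the precise token boundaries depend on the vocabulary used:

```python
# Minimal sketch: how a language model "sees" a word as tokens, not letters.
# Uses OpenAI's open-source tiktoken library as a stand-in for Bard's
# (non-public) tokenizer; exact token boundaries vary by vocabulary.
# Install with: pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # vocabulary used by GPT-3.5/GPT-4

word = "ketchup"
token_ids = enc.encode(word)                   # integer IDs the model consumes
pieces = [enc.decode([t]) for t in token_ids]  # the text chunk behind each ID

print("token IDs:", token_ids)
print("chunks the model sees:", pieces)

# Counting letters is trivial in code, but a model that only ever sees
# whole chunks has no direct view of the individual characters.
print(f'the letter "e" appears {word.count("e")} time(s) in "{word}"')
```

Whatever the exact split, the point stands: the model operates on chunk IDs, so asking it about individual letters means asking it to reason about information it never directly receives.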
