This story first appeared in China Report, MIT Technology Review’s newsletter about technology in China. Sign up to receive it in your inbox every Tuesday.

Last year was a banner year for artificial intelligence. Thanks to products like ChatGPT, many millions of people are now directly interacting with AI, talking about it, and grappling with its impact every day.

Some of those people are policymakers, who have been trying hard to respond to the problems AI products pose without reducing our ability to harness their power. 

So at the beginning of this year, my colleagues and I looked around the world for signs of how AI regulations are likely to change this year. We summarized what we found here. 

In China, one of the major moves to be on the lookout for in 2024 is whether the country will follow in the European Union’s footsteps and announce its own comprehensive AI Act. In June of last year, China’s top governing body released a list of legislation it was working on, and an “Artificial Intelligence Law” appeared on it for the first time. 

The Chinese government is already good at reacting to new technologies swiftly. China was probably the first country in the world to introduce legislation on generative AI mere months after ChatGPT’s big break. But a new comprehensive law could give China even more control over how AI disrupts (or doesn’t disrupt) the way things work today.

But you shouldn’t just take my word for it. I asked several experts on Chinese AI regulations what they think will happen in 2024. So in this newsletter, I will share the four main things they said to expect this year.

1. Don’t expect the Chinese “AI Law” to be finalized soon. 

Unlike previous Chinese regulations that focus on subsets of AI such as deepfakes, this new law is aimed at the whole picture, and that means it will take a lot of time to draft. Graham Webster, a research scholar at the Stanford University Center for International Security and Cooperation, guesses that it’s likely we will see a draft of the AI Law in 2024, “but it’s unlikely it will be finalized or effective.” 


One big challenge is that even just judging what is and isn’t AI can be so tricky that trying to tackle everything with one law may be impractical. “[It’s] always a question in law and tech whether a singular law is necessary, or whether it should be addressed in terms of its applications in other areas,” says Jeremy Daum, who researches Chinese laws at the Paul Tsai China Center. “So a generative-AI content regulation makes sense, but just AI? We’ll see what happens.”

2. China’s government is telling AI companies what they should steer clear of  

The Chinese Academy of Social Sciences, a state-owned research institute, drafted an advisory version of the future AI law in 2023, and it’s a helpful reference for what China wants to achieve. One of the most interesting items in the document is a “negative list” of areas and existing products that AI companies should stay clear of unless they have explicit government approval. It’ll be interesting to see what ends up on this list and how it differs from similar bans set by the EU.

“The list subjects only some products, services, and model development to stringent oversight and was designed with the intention of lowering Chinese businesses’ overall regulatory compliance burden,” says Kristy Loke, a research fellow at the Centre for the Governance of AI, a think tank. The list tells companies exactly where they shouldn’t go if they want to stay in the government’s good graces, which should help them avoid accidentally angering Beijing. 

3. Third parties may start evaluating AI models

Regulations don’t mean anything if they aren’t enforced. So developing a way to evaluate AI models could be on Chinese regulators’ 2024 checklist, says Jeffrey Ding, an assistant professor of political science at George Washington University. 

What could that look like? “One, the development of a national platform for testing and verifying the safety and security of models, and two, support for third-party assessment organizations to implement regular reviews,” Ding tells me.


(On that note, I wrote about a fascinating, highly detailed document released by Chinese tech companies and academics last year that suggested ways to evaluate AI models. You can read about it here.)

4. China is likely to be lenient on copyright 

Generative AI has created a copyright nightmare, and current laws aren’t up to the job of untangling who owes whom what, and why. Angela Zhang, a law professor at the University of Hong Kong, expects more policy guidelines and court decisions from China to clarify potential IP issues next year.

China’s government will likely be lenient toward AI companies. “Given the overarching national agenda to encourage the growth and development of the AI sector, it is very unlikely Chinese administrative agencies will take an aggressive stance in investigating firms for AI-related infringements. In the meantime, Chinese courts will take a business-friendly approach in deciding IP cases,” Zhang says.

Needless to say, I will be watching all four areas in the new year and updating you about them in the newsletter. 

And if you want to stay up to date on tech developments in the US, Europe, and beyond, you should really read my colleagues’ newsletters, like The Algorithm on all things AI and The Technocrat, on power, politics, and tech. 

Is there something missing from this list? What are you expecting from Chinese regulators in 2024? Tell me your thoughts at zeyi@technologyreview.com.

Catch up with China

1. Lai Ching-te (also known as William Lai), the Taiwanese presidential candidate whose political stance was least welcome to Beijing, won the election on Saturday. (Al Jazeera)

Taiwanese prosecutors arrested an online journalist, claiming he published fabricated election poll results as part of a Chinese disinformation campaign. (Politico)

Many Taiwanese people worship folk deities with roots in China. Those religious lineages have become increasingly politicized during this election cycle. (BBC)

2. Microsoft’s AI research lab in Beijing used to be a successful example of an international research collaboration. Now it’s a liability for the company amid US-China political tensions. (New York Times $)

3. The Chinese government has spent more than $65 billion to build up Xiongan, a supposedly era-defining smart city. But the city is still empty, as people hesitate to move there. (Bloomberg $)


4. The Beijing municipal government bragged about employing a tech company to crack the encryption of Apple’s AirDrop service in order to find out who used it to send anonymous protest messages. (AFP)

5. How has the Great Wall of China survived thousands of years of deterioration? A “living skin” of tiny, rootless plants and microorganisms has helped, according to new research. (CNN)

6. Ships passing through the Red Sea are broadcasting their links to China to avoid attack by militants in Yemen. (Bloomberg $)

Lost in translation

Chinese tech companies finally returned to the International Consumer Electronics Show (CES), which just concluded in Las Vegas, after a years-long hiatus during which China closed its borders amid the pandemic. Still, according to the publication Wu Xiaobo Channel, attendees observed that Chinese companies occupied only a fifth to a third as much exhibition space as they did at their peak. In particular, they said it was a pity that Chinese electric-vehicle companies didn’t come to the US to showcase the rapid development of EV manufacturing in China. (The one exception was Xpeng, which drew a lot of attention with its flying-car model.) However, many companies from other countries mentioned that they source batteries from China.

One more thing

New AI models have made it easy to generate songs that closely mimic a singer’s voice. Some musicians hate it and are taking legal action; others are embracing it. Wan Kwong, a 75-year-old Hong Kong singer, released a song last year that featured an AI-generated clone of his voice in his youth. Together, Wan and his younger self reflected on his long and fruitful artistic journey: “When my voice isn’t what it was, I’ll entrust my life mission to the AI.” It’s one of the most touching AI artworks I’ve encountered lately. You can listen to it here.
