Last week Google revealed it is going all in on generative AI. At its annual I/O conference, the company announced it plans to embed AI tools into virtually all of its products, from Google Docs to coding and online search. (Read my story here.) 

Google’s announcement is a huge deal. Billions of people will now get access to powerful, cutting-edge AI models to help them do all sorts of tasks, from generating text to answering queries to writing and debugging code. As MIT Technology Review’s editor in chief, Mat Honan, writes in his analysis of I/O, it is clear AI is now Google’s core product. 

Google’s approach is to introduce these new functions into its products gradually. But it will most likely be just a matter of time before things start to go awry. The company has not solved any of the common problems with these AI models. They still make stuff up. They are still easy to manipulate to break their own rules. They are still vulnerable to attacks. There is very little stopping them from being used as tools for disinformation, scams, and spam. 

Because these sorts of AI tools are relatively new, they still operate in a largely regulation-free zone. But that doesn’t feel sustainable. Calls for regulation are growing louder as the post-ChatGPT euphoria wears off, and regulators are starting to ask tough questions about the technology.

US regulators are trying to find a way to govern powerful AI tools. This week, OpenAI CEO Sam Altman will testify in the US Senate (after a cozy “educational” dinner with politicians the night before). The hearing follows a meeting last week between Vice President Kamala Harris and the CEOs of Alphabet, Microsoft, OpenAI, and Anthropic.

In a statement, Harris said the companies have an “ethical, moral, and legal responsibility” to ensure that their products are safe. Senator Chuck Schumer of New York, the majority leader, has proposed legislation to regulate AI, which could include a new agency to enforce the rules. 

“Everybody wants to be seen to be doing something. There’s a lot of social anxiety about where all this is going,” says Jennifer King, a privacy and data policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence. 

Getting bipartisan support for a new AI bill will be difficult, King says: “It will depend on to what extent [generative AI] is being seen as a real, societal-level threat.” But the chair of the Federal Trade Commission, Lina Khan, has come out “guns blazing,” she adds. Earlier this month, Khan wrote an op-ed calling for AI regulation now, to avoid repeating the mistakes that came from being too lax with the tech sector in the past. She signaled that US regulators are more likely to reach for laws already in their tool kit, such as antitrust and commercial practices laws, to regulate AI.

Meanwhile, in Europe, lawmakers are edging closer to a final deal on the AI Act. Last week members of the European Parliament signed off on a draft regulation that called for a ban on facial recognition technology in public places. It also bans predictive policing, emotion recognition, and the indiscriminate scraping of biometric data online. 

The EU is set to create more rules to constrain generative AI too, and the parliament wants companies creating large AI models to be more transparent. These measures include labeling AI-generated content, publishing summaries of copyrighted data that was used to train the model, and setting up safeguards that would prevent models from generating illegal content.

But here’s the catch: the EU is still a long way from implementing rules on generative AI, and many of the proposed elements of the AI Act are not going to make it into the final version. There are still tough negotiations ahead between the parliament, the European Commission, and the EU member countries. It will be years before we see the AI Act in force.

While regulators struggle to get their act together, prominent voices in tech are starting to push the Overton window. Speaking at an event last week, Microsoft’s chief economist, Michael Schwarz, said that we should wait until we see “meaningful harm” from AI before we regulate it. He compared it to driver’s licenses, which were introduced after many dozens of people were killed in accidents. “There has to be at least a little bit of harm so that we see what is the real problem,” Schwarz said. 

This statement is outrageous. The harm caused by AI has been well documented for years. There has been bias and discrimination, AI-generated fake news, and scams. Other AI systems have led to innocent people being arrested, people being trapped in poverty, and tens of thousands of people being wrongfully accused of fraud. These harms are likely to grow exponentially as generative AI is integrated deeper into our society, thanks to announcements like Google’s.

The question we should be asking ourselves is: How much harm are we willing to see? I’d say we’ve seen enough.

Deeper Learning

The open-source AI boom is built on Big Tech’s handouts. How long will it last?

New open-source large language models—alternatives to Google’s Bard or OpenAI’s ChatGPT that researchers and app developers can study, build on, and modify—are dropping like candy from a piñata. These are smaller, cheaper versions of the best-in-class AI models created by the big firms that (almost) match them in performance—and they’re shared for free.

The future of how AI is made and used is at a crossroads. On one hand, greater access to these models has helped drive innovation. It can also help catch their flaws. But this open-source boom is precarious. Most open-source releases still stand on the shoulders of giant models put out by big firms with deep pockets. If OpenAI and Meta decide they’re closing up shop, a boomtown could become a backwater. Read more from Will Douglas Heaven.

Bits and Bytes

Amazon is working on a secret home robot with ChatGPT-like features
Leaked documents show plans for an updated version of the Astro robot that can remember what it’s seen and understood, allowing people to ask it questions and give it commands. But Amazon has to solve a lot of problems before these models are safe to deploy inside people’s homes at scale. (Insider)

Stability AI has released a text-to-animation model
The company that created the open-source text-to-image model Stable Diffusion has launched another tool that lets people create animations using text, image, and video prompts. Copyright problems aside, these could become powerful tools for creatives, and the fact that they’re open source makes them accessible to more people. It’s also a stopgap before the inevitable next step, open-source text-to-video. (Stability AI)

AI is getting sucked into culture wars—see the Hollywood writers’ strike
One of the disputes between the Writers Guild of America and Hollywood studios is whether people should be allowed to use AI to write film and television scripts. With wearying predictability, the US culture-war brigade has stepped into the fray. Online trolls are gleefully telling striking writers that AI will replace them. (New York Magazine)

Watch: An AI-generated trailer for Lord of the Rings … but make it Wes Anderson 
This was cute. 
