Generative AI models have become remarkably good at conversing with us, and creating images, videos, and music for us, but they’re not all that good at doing things for us. 

AI agents promise to change that. Think of them as AI models with a script and a purpose. They tend to come in one of two flavors. 

The first, called tool-based agents, can be coached using natural human language (rather than coding) to complete digital tasks for us. Anthropic released one such agent in October—the first from a major AI model-maker—that can translate instructions (“Fill in this form for me”) into actions on someone’s computer, moving the cursor to open a web browser, navigating to find data on relevant pages, and filling in a form using that data. Salesforce has released its own agent too, and OpenAI reportedly plans to release one in January. 
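To make the idea concrete, here is a minimal sketch of the observe-decide-act loop that tool-based agents of this kind generally run. It is purely illustrative: the function names (`query_model`, `take_screenshot`, `execute`) are hypothetical stand-ins, not Anthropic's or Salesforce's actual APIs.

```python
# Illustrative sketch of a tool-based agent's control loop.
# All names here are hypothetical stand-ins, not any vendor's real interface.

from dataclasses import dataclass


@dataclass
class Action:
    kind: str          # e.g. "click", "type", "screenshot", "done"
    argument: str = ""


def take_screenshot() -> bytes:
    """Stand-in for capturing the current state of the user's screen."""
    return b""


def query_model(goal: str, screenshot: bytes) -> Action:
    """Stand-in for asking a multimodal model what to do next, given the goal
    and a view of the screen."""
    return Action(kind="done")


def execute(action: Action) -> None:
    """Stand-in for moving the cursor, clicking, or typing on the user's behalf."""
    print(f"executing {action.kind} {action.argument}")


def run_agent(goal: str, max_steps: int = 20) -> None:
    """Observe the screen, ask the model for the next step, act, and repeat."""
    for _ in range(max_steps):
        action = query_model(goal, take_screenshot())
        if action.kind == "done":
            break
        execute(action)


run_agent("Fill in this form for me")
```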

The other type of agent is called a simulation agent, and you can think of these as AI models designed to behave like human beings. The first people to work on creating these agents were social science researchers. They wanted to conduct studies that would be expensive, impractical, or unethical to do with real human subjects, so they used AI to simulate subjects instead. This trend particularly picked up with the publication of an oft-cited 2023 paper by Joon Sung Park, a PhD candidate at Stanford, and colleagues called “Generative Agents: Interactive Simulacra of Human Behavior.” 

Last week Park and his team published a new paper on arXiv called “Generative Agent Simulations of 1,000 People.” In this work, researchers had 1,000 people participate in two-hour interviews with an AI. Shortly after, the team was able to create simulation agents that replicated each participant’s values and preferences with stunning accuracy.
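For a rough sense of how a simulation agent can work, here is a heavily simplified, illustrative sketch: a language model is conditioned on a participant's interview transcript and asked to answer new questions in that person's voice. The transcript snippet and the `complete` function are hypothetical stand-ins, not the Stanford team's actual pipeline.

```python
# Illustrative sketch of a simulation agent: condition a language model on a
# participant's interview transcript so it answers new questions "as" them.
# The transcript and complete() are hypothetical stand-ins, not the study's method.

INTERVIEW_TRANSCRIPT = """
Interviewer: How do you usually decide which news sources to trust?
Participant: I mostly read local outlets and double-check big claims elsewhere.
"""  # in the real study, roughly two hours of conversation per person


def complete(prompt: str) -> str:
    """Stand-in for a call to a hosted language model."""
    return "(model's answer in the participant's voice)"


def simulate_participant(question: str) -> str:
    """Ask the model to answer as the interviewed participant would."""
    prompt = (
        "Below is an interview with a study participant.\n"
        f"{INTERVIEW_TRANSCRIPT}\n"
        "Answer the following question the way this participant would, "
        "staying consistent with their stated values and preferences.\n"
        f"Question: {question}\nAnswer:"
    )
    return complete(prompt)


print(simulate_participant("Should social media platforms verify users' ages?"))
```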

There are two really important developments here. First, it’s clear that leading AI companies think it’s no longer good enough to build dazzling generative AI tools; they now have to build agents that can accomplish things for people. Second, it’s getting easier than ever to get such AI agents to mimic the behaviors, attitudes, and personalities of real people. What were once two distinct types of agents—simulation agents and tool-based agents—could soon become one thing: AI models that can not only mimic your personality but go out and act on your behalf. 

Research on this is underway. Companies like Tavus are hard at work helping users create “digital twins” of themselves. But Tavus’s CEO, Hassaan Raza, envisions going further, creating AI agents that can take the form of therapists, doctors, and teachers. 

If such tools become cheap and easy to build, they will raise lots of new ethical concerns, but two in particular stand out. The first is that these agents could create even more personal, and even more harmful, deepfakes. Image generation tools have already made it simple to create nonconsensual pornography using a single image of a person, but this crisis will only deepen if it’s easy to replicate someone’s voice, preferences, and personality as well. (Park told me he and his team spent more than a year wrestling with ethical issues like this in their latest research project, engaging in many conversations with Stanford’s ethics board and drafting policies on how the participants could withdraw their data and contributions.) 

The second is the fundamental question of whether we deserve to know whether we’re talking to an agent or a human. If you complete an interview with an AI and submit samples of your voice to create an agent that sounds and responds like you, are your friends or coworkers entitled to know when they’re talking to it and not to you? On the other side, if you ring your cell service provider or doctor’s office and a cheery customer service agent answers the line, are you entitled to know whether you’re talking to an AI?

This future feels far off, but it isn’t. There’s a chance that when we get there, there will be even more pressing and pertinent ethical questions to ask. In the meantime, read more from my piece on AI agents here, and ponder how well you think an AI interviewer could get to know you in two hours.

Now read the rest of The Algorithm

Deeper Learning

Inside Clear’s ambitions to manage your identity beyond the airport

Clear is the most visible biometrics company around, and one you’ve likely interacted with already, whether passing security checkpoints at airports and stadiums or verifying your identity on LinkedIn. Along the way, it’s built one of the largest private repositories of identity data on the planet, including scans of fingerprints, irises, and faces. A confluence of factors is now accelerating the adoption of identity verification technologies—including AI, of course, as well as the lingering effects of the pandemic’s push toward “contactless” experiences—and Clear aims to be the ubiquitous provider of these services. In the near future, countless situations where you might need an ID or credit card could require no more than showing your face. 

Why this matters: Now that biometrics have gone mainstream, what—and who—bears the cost? This convenience, even if chosen by only some of us, leaves all of us wrestling with the effects. If Clear gains ground in its vision, it will move us toward a world where we’re increasingly obligated to give up our biometric data to a system that’s vulnerable to data leaks. Read more from Eileen Guo.

Bits and Bytes

Inside the booming “AI pimping” industry

Instagram is being flooded with hundreds of AI-generated influencers who are stealing videos from real models and adult content creators, giving them AI-generated faces, and monetizing their bodies with links to dating sites, Patreon, OnlyFans competitors, and various AI apps. (404 Media)

How to protect your art from AI

There is little you can do if your work has already been scraped into a data set, but you can take steps to prevent future work from being used that way. Here are four ways to do that. (MIT Technology Review)

Elon Musk and Vivek Ramaswamy have offered details on their plans to cut regulations

In an op-ed, the pair emphasize that their goal will be to immediately use executive orders to eliminate regulations issued by federal agencies, using “a lean team of small-government crusaders.” This means AI guidelines issued by federal agencies under the Biden administration, like ethics rules from the National Institute of Standards and Technology or principles in the National Security Memorandum on AI, could be rolled back or eliminated completely. (Wall Street Journal)

How OpenAI tests its models

OpenAI gave us a glimpse into how it selects people to do its testing and how it’s working to automate the testing process by, essentially, having large language models attack each other. (MIT Technology Review)
