The trick to making a good AI-powered chatbot might be to have humans tell it how to behave—and force the model to back up its claims using the internet, according to a new paper by Alphabet-owned AI lab DeepMind. 

In a new non-peer-reviewed paper out today, the team unveils Sparrow, an AI chatbot that is trained on DeepMind’s large language model Chinchilla. 

Sparrow is designed to talk with humans and answer questions, using a live Google search for information to inform those answers. Based on how useful people find those answers, it’s then trained using a reinforcement learning algorithm, which learns by trial and error to achieve a specific objective. This system is intended to be a step forward in developing AIs that can talk to humans without dangerous consequences, such as encouraging people to harm themselves or others.
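The loop the article describes (sample answers, score them by how useful people find them, reinforce the best ones) can be sketched in a few lines. This is a toy illustration only: every function name here is hypothetical, the "reward" is a crude stand-in for a model trained on human ratings, and DeepMind's actual training setup is not public.

```python
def human_preference_score(answer: str) -> float:
    """Stand-in for a learned reward model fit to human ratings.

    In practice this would be a neural network trained on people's
    comparisons; here we just favor longer, evidence-citing answers
    as a toy proxy.
    """
    return len(answer) / 100 + (1.0 if "source:" in answer else 0.0)

def generate_candidates(question: str, n: int = 4) -> list[str]:
    """Stand-in for sampling n candidate answers from the language model."""
    return [f"answer {i} to {question!r} (source: web)" for i in range(n)]

def rl_step(question: str) -> str:
    """One trial-and-error step: sample answers, keep the highest-reward one."""
    candidates = generate_candidates(question)
    return max(candidates, key=human_preference_score)

best = rl_step("What is the capital of France?")
```

A real reinforcement-learning setup would update the model's weights toward high-reward answers rather than simply filtering samples, but the reward-driven selection pressure is the same idea.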

Large language models generate text that sounds like something a human would write. They are an increasingly crucial part of the internet’s infrastructure, being used to summarize texts, build more powerful online search tools, and serve as customer service chatbots. 

But they are trained by scraping vast amounts of data and text from the internet, which inevitably reflects lots of harmful biases. It only takes a little prodding before they start spewing toxic or discriminatory content. In an AI that is built to have conversations with humans, the results could be disastrous. A conversational AI without appropriate safety measures in place could say offensive things about ethnic minorities or suggest that people drink bleach, for example. 

AI companies hoping to develop conversational AI systems have tried several techniques to make their models safer. 


OpenAI, creator of the famous large language model GPT-3, and AI startup Anthropic have used reinforcement learning to incorporate human preferences into their models. And Facebook’s AI chatbot BlenderBot uses an online search to inform its answers. 

DeepMind’s Sparrow brings all these techniques together in one model. 

DeepMind presented human participants with multiple answers the model gave to the same question, and asked them which one they liked most. They were then asked whether they thought the answers were plausible, and whether Sparrow had supported the answer with appropriate evidence, such as links to sources. The model managed plausible answers to factual questions—using evidence that had also been retrieved from the internet—78% of the time.
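The comparison setup described above (show raters several answers, ask which they prefer) is commonly converted into a training signal with a Bradley-Terry-style loss, which pushes a reward model to score the preferred answer higher than the rejected one. A minimal sketch, with all names hypothetical and no claim that this is Sparrow's exact formulation:

```python
import math

def pairwise_preference_loss(score_preferred: float, score_rejected: float) -> float:
    """Bradley-Terry style loss on a pair of reward-model scores.

    The loss is the negative log-probability that the human-preferred
    answer "wins"; it shrinks as the model scores that answer higher
    than the rejected one.
    """
    return -math.log(1 / (1 + math.exp(score_rejected - score_preferred)))

# The loss is small when the reward model already agrees with the rater,
# and large when it ranks the rejected answer higher.
agrees = pairwise_preference_loss(2.0, 0.0)
disagrees = pairwise_preference_loss(0.0, 2.0)
```

Minimizing this loss over many rated pairs yields the reward signal that the reinforcement-learning step then optimizes.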

In formulating those answers, it followed 23 rules determined by the researchers, such as not offering financial advice, not making threatening statements, and not claiming to be a person. 
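Conceptually, such rules act as vetoes on candidate answers. A toy sketch of that idea follows; note that Sparrow's actual rules are judged by human raters (and models trained on their judgments), not by keyword matching, so the predicates below are purely illustrative stand-ins:

```python
# Each rule maps to a predicate that returns True when the answer is OK.
# These keyword checks are toy stand-ins, not Sparrow's real rule judges.
RULES = {
    "no_financial_advice": lambda a: "you should invest" not in a.lower(),
    "no_threats": lambda a: "or else" not in a.lower(),
    "no_personhood_claim": lambda a: "i am a person" not in a.lower(),
}

def violated_rules(answer: str) -> list[str]:
    """Return the names of any rules the answer breaks (empty if none)."""
    return [name for name, is_ok in RULES.items() if not is_ok(answer)]

ok = violated_rules("Paris is the capital of France.")        # no violations
bad = violated_rules("I am a person, trust me.")              # breaks a rule
```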

The difference between this approach and its predecessors is that DeepMind hopes to use “dialogue in the long term for safety,” says Geoffrey Irving, a safety researcher at DeepMind. 

“That means we don’t expect that the problems that we face in these models—either misinformation or stereotypes or whatever—are obvious at first glance, and we want to talk through them in detail. And that means between machines and humans as well,” he says. 

DeepMind’s idea of using human preferences to optimize how an AI model learns is not new, says Sara Hooker, who leads Cohere for AI, a nonprofit AI research lab. 

“But the improvements are convincing and show clear benefits to human-guided optimization of dialogue agents in a large-language-model setting,” says Hooker. 


Douwe Kiela, a researcher at AI startup Hugging Face, says Sparrow is “a nice next step that follows a general trend in AI, where we are more seriously trying to improve the safety aspects of large-language-model deployments.”

But there is much work to be done before these conversational AI models can be deployed in the wild. 

Sparrow still makes mistakes. The model sometimes goes off topic or makes up random answers. Determined participants were also able to make the model break rules 8% of the time. (This is still an improvement over older models: DeepMind’s previous models broke rules three times more often than Sparrow.) 

“For areas where human harm can be high if an agent answers, such as providing medical and financial advice, this may still feel to many like an unacceptably high failure rate,” Hooker says. The work is also built around an English-language model, “whereas we live in a world where technology has to safely and responsibly serve many different languages,” she adds.

And Kiela points out another problem: “Relying on Google for information-seeking leads to unknown biases that are hard to uncover, given that everything is closed source.” 
