AI is getting better at passing tests designed to measure human creativity. In a study published today in the journal Scientific Reports, AI chatbots achieved higher average scores than humans in the Alternate Uses Task, a test commonly used to assess this ability.
This study will add fuel to an ongoing debate among AI researchers about what it even means for a computer to pass tests devised for humans. The findings do not necessarily indicate that AIs are developing a uniquely human ability. It could simply be that AIs can pass creativity tests, not that they are actually creative in the way we understand the word. However, research like this might give us a better understanding of how humans and machines approach creative tasks.
Researchers started by asking three AI chatbots—OpenAI’s ChatGPT and GPT-4 as well as Copy.Ai, which is built on GPT-3—to come up with as many uses for a rope, a box, a pencil, and a candle as possible within just 30 seconds.
Their prompts instructed the large language models to come up with original and creative uses for each of the items, explaining that the quality of the ideas was more important than the quantity. Each chatbot was tested 11 times for each of the four objects. The researchers also gave 256 human participants the same instructions.
The researchers used two methods to assess both AI and human responses. The first was an algorithm that rated how close the suggested use for the object was to the object's original purpose. The second involved asking six human assessors (who were unaware that some of the answers had been generated by AI systems) to evaluate each response on a scale of 1 to 5 in terms of how creative and original it was—1 being not at all, and 5 being very. Average scores for both humans and AIs were then calculated.
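The article doesn't detail how the algorithmic measure works, but semantic-distance scores of this kind are typically computed from word embeddings: the further a proposed use sits from the object's usual meaning in vector space, the more original it is judged to be. A minimal sketch of that idea, using tiny hand-made toy vectors purely for illustration (real studies use trained embedding models, and these numbers are invented):

```python
import math

# Toy word vectors, invented for illustration only.
# Real semantic-distance scoring uses trained word embeddings.
vectors = {
    "rope":       [0.9, 0.1, 0.0],
    "tie a knot": [0.8, 0.2, 0.1],  # close to the object's usual purpose
    "jump rope":  [0.6, 0.5, 0.2],  # somewhat further away
    "wall art":   [0.1, 0.3, 0.9],  # semantically distant -> more original
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def semantic_distance(obj, use):
    """Higher distance = the use is further from the object's meaning."""
    return 1 - cosine_similarity(vectors[obj], vectors[use])

for use in ("tie a knot", "jump rope", "wall art"):
    print(f"{use}: {semantic_distance('rope', use):.2f}")
```

Under this toy scoring, "wall art" for a rope lands far from the object's original purpose and so receives the highest originality score, which is the intuition behind the automated measure.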
Although the chatbots’ responses were rated higher than the humans’ on average, the best human responses still outscored the chatbots’ best.
While the purpose of the study was not to prove that AI systems are capable of replacing humans in creative roles, it raises philosophical questions about the characteristics that are unique to humans, says Simone Grassini, an associate professor of psychology at the University of Bergen, Norway, who co-led the research.
“We’ve shown that in the past few years, technology has taken a very big leap forward when we talk about imitating human behavior,” he says. “These models are continuously evolving.”
Proving that machines can perform well in tasks designed for measuring creativity in humans doesn’t demonstrate that they’re capable of anything approaching original thought, says Ryan Burnell, a senior research associate at the Alan Turing Institute, who was not involved with the research.
The chatbots that were tested are “black boxes,” meaning that we don’t know exactly what data they were trained on, or how they generate their responses, he says. “What’s very plausibly happening here is that a model wasn’t coming up with new creative ideas—it was just drawing on things it’s seen in its training data, which could include this exact Alternate Uses Task,” he explains. “In that case, we’re not measuring creativity. We’re measuring the model’s past knowledge of this kind of task.”
Even so, it can still be useful to compare how machines and humans approach certain problems, says Anna Ivanova, an MIT postdoctoral researcher studying language models, who did not work on the project.
However, we should bear in mind that although chatbots are very good at completing specific requests, slight tweaks like rephrasing a prompt can be enough to stop them from performing as well, she says. Ivanova believes that these kinds of studies should prompt us to examine the link between the task we’re asking AI models to complete and the cognitive capacity we’re trying to measure. “We shouldn’t assume that people and models solve problems in the same way,” she says.