This story is from The Algorithm, our weekly newsletter on AI. To get it in your inbox first, sign up here.
When the generative AI boom started with ChatGPT in late 2022, we were sold a vision of superintelligent AI tools that know everything, can replace the boring bits of work, and supercharge productivity and economic gains.
Two years on, those productivity gains mostly haven’t materialized. And we’ve seen something peculiar and slightly unexpected happen: People have started forming relationships with AI systems. We talk to them, say please and thank you, and have started to invite AIs into our lives as friends, lovers, mentors, therapists, and teachers.
We’re seeing a giant, real-world experiment unfold, and it’s still uncertain what impact these AI companions will have on us individually or on society as a whole, argue Robert Mahari, a joint JD-PhD candidate at the MIT Media Lab and Harvard Law School, and Pat Pataranutaporn, a researcher at the MIT Media Lab. They say we need to prepare for “addictive intelligence,” or AI companions that have dark patterns built into them to get us hooked. You can read their piece here. They look at how smart regulation can help us prevent some of the risks associated with AI chatbots that get deep inside our heads.
The idea that we’ll form bonds with AI companions is no longer just hypothetical. Chatbots with even more emotive voices, such as OpenAI’s GPT-4o, are likely to reel us in even deeper. During safety testing, OpenAI observed users employing language that indicated they had formed connections with AI models, such as “This is our last day together.” The company itself admits that emotional reliance is one risk that might be heightened by its new voice-enabled chatbot.
There’s already evidence that we’re connecting with AI on a deeper level even when it’s confined to text exchanges. Mahari was part of a group of researchers that analyzed a million ChatGPT interaction logs and found that the second most popular use of AI is sexual role-playing. The overwhelmingly most popular use case for the chatbot was creative composition. People also liked to use it for brainstorming and planning, and for asking for explanations and general information.
These sorts of creative and fun tasks are excellent ways to use AI chatbots. AI language models work by predicting the next likely word in a sentence. They are confident liars, often presenting falsehoods as facts or making things up entirely, a behavior known as hallucination. This matters less when making stuff up is kind of the entire point. In June, my colleague Rhiannon Williams wrote about how comedians found AI language models useful for generating a first “vomit draft” of their material, to which they could then add their own human ingenuity to make it funny.
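The “predicting the next likely word” mechanic can be illustrated with a toy sketch. This is not how a real large language model works internally (those use neural networks trained on vast corpora); it is a deliberately simplified bigram-counting illustration, with a made-up corpus and hypothetical function names:

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: count which word follows
# which in a tiny corpus, then greedily pick the most frequent follower.
corpus = "the cat sat on the mat the cat ate the fish".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    # Return the word most often seen after `word` in the corpus.
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

The sketch also hints at why hallucination happens: the model always produces a fluent-looking next word, whether or not the resulting sentence is true.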
But these use cases aren’t necessarily productive in the financial sense. I’m pretty sure smutbots weren’t what investors had in mind when they poured billions of dollars into AI companies, and, combined with the fact that we still don’t have a killer app for AI, it’s no wonder that Wall Street has been feeling a lot less bullish about it recently.
The use cases that would be “productive,” and have thus been the most hyped, have seen less adoption. Hallucination becomes a problem in many of these, such as code generation, news, and online search, where it matters a lot to get things right. Some of the most embarrassing failures of chatbots have happened when people have trusted them too much, or treated them as sources of factual information. Earlier this year, for example, Google’s AI Overviews feature, which summarizes online search results, suggested that people eat rocks and add glue to pizza.
And that’s the problem with AI hype. It sets our expectations way too high, and leaves us disappointed and disillusioned when the quite literally incredible promises fail to materialize. It also tricks us into thinking AI is a mature technology capable of bringing about instant change. In reality, it might be years until we see its true benefits.
Now read the rest of The Algorithm
Deeper Learning
AI “godfather” Yoshua Bengio has joined a UK project to prevent AI catastrophes
Yoshua Bengio, a Turing Award winner who is considered one of the “godfathers” of modern AI, is throwing his weight behind a project funded by the UK government to embed safety mechanisms into AI systems. The project, called Safeguarded AI, aims to build an AI system that can check whether other AI systems deployed in critical areas are safe. Bengio is joining the program as scientific director and will provide critical input and advice.
What are they trying to do: Safeguarded AI’s goal is to build AI systems that can offer quantitative guarantees, such as a risk score, about their effect on the real world. The project aims to build AI safety mechanisms by combining scientific world models, which are essentially simulations of the world, with mathematical proofs. These proofs would include explanations of the AI’s work, and humans would be tasked with verifying whether the AI model’s safety checks are correct. Read more from me here.
Bits and Bytes
Google DeepMind trained a robot to beat humans at table tennis
Researchers managed to get a robot wielding a 3D-printed paddle to win 13 of 29 full games of competitive table tennis against human opponents of varying abilities. The research represents a small step toward creating robots that can perform useful tasks skillfully and safely in real environments like homes and warehouses, a long-standing goal of the robotics community. (MIT Technology Review)
Are we in an AI bubble? Here’s why it’s complex.
There’s been a lot of debate, and even some alarm, recently about whether AI is ever going to live up to its potential, especially thanks to tech stocks’ recent nosedive. This nuanced piece explains why, although the sector faces significant challenges, it’s far too soon to write off AI’s transformative potential. (Platformer)
How Microsoft spread its bets beyond OpenAI
Microsoft and OpenAI have one of the most successful partnerships in AI. But following OpenAI’s boardroom drama last year, the tech giant and its CEO Satya Nadella have been working on a strategy that will make Microsoft more independent of Sam Altman’s startup. Microsoft has diversified its investments and partnerships in generative AI, built its own smaller, cheaper models, and hired aggressively to develop its consumer AI efforts. (Financial Times)
Humane’s daily returns are outpacing sales
Oof. The extremely hyped AI pin, which was billed as a wearable AI assistant, seems to have flopped. Between May and August, more AI Pins were returned than purchased. Infuriatingly, the company has no way to reuse the returned pins, so they become e-waste. (The Verge)