This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Many people in AI will be familiar with the story of the Mechanical Turk. It was a chess-playing machine built in 1770, and it was so good its opponents were tricked into believing it was supernaturally powerful. In reality, the machine had space for a human to hide in it and control it. The hoax went on for 84 years. That’s three generations! 

History is rich with examples of people trying to breathe life into inanimate objects, and of people selling hacks and tricks as “magic.” But this very human desire to believe in consciousness in machines has never matched up with reality. 

Creating consciousness in artificial intelligence systems is the dream of many technologists. Large language models are the latest example of our quest for clever machines, and some people (contentiously) claim to have seen glimmers of consciousness in conversations with them. The point is: machine consciousness is a hotly debated topic. Plenty of experts say it is doomed to remain science fiction forever, but others argue it’s right around the corner.

For the latest edition of MIT Technology Review, neuroscientist Grace Huckins explores what consciousness research in humans can teach us about AI, and the moral problems that AI consciousness would raise. Read more here.

We don’t fully understand human consciousness, but neuroscientists do have some clues about how it manifests in the brain, Grace writes. To state the obvious, AI systems don’t have brains, so it’s impossible to use traditional methods of measuring brain activity for signs of consciousness. But neuroscientists have several theories about what consciousness in AI systems might look like. Some treat it as a feature of the brain’s “software,” while others tie it more squarely to physical hardware.

There have even been attempts to create tests for AI consciousness. Susan Schneider, director of the Center for the Future Mind at Florida Atlantic University, and Princeton physicist Edwin Turner have developed one, which requires an AI agent to be isolated, before testing, from any information about consciousness it could have picked up during training. This step is important so that it can’t simply parrot human statements about consciousness, as a large language model would.

The tester then asks the AI questions it should only be able to answer if it is itself conscious. Can it understand the plot of the movie Freaky Friday, where a mother and daughter switch bodies, their consciousnesses dissociated from their physical selves? Can it grasp the concept of dreaming—or even report dreaming itself? Can it conceive of reincarnation or an afterlife?

Of course, this test is not foolproof. It requires its subject to be able to use language, so babies and animals—manifestly conscious beings—would not pass the test. And language-based AI models will have been exposed to the concept of consciousness in the vast amount of internet data they have been trained on. 

So how will we really know if an AI system is conscious? A group of neuroscientists, philosophers, and AI researchers, including Turing Award winner Yoshua Bengio, have put out a white paper that proposes practical ways to detect AI consciousness based on a variety of theories from different fields. They propose a sort of report card for different markers, such as flexibly pursuing goals and interacting with an external environment, that would indicate AI consciousness—if the theories hold true. None of today’s systems tick any boxes, and it’s unclear if they ever will. 

Here is what we do know. Large language models are extremely good at predicting what the next word in a sentence should be. They are also very good at making connections between things—sometimes in ways that surprise us and make it easy to believe in the illusion that these computer programs might have sparks of something else. But we know remarkably little about AI language models’ inner workings. Until we know more about exactly how and why these systems come to the conclusions they do, it’s hard to say that the models’ outputs are not just fancy math. 
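
If you want to see what “predicting the next word” actually looks like, here is a minimal sketch. It assumes the open-source Hugging Face transformers library and the small GPT-2 model, chosen purely for illustration; the prompt is my own, and none of the systems discussed above are implied.

```python
# Minimal sketch of next-token prediction (assumes Hugging Face `transformers`
# and the small "gpt2" model; any causal language model works the same way).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The Mechanical Turk was secretly operated by a hidden"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# The model's "guess" about the next word is just a probability
# distribution over its vocabulary.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id)):>12s}  p={prob.item():.3f}")
```

Everything the model “says” is sampled, one token at a time, from distributions like this one, which is exactly why it is so hard to tell sophisticated statistics apart from anything deeper.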

Deeper Learning

How AI could supercharge battery research

We need better batteries if electric vehicles are going to achieve their potential of nudging fossil-fuel-powered cars off the roads. The problem is that there are a million different potential materials, and combinations of materials, we could use to make these batteries. It’s very labor-intensive and expensive to do rounds and rounds of trial and error. 

Enter AI: Startup Aionics is using AI tools to help researchers find better battery chemistries faster. It uses machine learning to sort through the wide range of material options and suggest combinations. Generative AI can also help researchers design new materials more quickly. Read more from Casey Crownhart in her weekly newsletter, The Spark, on the tech that could solve the climate crisis.
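
To make the idea of ML-guided screening a bit more concrete, here is a toy sketch. It is my own illustration with synthetic data, not Aionics’s actual pipeline: a surrogate model is fit on a small set of “measured” formulations, then used to rank a much larger pool of untested candidates so that expensive lab work can focus on the most promising mixes.

```python
# Toy sketch of ML-guided materials screening (illustrative only; synthetic
# data, not a real battery dataset or Aionics's method).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic data: each formulation is a vector of component fractions,
# and the target is a made-up conductivity measurement.
measured_formulations = rng.dirichlet(np.ones(4), size=30)  # 30 "tested" mixes
measured_conductivity = (
    measured_formulations @ np.array([1.0, 0.5, 0.2, 0.8])
    + rng.normal(0, 0.05, size=30)
)

# Fit a surrogate model that maps composition -> predicted conductivity.
surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit(measured_formulations, measured_conductivity)

# Score a much larger pool of untested candidate formulations.
candidate_pool = rng.dirichlet(np.ones(4), size=10_000)
predicted = surrogate.predict(candidate_pool)

# Suggest the top few candidates for real-world testing.
for i in np.argsort(predicted)[::-1][:5]:
    print(f"candidate {i}: composition={np.round(candidate_pool[i], 3)}, "
          f"predicted conductivity={predicted[i]:.3f}")
```

The real systems are far more sophisticated, but the loop is the same: learn from the experiments you have already run, then let the model tell you which experiments to run next.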

Bits and Bytes

Big Tech struggles to turn AI hype into profits
Microsoft has reportedly lost money on one of its first generative AI products. And it’s not alone: the other tech giants are equally struggling to find a way to capitalize on their massive investment in generative AI, which is eye-wateringly expensive to train and run. (The Wall Street Journal)

How AI reduces the world to stereotypes
Rest of World analyzed 3,000 AI-generated images of different countries and cultures, and found they portray the world in a deeply stereotypical way. No surprises there, but this visual piece neatly shows just how deeply ingrained biases are in AI systems. (Rest of World)

Even Google insiders are questioning the usefulness of the Bard chatbot
Glad to know it’s not just me! In leaked messages from an official invite-only Discord chat, Google product managers and designers share their skepticism about the utility of the company’s AI chatbot Bard, considering that the system makes things up. Google insiders seem to think it is best for creative uses, brainstorming, or coding—and even then, it needs lots of supervision. (Bloomberg)

The US is mulling escalating its AI tech blockade on China
Anxious about the prospect of China gaining AI supremacy, the US has been limiting China’s access to the computer chips needed to power AI. It is now considering escalating the blockade by restricting China’s access to a broad category of general-purpose AI programs, not just physical parts. (The Atlantic)

How a billionaire-backed network of AI advisors took over Washington
The little-known Horizon Institute for Public Service, a nonprofit created in 2022, is funding salaries of people working in key Senate offices, agencies, and think tanks. The group is pushing to put the existential risk posed by AI at the top of Washington’s agenda, which could benefit AI companies with ties to the network. (Politico)

Google offers to pay its customers’ legal fees in generative AI lawsuits
Google has joined Microsoft and Getty Images in promising to cough up legal fees if its customers get sued over the outputs of its generative AI models or the training data they use. This is a smart move from Big Tech, as it could help persuade organizations that are hesitating to adopt these companies’ AI tools until there is more legal clarity over copyright and AI. (Google)
