As AI models become better at mimicking human behavior, it’s becoming increasingly difficult to distinguish between real human internet users and sophisticated systems imitating them. 

That’s a real problem when those systems are deployed for nefarious ends like spreading misinformation or conducting fraud, and it makes it a lot harder to trust what you encounter online.

A group of 32 researchers from institutions including OpenAI, Microsoft, MIT, and Harvard have developed a potential solution: a verification concept called ‘personhood credentials’ that proves its holder is a real person without revealing any further information about their identity. The team explored the idea in a non-peer-reviewed paper posted to the arXiv preprint server earlier this month.

Personhood credentials rely on two things AI systems still cannot do: bypass state-of-the-art cryptographic systems, and pass as a person in the offline, real world. 

To request credentials, a human would have to physically visit one of a number of issuers, which could be a government or another kind of trusted organization. There they would be asked to provide evidence that they’re a real human, such as a passport, or to volunteer biometric data. Once approved, they’d receive a single credential to store on their devices, much as users currently store credit and debit cards in smartphones’ wallet apps.

To use these credentials online, a user could present one to a third-party digital service provider, which could then verify it using zero-knowledge proofs, a cryptographic protocol that confirms the holder possesses a valid personhood credential without disclosing any further, unnecessary information.
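To give a flavor of how a zero-knowledge proof works, here is a toy Schnorr-style proof of knowledge in Python: the prover convinces a verifier that it knows a secret key tied to a public commitment, without ever revealing the key itself. This is a minimal sketch for illustration only; the group parameters, function names, and protocol shape are assumptions, not the scheme proposed in the paper, and a real credential system would use vetted cryptographic libraries and standardized proof systems.

```python
# Toy Schnorr-style zero-knowledge proof of knowledge (illustration only).
# All parameters here are hypothetical and far too small for real security.
import secrets

# Tiny demo group: p = 2q + 1 with q prime; g generates the order-q subgroup.
p, q, g = 23, 11, 4

def keygen():
    x = secrets.randbelow(q - 1) + 1          # secret credential key
    y = pow(g, x, p)                          # public commitment to x
    return x, y

def prove(x):
    """Prover: commit to a random nonce, then answer a challenge."""
    r = secrets.randbelow(q - 1) + 1
    t = pow(g, r, p)                          # commitment
    c = secrets.randbelow(q)                  # challenge (verifier-chosen in practice)
    s = (r + c * x) % q                       # response
    return t, c, s

def verify(y, t, c, s):
    """Verifier learns that the prover knows x, but never learns x itself."""
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x, y = keygen()
t, c, s = prove(x)
print(verify(y, t, c, s))   # True: proof accepted without revealing x
```

The check works because g^s = g^(r + c·x) = t · y^c (mod p), which only someone who knows x can compute; the transcript itself leaks nothing about x.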

The ability to filter out non-verified accounts on a platform could allow people to choose not to see anything on social media that hasn’t definitely been posted by a human, or to filter out Tinder matches that don’t come with personhood credentials, for example. 


The authors want to encourage governments, companies, and standards bodies to consider adopting personhood credentials in the future to prevent AI deception from ballooning out of our control. 

“AI is everywhere. There will be many issues, many problems, and many solutions,” says Tobin South, a PhD student at MIT who worked on the project. “Our goal is not to prescribe this to the world, but to open the conversation about why we need this and how it could be done.”

Possible technical options already exist. For example, a network called Idena claims to be the first blockchain proof-of-person system. It works by getting humans to solve puzzles that would prove difficult for bots within a short time frame. The controversial Worldcoin program, which collects users’ biometric data, bills itself as the world’s largest privacy-preserving human identity and financial network. It recently partnered with the Malaysian government to provide proof of humanness online by scanning users’ irises to create a code. Like the personhood credentials concept, each code is protected using cryptography.

However, the project has been criticized for deceptive marketing practices, collecting more personal data than acknowledged, and failing to obtain meaningful consent from users. Regulators in Hong Kong and Spain banned Worldcoin from operating earlier this year, while its operations have been suspended in countries including Brazil, Kenya, and India. 

So there remains a need for fresh solutions. The rapid rise of accessible AI tools has ushered in a dangerous period when internet users are hyper-suspicious about what is and isn’t true online, says Henry Ajder, an expert on AI and deepfakes and adviser to Meta and the UK government. And while ideas for verifying personhood have been around for some time, these credentials feel like one of the most substantive visions of how to push back against encroaching skepticism, he says.


But the biggest challenge the credentials will face is getting enough adoption from platforms, digital services, and governments, which may feel uncomfortable conforming to a standard they don’t control. “For this to work effectively, it would have to be something which is universally adopted,” he says. “In principle the technology is quite compelling, but in practice and the messy world of humans and institutions, I think there would be quite a lot of resistance.”

Martin Tschammer, head of security at startup Synthesia, which creates AI-generated hyperrealistic deepfakes, says he agrees with the principle driving personhood credentials: the need to verify humans online. However, he is unsure whether it’s the right solution or how practical it would be to implement. He also expressed skepticism over who would run such a scheme.  

“We may end up in a world in which we centralize even more power and concentrate decision-making over our digital lives, giving large internet platforms even more ownership over who can exist online and for what purpose,” he says. “And, given the lackluster performance of some governments in adopting digital services and autocratic tendencies that are on the rise, is it practical or realistic to expect this type of technology to be adopted en masse and in a responsible way by the end of this decade?” 

Rather than waiting for collaboration across industry, Synthesia is currently evaluating how to integrate other personhood-proving mechanisms into its products. He says it already has several measures in place: For example, it requires businesses to prove that they are legitimate registered companies, and will ban and refuse to refund customers found to have broken its rules. 


One thing is clear: we are in urgent need of methods to differentiate humans from bots, and encouraging discussions between tech and policy stakeholders is a step in the right direction, says Emilio Ferrara, a professor of computer science at the University of Southern California, who was not involved in the project. 

“We’re not far from a future where, if things remain unchecked, we’re going to be essentially unable to tell apart interactions that we have online with other humans or some kind of bots. Something has to be done,” he says. “We can’t be naive as previous generations were with technologies.”
