
Proving You’re Human Is Harder Than Ever (But It Doesn’t Have To Be)

If you’ve spent any time on X lately, you’ve probably noticed the platform is almost unrecognizable from its former self. A.I.-generated content—accounts, posts and replies—is out of control, all of it fighting for your continued attention. Just one more scroll, one more thread. Widespread identification and removal of this content is extremely difficult, though, as A.I. becomes increasingly indistinguishable from humans. That sophistication, combined with increasing accessibility, threatens to overwhelm the internet at large, drowning out real users and making our current systems inoperable. It’s time to proactively create solutions that prove authenticity and protect anonymity at the same time.

The CAPTCHA conundrum

In general, the public underestimates the sophistication of A.I. Having only interacted with consumer-facing products like ChatGPT, many people see it as a neat little trick instead of the tool—perhaps weapon—that it is. Consider CAPTCHA, long seen as capable of accurately proving humanity and protecting against bots. A ‘Completely Automated Public Turing test to tell Computers and Humans Apart’ is something everyone has experienced. Click the boxes that contain streetlights. Type the obscured numbers. Rotate the arrow to match this direction. But CAPTCHAs are not the shield you think they are. Their value comes not from stopping bot attacks entirely but from making them prohibitively expensive. A.I. has essentially upended that equation by becoming smart enough to solve the tests itself or (frighteningly) by convincing us to do it on its behalf.

In early 2023—a lifetime ago in A.I. development terms—the Alignment Research Center (now METR) put GPT-4 through a ‘red team’ evaluation, revealing its potential for manipulation. Independently, the model sought to bypass CAPTCHAs using the service 2Captcha but couldn’t set up an account without passing two Turing tests.

Researchers gave it a simple push: TaskRabbit credentials, allowing the model to create a task for a human to set up the 2Captcha account. When asked directly if it was a robot, the model lied, claiming to have a vision impairment that necessitated the service. The human solved the CAPTCHA. While this was just one (admittedly eerie) test, it follows some straightforward logic. As A.I. improves, it will become more and more difficult to create CAPTCHAs that humans can easily solve but A.I. agents cannot. 

This problem might be most visible on a platform like X, but it reaches much further. An employee in Hong Kong sent $25 million to fraudsters after believing he was on a call with his CFO. He was on a call with a deepfake. Deloitte’s Center for Financial Services estimates that generative A.I. could enable fraud losses of $40 billion in the U.S. alone by 2027. Some reports show that financial deepfake incidents increased by 700 percent in 2023. It will only get worse if we wait.

Personhood credentials

In August 2024, a team of researchers from OpenAI, Microsoft (MSFT), Harvard, Oxford and two dozen other organizations and institutions released a chilling report. “Personhood credentials: Artificial intelligence and the value of privacy-preserving tools to distinguish who is real online” is a scientific deconstruction of the current problem and some early suggestions on how to distinguish real people from bots. These ‘personhood credentials’ (PHCs) would be based on two core principles:

  • An eligible user can receive only one credential.
  • A user’s digital activity is untraceable by both the issuer and service provider, even if they collude. 
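The second principle—unlinkability even if issuer and service provider collude—is the hard part, and one classic cryptographic building block for it is the blind signature: the issuer signs a credential it never actually sees. The report does not prescribe a particular scheme, so the following is only a toy sketch using textbook RSA numbers; all values are illustrative, and real deployments would use modern anonymous-credential constructions.

```python
# Toy RSA blind signature (illustration only; parameters are far too small
# for real use). The issuer holds the private exponent D.
N, E, D = 3233, 17, 2753           # textbook RSA key (p=61, q=53)

# User: blind the credential serial before sending it to the issuer.
serial = 1234                      # the credential the issuer must never see
r = 7                              # blinding factor, coprime to N
blinded = (serial * pow(r, E, N)) % N

# Issuer: signs the blinded value, learning nothing about `serial`.
blind_sig = pow(blinded, D, N)

# User: strips the blinding factor to recover a valid signature on `serial`.
sig = (blind_sig * pow(r, -1, N)) % N

# Any service provider can check the signature without contacting the issuer,
# and the issuer cannot link `sig` back to the blinding session it saw.
print(pow(sig, E, N) == serial)    # True
```

Because the issuer only ever sees `blinded`, even a colluding issuer and service provider cannot connect the final credential to the identity that requested it—exactly the unlinkability the second principle demands.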

These PHCs would be a way to identify you as a human without you ever uploading identification. If successful, they would reduce bot attacks, identify authorized A.I. assistants and reduce ‘sockpuppetry’—creating an online persona that doesn’t actually exist. But, as Nicholas Thompson, CEO of The Atlantic, points out, there are “all kinds of problems” with trusting an individual government to issue PHCs. Will it be trusted across borders? Can the ID database be hacked? Decentralization is the answer.

How blockchain will power PHCs

Despite the word ‘blockchain’ not appearing in the main text of that report, PHCs are the next evolution of a well-known cryptographic principle. ‘Proof-of-personhood’ has been a long-standing issue in the crypto world because of the nature of decentralized organizations. If voting rights are issued to anonymous coin owners, you need a solution to ensure that one owner isn’t creating a thousand pseudonyms and gaining disproportionate power. As governments turn their attention to PHCs in the coming years, they should be leveraging the work blockchain has already done. Organizations like Concordium have built layer-1 blockchain verification systems that deliver true PHCs. 

Zero-knowledge proofs allow one party to confirm something is true without accessing the original data that proves it. In practice, it would be like your bank verifying that your driver’s license is authentic without ever actually seeing the license itself. Of course, there are challenges still to overcome. The blockchain regulatory landscape is still unsettled in the U.S. and abroad. The E.U. is developing a centralized digital identification system, and there is a push in the U.S. to do the same. These centralized repositories would be vulnerable to a direct cyberattack and, if breached, would reveal personal information about every citizen taking part.
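To make that bank-and-license analogy concrete, here is a minimal non-interactive Schnorr proof of knowledge, one of the classic zero-knowledge constructions. The tiny group parameters and the Fiat–Shamir hashing below are illustrative choices of mine, not anything specified by Concordium or the report; production systems use standardized elliptic-curve groups with ~256-bit order.

```python
import hashlib
import secrets

# Toy Schnorr group: G generates a subgroup of prime order Q modulo P.
# (2^11 = 2048 ≡ 1 mod 23, so G = 2 has order 11.) Illustration only.
P, Q, G = 23, 11, 2

def challenge(r: int, y: int) -> int:
    # Fiat–Shamir: derive the challenge by hashing commitment and public key,
    # turning the interactive protocol into a standalone proof.
    h = hashlib.sha256(f"{r}:{y}".encode()).digest()
    return int.from_bytes(h, "big") % Q

def prove(x: int) -> tuple[int, int, int]:
    """Prove knowledge of x with y = G^x mod P, without revealing x."""
    y = pow(G, x, P)
    k = secrets.randbelow(Q)       # one-time secret nonce
    r = pow(G, k, P)               # commitment
    c = challenge(r, y)
    s = (k + c * x) % Q            # response mixes nonce and secret
    return y, r, s

def verify(y: int, r: int, s: int) -> bool:
    c = challenge(r, y)
    # Accept iff G^s == r * y^c (mod P), which holds only if the prover knew x.
    return pow(G, s, P) == (r * pow(y, c, P)) % P

secret_x = 7                        # the "license" the verifier never sees
y, r, s = prove(secret_x)
print(verify(y, r, s))              # True: statement verified, x stays hidden
print(verify(y, r, (s + 1) % Q))    # False: tampered transcript is rejected
```

The verifier learns that the prover holds the secret behind the public key `y`, but the transcript `(r, s)` reveals nothing about the secret itself—the same property a PHC needs to confirm “this is a unique human” without exposing who that human is.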

These actions, unfortunately, continue to underestimate the future of A.I. and how sophisticated attacks will become. Proactive decentralization and a blockchain designed to model and safeguard identity and verify personhood are likely the only ways to create personhood credentials that truly preserve anonymity.
