It’s a project that claims to use cryptocurrency to distribute money across the world, though its bigger ambition is to create a global identity system called “World ID” that relies on individuals’ unique biometric data to prove that they are human. It officially launched on July 24 in more than 20 countries, and Sam Altman, the CEO of OpenAI and one of the biggest tech celebrities right now, is one of the project’s cofounders.
The company makes big, idealistic promises: that it can deliver a form of universal basic income through technology to make the world a better and more equitable place, while offering a way to verify your humanity in a digital future filled with nonhuman intelligence, which it calls “proof of personhood.” If you’re thinking this sounds like a potential privacy nightmare, you’re not alone.
Luckily, we have someone I’d consider the Worldcoin expert on staff here at MIT Technology Review. Last year investigative reporter Eileen Guo, with freelancer Adi Renaldi, dug into the company and found that Worldcoin’s operations were far from living up to its lofty goals and that it was collecting sensitive biometric data from many vulnerable people in exchange for cash.
As they wrote:
“Our investigation revealed wide gaps between Worldcoin’s public messaging, which focused on protecting privacy, and what users experienced. We found that the company’s representatives used deceptive marketing practices, collected more personal data than it acknowledged, and failed to obtain meaningful informed consent.”
What’s more, the company was using test users’ sensitive, though anonymized, data to train artificial intelligence models; Eileen and Adi found that individuals did not know their data was being used that way.
I highly recommend you read their investigation—which builds on more than 35 interviews with Worldcoin executives, contractors, and test users recruited primarily in developing countries—to better understand how the company was handling sensitive personal data and how its idealistic rhetoric compared with the realities on the ground.
Given their reporting, it’s no surprise that regulators in at least four countries have already launched investigations into the project, citing concerns about its privacy practices. The company claims it has already scanned nearly 2.2 million “unique humans” into its database, which was built primarily during an extended test period over the last two years.