Original Title: What do I think about biometric proof of personhood?
Original Author: Vitalik
Original Translation: Qianwen, bayemon.eth, ChainCatcher
Special thanks to the Worldcoin team, Proof of Humanity community, and Andrew Miller for their discussions.
The Ethereum community has long been trying to build a decentralized proof-of-personhood solution, a tricky but potentially very valuable tool. Proof of personhood, also known as the "unique-human problem", is a limited form of real-world identity that asserts that a given registered account is controlled by a real person (and a different real person from every other registered account), ideally without revealing which real person it is.
There have been several attempts to tackle this problem, such as Proof of Humanity, BrightID, Idena, and Circles. Some of them come with their own applications (usually a UBI token), and some are used in Gitcoin Passport to verify which accounts are valid for quadratic voting. Zero-knowledge technology like Sismo adds privacy to many of these solutions. Recently, we have seen the rise of a much larger and more ambitious proof-of-personhood project: Worldcoin.
Worldcoin was co-founded by Sam Altman, best known as the CEO of OpenAI. The philosophy behind the project is simple: AI is going to create a lot of wealth for humanity, but it may also put many people out of work and make it almost impossible to tell who is a human and who is a bot, so we need to plug this hole in two ways:
(i) Create a really good proof-of-personhood system so that humans can prove that they are indeed human;
(ii) Give everyone a universal basic income (UBI).
What sets Worldcoin apart is its reliance on highly sophisticated biometrics, using a piece of specialized hardware called the Orb to scan each user's iris. The goal is to produce Orbs in large numbers and distribute them widely around the world, placing them in public locations so that anyone can easily get an ID. To Worldcoin's credit, it has also committed to decentralizing over time. At first this means technical decentralization: being an L2 on Ethereum built with the Optimism stack, and protecting user privacy with ZK-SNARKs and other cryptographic techniques. Later, it includes decentralizing the governance of the system itself.
Worldcoin has been criticized for privacy and security issues around the Orb, for design problems with its token, and for ethical choices the company has made. Some of these criticisms are very specific, focusing on decisions that could easily have been made differently, and that the Worldcoin project itself may well be willing to change. Others raise more fundamental questions, such as whether biometrics (not just Worldcoin's eye-scanning, but also the simpler face-video uploads and not-a-bot verification games used in projects like Proof of Humanity and Idena) are a good idea at all. Still others question all forms of proof of personhood, citing risks including unavoidable privacy leaks, further erosion of people's ability to browse the internet anonymously under oppressive governments, and the potential impossibility of being secure and decentralized at the same time.
This article discusses these issues and should help you decide whether bowing down before the Orb and letting your eyes (or face, voice, etc.) be scanned is a good idea, and whether the possible alternatives, such as social-graph-based proof of personhood or abandoning proof of personhood altogether, might be better.
What is proof of personhood and why is it important?
The simplest definition is: it creates a list of public keys, where the system guarantees that each key is controlled by a unique human. In other words, if you are a human, you can put one key on the list, but you cannot put two keys on the list, and if you are a bot, you cannot put any keys on the list.
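To make this definition concrete, here is a minimal sketch of the interface such a registry might expose. This is illustrative Python with made-up names, not the API of any real project; the hard part, of course, is the uniqueness check itself:

```python
# Minimal sketch of a proof-of-personhood registry interface.
# All names here are illustrative, not from any real protocol.

class PersonhoodRegistry:
    def __init__(self):
        self.keys = set()  # one public key per verified unique human

    def register(self, public_key: str, uniqueness_proof: bytes) -> bool:
        """Add a key if the proof shows a unique human not yet registered.

        How `uniqueness_proof` is produced (biometric scan, social-graph
        vouching, etc.) is exactly what the rest of this article is about.
        """
        if not self.verify_unique_human(uniqueness_proof):
            return False
        self.keys.add(public_key)
        return True

    def is_person(self, public_key: str) -> bool:
        """Anyone can check that a key belongs to some unique human,
        without learning which human it is."""
        return public_key in self.keys

    def verify_unique_human(self, proof: bytes) -> bool:
        raise NotImplementedError  # the hard part
```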
Proof of personhood is valuable because it tackles many of the anti-fraud and anti-power-concentration problems people face, while avoiding dependence on centralized authorities and revealing as little information as possible. If the proof-of-personhood problem is not solved, decentralized governance (including micro-governance, such as votes on social media posts) becomes much easier to capture by very wealthy actors, including hostile governments. Many services would only be able to prevent denial-of-service attacks by charging for access, and sometimes a price high enough to keep out attackers is also too high for many low-income legitimate users.
Many major applications in the world today solve this problem using government-backed identity systems such as credit cards and passports. This solves the problem, but it makes large, and perhaps unacceptable, sacrifices on privacy, and it can be trivially attacked by the governments themselves.
The dual risks we face, as seen by proof-of-personhood supporters.
In many proof-of-personhood projects, not only Worldcoin but also others (Circles, BrightID, Idena), the flagship application is a built-in "N tokens per person" token (sometimes called a UBI token). Each user registered in the system receives a fixed quantity of tokens per day (or per hour, or per week). But there are many other applications:
Token distribution through airdrops
Token or NFT sales, providing more favorable conditions for less affluent users
Voting in DAOs
Approaches to developing a reputation system based on a graph
Quadratic voting (and quadratic funding and attention payments)
Preventing bot/fake accounts attacks on social media
Alternative approaches to CAPTCHA to prevent DoS attacks
In many of these cases, the common thread is a desire to create mechanisms that are open and democratic, avoiding both centralized control by a project's operators and domination by its wealthiest users. The latter is especially important in decentralized governance. In many of these cases, existing solutions rely on a combination of:
(1) Highly opaque AI algorithms, with lots of room to subtly discriminate against users whom the operators dislike;
(2) Centralized identity verification, also known as KYC.
An effective proof-of-personhood solution would be a much better alternative, achieving the security properties these applications need without the drawbacks of the existing centralized approaches.
What are some early attempts at proof of personhood?
Proof of personhood comes primarily in two forms: social-graph-based and biometric.
Social-graph-based proof of personhood relies on some form of vouching: if Alice, Bob, Charlie, and David are all verified humans and they all say that Emily is a verified human, then Emily is probably also a verified human. Vouching is usually reinforced with incentives: if Alice vouches that Emily is human and it turns out she is not, both Alice and Emily may be penalized. Biometric proof of personhood involves verifying some physical or behavioral trait of Emily that distinguishes humans from bots (and individual humans from each other). Most projects use a combination of the two approaches.
The workings of the four systems I mentioned at the beginning of the article are roughly as follows:
Proof of Humanity: you upload a video of yourself and provide a deposit. To be approved, an existing user must vouch for you, and a set amount of time must pass during which you can be challenged. If there is a challenge, the decentralized court Kleros decides whether your video was genuine; if it was not, you lose your deposit and the challenger receives a reward.
BrightID: you participate in video-call "verification parties" with other users, where everyone verifies each other. Higher levels of verification are available through Bitu, a system in which you pass verification if enough other Bitu-verified users vouch for you.
Idena: You play a captcha game at a specific point in time (to prevent multiple participations); the game involves creating and validating captchas, then using those captchas to verify others.
Circles: Existing Circles users vouch for you. The unique aspect of Circles is that it does not attempt to create a globally verifiable ID; instead, it creates a trust graph where someone's credibility can only be verified based on your own position in the graph.
Each Worldcoin user installs an application on their phone, which generates a private and public key, similar to an Ethereum wallet. Then, they personally visit an Orb. The user looks into the Orb's camera while showing the QR code generated by the Worldcoin app, containing their public key. The Orb scans the user's eyes and uses complex hardware scanning and machine learning classifiers to verify two things:
1) The user is a real person.
2) The user's iris is different from any other user's iris previously used in the system.
If both scans pass, the Orb signs a message approving the specialized hash value of the user's iris scan. The hash values are uploaded to a database (currently a central server), which will be replaced by a decentralized on-chain system once the hash value mechanism is deemed effective. The system does not store complete iris scan results, only hash values, which are used for uniqueness checks. From that point on, the user has a world ID.
World ID holders can prove they are a unique human by generating a ZK-SNARK proving that they hold the private key corresponding to some public key in the database, without revealing which key that is. Therefore, even if someone re-scans your iris, they cannot see anything about the actions you have taken.
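To illustrate what such a proof asserts, here is a small sketch in the spirit of the Semaphore-style membership proofs that systems like World ID build on. This is plain Python that checks the statement directly; a real ZK-SNARK would prove the same statement without revealing sk or pk. All names are illustrative:

```python
import hashlib

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"|".join(parts)).digest()

# What a Semaphore-style membership proof asserts (illustrative only;
# a real ZK-SNARK proves this statement WITHOUT revealing sk or pk):
#   1. pk = H(sk) for some secret key sk the prover knows;
#   2. pk is a leaf of the registry's Merkle tree (root is public);
#   3. nullifier = H(sk, app_id): the same person re-using the same
#      app produces the same nullifier and can be deduplicated, while
#      different apps see unlinkable nullifiers.

def check_statement(sk: bytes, merkle_leaves: list, app_id: bytes) -> bytes:
    pk = H(sk)
    assert pk in merkle_leaves   # stand-in for a Merkle-path check
    return H(sk, app_id)         # the public, per-app nullifier

leaves = [H(bytes([i])) for i in range(4)]
n1 = check_statement(bytes([2]), leaves, b"voting-app")
n2 = check_statement(bytes([2]), leaves, b"voting-app")
assert n1 == n2   # same person, same app: duplicate detected
```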
What are the main issues with Worldcoin?
The four major risks are:
- Privacy. The registry of iris scans may reveal information. At the very least, if someone else scans your iris, they can check it against the database to determine whether you have a World ID; potentially, iris scans could reveal more than that.
- Accessibility. World IDs will not be reliably accessible unless there are so many Orbs that anyone in the world can easily get to one.
- Decentralization. The Orb is a hardware device, and we have no way to verify that it was constructed correctly or that it has no backdoors. Therefore, even if the software layer is perfect and fully decentralized, the Worldcoin Foundation still has the ability to insert a backdoor into the system, allowing it to create arbitrarily many fake human identities.
- Security. Users' phones may be hacked, users may be coerced into scanning their irises while presenting someone else's public key, and it may be possible to 3D-print "fake people" that pass the iris scan and obtain World IDs.
It is important to distinguish between (i) issues specific to choices Worldcoin made, (ii) issues that any biometric proof of personhood will inevitably have, and (iii) issues that affect proof of personhood in general.
For example, registering for Proof of Humanity means publishing your face on the internet. Joining a BrightID verification party does not quite do that, but it still exposes who you are to many people. And joining Circles publicly exposes your social graph.
In comparison, Worldcoin is much better at protecting privacy. On the other hand, Worldcoin relies on specialized hardware, which introduces the challenge of trusting the Orb's manufacturers to have built the Orbs correctly, a challenge that does not exist in Proof of Humanity, BrightID, or Circles. It is even conceivable that in the future someone other than Worldcoin will build different specialized-hardware solutions that make different trade-offs.
How do biometric proof-of-personhood solutions address privacy concerns?
The most obvious and largest potential privacy leak in any proof-of-personhood system is linking every action a person takes to their real-world identity. That kind of leak is so serious as to be unacceptable, but fortunately zero-knowledge proofs solve the problem easily.
Instead of signing directly with the private key whose corresponding public key is in the database, a user can prove with a ZK-SNARK that they own the private key corresponding to some public key in the database, without revealing which one. This can be done with general tools like Sismo (see here for how it applies to Proof of Humanity), and Worldcoin has its own built-in implementation. It is worth giving crypto-native proof of personhood credit here: these projects actually care about taking this basic anonymizing step, which essentially all centralized identity solutions fail to take.
A more subtle but still important privacy leak is the public registry of biometric scans itself. In Proof of Humanity, this is a lot of data: you get a video of every participant, making it easy for anyone in the world who wants to investigate a participant to do so. In Worldcoin, the leaked data is far more limited: the Orb locally computes and publishes only a hash of each person's iris scan. This hash is not a regular hash like SHA-256; it is produced by a specialized algorithm based on machine-learned Gabor filters, designed to cope with the inherent inexactness of any biometric scan and to ensure that successive hashes of the same person's iris give similar outputs.
Blue: percentage of bits that differ between two scans of the same person's iris. Orange: percentage of bits that differ between scans of two different people's irises.
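To see how such fuzzy matching works mechanically, here is a toy Python sketch. This is not Worldcoin's actual Gabor-filter algorithm; it just illustrates comparing iris codes by Hamming distance and flagging a noisy re-scan of a registered iris as a duplicate. The bit counts and threshold are made up for the example:

```python
import random

NBITS = 256        # illustrative iris-code length
THRESHOLD = 0.35   # below this fraction of differing bits: same iris

def hamming_fraction(a: int, b: int) -> float:
    """Fraction of bits that differ between two iris codes."""
    return bin(a ^ b).count("1") / NBITS

def is_duplicate(new_code: int, registered: list) -> bool:
    return any(hamming_fraction(new_code, c) < THRESHOLD
               for c in registered)

random.seed(0)
iris = random.getrandbits(NBITS)

# A noisy re-scan of the same iris: flip at most 20 bits (~8%).
flips = 0
for _ in range(20):
    flips |= 1 << random.randrange(NBITS)
rescan = iris ^ flips

# An unrelated iris differs in roughly half of its bits.
stranger = random.getrandbits(NBITS)

assert is_duplicate(rescan, [iris])        # same person: rejected as duplicate
assert not is_duplicate(stranger, [iris])  # different person: accepted
```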
These iris hash values only leak a small amount of data. If an adversary can forcefully (or secretly) scan your iris, then they can calculate your iris hash value themselves and compare it with the iris hash value database to determine if you are participating in the system. This functionality of checking if someone is registered is necessary for the system itself to prevent people from registering multiple times, but it is always possible to abuse this functionality.
In addition, iris hashes have the potential to leak a certain amount of data (sex, ethnicity, and possibly medical conditions), though such leakage is much smaller than for almost any other mass data-collection system in use today (e.g. even street cameras).
All in all, in my opinion, storing iris hash values is sufficient to protect privacy. If others disagree with this judgment and decide to design a system with stronger privacy, there are two ways to do so:
1) If the iris hashing algorithm can be improved so that the difference between two scans of the same person is much lower (e.g. reliably below 10% of bits flipped), then instead of storing full iris hashes, the system can store a smaller number of error-correction bits (see: fuzzy extractors; a toy sketch follows this list). If the difference between two scans is below 10%, the number of bits that must be published shrinks by at least 5x.
2) If we want to go further, the iris hash database can be stored in a multi-party computation (MPC) system accessible only by Orb (with rate limitations), making the data completely inaccessible, but at the cost of significant protocol and sociological complexities in managing the participants of multi-party computation. The benefit of doing this is that even if users wish, they cannot prove the linkage between two different World IDs they own at different times.
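The following toy sketch shows the code-offset idea behind the fuzzy extractors mentioned in option (1). It uses a 3x repetition code purely for illustration (real constructions use stronger codes such as BCH): the published helper data lets a later noisy scan recover the exact original bits, while revealing fewer usable bits than publishing the full iris hash would:

```python
import secrets

# Toy code-offset "secure sketch", a building block of fuzzy extractors.
# The 3x repetition code below corrects one flipped bit per triple.

def rep3_encode(bits):            # each data bit -> 3 identical bits
    return [b for bit in bits for b in (bit, bit, bit)]

def rep3_decode(bits):            # majority vote over each triple
    return [int(sum(bits[i:i+3]) >= 2) for i in range(0, len(bits), 3)]

def make_sketch(iris_bits):
    """Publish helper data = iris XOR a random codeword."""
    data = [secrets.randbelow(2) for _ in range(len(iris_bits) // 3)]
    codeword = rep3_encode(data)
    return [i ^ c for i, c in zip(iris_bits, codeword)]

def recover(noisy_bits, helper):
    """A later noisy scan plus the helper data recovers the exact iris."""
    noisy_codeword = [n ^ h for n, h in zip(noisy_bits, helper)]
    data = rep3_decode(noisy_codeword)       # error-corrected
    return [c ^ h for c, h in zip(rep3_encode(data), helper)]

iris = [secrets.randbelow(2) for _ in range(30)]
helper = make_sketch(iris)
noisy = iris[:]
noisy[4] ^= 1
noisy[17] ^= 1                                # two bits flipped by scan noise
assert recover(noisy, helper) == iris
```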
Unfortunately, these techniques do not carry over to Proof of Humanity, because Proof of Humanity requires the full video of each participant to be publicly available, so that it can be challenged if there are signs it is fake (including AI-generated fakes) and investigated in more detail in such cases.
On the whole, despite the dystopian vibe of an Orb staring deep into your eyeball, specialized-hardware systems seem able to do a decent job of protecting privacy. The flip side, however, is that specialized-hardware systems introduce much bigger centralization problems. So we seem to be stuck in a dilemma, trading off one set of our values against another.
What are the accessibility issues with biometric proof-of-personhood systems?
Specialized hardware introduces accessibility problems because, well, specialized hardware is not very accessible. Somewhere between 51% and 64% of people in Sub-Saharan Africa have smartphones today, a share projected to grow to 87% by 2030. But while there are billions of smartphones in the world, there are only a few hundred Orbs. Even with much larger-scale distributed manufacturing, it would be hard to reach a world where there is an Orb within five kilometers of everyone.
It is worth noting that many other forms of identity verification have even more serious accessibility issues. Unless you already know someone in the social graph, it is difficult to join a social graph-based identity verification system. This makes such systems easily limited to a single community in a single country.
Even centralized identity systems have learned this lesson: India's Aadhaar ID system is based on biometrics because that was the only way to quickly onboard its enormous population while avoiding massive fraud from duplicate and fake accounts (leading to huge cost savings). Of course, the Aadhaar system as a whole is far weaker on privacy than anything proposed at scale within the crypto community.
From an accessibility perspective, the best-performing systems are actually ones like Proof of Humanity, where you can sign up using nothing more than a smartphone.
What are the centralization issues with biometric proof-of-personhood systems?
1. Centralization risk in the system's top-level governance (especially the part that makes final high-level decisions when different participants in the system disagree on subjective judgments).
2. Centralization risk unique to systems that use specialized hardware.
3. Centralization risk when proprietary algorithms are used to determine who the authentic participants are.
Any proof-of-personhood system must contend with point (1), except perhaps a system in which the set of "accepted" IDs is fully subjective. If a system uses incentives denominated in outside assets like ETH, USDC, or DAI, it cannot be fully subjective, and governance risk becomes unavoidable.
The second risk is much greater for Worldcoin than for Proof of Humanity (or BrightID), because Worldcoin depends on specialized hardware and the other systems do not.
The third risk chiefly affects "logically centralized" systems, in which a single system does the verification; it applies unless all the algorithms are open source and we have assurance that they are actually running the code they claim to run. For systems that rely purely on users verifying other users (such as Proof of Humanity), it is not a risk.
How does Worldcoin solve the issue of hardware centralization?
Currently, a Worldcoin-affiliated entity called Tools for Humanity is the only organization making Orbs. However, most of the Orb's source code is open: you can see the hardware specifications in this GitHub repository, and other parts of the source code are expected to be published soon. The license is one of those "shared source, but not open source until four years from now" licenses similar to the Uniswap BSL, except that in addition to preventing forks, it also forbids uses they consider unethical: they specifically list mass surveillance and cite three international declarations of human rights.
The team's stated goal is to enable and encourage other organizations to create Orbs, and over time to transition from Orbs created by Tools for Humanity to Orbs created by manufacturers approved and overseen by some kind of DAO.
There are two ways this design could fail:
1) It fails to actually decentralize. This could happen because of the common trap of federated protocols: over time, one manufacturer comes to dominate in practice, re-centralizing the system. Governance could limit how many valid Orbs each manufacturer may produce, but that would need careful management, and it puts heavy pressure on the governance body to be decentralized while also monitoring the ecosystem and responding effectively to threats, a much harder job than, say, a fairly static DAO that only handles top-level dispute resolution.
2) It turns out not to be possible to make such a distributed manufacturing mechanism secure. Here there are two risks:
Fragility against bad Orb manufacturers: if even one Orb manufacturer is malicious or gets hacked, it can generate an unlimited number of fake iris scan hashes and give them World IDs.
Government restriction of Orbs: governments that do not want their citizens participating in the Worldcoin ecosystem can ban Orbs from their country. Worse, they can even force citizens to get their irises scanned, letting the government take possession of their accounts, with citizens having no way to respond.
In order to make the system more resilient against attacks by malicious Orb manufacturers, the Worldcoin team proposes to conduct regular audits of the Orb to verify the correct manufacturing process, whether key hardware components are made according to specifications, and whether there has been tampering afterwards. This is a challenging task: it is basically like the International Atomic Energy Agency (IAEA) conducting nuclear inspections, but focused on Orb. We hope that even if the implementation of the audit system is not perfect, it can greatly reduce the number of fake Orbs.
To prevent any bad Orb that slips through the cracks from doing harm, a second mitigation is needed: World IDs registered by different Orb manufacturers (and ideally, by different individual Orbs) should be distinguishable from one another. It is fine if this information is private and stored only on the World ID holder's device, but it must be provable on demand. That way, the ecosystem can respond to (inevitable) attacks by removing individual Orb manufacturers, or even individual Orbs, from the whitelist. For example, if the North Korean government were found to be going around forcing people to scan their eyeballs, those Orbs, and any accounts produced by them, could be retroactively disabled.
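A minimal sketch of this mitigation, with purely illustrative names; in practice the manufacturer/Orb tag would be held privately on the user's device and proven in zero knowledge, not stored in a plain mapping:

```python
# Sketch: every World ID records which manufacturer (and which Orb)
# registered it, so a compromised manufacturer can be revoked and only
# the accounts it produced get retroactively disabled.

class OrbRegistry:
    def __init__(self):
        self.accounts = {}        # world_id -> (manufacturer, orb_id)
        self.revoked_mfrs = set()
        self.revoked_orbs = set()

    def register(self, world_id, manufacturer, orb_id):
        self.accounts[world_id] = (manufacturer, orb_id)

    def revoke_manufacturer(self, manufacturer):
        self.revoked_mfrs.add(manufacturer)

    def revoke_orb(self, orb_id):
        self.revoked_orbs.add(orb_id)

    def is_valid(self, world_id) -> bool:
        mfr, orb = self.accounts.get(world_id, (None, None))
        return (world_id in self.accounts
                and mfr not in self.revoked_mfrs
                and orb not in self.revoked_orbs)

reg = OrbRegistry()
reg.register("alice", "mfr_A", "orb_1")
reg.register("mallory", "mfr_B", "orb_9")
reg.revoke_manufacturer("mfr_B")   # e.g. caught producing fake IDs
assert reg.is_valid("alice") and not reg.is_valid("mallory")
```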
Security issues commonly found in Proof of Personhood
In addition to the unique issues that the Worldcoin system brings, there are some problems that affect general designs of Proof of Personhood. The main problems I can think of are as follows:
Fake people: people could use AI-generated photographs, or eventually even 3D-printed fake people, convincing enough to be accepted by the Orb software. If even one group does this, they can generate an unlimited number of identities.
Selling IDs: someone can present someone else's public key instead of their own when registering, giving that other person control of the registered ID in exchange for money. This already appears to be happening; in addition to selling, IDs can also be rented.
Hacked phones: If someone's phone is hacked, the hacker can steal the key that controls their World ID.
Government coercion to steal IDs: governments could force their citizens to verify while showing a QR code that belongs to the government, letting a malicious government acquire millions of IDs. In a biometric system, this could even be done covertly: a government could use a tampered Orb to extract World IDs from everyone entering the country at passport control.
The first is unique to biometric proof-of-personhood systems. The second and third are common to both biometric and non-biometric designs. The fourth is also common to both, although the techniques required are quite different in each case; in this section I will focus on the biometric case.
These are quite serious issues, some of which have been properly addressed in existing protocols, some can be eliminated through future improvements, and some seem to be fundamental limitations.
How do we deal with 3D-printed fake people?
For Worldcoin, this risk is much smaller than for systems like Proof of Humanity: an in-person scan can examine many characteristics of a person and is fairly hard to fool. Specialized hardware is harder to fool than ordinary hardware, and ordinary hardware is harder to fool than digital algorithms verifying pictures and videos sent in remotely.
Could something 3D-printed eventually fool even specialized hardware? Probably. I expect that at some point we will see growing tension between openness and security: open-source AI algorithms are inherently more vulnerable to adversarial machine learning. Black-box algorithms are better protected, but it is hard to verify that a black-box algorithm was not trained with hidden backdoors. Perhaps ZK-ML will one day give us the best of both worlds; though from another angle, even the best AI algorithm might eventually be fooled by the best 3D-printed fake person.
How to prevent ID selling?
In the short term, preventing this kind of selling is hard, because most people in the world have not even heard of proof-of-personhood protocols; if you tell them they can get $30 by holding up a QR code and scanning their eyes, they will do it. Once more people understand what proof-of-personhood protocols are, a fairly simple mitigation becomes possible: letting people who already have a registered ID re-register, invalidating the previous ID. That makes selling an ID far less credible, because the seller can simply re-register, canceling the ID they just sold. Getting to this point, however, requires the protocol to be very widely known and Orbs to be very widely accessible, so that on-demand registration is realistic.
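A tiny sketch of that re-registration rule (illustrative names; in reality the lookup would use fuzzy iris-hash matching as shown earlier, not exact string keys):

```python
# Sketch: the iris hash, not the public key, is the unit of uniqueness,
# so a person who sold their ID can scan again and invalidate the old key.

class ReRegisteringRegistry:
    def __init__(self):
        self.key_by_iris = {}     # iris_hash -> currently valid public key

    def register(self, iris_hash, public_key):
        old = self.key_by_iris.get(iris_hash)
        self.key_by_iris[iris_hash] = public_key
        return old                # the now-invalidated key, if any

    def is_valid(self, iris_hash, public_key) -> bool:
        return self.key_by_iris.get(iris_hash) == public_key

reg = ReRegisteringRegistry()
reg.register("iris123", "buyer_key")        # seller registers buyer's key
reg.register("iris123", "sellers_new_key")  # seller quietly re-registers
assert not reg.is_valid("iris123", "buyer_key")  # the sold ID is worthless
```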
This is also one reason why building a UBI coin into a proof-of-personhood system is valuable: the UBI coin provides an easily understandable incentive for people to (i) learn about the protocol and register, and (ii) immediately re-register if they previously registered on behalf of someone else. Re-registration also mitigates the threat of hacked phones.
Can we prevent coercion in biometric proof-of-personhood systems?
That depends on what kind of coercion we are talking about. Possible forms of coercion may include:
Governments scanning people's eyes (or faces, etc.) at border control and other routine government checkpoints, registering them (and frequently re-registering them).
Governments banning Orbs domestically to prevent people from independently re-registering.
Individuals buying IDs and then threatening to harm the seller if they re-register and invalidate the ID that was sold.
Applications (possibly government-run) requiring people to sign in with their public key directly, letting the application see the corresponding biometric scan and hence the link between the user's current ID and any future IDs they obtain by re-registering. A common worry is that this makes it too easy to create permanent records that follow a person for their whole life.
Preventing these situations outright, especially where users are unsophisticated, seems quite hard. Users could leave their country and (re-)register at a safer Orb, but that is a difficult and expensive process. In a truly hostile legal environment, seeking out an independent Orb is too difficult and risky.
What is feasible is to make this kind of abuse harder to carry out and easier to detect. The Proof of Humanity approach of requiring a person to speak a specific phrase when registering is a good example: it may be enough to prevent covert scanning, forcing coercion to be much more blatant, and the registration phrase could even include a statement confirming that the respondent knows they have the right to re-register independently and may receive a UBI coin or other rewards. If coercion is detected, devices used for mass coercive registration could have their access rights revoked. To prevent applications from linking people's current and past IDs and leaving "permanent records", the default proof-of-personhood app could lock the user's key inside trusted hardware, preventing any application from using the key directly without an anonymizing ZK-SNARK layer in between. If a government or application developer wanted to get around this, they would have to mandate the use of their own custom app.
Combining these techniques with vigilance against ID abuse, it seems possible to lock out regimes that are truly hostile and keep honest the regimes that are merely moderately corrupt (as much of the world is). This could be done either by a project like Worldcoin or Proof of Humanity maintaining its own bureaucracy for the task, or by revealing more information about how an ID was registered (e.g. in Worldcoin, which Orb it came from) and leaving the classification work to the community.
How to prevent ID rental (e.g. vote-selling issues)?
Re-registration does not prevent ID rental. In some applications that is fine: the price of renting the right to collect the day's UBI coin will simply be the value of that day's UBI coin. But in applications such as community voting, vote selling is a serious problem.
Systems like MACI can prevent you from credibly selling your vote, by letting you cast a later vote that invalidates the earlier one, in such a way that no one can tell whether you actually did so. However, if the briber controls the key you received at registration time, this does not help.
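A toy version of the MACI idea follows. Encryption, key-change messages, and the ZK proofs that hide message contents while certifying the tally are all omitted; this only shows the "last message wins" rule that makes a vote shown to a briber non-credible:

```python
# Toy MACI-style tally: only the LAST message per registered identity
# counts, so a bribed vote can be silently overridden later and the
# briber has no way to tell. (Real MACI encrypts messages to a
# coordinator and proves the tally in zero knowledge.)

def tally(messages):
    """messages: ordered list of (identity, vote). Later entries win."""
    last_vote = {}
    for identity, vote in messages:
        last_vote[identity] = vote
    counts = {}
    for vote in last_vote.values():
        counts[vote] = counts.get(vote, 0) + 1
    return counts

msgs = [
    ("alice", "A"),   # the vote shown to a vote-buyer...
    ("bob",   "B"),
    ("alice", "B"),   # ...quietly overridden afterwards
]
assert tally(msgs) == {"B": 2}
```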
I think there are two solutions:
Run the whole application inside an MPC: this would also cover the re-registration process. When a person registers with the MPC, the MPC assigns them an ID that is separate from, and unlinkable to, their proof-of-personhood ID, and when a person re-registers, only the MPC knows which account to deactivate. This prevents users from proving facts about their own actions, because every important step happens inside the MPC using private information only the MPC knows.
Decentralized registration ceremonies: essentially, an in-person registration protocol in which, say, four randomly selected local participants must jointly complete the registration. This ensures registration is a trusted process that an attacker cannot snoop on while it happens.
Social-graph-based systems actually perform better here, because they automatically create local, decentralized registration processes as a byproduct of how they work.
Biometric Technology vs Social Graph-based Verification
Apart from biometrics, the other main contender for proof of personhood so far has been social-graph-based verification. Social-graph-based verification systems all rest on the same principle: if a large number of existing verified identities attest to the validity of your identity, then you are probably valid and should also be verified.
If only a few real users (accidentally or maliciously) verify false users, basic graph theory techniques can be used to set an upper limit on the number of false users verified by the system.
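Here is a toy trust-propagation sketch in the spirit of algorithms like SybilRank (not the exact method of any of the projects above) showing the intuition: trust flowing from seed users can only enter a fake region through the few mistaken attestations, which is what bounds how many fakes get accepted:

```python
# Toy trust propagation from trusted seeds. Trust only enters a fake
# region through the few edges honest users mistakenly created, which
# bounds the number of fake users the system accepts.

def propagate_trust(edges, seeds, rounds=10):
    nodes = {n for e in edges for n in e}
    neighbors = {n: [] for n in nodes}
    for a, b in edges:
        neighbors[a].append(b)
        neighbors[b].append(a)
    trust = {n: (1.0 if n in seeds else 0.0) for n in nodes}
    for _ in range(rounds):
        # each node redistributes its trust evenly among its neighbors
        trust = {n: sum(trust[m] / len(neighbors[m]) for m in neighbors[n])
                 for n in nodes}
    return trust

honest = [("seed", "alice"), ("alice", "bob"), ("seed", "bob")]
# one mistaken attestation lets a cluster of fakes attach to the graph:
fakes = [("bob", "fake1"), ("fake1", "fake2"), ("fake1", "fake3")]
scores = propagate_trust(honest + fakes, seeds={"seed"})
print(sorted(scores.items(), key=lambda kv: -kv[1]))
# honest nodes end up with most of the trust; the fake cluster's share
# is capped by its single attack edge, no matter how many fakes it has.
```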
Supporters of social graph-based verification often describe it as a better alternative to biometric technology for the following reasons:
It does not rely on specialized hardware, making it easier to deploy
It avoids a perpetual arms race between manufacturers of 3D-printed fake people and the Orbs that must reject them
It does not require collecting biometric data, which is more privacy-friendly
It may be friendlier to pseudonymity: someone who splits their internet life across multiple independent identities could get each of them verified (though maintaining multiple genuine, independent identities sacrifices network effects and is costly, so it is not something attackers can do easily).
Biometric methods give a binary "is a human / is not a human" score, which is fragile: people who are mistakenly rejected end up with no UBI at all, and potentially unable to participate in online life. Social-graph-based methods can give a more nuanced numerical score, which may of course be mildly unfair to some participants, but is unlikely to "unperson" someone entirely.
I broadly agree with these arguments. They are real advantages of social-graph-based methods and deserve to be taken seriously. However, social-graph-based methods have weaknesses of their own that are worth considering:
Bootstrapping: to join a social-graph-based system, a user must know someone already in the graph. This makes large-scale adoption difficult, and risks entirely excluding regions of the world that get unlucky in the initial bootstrapping process.
Privacy: While methods based on social graphs can avoid collecting biometric data, they tend to reveal a person's social relationship information, which can lead to greater risks. Of course, zero-knowledge technology can mitigate this issue (for example, see the recommendations by Barry Whitehat), but the inherent interdependence in the graph and the need for mathematical analysis of the graph make it challenging to achieve the same level of data hiding as biometric technology.
Inequality: each person can only have one biometric ID, but a socially well-connected person could use their relationships to generate many IDs. The same flexibility that might let a social-graph-based system give multiple pseudonyms to people who genuinely need them (e.g. activists) probably also means that more powerful and better-connected people can acquire more pseudonyms than the less powerful and less connected.
Risk of collapse into centralization: most people do not want to spend time reporting to internet applications who is and is not a real person. Over time, the system would therefore tend to lean on "easy" on-ramps from central authorities, and the social graph of the system's users would effectively degrade into the graph of which countries recognize which people as citizens, giving us centralized KYC with extra steps.
Is proof of personhood compatible with pseudonymity in the real world?
In principle, proof of personhood is compatible with all kinds of pseudonymity. Applications could be designed so that one person with one proof-of-personhood ID can create up to five profiles in the application, leaving room for pseudonymous accounts; one could even use a quadratic formula, so that N accounts cost $N². But will they?
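A minimal sketch of that quadratic pricing rule:

```python
# Quadratic pricing for pseudonyms: N accounts cost $N^2 in total, so a
# handful of pseudonyms stay cheap while hoarding thousands does not.

def total_cost(n: int) -> int:
    return n ** 2

def marginal_cost(n: int) -> int:
    return total_cost(n) - total_cost(n - 1)   # the n-th account costs 2n - 1

assert [marginal_cost(n) for n in range(1, 6)] == [1, 3, 5, 7, 9]
assert total_cost(5) == 25   # five pseudonyms: $25 total
```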
A pessimist, however, might argue that trying to create a more privacy-friendly form of ID and hoping it is adopted in the right way is unrealistic, because the powers that be do not care about ordinary people's privacy and security; if a powerful actor gets a tool that can be used to extract more personal information, they will use it that way. In such a world, the argument goes, the only realistic course is to throw sand in the gears of any identity solution and defend a world of full anonymity and high-trust digital communities.
I see the reasoning behind this approach, but I worry that even if it succeeds, it produces a world in which no one has any way to push back against the concentration of wealth and of governance power, since one person can always pretend to be ten thousand people, and those same points of concentration would be all too easy for the powers that be to capture. Instead, I lean toward a moderate approach: vigorously push for proof-of-personhood solutions with strong privacy, if necessary even including a protocol-level mechanism for registering N accounts at a cost of $N², and thereby create something privacy-friendly and valuable that has a chance of being accepted by the wider world.
So my view is this: there is no single ideal form of proof of personhood. Instead, we have three different paradigms, each with its own unique advantages and disadvantages, as in the comparison chart below:
Ideally, we should treat these three techniques as complementary and combine them all. As India's Aadhaar has shown, specialized-hardware biometrics have the advantage of being secure at scale; they are very weak on decentralization, though this can be mitigated by holding individual Orbs accountable. General-purpose biometrics can be adopted at scale today, but their security is declining fast and may only hold up for another one or two years. Social-graph-based systems can be bootstrapped from a few hundred people socially close to the founding team, but for much of the world they face a constant trade-off between ignoring whole regions outright and being vulnerable to attacks within communities they cannot see into. A social-graph-based system bootstrapped off tens of millions of biometric ID holders, however, could actually work. Biometric bootstrapping may do better in the short term, while social-graph-based techniques may be more robust in the long term and take on a larger share of the responsibility as their algorithms improve.
A feasible hybrid solution
All of these teams are in a position to make many mistakes, and there are inevitable tensions between business interests and the broader needs of the community, so we must stay vigilant. As a community, we can and should push all participants out of their comfort zones on open-sourcing their technology, demand third-party audits and even third-party-written software, and apply other checks and balances. We also need more alternatives within each of the three categories.
At the same time, we should commend the work already done: many of the teams running these systems have shown far more seriousness about privacy than virtually any government- or corporate-run identity system, and that is a quality we should encourage.
Building an effective and reliable proof-of-personhood system, especially one governed by people far from the existing crypto community, seems quite challenging. I do not at all envy the people attempting the task, and it will likely take years to find a workable formula. Still, the concept of proof of personhood in principle seems extremely valuable, and while the various implementations carry risks, so does having no proof of personhood at all: a world without it seems more likely to end up dominated by centralized identity solutions, money, small closed communities, or some combination of all three. I look forward to seeing more progress on all types of proof of personhood, and hope to see the different approaches eventually converge into a coherent whole.
