On February 9th, Discord announced that starting in March 2026, every account on the platform will receive "teen-by-default" safety settings. That means restricted DMs, filtered content, and limited server access -- unless you verify your age.
How do you verify? Two options: submit to an on-device facial age scan, or upload your government-issued photo ID to a third-party vendor.
If that second option makes you uncomfortable, you're not alone. And you have very good reason to be.
In October 2025, hackers breached 5CA, a third-party contractor that Discord used for customer support and identity verification through Zendesk. The attackers claimed access to more than 8.4 million support tickets and over 520,000 age-verification tickets.
Approximately 70,000 users had their government ID photos directly exposed.
Think about that for a moment. Discord asked users to upload their most sensitive identifying documents to prove their age. Those documents ended up in a support system run by a contractor, stored on a platform (Zendesk) that became the attack vector. And now those IDs are in the hands of unknown actors.
The reaction from users was predictable and entirely justified. As one user put it on social media: "I will not be uploading my face or ID to a database that I know is not secure enough."
Discord says they've moved to a new vendor. They say the facial age scan happens on-device. They say they don't store the data.
Let's assume all of that is true. There's still a fundamental problem.
Users who are incorrectly categorized -- adults flagged as teens, or teens whose age scan fails -- still need to appeal. And the appeal process? It routes through a support system that looks remarkably similar to the one that just got breached. At some point in the process, a human (or a system) needs to review your identity documents. That means those documents exist somewhere, processed by someone, stored on something.
Every link in that chain is an attack surface.
Discord isn't alone here. Facebook has been pushing government ID verification for years. Instagram requires it for certain features. X uses it for verification tiers. The pattern is clear: major platforms are converging on a model where accessing full functionality requires surrendering your government-issued identity documents.
This is the wrong direction for three reasons.
First: every database of government IDs is a high-value target. The 5CA breach wasn't unusual -- it was inevitable. When you concentrate millions of identity documents in one place (or one vendor's system), you create an irresistible target for attackers.
The value per record is enormous. A stolen government ID isn't like a stolen password. You can't change your face. You can't get a new date of birth. The damage from a government ID breach is permanent and compounding. That data can be used for identity theft, synthetic fraud, account takeovers, and social engineering for years after the initial breach.
Second: third parties multiply the risk. Discord didn't get hacked directly. Their contractor did. Through a customer support platform. This is the reality of third-party identity verification: your data passes through multiple hands, each with their own security posture, their own employees, their own attack surface.
You can audit one vendor. You can't audit every subcontractor, every platform they use, every employee who has access. The chain of custody for your identity documents is longer and more fragile than any platform will admit.
Third: the stated goal is age verification, but government ID upload is a blunt instrument for it. It over-collects data -- your full name, address, ID number, and photo, when all that's needed is confirmation that you're over a certain age. It creates friction that discourages legitimate users while barely slowing down determined bad actors who can obtain fake IDs or use stolen ones.
The system optimizes for the platform's liability concerns, not for user safety or privacy.
Let's be precise about what age verification actually requires: a single yes-or-no answer to one question -- is this person above the threshold age?
That's it. You don't need someone's full name to know they're over 18. You don't need their home address. You don't need a photograph of their face stored on a server somewhere.
This is exactly the kind of problem that privacy-preserving cryptography was built to solve.
With cryptographic verification, a user can prove a specific claim -- "I am over 18" -- without revealing any other personal information. The proof is mathematical. It doesn't require storing documents. It doesn't require third-party vendors handling sensitive data. It doesn't create honeypots.
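The shape of such a system can be sketched in a few lines. This is a deliberately simplified illustration, not not.bot's actual protocol: a trusted issuer checks an ID once, then hands the user a signed claim containing only the boolean "over 18". A real deployment would use zero-knowledge proofs or anonymous credentials; here a plain HMAC stands in for the issuer's signature, and all names are hypothetical.

```python
import hmac
import hashlib
import json
import secrets

# Hypothetical issuer key. In a real system the verifier would check a
# public-key signature or a zero-knowledge proof instead of sharing a secret.
ISSUER_KEY = secrets.token_bytes(32)

def issue_age_claim(verified_over_18: bool) -> dict:
    """Issuer side: after a one-time ID check, emit a minimal signed claim.
    Note what is NOT in the claim: no name, no address, no ID number, no photo."""
    claim = {"over_18": verified_over_18, "nonce": secrets.token_hex(16)}
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def verify_age_claim(token: dict) -> bool:
    """Platform side: check the issuer's tag, then read the single boolean.
    No identity documents are transmitted, reviewed, or stored."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["tag"]) and token["claim"]["over_18"]

token = issue_age_claim(True)
print(verify_age_claim(token))  # True: age confirmed, zero documents retained
```

The point of the sketch is the data flow, not the cryptography: the platform ends up holding a verifiable yes/no answer and nothing else, so there is no document database to breach.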
At not.bot, we use multiparty computation (MPC) to verify that users are real humans without collecting or storing government IDs. The verification happens once, cryptographically, and the proof can be presented to any platform that needs it -- without exposing the underlying identity documents.
No facial scans stored on servers. No government IDs sitting in support ticket systems. No third-party contractors with access to your most sensitive data.
Just mathematical proof that you are who you claim to be.
The Discord situation is a case study in why the "upload your ID everywhere" model is broken. It was broken before the 5CA breach. The breach just made the consequences visible.
As platforms face increasing pressure to verify user ages and identities, they have a choice: continue down the path of centralized ID collection and hope the next breach doesn't happen, or adopt cryptographic approaches that verify claims without collecting the sensitive data in the first place.
Users shouldn't have to choose between full platform access and surrendering their government identity to a chain of third parties with questionable security track records.
There's a better way. The cryptography exists. The tools exist. Not.bot is the answer. Now is the time to adopt them -- before the next breach forces the conversation again.