
Discord and Persona: What Happens When Your Verification Vendor Becomes Your Vulnerability

Lorikeet Security Team March 2, 2026 10 min read
Vendor Risk -- Identity Verification

When 269 Verification Checks, Watchlist Screening, and Your Users' Faces Are One Misconfiguration Away from Being Public

On February 16, 2026, security researchers discovered approximately 2,500 uncompressed frontend files publicly accessible on a Google Cloud storage server tied to FedRAMP-authorized infrastructure operated by Persona, the identity verification company Discord used for age verification and identity checks. No exploit was required. No vulnerability was needed. The files were simply there, on the doorstep, for anyone to download.

The exposed files totaled 53 megabytes of data and revealed the full scope of what Persona actually does when it verifies someone's identity. The findings went far beyond what most companies -- and most users -- expected from a service marketed as a simple age verification tool. Within a month, Discord ended its partnership with Persona and delayed its global age verification rollout to the second half of 2026.

TL;DR: Persona, Discord's identity verification vendor, had thousands of frontend files publicly exposed on a Google Cloud server. The files revealed that Persona performs 269 distinct verification checks including facial recognition against watchlists, politically exposed persons screening, and adverse media scanning. This was the second major vendor incident for Discord in four months. The fallout raises serious questions about third-party vendor risk in identity verification.


What Happened

The discovery was almost mundane in its simplicity. Researchers found the files on a Google Cloud storage server associated with Persona's FedRAMP-authorized infrastructure. The files were uncompressed frontend source code, configuration files, and build artifacts -- the kind of files that should never be publicly accessible but frequently are when cloud storage permissions are misconfigured.

Persona's CEO, Rick Song, responded by saying the exposed files were "public frontend info" -- essentially arguing that frontend code is inherently public because it runs in users' browsers. While technically true that compiled frontend JavaScript is delivered to clients, the uncompressed source files contained far more information than what a user would see in their browser's network tab. Configuration constants, API endpoint mappings, feature flags, verification workflow definitions, and the complete taxonomy of checks that Persona performs were all laid bare in readable source form.

The researchers' assessment was more blunt: "The entire architecture was just on the doorstep." No credentials were needed. No SQL injection. No zero-day. Just a publicly accessible cloud storage bucket containing the blueprint of a surveillance-grade identity verification system.

Files exposed: approximately 2,500 files on Google Cloud Storage
Discovered: February 16, 2026
Partnership: severed; Discord ended its relationship with Persona
Rollout: delayed; pushed to H2 2026

What the Files Revealed

The exposed source code painted a detailed picture of Persona's verification capabilities. What most companies and users understood as "age verification" or "identity confirmation" turned out to be a comprehensive surveillance and risk-scoring infrastructure.

269 distinct verification checks

The files documented 269 individual verification checks that Persona is capable of performing. These are not 269 variations of "check if this person is over 18." They span a range of intelligence and screening functions (watchlist matching, politically exposed persons screening, adverse media scanning) that would be more at home in a government background check than a social media age gate.

This is not a simple document verification API. This is a comprehensive identity intelligence platform. And it was being used to verify whether teenagers on Discord were old enough to access age-restricted channels.

The proportionality question: When a platform that hosts gaming communities and meme channels is using a verification vendor with 269 checks including terrorism screening and intelligence program integration, the question is not whether the vendor is capable. It is whether the data collection is proportionate to the stated purpose. Age verification does not require facial recognition against watchlists. The gap between what users were told and what was actually happening is significant.

Investor context

Persona has received significant venture capital investment, including from Peter Thiel's Founders Fund. Thiel's investment history includes Palantir Technologies, a company whose core business is government surveillance and intelligence analysis. The connection does not prove anything about Persona's operations, but it adds context to why researchers were particularly interested in examining the scope of Persona's verification capabilities and any potential intelligence community connections suggested by the exposed files.


The Bigger Pattern: Discord's Vendor Problem

The Persona exposure would be concerning on its own. But it becomes a pattern when you consider that this was Discord's second major vendor-related security incident in four months.

October 2025: The 5CA breach

In October 2025, Discord's third-party customer support vendor 5CA suffered a data breach that exposed government-issued identification documents of over 70,000 Discord users. These were IDs that users had submitted to Discord's support team for account recovery, age verification appeals, and other support requests. 5CA was handling these documents on Discord's behalf, and when 5CA was breached, the documents were exposed.

Government IDs are among the most sensitive personal data an individual can have exposed. They include full legal names, dates of birth, addresses, photographs, and document numbers that can be used for identity theft. The 5CA breach meant that users who had trusted Discord with their most sensitive documents had that trust violated through a vendor that most users did not even know existed.

The contradicting statements

After the Persona exposure, inconsistencies emerged in Discord's public communications about how identity data is handled. Discord had publicly stated that "facial scans never leave your device" during the age verification process. However, an archived version of Discord's FAQ page stated that verification data is "temporarily stored for up to 7 days."

These two statements are mutually exclusive. Data cannot simultaneously never leave your device and be stored for up to seven days on a vendor's servers. The contradiction suggests either the privacy assurances were inaccurate, the FAQ was describing a different data flow than the one in practice, or the data handling changed over time and the documentation was not updated consistently. Regardless of the explanation, the inconsistency erodes user trust.

Discord's course correction

Following both incidents, Discord announced that future age verification would be performed entirely on-device, with no biometric or identity data transmitted to third-party servers. The global age verification rollout was delayed from Q1 2026 to H2 2026. The partnership with Persona was terminated.

This is a meaningful architectural change. On-device verification means the sensitive comparison logic runs in the user's browser or app, and only a pass/fail result is transmitted to Discord's servers. If implemented correctly, this eliminates the vendor as a point of data exposure. But it also means Discord is building verification infrastructure internally, which carries its own risks if not done with sufficient security expertise.
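A minimal sketch of that pass/fail principle, assuming a date of birth has already been extracted on-device: the comparison runs locally and only a boolean outcome appears in the outgoing payload. Function and field names are illustrative, not Discord's actual implementation, and a real deployment would also need a signed attestation so the server can trust the client-reported result.

```python
from datetime import date

def is_over_18(date_of_birth: date, today: date) -> bool:
    """Run the age comparison locally; the raw date of birth stays on the device."""
    try:
        eighteenth = date_of_birth.replace(year=date_of_birth.year + 18)
    except ValueError:  # Feb 29 birthday and a non-leap target year
        eighteenth = date_of_birth.replace(year=date_of_birth.year + 18, day=28)
    return today >= eighteenth

def build_result_payload(date_of_birth: date, today: date) -> dict:
    """Only the pass/fail outcome goes over the wire: no DOB, document image, or face scan."""
    return {"check": "age_over_18", "passed": is_over_18(date_of_birth, today)}
```

Note what is absent from the payload: there is no field that could be retained for seven days on a vendor's servers, because nothing sensitive is transmitted in the first place.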


Third-Party Vendor Risk Is a Security Problem

The Discord-Persona situation is not unique. It is a particularly visible example of a problem that affects every company that relies on third-party vendors to process sensitive data. Your security posture is only as strong as your weakest vendor, and most companies do not assess their vendors with the rigor the risk demands.

You inherit your vendor's attack surface

When you share data with a vendor, you are extending your attack surface to include their infrastructure, their employees, their security practices, and their own vendors. Discord's users did not choose to share their data with Persona or 5CA. They shared their data with Discord, and Discord's vendor relationships determined where that data ended up. The users had no visibility into or control over those downstream decisions.

This is a fundamental challenge of third-party risk management. The contractual relationship is between two companies, but the risk is borne by the users whose data flows through the chain. When a vendor is compromised or exposes data, the reputational damage falls on the company the user trusted, not the vendor the user never heard of.

Compliance certifications are not security guarantees

Persona operates FedRAMP-authorized infrastructure. FedRAMP is one of the most rigorous cloud security certification frameworks in existence. And yet, 2,500 files were publicly accessible on that infrastructure. This is not a failure of FedRAMP as a standard. It is a reminder that certifications attest to the existence of controls, not to their continuous effective operation.

A SOC 2 report, an ISO 27001 certificate, or a FedRAMP authorization tells you that a vendor had appropriate controls in place at the time of the audit. It does not tell you that a cloud storage bucket was not subsequently misconfigured, that an employee did not bypass a control, or that a new deployment did not introduce an exposure. Certifications are a starting point for vendor assessment, not an endpoint.

Data minimization is a security control

One of the most effective ways to limit vendor risk is to limit what data your vendors have. If Persona's scope had been limited to the minimum checks required for age verification (document analysis and date-of-birth extraction), the exposure of frontend files would have been less significant. The fact that Persona's platform includes 269 checks, watchlist screening, and intelligence program integration means that any exposure of their systems reveals a far more sensitive capability set.

Data minimization is not just a privacy principle. It is a security control. The less data a vendor processes, the less damage a breach or exposure can cause.
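One way to make data minimization operational is to enforce the contracted check scope in code, before any request reaches the vendor. The capability and scope sets below are illustrative stand-ins, not Persona's actual check identifiers:

```python
# Everything the vendor platform *can* run (illustrative names)...
VENDOR_CAPABILITIES = {
    "document_authenticity", "dob_extraction", "face_match",
    "watchlist_screening", "pep_screening", "adverse_media",
}
# ...versus the minimal set an age-verification contract actually needs.
AGE_VERIFICATION_SCOPE = {"document_authenticity", "dob_extraction"}

def build_verification_request(requested_checks: set[str]) -> set[str]:
    """Reject any check outside the contracted scope instead of silently running it."""
    out_of_scope = requested_checks - AGE_VERIFICATION_SCOPE
    if out_of_scope:
        # Fail closed: scope creep should be a visible error, not a default.
        raise ValueError(f"Checks exceed contracted scope: {sorted(out_of_scope)}")
    return requested_checks & VENDOR_CAPABILITIES
```

The design choice worth noting is failing closed: if someone adds watchlist screening to an age-verification flow, the integration breaks loudly rather than quietly expanding the data the vendor holds.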


What Companies Should Learn from This

Whether you use identity verification vendors, payment processors, customer support outsourcers, or any other third-party service that handles sensitive data, the Discord-Persona situation offers concrete lessons.

Assess vendors like you assess your own infrastructure

Most vendor assessments consist of reviewing a SOC 2 report and a security questionnaire. That is insufficient. A meaningful vendor assessment also examines the vendor's external attack surface, the actual data flows through their systems, their incident history, and whether their marketing claims match their architecture.

Monitor your vendors continuously

A point-in-time assessment tells you what the vendor looked like on assessment day. Continuous monitoring tells you when something changes. The Persona files may have been exposed for days, weeks, or months before researchers found them. If Discord had been monitoring Persona's external attack surface, they might have detected the exposure before it became a public incident.

Continuous attack surface monitoring is not just for your own infrastructure. It should extend to the vendors who process your most sensitive data. At minimum, monitor their DNS, SSL certificates, exposed services, and cloud storage configurations.
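A small example of one such signal, using only Python's standard library: pull a vendor host's TLS certificate and compute how many days remain until it expires. The hostname is hypothetical; a real monitoring setup would run checks like this on a schedule across all vendor-facing endpoints.

```python
import ssl
import socket
from datetime import datetime, timezone

def days_until_expiry(not_after: str) -> int:
    """Parse the 'notAfter' string from ssl.getpeercert(), e.g. 'Jun  1 12:00:00 2026 GMT'."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z").replace(tzinfo=timezone.utc)
    return (expires - datetime.now(timezone.utc)).days

def check_vendor_cert(host: str, port: int = 443) -> int:
    """Fetch a host's TLS certificate and return days until expiry."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return days_until_expiry(cert["notAfter"])

# Hypothetical usage:
# remaining = check_vendor_cert("verify.vendor.example")
```

Certificate expiry is the simplest signal in the list; the same scheduled-check pattern extends to DNS records, newly exposed ports, and anonymous cloud storage access.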

Verify vendor claims independently

Discord's contradicting statements about facial scan data ("never leaves your device" vs. "stored for up to 7 days") illustrate why you cannot rely solely on vendor marketing or FAQ pages to understand data handling practices. Independent verification means testing those claims yourself: observe actual network traffic during a verification session, compare current documentation against archived versions, and confirm that the data flows you were promised match what the client actually transmits.
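One such check can be automated. Capture a verification session (for example, export a HAR file from browser dev tools), then scan the outgoing request bodies for field names that would contradict a "nothing leaves your device" claim. The field list below is illustrative, not a complete taxonomy:

```python
import json

# Field names that would contradict an on-device-only claim (illustrative list).
SENSITIVE_FIELDS = {"face_image", "selfie", "document_photo",
                    "date_of_birth", "biometric_template"}

def flag_sensitive_fields(har_text: str) -> list[tuple[str, str]]:
    """Scan a HAR capture's request bodies for sensitive-looking JSON field names."""
    findings = []
    har = json.loads(har_text)
    for entry in har.get("log", {}).get("entries", []):
        request = entry.get("request", {})
        url = request.get("url", "")
        body = request.get("postData", {}).get("text", "")
        try:
            payload = json.loads(body) if body else {}
        except json.JSONDecodeError:
            continue  # non-JSON body; a fuller tool would inspect multipart uploads too
        if not isinstance(payload, dict):
            continue
        for field in payload:
            if field in SENSITIVE_FIELDS:
                findings.append((url, field))
    return findings
```

An empty result does not prove the claim (data can be encoded or uploaded as multipart form data), but a non-empty one disproves it immediately, which is exactly the kind of evidence the contradicting FAQ statements called for.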

Have a vendor exit plan

Discord was able to terminate the Persona partnership because identity verification was a separable component of their platform. Not all vendor relationships are this clean. If your vendor is deeply integrated into your core product, replacing them on short notice may be impractical. For critical vendors, maintain a documented exit plan that includes alternative vendors, data migration procedures, and timeline estimates.


How to Evaluate Identity Verification Vendors

If your company is evaluating identity verification vendors -- or re-evaluating an existing one in light of the Persona incident -- here is a checklist of questions that go beyond the standard security questionnaire.

Data Scope: What data do you collect beyond what is strictly necessary for the verification type we are purchasing? Do you perform checks (watchlist screening, PEP, adverse media) that are not part of our contract?

Data Retention: How long are biometric data, identity documents, and verification results retained? Can we contractually enforce shorter retention periods? Is data retained differently across environments (production vs. staging)?

Data Location: Where is verification data processed and stored? Is any data processed on-device vs. server-side? Are sub-processors involved, and if so, who are they?

Infrastructure Security: What is your external attack surface? Have you had a third-party penetration test in the last 12 months? Can we see the results? What cloud storage access controls are in place?

Incident History: Have you experienced any security incidents, data exposures, or breaches in the past 3 years? What changes were implemented afterward? What is your breach notification timeline?

Compliance: What certifications do you hold (SOC 2, ISO 27001, FedRAMP)? When was the last audit? Can we review the full report, not just the summary? How do you handle GDPR data subject requests?

Architecture: Is on-device processing available? Can verification be performed without transmitting biometric data to your servers? What is the architecture of data flows during a verification session?

The goal is not to find a perfect vendor. The goal is to understand exactly what you are signing up for, ensure the scope is proportionate to your needs, and verify that the vendor's security posture matches the sensitivity of the data they will handle.

The broader principle: Every third-party vendor that touches your users' data is an extension of your security program. The Persona incident did not damage Persona's relationship with its users -- it damaged Discord's. When your vendor fails, your users blame you. Assess, monitor, and verify accordingly.

Concerned about your third-party vendor risk?

Lorikeet Security helps companies assess their vendor security posture, map data flows through third-party integrations, and identify exposures before they become incidents.


Lorikeet Security Team

Penetration Testing & Cybersecurity Consulting

We've completed 170+ security engagements across web apps, APIs, cloud infrastructure, and AI-generated codebases. Everything we publish here comes from patterns we see in real client work.
