Webhooks are everywhere. Stripe sends your application a webhook when a payment completes. GitHub notifies your CI/CD pipeline when code is pushed. Slack delivers events when messages are posted. Twilio fires callbacks when an SMS is received. These HTTP callbacks have become the connective tissue of modern application architectures, linking SaaS platforms, payment processors, and internal services together through a constant stream of automated notifications.

And yet, webhook endpoints are among the most neglected attack surfaces we encounter during penetration tests. Development teams spend weeks hardening their user-facing APIs while leaving webhook receivers wide open. The logic is understandable but wrong: "Only Stripe sends traffic to that endpoint, so why would we worry about it?" The answer is that anyone on the internet can send an HTTP POST to your webhook URL. And if your validation is missing, broken, or bypassable, attackers absolutely will.

This article covers the specific attack techniques we use against webhook implementations during security assessments, drawn from real engagements and public disclosures. We will walk through signature bypasses, SSRF exploitation, replay attacks, information leakage, and flooding, then provide concrete guidance for building webhook receivers that actually hold up.


Why webhooks are a high-value target

Webhooks occupy a unique position in your application architecture. They are HTTP endpoints that accept incoming data from external systems and, critically, act on that data with elevated trust. When your application receives a webhook from Stripe saying a payment succeeded, it provisions access, updates account status, or triggers fulfillment workflows. When it receives a GitHub webhook, it kicks off deployments. The actions triggered by webhooks are often privileged operations that would normally require authentication.

This creates an asymmetry that attackers love. Your user-facing API probably requires authentication tokens, enforces rate limits, validates input schemas, and logs access. Your webhook endpoint, on the other hand, often accepts anonymous POST requests, trusts the payload content, and triggers business-critical actions based on a single HTTP header that serves as the "authentication" mechanism. If that header validation is flawed, the attacker effectively has unauthenticated access to trigger any action the webhook can perform.

The attack surface is also discoverable. Webhook URLs often follow predictable patterns: /webhooks/stripe, /api/webhooks/github, /hooks/slack. Documentation for popular platforms publishes the expected URL patterns. And because webhooks must be publicly accessible (the sending service needs to reach them), they cannot be hidden behind VPNs or IP allowlists in many cases.


Attack 1: Webhook signature bypass

Every major webhook provider includes a signature mechanism. Stripe signs payloads with HMAC-SHA256 using your endpoint's signing secret and includes the signature in the Stripe-Signature header.[1] GitHub uses HMAC-SHA256 delivered in the X-Hub-Signature-256 header.[2] Slack uses its own signing secret scheme with an X-Slack-Signature header.[3] The idea is simple: your application computes the expected signature using the shared secret and the raw request body, then compares it to the signature in the header. If they match, the request is authentic.

The implementation, however, is where things break. Here are the specific bypass techniques we test for during engagements.

No signature verification at all

This is more common than you would expect. Development teams set up the webhook endpoint, get it working in their development environment, and never implement signature verification because "it works without it." The webhook URL itself becomes the only "secret," and since URLs are not secrets, anyone who discovers or guesses the endpoint can send arbitrary webhook payloads. We find this in roughly one out of every three webhook implementations we test. The fix is straightforward, but the vulnerability is critical: an attacker can forge payment confirmations, fake deployment triggers, or inject arbitrary event data into your processing pipeline.

Timing attacks on signature comparison

When signature verification is implemented, the comparison between the expected signature and the provided signature must use a constant-time comparison function. A standard string equality check (== in most languages) returns false as soon as it finds the first mismatched byte. This timing difference is measurable over the network and can be used to determine the correct signature one byte at a time.[4] In PHP, use hash_equals(). In Python, use hmac.compare_digest(). In Node.js, use crypto.timingSafeEqual(). A regular === or strcmp() is not safe for this purpose.
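A minimal sketch of the safe pattern in Python, using the standard library's constant-time comparison (the function name and secret are illustrative, not from any specific provider SDK):

```python
import hashlib
import hmac

def verify_signature(secret: bytes, raw_body: bytes, provided_hex: str) -> bool:
    """Compare an HMAC-SHA256 signature in constant time.

    hmac.compare_digest does not short-circuit on the first mismatched
    byte, so the response time leaks nothing about how much of an
    attacker's guess was correct.
    """
    expected = hmac.new(secret, raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, provided_hex)

# UNSAFE alternative: `expected == provided_hex` returns as soon as one
# byte differs, and that difference is measurable over the network.
```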

Algorithm confusion and downgrade

Some webhook implementations support multiple signature algorithms or multiple signing secret versions. If the application lets the incoming request specify which algorithm or version to use without proper validation, an attacker can downgrade to a weaker algorithm or exploit version rollback behavior. GitHub's webhook signatures historically supported both SHA-1 (X-Hub-Signature) and SHA-256 (X-Hub-Signature-256). Applications that still accept the older SHA-1 signature when the SHA-256 header is absent weaken their verification: SHA-1's collision resistance is practically broken,[5] and while HMAC-SHA1 is not directly defeated by collision attacks, accepting the weaker algorithm hands an attacker a needlessly larger target.

Body parsing before verification

This is a subtle but devastating mistake. The signature is computed over the raw request body. If your application parses the body as JSON before computing the signature, the parsed-and-reserialized body may differ from the original raw bytes. Whitespace changes, key reordering, or unicode normalization during parsing can produce a different byte sequence, causing valid signatures to fail and forcing developers to "work around" the verification, often by disabling it. The correct approach is to read the raw body first, verify the signature against the raw bytes, and only then parse it as JSON.
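A sketch of the correct ordering for a generic HMAC-signed webhook (the secret and the bare-hex header format are hypothetical; real providers each have their own header layout):

```python
import hashlib
import hmac
import json

SIGNING_SECRET = b"whsec_example"  # hypothetical secret, for illustration

def handle_webhook(raw_body: bytes, signature_header: str) -> dict:
    """Verify over the exact raw bytes, then parse.

    Order matters: computing the HMAC over a reserialized body such as
    json.dumps(json.loads(raw_body)) can change whitespace, key order,
    and unicode escapes, and would break verification.
    """
    expected = hmac.new(SIGNING_SECRET, raw_body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature_header):
        raise PermissionError("invalid webhook signature")
    # Only after verification succeeds is it safe to parse the payload.
    return json.loads(raw_body)
```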

This failure mode shows up repeatedly in frameworks whose middleware eagerly parses request bodies before application code runs. The verification layer sees only a reserialized body, the computed hash differs from what the sender actually signed, and verification fails on legitimate traffic. Teams then frequently respond by loosening or disabling the check rather than restructuring the middleware so that the raw body is available first.


Attack 2: SSRF through webhook URLs

Many applications allow users to configure their own webhook URLs. "Enter the URL where you want us to send notifications." This feature turns your application into an HTTP client that will make requests to whatever URL the user provides. That is a textbook Server-Side Request Forgery (SSRF) vector.

An attacker configures a webhook URL pointing to an internal resource: http://169.254.169.254/latest/meta-data/ on AWS, http://metadata.google.internal/computeMetadata/v1/ on GCP, or http://169.254.169.254/metadata/instance on Azure. When the application sends the webhook test event, it makes an HTTP request to the cloud metadata service from within your infrastructure, and the response (containing IAM credentials, instance metadata, or service account tokens) may be reflected back to the attacker in a delivery log, retry response, or error message.[6]

Bypassing URL validation

Teams that are aware of SSRF risks often implement URL validation, but the validation is frequently bypassable. Common bypass techniques include:

Alternate IP encodings. Decimal (http://2130706433/), octal (http://0177.0.0.1/), and hexadecimal (http://0x7f000001/) forms of 127.0.0.1 slip past filters that only match the dotted-quad string. IPv6 forms like [::1] and IPv4-mapped addresses like [::ffff:169.254.169.254] have the same effect.

DNS rebinding. The attacker's domain resolves to a harmless public IP when the application validates it, then to an internal IP when the actual request is made. Any check that resolves the hostname once and makes the request later is vulnerable.

Open redirects. The configured URL points at a public server the attacker controls, which answers with a 302 redirect to an internal address. If the HTTP client follows redirects, validating the original URL accomplishes nothing.

Parser confusion. URLs like http://expected-host@internal-host/ can be read differently by the validation code and the HTTP client, so each component effectively checks a different host.

Mitigating webhook SSRF

Proper SSRF protection for webhook URLs requires multiple layers. Resolve the DNS at request time and check the resulting IP against a denylist of private ranges (RFC 1918, link-local, loopback). Do not follow redirects, or if you must, validate the redirect target with the same checks. Use an allowlist of permitted URL schemes (https only). And critically, run webhook delivery from an isolated network segment that cannot reach internal services or cloud metadata endpoints. AWS VPC endpoints and GCP VPC Service Controls can help enforce this isolation at the network level.
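A minimal sketch of the resolve-then-check layer in Python, using only the standard library. The function name is hypothetical, and this covers just one of the layers above: production code must also pin the resolved IP for the actual request (or re-check on every connection), or DNS rebinding can still win.

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_webhook_url(url: str) -> bool:
    """Reject webhook URLs that resolve to private or internal addresses."""
    parsed = urlparse(url)
    if parsed.scheme != "https":          # allowlist of schemes: https only
        return False
    if parsed.hostname is None:
        return False
    try:
        # Resolve at check time; every returned address must be safe.
        infos = socket.getaddrinfo(parsed.hostname, parsed.port or 443)
    except socket.gaierror:
        return False
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        # Covers RFC 1918 ranges, loopback, link-local (including the
        # 169.254.169.254 metadata address), and other reserved space.
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return False
    return True
```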


Attack 3: Replay attacks

Webhook replay attacks exploit the fact that a valid webhook payload, once captured, can be sent again. If your application processes the same webhook event multiple times, an attacker who intercepts or obtains a legitimate webhook payload (through log files, debug endpoints, network sniffing, or a compromised intermediate system) can replay it to trigger the associated action again.

The impact depends on the action. Replaying a "payment succeeded" webhook could provision duplicate access or credit an account multiple times. Replaying a "user deprovisioned" event from an identity provider could lock a legitimate user out of their account. Replaying a "deployment approved" webhook could trigger an unwanted deployment.

How Stripe handles replay prevention

Stripe's webhook signature includes a timestamp component. The Stripe-Signature header contains both a timestamp (t=) and one or more signatures (v1=). Stripe's official libraries reject events where the timestamp is more than five minutes in the past by default.[1] This means a captured payload becomes invalid after five minutes because the timestamp in the header will be too old. However, this protection only works if your application actually enforces the timestamp check. Applications that verify the HMAC signature but skip the timestamp validation remain vulnerable to replay attacks indefinitely.

Idempotency as a defense

The most robust defense against replay attacks is idempotent webhook processing. Every webhook event from major providers includes a unique event identifier (Stripe uses evt_ prefixed IDs, GitHub includes a X-GitHub-Delivery header with a GUID). Your application should store the event ID after processing and reject any subsequent request with the same ID. This requires a durable store (typically your database) and a uniqueness constraint, but it eliminates the replay attack class entirely regardless of whether timestamp validation is in place.
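A sketch of idempotent processing backed by a uniqueness constraint, here with an in-memory SQLite table standing in for your real database (table and function names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE processed_events (event_id TEXT PRIMARY KEY)")

def process_once(event_id: str) -> bool:
    """Return True if the event is new, False if it is a replay or retry.

    The PRIMARY KEY constraint makes the dedup check atomic at the
    database level: two concurrent deliveries of the same event
    cannot both pass.
    """
    try:
        with conn:
            conn.execute(
                "INSERT INTO processed_events (event_id) VALUES (?)",
                (event_id,),
            )
    except sqlite3.IntegrityError:
        return False  # already seen
    return True
```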

Testing tip: During our assessments, we capture a legitimate webhook delivery, wait until the timestamp tolerance window has passed, and then replay the exact same request. If the application processes it again, replay protection is broken. We also test with modified timestamps but original signatures, which should fail HMAC verification but often does not when the timestamp is treated as metadata rather than as part of the signed payload.


Attack 4: Information disclosure in webhook payloads

Webhook payloads frequently contain more data than necessary, and that excess data creates information disclosure risks in multiple directions.

Outbound information leakage

When your application sends webhooks to customers or partners, the payloads may contain internal data that should not leave your system. We have seen webhook payloads that include internal database IDs, user email addresses for users other than the account owner, full API keys, internal service URLs, debug metadata, and employee names. Development teams serialize entire database objects into webhook payloads because it is easier than selecting specific fields, and nobody reviews what the webhook actually sends.

Inbound payload injection

When your application receives webhooks, the payload content may be stored, displayed, or forwarded without proper sanitization. If a webhook payload contains a customer name field and that name appears in an admin dashboard, an attacker who controls the source data can inject HTML or JavaScript through the webhook payload. This is especially dangerous with platforms that allow user-generated content in webhook fields, as the webhook becomes a cross-site scripting delivery mechanism that bypasses the usual input validation on your own forms.
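The fix is the same as for any untrusted input: escape webhook-sourced fields at render time. A minimal sketch, assuming a hypothetical customer_name field displayed in an HTML dashboard:

```python
import html

def render_customer_name(payload: dict) -> str:
    """Escape a webhook-sourced field before embedding it in HTML.

    The value arrived via a signed webhook, but the signature only proves
    who sent it, not that the content is safe to render.
    """
    name = payload.get("customer_name", "")
    return f"<td>{html.escape(name)}</td>"
```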

Logging and exposure

Webhook payloads are frequently logged in their entirety for debugging purposes. These logs may end up in application log files, centralized logging platforms like Datadog or Splunk, error tracking services like Sentry, or even in version-controlled configuration files. Every one of these is a potential exposure point. Stripe webhook payloads contain the last four digits of card numbers, billing addresses, and email addresses. Slack webhook payloads contain message content. Identity provider webhooks contain user profile data. This information sitting in log files violates the principle of least privilege and, depending on the data, may violate GDPR, HIPAA, or PCI DSS requirements.[7]


Attack 5: Webhook flooding and denial of service

Webhook endpoints are designed to receive external traffic. They must be publicly accessible. And they typically trigger processing work: database writes, API calls, queue entries, or notification dispatches. This makes them an attractive target for denial-of-service attacks.

An attacker does not need a valid signature to flood your webhook endpoint. Even if signature verification rejects every forged request, the verification process itself consumes CPU cycles (HMAC computation is not free), the request handling consumes memory and connection resources, and the logging of failed verification attempts consumes disk I/O. A sustained flood of millions of forged webhook requests can degrade your application's performance even if none of the requests are processed.

Rate limiting webhook endpoints

Rate limiting on webhook endpoints is tricky because you cannot rate-limit the legitimate sender too aggressively. Stripe may send bursts of hundreds of webhooks during a batch operation. GitHub can fire dozens of push events in seconds from a busy repository. The rate limit must be high enough to accommodate legitimate traffic spikes but low enough to prevent abuse. A practical approach is to rate-limit by source IP range (most webhook providers publish their IP ranges) and apply a stricter limit for requests from unknown IPs. Cloudflare, AWS WAF, and similar edge services can handle this without burdening your application server.

Queue-based processing

The most effective defense against webhook flooding is to decouple reception from processing. Your webhook endpoint should do three things: read the raw body, verify the signature, and enqueue the event for asynchronous processing. It should return a 200 response immediately after enqueuing. All actual business logic (database writes, API calls, notifications) happens in a background worker that pulls from the queue at a controlled rate. This architecture means that even a flood of legitimate or signature-verified events cannot overwhelm your application's core processing capacity.
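A minimal in-process sketch of this split, using a thread and queue.Queue as a stand-in for a durable broker (Redis, SQS, RabbitMQ) that production systems would use; all names are illustrative:

```python
import queue
import threading

events: "queue.Queue[bytes | None]" = queue.Queue()
processed: list[bytes] = []

def webhook_endpoint(raw_body: bytes, signature_ok: bool) -> int:
    """Receive, verify, enqueue, return. No business logic here.

    signature_ok stands in for a real HMAC check on the raw body.
    """
    if not signature_ok:
        return 400
    events.put(raw_body)
    return 200  # acknowledged immediately; processing happens elsewhere

def worker() -> None:
    """Background consumer: the only place business logic runs."""
    while True:
        body = events.get()
        if body is None:  # shutdown sentinel
            break
        processed.append(body)  # stand-in for DB writes, API calls, etc.
        events.task_done()
```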


Attack 6: Missing TLS verification

When your application sends outbound webhook requests (delivering notifications to customer-configured URLs), it makes HTTPS connections to external servers. If TLS certificate verification is disabled, an attacker in a network position between your server and the destination can intercept, read, and modify webhook payloads in transit through a man-in-the-middle attack.

This is more common than it should be. Developers disable TLS verification during development (because local test servers use self-signed certificates) and forget to re-enable it. Some HTTP client libraries default to skipping verification. The result is that your application happily sends sensitive webhook payloads to whatever server responds, even if that server presents an invalid, expired, or attacker-controlled certificate.

In Node.js, setting NODE_TLS_REJECT_UNAUTHORIZED=0 as an environment variable disables TLS verification globally for all HTTPS connections. In Python's requests library, passing verify=False has the same effect. In PHP, setting CURLOPT_SSL_VERIFYPEER to false disables certificate checking. These are all common patterns in development that become critical vulnerabilities in production.[8]

During pentests, we check for this by configuring a webhook destination with a self-signed or deliberately invalid certificate. If the webhook is delivered successfully, TLS verification is broken. We also test with expired certificates and certificates issued for the wrong domain. Any of these scenarios being accepted indicates a complete absence of certificate validation.


Real-world webhook security across major platforms

Understanding how major platforms implement webhook security reveals both best practices and common pitfalls.

Stripe

Stripe's webhook implementation is the gold standard. Each endpoint has its own signing secret (whsec_ prefix). Signatures use HMAC-SHA256 computed over a combination of the timestamp and the raw payload body. The Stripe-Signature header includes the timestamp and supports multiple signature versions for key rotation. Stripe's official libraries (stripe-node, stripe-python, stripe-php) include built-in verification functions with configurable timestamp tolerance. The default tolerance is 300 seconds (five minutes).[1] Where teams go wrong is when they parse the body before verification, use a framework middleware that consumes the raw body, or implement custom verification logic instead of using the official library.

GitHub

GitHub signs webhook payloads using a secret you configure per webhook. The signature appears in the X-Hub-Signature-256 header as sha256= followed by the hex-encoded HMAC of the raw body. GitHub's documentation explicitly recommends using constant-time comparison and provides code examples in multiple languages.[2] A common mistake is accepting the legacy X-Hub-Signature header (SHA-1) as a fallback when X-Hub-Signature-256 is unavailable. Since SHA-1 is considered broken for collision resistance, this fallback weakens the security of the verification. GitHub also publishes IP ranges for webhook delivery, which can be used as an additional validation layer but should never be the sole protection: published ranges change over time, and traffic from within a cloud provider's address space does not prove the request came from GitHub.
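A sketch of GitHub-style verification without the SHA-1 fallback (the function name is illustrative):

```python
import hashlib
import hmac

def verify_github_signature(secret: bytes, raw_body: bytes, header: str) -> bool:
    """Verify X-Hub-Signature-256 ("sha256=<hex hmac>" over the raw body).

    There is deliberately no fallback to the legacy SHA-1
    X-Hub-Signature header: if SHA-256 is absent, reject.
    """
    if not header.startswith("sha256="):
        return False
    expected = "sha256=" + hmac.new(secret, raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, header)
```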

Slack

Slack uses a signing secret scheme where the signature is computed over a version string, a timestamp, and the request body concatenated together. The X-Slack-Request-Timestamp header is included in the signed data, providing built-in replay protection when the application enforces a maximum age. Slack recommends rejecting requests with timestamps more than five minutes old.[3] The signing secret is separate from the OAuth token and the verification token (an older, less secure mechanism). Applications that still rely on the deprecated verification token instead of the signing secret are using a shared static string that provides no cryptographic assurance.


Building a secure webhook receiver: the complete checklist

Based on the attacks described above and our experience testing hundreds of webhook implementations, here is the complete set of controls a webhook receiver should implement.

1. Always verify signatures. Use the official SDK from the webhook provider. Never roll your own HMAC verification unless you fully understand the signing scheme, constant-time comparison, and raw body handling.

2. Use constant-time comparison. Never compare signatures with == or strcmp(). Use hash_equals() in PHP, hmac.compare_digest() in Python, or crypto.timingSafeEqual() in Node.js.

3. Verify the raw body, not parsed JSON. Read the request body as raw bytes before any JSON parsing. Compute the HMAC over those exact raw bytes. Parse into JSON only after verification succeeds.

4. Enforce timestamp tolerance. Reject webhook requests where the timestamp is more than five minutes old. This mitigates replay attacks using captured payloads.

5. Implement idempotent processing. Store event IDs and reject duplicates. This is your primary defense against replay attacks and accidental double-processing from provider retries.

6. Validate URL targets for outbound webhooks. If your application sends webhooks to user-configured URLs, validate the resolved IP address against private ranges. Do not follow redirects. Use HTTPS only.

7. Never disable TLS verification. Ensure CURLOPT_SSL_VERIFYPEER (PHP), verify=True (Python), and proper certificate handling (Node.js) are enforced in production.

8. Rate limit the endpoint. Apply rate limiting that is stricter for unknown source IPs. Use the provider's published IP ranges as an allowlist for higher rate limits.

9. Process asynchronously. Acknowledge receipt immediately, then process in a background queue. This protects your application from both flooding and from slow webhook processing blocking your web server.

10. Minimize payload logging. Log event IDs, timestamps, and verification results. Do not log full payload bodies, which may contain PII, payment data, or authentication tokens.


Testing webhooks during penetration tests

When we include webhook endpoints in a penetration test scope, our testing methodology covers each of the attack categories described above. The testing typically follows this sequence.

Discovery. We identify all webhook endpoints through documentation review, JavaScript source analysis, API endpoint enumeration, and path brute-forcing with common webhook URL patterns. Applications often have more webhook endpoints than the team realizes, especially when different services (payment, email, identity, CI/CD) each have their own receiver.

Signature bypass testing. For each endpoint, we send a valid webhook payload with a missing signature header, an empty signature, a malformed signature, and a signature computed with a guessed secret (common values like "test", "secret", the application name, etc.). We also test for timing-based leakage in the verification response.

Replay testing. We capture a legitimate webhook delivery (either through a test trigger feature in the provider's dashboard or by observing traffic), then replay it after the timestamp tolerance window. We also test with modified payloads but the original signature to verify that the entire body is included in the HMAC computation.

SSRF testing. For any feature that allows configuring a webhook URL, we test with internal IP addresses (both direct and encoded), DNS rebinding payloads, redirect chains, and cloud metadata URLs. We observe whether the application makes the request and whether any response data is reflected back.

Payload analysis. We examine the data included in webhook payloads for information disclosure, test for injection (XSS, SQLi) through payload fields that are stored or displayed, and verify that payloads are not logged with sensitive data intact.

This methodology is part of our broader API security testing approach, because webhooks are APIs. They deserve the same rigor as any other endpoint in your application.


Conclusion

Webhooks are deceptively simple. They are just HTTP POST requests. But that simplicity masks a rich attack surface that spans authentication bypass, server-side request forgery, replay exploitation, information disclosure, and denial of service. The controls required to secure webhook implementations are well-understood, but they require deliberate implementation. Signature verification, constant-time comparison, timestamp enforcement, idempotency, SSRF protection, TLS validation, rate limiting, and async processing are all individually straightforward. The challenge is implementing all of them consistently across every webhook endpoint in your application.

If your application relies on webhooks from payment processors, identity providers, CI/CD platforms, or any other external service, those endpoints should be included in your next penetration test scope. They are part of your attack surface, and attackers know it.

Sources

  1. Stripe, "Check the webhook signatures," Stripe Documentation. https://docs.stripe.com/webhooks/signatures
  2. GitHub, "Validating webhook deliveries," GitHub Documentation. https://docs.github.com/en/webhooks/using-webhooks/validating-webhook-deliveries
  3. Slack, "Verifying requests from Slack," Slack API Documentation. https://api.slack.com/authentication/verifying-requests-from-slack
  4. N. Lawson, "Timing attacks on string comparison," NCC Group Research, 2015. https://www.nccgroup.com/us/research-blog/timing-vulnerabilities-with-cbc-padding-in-openssl-and-amazon-s2n/
  5. M. Stevens et al., "The first collision for full SHA-1," CWI Amsterdam, 2017. https://shattered.io/
  6. OWASP, "Server-Side Request Forgery Prevention Cheat Sheet," OWASP Cheat Sheet Series. https://cheatsheetseries.owasp.org/cheatsheets/Server_Side_Request_Forgery_Prevention_Cheat_Sheet.html
  7. PCI Security Standards Council, "PCI DSS v4.0," Requirement 3: Protect Stored Account Data. https://www.pcisecuritystandards.org/document_library/
  8. OWASP, "Transport Layer Security Cheat Sheet," OWASP Cheat Sheet Series. https://cheatsheetseries.owasp.org/cheatsheets/Transport_Layer_Security_Cheat_Sheet.html

Are Your Webhook Endpoints Secure?

Webhooks are part of your attack surface. Our penetration testers know the specific bypass techniques attackers use against Stripe, GitHub, Slack, and custom webhook implementations. Let us test yours.


Lorikeet Security Team

Penetration Testing & Cybersecurity Consulting

We've completed 170+ security engagements across web apps, APIs, cloud infrastructure, and AI-generated codebases. Everything we publish here comes from patterns we see in real client work.