You just received a penetration test report. It is 60 to 120 pages long. There are CVSS scores, attack chains, proof-of-concept screenshots, and terms like "Insecure Direct Object Reference" and "Server-Side Request Forgery" scattered throughout. If you are an engineer, you need to fix these issues. If you are a security lead, you need to prioritize them. If you are a CTO or VP of Engineering, you need to understand the risk and allocate resources.
The problem is that most pentest reports are written by security researchers for other security researchers. The format assumes familiarity with vulnerability taxonomies, scoring systems, and exploitation techniques that the people responsible for actually fixing the issues may not have.
This guide walks through every section of a pentest report, explains what each element means, and provides a practical framework for turning a stack of findings into an actionable remediation plan. Whether this is your first pentest report or your twentieth, understanding the structure and language will help you get more value from every engagement.
The Anatomy of a Pentest Report
While every security firm structures its reports slightly differently, a quality pentest report contains a predictable set of sections. Here is what you should expect and what to look for in each.
| Section | Audience | What It Tells You |
|---|---|---|
| Executive Summary | Leadership, board, non-technical stakeholders | Overall risk posture, critical issues, and strategic recommendations in plain language |
| Scope & Methodology | Security leads, auditors, compliance teams | What was tested, how it was tested, and what was explicitly excluded |
| Risk Summary | Engineering leads, security teams | Finding distribution by severity, affected systems, and vulnerability categories |
| Detailed Findings | Engineers, developers, DevOps | Specific vulnerabilities with reproduction steps, evidence, and fix guidance |
| Strategic Recommendations | Security leads, CTOs, architects | Systemic issues and architectural improvements beyond individual bug fixes |
| Appendices | Auditors, compliance teams | Raw evidence, tool output, testing credentials used, and methodology details |
Reading the Executive Summary
The executive summary is the most important section for anyone who needs to make decisions about resources, timelines, or risk acceptance. A well-written executive summary tells you three things in two pages or less: how your security posture compares to what was expected, what the most significant risks are in business terms, and what actions you should take first.
What to look for: The overall risk rating or posture assessment. Most firms use a qualitative scale (Critical, High, Moderate, Low) or a letter grade to characterize the overall security posture. This gives you a single data point to compare against previous assessments and to communicate upward.
What to watch out for: Executive summaries that are overly generic. Statements like "the application had several vulnerabilities that should be addressed" tell you nothing. A good executive summary names the most critical findings specifically: "An authentication bypass vulnerability allows any unauthenticated user to access admin functionality, including customer PII export. This should be remediated immediately."
Tip for leadership: If you only read one section of the report, read this one. But do not stop here. The executive summary tells you what is wrong. The detailed findings tell you how to fix it. If you are presenting results to your board, the executive summary is your starting material, but you will need the strategic recommendations section to propose next steps.
Understanding Scope and Methodology
The scope section defines the boundaries of the test. It tells you exactly what was tested and, critically, what was not tested. This matters because a pentest is not a guarantee that your entire infrastructure is secure. It is a statement about the security of the specific systems that were examined using the specific approach that was described.
Scope elements to verify
- Target systems: The URLs, IP addresses, API endpoints, and applications that were tested. Cross-reference this with what you intended to be in scope. If your staging API was tested but your production API was not, the findings may not reflect your real exposure
- Testing perspective: Was this a black-box test (no credentials, simulating an external attacker), gray-box (credentials provided, simulating an authenticated user), or white-box (source code access)? The perspective dramatically affects what the testers could find
- User roles tested: For applications with role-based access, which roles were tested? If only the admin and standard user roles were tested but your application has five roles, three roles were not evaluated for authorization flaws
- Exclusions: What was explicitly out of scope? Denial of service testing, social engineering, physical security, and certain production endpoints are commonly excluded. Knowing the exclusions tells you where you still have blind spots
- Testing dates: The pentest covers a point in time. Code deployed after testing ended was not evaluated. This matters for compliance auditors who need to verify the report is current
Methodology to look for
The methodology section should reference an industry-standard framework. The most common are the OWASP Web Security Testing Guide (WSTG) for web applications, OWASP ASVS for application security verification, PTES (Penetration Testing Execution Standard) for general methodology, and NIST SP 800-115 for technical security testing. If the methodology section is vague or references only proprietary tools, that is a concern. Compliance auditors reviewing your report for SOC 2 or PCI DSS will specifically check that an industry-standard methodology was followed.
Decoding Severity Ratings and CVSS Scores
Every finding in a pentest report has a severity rating. Most firms use the Common Vulnerability Scoring System (CVSS) to calculate a numeric score, which maps to a qualitative severity level. Understanding how these scores work helps you prioritize remediation correctly instead of treating every finding as equally urgent.
| Severity | CVSS Range | What It Means | Remediation SLA |
|---|---|---|---|
| Critical | 9.0 - 10.0 | Easily exploitable, high impact, little or no attacker skill required. Often allows full system compromise, data exfiltration, or RCE | 24 - 72 hours |
| High | 7.0 - 8.9 | Exploitable with moderate effort. Significant impact on confidentiality, integrity, or availability. May require authentication | 1 - 2 weeks |
| Medium | 4.0 - 6.9 | Requires specific conditions, user interaction, or chaining with other findings to exploit. Moderate impact | 30 days |
| Low | 0.1 - 3.9 | Difficult to exploit or minimal impact. Information disclosure, missing best practices, or hardening gaps | 90 days |
| Informational | 0.0 | Not a vulnerability per se. Observations, best practice recommendations, or defense-in-depth suggestions | Next planning cycle |
How CVSS is calculated
CVSS scores are derived from several factors that describe the vulnerability's characteristics. The base score considers:
- Attack Vector (AV): How does the attacker reach the vulnerable component? Network (remote, worst case), Adjacent (same local network), Local (requires local access to the system, such as a shell or logged-in session), or Physical (hands on the device)
- Attack Complexity (AC): Does the attack require special conditions? Low complexity means it works reliably. High complexity means specific configurations or race conditions are needed
- Privileges Required (PR): Does the attacker need credentials? None is worst, Low means a regular user account, High means an admin account
- User Interaction (UI): Does the victim need to do something? None means the attack works without user action. Required means the victim must click a link or visit a page
- Impact (C/I/A): What happens if the attack succeeds? Each of Confidentiality, Integrity, and Availability is rated as None, Low, or High
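The base metrics combine into a numeric score, and for reading reports the useful part is how that score maps to a qualitative label. Here is a minimal sketch of that mapping, using the CVSS ranges from the severity table above:

```python
def severity_from_cvss(score: float) -> str:
    """Map a CVSS v3.x base score to the qualitative label used above."""
    if score == 0.0:
        return "Informational"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

# A CVSS vector string encodes the base metrics described above, e.g.:
# CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H scores 9.8
print(severity_from_cvss(9.8))  # Critical
print(severity_from_cvss(5.3))  # Medium
```

The label is what drives remediation SLAs, so when a report lists only a vector string, this mapping tells you which SLA row applies.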
Important nuance: CVSS scores do not account for your specific business context. A Medium-severity finding that exposes your customer database is more important than a High-severity finding on an internal test server with no real data. Use CVSS as a starting point for prioritization, then adjust based on the business value of the affected system and the sensitivity of the data at risk.
Reading Individual Findings
The detailed findings section is where engineers spend most of their time. Each finding is a self-contained description of a specific vulnerability. Understanding the structure helps you extract what you need quickly.
Finding title and ID
Every finding has a unique identifier (e.g., VULN-001, F-12) and a descriptive title. The title should tell you what the vulnerability is at a glance: "Stored Cross-Site Scripting in User Profile Bio Field" or "Missing Rate Limiting on Password Reset Endpoint." Vague titles like "Security Issue in Application" indicate a low-quality report.
Affected component
This specifies exactly where the vulnerability exists: a URL, an API endpoint, a specific parameter, or a system component. Your engineering team needs this to know where to look in the codebase. Good reports include the file path or controller if the testers had code access.
Description
The description explains what the vulnerability is, why it exists, and what an attacker can do with it. This section bridges the gap between the technical proof-of-concept and the business impact. Read this before jumping to the reproduction steps, because understanding the vulnerability class helps you fix the root cause instead of just patching the specific instance.
Reproduction steps
This is the step-by-step procedure to reproduce the vulnerability. It typically includes HTTP requests (often from a tool like Burp Suite), parameter values, and the exact sequence of actions. Your engineering team should be able to follow these steps to see the vulnerability themselves before writing a fix. If the reproduction steps are unclear or incomplete, ask the testing firm for clarification. You should never have to guess how to reproduce a reported finding.
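To make this concrete, reproduction steps for a hypothetical IDOR finding might read like the following (the host, endpoint, and identifiers here are illustrative, not from any real report):

```
1. Log in as a standard user and capture the session cookie.
2. Send the following request, replacing the order ID with one
   belonging to a different user:

   GET /api/v1/orders/1337 HTTP/1.1
   Host: app.example.com
   Cookie: session=<standard-user-session>

3. Observe that the response returns order 1337, which belongs
   to another user, with no authorization error.
```

If your report's reproduction steps are less specific than this, push back before remediation starts.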
Evidence and screenshots
Screenshots, HTTP request/response pairs, and tool output that prove the vulnerability exists. This evidence serves two purposes: it lets your engineers verify the finding, and it provides documentation for compliance auditors who need to see that testing was thorough.
Business impact
A good finding explains impact in business terms, not just technical terms. "An attacker can steal session cookies" is a technical description. "An attacker can hijack any user's session, access their account data, and perform actions on their behalf including transferring funds" is a business impact statement. The business impact drives prioritization decisions.
Remediation guidance
Specific instructions for fixing the vulnerability. The best reports provide remediation guidance tailored to your technology stack: not just "implement input validation" but "use the DOMPurify library to sanitize user input before rendering in React components, and remove the use of dangerouslySetInnerHTML in ProfileBio.tsx." If the remediation guidance is generic, the testers may not have understood your codebase well enough to provide targeted advice.
Understanding Common Vulnerability Types
Pentest reports use standardized vulnerability names that may not be immediately clear if you are not a security specialist. Here are the categories that appear most frequently, mapped to what they mean in practice.
| Vulnerability Type | What It Means | Real-World Impact |
|---|---|---|
| IDOR | Changing an ID in a request lets you access another user's data | User A can view, modify, or delete User B's records |
| XSS (Stored) | Attacker injects JavaScript that executes when others view a page | Session hijacking, credential theft, defacement |
| XSS (Reflected) | Malicious JavaScript in a URL executes when a victim clicks it | Phishing, session hijacking via crafted links |
| SQL Injection | User input is interpreted as database commands | Full database access, data exfiltration, data modification |
| SSRF | Application can be tricked into making requests to internal systems | Access to internal APIs, cloud metadata, internal network scanning |
| Auth Bypass | Authentication controls can be circumvented | Unauthorized access to accounts or admin functionality |
| Privilege Escalation | A lower-privilege user can perform higher-privilege actions | Regular users accessing admin features, data, or controls |
| CSRF | Attacker tricks a logged-in user into performing unwanted actions | Unauthorized state changes: password resets, transfers, setting modifications |
| Missing Rate Limiting | No throttling on sensitive endpoints | Brute-force attacks on login, OTP, and password reset flows |
| Insecure Deserialization | Application deserializes untrusted data without validation | Remote code execution, denial of service |
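To make one row of this table concrete: SQL injection occurs when user input is concatenated into a query string instead of being passed as a parameter. The sketch below uses Python's built-in sqlite3 with a hypothetical users table to show the vulnerable pattern next to the parameterized fix:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice@example.com')")

user_input = "' OR '1'='1"  # attacker-controlled value

# Vulnerable: input is concatenated into the SQL string, so the
# injected OR clause matches every row in the table.
vulnerable = conn.execute(
    f"SELECT * FROM users WHERE email = '{user_input}'"
).fetchall()
print(len(vulnerable))  # 1 -- the injection matched all rows

# Safe: a parameterized query treats the input as data, not SQL.
safe = conn.execute(
    "SELECT * FROM users WHERE email = ?", (user_input,)
).fetchall()
print(len(safe))  # 0 -- no user has that literal email
```

The same pattern (data passed separately from the query or command structure) is the root-cause fix for most injection-class findings.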
For a deeper look at how these vulnerabilities manifest in real engagements, see our posts on the OWASP Top 10 in practice, authentication bypass techniques, and business logic vulnerabilities.
Reading Attack Chains
Some of the most valuable content in a pentest report is the attack chain analysis, where the tester shows how multiple lower-severity findings combine to create a high-impact attack scenario. This is where manual penetration testing demonstrates value that automated scanners cannot replicate.
A typical attack chain might look like this:
1. Information Disclosure (Low): An API endpoint returns internal user IDs in error messages
2. IDOR (Medium): Using those internal IDs, the attacker accesses another user's profile data including email address
3. Password Reset Flaw (Medium): The password reset flow does not invalidate previous tokens, allowing the attacker to use a leaked token
4. Account Takeover (Critical): The attacker resets the target user's password and gains full access to their account
Individually, steps 1-3 are Low to Medium severity. Chained together, they enable full account takeover, which is Critical. When you see attack chains in your report, pay attention to them. Fixing any single link in the chain breaks the entire attack, so you can strategically choose the cheapest or fastest fix to neutralize the chain while working on comprehensive fixes for each individual finding.
What good looks like: A pentest report that only lists individual findings without showing how they chain together is leaving value on the table. If your report does not include attack chain analysis, ask the testing firm to provide it. The chained scenarios are often the most compelling evidence for getting engineering resources allocated to remediation.
Prioritizing Remediation
You have read the report. Now you need to fix things. With a typical report containing 15 to 40 findings across all severity levels, you cannot fix everything simultaneously. Here is the prioritization framework we recommend.
Tier 1: Fix immediately (Critical + externally exploitable High)
These are findings that an attacker can exploit right now from the internet with minimal effort: SQL injection, authentication bypass, remote code execution, and exposed admin panels with default credentials. If the finding allows an unauthenticated attacker to access data or execute code, it goes in this tier. Target remediation within 24 to 72 hours. If you cannot fix it that quickly, deploy a mitigating control (WAF rule, IP restriction, feature flag to disable the affected functionality) while working on the permanent fix.
Tier 2: Fix this sprint (remaining High + chain-enabling Medium)
High-severity findings that require authentication to exploit, and Medium-severity findings that enable attack chains leading to High or Critical impact. Also include any finding that exposes customer PII, regardless of its CVSS score. Target remediation within 1 to 2 weeks.
Tier 3: Fix this month (Medium)
Medium-severity findings that do not chain into higher-impact scenarios. These are real vulnerabilities but require specific conditions, user interaction, or authenticated access to exploit. Target remediation within 30 days.
Tier 4: Plan and schedule (Low + Informational)
Missing security headers, verbose error messages, information disclosure that does not reveal sensitive data, and hardening recommendations. These are defense-in-depth improvements that matter but are not urgent. Add them to your backlog and address them during regular development sprints. Target within 90 days.
| Tier | Criteria | Target SLA | Example Findings |
|---|---|---|---|
| Tier 1 | Critical severity or externally exploitable High | 24-72 hours | SQL injection, auth bypass, RCE, exposed secrets |
| Tier 2 | Remaining High + chain-enabling Medium + PII exposure | 1-2 weeks | Authenticated IDOR, stored XSS, privilege escalation |
| Tier 3 | Standalone Medium findings | 30 days | CSRF, reflected XSS, missing rate limiting |
| Tier 4 | Low + Informational | 90 days | Missing headers, verbose errors, cookie flags |
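The tiering rules above are mechanical enough to express in code, which is useful if you are triaging findings exported from a tracker. This is a sketch under assumptions: the field names and the flags for external exploitability, chain participation, and PII exposure are illustrative, not a standard schema.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    title: str
    severity: str             # "Critical", "High", "Medium", "Low", "Informational"
    external: bool = False    # exploitable from the internet without authentication
    chain_link: bool = False  # enables a High/Critical attack chain
    exposes_pii: bool = False

def tier(f: Finding) -> int:
    """Apply the four-tier prioritization rules described above."""
    if f.severity == "Critical" or (f.severity == "High" and f.external):
        return 1  # fix immediately
    if f.severity == "High" or (f.severity == "Medium" and f.chain_link) or f.exposes_pii:
        return 2  # fix this sprint
    if f.severity == "Medium":
        return 3  # fix this month
    return 4      # plan and schedule

findings = [
    Finding("SQL injection on /login", "Critical", external=True),
    Finding("IDOR on /api/orders", "Medium", chain_link=True),
    Finding("Reflected XSS in search", "Medium"),
    Finding("Missing HSTS header", "Low"),
]
for f in sorted(findings, key=tier):
    print(tier(f), f.title)
```

Note that the PII rule deliberately overrides CVSS, matching the guidance above that business context trumps the raw score.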
What to Do After You Fix Things
Fixing the vulnerabilities is only half the job. The other half is verifying the fixes work and documenting what was done. Here is the post-remediation process.
Request retesting
After remediating findings, ask your pentest firm to retest the fixed vulnerabilities. A good firm includes retesting in the engagement scope. Retesting confirms that the fix actually addresses the vulnerability and does not introduce new issues. This is especially important for PCI DSS compliance, which explicitly requires retesting of exploitable vulnerabilities.
Fix the root cause, not just the instance
If the report contains three separate IDOR findings across different endpoints, the root cause is probably a missing authorization middleware, not three individual bugs. Fix the middleware once rather than patching three endpoints. The strategic recommendations section of the report should identify these systemic patterns.
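As a sketch of what "fix the middleware once" can look like, the decorator below centralizes the object-ownership check so every handler gets it automatically, rather than each endpoint remembering to check. The framework-free style, data store, and names here are hypothetical, purely to illustrate the pattern.

```python
from functools import wraps

class Forbidden(Exception):
    pass

def require_owner(load_resource):
    """Centralized object-level authorization: every decorated handler
    verifies ownership before running, so one forgotten check in one
    endpoint no longer becomes an IDOR."""
    def decorator(handler):
        @wraps(handler)
        def wrapper(current_user_id, resource_id, *args, **kwargs):
            resource = load_resource(resource_id)
            if resource is None or resource["owner_id"] != current_user_id:
                raise Forbidden(f"user {current_user_id} may not access {resource_id}")
            return handler(current_user_id, resource, *args, **kwargs)
        return wrapper
    return decorator

# Hypothetical data store and endpoint
ORDERS = {42: {"owner_id": 7, "total": 99}}

@require_owner(lambda rid: ORDERS.get(rid))
def get_order(current_user_id, order):
    return order["total"]

print(get_order(7, 42))   # 99 -- the owner can read their own order
try:
    get_order(8, 42)      # another user's order
except Forbidden as e:
    print("blocked:", e)
```

With the check in one place, the retest only needs to confirm the shared path works, instead of re-verifying every endpoint individually.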
Update your risk register
Pentest findings that have been remediated and retested are closed. Findings that you accept as risks (because the fix is too costly relative to the impact, or because compensating controls mitigate the risk adequately) should be documented in your risk register with a justification. Auditors expect to see either remediation evidence or documented risk acceptance for every finding.
Preserve the report for compliance
SOC 2, ISO 27001, PCI DSS, and HIPAA auditors will all ask for your pentest report. Store it securely and know where to find it. Include the original report, the remediation evidence, and the retest results. If you are using a compliance automation platform, upload the report there so it is automatically available for your next audit cycle.
Red Flags in a Pentest Report
Not all pentest reports are created equal. Here are signs that the report you received may not reflect thorough testing.
- Only automated scanner output. If every finding looks like it came from Nessus, Burp Scanner, or OWASP ZAP with no manual validation, you received a vulnerability scan, not a penetration test. Manual findings like business logic flaws, authorization bypass, and attack chains should be present
- No reproduction steps. Every finding should be reproducible. If the report says "XSS was found" without specifying the exact payload, parameter, and URL, your engineers cannot verify or fix it
- Generic remediation advice. "Implement input validation" is not remediation guidance. Your report should tell you what to validate, where, and how, ideally with code examples for your specific tech stack
- No business impact context. A finding that says "CVSS 7.5, High severity" without explaining what an attacker can actually do with it leaves your team guessing about priority
- Suspiciously few findings. A pentest of a production web application that returns only 2-3 informational findings reflects an extremely narrow scope, an extremely short engagement, or testing that was not thorough. Even well-secured applications typically yield findings
- No mention of authorization testing. Broken access control is the number one vulnerability category in web applications. If the report does not mention testing for IDOR, privilege escalation, or function-level access control, a major attack surface was skipped
Sharing the Report with Your Team
A pentest report is a sensitive document that contains detailed instructions for compromising your systems. Handle distribution carefully.
Engineering teams need the detailed findings section for remediation. They do not need the executive summary or strategic recommendations. Share the specific findings assigned to each team, not the full report.
Leadership and board members need the executive summary and strategic recommendations. They do not need reproduction steps. See our guide on explaining pentest results to your board for how to translate technical findings into board-ready communication.
Auditors and compliance teams need the full report including scope, methodology, findings, and remediation evidence. They use it to verify that security testing meets the requirements of the applicable framework.
Customers requesting it should generally receive a summary or attestation letter rather than the full report. The full report contains exploitation details that you should not share externally. Many firms provide a "customer-facing summary" that confirms testing was conducted, lists findings by severity count, and confirms remediation status without the technical reproduction details.
Security note: Store pentest reports with the same access controls you would apply to any sensitive security document. Limit access to people who need it. If the report is stored in a shared drive, restrict permissions. A leaked pentest report is a roadmap for anyone who wants to attack your systems.
Getting More Value from Future Reports
Every pentest builds on the previous one. Here is how to create a virtuous cycle where each assessment delivers more value than the last.
Share previous reports with your testing firm. Knowing what was found and fixed last time helps testers focus on new areas and verify that previous issues have not regressed. It also saves time during scoping because the firm already understands your architecture.
Track findings over time. Maintain a spreadsheet or use your attack surface management platform to track findings across engagements. Patterns emerge: if broken access control appears in every pentest, you have a systemic issue with your authorization model, not a series of isolated bugs.
Provide feedback to the testing firm. If certain findings were hard to understand, if remediation guidance was not specific enough, or if the report format did not work for your team, tell them. Good firms iterate on their deliverables based on client feedback.
Use findings to improve your development process. Pentest findings are training material. If the same vulnerability types appear repeatedly, build them into your CI/CD security checks, your code review checklist, and your developer onboarding materials. The goal is to catch these issues before they reach production, not after.
Need a pentest report your team can actually use?
Our reports include technology-specific remediation guidance, attack chain analysis, and compliance framework mapping. Every finding comes with clear reproduction steps and business impact context.