Your organization just completed a penetration test. The report lands in your inbox: 47 findings, including 8 critical and 14 high-severity vulnerabilities. The executive summary paints a clear picture of risk. The technical details include reproduction steps, evidence screenshots, and remediation guidance. Everything your engineering team needs to fix these issues is right there in the document.

Six months later, fewer than half of those findings have been addressed. The criticals got some attention. A few highs were patched in the immediate aftermath. But the majority of the report has been buried under sprint backlogs, product roadmaps, and the relentless pressure to ship features. Sound familiar?

This is the remediation gap, and it is one of the most pervasive and least discussed problems in application security. Organizations spend tens of thousands of dollars on penetration testing, receive detailed technical reports, and then fail to act on what they find. The result is a dangerous illusion: the feeling of security without the substance of it.

The Numbers: How Bad Is the Remediation Gap?


Cobalt's State of Pentesting report reveals a striking disconnect. Less than 48% of vulnerabilities identified during penetration tests are actually remediated.[1] That means more than half of all known, documented, reproducible security flaws remain in production after organizations have paid to find them.

What makes this statistic even more alarming is the context surrounding it. In the same research, 81% of organizations reported believing that their security posture is strong. There is a 33-point gap between perception and reality. Companies believe they are secure because they have completed a pentest, not because they have fixed what the pentest found.

This is not a fringe problem affecting a few under-resourced startups. It spans industries, company sizes, and maturity levels. Even organizations with dedicated security teams and formal vulnerability management programs struggle with remediation rates. The issue is structural, not incidental.

The core paradox: Organizations invest in finding vulnerabilities but systematically under-invest in fixing them. Discovery without remediation is not security. It is expensive documentation of risk you have chosen to accept by default.

The consequences compound over time. Unpatched vulnerabilities from previous engagements reappear in subsequent pentests. The same SQL injection found in Q1 shows up again in Q3 because nobody assigned ownership, nobody tracked it, and the original report is sitting in a SharePoint folder that three people have access to. Each cycle of testing and not fixing erodes the value of the testing itself and breeds cynicism among the security teams advocating for remediation.

Why Findings Do Not Get Fixed


Understanding why remediation fails is the first step toward fixing it. After working with hundreds of organizations across their pentest engagements, we see the same root causes repeatedly. None of them are about technical capability. Engineering teams are fully capable of fixing these issues. The failures are organizational, procedural, and motivational.

Engineering Bandwidth Is Always Oversubscribed

This is the most common reason, and it is the most honest one. Engineering teams are already running at capacity. Product managers have roadmaps. Sprints are full. Customer-facing features have deadlines. When a pentest report drops 47 findings into an already overloaded queue, it is competing against work that has visible stakeholders, revenue implications, and executive attention.

Security findings rarely have the same internal advocacy. No product manager is championing the fix for a cross-site scripting vulnerability. No sales team is asking when the insecure direct object reference will be resolved. Without a forcing function, remediation loses the prioritization battle every single sprint.

Unclear Ownership Creates Accountability Gaps

A pentest report identifies a vulnerability in the authentication service. Who owns it? The security team found it, but they do not own the code. The platform team maintains the service, but the vulnerability was introduced by the payments team's integration. The DevOps team manages the infrastructure where the service runs. Everyone has a reasonable argument for why it is not their responsibility.

When ownership is ambiguous, nothing happens. The finding sits in limbo between teams, each assuming someone else will handle it. This is not malice. It is the predictable result of a process that produces findings without assigning them to specific individuals with specific deadlines.

Lack of Context Makes Findings Unactionable

Not all pentest reports are created equal. A finding that says "SQL injection in login endpoint" with a single screenshot is technically accurate but practically useless to the developer who needs to fix it. They need to know: which parameter is injectable? What is the payload? What is the expected behavior versus the vulnerable behavior? What does the remediated code look like in their specific framework?

When findings lack sufficient context, developers spend more time understanding the problem than fixing it. That friction is often enough to push the finding to the bottom of the backlog. If it takes two hours to understand what a finding means and thirty minutes to fix it, the perceived effort is two and a half hours, and it will lose every prioritization conversation against a two-hour feature request.

Report Fatigue Is Real and Measurable

The average pentest report is 40 to 80 pages long. It arrives as a PDF, usually via email, weeks after the engagement concludes. By the time it reaches the engineering team, the context has gone cold. The developers who were available during testing have moved on to other work. The staging environment where the vulnerabilities were demonstrated may have changed.

Long PDF reports also suffer from a psychological problem: they feel overwhelming. A 60-page document with 47 findings triggers avoidance behavior. Engineers open the report, see the scope of work required, and close it. They intend to come back to it. They rarely do. The report becomes shelf-ware, referenced only when the next compliance audit asks "when was your last pentest?" and someone needs to produce the document.

The PDF Problem: Why Traditional Reporting Fails


The traditional pentest delivery model was designed for a different era. When penetration testing first became a standard practice, engagements were annual events. A team of consultants would spend two weeks attacking your perimeter, produce a comprehensive report, and deliver it in a formal readout meeting. The report was the deliverable. The engagement was complete.

This model made sense when applications shipped quarterly and infrastructure changed slowly. It does not make sense in an environment where teams deploy multiple times per day, infrastructure is defined in code, and the attack surface shifts weekly. A point-in-time report about a point-in-time test produces point-in-time value that depreciates rapidly.

The PDF format itself introduces friction at every stage of the remediation process: delivery, triage, assignment, fix, and verification each add their own delays, handoffs, and context loss.

The math is simple: If each step in the remediation process introduces 20% friction (delays, miscommunication, context loss), and there are five steps between finding and fix, only 33% of the original signal survives. That aligns almost exactly with the sub-48% remediation rate the industry reports.
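The compounding effect can be sketched in a few lines of Python. The 20% per-step friction figure is the article's illustration, not a measured constant:

```python
# Illustration of compounding friction: if each handoff preserves only 80%
# of the original signal, five handoffs leave roughly a third of it.
def surviving_signal(steps: int, friction: float) -> float:
    """Fraction of the original signal left after `steps` lossy handoffs."""
    return (1 - friction) ** steps

print(round(surviving_signal(5, 0.20), 2))  # 0.8^5 ~= 0.33
```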

How Real-Time Findings Delivery Changes Remediation Rates


The single most impactful change an organization can make to improve remediation rates is shifting from batch delivery (one report at the end) to continuous delivery (findings published as they are discovered). This is not a minor workflow adjustment. It fundamentally changes the remediation dynamic.

When a finding is delivered in real time, several things happen that do not happen with a PDF report:

Context is fresh. The developer who wrote the vulnerable code may still be working on the same feature. The tester who found the vulnerability is still actively engaged and available for questions. The staging environment is still in the same state. Every hour of delay between discovery and delivery degrades the quality of the context available to the person who needs to fix the issue.

Scope is manageable. Receiving one finding is not overwhelming. Receiving three findings in a day is manageable. Receiving 47 findings in a PDF is paralyzing. Real-time delivery breaks the remediation workload into digestible units that can be addressed incrementally rather than as a single monolithic effort.

Feedback loops tighten. When a developer fixes a finding and the tester can verify the fix within the same engagement window, the loop closes cleanly. The developer gets immediate confirmation that their fix works. The tester can identify if the fix introduced a regression or is incomplete. This back-and-forth is natural in a real-time model and nearly impossible in a PDF-and-email model.

Prioritization happens naturally. When findings arrive one at a time, engineering teams evaluate each one against their current workload and make a deliberate decision about when to address it. This is more effective than receiving a priority-sorted list, because the team's context about their own capacity and constraints is always more accurate than the tester's assumptions about it.

Organizations that adopt real-time findings delivery consistently report remediation rates 2-3x higher than those using traditional batch reporting.[2] The findings are the same. The vulnerabilities are the same. The difference is entirely in how and when they reach the people who can fix them.

Building a Remediation Workflow: Triage, Assign, Fix, Verify


Improving remediation rates requires more than better reporting. It requires a structured workflow that moves findings from discovery to verified resolution with clear accountability at each stage. The most effective remediation workflows follow a four-phase model.

Phase 1: Triage

Every finding needs an initial triage decision within 48 hours of discovery. Triage is not the same as fixing. It is the decision about what to do with the finding. Each finding should exit triage in one of four categories: fix immediately, schedule for a later sprint, accept the risk explicitly, or close as a false positive.

The critical discipline in triage is that "no decision" is not an option. Every finding must exit triage with a status and, if applicable, an owner and a target date. Findings that remain untriaged for more than 48 hours should automatically escalate.

Phase 2: Assign

Assignment must be specific. "The backend team will handle this" is not an assignment. "Sarah Chen will fix the broken access control in the user profile endpoint by March 28" is an assignment. Specificity creates accountability, and accountability drives completion.

Effective assignment also requires that the assignee has the context they need. This means the finding description, reproduction steps, evidence, and recommended remediation should be accessible in the tool the developer already uses. If your team lives in Jira, the finding should be a Jira ticket. If they use GitHub Issues, it should be an issue. Asking developers to check a separate portal for security findings guarantees that they will not check it.
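As a sketch of what that integration looks like, the snippet below builds a create-issue payload in the shape the Jira Cloud REST API expects. The project key, severity-to-priority mapping, and finding fields are assumptions to adapt to your own Jira scheme:

```python
# Sketch: turning a pentest finding into a Jira create-issue payload.
# The field structure follows the Jira Cloud REST API; the v2 endpoint
# (/rest/api/2/issue) accepts a plain-text description as used here.
def build_jira_payload(finding: dict, project_key: str = "SEC") -> dict:
    severity_to_priority = {  # assumed mapping; adjust to your priority scheme
        "critical": "Highest",
        "high": "High",
        "medium": "Medium",
        "low": "Low",
    }
    return {
        "fields": {
            "project": {"key": project_key},
            "issuetype": {"name": "Bug"},
            "summary": f"[Pentest] {finding['title']}",
            "priority": {"name": severity_to_priority[finding["severity"]]},
            # Reproduction steps and remediation guidance travel with the
            # ticket so the assignee never has to open the original report.
            "description": (
                f"{finding['description']}\n\n"
                f"Reproduction: {finding['repro']}\n"
                f"Remediation: {finding['remediation']}"
            ),
        }
    }

# The payload would then be POSTed with basic auth, e.g.:
# requests.post(f"{base_url}/rest/api/2/issue", json=payload, auth=(user, token))
```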

Phase 3: Fix

The fix phase is where engineering teams do what they do best: write code. The remediation workflow's role in this phase is to minimize friction. That means providing clear remediation guidance (not just "fix the SQL injection" but "use parameterized queries with the PDO library, specifically the prepare() and execute() methods"), linking to relevant documentation, and making the tester available for questions.

One pattern that significantly improves fix quality is providing remediation code examples in the same language and framework the application uses. A Python developer does not benefit from a Java remediation example. An Express.js developer does not benefit from a Django example. Context-specific guidance reduces the cognitive load of remediation and produces better fixes.
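As an illustration of the pattern, shown here in Python with sqlite3 (the same idea underlies PDO's prepare() and execute() in PHP), compare a string-built query with its parameterized fix. The table and data are illustrative:

```python
import sqlite3

# In-memory database standing in for the application's user store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password_hash TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'x')")

def login_vulnerable(username: str):
    # BAD: attacker-controlled input is concatenated into the SQL text,
    # so a payload like ' OR '1'='1 changes the query's logic.
    query = f"SELECT * FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def login_fixed(username: str):
    # GOOD: the driver sends the value separately from the SQL text,
    # so the payload is treated as data, never as SQL.
    return conn.execute(
        "SELECT * FROM users WHERE username = ?", (username,)
    ).fetchall()

payload = "' OR '1'='1"
print(len(login_vulnerable(payload)))  # 1 -- injection returns every row
print(len(login_fixed(payload)))       # 0 -- payload matches no username
```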

Phase 4: Verify

A fix is not complete until it has been verified. This is the step most organizations skip, and it is the step that matters most for actually reducing risk. Verification means retesting the specific vulnerability with the same methodology used to discover it, confirming that the fix resolves the issue without introducing new ones.

Without verification, organizations are operating on faith. The developer believes they fixed the issue. The security team assumes it was fixed because the Jira ticket was closed. But closed tickets and actual security are different things. We regularly find during retests that 15-25% of "fixed" vulnerabilities are either incompletely remediated (the specific attack vector was blocked but a variant still works) or have regressed (the fix was overwritten by a subsequent deployment).

A remediation without retesting is a hypothesis, not a fact. You would not ship a feature without QA testing it. Security fixes deserve the same rigor. Build retesting into your remediation workflow as a mandatory step, not an optional nice-to-have.

The Role of Retesting in Closing the Loop


Retesting is the most undervalued component of the penetration testing lifecycle. It is also the component that most directly correlates with actual risk reduction. A pentest without retesting tells you what was wrong. A pentest with retesting tells you what is actually fixed.

Effective retesting programs share several characteristics:

Retesting is included in the engagement scope. When retesting requires a separate proposal, separate budget approval, and separate scheduling, it does not happen. The friction is too high. Organizations that include retesting credits as part of their pentest engagement see dramatically higher utilization of those credits and, consequently, higher verified remediation rates.

Retesting happens promptly. The window between fix and retest should be days, not weeks. When a developer pushes a fix on Tuesday and retesting does not happen until the following month, the feedback loop is broken. The developer has moved on mentally. If the fix is incomplete, they need to re-establish context, which means the second fix attempt takes longer and is more likely to introduce new issues.

Retesting validates, not just checks. A proper retest does not just replay the original proof of concept. It attempts to bypass the fix using variant techniques. If the original finding was a SQL injection via the username parameter, retesting should also check whether the same parameter is vulnerable to different injection techniques, whether other parameters in the same endpoint are vulnerable, and whether the fix was applied consistently across similar endpoints.

Retest results feed back into metrics. The retest pass rate (the percentage of findings that are confirmed fixed on the first retest attempt) is one of the most revealing metrics in vulnerability management. A low retest pass rate indicates that the remediation guidance is insufficient, that developers lack the security knowledge to implement fixes correctly, or that fixes are not being tested locally before being submitted for retest.
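The "validates, not just checks" principle can be sketched as a variant-aware retest. The endpoint under test and the payload list are illustrative; a real retest would also cover sibling parameters and similar endpoints:

```python
import sqlite3

# In-memory database standing in for the remediated application.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def lookup(username: str):
    # The remediated code path under retest (parameterized query).
    return conn.execute(
        "SELECT * FROM users WHERE username = ?", (username,)
    ).fetchall()

# The original proof of concept plus variant techniques.
VARIANTS = [
    "' OR '1'='1",                             # original proof of concept
    "' OR 1=1 --",                             # comment-based variant
    '" OR "1"="1',                             # double-quote variant
    "' UNION SELECT username FROM users --",   # union-based variant
]

def retest() -> bool:
    """Pass only if no variant payload returns rows it should not."""
    return all(lookup(payload) == [] for payload in VARIANTS)

print(retest())  # True: the parameterized fix blocks every variant tried
```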

How PTaaS Platforms Improve Remediation


Penetration Testing as a Service (PTaaS) represents a structural shift in how pentesting is delivered, and its impact on remediation is where the model's value is most evident. PTaaS platforms address the remediation gap not by producing better findings (though they often do), but by embedding remediation into the delivery model itself.

Integration with Developer Workflows

The most impactful feature of modern PTaaS platforms is direct integration with the tools engineering teams already use. When a finding automatically creates a Jira ticket in the correct project, with the correct priority, assigned to the correct team, and populated with reproduction steps and remediation guidance, the friction between discovery and action drops to near zero.

GitHub integration enables a similar workflow for teams that manage work through issues and pull requests. A finding creates an issue. A developer branches, fixes, and opens a PR referencing the issue. The PR is reviewed, merged, and deployed. The tester retests. The issue is closed. This workflow is natural to developers because it uses the same tools and processes they use for every other type of work.

Compare this to the PDF workflow: download the report, read the finding, copy the details into Jira manually, assign it, hope the developer reads the Jira ticket, hope they reference the original report for context, hope they fix it correctly, and then hope someone remembers to schedule a retest. Each "hope" in that chain is a point of failure.

Real-Time Collaboration

PTaaS platforms enable direct communication between testers and developers within the context of specific findings. A developer can ask "can you clarify the reproduction steps for finding #23?" and the tester can respond with additional detail, updated screenshots, or a screen recording, all attached to the finding itself. This eliminates the email chains, the "which finding are you referring to?" clarifications, and the context loss that plagues traditional engagements.

Built-In Retesting

In a PTaaS model, retesting is not a separate engagement. It is a button click. The developer marks a finding as fixed, the tester is notified, and retesting begins. Results are posted back to the same finding, creating a complete audit trail from discovery through remediation to verified resolution. This closed-loop model is what transforms pentesting from a compliance exercise into a genuine risk reduction program.

Metrics That Matter: Measuring Remediation Effectiveness


You cannot improve what you do not measure. But measuring the wrong things creates perverse incentives. The security industry has a history of tracking metrics that look good in dashboards but do not correlate with actual risk reduction. Here are the metrics that genuinely matter for remediation effectiveness.

Mean Time to Remediate (MTTR)

MTTR measures the average time between when a vulnerability is discovered and when it is confirmed fixed through retesting. The key word is "confirmed." Closing a Jira ticket is not remediation. Passing a retest is remediation. MTTR should be tracked by severity level, because a 30-day MTTR is excellent for medium findings and catastrophic for criticals.

Industry benchmarks for MTTR vary widely, but as a rule of thumb from our experience, criticals should be remediated within days, highs within a few weeks, and mediums within a month or two.

Track MTTR over time, not just as a snapshot. A declining MTTR trend indicates that your remediation process is maturing. An increasing trend is a leading indicator that your backlog is growing faster than your capacity to address it.
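A spreadsheet is enough to start, and the calculation itself is simple. This sketch computes MTTR per severity, counting only retest-verified fixes per the "confirmed" definition above; the field names are assumptions:

```python
from collections import defaultdict
from datetime import date
from statistics import mean

def mttr_by_severity(findings: list[dict]) -> dict[str, float]:
    """Mean days from discovery to verified fix, grouped by severity."""
    days = defaultdict(list)
    for f in findings:
        # Closed tickets without a retest pass date do not count as remediated.
        if f.get("verified_on"):
            days[f["severity"]].append((f["verified_on"] - f["discovered_on"]).days)
    return {severity: mean(d) for severity, d in days.items()}

findings = [
    {"severity": "critical", "discovered_on": date(2024, 3, 1), "verified_on": date(2024, 3, 6)},
    {"severity": "critical", "discovered_on": date(2024, 3, 1), "verified_on": date(2024, 3, 10)},
    {"severity": "medium",   "discovered_on": date(2024, 3, 1), "verified_on": None},
]
print(mttr_by_severity(findings))  # {'critical': 7.0} -- the medium is excluded
```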

Fix Rate by Severity

Fix rate is the percentage of findings in each severity category that have been remediated within a defined timeframe. This metric reveals whether your organization is genuinely addressing risk or just cherry-picking easy wins. A common anti-pattern is a high overall fix rate driven by resolving many low-severity findings while critical and high-severity findings remain open.

A healthy fix rate profile looks like this: 100% of criticals fixed within SLA, 90%+ of highs, 75%+ of mediums, and 50%+ of lows (with the remainder explicitly risk-accepted). If your critical fix rate is below 95%, you have a process problem that needs immediate attention.
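That profile can be checked mechanically. A sketch, with the thresholds taken from the targets above and the finding fields assumed:

```python
# Target fix rates by severity, mirroring the profile in the text.
TARGETS = {"critical": 1.00, "high": 0.90, "medium": 0.75, "low": 0.50}

def fix_rates(findings: list[dict]) -> dict[str, float]:
    """Fraction of findings fixed within SLA, per severity."""
    rates = {}
    for severity in TARGETS:
        in_sev = [f for f in findings if f["severity"] == severity]
        if in_sev:
            fixed = sum(1 for f in in_sev if f["fixed_within_sla"])
            rates[severity] = fixed / len(in_sev)
    return rates

def profile_healthy(findings: list[dict]) -> bool:
    """True only if every severity band meets its target fix rate."""
    rates = fix_rates(findings)
    # Severities with no findings trivially pass their target.
    return all(rates.get(sev, 1.0) >= target for sev, target in TARGETS.items())
```

This also surfaces the anti-pattern described above: a pile of fixed lows cannot mask an unmet critical target, because each band is checked against its own threshold.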

Retest Pass Rate

The retest pass rate measures what percentage of fixes are confirmed effective on the first retest attempt. This metric is a proxy for fix quality. A high retest pass rate (above 85%) indicates that your developers understand the vulnerabilities, have sufficient remediation guidance, and are testing their fixes locally before submitting them for verification.

A low retest pass rate (below 70%) signals one or more of the following problems: remediation guidance is insufficient, developers lack security-specific knowledge for the vulnerability types being found, or there is no local testing process before retest submission. Each of these root causes has a different solution, and tracking this metric over time helps identify which intervention is needed.

Findings Recurrence Rate

This metric tracks how often the same vulnerability type reappears in subsequent pentests. If you fix three cross-site scripting findings in Q1 and five new ones appear in Q3, your remediation is treating symptoms, not causes. A high recurrence rate indicates a need for developer training on specific vulnerability classes, architectural improvements (like implementing a centralized input validation library), or changes to the development process (like adding security-focused code review for relevant code changes).

Metric to avoid: Total number of findings. This metric is meaningless in isolation and creates the wrong incentives. A pentest that produces 100 findings is not inherently worse than one that produces 20. It may reflect a larger scope, more thorough testing, or different counting methodology. Track remediation effectiveness, not discovery volume.

A Practical Remediation Playbook


If your organization is starting from a low remediation rate, here is a prioritized action plan that addresses the structural causes of the remediation gap.

Week 1: Establish Ownership

Designate a single person as the remediation coordinator. This is not a full-time role. It is a responsibility, typically owned by a security-minded engineering manager or a senior developer with security interest. The coordinator's job is to ensure every finding from every pentest exits triage within 48 hours with an owner, a status, and a target date. They run a weekly 15-minute standup on remediation progress and escalate blocked findings.

Week 2: Integrate Findings into Developer Workflows

Stop distributing PDF reports. If your pentest provider delivers PDFs, copy the findings into your issue tracker manually (Jira, GitHub Issues, Linear, whatever your team uses). Each finding becomes a ticket with severity, reproduction steps, remediation guidance, and an assigned owner. If your provider offers a PTaaS platform with native integrations, enable them. The goal is zero friction between discovery and the developer's daily workflow.

Week 3: Define SLAs and Track MTTR

Establish remediation SLAs by severity and communicate them to the engineering organization. Start tracking MTTR from this point forward. You do not need a sophisticated tool for this. A spreadsheet with finding ID, discovery date, severity, owner, and resolution date is sufficient to start. The discipline of tracking creates visibility, and visibility creates accountability.

Week 4: Build Retesting into the Process

For every finding that moves to "fixed" status, schedule a retest. If your pentest provider includes retesting, use it. If they do not, negotiate retesting into your next engagement or switch to a provider that includes it. Without retesting, your remediation metrics are based on assumptions, not evidence.

Ongoing: Report on Remediation Monthly

Present remediation metrics (MTTR, fix rate by severity, retest pass rate) to engineering leadership monthly. This creates organizational visibility into remediation progress and ensures that security work competes fairly with feature work for engineering capacity. When leadership sees that 40% of critical findings are outside SLA, they make different resource allocation decisions than when the pentest report is a PDF in someone's inbox.

Conclusion: From Discovery to Actual Security


The remediation gap is not a technology problem. It is a workflow problem, a visibility problem, and an accountability problem. The vulnerabilities are found. The technical knowledge to fix them exists. What is missing is the connective tissue between discovery and resolution: the ownership, the tracking, the integration into daily work, and the verification that fixes actually work.

Organizations that close the remediation gap share three characteristics. First, they deliver findings to developers in the tools developers already use, not in PDF reports that require manual translation. Second, they assign specific ownership with specific deadlines and track progress visibly. Third, they verify fixes through retesting, treating "fixed" as a hypothesis to be tested rather than a status to be trusted.

The pentest industry has spent decades getting better at finding vulnerabilities. It is time to get equally good at fixing them. The value of a pentest is not the report. It is the risk reduction that happens after the report, when findings become fixes and fixes become verified improvements to your security posture.

Every finding that goes unremediated is risk you have paid to discover and then chosen to ignore. That is not a security program. That is an expensive documentation exercise. Close the gap.


Close the Remediation Gap with Real-Time PTaaS

Lorikeet delivers pentest findings in real time, integrates with your dev workflows, and includes retesting to verify every fix. Stop paying to discover vulnerabilities you never fix.

Book a Consultation | View Pricing

Lorikeet Security Team

Penetration Testing & Cybersecurity Consulting

We've completed 170+ security engagements across web apps, APIs, cloud infrastructure, and AI-generated codebases. Everything we publish here comes from patterns we see in real client work.