You have been through penetration tests before. Your team patches the findings, remediates the critical vulnerabilities, and checks the box for compliance. But somewhere in the back of your mind, a question lingers: would your organization actually detect and stop a real attacker? Not a consultant working through a checklist during business hours, but a motivated adversary with specific objectives, weeks of patience, and no rules about which attack surface to target first.
That question is what red teaming answers. A red team engagement is not a pentest with a different name. It is a fundamentally different exercise with a different philosophy, different methodology, and different outcomes. Where a penetration test asks "what vulnerabilities exist?", a red team engagement asks "can a determined adversary achieve this specific objective, and would anyone notice?"
This article breaks down what a red team engagement actually involves, from the initial planning through objective completion, so you understand what you are signing up for, what your team will learn, and whether your organization is ready for one.
What red teaming actually is
Red teaming is adversary simulation. The red team operates as a realistic threat actor with defined objectives: exfiltrate a specific dataset, compromise a critical system, gain access to the CEO's email, simulate a ransomware deployment, or physically breach a secure facility. The engagement tests not just whether vulnerabilities exist, but whether your people, processes, and technology can detect, respond to, and contain an active threat.
The concept originates from military exercises where a dedicated "red" force would attack the "blue" force's defenses to identify weaknesses that only surface under realistic adversarial pressure. In cybersecurity, the principle is the same. You cannot truly evaluate your defenses by running a vulnerability scanner or even by conducting a standard penetration test. You need someone actively trying to evade your detection, bypass your controls, and achieve a meaningful objective while your defenders are operating normally.
The critical distinction is stealth and realism. During a pentest, the security team typically knows testing is happening. The SOC might be told to allowlist the tester's IP addresses. The goal is to find vulnerabilities efficiently. During a red team engagement, the defenders usually do not know the engagement is in progress. Only a small group of senior stakeholders, sometimes called the "white team" or "trusted agents," is aware. Everyone else responds as they would to a real incident. This is what makes the results meaningful.
The core question a red team answers: If a sophisticated adversary targeted your organization with specific objectives, would your security team detect the intrusion, respond effectively, and prevent the attacker from achieving their goals? The answer is rarely a simple yes or no. It is a detailed narrative of what worked, what failed, and where the gaps are.
Red teaming versus penetration testing
The confusion between red teaming and penetration testing is understandable. Both involve offensive security professionals attempting to compromise systems. But the similarities end there. Understanding the differences is essential for choosing the right engagement and setting appropriate expectations. For a deeper comparison, see our dedicated article on red team versus pentest.
| Dimension | Penetration Test | Red Team Engagement |
|---|---|---|
| Primary goal | Find and document vulnerabilities | Test detection and response against a realistic adversary |
| Scope | Defined systems or applications | Entire organization (technical, physical, human) |
| Duration | 1-3 weeks | 4-12 weeks |
| Stealth | Not required; defenders often aware | Critical; defenders should not know |
| Methodology | Systematic testing against a checklist | Objective-driven, adversary-emulated TTPs |
| Findings | List of vulnerabilities with severity ratings | Attack narrative with detection timeline |
| Who benefits most | Development and engineering teams | SOC, IR team, security leadership |
| Prerequisite maturity | Any maturity level | Established security program with monitoring |
A penetration test is a diagnostic tool. A red team engagement is a stress test. You need both, but at different stages of your security program's maturity and for different reasons.
The MITRE ATT&CK framework and how red teams use it
The MITRE ATT&CK framework is the shared language of modern red teaming. ATT&CK (Adversarial Tactics, Techniques, and Common Knowledge) is a comprehensive knowledge base of adversary behaviors observed in real-world cyberattacks, organized into tactics (the "why" behind an attack action) and techniques (the "how").
Red teams use ATT&CK to structure their engagements around realistic adversary behavior. Rather than inventing attack patterns from scratch, they emulate the specific tactics, techniques, and procedures (TTPs) used by real threat groups. If your organization operates in financial services, the red team might emulate the TTPs of FIN7 or Carbanak. If you are in healthcare, they might model their approach after APT41 or APT10.
The framework covers the entire attack lifecycle through its 14 tactic categories:
- Reconnaissance - gathering information about the target before the operation
- Resource Development - establishing infrastructure, purchasing domains, creating accounts
- Initial Access - gaining the first foothold in the target environment
- Execution - running adversary-controlled code on target systems
- Persistence - maintaining access across restarts and credential changes
- Privilege Escalation - gaining higher-level permissions
- Defense Evasion - avoiding detection throughout the operation
- Credential Access - stealing credentials for further access
- Discovery - learning about the environment and what is available
- Lateral Movement - moving through the network to reach objectives
- Collection - gathering the data relevant to the objective
- Command and Control - maintaining communication with compromised systems
- Exfiltration - stealing data from the target environment
- Impact - disrupting availability or compromising integrity
The value of ATT&CK in red teaming is twofold. First, it ensures the engagement reflects real-world adversary behavior rather than artificial scenarios. Second, it provides a common framework for the red team and the defending blue team to discuss findings. When the report says "the red team used T1566.001 (Spearphishing Attachment) for initial access and T1053.005 (Scheduled Task) for persistence," both sides know exactly what happened and can map it to their detection capabilities.
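As a sketch of how an engagement narrative maps onto the framework, the report data can be kept as simple structured records. The technique IDs below are genuine ATT&CK identifiers; the attack steps and detection outcomes are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class AttackStep:
    """One step of the engagement's attack narrative, mapped to ATT&CK."""
    tactic: str
    technique_id: str
    technique_name: str
    detected: bool  # did the blue team's tooling fire on this step?

# Invented narrative; the technique IDs themselves are real ATT&CK entries.
narrative = [
    AttackStep("Initial Access", "T1566.001", "Spearphishing Attachment", False),
    AttackStep("Persistence", "T1053.005", "Scheduled Task", False),
    AttackStep("Credential Access", "T1003.001", "LSASS Memory", True),
]

# Summarize detection coverage for the report debrief.
missed = [s.technique_id for s in narrative if not s.detected]
print(f"Detected {sum(s.detected for s in narrative)}/{len(narrative)} steps")
print("Undetected techniques:", ", ".join(missed))
```

Keeping the narrative in this shape makes it trivial to compute per-tactic coverage and to diff detection results between successive engagements.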
Red team engagement phases
A red team engagement follows the adversary lifecycle. Unlike a penetration test, which systematically tests specific systems for vulnerabilities, a red team operates with a mission objective and adapts dynamically based on what they discover. Here is how each phase typically unfolds.
Phase 1: Reconnaissance
Reconnaissance in a red team engagement goes far deeper and wider than in a standard pentest. The red team is not just mapping your web application's attack surface. They are mapping your entire organization.
Open-source intelligence (OSINT) forms the foundation. The team collects information from public sources: corporate websites, social media profiles of employees, job postings that reveal technologies in use, conference talks by your engineers, GitHub repositories, patent filings, SEC filings, press releases, and vendor relationships. They build organizational charts, identify key personnel, understand your technology stack, and map your physical locations.
Technical reconnaissance maps your external footprint: domain names, IP ranges, cloud infrastructure, email gateways, VPN endpoints, remote access portals, and third-party services. The team identifies which services are exposed, what software versions are running, and where potential entry points exist. This is not just port scanning. It is building a comprehensive model of your digital presence.
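One small slice of that footprint mapping can be sketched in a few lines. Real reconnaissance also draws on certificate transparency logs, passive DNS, and large subdomain wordlists; `example.com` stands in for a hypothetical target here:

```python
import socket

def resolve_candidates(domain, prefixes):
    """Check which common service hostnames resolve for a target domain.
    A tiny fraction of external footprint mapping; real engagements add
    certificate transparency, passive DNS, and ASN/cloud-range data."""
    found = {}
    for prefix in prefixes:
        host = f"{prefix}.{domain}"
        try:
            found[host] = socket.gethostbyname(host)
        except socket.gaierror:
            pass  # name does not resolve: not part of the visible footprint
    return found

# Hypothetical target; prefixes chosen to surface remote-access entry points.
print(resolve_candidates("example.com", ["vpn", "mail", "remote", "portal"]))
```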
Human reconnaissance identifies targets for social engineering. The team learns who has administrative access, who works in finance (for business email compromise scenarios), who has been with the company for less than 90 days (less likely to question unusual requests), and who is active on social media (more likely to click links). They study communication patterns, internal terminology, and corporate culture.
This phase can take one to three weeks. The quality of reconnaissance directly determines the success of every subsequent phase. A red team that rushes this step will get caught early. One that invests properly will move through your environment like they belong there.
Phase 2: Initial access
Initial access is the moment the red team establishes their first foothold inside your environment. This is typically the highest-risk phase for the attackers because it is when they are most likely to be detected. A skilled red team will have multiple initial access plans prepared in case the first attempt fails or triggers an alert.
Phishing remains the most common initial access vector because it works. The red team crafts targeted emails using the intelligence gathered during reconnaissance. These are not obvious spam messages. They are carefully constructed communications that reference real projects, real people, and real events within your organization. The payload might be a document with a macro, a link to a credential harvesting page, or an HTML attachment that executes code. The red team monitors which users open the email, which click the link, and which enter credentials or execute the payload.
External exploitation targets vulnerabilities in internet-facing systems: VPN appliances, email gateways, web applications, or cloud services. The red team looks for unpatched systems, default credentials, misconfigurations, and exposed administrative interfaces. They may chain multiple lower-severity vulnerabilities together to achieve access that no single vulnerability would provide on its own.
Physical intrusion, when in scope, tests physical security controls. This might involve tailgating into office buildings, cloning access badges, planting rogue devices on the network, or accessing unsecured areas where credentials or sensitive information are visible. Physical intrusion often bypasses millions of dollars in technical security controls because the attacker is now on the internal network behind the firewall.
Supply chain and trusted relationships may also be explored. Can the red team compromise a vendor or partner that has access to your environment? Can they abuse trust relationships between systems to gain access through a side door?
Why multiple vectors matter: Real adversaries do not limit themselves to one attack vector. If phishing fails, they try exploitation. If external exploitation fails, they try physical intrusion. A red team engagement tests your defenses across all realistic attack surfaces, not just the ones that are convenient to test.
Phase 3: Persistence
Once the red team has initial access, their first priority is ensuring they do not lose it. Persistence mechanisms allow the attacker to maintain access even if the original entry point is discovered and closed, if a compromised system is rebooted, or if a user changes their password.
Common persistence techniques include scheduled tasks and services that execute the attacker's code at regular intervals, registry modifications that run payloads at startup, web shells placed on internet-facing servers, legitimate remote access tools installed to blend in with normal IT operations, and additional user accounts created to provide backup access paths.
The red team typically establishes multiple persistence mechanisms across different systems. If the blue team discovers and removes one, the red team can re-enter through another. This mirrors real-world adversary behavior: advanced persistent threat groups almost always establish redundant access paths because they know defenders will eventually find some of them.
This phase tests whether your organization can detect abnormal changes to startup processes, unauthorized service installations, unusual scheduled tasks, or new accounts appearing in your environment. Many organizations have logging for these events but lack the detection rules or analyst attention to catch them.
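A minimal sketch of what such a detection might look like over simplified log records. The event IDs are the real Windows ones (4698 scheduled task created, 7045 service installed, 4720 user account created), but the sample events and field layout are invented:

```python
# Windows event IDs that commonly indicate persistence being established.
PERSISTENCE_EVENT_IDS = {
    4698: "scheduled task created",
    7045: "service installed",
    4720: "user account created",
}

def flag_persistence(events):
    """Return (host, description, detail) for events matching persistence IDs.
    Real detections also filter on change windows, signers, and known-good
    admin activity to keep the alert volume manageable."""
    return [
        (e["host"], PERSISTENCE_EVENT_IDS[e["event_id"]], e["detail"])
        for e in events
        if e["event_id"] in PERSISTENCE_EVENT_IDS
    ]

# Invented sample events for illustration.
events = [
    {"host": "FS01", "event_id": 4624, "detail": "logon"},
    {"host": "FS01", "event_id": 4698, "detail": "task 'Updater' -> cmd.exe"},
    {"host": "DC01", "event_id": 4720, "detail": "new account 'svc_backup2'"},
]
for alert in flag_persistence(events):
    print(alert)
```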
Phase 4: Lateral movement and privilege escalation
The initial foothold is rarely the objective. The red team needs to move through your network to reach the systems that contain the target data, the accounts with the required permissions, or the infrastructure they need to control.
Privilege escalation focuses on gaining higher-level access on compromised systems or within the domain. The red team looks for misconfigured services running as SYSTEM, unpatched local vulnerabilities, stored credentials in memory or files, weak service account passwords, and Group Policy misconfigurations. In Active Directory environments, they target Kerberoastable service accounts, unconstrained delegation, and ACL-based attack paths. A regular user account that can read a Group Policy Object containing a local administrator password is a common and devastating escalation path.
Lateral movement is how the red team spreads through the network. They use legitimate administrative tools and protocols to avoid triggering alerts: Remote Desktop Protocol, PowerShell remoting, Windows Management Instrumentation, SMB file shares, and SSH. The key is operating in ways that look like normal administrative activity. A skilled red team does not deploy noisy exploits when they can use a harvested administrator credential and a built-in Windows command to move to the next system.
Credential harvesting fuels both escalation and movement. The team extracts credentials from memory (using techniques like LSASS memory dumping), from files (configuration files, scripts, password managers), from network traffic (LLMNR/NBT-NS poisoning, man-in-the-middle attacks), and from users (keylogging, social engineering for MFA tokens). Each set of credentials opens new paths through the environment.
This phase often reveals the most significant security gaps. Organizations that invest heavily in perimeter security but neglect internal segmentation, monitoring, and credential hygiene find that an attacker who breaches the perimeter can move freely through the entire environment. If your domain administrators use the same credentials across servers, workstations, and cloud platforms, a single compromised machine can lead to total domain compromise.
Phase 5: Objective completion
Every red team engagement has defined objectives agreed upon during scoping. This is the phase where the team attempts to achieve them. The objectives are designed to represent realistic adversary goals that would cause meaningful business impact.
Data exfiltration objectives test whether the team can locate, collect, and extract sensitive data from the environment. Can they access the customer database? Can they exfiltrate intellectual property? Can they retrieve financial records? The red team also tests whether the exfiltration itself is detected: moving large volumes of data to an external destination should trigger alerts in a well-monitored environment.
Ransomware simulation objectives test the organization's resilience to destructive attacks without actually encrypting anything. The red team demonstrates that they have the access and permissions needed to deploy ransomware across the environment. They identify which systems they could reach, which backup systems they could compromise first (as real ransomware groups do), and how much of the environment they could impact. The simulation stops short of execution but provides a clear picture of the blast radius.
Business process compromise objectives target specific operational capabilities. Can the team modify financial transactions? Can they alter manufacturing parameters? Can they disrupt supply chain operations? These objectives are especially relevant for industries where the adversary's goal is not data theft but operational disruption.
The red team documents every step of the path from initial access to objective completion. This attack narrative becomes the centerpiece of the engagement report and the most valuable artifact for improving your security posture.
Real-world red team scenarios
Understanding red teaming in the abstract is useful, but concrete scenarios illustrate why organizations invest in these engagements. Here are four scenarios that represent common red team objectives.
Scenario 1: Data exfiltration
A financial services company wants to know if an attacker could access and exfiltrate its customer database, which contains account numbers, social security numbers, and transaction histories. The red team begins with OSINT against the company's employees, identifies a software engineer who recently posted about a new internal tool on a professional forum, and crafts a phishing email that appears to come from the tool's vendor. The engineer opens the attachment, which establishes a command-and-control channel. Over the next two weeks, the team escalates privileges through a misconfigured build server that stores deployment credentials, moves laterally to the database servers, queries customer records, and exfiltrates a sample dataset through an encrypted channel disguised as normal HTTPS traffic. The SOC never triggers an alert. The engagement reveals that the organization's data loss prevention tools are only monitoring email attachments, not encrypted web traffic, and that database access logging is not connected to the SIEM.
Scenario 2: Ransomware readiness
A healthcare organization wants to validate its ransomware resilience. The red team gains initial access through a vulnerable patient portal application, pivots to the internal network through a shared database server, and systematically maps the environment. They identify the backup infrastructure, the domain controllers, and the clinical systems. They demonstrate that they can compromise the backup servers first (a technique used by every major ransomware group to prevent recovery), then gain domain admin privileges through a Kerberoasting attack against a service account with a weak password. They document that they could deploy ransomware to 94% of the organization's endpoints within four hours. The engagement prompts the organization to implement offline backups, network segmentation between clinical and administrative systems, and privileged access workstations for domain administration.
Scenario 3: Physical intrusion
A technology company suspects its physical security is weaker than its cybersecurity. The red team conducts surveillance of the company's office locations, identifies badge access patterns, and discovers that the parking garage entrance does not require badge access during business hours. A team member enters the building through the garage, takes the elevator to a restricted floor by tailgating an employee, and plugs a small device into an open Ethernet port in a conference room. The device establishes a VPN tunnel to the red team's infrastructure, providing full internal network access. From there, the team compromises the internal Active Directory within 48 hours. The engagement reveals that physical access controls are not enforced consistently, unused network ports are not disabled, and the organization has no network access control (802.1X) to prevent rogue devices from connecting.
Scenario 4: Social engineering campaign
A government contractor wants to test its employees' resilience to social engineering. The red team registers a lookalike domain, builds a convincing replica of the organization's VPN login portal, and sends targeted phishing emails to 200 employees across multiple departments. The email warns of an urgent security update and directs recipients to the fake portal. Within 24 hours, 34 employees enter their credentials. Eight of those employees have MFA enabled, but the red team uses a real-time phishing proxy that relays each login to the genuine portal and captures the resulting session tokens, gaining access to the real VPN. From VPN access, the team moves to internal systems and eventually reaches classified project documentation. The engagement demonstrates that MFA alone does not prevent credential phishing when the phishing infrastructure can proxy authentication in real time, and it leads the organization to adopt phishing-resistant MFA (FIDO2 hardware keys) for all employees with access to classified systems.
Tools of the trade
Red teams use a combination of open-source, commercial, and custom tools to simulate adversary behavior. Understanding these tools at a high level helps you appreciate the sophistication of modern red team operations and the challenges your defenders face in detecting them.
Command and control (C2) frameworks
C2 frameworks are the backbone of red team operations. They provide the infrastructure for communicating with compromised systems, issuing commands, and managing implants across the target environment. Widely used frameworks include Cobalt Strike, a commercial platform used by both red teams and real adversaries; Sliver, an open-source alternative with similar capabilities; Mythic, a modular framework that supports multiple payload types and communication channels; and Brute Ratel, designed specifically for adversary simulation with a focus on defense evasion.
Modern C2 frameworks support encrypted communications over common protocols (HTTPS, DNS, SMB), malleable communication profiles that mimic legitimate traffic, and sleep/jitter functionality that makes automated detection difficult. The red team's C2 traffic is designed to blend in with your organization's normal network traffic, making it extremely difficult for defenders to identify through traffic analysis alone.
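The sleep/jitter idea is simple to illustrate. This sketch shows only the interval math, not any C2 functionality; the numbers are illustrative:

```python
import random

def beacon_interval(sleep_seconds, jitter_fraction):
    """Next check-in delay: the base sleep time +/- a random jitter.
    Varying the interval defeats detections keyed on fixed-period beaconing."""
    low = sleep_seconds * (1 - jitter_fraction)
    high = sleep_seconds * (1 + jitter_fraction)
    return random.uniform(low, high)

# A 60-second sleep with 30% jitter yields delays between 42 and 78 seconds.
intervals = [beacon_interval(60, 0.3) for _ in range(5)]
print([round(i, 1) for i in intervals])
```

Defenders counter this with statistical beacon analysis (looking at the distribution of intervals per host-destination pair) rather than fixed-period matching, which is exactly the kind of detection-engineering lesson a red team engagement surfaces.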
Phishing and social engineering platforms
Purpose-built platforms like GoPhish and Evilginx allow red teams to create convincing phishing campaigns with realistic landing pages, email templates, and tracking capabilities. Evilginx is particularly notable because it acts as a reverse proxy that relays authentication to the real site and captures the session tokens issued after MFA completes, demonstrating a critical weakness in push-based and TOTP-based multi-factor authentication.
Post-exploitation and lateral movement tools
Once inside the network, red teams rely on tools for privilege escalation, credential harvesting, and lateral movement. These include BloodHound for mapping Active Directory attack paths, Rubeus for Kerberos-based attacks, Mimikatz for credential extraction from memory, and Impacket for remote execution through Windows protocols. Many red teams also use legitimate system administration tools like PowerShell, WMI, and PsExec to move through the environment in ways that are difficult to distinguish from normal IT operations.
Physical intrusion tools
When physical testing is in scope, red teams use tools for badge cloning (Proxmark devices that read and replicate RFID access cards), lock bypass tools for physical lock picking and bypass, network implants (small devices that provide remote network access when plugged into an Ethernet port), and rogue wireless access points that capture credentials from nearby devices. These tools demonstrate that physical security is an integral part of the overall security posture, not a separate concern.
A note on responsible disclosure: Red teams use these tools under strict rules of engagement with legal authorization. The tools themselves are not inherently malicious. They are the same tools that defenders should understand in order to build effective detection capabilities. Knowing how an attacker operates is the first step in learning how to stop them.
What the blue team learns
The real value of a red team engagement is not the list of systems compromised. It is the detailed understanding of where your detection, response, and containment capabilities succeed and fail under realistic adversarial pressure.
Detection gaps
Red team engagements consistently reveal that organizations have significant blind spots in their monitoring. Common detection gaps include: no alerts for credential dumping tools running on endpoints, no monitoring of lateral movement through legitimate protocols like SMB and RDP, no detection of new scheduled tasks or services on servers, insufficient logging on critical systems (especially databases and file servers), and SIEM rules that are too noisy (generating so many alerts that real threats are lost in the noise) or too narrow (only detecting specific, known attack signatures).
These gaps exist not because the security team is incompetent but because detection engineering is extraordinarily difficult. You cannot build effective detection rules for attacks you have never seen. A red team engagement gives your blue team direct exposure to realistic adversary behavior so they can build detections that actually work.
Response time and escalation failures
Even when the blue team detects suspicious activity, the engagement tests whether they can respond quickly and effectively enough to prevent objective completion. Common response failures include: alerts that are seen but not investigated for hours or days, analysts who correctly identify suspicious activity but do not escalate because they are uncertain, escalation procedures that are documented but untested (and fail under pressure), lack of clear authority to isolate compromised systems without approval from multiple stakeholders, and incident response playbooks that assume a single compromised system rather than an attacker who has been in the environment for weeks.
The response timeline the red team documents, from initial detection to containment, is one of the most actionable metrics the engagement produces. If your SOC detects the intrusion on day three but does not contain it until day ten, that seven-day gap is where the real damage happens.
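That timeline math is worth making explicit. A sketch with invented timestamps matching the day-three/day-ten example:

```python
from datetime import datetime

def response_gaps(timeline):
    """Compute the key dwell-time metrics from an engagement timeline.
    Timestamps are illustrative, not taken from a real report."""
    t = {k: datetime.fromisoformat(v) for k, v in timeline.items()}
    return {
        "time_to_detect": t["first_detection"] - t["initial_access"],
        "time_to_contain": t["containment"] - t["first_detection"],
    }

timeline = {
    "initial_access": "2024-03-04T09:15",
    "first_detection": "2024-03-07T14:00",  # detected on day three
    "containment": "2024-03-14T11:30",      # contained on day ten
}
gaps = response_gaps(timeline)
print(gaps["time_to_detect"])   # 3 days, 4:45:00
print(gaps["time_to_contain"])  # 6 days, 21:30:00
```

Tracking these two numbers across successive engagements gives leadership a concrete measure of whether the detection and response program is actually improving.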
Assumptions that prove wrong
Every organization operates with security assumptions that have never been tested. A red team engagement has a way of shattering them. Common assumptions that fail under testing include: "our network segmentation prevents lateral movement" (but the red team found a dual-homed server), "our employees would never fall for phishing" (but 17% entered credentials on the fake portal), "our backups would survive a ransomware attack" (but the red team compromised the backup server first), and "domain admin credentials are protected" (but they were cached on a developer's workstation).
These lessons are painful but transformative. They turn abstract security risks into concrete, documented failures that justify investment in specific improvements.
Purple teaming: The collaborative evolution
Purple teaming is not a separate team. It is a methodology that combines the red team's offensive expertise with the blue team's defensive knowledge in a collaborative, iterative exercise. Instead of the red team operating covertly and delivering findings at the end, purple teaming involves both sides working together in real time.
In a purple team exercise, the red team executes a specific technique, say T1053.005 (Scheduled Task for persistence), and then immediately works with the blue team to determine: Did the SIEM receive the relevant event? Is there a detection rule for this behavior? If the rule exists, did it fire correctly? If it fired, was the alert prioritized appropriately? If there is no detection, what data sources and logic would be needed to detect this technique?
This collaborative approach is highly efficient. A traditional red team engagement might reveal 20 detection gaps, but the blue team receives that information weeks later in a report. A purple team exercise addresses each gap as it is discovered, often building and testing new detection rules during the exercise itself.
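One way a purple team session's outcomes might be tracked, with hypothetical results per technique. The technique IDs are real ATT&CK entries; the coverage states and outcomes are invented for illustration:

```python
from enum import Enum

class Coverage(Enum):
    """Possible outcomes for one executed technique, worst to best."""
    NO_TELEMETRY = "data source missing"
    NO_RULE = "logged but no detection rule"
    RULE_FAILED = "rule exists but did not fire"
    ALERT_IGNORED = "fired but was not triaged"
    DETECTED = "detected and escalated"

# Hypothetical outcome of one purple team session, keyed by technique ID.
session = {
    "T1053.005": Coverage.NO_RULE,       # scheduled task: logged, no rule
    "T1003.001": Coverage.DETECTED,      # LSASS dump: EDR alert escalated
    "T1021.002": Coverage.NO_TELEMETRY,  # SMB lateral movement: no logs
}

# Everything short of DETECTED becomes a detection engineering work item.
gaps = {tid: cov for tid, cov in session.items() if cov is not Coverage.DETECTED}
for tid, cov in gaps.items():
    print(f"{tid}: {cov.value}")
```

The distinction between the non-detected states matters: a missing data source is a logging project, a missing rule is a detection engineering task, and an ignored alert is a triage and process problem.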
When to use each approach:
- Red teaming when you want an unbiased, realistic assessment of your overall detection and response capability. The blue team must operate without advance knowledge for the results to be meaningful.
- Purple teaming when you want to rapidly improve your detection engineering and incident response capabilities. Both sides collaborate to maximize the learning in a concentrated timeframe.
- Alternating both is the gold standard. Run a red team engagement to identify gaps, then run purple team sessions to close them, then run another red team engagement to validate improvements.
Many organizations start with purple teaming because it delivers immediate, measurable improvements. Once the security team has reached a higher maturity level, they graduate to full red team engagements to test their capabilities under realistic conditions.
When your organization is ready for red teaming
Red teaming is not for every organization, and it is not for every stage of security maturity. Running a red team engagement against an organization with no security monitoring is like stress-testing a building that does not have a foundation. You already know the result, and you will pay a lot of money to confirm it.
Your organization is likely ready for a red team engagement when the following conditions are true:
- You have completed multiple penetration tests and are consistently remediating findings. If your pentests still reveal critical vulnerabilities in basic areas (unpatched systems, default credentials, missing access controls), you need to fix those first.
- You have a security operations center or managed detection and response service. Someone needs to be monitoring your environment 24/7 for the engagement to test anything meaningful. If nobody is watching, the red team will succeed trivially, and you will learn nothing you did not already know.
- You have an incident response plan that has been at least tabletop-tested. The engagement will stress-test this plan under realistic conditions, but it needs to exist first.
- Your security team has basic detection capabilities. Endpoint detection and response (EDR) on critical systems, centralized logging, a SIEM or similar platform, and at least some detection rules in place. The red team engagement tests the effectiveness of these tools, not whether you own them.
- Leadership supports the engagement and understands that the purpose is to identify weaknesses, not to embarrass the security team. A red team engagement that results in blame rather than improvement is a wasted investment.
If you are not yet at this maturity level, that is entirely normal. Most organizations benefit from a progression of penetration testing, vulnerability management, and security monitoring improvements before they are ready for red teaming. The goal is to build a defense worth testing, and then test it.
Maturity progression: Vulnerability scanning and patching first. Then penetration testing to find what scanners miss. Then build monitoring and detection capabilities. Then red teaming to test whether those capabilities work under adversarial pressure. Skipping steps wastes money and produces results you cannot act on.
Scoping a red team engagement
Scoping a red team engagement is fundamentally different from scoping a penetration test. Instead of defining which systems to test, you define objectives, threat scenarios, rules of engagement, and boundaries.
Defining objectives
Objectives should be specific, measurable, and aligned with real adversary motivations relevant to your industry. Examples include: "Gain access to the customer PII database and demonstrate the ability to exfiltrate records," "Achieve domain administrator privileges starting from an external, unauthenticated position," "Compromise the CI/CD pipeline and demonstrate the ability to inject code into a production deployment," or "Access the executive team's email and demonstrate the ability to read sensitive communications."
Avoid vague objectives like "test our security" or "find vulnerabilities." Those are pentest objectives. Red team objectives should describe what a real adversary would want to achieve against your organization specifically.
Rules of engagement
The rules of engagement (ROE) document is the most critical artifact in the scoping process. It defines:
- Authorized actions: What the red team is permitted to do. This includes which attack vectors are in scope (phishing, physical, external exploitation, insider threat simulation), which systems may be targeted, and whether destructive actions (like demonstrating ransomware deployment capability) are permitted.
- Prohibited actions: Systems, networks, or methods that are explicitly off-limits. Common examples include production systems that cannot tolerate any risk of disruption, third-party environments, certain employee populations (executives, board members), and specific techniques (denial of service, actual data destruction).
- Emergency procedures: How the red team contacts the white team if they discover evidence of a real (non-exercise) compromise, if they inadvertently cause a service disruption, or if the blue team's response creates a safety concern (such as calling law enforcement).
- Legal authorization: A signed authorization letter (often called a "get out of jail free card") that the red team carries during physical intrusion activities and that documents executive-level approval for all engagement activities.
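To make this concrete, an ROE can be captured as structured data so the white team and the red team work from one unambiguous record. The sketch below is purely illustrative; the field names and the pre-action check are assumptions, not a standard schema.

```python
from dataclasses import dataclass

# Hypothetical sketch of an ROE as structured data.
# All field names are illustrative assumptions, not a standard schema.
@dataclass
class RulesOfEngagement:
    objectives: list[str]
    authorized_vectors: list[str]       # e.g. phishing, external exploitation
    prohibited_targets: list[str]       # systems/populations that are off-limits
    prohibited_techniques: list[str]    # e.g. denial of service
    white_team_contacts: list[str]      # emergency escalation points
    authorization_letter_on_file: bool  # the signed "get out of jail free card"

    def is_authorized(self, vector: str, target: str) -> bool:
        """Quick pre-action check an operator might run."""
        return (vector in self.authorized_vectors
                and target not in self.prohibited_targets)

roe = RulesOfEngagement(
    objectives=["Exfiltrate sample records from the customer PII database"],
    authorized_vectors=["phishing", "external_exploitation"],
    prohibited_targets=["prod-payments-db", "board_members"],
    prohibited_techniques=["denial_of_service", "data_destruction"],
    white_team_contacts=["ciso@example.com"],
    authorization_letter_on_file=True,
)
print(roe.is_authorized("phishing", "prod-payments-db"))  # prints False
```

Encoding the ROE this way makes the off-limits list checkable before every action rather than a document operators skim once at kickoff.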
The white team
The white team is the small group of senior stakeholders who know the engagement is happening. This typically includes the CISO, the head of security operations, legal counsel, and the executive sponsor. The white team serves as the communication bridge between the red team and the organization, provides authorization for escalated activities, and can pause or terminate the engagement if necessary.
The white team must not share any information about the engagement with the blue team or other employees. The entire purpose of the engagement depends on the defending team operating under normal conditions.
What the report looks like
A red team report is structurally different from a pentest report. Where a pentest report lists vulnerabilities sorted by severity, a red team report tells a story. It is an attack narrative that documents the complete path from initial reconnaissance through objective completion, with detailed analysis of what was detected, what was missed, and why.
Attack narrative
The attack narrative is a chronological account of the engagement. It describes each action the red team took, when they took it, what MITRE ATT&CK techniques were used, what evidence was left behind, and whether any action was detected by the blue team. This narrative provides a complete picture of how a real attack would unfold against your organization.
A well-written attack narrative reads like a case study. It explains the red team's decision-making process: why they chose specific techniques, how they adapted when something did not work, and what alternatives they considered. This context helps the blue team understand not just what happened, but how an adversary thinks and operates.
Detection timeline
The detection timeline maps the red team's actions against the blue team's responses. For every significant red team action, the report documents: Was it logged? Was an alert generated? Was the alert investigated? Was the investigation escalated? Was the activity contained? How long did each step take?
This timeline reveals the true "dwell time," the duration an adversary can operate in your environment before being detected and contained. Industry data consistently shows that average dwell times range from weeks to months. A red team engagement gives you your organization's specific number and, more importantly, identifies exactly where the delays occur.
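The arithmetic behind a detection timeline is simple: record when each action occurred and when (if ever) it was detected, then compute per-action latency and overall dwell time. A minimal sketch with invented timestamps (the actions and dates are illustrative, not real engagement data):

```python
from datetime import datetime

# Hypothetical timeline entries: (action, when it occurred, when the blue
# team detected it -- None if it was never detected). Data is illustrative.
timeline = [
    ("Initial phishing payload executed", datetime(2024, 3, 4, 9, 12), None),
    ("Credential dumping on workstation", datetime(2024, 3, 6, 14, 3),
     datetime(2024, 3, 6, 15, 40)),
    ("Lateral movement to file server", datetime(2024, 3, 8, 11, 27),
     datetime(2024, 3, 11, 8, 5)),
]

containment = datetime(2024, 3, 12, 10, 0)    # when activity was contained
first_action = min(t for _, t, _ in timeline)
dwell_time = containment - first_action       # the engagement's dwell time

# Per-action detection latency, and the actions the blue team never saw.
detected = [(a, d - t) for a, t, d in timeline if d is not None]
missed = [a for a, _, d in timeline if d is None]

print(f"Dwell time: {dwell_time}")
for action, latency in detected:
    print(f"Detected after {latency}: {action}")
print(f"Never detected: {missed}")
```

In this toy data, credential dumping was caught in under two hours while lateral movement took almost three days to surface; those gaps, not the totals, are where the remediation work lives.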
Findings and recommendations
Unlike a pentest report that focuses on technical vulnerabilities, red team findings span technical, procedural, and human dimensions:
- Technical findings: Specific vulnerabilities exploited, misconfigurations abused, and detection gaps in security tooling. These come with remediation recommendations similar to a pentest report.
- Process findings: Gaps in incident response procedures, escalation failures, communication breakdowns, and documentation that proved inadequate under pressure. These lead to recommendations for process improvements and updated playbooks.
- Human findings: Social engineering susceptibility rates, security awareness gaps, and behavioral patterns that enabled the attack. These inform targeted training and awareness programs.
- Strategic recommendations: High-level improvements to security architecture, monitoring strategy, and organizational structure that would meaningfully improve resilience. These are the recommendations that require executive attention and budget allocation.
MITRE ATT&CK mapping
The report maps every technique used during the engagement to the MITRE ATT&CK framework, with a detection assessment for each technique. This produces a heat map showing which adversary techniques your organization can detect and which it cannot. The heat map becomes a prioritized roadmap for detection engineering: your team knows exactly which ATT&CK techniques to build detections for next.
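A coverage summary like this is straightforward to compute from the engagement data. The sketch below uses real ATT&CK technique IDs, but the detection outcomes are invented for illustration; it tallies detection coverage per tactic and emits the undetected techniques as a detection-engineering backlog.

```python
from collections import defaultdict

# Hypothetical engagement results: (ATT&CK tactic, technique ID, detected?).
# Technique IDs are real ATT&CK identifiers; detection outcomes are invented.
results = [
    ("Initial Access",    "T1566", True),   # Phishing
    ("Credential Access", "T1003", True),   # OS Credential Dumping
    ("Credential Access", "T1558", False),  # Steal or Forge Kerberos Tickets
    ("Lateral Movement",  "T1021", False),  # Remote Services
    ("Exfiltration",      "T1048", False),  # Exfil Over Alternative Protocol
]

coverage = defaultdict(lambda: [0, 0])      # tactic -> [detected, total]
for tactic, _, was_detected in results:
    coverage[tactic][1] += 1
    coverage[tactic][0] += int(was_detected)

# Undetected techniques become the prioritized detection backlog.
backlog = [tid for _, tid, was_detected in results if not was_detected]

for tactic, (hit, total) in coverage.items():
    print(f"{tactic}: {hit}/{total} techniques detected")
print("Build detections next for:", backlog)
```

Rendered as a heat map over the full ATT&CK matrix, the same data shows at a glance which tactics your tooling covers and which are blind spots.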
Getting the most value from a red team engagement
A red team engagement is a significant investment, often ranging from $40,000 to $150,000 or more depending on scope and duration. To maximize the return on that investment, approach the engagement with the right mindset and follow through on the results.
- Define clear, business-relevant objectives. "Can an attacker steal our customer data?" is a better objective than "test our defenses." Specific objectives lead to specific, actionable findings.
- Do not treat it as pass/fail. No organization "passes" a red team engagement. The value is in the details of what was detected, what was missed, and how quickly the team responded. Frame the engagement as a learning exercise, not an exam.
- Conduct a thorough debrief with both teams. Bring the red team and blue team together after the engagement for a collaborative debrief. The red team explains what they did and why; the blue team explains what they saw and how they responded. This is often where the most valuable learning happens.
- Build a remediation roadmap. Convert the findings into a prioritized plan with clear ownership, deadlines, and success metrics. The most critical gaps (usually detection blind spots and response time issues) should be addressed first.
- Re-test after remediation. Use purple team sessions to validate that detection improvements actually work, then schedule another red team engagement in 12 to 18 months to measure progress.
Red teaming is the most demanding test your security program can face. It strips away assumptions, exposes gaps that no other assessment can find, and gives your team direct experience with the adversary behaviors they are supposed to detect and contain every day.
It is not a replacement for penetration testing. It is what comes after your penetration testing program has matured and you are ready to answer the harder question: not "do we have vulnerabilities?" but "can a real adversary achieve their objectives against us, and would we know it was happening?"
The organizations that invest in red teaming consistently report that it transforms their security program. Not because of any single finding, but because it changes how the security team thinks about defense. Defenders who have been tested by a realistic adversary build better detections, write better playbooks, and respond to real incidents with the confidence that comes from experience.
Ready to Test Your Defenses Against a Real Adversary?
Our red team operates like an advanced threat actor with objectives that matter to your business. Find out what your defenders would actually see.
Explore Red Team Services
Talk to Our Team