Most red team engagements that go wrong do not fail because of a technical mistake. They fail because the rules of engagement were poorly defined, ambiguous, or missing entirely. An operator accesses a system that was supposed to be off-limits. A SOC analyst triggers a full incident response against what they believe is a real breach. Legal counsel discovers after the fact that nobody authorized the social engineering campaign that just targeted the CFO. These are not hypothetical scenarios. They happen when teams skip or rush the ROE process.
The rules of engagement (ROE) are the single most important document in any red team engagement. They define what the red team can do, what they cannot do, who knows the engagement is happening, what happens when something goes wrong, and how both sides are legally protected. A well-crafted ROE document prevents scope creep, protects operators from legal liability, keeps the client's production environment safe, and ensures the engagement produces actionable intelligence rather than chaos.
This guide covers everything that belongs in a red team ROE document, from legal authorization to deconfliction procedures, with enough detail that you can use it as a reference when drafting your own.
What are red team rules of engagement?
Red team rules of engagement are the formal, written agreement between the red team and the client organization that defines the legal authorization, operational boundaries, communication protocols, and behavioral constraints of an adversary simulation engagement. The ROE is a binding operational document that governs every action the red team takes from the first day of reconnaissance to the final report delivery.
If you have worked with penetration testing before, you are familiar with scope documents. A pentest scope typically lists the target systems, the testing window, and any off-limits areas. Red team rules of engagement go significantly further because the nature of the engagement demands it.
Red teams use techniques that pentests do not: social engineering, phishing campaigns targeting real employees, physical intrusion attempts, phone-based pretexting, wireless attacks against corporate networks, and multi-stage attack chains that may persist over weeks. These techniques carry higher operational and legal risk. A pentester scanning a web application is unlikely to trigger a company-wide incident response. A red team operator who clones an employee badge and walks into a data center might.
The ROE exists to ensure that every party (the red team operators, the client's executive sponsor, the legal team, and the limited set of people who know the engagement is happening) has an explicit, documented understanding of what is authorized, what is prohibited, and what to do when the unexpected occurs.
The essential components of a red team ROE document
A comprehensive ROE document is not a one-page form. It is a detailed operational agreement that covers every dimension of the engagement. Here is what each section should contain and why it matters.
Engagement objectives and success criteria
Every red team engagement must begin with clearly defined objectives. Unlike a pentest where the goal is broad vulnerability discovery, a red team operates toward specific outcomes: exfiltrate a particular dataset, compromise a critical system, gain access to a specific executive's accounts, or demonstrate the ability to deploy ransomware without detection.
The ROE should document both the primary objectives and the success criteria. What constitutes a successful engagement? Is it achieving the objective without detection? Achieving it at all, regardless of whether the blue team noticed? Documenting the full attack chain even if the objective was not reached? Without defined success criteria, there is no way to evaluate whether the engagement delivered value.
Scope: in-scope systems, networks, and physical locations
While red team scope is typically broader and more objective-driven than pentest scope, it still requires explicit boundaries. The ROE should list the networks, IP ranges, domains, cloud accounts, physical locations, and employee groups that the red team is authorized to target. If the engagement includes subsidiary companies, joint ventures, or shared infrastructure, each entity must be explicitly named and authorized.
For physical assessments, the ROE must specify which buildings, floors, and facilities are in scope. It should note any areas that require special handling, such as clean rooms, server rooms with sensitive equipment, or facilities with safety-critical systems.
Out-of-scope: what the red team must not touch
The out-of-scope section is arguably more important than the in-scope section. It defines the absolute boundaries that the red team must never cross, regardless of any tactical advantage. Common exclusions include:
- Production databases with real customer data that cannot tolerate any risk of corruption or exposure
- Safety-critical systems such as SCADA, ICS, medical devices, or any system where disruption could cause physical harm
- Specific IP addresses or hosts that belong to third parties, partners, or shared infrastructure not covered by the authorization
- Specific individuals who must not be targeted with social engineering, such as employees with known health conditions, executives in active legal proceedings, or anyone who has explicitly opted out
- Systems under active audit or compliance review where red team activity could contaminate audit logs or trigger compliance violations
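Scope and exclusion lists are easiest to enforce when they are machine-checkable. The sketch below, with purely hypothetical ranges and hosts, shows one way operators might gate tooling against the ROE's network scope using Python's standard `ipaddress` module; the key design point is that explicit exclusions always override in-scope ranges.

```python
import ipaddress

# Hypothetical scope pulled from an ROE document. All ranges and
# hosts here are illustrative placeholders, not real guidance.
IN_SCOPE_NETWORKS = [
    ipaddress.ip_network("10.20.0.0/16"),
    ipaddress.ip_network("192.0.2.0/24"),
]
OUT_OF_SCOPE_HOSTS = {
    ipaddress.ip_address("10.20.5.10"),  # e.g. a production DB excluded by the ROE
}

def target_is_authorized(target: str) -> bool:
    """Return True only if the target sits inside an in-scope range
    and is not explicitly excluded. Exclusions always win."""
    addr = ipaddress.ip_address(target)
    if addr in OUT_OF_SCOPE_HOSTS:
        return False
    return any(addr in net for net in IN_SCOPE_NETWORKS)

print(target_is_authorized("10.20.1.4"))    # in range, not excluded -> True
print(target_is_authorized("10.20.5.10"))   # explicitly excluded -> False
print(target_is_authorized("203.0.113.9"))  # outside all ranges -> False
```

A check like this can sit in front of scanners and C2 taskings so that an out-of-scope target is rejected before a packet is ever sent.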
Authorized techniques
The ROE must explicitly list every category of technique the red team is authorized to employ. Do not assume anything is implicitly authorized. If it is not in the document, it is not permitted.
- Network exploitation: vulnerability scanning, exploitation of discovered vulnerabilities, lateral movement, privilege escalation
- Social engineering: phishing emails, spear-phishing, vishing (phone-based pretexting), in-person impersonation
- Physical access: tailgating, lock picking, badge cloning, dumpster diving, surveillance
- Wireless attacks: rogue access points, WPA/WPA2 cracking, evil twin attacks, Bluetooth exploitation
- Application exploitation: web application attacks, API exploitation, thick client attacks, mobile application testing
- Cloud exploitation: IAM enumeration, storage bucket misconfiguration, cross-account pivoting, metadata service abuse
Prohibited techniques
Equally critical is the explicit list of what the red team must never do, even if the opportunity presents itself during the engagement.
- Denial of service: no attacks designed to disrupt availability of production systems
- Destructive actions: no deletion, encryption, or corruption of real data
- Real data exfiltration: no exfiltration of actual PII, financial data, or trade secrets. Proof of access is demonstrated through screenshots, metadata, or pre-seeded canary data
- Modification of safety systems: no tampering with fire suppression, physical security alarms, or life-safety infrastructure
- Third-party targeting: no pivoting to systems belonging to organizations not covered by the authorization
- Actions that could cause reputational harm: no public disclosure, no social media posts, no contacting customers or partners as part of the pretext
Communication protocols and emergency contacts
The ROE must establish a clear communication framework. This includes who on the client side is aware of the engagement (the "trusted agents"), the secure communication channel between the red team lead and the client point of contact, the schedule for status updates, and the emergency contact chain.
Emergency contacts must include at minimum: the client's trusted agent (reachable 24/7 during the engagement window), the red team lead, the client's general counsel or legal representative, and the red team's project manager. Every operator should have these contacts immediately accessible at all times.
Deconfliction procedures
Deconfliction is the process for determining whether detected suspicious activity is the red team or a real threat. This section of the ROE is critical and is covered in detail later in this guide.
Legal authorization and liability
The ROE must include or reference the formal legal authorization for the engagement, including the "get-out-of-jail" letter, indemnification provisions, and liability limitations. This is covered in depth in the legal considerations section below.
Data handling and evidence destruction timelines
During a red team engagement, operators will inevitably encounter sensitive information: credentials, internal documents, personal data, financial records. The ROE must specify how this data is handled during the engagement (encrypted storage, access controls, need-to-know restrictions) and when and how it is destroyed after the engagement concludes.
Typical provisions include: all engagement data stored on encrypted volumes, no client data stored on personal devices, all raw data and tooling artifacts destroyed within 30 days of report delivery, and written confirmation of destruction provided to the client.
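A destruction deadline like the 30-day provision above is simple to track programmatically. This minimal sketch (the retention window and dates are illustrative, not a recommendation) computes the deadline from the report delivery date so the written confirmation can be scheduled rather than remembered:

```python
from datetime import date, timedelta

# Illustrative retention window; use whatever the ROE actually specifies.
RETENTION_DAYS = 30

def destruction_deadline(report_delivered: date) -> date:
    """Latest date by which raw engagement data must be destroyed."""
    return report_delivered + timedelta(days=RETENTION_DAYS)

def destruction_overdue(report_delivered: date, today: date) -> bool:
    """True once the destruction deadline has passed."""
    return today > destruction_deadline(report_delivered)

print(destruction_deadline(date(2024, 3, 1)))                     # 2024-03-31
print(destruction_overdue(date(2024, 3, 1), date(2024, 4, 2)))    # True
```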
Reporting requirements and classification
The ROE should define the reporting deliverables, including format, detail level, distribution restrictions, and timelines. This prevents misaligned expectations about what the client will receive and when. Reporting requirements are covered in detail in a dedicated section below.
Well-defined ROE vs. vague ROE
The quality of the rules of engagement directly determines the quality and safety of the engagement. Here is how the outcomes differ.
| Dimension | Well-Defined ROE | Vague ROE |
|---|---|---|
| Legal protection | Operators and client are covered by explicit authorization for every technique used | Gaps in authorization create exposure; operators risk prosecution for activities they assumed were permitted |
| Scope creep risk | Clear boundaries prevent the engagement from expanding beyond what was planned and budgeted | Ambiguous scope leads to the red team either being too conservative or accidentally crossing boundaries |
| Blue team impact | Deconfliction procedures prevent unnecessary incident response and wasted SOC resources | SOC may launch a full incident response, divert resources from real threats, and escalate to management unnecessarily |
| Executive confidence | Leadership trusts the process because the framework was reviewed and approved before testing began | Executives learn mid-engagement that activities they did not expect are happening, eroding trust in the red team and the security program |
| Engagement quality | Operators focus on objectives without second-guessing whether each action is authorized | Operators self-censor or waste time seeking ad hoc approvals, reducing the realism and value of the simulation |
| Post-engagement disputes | Clear documentation eliminates disagreements about what was tested, what was found, and how data was handled | Client and red team disagree on whether certain findings are valid, whether certain actions were authorized, and who is liable for any disruption |
Communication and deconfliction
Deconfliction is the most operationally sensitive aspect of any red team engagement. When the blue team or SOC detects suspicious activity, someone needs to determine quickly whether it is the red team or a real attacker, without revealing the engagement to the broader security team.
The trusted agent model
Every red team engagement requires at least one trusted agent on the client side. This is typically the CISO, security director, or VP of Engineering: someone senior enough to make rapid decisions but not part of the SOC or IR team being tested. The trusted agent is the single point of contact for deconfliction decisions.
The trusted agent's responsibilities include:
- Being available on a dedicated, secure communication channel during all engagement hours
- Making immediate deconfliction decisions when the SOC escalates suspicious activity
- Determining whether to let the blue team continue investigating (to test their response) or to quietly deconflict (to prevent wasted resources)
- Authorizing emergency stops if the engagement is causing unintended operational impact
- Ensuring the engagement remains confidential from the broader security team until the planned debrief
Emergency stop procedures
The ROE must define a clear emergency stop ("kill switch") process. If the engagement causes unintended production impact, if a real security incident occurs simultaneously, or if any safety concern arises, either party must be able to halt all red team activity immediately.
The emergency stop procedure should specify:
- Who has the authority to call an emergency stop (both client-side and red team-side)
- The communication channel and code word for triggering the stop
- What the red team does immediately upon receiving the stop signal (cease all activity, disconnect from client networks, preserve current state for forensic review)
- The process for resuming the engagement after the situation is resolved
- How to distinguish an emergency stop from a temporary pause (e.g., "stand down for the next 4 hours" vs. "engagement terminated")
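The stop-versus-pause distinction above maps naturally onto a small state machine. This is a hedged sketch of one possible model, not a prescribed implementation; a real controller would also notify operators, log every transition, and tie resumption to the formal re-authorization process:

```python
from enum import Enum

class EngagementState(Enum):
    ACTIVE = "active"
    PAUSED = "paused"          # temporary stand-down, e.g. "hold for 4 hours"
    TERMINATED = "terminated"  # emergency stop; resumption needs re-authorization

class EngagementController:
    """Minimal sketch of the emergency-stop vs. pause distinction."""

    def __init__(self):
        self.state = EngagementState.ACTIVE

    def pause(self):
        if self.state is EngagementState.TERMINATED:
            raise RuntimeError("terminated engagements cannot be paused")
        self.state = EngagementState.PAUSED

    def resume(self):
        if self.state is EngagementState.TERMINATED:
            raise RuntimeError("resumption after termination requires re-authorization")
        self.state = EngagementState.ACTIVE

    def emergency_stop(self):
        # An emergency stop is always honored, regardless of current state.
        self.state = EngagementState.TERMINATED
```

The asymmetry is the point: anyone with stop authority can always reach TERMINATED, but nothing in code paths back out of it without a human decision.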
Daily check-ins and status reporting
Even though the engagement is covert from the blue team's perspective, the trusted agent needs regular situational awareness. Best practice is a daily encrypted status report from the red team lead to the trusted agent that covers:
- High-level summary of activities conducted in the past 24 hours
- Any systems accessed or compromised
- Any near-misses or potential detection events
- Planned activities for the next 24 hours
- Any operational concerns or requests for authorization adjustments
Code words and deconfliction protocols
Establish pre-agreed code words or phrases that the trusted agent can use when the SOC escalates an alert. For example, if the SOC reports suspicious lateral movement and asks the trusted agent whether to escalate, the trusted agent can use the agreed protocol to either confirm it is a known activity (allowing the blue team to continue investigating as a training exercise) or flag it as unknown (indicating a real threat that requires genuine incident response).
The deconfliction protocol must handle the scenario where both a real attack and the red team engagement are happening simultaneously. The red team should log all activities with precise timestamps so that the trusted agent can cross-reference any SOC alerts against the red team's activity log and immediately identify events that are not attributable to the engagement.
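The cross-referencing step described above can be sketched in a few lines. The log entries, hostnames, and skew window here are hypothetical; the idea is simply that a SOC alert is attributable to the engagement only if it matches a logged red team action on the same host within a small clock-skew tolerance, and anything unmatched is treated as a potential real threat:

```python
from datetime import datetime, timedelta

# Hypothetical red team activity log: (timestamp, target host, technique).
ACTIVITY_LOG = [
    (datetime(2024, 3, 5, 14, 2), "hr-fileserver-01", "SMB lateral movement"),
    (datetime(2024, 3, 5, 14, 30), "dc-02", "Kerberoasting"),
]

# Allow small clock skew between SOC sensors and operator logs.
SKEW = timedelta(minutes=5)

def attributable(alert_time: datetime, alert_host: str) -> bool:
    """True if a SOC alert matches a logged red team action on the same
    host within the skew window; False means treat it as a real threat."""
    return any(
        host == alert_host and abs(ts - alert_time) <= SKEW
        for ts, host, _ in ACTIVITY_LOG
    )

print(attributable(datetime(2024, 3, 5, 14, 4), "hr-fileserver-01"))  # True
print(attributable(datetime(2024, 3, 5, 14, 4), "mail-gw-01"))        # False
```

Note the fail-safe default: when in doubt, the function returns False and the alert is handled as a genuine incident.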
Legal considerations
The legal framework for a red team engagement is non-negotiable. Without proper authorization, red team activities can constitute criminal offenses under computer fraud, trespassing, wire fraud, and data protection statutes. The legal section of the ROE is what separates a professional adversary simulation from an unauthorized intrusion.
The get-out-of-jail letter
The authorization letter, often called a "get-out-of-jail" letter, is a formal document signed by someone with the legal authority to authorize the testing activities. This is typically the CEO, CTO, CISO, or general counsel. The letter must:
- Be on company letterhead and signed by an authorized officer
- Specify the exact date range of the engagement
- List the authorized activities by category (network exploitation, social engineering, physical access, etc.)
- Identify the red team company and individual operators by name
- Explicitly state that the named individuals are authorized to perform the listed activities against the company's systems and facilities
- Include emergency contact information for legal counsel
Every red team operator should carry a copy of this letter during physical assessments. If an operator is detained by physical security or law enforcement, the letter provides immediate evidence of authorization. Operators should also carry identification and the emergency contact number for the client's trusted agent.
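Before each operation, it is worth mechanically checking that the letter actually covers the operator, the activity, and the date. This sketch uses hypothetical field names and an invented example letter (it is an illustration of the checks, not a legal template):

```python
from datetime import date

# Hypothetical required fields for an authorization letter record.
REQUIRED_FIELDS = {"signer", "signer_title", "start", "end",
                   "authorized_activities", "named_operators", "legal_contact"}

def authorization_gaps(letter: dict, operator: str, activity: str, on: date) -> list:
    """Return a list of authorization gaps; an empty list means the
    operator, activity, and date are all covered by the letter."""
    gaps = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - letter.keys())]
    if gaps:
        return gaps  # cannot evaluate coverage with fields missing
    if not (letter["start"] <= on <= letter["end"]):
        gaps.append(f"date {on} outside authorized window")
    if operator not in letter["named_operators"]:
        gaps.append(f"operator {operator!r} not named in letter")
    if activity not in letter["authorized_activities"]:
        gaps.append(f"activity {activity!r} not authorized")
    return gaps

letter = {
    "signer": "A. Example", "signer_title": "General Counsel",
    "start": date(2024, 3, 1), "end": date(2024, 3, 29),
    "authorized_activities": {"network exploitation", "physical access"},
    "named_operators": {"J. Doe", "R. Roe"},
    "legal_contact": "+1-555-0100",
}
print(authorization_gaps(letter, "J. Doe", "physical access", date(2024, 3, 10)))  # []
```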
Authorization chains
Ensure the person signing the authorization actually has the authority to do so. A department head authorizing testing of the entire corporate network may not have that authority. The signer must have authority over all systems, facilities, and personnel included in the scope. When in doubt, escalate to the CEO or general counsel.
For organizations with subsidiaries, multiple legal entities, or shared infrastructure, each entity may require separate authorization. A parent company's CISO cannot necessarily authorize testing of a subsidiary's systems if the subsidiary is a separate legal entity.
Third-party and cloud provider considerations
Red team engagements that target cloud infrastructure must comply with the cloud provider's penetration testing policies. Each major provider has specific requirements:
- AWS: permits penetration testing of most services without prior approval under its current policy, but prohibits DNS zone walking, DoS/DDoS, port flooding, and testing of AWS infrastructure itself. Review the current penetration testing policy before each engagement.
- Microsoft Azure: requires adherence to their Rules of Engagement for penetration testing. No prior notification required for most tests, but DoS testing requires approval through their DDoS test execution form.
- Google Cloud Platform: permits penetration testing of your own projects without notification, but prohibits testing that impacts other customers or Google infrastructure. Their Acceptable Use Policy governs all testing activities.
If the engagement could impact systems belonging to third parties (e.g., shared hosting, SaaS platforms, partner integrations), those third parties may need to be notified or excluded from scope. Accidentally compromising a third-party system without authorization exposes both the red team and the client to significant legal liability.
Insurance considerations
Both the red team provider and the client should carry appropriate insurance. The red team firm should have professional liability (errors and omissions) insurance and cyber liability insurance. The client should verify that their existing cyber insurance policy covers authorized security testing and does not contain exclusions that could void coverage if a red team engagement triggers a claim.
Some insurance policies require notification before penetration testing or red team engagements. Failing to notify the insurer could jeopardize coverage if the engagement causes an incident that triggers a claim.
Reporting requirements
The ROE should specify exactly what the client will receive at the end of the engagement and in what format. Misaligned expectations about reporting are a common source of post-engagement friction. Define these elements upfront.
Executive summary
The executive summary should be written for a non-technical audience: the board, C-suite, and legal counsel. It covers the engagement objectives, whether they were achieved, the overall risk posture, and the top recommendations. Specify the expected length (typically 2-4 pages) and whether it needs to align with a specific risk framework.
Technical detail and attack chain documentation
The technical report documents every step of the attack chain: initial access, persistence mechanisms, lateral movement, privilege escalation, and objective completion. Each step should include the tools and techniques used, the systems affected, timestamps, and detection opportunities the blue team missed.
The ROE should specify whether the client wants MITRE ATT&CK mapping for each technique, whether screenshots and evidence are required for every step, and how deep the technical documentation should go. Some clients want operator-level detail; others want a summary that their security team can act on without being overwhelmed.
Finding severity ratings
Agree on the severity rating framework before the engagement starts. Common options include CVSS, a custom severity scale aligned with the client's risk management framework, or a qualitative scale (Critical, High, Medium, Low, Informational). The ROE should specify which framework applies and how the red team determines severity for findings that represent detection gaps rather than traditional vulnerabilities.
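When CVSS is the agreed framework, the mapping from base score to a qualitative label is fixed by the CVSS v3.1 specification's qualitative severity rating scale. A minimal sketch (the use of "Informational" in place of the specification's "None" bucket is a common reporting convention, not part of the standard):

```python
def cvss_to_qualitative(score: float) -> str:
    """Map a CVSS v3.x base score to the qualitative scale defined
    in the CVSS v3.1 specification (None/Low/Medium/High/Critical)."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score == 0.0:
        # The spec calls this bucket "None"; many reports label it "Informational".
        return "Informational"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss_to_qualitative(7.5))  # High
```

Detection-gap findings usually fall outside this mapping entirely, which is exactly why the ROE should say how they are rated.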
Recommendations and remediation guidance
Define the expected depth of remediation guidance. Should the red team provide strategic recommendations only (e.g., "implement network segmentation"), or should they include specific technical implementation guidance (e.g., "create firewall rules between VLAN X and VLAN Y to block SMB traffic")? The level of detail affects both the report timeline and the engagement cost.
Retest provisions
The ROE should specify whether the engagement includes a retest period during which the red team will verify that remediations are effective. If retesting is included, define the window (e.g., within 90 days of report delivery), the scope of retesting (all findings or only critical and high), and whether retesting is included in the engagement price or billed separately.
Report classification and distribution
Red team reports contain information that could be used to attack the organization. The ROE should specify: who is authorized to receive the full report, whether a redacted version is needed for broader distribution, how the report should be stored and transmitted (encrypted, marked confidential), and the retention period after which the client should destroy or archive the report.
Red team ROE template sections
A complete red team ROE document should contain the following sections. Use this as a checklist when drafting or reviewing an ROE for your next engagement.
- Document control: version history, authors, reviewers, approval signatures, classification level
- Engagement overview: client name, red team provider, engagement dates, engagement type (full red team, assumed breach, targeted objective)
- Objectives and success criteria: primary objectives, secondary objectives, definition of success, metrics to be collected
- Scope definition: in-scope networks and IP ranges, domains, cloud accounts, physical facilities, personnel groups
- Exclusions: out-of-scope systems, networks, individuals, and any absolute prohibitions
- Authorized techniques: categories of permitted attack vectors with specific examples for each
- Prohibited techniques: explicit list of actions the red team must not perform under any circumstances
- Communication plan: trusted agent identification, secure communication channels, status reporting cadence, escalation procedures
- Deconfliction procedures: code words, trusted agent decision matrix, concurrent incident handling, activity logging requirements
- Emergency stop protocol: trigger criteria, stop signal, immediate actions, resumption process
- Legal authorization: authorization letter reference, signer authority confirmation, applicable laws and jurisdictions
- Third-party and cloud provider notifications: list of providers notified, applicable policies, any restrictions imposed
- Data handling: encryption requirements, storage restrictions, access controls, evidence preservation, destruction timelines and confirmation process
- Reporting deliverables: report format, severity framework, detail level, delivery timeline, distribution list, classification
- Retest provisions: retest window, scope, pricing, and scheduling process
- Insurance and liability: red team provider insurance details, client insurance acknowledgment, indemnification terms, liability caps
- Signatures: client authorized signatory, red team lead, legal counsel (both parties where applicable)
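The checklist above lends itself to an automated completeness pass over a draft ROE. This sketch simply mirrors the section names listed here; matching on headings is an assumption about how the draft is structured, and a real review still needs human eyes on the content of each section:

```python
# Section names mirror the checklist in this guide.
REQUIRED_SECTIONS = [
    "Document control", "Engagement overview", "Objectives and success criteria",
    "Scope definition", "Exclusions", "Authorized techniques",
    "Prohibited techniques", "Communication plan", "Deconfliction procedures",
    "Emergency stop protocol", "Legal authorization",
    "Third-party and cloud provider notifications", "Data handling",
    "Reporting deliverables", "Retest provisions", "Insurance and liability",
    "Signatures",
]

def missing_sections(document_sections: list) -> list:
    """Return checklist sections absent from a draft ROE's headings."""
    present = {s.strip().lower() for s in document_sections}
    return [s for s in REQUIRED_SECTIONS if s.lower() not in present]

draft = ["Engagement overview", "Scope definition", "Authorized techniques"]
print(missing_sections(draft))  # everything else on the checklist
```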
Common ROE mistakes
Having reviewed hundreds of ROE documents across engagements, we see the same mistakes appear most frequently and cause the most problems.
Scope defined too broadly
"All company systems and networks" is not a scope definition. It is an invitation for confusion, scope creep, and legal exposure. Every system and network segment should be explicitly listed. If the scope is intentionally broad, it still needs boundaries: which subsidiaries, which geographies, which cloud accounts, which employee groups.
No emergency stop procedure
Some ROE documents define the scope and techniques but include no mechanism for halting the engagement if something goes wrong. Production systems can crash. Real incidents can coincide with the engagement. An employee can have a medical emergency during a social engineering scenario. Without a documented emergency stop, critical minutes are wasted figuring out how to pause the engagement.
No deconfliction process
If the blue team detects the red team and nobody has defined how to handle it, one of two things happens: the SOC wastes days investigating and responding to the red team as if it were a real breach, or the trusted agent panics and reveals the engagement prematurely, invalidating the rest of the test. Neither outcome is acceptable.
Missing or insufficient legal authorization
The authorization letter is signed by someone without authority over all in-scope systems. The letter does not mention physical access but the engagement includes tailgating. The letter authorizes testing of the parent company but not the subsidiary that shares the same network. Any gap between what the letter authorizes and what the red team actually does creates criminal liability exposure for the operators.
Undefined success criteria
Without success criteria, the engagement devolves into an open-ended exercise where neither the client nor the red team can objectively evaluate the outcome. Did the engagement succeed? Was the client's security adequate? Without predefined criteria, these questions become subjective debates rather than data-driven assessments.
No data handling policy
During a red team engagement, operators will encounter passwords, internal documents, personal information, and potentially regulated data. If the ROE does not specify how this data is handled, stored, and destroyed, the client has no assurance that sensitive information is being managed responsibly. This is especially critical in regulated industries subject to GDPR, HIPAA, or PCI DSS.
Forgetting cloud provider policies
Red teams that target cloud infrastructure without reviewing the provider's testing policies risk having the client's account suspended or flagged. AWS, Azure, and GCP all have specific rules about what testing is permitted. Violating these policies can result in account termination, which is significantly worse than any finding the red team was trying to discover.
At Lorikeet Security, every red team engagement starts with a detailed ROE review conducted jointly with the client's legal, IT, and executive teams. We do not begin testing until every stakeholder has reviewed the document, raised their concerns, and signed off. This process typically takes one to two weeks and is not optional. The ROE is the foundation that the entire engagement stands on. Cutting corners on the rules of engagement to save time is the single fastest way to turn a productive adversary simulation into a liability.
Planning a Red Team Engagement?
Lorikeet Security's red team and penetration testing services include comprehensive ROE development, legal review coordination, deconfliction planning, and post-engagement debriefs. We partner with your team from scoping through remediation.