Network segmentation is not technically required by PCI DSS. But without it, your entire network is in scope for assessment. Every server, every workstation, every network device, and every application becomes part of the cardholder data environment. The assessment cost alone makes segmentation essential, and the security benefits make it foundational.
We conduct PCI DSS segmentation testing as part of every PCI engagement, and the failure rate on first assessment is disturbingly high. Organizations believe their segmentation is effective because they have firewalls between networks. But firewalls with overly permissive rules, shared services that bridge zones, and management networks that span boundaries all invalidate segmentation. Here is how to architect segmentation that actually passes testing.
Why Segmentation Matters for PCI DSS
The primary purpose of network segmentation in PCI DSS is scope reduction. By isolating the cardholder data environment (CDE) from the rest of your network, you reduce the number of systems that must comply with PCI DSS requirements. This translates directly to reduced assessment effort, reduced compliance cost, and reduced attack surface.
Consider the difference: without segmentation, a company with 500 servers must assess all 500 for PCI DSS compliance. With effective segmentation that isolates cardholder data processing to 20 servers, only those 20 servers (plus the segmentation controls themselves) are in scope. The assessment cost difference can be hundreds of thousands of dollars annually.
Critical distinction: Segmentation does not remove your obligation to protect cardholder data. It reduces the number of systems that must be assessed for compliance. If segmentation fails, meaning an attacker or tester can traverse from an out-of-scope network into the CDE, then every system on the connected network comes back into scope. Failed segmentation is one of the most expensive findings in a PCI assessment.
CDE Architecture Patterns
There are several proven architecture patterns for CDE isolation. The right choice depends on your infrastructure, payment processing model, and operational requirements.
Pattern 1: Dedicated CDE VLAN with firewall isolation
The most common pattern for on-premises environments. CDE systems reside on dedicated VLANs with stateful firewall rules controlling all ingress and egress traffic. No traffic from out-of-scope networks is permitted to reach CDE systems except through defined, documented, and justified paths. This pattern is well-understood by QSAs and straightforward to validate.
Pattern 2: Cloud-native segmentation with VPC isolation
For cloud environments, the CDE resides in a dedicated VPC (AWS or GCP) or VNet (Azure) with security groups and network ACLs enforcing isolation. No peering connections exist between the CDE VPC and non-CDE VPCs unless explicitly justified and controlled. This pattern maps well to cloud-native architectures but requires careful management of IAM roles that could bridge the segmentation boundary.
Pattern 3: Microsegmentation with zero-trust controls
Microsegmentation applies segmentation at the workload level rather than the network level. Each CDE workload has its own security policy that restricts communication to only the systems and ports necessary for its function. Tools like Illumio, Guardicore, or cloud-native security groups implement this pattern. It provides stronger isolation than network-level segmentation but is more complex to implement and maintain.
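The per-workload model can be sketched as a default-deny allowlist: a flow is permitted only if it appears in an explicit policy. The workload names, ports, and policy entries below are illustrative, not taken from any specific microsegmentation product.

```python
# Minimal sketch of per-workload allowlist evaluation, the core idea behind
# microsegmentation. Workload names and ports are invented for illustration.

# Each permitted flow is an explicit (source, destination, port) tuple.
# Anything not listed is denied by default.
POLICY = {
    ("web-frontend", "payment-api", 8443),
    ("payment-api", "card-vault-db", 5432),
    ("payment-api", "hsm-proxy", 9004),
}

def is_allowed(src: str, dst: str, port: int) -> bool:
    """Default-deny: a flow is permitted only if explicitly allowlisted."""
    return (src, dst, port) in POLICY

# A permitted flow, and a lateral-movement attempt the policy blocks:
print(is_allowed("web-frontend", "payment-api", 8443))    # True
print(is_allowed("web-frontend", "card-vault-db", 5432))  # False
```

The operational cost noted above comes from maintaining this policy table for every workload as applications change, which is why microsegmentation tooling focuses on flow discovery and policy lifecycle management.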
Pattern 4: Third-party processor with tokenization
The most effective scope reduction strategy is to never touch cardholder data at all. By using a PCI-compliant payment processor with tokenization, cardholder data never enters your environment. Your systems only handle tokens that have no value if compromised. This eliminates the CDE entirely and typically reduces your assessment to an SAQ A or SAQ A-EP, depending on your payment page architecture.
| Pattern | Best For | Complexity | Scope Reduction |
|---|---|---|---|
| Dedicated VLAN | On-premises data centers with traditional infrastructure | Moderate | High, but CDE systems remain in scope |
| Cloud VPC Isolation | Cloud-native environments on AWS, Azure, or GCP | Moderate | High, with careful IAM management |
| Microsegmentation | Complex environments with mixed workloads | High | Very high, granular per-workload isolation |
| Tokenization | E-commerce, SaaS with payment features | Low to Moderate | Maximum, potentially eliminates CDE |
Common Segmentation Failures
These are the segmentation failures we find most frequently during segmentation penetration testing. Each one invalidates the segmentation and brings out-of-scope systems back into PCI DSS scope.
Overly permissive firewall rules
The most common failure. Organizations create segmentation boundaries with firewalls but then add rules that allow broad access. "Allow all from management network to CDE" effectively bridges the segmentation. Every firewall rule that permits traffic from an out-of-scope network to the CDE must be documented with a business justification and restricted to specific source IPs, destination IPs, and ports. Any-any rules between zones invalidate segmentation.
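A rule review for these problems can be automated against an exported rule set. The sketch below is an assumption about what such an export might look like (field names and sample rules are invented, not a specific vendor's format); it flags any-port rules, broad source networks, and missing justifications.

```python
import ipaddress

# Illustrative audit of exported firewall rules for the over-permissiveness
# described above. Field names and example rules are assumptions, not a
# specific vendor's export format.
RULES = [
    {"src": "10.20.0.5/32", "dst": "10.99.0.10/32", "port": "443",
     "justification": "Jump host to CDE web console"},
    {"src": "10.20.0.0/16", "dst": "10.99.0.0/24", "port": "any",
     "justification": ""},  # "allow all from management network to CDE"
]

def is_overly_permissive(rule: dict) -> bool:
    """Flag rules that would invalidate segmentation: any-port access,
    broad source networks, or a missing business justification."""
    broad_src = ipaddress.ip_network(rule["src"]).prefixlen < 24
    return rule["port"] == "any" or broad_src or not rule["justification"]

violations = [r for r in RULES if is_overly_permissive(r)]
print(len(violations))  # 1: the management-network any rule
```

The /24 threshold is an arbitrary example; the real standard is whatever your documented justification supports, with specific hosts preferred over networks.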
Shared services that bridge zones
DNS servers, NTP servers, SIEM collectors, backup servers, and Active Directory domain controllers frequently span segmentation boundaries. If a shared DNS server sits in the corporate network and the CDE systems use it, that DNS server is now in scope because it provides services to the CDE. The same applies to any shared service. Either deploy dedicated instances within the CDE or ensure the shared service is assessed as an in-scope connected system.
Management network bridges
Systems administrators often use the same management workstation to administer both CDE and non-CDE systems. If that workstation can reach both networks, it bridges the segmentation boundary. Jump hosts, bastion servers, and privileged access management (PAM) solutions must be segmented so that CDE management access does not transit non-CDE networks.
VPN and remote access misconfigurations
Remote access VPNs that place users onto the CDE network segment rather than routing them through a controlled jump host effectively extend the CDE to every remote device. This is especially problematic when the same VPN provides access to both CDE and corporate resources.
Cloud IAM cross-account access
In cloud environments, IAM roles that can assume roles across account boundaries can bridge segmentation even when network-level controls are correct. A developer with an IAM role in a non-CDE account that can assume a role in the CDE account has effectively traversed the segmentation boundary at the identity layer. Cloud segmentation must address both network and identity boundaries.
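One way to surface this identity-layer bridge is to scan exported role trust policies for principals outside the CDE account. The account IDs and role document below are invented for illustration; the statement structure follows the standard AWS IAM policy JSON format.

```python
# Sketch of an identity-layer check: scan an IAM role trust policy (exported
# as JSON) for principals outside the CDE account. Account IDs and the role
# ARN are made up for illustration.
CDE_ACCOUNT = "111111111111"

trust_policy = {
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Principal": {"AWS": "arn:aws:iam::222222222222:role/dev-deploy"},
        }
    ]
}

def cross_account_principals(policy: dict, cde_account: str) -> list:
    """Return principal ARNs from other accounts that may assume this role."""
    findings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow" or stmt.get("Action") != "sts:AssumeRole":
            continue
        principal = stmt.get("Principal", {}).get("AWS", "")
        for arn in ([principal] if isinstance(principal, str) else principal):
            # The account ID is the fifth colon-separated field of an IAM ARN.
            if arn.split(":")[4] != cde_account:
                findings.append(arn)
    return findings

print(cross_account_principals(trust_policy, CDE_ACCOUNT))
```

Each finding is a path across the segmentation boundary that exists regardless of how tight the network controls are, which is why the test methodology below includes an identity layer.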
Segmentation Testing Methodology
PCI DSS Requirement 11.4.5 mandates segmentation testing to verify that controls are operational and effective. The testing must confirm that segmentation methods isolate the CDE from all out-of-scope systems and networks. Here is what a proper segmentation test includes.
Network-layer testing: From every out-of-scope network segment, attempt to reach every CDE system on every port. This includes both TCP and UDP protocols. Any successful connection from an out-of-scope network to a CDE system on any port indicates a segmentation failure.
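The core of the network-layer test is a connect attempt per port, repeated across all 65,535 TCP ports (and UDP) from each out-of-scope segment. The sketch below shows the per-port TCP probe, demonstrated against a local listener standing in for a CDE service; a real test runs this sweep from each out-of-scope network toward each CDE host.

```python
import socket

# Minimal sketch of the per-port network-layer probe. Any successful
# connection from an out-of-scope segment to a CDE system is a
# segmentation failure.

def can_reach(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demonstrate against a local listener standing in for a CDE service.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # the OS assigns a free port
listener.listen(1)
open_port = listener.getsockname()[1]

print(can_reach("127.0.0.1", open_port))  # True -> would be a finding
listener.close()
```

UDP requires a different probe (no handshake, so "open" must be inferred from responses or the absence of ICMP port-unreachable), which is one reason UDP results need careful interpretation in segmentation reports.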
Reverse direction testing: From within the CDE, attempt to reach out-of-scope systems. While ingress to the CDE is the primary concern, unrestricted egress from the CDE can facilitate data exfiltration and represents a security risk that should be documented.
Service-layer testing: Beyond port-level connectivity, test whether shared services (DNS, AD, SNMP) allow information or access to traverse between zones. A DNS server that resolves internal CDE hostnames for out-of-scope requestors is a segmentation concern.
Identity-layer testing: For cloud environments, test whether IAM roles, service accounts, or federation configurations allow cross-boundary access. This layer is frequently missed in network-only segmentation tests.
| Testing Frequency | Merchants | Service Providers |
|---|---|---|
| Routine testing | At least annually | At least every six months |
| After segmentation changes | Required | Required |
| After infrastructure changes | Required if changes affect segmentation | Required if changes affect segmentation |
Cloud-Native Segmentation Strategies
Cloud environments require a different approach to segmentation because the network boundaries are virtual and the control plane introduces an additional attack surface.
AWS segmentation
Use a dedicated AWS account for the CDE (AWS Organizations provides account-level isolation). Within the CDE account, use a dedicated VPC with no VPC peering to non-CDE accounts. Security groups and network ACLs enforce port-level controls. AWS PrivateLink provides private connectivity to shared services without VPC peering. AWS Transit Gateway, if used, must have route tables that prevent CDE traffic from routing through non-CDE networks.
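The Transit Gateway route check lends itself to an offline review of exported route tables (for example, from `aws ec2 describe-route-tables` output). The attachment IDs and CIDRs below are invented for illustration; the check simply confirms no CDE route's next hop is an attachment outside the CDE set.

```python
# Offline sketch of the Transit Gateway route review described above: given
# routes exported from the CDE route table, verify no route sends traffic
# through a non-CDE attachment. Attachment IDs and CIDRs are invented.
CDE_ATTACHMENTS = {"tgw-attach-cde01"}

cde_routes = [
    {"destination": "10.50.0.0/16", "target": "tgw-attach-cde01"},
    {"destination": "0.0.0.0/0",    "target": "tgw-attach-corp01"},  # finding
]

def routes_via_non_cde(routes: list, cde_attachments: set) -> list:
    """Return routes whose next hop is a TGW attachment outside the CDE set."""
    return [r for r in routes
            if r["target"].startswith("tgw-attach-")
            and r["target"] not in cde_attachments]

print(routes_via_non_cde(cde_routes, CDE_ATTACHMENTS))
```

Here the default route through a corporate attachment is the kind of finding that quietly routes CDE traffic across non-CDE networks even when security groups look correct.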
Azure segmentation
Use a dedicated Azure subscription for the CDE. Virtual networks (VNets) with network security groups (NSGs) enforce segmentation. Azure Private Link and service endpoints keep traffic on the Microsoft backbone. Azure Firewall or third-party NVAs provide stateful inspection between zones. Management Group policies can enforce segmentation at scale.
GCP segmentation
Use a dedicated GCP project for the CDE. VPC firewall rules and hierarchical firewall policies enforce segmentation. Shared VPC configurations require careful review because they can inadvertently bridge segmentation boundaries. VPC Service Controls provide an additional layer of isolation for GCP services.
Cloud segmentation pitfall: Network-level segmentation in the cloud is necessary but not sufficient. If a developer has IAM permissions to deploy resources in the CDE account from their non-CDE workstation, the identity layer bridges the segmentation. Cloud segmentation must address network isolation, IAM boundaries, resource policies, and service-level controls.
Documenting Your Segmentation for QSA Review
Your QSA needs to understand and validate your segmentation architecture. Good documentation makes the assessment faster and reduces the risk of misunderstandings that lead to scope disagreements.
Network diagrams: Provide diagrams showing all CDE components, segmentation boundaries, firewall placement, and traffic flows between zones. These diagrams must be current (updated within the last 12 months and after any significant changes).
Firewall rule justification: Every firewall rule that permits traffic to or from the CDE must have a documented business justification. Rules should specify source, destination, port, protocol, and the business reason for the connection. Rules without justification are findings.
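A pre-assessment completeness pass over the rule documentation can catch these findings before the QSA does. The field names below follow the list in this section (source, destination, port, protocol, business justification); the sample rules are invented.

```python
# Sketch of a pre-assessment check that every CDE firewall rule carries the
# documentation fields a QSA will ask for. Sample rules are invented.
REQUIRED_FIELDS = ("source", "destination", "port", "protocol", "justification")

rules = [
    {"source": "10.20.0.5/32", "destination": "10.99.0.10/32",
     "port": "443", "protocol": "tcp",
     "justification": "Jump host to CDE admin console"},
    {"source": "10.20.0.6/32", "destination": "10.99.0.11/32",
     "port": "5432", "protocol": "tcp", "justification": ""},
]

def undocumented(rules: list) -> list:
    """Rules missing any required field, or with an empty justification,
    are assessment findings."""
    return [r for r in rules
            if any(not r.get(f) for f in REQUIRED_FIELDS)]

print(len(undocumented(rules)))  # 1: the rule with no justification
```

Running this against every rule export before the assessment turns "rules without justification are findings" into a checklist item rather than a surprise.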
Segmentation test results: The results of your most recent segmentation test, showing which out-of-scope networks were tested, which CDE systems were targeted, and the results of each test. Any failures must have documented remediation and retest evidence.
Connected systems inventory: A list of all systems that connect to the CDE but are not part of it (connected systems). These systems are in scope for relevant PCI DSS requirements even though they do not store, process, or transmit cardholder data. Common examples include jump hosts, logging servers, and monitoring systems.
Need segmentation testing for PCI DSS?
We validate segmentation controls from every out-of-scope network segment, testing network, service, and identity layers. Reports are structured for QSA review with clear pass/fail results.