Why CC7.x Is Where Most SOC 2 Audits Get Uncomfortable
If you have been through a SOC 2 audit before, you already know the pattern: access controls feel manageable, change management is tedious but straightforward, and then the auditor asks about your monitoring program. That is when things get quiet.
The CC7.x series -- formally titled "System Operations" under the Common Criteria within the Trust Services Criteria -- covers how your organization detects, evaluates, and responds to system anomalies, security events, and incidents. Unlike controls that can be demonstrated with a single screenshot or policy document, continuous monitoring requires ongoing evidence that your detection capabilities actually work.
The problem most organizations face is not a lack of tools. It is a lack of structure. You might have CloudWatch alarms, a Datadog dashboard, and PagerDuty rotations, but if you cannot show an auditor the thread from detection to evaluation to response to remediation, those tools are just expensive noise generators.
The core question auditors are asking with CC7.x: "If something bad happened in your environment last Tuesday at 2 AM, would you have known about it, and can you prove it?"
Breaking Down CC7.1 Through CC7.5: What Each Control Actually Requires
Before building your monitoring program, you need to understand what each CC7 control is evaluating. The AICPA's Trust Services Criteria descriptions are deliberately broad, which gives you flexibility but also makes it easy to under-deliver. Here is what each control means in practice.
CC7.1 -- Detection and Monitoring Mechanisms
CC7.1 requires that you implement mechanisms to detect changes to configurations, new vulnerabilities, and suspicious activity across your infrastructure. This is the foundation control -- without it, CC7.2 through CC7.5 have nothing to work with.
In practical terms, CC7.1 means:
- Centralized log collection from servers, applications, databases, network devices, and cloud services
- Configuration monitoring that detects unauthorized changes to production systems
- Vulnerability scanning on a defined schedule (at least monthly, and ideally weekly)
- Alerting thresholds tied to your risk assessment and security policies
- Network monitoring for anomalous traffic patterns or unauthorized connections
Auditors typically ask for: log aggregation architecture diagrams, a list of alert rules with thresholds, evidence of vulnerability scan results over the audit period, and proof that monitoring covers all in-scope systems.
CC7.2 -- Monitoring System Components for Anomalies
Where CC7.1 is about having the detection mechanisms in place, CC7.2 focuses on actively monitoring those mechanisms to identify events that could indicate malicious activity, natural disasters, or operational errors.
This is where many organizations stumble. Having Datadog installed is CC7.1. Actually reviewing the alerts it generates and having a documented process for who reviews them and when -- that is CC7.2.
Evidence auditors expect for CC7.2:
- Alert review logs showing who acknowledged alerts and when
- On-call schedules and escalation procedures
- Documentation of alert tuning -- false positive reduction over time
- Proof that monitoring covers after-hours and weekends
CC7.3 -- Evaluating Security Events
CC7.3 requires a triage and classification process for detected events. Not every alert is an incident, and your auditor wants to see that you have a defined methodology for determining severity, impact, and whether an event escalates to an incident.
You need a documented classification framework. Most organizations use a tiered approach:
| Severity Level | Definition | Response Time | Example |
|---|---|---|---|
| Critical (P1) | Active data breach or system compromise affecting customer data | 15 minutes | Unauthorized database export detected |
| High (P2) | Potential security breach requiring immediate investigation | 1 hour | Multiple failed login attempts from unusual geography |
| Medium (P3) | Security anomaly requiring investigation within business hours | 4 hours | Configuration change outside change window |
| Low (P4) | Informational event to be reviewed during normal operations | 24 hours | New vulnerability detected on non-production system |
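Some teams encode this matrix so triage decisions are consistent and produce a log entry by default. A minimal Python sketch of that idea -- the severity values mirror the table above, but the triage questions and branching logic are illustrative assumptions, not anything prescribed by the Trust Services Criteria:

```python
# Severity matrix mirroring the table above; response times in minutes.
SEVERITY_MATRIX = {
    "P1": {"label": "Critical", "response_minutes": 15},
    "P2": {"label": "High", "response_minutes": 60},
    "P3": {"label": "Medium", "response_minutes": 240},
    "P4": {"label": "Low", "response_minutes": 1440},
}

def classify_event(active_compromise: bool,
                   customer_data_affected: bool,
                   needs_immediate_investigation: bool) -> str:
    """Map triage answers onto a severity tier (illustrative logic only)."""
    if active_compromise and customer_data_affected:
        return "P1"  # e.g. unauthorized database export detected
    if active_compromise or needs_immediate_investigation:
        return "P2"  # e.g. failed logins from unusual geography
    if customer_data_affected:
        return "P3"  # investigate within business hours
    return "P4"      # informational; review during normal operations
```

The point is less the code than the artifact: every classification decision becomes a recorded, repeatable answer to the auditor's "how did you decide this was not an incident?"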
CC7.4 -- Incident Response
CC7.4 requires that confirmed incidents are responded to using a defined incident response plan. This control bridges monitoring into action. Your incident response plan must be both documented and tested, and the auditor will ask for evidence of each.
Critical elements auditors verify for CC7.4:
- A written incident response plan that defines roles, responsibilities, communication procedures, and escalation paths
- Evidence of incident response plan testing (tabletop exercises or simulations) at least annually
- Incident tickets or records from actual incidents during the audit period, showing the response followed the documented plan
- Post-incident review documentation showing root cause analysis and corrective actions
- Communication templates for notifying affected parties, management, and if applicable, customers
CC7.5 -- Recovery and Lessons Learned
CC7.5 closes the loop. After an incident is contained and resolved, you must have a process for identifying root causes, documenting lessons learned, and implementing changes to prevent recurrence. This feeds back into CC7.1 by potentially adding new monitoring rules based on what you learned.
Auditors look for post-incident reports that include: timeline of events, root cause analysis, remediation actions taken, and any control improvements implemented as a result.
Building a Continuous Monitoring Program That Auditors Actually Accept
Understanding the controls is one thing. Building a program that produces clean audit evidence every day without manual heroics is another. Here is the architecture we recommend to clients preparing for their SOC 2 readiness assessment.
Layer 1: Log Aggregation and Centralization
Every SOC 2 continuous monitoring program starts with centralized logging. Your auditor needs to see that logs from all in-scope systems flow into a central location where they are retained, searchable, and protected from tampering.
What must be logged at minimum:
- Authentication events -- successful logins, failed attempts, MFA challenges, password resets, session creation and destruction
- Authorization events -- privilege escalation, role changes, access to sensitive resources, user access modifications
- System events -- server starts/stops, configuration changes, software installations, patch applications
- Network events -- firewall rule changes, VPN connections, unusual outbound traffic, DNS queries to known malicious domains
- Application events -- API errors, data export operations, administrative actions, failed input validation
- Database events -- schema changes, bulk data operations, privilege grants, direct query access
Retention matters: a SOC 2 Type II audit typically covers a period of 3 to 12 months. Your log retention must exceed your audit period. We recommend 12 months of hot storage and 24 months of cold/archive storage. Auditors will ask about your retention policy and verify it matches reality.
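Proving the coverage claim above reduces to a gap check: for each in-scope system, compare the event categories it should ship against what actually reaches your aggregator. A sketch with hypothetical system names and category mappings (your real inventory would come from your asset list):

```python
def coverage_gaps(required: dict[str, set[str]],
                  shipping: dict[str, set[str]]) -> dict[str, set[str]]:
    """Return, per in-scope system, the log categories not yet centralized."""
    return {system: missing
            for system, cats in required.items()
            if (missing := cats - shipping.get(system, set()))}

# Hypothetical inventory: what each system should log vs. what it ships.
required = {
    "api-server": {"authentication", "application", "system"},
    "postgres":   {"authentication", "database"},
}
shipping = {
    "api-server": {"authentication", "application", "system"},
    "postgres":   {"authentication"},  # database audit log not wired up yet
}
```

Run against your asset inventory, an empty result is the evidence you want; a non-empty one is your remediation list before the audit period starts.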
Layer 2: Alerting and Detection Rules
Raw logs without alerting rules are just expensive storage. Your monitoring program needs a defined set of detection rules that map to actual threats relevant to your environment. The key is not having thousands of rules -- it is having well-tuned rules that generate actionable alerts.
Start with these baseline detection categories:
- Brute force detection -- multiple failed authentication attempts within a time window
- Impossible travel -- login from geographically distant locations within an impossible timeframe
- Privilege escalation -- user granted admin rights or accessing resources outside their normal scope
- Configuration drift -- production system configurations changed outside approved change windows
- Data exfiltration signals -- unusually large data transfers, bulk API calls, or database exports
- Vulnerability threshold alerts -- new critical or high severity vulnerabilities detected on production systems
- Availability monitoring -- system downtime, performance degradation, or resource exhaustion
Each alert rule should be documented with: the detection logic, the threshold, the severity classification, the expected response procedure, and the owner responsible for tuning it.
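As a concrete example of that documentation pattern, here is one rule, brute force detection, sketched as code with the metadata fields listed above kept alongside the logic. The threshold, window, and field names are assumptions to be tuned against your own false positive rate, not recommended values:

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

# Rule metadata: the five documentation fields listed above, next to the logic.
RULE = {
    "name": "brute-force-login",
    "detection_logic": "N failed logins per account in a rolling window",
    "threshold": 5,                            # failures
    "window": timedelta(minutes=10),
    "severity": "P2",
    "response": "lock account, page on-call",  # expected response procedure
    "owner": "security-eng",                   # responsible for tuning
}

class BruteForceDetector:
    def __init__(self, threshold: int = RULE["threshold"],
                 window: timedelta = RULE["window"]):
        self.threshold, self.window = threshold, window
        self._failures: dict[str, deque] = defaultdict(deque)

    def record_failure(self, account: str, ts: datetime) -> bool:
        """Record one failed login; return True when the rule fires."""
        q = self._failures[account]
        q.append(ts)
        while q and ts - q[0] > self.window:  # age out events past the window
            q.popleft()
        return len(q) >= self.threshold
```

In practice this logic lives in your SIEM's rule language rather than application code, but the shape is the same: explicit threshold, explicit window, and an owner on record for tuning both.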
Layer 3: Response Workflow and Escalation
When an alert fires, there must be a defined path from detection to resolution. This is where CC7.2, CC7.3, and CC7.4 converge. Your workflow should look like:
- Alert triggers and is routed to the on-call responder via your notification system (PagerDuty, Opsgenie, or similar)
- Responder acknowledges the alert within your defined SLA (this acknowledgment is audit evidence)
- Triage and classification -- responder evaluates the alert against your severity matrix and determines if it is a false positive, an informational event, or a potential incident
- Escalation if needed -- incidents above a defined severity threshold trigger your incident response plan, notifying the incident commander and relevant stakeholders
- Containment and response -- follow documented runbooks for the specific incident type
- Post-incident review -- document findings, root cause, and any monitoring improvements
The critical audit evidence here is the trail of timestamps. Auditors will sample alerts from your audit period and trace the entire lifecycle. If an alert fired and nobody acknowledged it for three days, that is a finding.
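You can run the same sampling check yourself before the auditor does. A sketch, assuming a hypothetical alert record with `fired_at`, `acknowledged_at`, and `triage_outcome` fields -- these names are illustrative, not any particular tool's schema:

```python
from datetime import datetime, timedelta

ACK_SLA = timedelta(minutes=15)  # assumed acknowledgment SLA

def lifecycle_findings(alert: dict) -> list[str]:
    """Flag gaps in the detection-to-triage timestamp trail for one alert."""
    findings = []
    acked = alert.get("acknowledged_at")
    if acked is None:
        findings.append("never acknowledged")
    elif acked - alert["fired_at"] > ACK_SLA:
        findings.append("acknowledged outside SLA")
    if alert.get("triage_outcome") is None:
        findings.append("no triage classification recorded")
    return findings
```

An alert that fired and sat unacknowledged for three days surfaces here months before it becomes an audit finding, which is exactly when you can still fix the process.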
Common CC7.x Gaps That Create Audit Findings
After helping dozens of organizations through SOC 2 audits, we see the same monitoring gaps repeatedly. If any of these sound familiar, address them before your auditor arrives. For a broader look at what trips organizations up, see our guide on common SOC 2 audit findings.
- Monitoring gaps in scope -- Your monitoring covers your primary application but misses supporting infrastructure like CI/CD pipelines, internal admin tools, or staging environments that have access to production data
- No evidence of alert review -- Alerts fire into a Slack channel (or Teams channel) that nobody is accountable for reviewing. There is no ticket, no acknowledgment, no documentation
- Alert fatigue leading to ignored alerts -- You have 500 alerts per day, most are false positives, and the team has learned to ignore them. Auditors will ask about your false positive rate and tuning process
- Incident response plan never tested -- Your IRP exists as a PDF that was written two years ago. Nobody has run a tabletop exercise, and key personnel listed in the plan have left the company
- No post-incident reviews -- Incidents get resolved but nobody documents what happened, why, or what changed as a result. CC7.5 requires this loop to be closed
- Vulnerability scans without remediation tracking -- You run scans but there is no documented process for triaging findings, assigning owners, and tracking remediation to completion
- Log retention shorter than audit period -- Your logs roll off after 30 days but your audit period is 12 months. The auditor cannot verify that monitoring was operational for the full period
Pro tip: Before your audit, run a self-assessment. Pick 5 random dates from the past 6 months and verify that for each date, you can produce: evidence that monitoring was active, at least one alert that was acknowledged and triaged, and log data from all in-scope systems. If you cannot, you have a gap to close.
Tool Selection: What You Actually Need (and What You Do Not)
Organizations frequently over-invest in tooling while under-investing in process. Here is a realistic assessment of what you need at different company stages.
| Company Stage | Recommended Stack | Estimated Monthly Cost | What Auditors Accept |
|---|---|---|---|
| Startup (under 50 employees) | Cloud-native logging (CloudTrail, GCP Audit Logs) + Datadog or equivalent + PagerDuty + manual review process | $500 - $2,000 | Documented manual review process with evidence trail |
| Growth (50-200 employees) | SIEM (Panther, Sumo Logic, or Elastic) + vulnerability scanner + incident management platform + on-call rotation | $2,000 - $8,000 | Automated alerting with documented triage and response |
| Enterprise (200+ employees) | Enterprise SIEM (Splunk, Sentinel) + SOAR platform + dedicated security operations team + threat intelligence feeds | $10,000+ | SOC with 24/7 coverage, automated playbooks, and metrics |
For startups pursuing SOC 2, the key insight is that auditors care more about consistency and documentation than expensive tools. A well-documented manual review process with daily log review evidence will pass an audit. A $50,000 SIEM with no documented review process will not.
Compliance Automation Platforms
Tools like Vanta, Drata, and Secureframe can significantly reduce the manual burden of evidence collection for CC7.x controls. They integrate with your cloud providers, identity providers, and monitoring tools to automatically pull evidence and flag gaps. For a deeper look at how these platforms work, see our guide on compliance automation for SOC 2 and ISO 27001.
However, these platforms do not replace your monitoring program -- they complement it. You still need the actual detection rules, alert response processes, and incident management workflows. The automation platform just makes it easier to prove to your auditor that those things are working.
Evidence Collection: What to Save and How to Organize It
The difference between a clean audit and a painful one is often just evidence organization. For CC7.x specifically, you should be collecting and organizing the following evidence continuously -- not scrambling to assemble it two weeks before your audit. For comprehensive guidance on evidence across all trust services criteria, see our SOC 2 evidence collection guide.
CC7.1 Evidence Checklist
- Architecture diagram showing log flow from all in-scope systems to your centralized logging platform
- List of all active alert rules with descriptions, thresholds, and severity classifications
- Vulnerability scan reports from each scan cycle during the audit period
- Configuration monitoring policies and evidence of drift detection
- Screenshots or exports showing monitoring dashboard coverage
CC7.2 Evidence Checklist
- On-call schedules covering the entire audit period
- Alert acknowledgment records (timestamps showing response within SLA)
- Monthly or quarterly alert tuning reports showing false positive reduction
- Evidence of after-hours monitoring coverage
CC7.3 Evidence Checklist
- Event classification criteria document (your severity matrix)
- Sample of triage records showing classification decisions
- Escalation logs for events classified as potential incidents
CC7.4 Evidence Checklist
- Current incident response plan with version history
- Tabletop exercise or simulation results from the audit period
- Incident tickets showing response followed documented procedures
- Communication records from actual incidents
CC7.5 Evidence Checklist
- Post-incident review reports with root cause analysis
- Evidence of corrective actions implemented (new alert rules, policy updates, architecture changes)
- Tracking of remediation items to completion
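A low-effort way to keep all of this organized continuously is a fixed folder convention per control and month, created up front so evidence has an obvious home the day it is generated. A sketch -- the directory layout is our suggestion, not an audit requirement:

```python
from pathlib import Path

CONTROLS = ["CC7.1", "CC7.2", "CC7.3", "CC7.4", "CC7.5"]

def build_evidence_tree(root: Path, months: list[str]) -> list[Path]:
    """Create <root>/<control>/<YYYY-MM>/ folders; return the paths made."""
    paths = []
    for control in CONTROLS:
        for month in months:
            p = root / control / month
            p.mkdir(parents=True, exist_ok=True)
            paths.append(p)
    return paths
```

Whether the root lives in a shared drive or a compliance platform matters less than the convention itself: when the auditor asks for CC7.2 evidence from March, it is one folder, not a search.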
Mapping CC7.x to Penetration Testing Requirements
Continuous monitoring and penetration testing are complementary controls under SOC 2. Your monitoring program detects known threat patterns in real time, while penetration testing validates that your monitoring actually catches real attacks.
Smart organizations use penetration test results to improve their monitoring program:
- Red team findings become detection rules -- if a penetration tester exploited a path that your monitoring missed, that is a new detection rule to implement
- Test your detection coverage -- ask your penetration testing partner to document which of their activities triggered alerts and which did not. This gap analysis is gold for CC7.1 improvement
- Validate response procedures -- if your penetration test triggers an alert, did the on-call team respond correctly? This tests CC7.2 through CC7.4 in a real scenario
Recommendation: Schedule your penetration test at least 3 months before your SOC 2 audit window ends. This gives you time to implement any monitoring improvements identified during testing and demonstrate those improvements to your auditor.
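That detection-coverage gap analysis is a set difference at heart. A sketch, assuming your testing partner hands back a list of activity identifiers and you can export which of those activities triggered alerts -- both inputs and the identifiers are hypothetical:

```python
def detection_gaps(tester_activities: set[str],
                   alerted_activities: set[str]) -> set[str]:
    """Activities the tester performed that generated no alert: each one
    is a candidate new detection rule for CC7.1 improvement."""
    return tester_activities - alerted_activities

# Hypothetical gap analysis inputs from a test engagement.
performed = {"credential-stuffing", "s3-bucket-enum", "privilege-escalation"}
alerted = {"credential-stuffing"}
```

Each item in the resulting gap set maps directly to a new alert rule, and implementing those rules before the audit window closes is demonstrable CC7.5-style improvement.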
Building Your 90-Day Continuous Monitoring Roadmap
If you are starting from scratch or need to significantly improve your monitoring program before an upcoming audit, here is a realistic 90-day implementation plan.
Days 1-30: Foundation
- Inventory all in-scope systems and verify log collection is active for each one
- Implement centralized log aggregation if not already in place
- Define your severity classification matrix and document it in your security policy
- Set up on-call rotation with clear escalation procedures
- Implement baseline alert rules for authentication, authorization, and configuration changes
Days 31-60: Operationalization
- Establish daily log review process with documented evidence trail
- Implement vulnerability scanning on a weekly or monthly cadence
- Write or update your incident response plan with current personnel and contact information
- Create response runbooks for the top 5 most likely incident types
- Begin collecting and organizing evidence in your compliance platform or shared repository
Days 61-90: Validation and Tuning
- Conduct a tabletop exercise to test your incident response plan
- Review and tune alert rules based on the first 60 days of operation -- reduce false positives
- Run the self-assessment: pick random dates and verify you can produce complete evidence
- Schedule a readiness assessment to identify any remaining gaps before formal audit
- Document your monitoring program in a formal Continuous Monitoring Policy
The Difference Between Type I and Type II for Monitoring
Monitoring evidence requirements differ significantly between a SOC 2 Type I and a Type II report. Type I evaluates whether your controls are suitably designed at a point in time. Type II evaluates whether they operated effectively over a period.
For CC7.x specifically:
- Type I -- Auditor verifies that monitoring tools are configured, alert rules exist, processes are documented, and the incident response plan is in place. You do not need months of operational evidence.
- Type II -- Auditor samples evidence across the entire audit period (typically 3 to 12 months). They will pull alert response records from random dates, verify log retention covers the full period, and examine actual incident records. Your monitoring must have been operational and documented consistently throughout.
This distinction matters for planning. If your monitoring program is new, consider pursuing Type I first to validate your design, then transition to Type II after 6-12 months of consistent operation.
Need Help Building Your SOC 2 Monitoring Program?
Lorikeet Security helps organizations design continuous monitoring programs that satisfy CC7.x requirements and survive audit scrutiny, from readiness assessments to penetration testing that validates your detection capabilities.