
How to Build a Risk Register That Actually Gets Used

Lorikeet Security Team February 26, 2026 8 min read

Almost every startup that goes through SOC 2 or ISO 27001 compliance ends up with a risk register. And almost every one of those risk registers becomes a dead document within six months. It gets created during the initial compliance push, presented to the auditor, filed away, and never looked at again until the next audit cycle.

This is a problem. Not just for compliance, but for actual security. A risk register that nobody uses provides zero value to the organization. It does not inform decisions, it does not drive remediation, and it gives you a false sense of security by making it look like risks are being managed when they are not.

The fix is not a better template. It is a fundamentally different approach to how the register is built, maintained, and integrated into your workflows. Here is how to do it right.


Why Most Risk Registers Fail

Before building a better risk register, it helps to understand why the current approach fails so consistently. The root causes are predictable:

They are built for auditors, not for the team

When the primary audience for your risk register is an external auditor who looks at it once a year, the content optimizes for audit satisfaction rather than operational utility. Risks are described in generic terms ("unauthorized access to systems"), scores are assigned to look reasonable, and treatment plans are vague enough to avoid accountability. The result is a document that passes audit but provides no value to the people actually managing risk.

They live in the wrong place

A risk register stored in a spreadsheet on someone's Google Drive or buried in a compliance platform that only one person logs into is invisible to the rest of the organization. If the engineering team cannot see it during sprint planning, and the leadership team does not see it during strategic reviews, it does not exist in any meaningful sense.

The scoring is meaningless

"Likelihood: Medium. Impact: High." What does that actually tell you? Without clear definitions and calibrated scoring criteria, risk scores become arbitrary labels that different people interpret differently. When two people disagree on whether a risk is "medium" or "high," there is no way to resolve it because the terms are not defined.

There is no ownership

Risks without owners are risks without accountability. If every risk in your register is owned by "the security team" or "engineering," nobody is specifically responsible for monitoring, mitigating, or accepting each risk. Without clear ownership, treatment plans do not get executed.


What Goes Into a Useful Risk Register

A risk register that actually drives decisions needs specific fields that are defined clearly and maintained consistently. Here is the structure we recommend for startups:

| Field | Purpose | Example |
| --- | --- | --- |
| Risk ID | Unique identifier for tracking | R-2026-014 |
| Risk Description | Specific, actionable description of the risk | Compromised developer credentials used to push malicious code to production |
| Category | Groups risks for reporting and analysis | Supply Chain / Development Pipeline |
| Likelihood | Calibrated probability score (1-5) | 3 (Possible: 10-50% chance per year) |
| Impact | Calibrated business impact score (1-5) | 4 (Major: $100K-$500K impact) |
| Risk Score | Likelihood x Impact | 12 (High) |
| Existing Controls | What is already in place to mitigate | Code review required, branch protection, MFA on GitHub |
| Residual Risk | Risk level after existing controls | Medium (8) |
| Treatment | Accept, Mitigate, Transfer, or Avoid | Mitigate |
| Treatment Plan | Specific actions with deadlines | Implement signed commits by Q2; add SAST to CI pipeline by Q3 |
| Owner | Named individual accountable | Sarah Chen, Head of Engineering |
| Review Date | Next scheduled review | 2026-06-30 |

Notice the difference from the typical risk register. The risk description is specific enough that someone reading it for the first time understands exactly what the risk is. The treatment plan has concrete actions and deadlines. There is a named owner, not a team.
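
A register with these fields maps naturally onto a small data structure, which makes scoring and reporting mechanical rather than manual. Here is a minimal sketch in Python; the `RiskEntry` class and its field names are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One row of the risk register, mirroring the fields above."""
    risk_id: str
    description: str
    category: str
    likelihood: int          # calibrated 1-5 rating
    impact: int              # calibrated 1-5 rating
    existing_controls: str
    residual_likelihood: int # re-rated with existing controls in place
    residual_impact: int
    treatment: str           # Accept / Mitigate / Transfer / Avoid
    treatment_plan: str
    owner: str               # a named individual, not a team
    review_date: str         # ISO date of the next scheduled review

    @property
    def inherent_score(self) -> int:
        # Risk Score = Likelihood x Impact, per the table above
        return self.likelihood * self.impact

    @property
    def residual_score(self) -> int:
        return self.residual_likelihood * self.residual_impact

# The worked example from the table:
r = RiskEntry(
    risk_id="R-2026-014",
    description="Compromised developer credentials used to push "
                "malicious code to production",
    category="Supply Chain / Development Pipeline",
    likelihood=3, impact=4,
    existing_controls="Code review required, branch protection, MFA on GitHub",
    residual_likelihood=2, residual_impact=4,
    treatment="Mitigate",
    treatment_plan="Implement signed commits by Q2; add SAST to CI pipeline by Q3",
    owner="Sarah Chen, Head of Engineering",
    review_date="2026-06-30",
)
print(r.inherent_score, r.residual_score)  # 12 8
```

Keeping the register in a structured form like this (even exported from a spreadsheet) is what makes the review and reporting steps later in this post scriptable.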


A Scoring Methodology That Means Something

The single biggest improvement you can make to your risk register is calibrating your scoring criteria so that everyone interprets the numbers the same way. Here is a practical scoring framework:

Likelihood scale

1 (Rare): less than 2% chance per year
2 (Unlikely): 2-10% chance per year
3 (Possible): 10-50% chance per year
4 (Likely): 50-90% chance per year
5 (Almost Certain): greater than 90% chance per year

Impact scale

1 (Minimal): under $10K in business impact
2 (Minor): $10K-$50K impact
3 (Moderate): $50K-$100K impact
4 (Major): $100K-$500K impact
5 (Severe): over $500K impact

These dollar amounts should be calibrated to your organization's size. A $50K impact is "moderate" for a company doing $5M ARR but "severe" for a pre-revenue startup. Adjust the scale to match your business, but keep it consistent once defined.

Scoring tip: When assessing likelihood, base your estimates on actual data wherever possible. Penetration test findings, attack surface monitoring alerts, industry incident reports, and your own incident history all provide objective inputs. A risk scored as "likely" should have supporting evidence, not just a gut feeling.
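
With calibrated 1-5 scales, the score and its severity band can be computed rather than argued over. A short sketch; the band boundaries below are an assumption chosen to match the worked example earlier (12 is High, 8 is Medium), so tune them to your own risk appetite:

```python
def score(likelihood: int, impact: int) -> int:
    """Risk score is the product of the two calibrated 1-5 ratings."""
    assert 1 <= likelihood <= 5 and 1 <= impact <= 5
    return likelihood * impact

def band(risk_score: int) -> str:
    """Map a 1-25 score to a severity band.

    Boundaries are an assumption, not a standard: adjust them so the
    labels match how your organization actually prioritizes work.
    """
    if risk_score >= 16:
        return "Critical"
    if risk_score >= 10:
        return "High"
    if risk_score >= 5:
        return "Medium"
    return "Low"

print(band(score(3, 4)))  # High   (the example risk)
print(band(score(2, 4)))  # Medium (its residual score)
```

The point is not the arithmetic; it is that two people scoring the same risk against the same definitions should land in the same band.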


Populating Your Risk Register: Where to Start

The hardest part of building a risk register is identifying the initial set of risks. Here are the sources that produce the most useful inputs for startups:

Penetration test findings

Every finding from a penetration test represents a validated risk. These are not theoretical; someone actually exploited or verified them. Convert critical and high findings directly into risk register entries. Even remediated findings should be tracked as risks with existing controls in place.
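
Converting findings into entries can be semi-automated. A sketch of the idea; the finding fields (`severity`, `title`, `remediation`) and the severity-to-score mapping are assumptions, and the mapping in particular should be calibrated against your own likelihood and impact definitions:

```python
def finding_to_risk(finding: dict, risk_id: str) -> dict:
    """Draft a risk register entry from a penetration test finding.

    The severity-to-(likelihood, impact) mapping below is an assumed
    starting point, not a rule; review each draft entry by hand.
    """
    severity_scores = {"critical": (4, 5), "high": (3, 4)}  # assumption
    likelihood, impact = severity_scores[finding["severity"]]
    return {
        "id": risk_id,
        "description": finding["title"],
        "likelihood": likelihood,
        "impact": impact,
        # A remediated finding stays in the register: the fix becomes
        # an existing control rather than a reason to drop the risk.
        "existing_controls": finding.get("remediation", ""),
        "treatment": "Mitigate",
    }

entry = finding_to_risk(
    {"severity": "high",
     "title": "IDOR in invoice download endpoint",
     "remediation": "Object-level authorization check added"},
    risk_id="R-2026-021",
)
print(entry["likelihood"] * entry["impact"])  # 12
```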

Compliance framework mapping

SOC 2 trust service criteria and ISO 27001 Annex A controls map to specific risk categories. Walk through the framework requirements and ask "what could go wrong if this control fails?" for each one. This ensures coverage across standard risk domains.

Incident history

Every past incident, near-miss, and close call is a data point. Review your incident history and extract recurring themes. If you have had three phishing-related incidents in the past year, that risk deserves a prominent place in the register.

Vendor and dependency risks

For SaaS companies, third-party dependencies are often the largest source of risk. Map your critical vendors and consider what happens if each one experiences a breach, outage, or goes out of business. Third-party risk management findings should feed directly into the register.

Threat intelligence

What are threat actors currently targeting in your industry? If you are a fintech company and there is a wave of attacks targeting payment processing APIs, that should be reflected in your risk register. Keep the register connected to the current threat landscape, not just your internal environment.


Integrating the Risk Register Into Existing Workflows

A risk register that lives outside your daily workflows will be ignored. The key to making it useful is embedding it into the processes your team already follows:

Tie it to sprint planning

When engineering plans sprints, high-risk items from the register should inform priority decisions. If a risk is scored as "high" and the treatment plan includes a specific engineering task, that task should compete for sprint capacity alongside feature work. Make the risk register a standing input to your planning process.

Include it in leadership reviews

Monthly or quarterly leadership meetings should include a risk register summary. Not the full spreadsheet. A one-page view showing the top 5 risks, any risks that changed score since last review, treatment plan progress for the highest-priority items, and any new risks added. This keeps leadership engaged and ensures risk treatment gets the resources it needs.
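
That one-page view can be generated mechanically from the register instead of assembled by hand each quarter. A sketch, assuming each risk is a dict with `id` and `score` fields and that last review's scores were saved:

```python
def leadership_summary(risks: list, previous_scores: dict, top_n: int = 5) -> dict:
    """Build the one-page leadership view: top risks by current score,
    risks whose score changed since the last review, and new risks."""
    ranked = sorted(risks, key=lambda r: r["score"], reverse=True)
    return {
        "top": [(r["id"], r["score"]) for r in ranked[:top_n]],
        "changed": [r["id"] for r in risks
                    if r["id"] in previous_scores
                    and previous_scores[r["id"]] != r["score"]],
        "new": [r["id"] for r in risks if r["id"] not in previous_scores],
    }

current = [{"id": "R-2026-002", "score": 20},
           {"id": "R-2026-014", "score": 12},
           {"id": "R-2026-007", "score": 6}]
last_quarter = {"R-2026-002": 15, "R-2026-014": 12}

summary = leadership_summary(current, last_quarter, top_n=2)
```

A score that jumped since last quarter is often more interesting to leadership than a score that has always been high, which is why the "changed" list gets its own line on the page.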

Connect it to your ticketing system

Treatment plan items should exist as tickets in Jira, Linear, or whatever tool your team uses. Link the risk register entry to the ticket so that progress is visible and trackable. When a treatment plan task is completed, the risk register should be updated to reflect the new control and adjusted residual risk score.

Make it accessible

The risk register should be accessible to anyone who might need to reference it: engineering leads, product managers, the security team, and executives. Do not lock it in a compliance tool that requires a separate login. If you are using a spreadsheet, keep it in a shared workspace. If you are using a GRC platform, make sure the relevant views are shared with the right people.


Risk Treatment Decisions

Every risk in the register needs a treatment decision. There are four options, and each has specific implications:

Accept: document the risk and take no further action. Appropriate when the cost of mitigation exceeds the expected impact, and only when the acceptance is made by someone with the authority to do so.

Mitigate: reduce the likelihood or impact by implementing additional controls. This is the most common treatment, and the one that requires a concrete plan with owners and deadlines.

Transfer: shift the financial impact to a third party, typically through cyber insurance or contractual terms with a vendor. Note that transfer rarely moves reputational damage.

Avoid: eliminate the activity that creates the risk entirely, such as dropping a risky feature or declining to store a sensitive data type.

Auditor expectation: Auditors do not expect every risk to be mitigated to zero. They expect every risk to have a documented, rational treatment decision made by an appropriate authority. An accepted risk with clear documentation is better than a mitigated risk with no evidence that the mitigation was implemented.


Maintaining the Register Over Time

The register must be a living document. Here is the maintenance cadence that balances thoroughness with practicality:

Quarterly reviews

Every quarter, review the full register. For each risk, ask: has the likelihood changed based on new information? Has the impact changed based on business growth? Are existing controls still effective? Has the treatment plan progressed as expected? Update scores and notes accordingly.
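
The review-date field earns its keep here: pulling the risks that are due is a one-liner rather than a scan of the whole sheet. A minimal sketch, assuming each entry carries an ISO-format `review_date`:

```python
from datetime import date

def due_for_review(register: list, today: date) -> list:
    """Return IDs of risks whose scheduled review date has arrived or passed."""
    return [r["id"] for r in register
            if date.fromisoformat(r["review_date"]) <= today]

register = [{"id": "R-2026-014", "review_date": "2026-06-30"},
            {"id": "R-2026-003", "review_date": "2026-03-31"}]

print(due_for_review(register, date(2026, 4, 1)))  # ['R-2026-003']
```

Running something like this at the start of each quarterly review (or wiring it into a recurring reminder) is what keeps "quarterly" from quietly becoming "annually."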

Event-driven updates

Certain events should trigger immediate register updates: a security incident, a significant penetration test finding, a new compliance requirement, a major architectural change, a new vendor dependency, or a change in the threat landscape that affects your industry.

Annual recalibration

Once a year, step back and recalibrate the entire register. Are your scoring definitions still appropriate for your organization's current size and risk appetite? Are there risk categories that are missing? Are there risks that are no longer relevant and should be retired? This is also the time to align with your annual compliance cycle and ensure the register reflects the latest audit findings.


A Starter Template for Startups

Here are 10 risks that belong in nearly every SaaS startup's risk register. Use these as a starting point and customize based on your specific environment:

  1. Unauthorized access through compromised credentials (Likelihood: 4, Impact: 4)
  2. Data breach through application vulnerability (Likelihood: 3, Impact: 5)
  3. Service outage from cloud provider failure (Likelihood: 2, Impact: 4)
  4. Ransomware encryption of production systems (Likelihood: 3, Impact: 5)
  5. Supply chain compromise through third-party dependency (Likelihood: 3, Impact: 4)
  6. Data loss from backup failure (Likelihood: 2, Impact: 5)
  7. Insider threat from over-provisioned employee access (Likelihood: 2, Impact: 4)
  8. Regulatory non-compliance penalty (Likelihood: 2, Impact: 3)
  9. API abuse or data scraping (Likelihood: 4, Impact: 3)
  10. Key person dependency for critical systems (Likelihood: 3, Impact: 4)

For each of these, fill in the full register structure defined above: existing controls, residual risk, treatment decision, specific treatment plan, named owner, and review date.
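
Triage order falls out of the scores directly. A quick sketch that ranks the starter list by inherent score, so the first treatment plans you write are for the risks that matter most:

```python
# The 10 starter risks above as (description, likelihood, impact) tuples.
starter_risks = [
    ("Unauthorized access through compromised credentials", 4, 4),
    ("Data breach through application vulnerability", 3, 5),
    ("Service outage from cloud provider failure", 2, 4),
    ("Ransomware encryption of production systems", 3, 5),
    ("Supply chain compromise through third-party dependency", 3, 4),
    ("Data loss from backup failure", 2, 5),
    ("Insider threat from over-provisioned employee access", 2, 4),
    ("Regulatory non-compliance penalty", 2, 3),
    ("API abuse or data scraping", 4, 3),
    ("Key person dependency for critical systems", 3, 4),
]

# Sort descending by likelihood x impact; ties keep their listed order.
ranked = sorted(starter_risks, key=lambda r: r[1] * r[2], reverse=True)
for name, likelihood, impact in ranked:
    print(f"{likelihood * impact:>2}  {name}")
```

With the scores as listed, compromised credentials (16) comes out on top and the regulatory penalty (6) last; your own calibration will reshuffle the middle of the list.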

The bottom line: A risk register is only as good as the decisions it drives. If your register is not influencing how you allocate engineering resources, what you test, what you insure, and what you accept, it is not doing its job. Build it for your team, not your auditor, and it will satisfy both.

Need help identifying your real risks?

Penetration testing and attack surface management give you the validated data your risk register needs. We help startups build security programs grounded in real findings, not guesswork.


Lorikeet Security Team

Penetration Testing & Cybersecurity Consulting

We've completed 170+ security engagements across web apps, APIs, cloud infrastructure, and AI-generated codebases. Everything we publish here comes from patterns we see in real client work.
