
Threat Modeling for Developers: How to Find Security Flaws Before Writing a Single Line of Code

Lorikeet Security Team April 8, 2026 10 min read

TL;DR: Penetration tests and scanners find implementation bugs — SQL injection, XSS, misconfigurations. They cannot find design-level flaws: an authorization model that allows horizontal privilege escalation by design, a data flow that routes sensitive information through an untrusted third party, or an architecture that has no revocation mechanism for compromised API keys. Threat modeling catches these flaws during design — when fixing them costs hours instead of sprints. Every development team should threat model new features, architecture changes, and third-party integrations before writing code.

What Threat Modeling Actually Is

Threat modeling is a structured process for identifying what can go wrong in a system's design and deciding what to do about it. It answers four questions: What are we building? What can go wrong? What are we going to do about it? Did we do a good enough job?

The output is not a document that sits in a wiki — it is a list of concrete threats with agreed-upon mitigations that become requirements, backlog items, or architectural constraints. A threat model for a new payment integration might produce: "The webhook endpoint accepts unsigned payloads — an attacker can forge payment confirmations. Mitigation: require HMAC signature verification on all incoming webhooks. Owner: backend team. Priority: must-have before launch."
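The webhook mitigation above can be sketched in a few lines. This is a hypothetical Python example, not a specific payment provider's API: the secret, payload format, and header name are assumptions, but the pattern (recompute the HMAC over the raw body, compare in constant time) is the standard one.

```python
import hashlib
import hmac


def verify_webhook(secret: bytes, payload: bytes, signature_header: str) -> bool:
    """Recompute the HMAC-SHA256 of the raw request body and compare it
    against the signature the sender provided, in constant time."""
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels that leak the signature.
    return hmac.compare_digest(expected, signature_header)
```

Rejecting any webhook that fails this check closes the "forged payment confirmation" threat from the example above.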

At Lorikeet Security, we routinely find vulnerabilities during penetration tests that threat modeling would have caught during design. An API that returns full user objects instead of projections (information disclosure by design), a multi-tenant system where tenant isolation depends on application logic rather than database-level separation (one bug away from cross-tenant access), or an authentication flow that stores session state client-side without integrity protection. These are not implementation mistakes — they are design decisions that created vulnerabilities.
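The "full user objects instead of projections" flaw has an equally simple design-level fix: serialize through an explicit field allowlist so sensitive fields can never leak by accident. A minimal sketch (field names are illustrative, not from any particular codebase):

```python
# Fields the API contract actually needs; anything not listed here
# (password_hash, internal flags, billing data) never leaves the server.
PUBLIC_USER_FIELDS = ("id", "display_name", "avatar_url")


def user_projection(user: dict) -> dict:
    """Project a full user record down to its public representation."""
    return {k: user[k] for k in PUBLIC_USER_FIELDS if k in user}
```

The design property that matters: new columns added to the user record are private by default, because exposure requires an explicit change to the allowlist.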


When to Threat Model

Threat modeling is most valuable, and most cost-effective, at specific points in the development lifecycle: designing a new feature that handles sensitive data, changing the architecture or data flows, adding a third-party integration, or modifying authentication and authorization.

You do not need to threat model every bug fix or CSS change. The trigger is: "Does this change alter how data flows, who can access what, or where trust boundaries exist?" If yes, spend 30 minutes on a threat model. If no, ship it.


STRIDE: The Practical Framework

STRIDE is Microsoft's threat classification model. For each component in your system, you systematically consider six threat categories:

| Category | Threat | Example | Typical Mitigation |
| --- | --- | --- | --- |
| Spoofing | Impersonating a user or system | Forged webhook calls from a payment provider | Authentication, HMAC signatures, mutual TLS |
| Tampering | Unauthorized data modification | Modifying order totals in client-side state | Server-side validation, integrity checks, signed tokens |
| Repudiation | Denying actions without accountability | Admin deletes records with no audit trail | Audit logging, append-only logs, digital signatures |
| Information Disclosure | Unauthorized data exposure | API returns full user objects including password hashes | Data projections, field-level access control, encryption |
| Denial of Service | Disrupting availability | Unbounded GraphQL query crashes the database | Rate limiting, query complexity limits, circuit breakers |
| Elevation of Privilege | Gaining unauthorized access levels | Changing user role via mass assignment in API request | RBAC, input allowlisting, server-side role enforcement |

The power of STRIDE is its completeness as a checklist. When a developer looks at a new API endpoint and asks "Can someone spoof the caller? Can someone tamper with the input? Is there an audit trail?" — they catch threats that would otherwise surface as pentest findings months later.
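The elevation-of-privilege row above (role change via mass assignment) comes down to one design rule: never bind a request body directly onto a stored record. A hypothetical sketch of the allowlisted-update pattern, with illustrative field names:

```python
# Fields a user may change about themselves. "role" is deliberately absent;
# role changes go through a separate, admin-only code path.
UPDATABLE_FIELDS = {"display_name", "email"}


def apply_user_update(user: dict, request_body: dict) -> dict:
    """Merge only allowlisted fields from the request into the user record.
    A client-supplied {"role": "admin"} is dropped, not applied."""
    updates = {k: v for k, v in request_body.items() if k in UPDATABLE_FIELDS}
    return {**user, **updates}
```

As with the projection example, the safe behavior is the default: new fields are read-only until someone explicitly adds them to the allowlist.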


Data Flow Diagrams and Trust Boundaries

A threat model starts with a data flow diagram (DFD): a visual representation of how data moves through your system. It does not need to be formal UML. A whiteboard sketch with four element types is enough: external entities (users, browsers, third-party services), processes (your application components), data stores (databases, caches, queues), and the data flows connecting them.

Trust boundaries are the critical addition. Draw a dotted line everywhere the level of trust changes: between the browser and your API (user input crosses into your backend), between your API and a third-party service (your data crosses into their control), between your application and the database (authenticated application code interacts with the data layer). Every trust boundary is where threats concentrate — it is where spoofing, tampering, and information disclosure are most likely to occur.

Walk through each data flow that crosses a trust boundary and apply STRIDE. "User submits a form to our API" — can the user spoof another user's identity? (Check authentication.) Can they tamper with fields they should not control? (Check for mass assignment.) Does the API response disclose data from other users? (Check authorization on the response.) This systematic approach is what makes threat modeling reproducible rather than dependent on one person's intuition.
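The "authorization on the response" check from that walkthrough is worth making concrete, because it is the design flaw behind most horizontal privilege escalation findings: the endpoint checks that the caller is authenticated but not that they own the object. A hypothetical Python sketch (the data model and exception are assumptions for illustration):

```python
class Forbidden(Exception):
    """Raised when an authenticated caller requests an object they don't own."""


def get_order(orders: dict, order_id: str, requester_id: str) -> dict:
    """Authorize on the object, not just the endpoint: being logged in is
    not enough, the requester must own the order (blocks IDOR-style access)."""
    order = orders[order_id]
    if order["owner_id"] != requester_id:
        raise Forbidden(f"user {requester_id} does not own order {order_id}")
    return order
```

The design decision is that ownership is verified at the data-access layer for every read, rather than trusted from whatever ID the client supplied.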


Prioritizing Threats: Risk-Based Approach

Not every identified threat requires immediate mitigation. Prioritize based on two factors: likelihood (how easy is the attack and how motivated are attackers?) and impact (what is the worst-case outcome?). A simple High/Medium/Low matrix works for most teams: rate each factor High, Medium, or Low; threats scoring High on both dimensions must be fixed before launch, mixed scores go into the sprint backlog, and Low/Low threats can be accepted with a documented rationale.

The DREAD model (Damage, Reproducibility, Exploitability, Affected Users, Discoverability) provides more granular scoring when you need it, but for most sprint-based threat modeling the simple matrix is faster and produces equally actionable results.
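The simple matrix is easy to express in code if you want consistent scoring across teams. This is one possible bucketing, not a standard: the thresholds below are assumptions you would tune to your own risk appetite.

```python
LEVELS = {"low": 1, "medium": 2, "high": 3}


def priority(likelihood: str, impact: str) -> str:
    """Score = likelihood x impact, bucketed into the three decisions
    a sprint-level threat model produces."""
    score = LEVELS[likelihood] * LEVELS[impact]
    if score >= 6:          # high/high or high/medium
        return "fix before launch"
    if score >= 3:          # medium/medium or high/low
        return "fix this sprint"
    return "accept / backlog"
```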


Common Design-Level Flaws

These are the categories of findings that threat modeling catches and pentests cannot, because the code works exactly as designed but the design itself is insecure: authorization models that allow horizontal privilege escalation by design, APIs that return full internal objects instead of projections, multi-tenant isolation enforced only in application logic rather than at the database level, session state stored client-side without integrity protection, and architectures with no revocation mechanism for compromised API keys.


Lightweight Threat Modeling for Agile Teams

The biggest barrier to threat modeling adoption is perceived overhead. Full STRIDE analysis with formal DFDs and DREAD scoring can take days for a complex system. Agile teams need a 30-minute approach that fits into sprint planning:

  1. 5 minutes — Sketch the data flow. Whiteboard or shared doc. Draw the components involved in this feature, the data flows between them, and the trust boundaries. It does not need to be pretty.
  2. 10 minutes — Walk through STRIDE at each trust boundary. For each data flow crossing a boundary, ask: Can someone spoof, tamper, repudiate, disclose, deny service, or escalate? Write down anything that gets a "maybe" or "I don't know."
  3. 10 minutes — Prioritize and assign. For each identified threat, decide: fix before launch, fix in this sprint, or accept the risk. Create tickets for anything that needs fixing.
  4. 5 minutes — Document decisions. Record what you considered and decided — especially accepted risks. This becomes invaluable context for the next pentest and for auditors who ask "did you consider X?"
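The documentation step need not be heavyweight. One lightweight option (a sketch, not a prescribed format) is a structured record per threat, so accepted risks and owners are queryable later:

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class ThreatRecord:
    """One line of threat-model output: what was found, what was decided,
    and who owns the follow-up. Accepted risks get a rationale."""
    threat: str
    stride_category: str   # e.g. "Spoofing", "Tampering", ...
    decision: str          # "fix before launch" | "fix this sprint" | "accept"
    owner: str
    rationale: str = ""
    recorded: date = field(default_factory=date.today)
```

A list of these, checked into the repo next to the feature, is exactly the artifact that answers an auditor's "did you consider X?"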

The participants matter: include the developer implementing the feature, a security champion (a developer with security interest — not necessarily a security engineer), and optionally someone who understands the business context of the data being processed. Three people, 30 minutes, concrete outcomes.


Choosing a Framework

| Framework | Complexity | Focus | Best For |
| --- | --- | --- | --- |
| STRIDE | Low–Medium | Technical threats per component | Development teams, feature-level analysis |
| PASTA | High | Business-aligned risk analysis | Enterprise architecture, compliance-driven orgs |
| LINDDUN | Medium | Privacy threats | GDPR/CCPA compliance, data-heavy applications |
| Attack Trees | Medium | Goal-oriented attacker modeling | High-value targets, threat intelligence integration |
| VAST | High | Scalable, automated threat modeling | Large organizations, DevSecOps pipelines |

For most development teams, start with STRIDE. It is simple enough to learn in an afternoon, structured enough to be repeatable, and comprehensive enough to catch the threats that matter. Graduate to PASTA or attack trees when your security program matures and you need business-context or attacker-motivation modeling.

Validate Your Architecture's Security

Lorikeet Security offers security architecture reviews and application security assessments that complement your threat modeling program. We identify design-level flaws through expert analysis and validate implementations through manual penetration testing.

Lorikeet Security Team

Penetration Testing & Cybersecurity Consulting

We've completed 170+ security engagements across web apps, APIs, cloud infrastructure, and AI-generated codebases. Everything we publish here comes from patterns we see in real client work.
