TL;DR: Penetration tests and scanners find implementation bugs — SQL injection, XSS, misconfigurations. They cannot find design-level flaws: an authorization model that allows horizontal privilege escalation by design, a data flow that routes sensitive information through an untrusted third party, or an architecture that has no revocation mechanism for compromised API keys. Threat modeling catches these flaws during design — when fixing them costs hours instead of sprints. Every development team should threat model new features, architecture changes, and third-party integrations before writing code.
What Threat Modeling Actually Is
Threat modeling is a structured process for identifying what can go wrong in a system's design and deciding what to do about it. It answers four questions: What are we building? What can go wrong? What are we going to do about it? Did we do a good enough job?
The output is not a document that sits in a wiki — it is a list of concrete threats with agreed-upon mitigations that become requirements, backlog items, or architectural constraints. A threat model for a new payment integration might produce: "The webhook endpoint accepts unsigned payloads — an attacker can forge payment confirmations. Mitigation: require HMAC signature verification on all incoming webhooks. Owner: backend team. Priority: must-have before launch."
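The HMAC mitigation above can be sketched in a few lines. This is a minimal illustration, not a provider-specific implementation: the `sha256=<hex>` header format and the source of the shared secret are assumptions, so check your payment provider's documentation for the exact scheme.

```python
import hashlib
import hmac

def verify_webhook(payload: bytes, signature_header: str, secret: bytes) -> bool:
    """Verify an HMAC-SHA256 signature on a raw webhook payload.

    The "sha256=<hex>" header format and the shared-secret source are
    assumptions -- real providers each define their own scheme.
    """
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    received = signature_header.removeprefix("sha256=")
    # compare_digest is constant-time, preventing timing attacks on the check
    return hmac.compare_digest(expected, received)
```

Note that verification must run against the raw request body, before any JSON parsing, since re-serialized JSON rarely matches the signed bytes.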
At Lorikeet Security, we routinely find vulnerabilities during penetration tests that threat modeling would have caught during design: an API that returns full user objects instead of projections (information disclosure by design), a multi-tenant system where tenant isolation depends on application logic rather than database-level separation (one bug away from cross-tenant access), an authentication flow that stores session state client-side without integrity protection. These are not implementation mistakes — they are design decisions that created vulnerabilities.
When to Threat Model
Threat modeling is most valuable — and most cost-effective — at specific points in the development lifecycle:
- New features with security implications: Any feature touching authentication, authorization, payment processing, file uploads, third-party integrations, or user data handling
- Architecture changes: Migrating from monolith to microservices, adding a new data store, introducing a message queue, changing deployment topology
- New third-party integrations: Every external dependency is a trust boundary. What data do you send them? What data do they send you? How do you verify authenticity?
- Changes to data flows: Any time sensitive data moves to a new location, passes through a new intermediary, or is accessible to a new audience
- Incident-driven review: After a security incident or pentest finding, threat model the affected component to identify related flaws the original finding hints at
You do not need to threat model every bug fix or CSS change. The trigger is: "Does this change alter how data flows, who can access what, or where trust boundaries exist?" If yes, spend 30 minutes on a threat model. If no, ship it.
STRIDE: The Practical Framework
STRIDE is Microsoft's threat classification model. For each component in your system, you systematically consider six threat categories:
| Category | Threat | Example | Typical Mitigation |
|---|---|---|---|
| Spoofing | Impersonating a user or system | Forged webhook calls from a payment provider | Authentication, HMAC signatures, mutual TLS |
| Tampering | Unauthorized data modification | Modifying order totals in client-side state | Server-side validation, integrity checks, signed tokens |
| Repudiation | Denying actions without accountability | Admin deletes records with no audit trail | Audit logging, append-only logs, digital signatures |
| Information Disclosure | Unauthorized data exposure | API returns full user objects including password hashes | Data projections, field-level access control, encryption |
| Denial of Service | Disrupting availability | Unbounded GraphQL query crashes the database | Rate limiting, query complexity limits, circuit breakers |
| Elevation of Privilege | Gaining unauthorized access levels | Changing user role via mass assignment in API request | RBAC, input allowlisting, server-side role enforcement |
The power of STRIDE is its completeness as a checklist. When a developer looks at a new API endpoint and asks "Can someone spoof the caller? Can someone tamper with the input? Is there an audit trail?" — they catch threats that would otherwise surface as pentest findings months later.
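As a concrete instance of the table's last row, a mass-assignment guard is often just an allowlist applied before client input reaches the model. The field names below are illustrative, not drawn from any particular framework:

```python
# Allowlist of fields a regular user may set on their own profile.
# "role" and "is_admin" are deliberately absent: server-side code,
# not the client, decides privilege. Field names are illustrative.
PROFILE_FIELDS = {"display_name", "email", "timezone"}

def sanitize_profile_update(payload: dict) -> dict:
    """Drop any field the client is not allowed to set (mass-assignment guard)."""
    return {k: v for k, v in payload.items() if k in PROFILE_FIELDS}
```

An allowlist fails safe: a newly added sensitive column is protected by default, whereas a denylist silently exposes it.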
Data Flow Diagrams and Trust Boundaries
A threat model starts with a data flow diagram (DFD) — a visual representation of how data moves through your system. It does not need to be formal UML. A whiteboard sketch with four element types is enough:
- Processes: Code that transforms data (your API, a background worker, a Lambda function)
- Data stores: Where data lives (databases, caches, file systems, S3 buckets)
- External entities: Actors or systems outside your control (users, third-party APIs, CDNs)
- Data flows: Arrows showing data movement between elements (HTTP requests, database queries, queue messages)
Trust boundaries are the critical addition. Draw a dotted line everywhere the level of trust changes: between the browser and your API (user input crosses into your backend), between your API and a third-party service (your data crosses into their control), between your application and the database (authenticated application code interacts with the data layer). Every trust boundary is where threats concentrate — it is where spoofing, tampering, and information disclosure are most likely to occur.
Walk through each data flow that crosses a trust boundary and apply STRIDE. "User submits a form to our API" — can the user spoof another user's identity? (Check authentication.) Can they tamper with fields they should not control? (Check for mass assignment.) Does the API response disclose data from other users? (Check authorization on the response.) This systematic approach is what makes threat modeling reproducible rather than dependent on one person's intuition.
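That walkthrough can even be mechanized into a session agenda. The sketch below, with invented flow names, turns a list of data flows into one STRIDE question per category for each boundary crossing:

```python
# A toy representation of a DFD's data flows. Flow names are illustrative.
STRIDE = ["Spoofing", "Tampering", "Repudiation",
          "Information Disclosure", "Denial of Service",
          "Elevation of Privilege"]

flows = [
    {"name": "browser -> API (form submit)", "crosses_boundary": True},
    {"name": "API -> payment provider",      "crosses_boundary": True},
    {"name": "worker -> internal cache",     "crosses_boundary": False},
]

def stride_prompts(flows):
    """Yield one checklist question per STRIDE category for each data flow
    that crosses a trust boundary; flows inside one boundary are skipped."""
    for flow in flows:
        if flow["crosses_boundary"]:
            for threat in STRIDE:
                yield f'{flow["name"]}: {threat}?'
```

Two boundary-crossing flows produce twelve prompts, which is exactly the kind of finite, walkable list a 30-minute session needs.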
Prioritizing Threats: Risk-Based Approach
Not every identified threat requires immediate mitigation. Prioritize based on two factors: likelihood (how easy is the attack and how motivated are attackers?) and impact (what is the worst-case outcome?). A simple High/Medium/Low matrix works for most teams:
- Critical (High likelihood + High impact): Fix before launch. Example: unauthenticated API endpoint that returns customer PII
- High (either High likelihood or High impact): Fix in current sprint. Example: missing rate limiting on authentication endpoint
- Medium: Schedule for next sprint. Example: verbose error messages that disclose stack traces
- Low (Low likelihood + Low impact): Accept the risk or add to backlog. Example: theoretical timing attack on a non-sensitive comparison
The DREAD model (Damage, Reproducibility, Exploitability, Affected Users, Discoverability) provides more granular scoring when you need it, but for most sprint-based threat modeling the simple matrix is faster and produces equally actionable results.
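The matrix above is simple enough to encode directly, which keeps prioritization consistent across sessions. The exact mapping of combinations to labels is a team convention, not a standard:

```python
# Likelihood x impact matrix from the four tiers above. Treating any
# single "High" as sprint-priority is a convention teams can adjust.
def priority(likelihood: str, impact: str) -> str:
    """Map High/Medium/Low likelihood and impact to an action tier."""
    if likelihood == "High" and impact == "High":
        return "Critical: fix before launch"
    if likelihood == "High" or impact == "High":
        return "High: fix in current sprint"
    if likelihood == "Low" and impact == "Low":
        return "Low: accept the risk or add to backlog"
    return "Medium: schedule for next sprint"
```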
Common Design-Level Flaws
These are the categories of findings that threat modeling catches during design but that otherwise surface only in a manual pentest months later, if at all — because the code works exactly as designed, but the design itself is insecure:
- Authorization models that allow horizontal escalation by design: APIs that use sequential integer IDs and rely entirely on application logic (not database queries) to enforce ownership. One missed check and every object is accessible to every user.
- Missing revocation mechanisms: API keys or tokens that cannot be invalidated without rotating the entire signing secret. If a key is compromised, there is no way to revoke just that key.
- Sensitive data in client-side state: Pricing, permissions, or feature flags stored in JWTs, localStorage, or URL parameters where users can modify them. The server trusts client-provided values without re-validation.
- Over-privileged service accounts: A microservice that needs read access to one table is given a database connection string with full DDL privileges because it was easier during development.
- No tenant isolation at the data layer: Multi-tenant applications where tenant separation is enforced by application-level WHERE clauses rather than database-level row security or separate schemas. One ORM bug exposes all tenants.
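Row-level security or separate schemas is the stronger fix for the last item, but where application-level filtering is unavoidable, centralizing the tenant filter in one structural place shrinks the "one missed check" surface. A sketch with illustrative names, using an in-memory list in place of a real database:

```python
class TenantScopedRepo:
    """Force every query through the tenant filter in one place, so call
    sites cannot forget the WHERE clause. An in-memory list stands in
    for the database; class and field names are illustrative."""

    def __init__(self, rows: list, tenant_id: str):
        self._rows = rows
        self._tenant_id = tenant_id

    def find_order(self, order_id: str):
        # The tenant check is applied here, once, rather than being
        # repeated (and occasionally forgotten) at every call site.
        for row in self._rows:
            if row["tenant_id"] == self._tenant_id and row["id"] == order_id:
                return row
        return None
```

Because the repository is constructed with a tenant ID, handing a request the wrong tenant's data requires constructing the wrong repository, a far louder mistake than omitting one query predicate.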
Lightweight Threat Modeling for Agile Teams
The biggest barrier to threat modeling adoption is perceived overhead. Full STRIDE analysis with formal DFDs and DREAD scoring can take days for a complex system. Agile teams need a 30-minute approach that fits into sprint planning:
- 5 minutes — Sketch the data flow. Whiteboard or shared doc. Draw the components involved in this feature, the data flows between them, and the trust boundaries. It does not need to be pretty.
- 10 minutes — Walk through STRIDE at each trust boundary. For each data flow crossing a boundary, ask: Can someone spoof, tamper, repudiate, disclose, deny service, or escalate? Write down anything that gets a "maybe" or "I don't know."
- 10 minutes — Prioritize and assign. For each identified threat, decide: fix before launch, fix in this sprint, or accept the risk. Create tickets for anything that needs fixing.
- 5 minutes — Document decisions. Record what you considered and decided — especially accepted risks. This becomes invaluable context for the next pentest and for auditors who ask "did you consider X?"
The participants matter: include the developer implementing the feature, a security champion (a developer with security interest — not necessarily a security engineer), and optionally someone who understands the business context of the data being processed. Three people, 30 minutes, concrete outcomes.
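The session's output can be as lightweight as one structured record per threat. The field names below are illustrative, and a spreadsheet row serves equally well:

```python
from dataclasses import dataclass

@dataclass
class ThreatRecord:
    """One row of a 30-minute session's output. Field names are
    illustrative; the rationale field matters most for accepted risks."""
    component: str
    stride_category: str
    description: str
    decision: str        # "fix-before-launch" | "fix-this-sprint" | "accepted"
    owner: str
    rationale: str

record = ThreatRecord(
    component="webhook endpoint",
    stride_category="Spoofing",
    description="Unsigned payloads allow forged payment confirmations",
    decision="fix-before-launch",
    owner="backend team",
    rationale="Require HMAC signature verification on all incoming webhooks",
)
```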
Choosing a Framework
| Framework | Complexity | Focus | Best For |
|---|---|---|---|
| STRIDE | Low–Medium | Technical threats per component | Development teams, feature-level analysis |
| PASTA | High | Business-aligned risk analysis | Enterprise architecture, compliance-driven orgs |
| LINDDUN | Medium | Privacy threats | GDPR/CCPA compliance, data-heavy applications |
| Attack Trees | Medium | Goal-oriented attacker modeling | High-value targets, threat intelligence integration |
| VAST | High | Scalable, automated threat modeling | Large organizations, DevSecOps pipelines |
For most development teams, start with STRIDE. It is simple enough to learn in an afternoon, structured enough to be repeatable, and comprehensive enough to catch the threats that matter. Graduate to PASTA or attack trees when your security program matures and you need business-context or attacker-motivation modeling.
Validate Your Architecture's Security
Lorikeet Security offers security architecture reviews and application security assessments that complement your threat modeling program. We identify design-level flaws through expert analysis and validate implementations through manual penetration testing.