
Threat Modeling for Engineering Teams: A Practical Guide That Does Not Require a PhD in Security

Lorikeet Security Team March 2, 2026 11 min read
Security Engineering

Find threats at design time, not in production incident reports.

Why Most Engineering Teams Skip Threat Modeling

Here is a pattern we see in nearly every penetration test we conduct: the vulnerability we find was architecturally predictable. An IDOR exists because nobody asked "what happens if a user changes the ID in this request?" during design. An SSRF exists because nobody mapped the trust boundary between the application and internal services. A privilege escalation exists because nobody modeled what an authenticated but unauthorized user could do.

These are not obscure attack techniques. They are the predictable consequences of building features without thinking about how they can be misused. And they are expensive. The average cost of fixing a vulnerability found in production is 6 to 15 times higher than catching it during design, according to NIST and IBM research spanning two decades.

So why do most engineering teams skip threat modeling? Three reasons come up repeatedly: teams assume it requires deep security expertise they do not have, they expect a heavyweight process that will slow down delivery, and nobody on the team clearly owns it. As we will see, none of these holds up.

The core insight: Threat modeling is not about predicting every possible attack. It is about systematically asking "what could go wrong?" before you write code, so that you catch the architectural mistakes that are expensive and painful to fix later.


What Threat Modeling Actually Is

Threat modeling is not a tool. It is not a document template. It is not a compliance checkbox. At its core, threat modeling is a structured conversation about security risks in a system's design.

The goal is to answer four questions, formalized by Adam Shostack (who led threat modeling at Microsoft and literally wrote the book on it):

  1. What are we building? Draw a diagram of the system: components, data flows, trust boundaries, and external dependencies. You cannot analyze what you cannot see
  2. What can go wrong? Systematically identify threats using a framework like STRIDE. For each component and data flow, ask how it could be attacked
  3. What are we doing about it? For each identified threat, decide on a mitigation: fix it, accept the risk, transfer the risk (insurance, SLA), or avoid it (remove the feature)
  4. Did we do a good job? Validate the model against reality. Were the threats realistic? Did the mitigations work? Update the model as the system evolves

That is it. No proprietary methodology. No expensive tooling required. No security certification needed. If your team can whiteboard a system architecture, they can threat model it.


STRIDE Made Simple

STRIDE is a threat classification framework developed at Microsoft in the early 2000s. It has endured because it is easy to remember, comprehensive enough to be useful, and simple enough to apply without training. Each letter represents a category of threat, and for each component in your system, you ask whether that category applies.

| Category | Threat | SaaS Example | Mitigation Pattern |
| --- | --- | --- | --- |
| Spoofing | Pretending to be another user or system | Forging a JWT to impersonate an admin user | Strong authentication, token validation, MFA |
| Tampering | Modifying data in transit or at rest | Changing the price in an API request before it reaches the payment processor | Input validation, integrity checks, signed payloads |
| Repudiation | Denying an action was performed | A user deletes records and claims they never accessed them because there are no audit logs | Audit logging, immutable event stores, digital signatures |
| Information Disclosure | Exposing data to unauthorized parties | API endpoint returns other users' PII because authorization checks are missing | Authorization on every endpoint, data classification, encryption |
| Denial of Service | Making the system unavailable | An unauthenticated endpoint triggers expensive database queries, enabling a single user to exhaust connection pools | Rate limiting, query complexity limits, circuit breakers |
| Elevation of Privilege | Gaining access beyond your authorization | A standard user changes their role to "admin" via a mass assignment vulnerability in the profile update endpoint | Least privilege, server-side role enforcement, allowlisted attributes |
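The "signed payloads" mitigation in the Tampering row can be sketched with Python's standard library. This is a minimal illustration, not a production design: the secret is hardcoded here for demonstration, and a real system would load it from a secret manager.

```python
import hashlib
import hmac
import json

SECRET = b"server-side-secret"  # hypothetical key; load from a secret manager in practice


def sign_payload(payload: dict) -> str:
    """Attach an HMAC so the server can detect tampering in transit."""
    body = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(SECRET, body, hashlib.sha256).hexdigest()


def verify_payload(payload: dict, signature: str) -> bool:
    """Recompute the HMAC and compare in constant time."""
    return hmac.compare_digest(sign_payload(payload), signature)


order = {"item": "widget", "price_cents": 1999}
sig = sign_payload(order)

# A tampered price no longer matches the signature.
tampered = {"item": "widget", "price_cents": 1}
assert verify_payload(order, sig)
assert not verify_payload(tampered, sig)
```

`hmac.compare_digest` matters here: a naive `==` comparison can leak timing information that helps an attacker forge signatures byte by byte.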

How to apply STRIDE in practice

For each component in your system diagram, walk through the six STRIDE categories and ask: "Does this threat apply here?" You do not need to find threats in every category for every component. A database might be primarily concerned with Tampering and Information Disclosure. An authentication service is primarily concerned with Spoofing and Elevation of Privilege. A logging service is primarily concerned with Repudiation and Denial of Service.

The value of STRIDE is not that it is an exhaustive taxonomy of all possible attacks. The value is that it gives your team a structured checklist that prevents you from only thinking about the threats you have personally experienced. Without a framework, most engineers will think about SQL injection and XSS (because those are the threats they have heard of) and miss IDOR, SSRF, mass assignment, and business logic issues (which are what we actually find in penetration tests).
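To make the checklist concrete, here is a small sketch of that walk. The component name is hypothetical; the point is simply that every component gets prompted with all six categories, so nothing is skipped.

```python
# Walk one component through all six STRIDE categories so the session
# considers every threat class, not just the familiar ones.
STRIDE_PROMPTS = {
    "Spoofing": "Can someone pretend to be another user or system?",
    "Tampering": "Can data be modified in transit or at rest?",
    "Repudiation": "Can someone deny an action because we lack records?",
    "Information Disclosure": "Can data reach unauthorized parties?",
    "Denial of Service": "Can someone make this unavailable?",
    "Elevation of Privilege": "Can someone exceed their authorization?",
}


def stride_prompts(component: str) -> list[str]:
    """Generate one discussion prompt per STRIDE category for a component."""
    return [f"{component}: {cat} -- {q}" for cat, q in STRIDE_PROMPTS.items()]


for prompt in stride_prompts("payments API"):  # hypothetical component
    print(prompt)
```

In a real session the output of each prompt is a written-down threat (or an explicit "does not apply"), captured in whatever register the team uses.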


When to Threat Model

Threat modeling is most valuable when done at the right time. Too early and you are modeling a system that does not exist yet. Too late and the architecture is already built.

Always threat model during these moments

  - A new feature crosses a trust boundary (user input reaches internal services, or a new role or permission level is introduced)
  - You add an external integration or third-party dependency
  - You change authentication, authorization, or session handling
  - You start handling a new category of sensitive data (PII, credentials, payment data)

Signs you should have threat modeled (but did not)

  - A penetration test reports design-level findings like IDOR, SSRF, or mass assignment
  - A production incident traces back to an architectural assumption nobody questioned during design
  - The team cannot agree on how the system actually works when asked to diagram it


A 60-Minute Threat Modeling Session

Here is a step-by-step format for running an effective threat modeling session with your engineering team. This is designed to be lightweight enough to do regularly and structured enough to produce actionable results.

Before the session (5 minutes of prep)

  - Pick one component or feature to model; do not try to model the whole system at once
  - Invite the engineers who designed and built it, plus whoever will implement the mitigations
  - Have a whiteboard or shared diagramming tool ready

Minutes 0-10: Draw the system

As a group, draw a diagram of the component you are modeling. Include:

  - Components: services, databases, queues, third-party APIs
  - Data flows between them, with direction
  - Trust boundaries: authenticated vs. unauthenticated, internal vs. external
  - External dependencies

Tip: The diagram does not need to be beautiful. It needs to be accurate enough to reason about. If the team disagrees about how the system actually works, that disagreement is itself a valuable finding. Misunderstandings about system architecture are a source of security bugs.

Minutes 10-20: Identify assets and trust boundaries

On the diagram, identify and label:

  - Assets worth protecting: PII, credentials, payment data, audit logs
  - Trust boundaries: anywhere data crosses from a less trusted zone to a more trusted one
  - Entry points: every place external input enters the system

Minutes 20-45: Identify threats using STRIDE

For each trust boundary and data flow, walk through the STRIDE categories. Use these prompts:

  - Spoofing: can someone pretend to be another user or system here?
  - Tampering: can data be modified in transit or at rest?
  - Repudiation: could someone deny an action because we have no record of it?
  - Information Disclosure: can data reach someone who should not see it?
  - Denial of Service: can someone make this component unavailable or exhaust its resources?
  - Elevation of Privilege: can someone gain access beyond their authorization?

Write down every threat, even if someone immediately says "oh, we already handle that." The point is to capture the threats first and evaluate mitigations second. Do not dismiss threats during brainstorming.

Minutes 45-60: Prioritize and plan mitigations

For each identified threat, the team decides:

  - Severity: how bad is it if this threat is realized, and how likely is it?
  - Response: fix it, accept the risk, transfer it (insurance, SLA), or avoid it (remove the feature)
  - Ownership: who implements the mitigation, and in which sprint?

Create tickets for any mitigations that need to be implemented. These tickets should be treated as part of the feature delivery, not as separate "security work" that gets deprioritized.

Session timeline: Draw System (10 min) → Assets + Boundaries (10 min) → STRIDE Analysis (25 min) → Prioritize + Plan (15 min)

Common Threats We Find in Penetration Tests That Threat Modeling Would Have Caught

After conducting hundreds of penetration tests across SaaS applications, APIs, and cloud infrastructure, certain vulnerability classes appear with predictable regularity. Nearly all of them are design-level issues that a threat modeling session would have surfaced before they were built.

IDOR (Insecure Direct Object References)

The pattern: An API endpoint accepts a resource ID (user ID, document ID, order ID) and returns the resource without verifying that the requesting user is authorized to access it. An attacker changes the ID and accesses another user's data.

What threat modeling catches: During the "Information Disclosure" and "Elevation of Privilege" STRIDE analysis, the team would ask: "Can someone access resources they should not?" The answer for any endpoint that takes a resource ID as input should trigger a mitigation: server-side authorization checks that verify the requesting user owns or has permission to access the resource.
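That mitigation is small when built into the design. Here is a minimal sketch of a server-side ownership check; the in-memory `ORDERS` dict stands in for a database table, and the names are hypothetical.

```python
class Forbidden(Exception):
    """Raised when a user requests a resource they do not own."""


ORDERS = {  # stand-in for a database table
    "order-1": {"owner": "alice", "total_cents": 4200},
    "order-2": {"owner": "bob", "total_cents": 999},
}


def get_order(order_id: str, current_user: str) -> dict:
    order = ORDERS.get(order_id)
    # Return the same error for "missing" and "not yours" so the
    # endpoint does not leak which IDs exist.
    if order is None or order["owner"] != current_user:
        raise Forbidden(order_id)
    return order


assert get_order("order-1", "alice")["total_cents"] == 4200
try:
    get_order("order-2", "alice")  # alice changes the ID in the request
except Forbidden:
    pass  # the IDOR attempt is rejected
```

The key design decision is that the check lives on the server, keyed to the authenticated identity, rather than trusting anything the client sends.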

Broken authentication and session management

The pattern: Password reset tokens that do not expire, session tokens that survive password changes, authentication bypasses through API endpoints that were not included in the auth middleware, or JWTs signed with weak or default secrets.

What threat modeling catches: The "Spoofing" analysis asks: "Can someone pretend to be another user?" This prompt surfaces questions about token lifecycle, session invalidation, and which endpoints require authentication. Drawing the trust boundary between authenticated and unauthenticated access on the diagram immediately highlights endpoints that are on the wrong side of that line.
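One lifecycle question that analysis surfaces, "do sessions survive a password change?", is easy to get right at design time. The sketch below uses a hypothetical in-memory store; a real system would back this with Redis or a database, but the invalidation logic is the same.

```python
import secrets


class SessionStore:
    """Minimal session store illustrating revocation on password change."""

    def __init__(self) -> None:
        self._sessions: dict[str, str] = {}  # token -> user_id

    def login(self, user_id: str) -> str:
        token = secrets.token_urlsafe(32)  # unguessable session token
        self._sessions[token] = user_id
        return token

    def is_valid(self, token: str) -> bool:
        return token in self._sessions

    def invalidate_user(self, user_id: str) -> None:
        """Revoke every session for a user, e.g. after a password change."""
        self._sessions = {
            t: u for t, u in self._sessions.items() if u != user_id
        }


store = SessionStore()
old = store.login("alice")
store.invalidate_user("alice")  # password change revokes existing sessions
assert not store.is_valid(old)
```

The same `invalidate_user` hook belongs on account lockout and "log out everywhere" flows; modeling the token lifecycle up front is what reveals that these are one mechanism, not three.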

SSRF (Server-Side Request Forgery)

The pattern: The application accepts a URL as input (for webhooks, file imports, URL previews, or PDF generation) and the server fetches that URL. An attacker provides an internal URL (like http://169.254.169.254/latest/meta-data/ for cloud metadata) and the server dutifully fetches it, leaking internal data.

What threat modeling catches: Drawing data flows on the diagram reveals that user-supplied input controls where the server makes HTTP requests. The "Tampering" and "Information Disclosure" analysis asks: "Can someone modify this input to access unintended resources?" The mitigation is straightforward: URL allowlisting, blocking internal IP ranges, and using a dedicated egress proxy for outbound requests. These are design decisions that are trivial to implement before the feature is built and expensive to retrofit after.
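A first-pass version of that check can be sketched with the standard library. This is deliberately incomplete: it only rejects literal internal IPs and bad schemes, and a production defense must also resolve hostnames, re-validate at connect time (DNS rebinding), and ideally route through an egress proxy.

```python
import ipaddress
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https"}


def is_safe_url(url: str) -> bool:
    """Reject URLs that point at internal, loopback, or link-local ranges.

    Sketch only: hostnames are passed through and must be resolved and
    re-checked in production to defend against DNS rebinding.
    """
    parsed = urlparse(url)
    if parsed.scheme not in ALLOWED_SCHEMES or not parsed.hostname:
        return False
    try:
        addr = ipaddress.ip_address(parsed.hostname)
    except ValueError:
        return True  # not a literal IP; resolve and re-check before fetching
    return not (addr.is_private or addr.is_loopback or addr.is_link_local)


assert not is_safe_url("http://169.254.169.254/latest/meta-data/")
assert not is_safe_url("http://127.0.0.1/admin")
assert is_safe_url("https://example.com/webhook")
```

Even this partial check blocks the cloud-metadata fetch from the example above; the threat model is what tells you the remaining gaps (DNS, redirects) also need closing.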

Mass assignment

The pattern: An API endpoint accepts a JSON body to update a user profile. The endpoint binds the JSON directly to the database model. An attacker includes "role": "admin" in the request body, and the server updates their role because it does not filter which fields are writable.

What threat modeling catches: The "Elevation of Privilege" analysis asks: "Can someone gain unauthorized access through this endpoint?" If the team draws the data flow from the request body to the database, they will identify that unfiltered input reaches the model and ask what happens if an attacker adds unexpected fields. The mitigation is explicit allowlisting of writable fields, which takes minutes to implement during development but may require a significant refactor if discovered in a penetration test months later.
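The allowlist itself is a few lines. This sketch filters a request body before it ever touches the model; the field names are hypothetical.

```python
# Explicit allowlist of fields the client may write on its own profile.
WRITABLE_FIELDS = {"display_name", "email", "timezone"}


def filter_profile_update(body: dict) -> dict:
    """Drop any field the client is not allowed to write, such as `role`."""
    return {k: v for k, v in body.items() if k in WRITABLE_FIELDS}


attack = {"display_name": "Mallory", "role": "admin"}
safe = filter_profile_update(attack)
assert safe == {"display_name": "Mallory"}  # "role" never reaches the model
```

Most web frameworks offer this as a built-in (strong parameters, serializer fields, DTOs); the design decision is simply to never bind raw request bodies to models.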


Tools That Help

You do not need specialized tools to threat model. A whiteboard and sticky notes work. But if you want to standardize the practice across your organization, several tools can help.

| Tool | Best For | Cost |
| --- | --- | --- |
| Microsoft Threat Modeling Tool | Teams already in the Microsoft ecosystem. Generates threats automatically from DFD diagrams. Strong STRIDE integration | Free |
| OWASP Threat Dragon | Open-source, cross-platform. Good for teams that want a lightweight, web-based tool without vendor lock-in | Free |
| IriusRisk | Enterprise teams that need integration with Jira, CI/CD pipelines, and compliance frameworks. Auto-generates threats from architecture patterns | Paid (enterprise) |
| draw.io / diagrams.net | Teams that want maximum flexibility. No threat-specific features, but great for diagramming data flows and trust boundaries collaboratively | Free |
| Threagile | Teams that prefer code-over-GUI. Define your architecture in YAML, and Threagile generates threat models programmatically. Good for GitOps workflows | Free (open-source) |

Our recommendation for teams starting out: use draw.io and a shared spreadsheet. Diagram the system in draw.io, run the STRIDE analysis in a meeting, and capture threats and mitigations in a spreadsheet that becomes the living threat model. Once the practice is established and you are running threat modeling sessions regularly, evaluate whether a dedicated tool adds value.


Integrating Threat Modeling into Your Sprint Cycle

Threat modeling fails when it is treated as a one-time event or a gate that slows down delivery. It succeeds when it is integrated into the existing development workflow so that it feels like a natural part of building features, not an additional burden.

When to trigger a threat modeling session

Not every feature needs a formal threat modeling session. Use these criteria to decide:

  - Does the feature cross a trust boundary or introduce a new one?
  - Does it change authentication, authorization, or session handling?
  - Does user-supplied input control server behavior (URLs fetched, queries run, files parsed)?
  - Does it touch a new category of sensitive data or add an external integration?

Make it part of the definition of done

Add threat modeling to your team's definition of done for relevant features. A feature is not "ready for development" until the design has been threat modeled and mitigations are captured as tickets in the sprint backlog. This prevents the common pattern of threat modeling being deferred to "later" and then never happening.

Keep threat models alive

A threat model is not a document that gets written once and filed away. It is a living artifact that should be updated as the system changes. Store your threat models alongside your architecture documentation (in your wiki, your repo, or your diagramming tool). When the system changes, update the model. When you run a penetration test, compare the findings against your threat model to see what you missed and improve the model for next time.

Build a threat modeling culture

Rotate who facilitates sessions so the practice does not depend on one person. Treat discovered threats as wins, not blame. Share threat models across teams so recurring patterns and their mitigations spread. And when a penetration test or incident reveals something the model missed, feed that lesson into the next session.

The bottom line: Threat modeling is the most cost-effective security activity your engineering team can do. It requires no tools, no certifications, and no security background. It requires only the willingness to ask "what could go wrong?" before you build, rather than discovering the answer in a penetration test report or, worse, a breach notification.

Want to validate your threat models with real-world testing?

Threat modeling identifies design-level risks. Penetration testing validates whether those risks are exploitable. Lorikeet Security combines both to give your engineering team a complete view of your security posture.


Lorikeet Security Team

Penetration Testing & Cybersecurity Consulting

We've completed 170+ security engagements across web apps, APIs, cloud infrastructure, and AI-generated codebases. Everything we publish here comes from patterns we see in real client work.

Hi, I'm Lory! Need help finding the right service? Click to chat!