TL;DR: A secure SDLC is not a gate that blocks deployments — it's a set of automated checks, human review points, and cultural practices that catch vulnerabilities before they reach production. The organizations that do this well integrate threat modeling into design, run SAST/DAST/SCA in CI/CD, maintain a security champions network, enforce secrets scanning in pre-commit hooks, and measure their program with metrics that matter. The organizations that do this poorly bolt a SAST tool onto their pipeline, ignore 90% of the findings, and declare themselves DevSecOps.
The Secure SDLC at Each Phase
| SDLC Phase | Security Activity | Tooling / Process |
|---|---|---|
| Requirements | Security requirements, abuse cases | Security user stories, risk classification |
| Design | Threat modeling | STRIDE, attack trees, data flow diagrams |
| Development | Secure coding, secrets scanning | Pre-commit hooks (gitleaks, detect-secrets), IDE plugins |
| Code Review | Security-focused PR review | Security champions, SAST in PR checks |
| Build / CI | SAST, SCA, container scanning | Semgrep, Snyk, Trivy, GitHub Advanced Security |
| Testing / QA | DAST, API security testing | OWASP ZAP, Burp Suite Enterprise, Nuclei |
| Deployment | Security gates, IaC scanning | Policy-as-code (OPA), Checkov, tfsec |
| Production | Monitoring, vulnerability management | Runtime protection, bug bounty, penetration testing |
Threat Modeling in the Design Phase
Threat modeling is the highest-ROI security activity in the SDLC because it catches architectural vulnerabilities before any code is written. Fixing a broken trust boundary in a design document costs a conversation. Fixing the same flaw after it's been implemented, deployed, and discovered in a penetration test costs a sprint of refactoring and a potential security incident in the interim.
The most practical threat modeling approach for engineering teams is lightweight and integrated into existing design review processes. When an engineer writes a design document for a new feature, they include a data flow diagram showing where data enters the system, how it moves between components, where it crosses trust boundaries, and where it is stored. The security champion or security team reviews the diagram using STRIDE (Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege) to identify threats at each boundary crossing.
The output is not a 50-page document. It is a list of threats relevant to the specific feature, mitigations for each threat, and security requirements that the implementation must satisfy. This list becomes part of the feature's acceptance criteria, enforced the same way functional requirements are.
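As an illustration, the threat list can be captured as structured data so that mitigations flow directly into acceptance criteria. This is a minimal sketch; the feature, boundaries, and mitigations here are invented:

```python
from dataclasses import dataclass

# Hypothetical sketch: one entry per threat found at a boundary crossing,
# using the STRIDE category names from the text. All values are examples.
@dataclass
class Threat:
    stride_category: str   # e.g. "Spoofing", "Tampering"
    boundary: str          # trust boundary where the threat applies
    description: str
    mitigation: str        # becomes an acceptance criterion

threats = [
    Threat("Spoofing", "browser -> API gateway",
           "Stolen session token replayed by an attacker",
           "Short-lived tokens, invalidated on logout"),
    Threat("Information Disclosure", "API -> payments DB",
           "Card data readable by non-payment services",
           "Encrypt column at rest; restrict DB role to payment service"),
]

# The mitigations, one per threat, become acceptance criteria for the feature.
acceptance_criteria = [t.mitigation for t in threats]
```

Because the mitigations are plain acceptance criteria, they can be checked off in the same review that checks functional requirements.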
When to Threat Model
Not every change needs a threat model. The triggers should be: new features that handle sensitive data, changes to authentication or authorization logic, new external integrations or API endpoints, changes to data flow or trust boundaries, and infrastructure changes that affect the security perimeter. Bug fixes and cosmetic UI changes generally do not require threat modeling.
SAST: Static Analysis That Developers Don't Hate
Static Application Security Testing analyzes source code for vulnerability patterns without executing the application. Modern SAST tools (Semgrep, CodeQL, SonarQube) can identify SQL injection, cross-site scripting, insecure deserialization, hardcoded credentials, and dozens of other vulnerability classes directly in the codebase.
The problem with SAST is not capability — it's noise. Out-of-the-box SAST configurations produce overwhelming numbers of findings, many of which are false positives or low-severity issues that do not represent exploitable vulnerabilities. Developers quickly learn to ignore SAST output if the signal-to-noise ratio is poor, and the tool becomes shelfware.
The solution is aggressive tuning. Start with a small set of high-confidence rules focused on critical vulnerability classes — SQL injection, command injection, path traversal, hardcoded secrets. Disable rules that produce excessive false positives in your specific codebase and framework. Add rules incrementally as the team builds confidence in the tooling. The goal is a SAST pipeline where every finding is worth investigating — even if that means the tool catches fewer total issues initially.
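One concrete way to apply this tuning in a pipeline is to filter a SAST tool's SARIF output against a small allowlist of rules. A minimal sketch, assuming SARIF-shaped results; the rule IDs are Semgrep-style examples, not a recommended set:

```python
# Hypothetical sketch: keep only findings from a small allowlist of
# high-confidence rules, discarding everything else as untuned noise.
# Rule IDs below are illustrative examples.
HIGH_CONFIDENCE_RULES = {
    "python.lang.security.sql-injection",
    "python.lang.security.command-injection",
    "generic.secrets.hardcoded-credential",
}

def filter_findings(sarif: dict) -> list[dict]:
    """Return only SARIF results whose ruleId is on the allowlist."""
    kept = []
    for run in sarif.get("runs", []):
        for result in run.get("results", []):
            if result.get("ruleId") in HIGH_CONFIDENCE_RULES:
                kept.append(result)
    return kept

# A toy SARIF document with one allowlisted and one noisy finding.
sarif_doc = {"runs": [{"results": [
    {"ruleId": "python.lang.security.sql-injection"},
    {"ruleId": "style.print-statement"},
]}]}
```

Growing `HIGH_CONFIDENCE_RULES` incrementally matches the advice above: every finding that survives the filter is worth investigating.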
Integrating SAST into Pull Requests
The optimal integration point for SAST is the pull request. When a developer opens a PR, SAST runs against the diff (not the entire codebase) and posts findings as inline comments on the changed lines. The developer sees the finding in context, can assess whether it is a true positive, and can fix it before the code is merged. This is dramatically more effective than running SAST against the full codebase weekly and dropping a spreadsheet of findings on the security backlog.
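Diff-scoping can be sketched by intersecting finding locations with the new-file line ranges in the PR's unified diff. This is an approximation (hunk ranges include unchanged context lines), and findings are assumed to be `(path, line, rule_id)` tuples:

```python
import re

# Hypothetical sketch: report SAST findings only on lines a PR touches.
# We parse unified-diff hunk headers (@@ -a,b +c,d @@) to collect the
# new-file line numbers each hunk covers. Context lines within a hunk
# are included, so this slightly over-approximates the changed set.
HUNK = re.compile(r"^@@ -\d+(?:,\d+)? \+(\d+)(?:,(\d+))? @@")

def changed_lines(diff_text: str) -> dict[str, set[int]]:
    files: dict[str, set[int]] = {}
    current = None
    for line in diff_text.splitlines():
        if line.startswith("+++ b/"):
            current = line[6:]
            files.setdefault(current, set())
        elif (m := HUNK.match(line)) and current:
            start, count = int(m.group(1)), int(m.group(2) or 1)
            files[current].update(range(start, start + count))
    return files

def findings_in_diff(findings, diff_text):
    """findings: iterable of (path, line, rule_id) tuples."""
    changed = changed_lines(diff_text)
    return [f for f in findings if f[1] in changed.get(f[0], set())]
```

Only the surviving findings would be posted as inline PR comments; the rest stay out of the developer's way.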
DAST: Testing the Running Application
Dynamic Application Security Testing complements SAST by testing the running application from the outside — the same perspective an attacker has. DAST tools send malicious inputs to the application's endpoints and analyze the responses for evidence of vulnerabilities. They find issues that SAST cannot: server misconfigurations, missing security headers, authentication flaws in the deployed configuration, and vulnerabilities that only manifest at runtime.
DAST is typically integrated later in the pipeline — against a staging environment that mirrors production. The tool crawls the application, submits forms, fuzzes parameters, and reports findings. Modern DAST tools can authenticate to the application and test authenticated functionality, though coverage of complex workflows (multi-step forms, JavaScript-heavy SPAs) varies by tool.
The limitation of DAST is speed: a thorough DAST scan of a large application takes hours. This makes it impractical as a PR-level check. Instead, run DAST on a schedule (nightly or after each staging deployment) and feed findings back to the development team through the same triage and tracking process as SAST findings.
Software Composition Analysis and Dependency Management
Modern applications are more dependency than original code. A typical Node.js application has hundreds of transitive dependencies, each a potential vector for known vulnerabilities or supply chain attacks. SCA tools (Snyk, Dependabot, Renovate, npm audit) scan your dependency tree against vulnerability databases and alert you when a dependency has a known CVE.
The challenge is prioritization. Not every CVE in a dependency is exploitable in your application. A vulnerable function in a library is only a risk if your code actually calls that function with attacker-controlled input. The best SCA tools provide reachability analysis — determining whether the vulnerable code path is actually reachable from your application code — which dramatically reduces noise.
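A prioritization pass along these lines can be sketched as a simple scoring function. The field names and weights below are assumptions for illustration, not any particular tool's model:

```python
from dataclasses import dataclass

# Hypothetical sketch: rank dependency CVEs by combining severity with
# reachability, as described in the text. All fields and weights are
# invented for illustration.
@dataclass
class DepVuln:
    package: str
    cve: str
    cvss: float          # base severity score, 0.0 - 10.0
    reachable: bool      # does app code call the vulnerable function?
    patch_available: bool

def priority(v: DepVuln) -> float:
    score = v.cvss
    if not v.reachable:
        score *= 0.2     # unreachable code path: sharply down-weight
    if v.patch_available:
        score += 1.0     # cheap to fix: bump it up the queue
    return score

vulns = [
    DepVuln("left-pad", "CVE-2024-0001", 9.8, reachable=False, patch_available=True),
    DepVuln("orm-lib", "CVE-2024-0002", 7.5, reachable=True, patch_available=True),
]
queue = sorted(vulns, key=priority, reverse=True)
```

Note how the reachable medium-severity issue outranks the unreachable critical one, which is exactly the noise reduction reachability analysis buys.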
Beyond known vulnerabilities, dependency management should include: pinning dependency versions (not using floating ranges), reviewing new dependencies before adding them (checking maintenance status, download counts, and security history), using lock files to ensure reproducible builds, and monitoring for dependency confusion attacks (where an attacker publishes a malicious package with the same name as an internal package on a public registry).
Secrets Scanning and Pre-Commit Hooks
Hardcoded secrets in source code — API keys, database passwords, private keys, tokens — are one of the most common findings in penetration tests and one of the easiest to prevent. Once a secret is committed to a Git repository, it lives in the repository's history: even if it is removed in a subsequent commit, anyone with repository access can find it with `git log -p`. A force-push history rewrite is disruptive, and it still does not undo the exposure, since clones, forks, and backups may retain the old commits. A committed secret must be treated as compromised and rotated.
Pre-commit hooks using tools like gitleaks, detect-secrets, or trufflehog scan every commit for patterns that match known secret formats — AWS access keys, GitHub tokens, private key headers, high-entropy strings. If a secret is detected, the commit is blocked before it reaches the repository. This is the ideal intervention point: the developer is told immediately, before the secret enters version control.
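The core of such a hook can be sketched as pattern matches for well-known key formats plus a Shannon-entropy heuristic for generic tokens. Real tools ship far larger and better-tested rule sets; the patterns and threshold here are illustrative:

```python
import math
import re

# Hypothetical sketch of a secrets check: known key-format patterns plus
# an entropy heuristic for long random-looking strings. The 4.5 threshold
# is an illustrative assumption, not a tuned value.
PATTERNS = {
    "aws-access-key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github-token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "private-key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def shannon_entropy(s: str) -> float:
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)

def scan_line(line: str) -> list[str]:
    hits = [name for name, pat in PATTERNS.items() if pat.search(line)]
    # Flag long base64-ish tokens with high entropy as likely secrets.
    for token in re.findall(r"[A-Za-z0-9+/=]{32,}", line):
        if shannon_entropy(token) > 4.5:
            hits.append("high-entropy-string")
    return hits
```

A hook would run `scan_line` over every staged line and abort the commit on any hit, mirroring what gitleaks or detect-secrets do at much greater depth.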
Pre-commit hooks should be supplemented by server-side scanning in the CI pipeline. Not all developers will have pre-commit hooks installed (new team members, CI-generated commits, direct pushes), so a server-side check ensures nothing slips through. When a secret is detected, the response should be immediate: rotate the secret, revoke the exposed credential, and investigate whether it was accessed.
The Security Champions Program
A security team of five cannot review every pull request, threat model every feature, and triage every SAST finding for an engineering organization of two hundred. The security champions model scales security knowledge by embedding a security-aware developer in each engineering team. Champions are not security engineers — they are developers who receive additional security training and serve as the team's security point of contact.
Champions review security-sensitive PRs for their team, triage SAST and SCA findings, participate in threat modeling sessions, and escalate complex issues to the central security team. They attend a regular security sync (biweekly works well) where the security team shares new threats, reviews findings across teams, and provides targeted training.
The program works when champions have dedicated time for security activities (typically 10-20% of their week), receive meaningful training (not just an annual awareness course), and are recognized for their contributions. It fails when champions are "volunteered" without buy-in, given no time allocation, or expected to be security experts after a single training session.
Security Gates in CI/CD
Security gates are policy enforcement points in the CI/CD pipeline that prevent code with known security issues from reaching production. A typical gate configuration blocks deployments when: SAST finds a critical or high-severity vulnerability, SCA identifies a dependency with a critical CVE and an available patch, secrets scanning detects a credential, or container scanning finds a base image vulnerability above a severity threshold.
The key to effective security gates is calibration. Gates that are too strict — blocking on every medium-severity finding — create developer friction and lead to teams finding workarounds (suppressing findings without review, disabling the gate for "urgent" deployments that become permanent exceptions). Gates that are too lenient provide false assurance without meaningful risk reduction.
Start with gates that block only on high-confidence, high-severity findings. Provide a clear, fast exception process for false positives (reviewed and approved by the security champion or security team). Track gate blocks, exceptions, and resolution times as metrics. Tighten the gates gradually as the team matures and the tooling is tuned.
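A gate policy along these lines can be sketched as a small function that blocks only on high-confidence, high-severity findings and honors reviewed exceptions. The severity and confidence labels and the exception mechanism are assumptions for illustration:

```python
# Hypothetical sketch of a calibrated security gate: block the pipeline
# only on high-confidence, high-severity findings, and skip findings
# whose IDs have a reviewed, approved exception.
BLOCKING_SEVERITIES = {"critical", "high"}

def evaluate_gate(findings, approved_exceptions=frozenset()):
    """findings: iterable of dicts with 'id', 'severity', 'confidence'.
    Returns (passed, blocking_findings)."""
    blocking = [
        f for f in findings
        if f["severity"] in BLOCKING_SEVERITIES
        and f["confidence"] == "high"           # tuned rules only
        and f["id"] not in approved_exceptions  # reviewed false positives
    ]
    return (len(blocking) == 0, blocking)

findings = [
    {"id": "F-1", "severity": "critical", "confidence": "high"},
    {"id": "F-2", "severity": "medium", "confidence": "high"},
]
```

Here `evaluate_gate(findings)` fails on F-1 but ignores the medium-severity F-2, and granting an exception for F-1 lets the deployment proceed while the exception itself stays on the record for metrics.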
Measuring SDLC Security Maturity
You cannot improve what you do not measure. Effective SDLC security metrics include: mean time to remediate findings by severity (are critical findings fixed in days or months?), percentage of PRs with SAST coverage, percentage of features with threat models, secrets detected in pre-commit vs. post-commit (are hooks catching issues before they reach the repo?), dependency vulnerability backlog age, and penetration test finding trends over time (is the same class of vulnerability recurring?).
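The first of these, mean time to remediate by severity, is straightforward to compute from finding records. A minimal sketch, assuming each record carries a severity, an opened date, and a closed date (or None if still open):

```python
from datetime import date

# Hypothetical sketch: mean time to remediate (MTTR) in days, grouped by
# severity, from (severity, opened, closed) records. Record layout is an
# assumption; still-open findings are excluded from the mean.
def mttr_by_severity(findings):
    totals: dict[str, list[int]] = {}
    for severity, opened, closed in findings:
        if closed is None:
            continue                  # still open: not remediated yet
        totals.setdefault(severity, []).append((closed - opened).days)
    return {sev: sum(days) / len(days) for sev, days in totals.items()}

records = [
    ("critical", date(2024, 3, 1), date(2024, 3, 4)),
    ("critical", date(2024, 3, 10), date(2024, 3, 15)),
    ("low", date(2024, 1, 1), None),  # unresolved, excluded from MTTR
]
```

Tracking open findings separately (backlog age) rather than folding them into MTTR keeps the metric honest: a team that never closes findings would otherwise show no MTTR at all.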
The most telling metric is what your penetration tests find. If external penetration tests consistently discover SQL injection, XSS, or hardcoded credentials, your SDLC security controls are not working — regardless of what your SAST dashboard shows. Penetration test findings are the ground truth that validates whether your SDLC security investments are producing results.
Validate Your SDLC Security Controls
Lorikeet Security helps engineering organizations build and validate secure development lifecycles — from threat modeling workshops to CI/CD security gate configuration to penetration testing that validates your controls are working. Let's assess where your SDLC stands today.