Every engineering team we talk to has some form of static analysis running in their CI pipeline. Semgrep, SonarQube, CodeQL, Snyk, Checkmarx - pick your flavor. The dashboards are green, the scan reports show a manageable number of findings, and the team feels confident that security is being handled.
Then a manual code review finds an authorization bypass that lets any authenticated user access any other user's data. Or a race condition in the payment flow that allows double-spending. Or a JWT implementation that accepts unsigned tokens because someone copied a Stack Overflow snippet without understanding the algorithm parameter.
None of those findings would ever appear in a SAST report. They are not syntax-level bugs. They are logic flaws, and they are the vulnerabilities that actually get companies breached.
This article walks through what a real secure code review process looks like, where SAST tools genuinely help, where they fall short, and how to get the most out of both. If you have been relying on automated scanning as your primary application security strategy, this is the reality check.
What SAST tools actually do well
Before we go any further, let us give credit where it is due. SAST tools are not useless. They solve a real problem, and for certain categories of vulnerabilities, they solve it efficiently and at scale. Here is what they are genuinely good at:
- Known vulnerability patterns: SQL injection via string concatenation, cross-site scripting through unsanitized output, path traversal with user-controlled file paths. These are well-defined patterns with clear signatures, and SAST tools detect them reliably.
- Dependency scanning: Identifying known CVEs in third-party libraries and packages. Tools like Snyk and Dependabot excel here because it is fundamentally a database lookup problem.
- Hardcoded secrets: API keys, passwords, and tokens committed to source code. Pattern matching works well for this category.
- Basic taint analysis: Tracing user input from source to sink through straightforward code paths. If the data flow is linear and within a single function or file, SAST can often flag it.
- Coding standard enforcement: Catching use of deprecated functions, insecure defaults, and banned API calls. This is essentially linting with a security flavor.
- Scale and consistency: A SAST tool scans every line of code on every commit without getting tired, distracted, or forgetting to check that one file. It provides a consistent baseline that human reviewers cannot match for sheer coverage.
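To make the first category concrete, here is the kind of finding SAST handles well. This is an illustrative sketch (the table and column names are invented), showing the string-concatenation source-to-sink pattern scanners flag reliably, next to the parameterized form they correctly leave alone:

```python
import sqlite3

def find_user_vulnerable(conn: sqlite3.Connection, username: str):
    # Classic string concatenation: user input flows straight into the
    # query text. SAST tools detect this source-to-sink pattern reliably.
    query = "SELECT id, email FROM users WHERE username = '" + username + "'"
    return conn.execute(query).fetchone()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver handles escaping, and the scanner
    # recognizes the safe pattern and stays quiet.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchone()
```

A payload like `' OR '1'='1` returns every row through the first function and nothing through the second, which is exactly the behavioral difference the pattern match encodes.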
If your codebase has no SAST tooling at all, adding it is an immediate win. It catches the low-hanging fruit, enforces hygiene, and frees up human reviewers to focus on the harder problems. The mistake is treating it as a replacement for human review rather than a complement to it.
Where SAST falls short
The fundamental limitation of SAST is that it analyzes code syntactically, not semantically. It can tell you that a function receives user input and passes it to a database query. It cannot tell you whether the authorization model around that query is correct, whether the business logic it implements makes sense, or whether the interaction between that function and three other services creates an exploitable race condition.
Here is how SAST and manual code review compare across the dimensions that matter most for real-world application security:
| Dimension | SAST Tools | Manual Code Review |
|---|---|---|
| Logic flaws | Cannot detect. No understanding of intended behavior. | Primary strength. Reviewer understands what the code should do vs. what it actually does. |
| Business logic bypass | Invisible. SAST has no concept of business rules. | Traced end-to-end by reviewing workflows, state machines, and conditional logic. |
| Authentication design | Can flag missing auth middleware on routes. Cannot evaluate auth architecture. | Evaluates session management, token lifecycle, MFA implementation, and credential storage holistically. |
| Authorization model | Cannot reason about role hierarchies, tenant isolation, or permission inheritance. | Maps the entire authorization model and tests boundary conditions across roles and tenants. |
| Race conditions | Extremely limited. Some tools flag basic threading issues in specific languages. | Identifies TOCTOU bugs, double-spend scenarios, and concurrent state mutation across distributed systems. |
| Data flow analysis | Limited to single-file or simple cross-file taint tracking. | Traces data across services, queues, databases, and third-party integrations with full context. |
| Context-aware findings | No context. A SAST finding cannot tell you if the vulnerability is actually exploitable. | Every finding includes exploitation context, real-world impact, and prioritized remediation. |
| False positive rate | High. Industry estimates range from 30% to 70% depending on tool and codebase. | Near zero. A human reviewer does not report findings they cannot validate. |
The pattern is clear: SAST tools handle pattern-matching problems well and struggle with everything that requires understanding. Security vulnerabilities that matter in the real world - the ones that lead to data breaches, account takeovers, and financial fraud - almost always live in the "understanding" category.
What a real secure code review process looks like
A manual secure code review is not "reading through the code and looking for problems." It is a structured, methodical process that examines the application from multiple angles, guided by threat modeling and prioritized by risk. Here is how we approach it at Lorikeet Security.
1. Scoping and threat modeling
Before reading a single line of code, we establish what the application does, who uses it, and what assets need protection. We work with the engineering team to identify the critical data flows, the trust boundaries, and the areas of the codebase that carry the most risk.
This is where most automated approaches fall down immediately. Without understanding what the application is supposed to do, you cannot determine whether it is doing it securely. A SAST tool treats a banking application and a social media platform identically. We do not.
The scoping phase produces a threat model that guides the rest of the review. It tells us where to look first, what to look for, and what constitutes a meaningful finding versus noise.
2. Architecture review
We examine the application's architecture for structural security issues: how services communicate, where trust boundaries exist, how data is partitioned between tenants, and how the system handles failure modes. Architectural flaws are the most expensive to fix later and the most impactful to catch early.
This includes reviewing service-to-service authentication, message queue security, database access patterns, and the separation (or lack thereof) between control plane and data plane operations. We also look at how AI-generated code has been integrated, since LLM-written services frequently skip inter-service authentication entirely.
3. Authentication and authorization deep-dive
Authentication and authorization are where the highest-impact vulnerabilities live. We review these systems with extreme scrutiny:
- How are user sessions created, stored, and invalidated?
- Is the JWT implementation correct? Are algorithms pinned? Are tokens validated on every request?
- How does the application handle password reset, account recovery, and MFA enrollment?
- Is the authorization model role-based, attribute-based, or something custom? Are permissions checked at every access point or only at the API gateway?
- In multi-tenant applications: can one tenant access another tenant's data under any circumstances?
We have seen well-built applications where every endpoint has auth middleware, but the middleware itself has a flaw. We have seen systems where authorization is checked on the primary resource but not on related resources accessed through joins or nested queries. These are the kinds of issues that require a human to trace through and reason about.
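The "related resources" gap above is easiest to see in code. This is a hedged sketch with invented table names: the related resource (an attachment) joins back to its parent (an invoice) so the parent's tenant check carries over, rather than being fetched by its own ID alone.

```python
import sqlite3

def get_invoice_attachment(conn: sqlite3.Connection,
                           attachment_id: int, org_id: int):
    # Join back to the parent invoice and enforce ownership there, so the
    # related resource inherits the same tenant boundary as its parent.
    # Fetching attachments by ID alone would skip the org check entirely.
    return conn.execute(
        """SELECT a.id, a.filename
           FROM attachments a
           JOIN invoices i ON i.id = a.invoice_id
           WHERE a.id = ? AND i.org_id = ?""",
        (attachment_id, org_id),
    ).fetchone()
```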
4. Data flow tracing
We trace sensitive data from the moment it enters the system to the moment it leaves. Every input source (HTTP requests, webhooks, file uploads, message queues, third-party API responses) is tracked through the application to every output sink (database writes, API responses, logs, emails, external service calls).
Along the way, we verify that validation is applied at the right points, that encoding is context-appropriate (HTML encoding for web output, parameterized queries for SQL, etc.), and that sensitive data is not leaked into logs, error messages, or analytics platforms.
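"Context-appropriate encoding" means the sink, not the source, dictates the defense. A minimal sketch for the HTML case: escape at the point of output, because validation at input time does not replace encoding at output time.

```python
import html

def render_comment(comment: str) -> str:
    # HTML-encode at the output sink, for this sink's context. The same
    # string headed for a SQL query would need parameterization instead,
    # and the same string headed for a log line would need its own rules.
    return "<p>" + html.escape(comment) + "</p>"
```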
5. Business logic analysis
This is the phase that separates a security code review from a SAST scan. We examine the application's business logic for flaws that an attacker could exploit:
- Can a user manipulate the sequence of operations to skip a required step (e.g., bypassing payment verification)?
- Are there edge cases in pricing, discounting, or credit logic that could be abused?
- Can an attacker influence referential integrity by creating, modifying, or deleting resources in an unexpected order?
- Are rate limits and abuse controls applied consistently, or can they be bypassed by switching endpoints or parameters?
Business logic vulnerabilities are application-specific by definition. No tool can generically detect them because they depend entirely on understanding what the application is supposed to do. This is where experienced security engineers earn their keep.
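One common remediation for the sequence-skipping class is an explicit state machine, enforced server-side. This is a hypothetical checkout flow, not any client's implementation: each transition names its required predecessor, so a request cannot jump from "cart" to "fulfilled" without passing through payment verification.

```python
# Hypothetical order lifecycle. The server consults this table on every
# state change; anything not listed is rejected outright.
VALID_TRANSITIONS = {
    "cart": {"payment_pending"},
    "payment_pending": {"paid"},
    "paid": {"fulfilled"},
}

def advance_order(current_state: str, next_state: str) -> str:
    # Reject any transition the table does not explicitly allow, which
    # closes the "skip the payment step" class of bypass.
    if next_state not in VALID_TRANSITIONS.get(current_state, set()):
        raise ValueError(f"illegal transition {current_state} -> {next_state}")
    return next_state
```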
6. Cryptography and secrets management
We review how the application handles cryptographic operations and sensitive credentials:
- Are passwords hashed with a modern algorithm (bcrypt, Argon2) with appropriate work factors?
- Is encryption used correctly? Are keys managed through a proper KMS or hardcoded in configuration files?
- Are API keys, database credentials, and signing secrets stored securely and rotated appropriately?
- Is TLS configured correctly for all external communications? Are certificate validations bypassed anywhere?
Cryptography bugs are particularly dangerous because they are silent. The application works perfectly from a functional perspective while providing zero actual security. A SAST tool might flag use of MD5, but it will not catch a custom encryption scheme that misuses AES-ECB or reuses initialization vectors.
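For the password question specifically, bcrypt and Argon2 (via their respective packages) are the usual answers; as a dependency-free sketch, the standard library's scrypt is also a memory-hard KDF. The work factors below are illustrative, not a tuned recommendation:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    # Per-user random salt plus a memory-hard KDF. The cost parameters
    # (n, r, p) here are example values; tune them for your hardware.
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1, maxmem=2**25, dklen=32)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt,
                               n=2**14, r=8, p=1, maxmem=2**25, dklen=32)
    # Constant-time comparison avoids leaking prefix-match timing.
    return hmac.compare_digest(candidate, digest)
```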
7. Third-party integration review
Modern applications rarely exist in isolation. We review how the application integrates with payment processors, identity providers, cloud services, and other external systems. Each integration is a potential attack surface:
- Are webhook payloads validated (signature verification) before being processed?
- Are OAuth flows implemented correctly, including state parameter validation and redirect URI restrictions?
- Are third-party API responses treated as untrusted input and validated before use?
- Are service account credentials scoped to minimum required permissions?
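The first item on that list deserves a sketch, since skipped signature checks are among the most common integration findings. Header names and encoding schemes vary by provider; this mirrors the widespread "hex-encoded HMAC-SHA256 of the raw request body" pattern:

```python
import hashlib
import hmac

def verify_webhook(secret: bytes, payload: bytes, signature_hex: str) -> bool:
    # Recompute the HMAC over the raw body exactly as received (before
    # any JSON parsing) and compare in constant time.
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)
```

Processing the payload only after this returns true means a forged or replayed body with a bad signature never reaches business logic.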
8. Findings report with prioritized remediation
The final deliverable is not a list of findings sorted by CVSS score. It is a prioritized remediation roadmap that accounts for exploitability, business impact, and fix complexity. Every finding includes:
- A clear description of the vulnerability and affected code
- A proof-of-concept or detailed exploitation scenario
- The real-world impact if the vulnerability were exploited
- Specific remediation guidance with code examples
- Priority ranking based on risk, not just severity
We also provide a debrief session with the engineering team to walk through findings, answer questions, and discuss architectural improvements. The goal is not just to find bugs but to help the team build more securely going forward.
Real examples of what manual review catches
Abstract explanations only go so far. Here are four categories of vulnerabilities we routinely find during manual code reviews that SAST tools consistently miss. These are composites drawn from real engagements with details changed to protect client confidentiality.
IDOR in multi-tenant API
- The API endpoint `/api/documents/:id` checked that the user was authenticated but never verified the document belonged to the user's organization
- SAST saw a parameterized query with proper input validation and reported no issues
- Manual review traced the authorization logic and discovered any authenticated user could access any document by iterating IDs
- Impact: full cross-tenant data exposure affecting every customer on the platform
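The fix is a single predicate. A hedged sketch with invented schema names: scope the lookup to the caller's organization so iterating IDs yields nothing outside the tenant.

```python
import sqlite3

def fetch_document(conn: sqlite3.Connection, doc_id: int, org_id: int):
    # Authentication tells you who the caller is; this WHERE clause is
    # the authorization check, enforced at the data layer. A miss and a
    # cross-tenant ID both return None, so there is no existence oracle.
    return conn.execute(
        "SELECT id, title FROM documents WHERE id = ? AND org_id = ?",
        (doc_id, org_id),
    ).fetchone()
```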
Race condition in payments
- The checkout flow checked the user's credit balance, processed the purchase, then deducted the balance in three separate database operations
- By sending concurrent requests, an attacker could spend the same credits multiple times before the balance was decremented
- SAST tools do not model concurrent execution or database transaction isolation levels
- Impact: unlimited free purchases until the race window was closed with proper database locking
Insecure file upload path
- File uploads were validated by extension and MIME type, but the storage path was constructed using a user-controlled `folder` parameter
- An attacker could set `folder=../../public` to write files into the web-accessible directory, achieving remote code execution
- SAST flagged the upload endpoint for "missing virus scanning" but missed the path traversal entirely because the path construction logic spanned three functions across two files
- Impact: remote code execution on the application server
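The standard fix is to resolve the constructed path and verify it is still contained within the storage root. A sketch assuming a hypothetical `UPLOAD_ROOT` and Python 3.9+ for `is_relative_to`:

```python
from pathlib import Path

UPLOAD_ROOT = Path("/srv/uploads")  # hypothetical storage root

def resolve_upload_path(folder: str, filename: str) -> Path:
    # Resolve first, then check containment: '..' segments collapse
    # during resolution, so a traversal attempt lands outside the root
    # and fails the check below.
    candidate = (UPLOAD_ROOT / folder / filename).resolve()
    if not candidate.is_relative_to(UPLOAD_ROOT.resolve()):
        raise ValueError("path escapes upload root")
    return candidate
```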
JWT algorithm confusion
- The application used RS256 (asymmetric) JWT signing, but the verification library accepted the `alg` header from the token itself
- An attacker could craft a token with `"alg": "HS256"` and sign it using the public key (which was exposed via a JWKS endpoint) as the HMAC secret
- SAST tools checked that JWT was being used and that tokens were validated, and reported the implementation as secure
- Impact: complete authentication bypass allowing any attacker to forge valid tokens for any user, including administrators
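The remediation is to pin the algorithm server-side and never trust the token's own header. With a library like PyJWT that means passing `algorithms=["RS256"]` to `jwt.decode`; this dependency-free sketch shows the underlying check a reviewer looks for, applied to the token header before any signature verification:

```python
import base64
import json

ALLOWED_ALGS = {"RS256"}  # pinned server-side; never read from the token

def check_token_alg(token: str) -> str:
    # Decode only the header segment. JWT uses unpadded base64url, so
    # restore the padding before decoding.
    header_b64 = token.split(".")[0]
    padded = header_b64 + "=" * (-len(header_b64) % 4)
    header = json.loads(base64.urlsafe_b64decode(padded))
    if header.get("alg") not in ALLOWED_ALGS:
        raise ValueError(f"disallowed alg: {header.get('alg')}")
    return header["alg"]
```

With the algorithm pinned, the HS256 confusion attack fails before the forged signature is ever examined.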
Every one of these vulnerabilities passed SAST scanning with zero findings. Every one of them would have resulted in a serious security incident if discovered by an attacker instead of a reviewer. This is not a hypothetical argument. As we discussed in our code review vs. pentest breakdown, these are the categories of bugs that show up consistently in manual assessments and almost never in automated ones.
When you need both SAST and manual review
This is not an either-or decision. The strongest security posture comes from layering automated and manual approaches to cover each other's blind spots.
The layered approach: Use SAST tools in your CI/CD pipeline to catch known vulnerability patterns, enforce coding standards, and flag dependency issues on every commit. This gives you continuous, consistent coverage for the categories of bugs that pattern matching handles well.
Then layer manual code reviews at key milestones: before major releases, after significant refactors, when adding new authentication or payment flows, and at minimum annually. The manual review focuses on the logic, architecture, and design-level issues that automated tools cannot evaluate.
The result: SAST handles the volume (every commit, every file, every dependency) while manual review handles the depth (threat modeling, logic analysis, architectural assessment). Neither approach alone provides adequate coverage. Together, they address fundamentally different categories of risk with the right tool for each.
Think of it like testing. Unit tests catch regressions automatically on every build. But you still need a QA engineer to evaluate whether the feature actually works correctly from a user's perspective. Automated and manual security assessments serve the same complementary roles.
How to prepare your codebase for a code review
If you are planning to engage a security team for a manual code review, a small amount of preparation goes a long way. It helps the reviewers ramp up faster and ensures they spend their time on security analysis rather than trying to understand your build system. Here is what we recommend:
- Provide a working development environment. The reviewer should be able to build and run the application locally. Docker Compose setups, clear README instructions, or a pre-configured dev environment (Codespaces, Gitpod) all work well.
- Document your architecture. Even a rough diagram showing services, databases, message queues, and external integrations saves hours. If you do not have one, a 30-minute whiteboard session at the start of the engagement is just as effective.
- Identify your critical paths. Tell the reviewers which parts of the application handle authentication, authorization, payments, PII, and other sensitive operations. Do not make them guess.
- Share your threat model if you have one. If you do not, that is fine - the review team will create one during scoping. But if you have already thought about your threats, sharing that context accelerates the process.
- Flag areas of concern. If there is a section of the codebase that was written under time pressure, generated by AI, or maintained by a developer who has since left, say so. Reviewers will find the issues regardless, but pointing them in the right direction maximizes the value of the engagement.
- Run your existing SAST tools first. Fix the obvious findings before the manual review begins. There is no point paying a security engineer to find the same SQL injection that SonarQube already flagged. Let them focus on what only a human can find.
- Ensure the review team has access to everything they need. Source code repositories, API documentation, environment variables (sanitized), database schemas, and CI/CD configuration. Incomplete access leads to incomplete reviews.
The better prepared your team is, the more your review budget goes toward finding real vulnerabilities instead of environmental setup and context gathering.
The bottom line
SAST tools are a necessary part of a modern application security program. They are not a sufficient one. If your security strategy begins and ends with automated scanning, you are catching the bugs that are easy to find and missing the ones that actually matter.
A manual secure code review is not a luxury. It is the only reliable way to evaluate authentication design, authorization models, business logic, race conditions, and the dozens of other vulnerability categories that require a human to understand and assess. The companies that get breached are not the ones with zero security tooling. They are the ones that mistook tooling for strategy.
Run your SAST tools. Fix what they find. And then bring in a security engineer who can look at your application the way an attacker would - with patience, context, and the ability to reason about what the code is actually doing.
Ready for a security code review that goes beyond SAST?
Our security engineers manually review your application's source code for logic flaws, authorization bypasses, and the vulnerabilities that automated tools miss. Every finding comes with prioritized remediation guidance your team can act on immediately.