We have performed hundreds of security code reviews across SaaS platforms, fintech applications, healthcare systems, and AI-powered startups. The technology stacks vary. The team sizes vary. The industries vary. But the vulnerabilities? They are remarkably consistent.
The same ten categories of findings appear in the vast majority of codebases we review. Some of these are well-known vulnerabilities that have existed for decades. Others are newer patterns that have emerged with the rise of AI-generated code and modern JavaScript frameworks. All of them are exploitable, and all of them are fixable.
This is not a theoretical list pulled from a textbook. These are the actual findings we write up in client reports, ranked by how frequently we encounter them. For each one, we will show you what the vulnerable code looks like, explain why it is dangerous, and give you the exact fix.
1. Broken Access Control / Missing Authorization Checks (Critical)
This is the single most common finding in our code reviews, and it is also the most dangerous. Broken access control means that your application checks whether a user is logged in but does not check whether that user is allowed to do what they are asking to do. Authentication answers "who are you?" Authorization answers "are you allowed to do this?" Most applications get the first part right and completely skip the second.
Why it happens
Developers build features from the perspective of the happy path. They think about what a legitimate user would do: log in, view their own profile, update their own settings. The idea that User A would manually change an ID in the URL to access User B's data simply does not cross their mind during development. Frameworks often provide authentication middleware out of the box, but authorization logic is almost always left to the developer to implement manually, and it gets missed on endpoint after endpoint.
Real-world impact
An attacker who discovers this pattern can access, modify, or delete any user's data simply by iterating through ID values. We have seen this lead to full account takeovers, exposure of medical records, theft of financial data, and exfiltration of entire customer databases. OWASP ranks Broken Access Control as the #1 web application security risk for good reason.
Vulnerable code
```php
$user_id = $_GET['id'];
// Authentication check exists...
if (!isset($_SESSION['user_id'])) {
    header('Location: /login');
    exit;
}
// But NO authorization check - any logged-in user can view any profile
$stmt = $pdo->prepare('SELECT * FROM users WHERE id = ?');
$stmt->execute([$user_id]);
$profile = $stmt->fetch();
```
The fix
```php
$user_id = (int) $_GET['id'];
$session_user = (int) $_SESSION['user_id'];
// Option 1: Explicit ownership check (cast to int so strict comparison is safe)
if ($user_id !== $session_user && !isAdmin($session_user)) {
    http_response_code(403);
    exit('Forbidden');
}
// Option 2 (preferred): Scope queries to the authenticated user's tenant
$stmt = $pdo->prepare('SELECT * FROM users WHERE id = ? AND company_id = ?');
$stmt->execute([$user_id, $_SESSION['company_id']]);
```
The most robust approach is to scope every database query to the authenticated user's tenant or organization. This way, even if an attacker manipulates IDs, the query itself prevents cross-tenant data access. We call this "authorization by construction" because the protection is built into the query rather than bolted on as a conditional check that can be missed on one endpoint out of fifty.
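In Node, the same two options can be sketched as small helpers. The names here (`canViewProfile`, `scopedUserQuery`, the shape of `session`) are illustrative assumptions, not a specific framework API:

```javascript
// Option 1 as a reusable predicate: ownership or admin, nothing else.
function canViewProfile(session, targetUserId) {
  return session.userId === targetUserId || session.isAdmin === true;
}

// Option 2, "authorization by construction": the tenant condition is part
// of the query itself, so a manipulated id cannot cross tenants.
function scopedUserQuery(targetUserId, session) {
  return {
    sql: 'SELECT * FROM users WHERE id = ? AND company_id = ?',
    params: [targetUserId, session.companyId],
  };
}
```

Because the tenant scope lives in the query builder rather than in per-endpoint conditionals, there is no fiftieth endpoint on which to forget it.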
2. Hardcoded Secrets and API Keys (Critical)
Every code review we perform involves a search for hardcoded secrets, and we find them in roughly 70% of codebases. API keys, database passwords, JWT signing secrets, OAuth client secrets, encryption keys, and third-party service credentials embedded directly in source code files that end up in version control.
Why it happens
This has always been a problem, but AI-generated code has made it significantly worse. When developers ask an AI coding assistant to "connect to the Stripe API" or "set up JWT authentication," the generated code almost always includes placeholder values that look like real credentials. Developers replace those placeholders with actual keys during development, intend to move them to environment variables later, and never do. The secret gets committed, pushed, and forgotten.
Real-world impact
Secrets in source code are trivially discoverable. Attackers scrape public GitHub repositories in real time looking for API keys. Internal repositories are accessible to every employee, contractor, and anyone who compromises a developer's laptop. A single leaked AWS key can result in hundreds of thousands of dollars in fraudulent compute charges. A leaked database password gives an attacker direct access to your entire data store, bypassing every application-level security control you have built.
Vulnerable code
```php
$db_host = 'prod-db.us-east-1.rds.amazonaws.com';
$db_password = 'xK9#mP2$vL7nQ4wR';
$jwt_secret = 'super-secret-jwt-key-2026';
$stripe_key = 'sk_live_51ABC123DEF456...';
$openai_key = 'sk-proj-abc123def456...';
```
The fix
```php
// Fail fast if configuration is missing (throw expressions require PHP 8.0+)
$db_host = getenv('DB_HOST') ?: throw new RuntimeException('DB_HOST not set');
$db_password = getenv('DB_PASSWORD') ?: throw new RuntimeException('DB_PASSWORD not set');
$jwt_secret = getenv('JWT_SECRET') ?: throw new RuntimeException('JWT_SECRET not set');
$stripe_key = getenv('STRIPE_SECRET_KEY') ?: throw new RuntimeException('STRIPE_SECRET_KEY not set');

// .gitignore must include: .env, *.pem, *credentials*
// Use a secrets manager (AWS Secrets Manager, HashiCorp Vault) in production
```
Environment variables are the minimum standard. For production systems, use a dedicated secrets manager that provides rotation, auditing, and access controls. And always run a secrets scanner like gitleaks or trufflehog as a pre-commit hook to catch secrets before they ever reach your repository.
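The same fail-fast pattern applies in any runtime. A minimal Node sketch (variable names are examples; the point is that a missing secret should crash at startup, not limp along with an empty string):

```javascript
// Read a required variable or fail loudly. Accepting the env object as a
// parameter keeps the helper testable.
function requireEnv(name, env = process.env) {
  const value = env[name];
  if (value === undefined || value === '') {
    throw new Error(`Required environment variable ${name} is not set`);
  }
  return value;
}

// Build all config in one place so a misconfigured deploy fails immediately.
function loadConfig(env = process.env) {
  return {
    dbPassword: requireEnv('DB_PASSWORD', env),
    jwtSecret: requireEnv('JWT_SECRET', env),
  };
}
```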
3. SQL Injection via String Concatenation (Critical)
SQL injection has been the best-documented vulnerability in web security for more than 25 years, and parameterized queries have been the standard defense for almost as long. And yet we still find SQL injection in a significant share of the codebases we review. When it is present, it remains the number one vulnerability category in terms of potential damage.
Why it happens
Developers learn SQL by writing queries with string concatenation. Tutorials still teach it this way. AI code generators frequently produce concatenated queries because they appear in a massive volume of training data. When building search features, filtering systems, or dynamic reports, it feels natural to build the query string by appending user input. The developer may not realize they have created an injection point, especially in complex queries with multiple dynamic conditions.
Real-world impact
SQL injection is a "game over" vulnerability. An attacker can read your entire database, modify or delete data, bypass authentication, and in some configurations, execute operating system commands on the database server itself. A single SQL injection in a login form can give an attacker administrative access. A single SQL injection in a search feature can dump every customer record in your system.
Vulnerable code
```php
$search = $_GET['q'];
$sort = $_GET['sort'];
// VULNERABLE - direct concatenation of user input
$sql = "SELECT * FROM products WHERE name LIKE '%" . $search . "%' ORDER BY " . $sort;
$result = $pdo->query($sql);
// Attacker input: q=' UNION SELECT username,password FROM users--
```
The fix
```php
$search = '%' . $_GET['q'] . '%';
// Whitelist allowed column names for ORDER BY (cannot be parameterized)
$allowed_sorts = ['name', 'price', 'created_at'];
$sort = in_array($_GET['sort'], $allowed_sorts) ? $_GET['sort'] : 'name';
$stmt = $pdo->prepare("SELECT * FROM products WHERE name LIKE ? ORDER BY {$sort}");
$stmt->execute([$search]);
```
Use parameterized queries for all user-supplied values. For parts of the query that cannot be parameterized, such as column names, table names, and sort directions, use a strict whitelist of allowed values. If the input does not match the whitelist, fall back to a safe default. Never trust the user to provide valid SQL identifiers.
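The whitelist-plus-placeholders pattern ports directly to Node. This sketch only builds the query text and bound parameters (function and table names are illustrative); the values would then go to whatever parameterized client you use:

```javascript
const ALLOWED_SORTS = new Set(['name', 'price', 'created_at']);

// Values go through placeholders; identifiers go through a whitelist
// with a safe fallback.
function buildProductSearch(q, sort) {
  const safeSort = ALLOWED_SORTS.has(sort) ? sort : 'name';
  return {
    sql: `SELECT * FROM products WHERE name LIKE ? ORDER BY ${safeSort}`,
    params: [`%${q}%`],
  };
}
```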
4. Cross-Site Scripting (XSS) from Unescaped Output (High)
Cross-Site Scripting occurs when user-controlled data is rendered in the browser without proper encoding. The attacker injects JavaScript that executes in the context of your application, giving them the ability to steal session tokens, redirect users to phishing pages, modify page content, or perform actions on behalf of the victim.
Why it happens
Modern frameworks like React and Vue escape output by default, which has reduced XSS in newer single-page applications. But we still find it consistently in three places: server-rendered PHP templates where output is echoed directly, JavaScript code that sets innerHTML or uses jQuery's .html() method, and admin dashboards that display user-submitted content under the assumption that "only admins see this, so it is safe." That last assumption is exactly how stored XSS in admin panels leads to privilege escalation.
Real-world impact
Stored XSS is especially dangerous because it executes every time a user views the affected page. An attacker who injects a script into a support ticket description can steal the session cookie of every support agent who views it. From there, the attacker has admin access without ever needing to crack a password. We have seen stored XSS in CMS platforms used to deface customer-facing pages, inject cryptocurrency miners, and redirect checkout flows to attacker-controlled payment pages.
Vulnerable code
```php
<h2>Welcome, <?= $user['display_name'] ?></h2>
<div class="bio"><?= $user['bio'] ?></div>
```

```javascript
// JavaScript setting innerHTML with unsanitized data
const renderComment = (comment) => {
  document.getElementById('comments').innerHTML +=
    `<div class="comment">${comment.body}</div>`;
};
```
The fix
```php
<h2>Welcome, <?= htmlspecialchars($user['display_name'], ENT_QUOTES, 'UTF-8') ?></h2>
<div class="bio"><?= htmlspecialchars($user['bio'], ENT_QUOTES, 'UTF-8') ?></div>
```

```javascript
// JavaScript: Use textContent instead of innerHTML
const renderComment = (comment) => {
  const div = document.createElement('div');
  div.className = 'comment';
  div.textContent = comment.body;
  document.getElementById('comments').appendChild(div);
};
```
The rule is simple: encode all output, everywhere, every time. In PHP, use htmlspecialchars() with the ENT_QUOTES flag. In JavaScript, use textContent instead of innerHTML whenever possible. When you absolutely must render user-supplied HTML, use a battle-tested sanitization library like DOMPurify. Never rely on "this field will not contain malicious input" as a security control.
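When you are outside a framework and have no choice but to build HTML strings, a small encoder is the fallback. This sketch mirrors what htmlspecialchars with ENT_QUOTES does (order matters: ampersands must be encoded first):

```javascript
// Encode the five characters that matter in an HTML text or attribute context.
function escapeHtml(str) {
  return String(str)
    .replace(/&/g, '&amp;')   // must run first, or it double-encodes
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#039;');
}
```

This covers HTML body and quoted-attribute contexts only; URLs, JavaScript strings, and CSS need their own context-specific encoding.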
5. Insecure JWT Implementation (High)
JSON Web Tokens have become the default authentication mechanism for APIs and single-page applications. The problem is not JWT itself but the way it is implemented. We regularly find JWT implementations that are vulnerable to algorithm confusion attacks, fail to validate signatures properly, store sensitive data in the payload, or use weak signing keys that can be brute-forced offline.
Why it happens
JWT looks simple on the surface: encode some data, sign it, send it to the client. Libraries make it easy to generate tokens. But the security of JWT depends entirely on the details of implementation, and those details are easy to get wrong. Developers copy JWT code from tutorials that skip critical validation steps. They use symmetric keys that are too short. They put user roles, email addresses, and permissions in the payload without understanding that JWT payloads are base64-encoded, not encrypted, and can be read by anyone.
Real-world impact
An attacker who can forge a JWT can impersonate any user, escalate to admin privileges, or bypass authentication entirely. The classic "algorithm none" attack, setting the algorithm header to "none" to bypass signature verification, still works against poorly configured libraries. If the signing key is weak, it can be cracked offline using tools like hashcat in minutes. We have seen JWT vulnerabilities used to steal entire SaaS customer databases by forging admin tokens.
Vulnerable code
```javascript
const jwt = require('jsonwebtoken');

// VULNERABLE: Accepts any algorithm, including 'none'
const decoded = jwt.verify(token, 'secret123');

// VULNERABLE: Weak key + sensitive data in payload
const token = jwt.sign({
  userId: user.id,
  email: user.email, // Sensitive data in payload
  role: 'admin',     // Role in payload = forgeable
  ssn: user.ssn      // PII in an unencrypted token
}, 'secret123');
```
The fix
```javascript
// Use a strong, random key (256+ bits) from environment
const JWT_SECRET = process.env.JWT_SECRET;

// ALWAYS specify the allowed algorithm explicitly
const decoded = jwt.verify(token, JWT_SECRET, {
  algorithms: ['HS256'], // Reject 'none' and RS/HS confusion
  issuer: 'your-app.com',
  audience: 'api.your-app.com',
});

// Keep payload minimal - look up roles from the database
const token = jwt.sign({
  sub: user.id, // Only the user identifier
}, JWT_SECRET, {
  algorithm: 'HS256',
  expiresIn: '1h', // Short expiration
  issuer: 'your-app.com',
});
```
Three rules for safe JWT: always specify the algorithm explicitly in the verification call, use a strong random key stored in environment variables (not source code), and keep the payload minimal. Roles and permissions should be looked up from the database on each request, not trusted from the token. If you need to store sensitive data in the token itself, use JWE (encrypted tokens) instead of JWS (signed tokens).
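The "look roles up, don't trust the token" rule can be factored into one small function. In this sketch, `verifyToken` and `findUserById` are placeholders for your JWT library call and your data layer, injected as parameters so the logic is testable:

```javascript
// Trust only the `sub` claim; authorization data is loaded fresh per request.
function authorize(token, verifyToken, findUserById) {
  const claims = verifyToken(token);       // throws if signature/expiry invalid
  const user = findUserById(claims.sub);
  if (!user) throw new Error('Unknown user');
  return { id: user.id, role: user.role }; // role comes from the DB, not the token
}
```

Even if an attacker smuggles `role: 'admin'` into a token payload, it is simply never read.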
6. Mass Assignment / Over-Posting (High)
Mass assignment occurs when an application binds HTTP request parameters directly to internal data models without filtering which fields the user is allowed to set. The user sends a request to update their profile name, but they also include is_admin: true or price: 0 in the request body, and the application blindly saves it to the database.
Why it happens
Modern frameworks encourage patterns like Model::create($request->all()) or Object.assign(user, req.body) because they are fast and convenient during development. The developer sees it working on the frontend, where the form only sends name and email, and does not consider that an attacker can send any fields they want using a tool like Burp Suite or a simple curl command. This vulnerability was famously exploited in the 2012 GitHub mass assignment incident, where a user gave themselves commit access to any repository on the platform.
Real-world impact
Depending on the data model, an attacker can escalate privileges to admin, modify pricing on products and orders, change ownership of resources, bypass approval workflows, or manipulate any field that exists on the database model. It is especially dangerous in e-commerce applications where order amounts, discount codes, and shipping fees are model attributes that sit alongside user-editable fields like name and address.
Vulnerable code
```javascript
app.put('/api/users/:id', async (req, res) => {
  // VULNERABLE: Spreads ALL request body fields into update
  await User.findByIdAndUpdate(req.params.id, req.body);
  res.json({ success: true });
});
// Attacker sends: { "name": "Hacker", "role": "admin", "verified": true }
```
The fix
```javascript
app.put('/api/users/:id', async (req, res) => {
  // Explicitly pick only the fields users are allowed to update
  const allowedFields = {
    name: req.body.name,
    email: req.body.email,
    bio: req.body.bio,
  };
  // Remove undefined values so we don't overwrite with nulls
  Object.keys(allowedFields).forEach(key =>
    allowedFields[key] === undefined && delete allowedFields[key]
  );
  await User.findByIdAndUpdate(req.params.id, allowedFields);
  res.json({ success: true });
});
```
Always use an explicit allowlist of fields that can be set by the user. Never pass raw request data directly to model update methods. In frameworks that support it, use features like Laravel's $fillable property or Mongoose's schema-level field restrictions to enforce which fields are writable. Treat the request body as untrusted input, because that is exactly what it is.
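A tiny generic helper captures the whole fix; the field names in the usage line are examples:

```javascript
// Copy only the named fields that are actually present in the input.
function pick(source, allowed) {
  const out = {};
  for (const key of allowed) {
    if (source[key] !== undefined) out[key] = source[key];
  }
  return out;
}

// Usage: const update = pick(req.body, ['name', 'email', 'bio']);
```

A smuggled `role: 'admin'` or `verified: true` is simply dropped on the floor before it ever reaches the model.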
7. Missing Rate Limiting on Authentication (Medium)
We routinely find login endpoints, password reset flows, MFA verification, and account registration forms that accept unlimited requests at any speed. No rate limiting. No account lockout. No CAPTCHA. Nothing prevents an attacker from trying millions of password combinations or flooding the password reset endpoint to enumerate which email addresses are registered in your system.
Why it happens
Rate limiting is not a feature that appears in product requirements. It does not show up in user stories or sprint planning. It is invisible to legitimate users when it works correctly, and its absence is only noticed during an attack. Developers building authentication flows focus on making login work correctly for the happy path, not on what happens when someone sends 10,000 login attempts per minute from a botnet.
Real-world impact
Without rate limiting, an attacker can perform credential stuffing attacks using lists of breached passwords from other services, brute-force short or common passwords, enumerate valid usernames via timing differences or inconsistent error messages, and abuse password reset flows to spam users or discover which email addresses are registered. These attacks are automated, cheap to run, and operate continuously against any exposed login page.
Vulnerable code
```javascript
app.post('/api/auth/login', async (req, res) => {
  const { email, password } = req.body;
  const user = await User.findOne({ email });
  if (!user || !await bcrypt.compare(password, user.password)) {
    return res.status(401).json({ error: 'Invalid credentials' });
  }
  // No rate limiting, no lockout, no delay
  // Attacker can try millions of passwords
  res.json({ token: generateToken(user) });
});
```
The fix
```javascript
const rateLimit = require('express-rate-limit');

// Apply strict rate limiting to authentication endpoints
const authLimiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minute window
  max: 10, // 10 attempts per window
  skipSuccessfulRequests: true,
  keyGenerator: (req) => req.body.email || req.ip,
  message: { error: 'Too many attempts. Try again later.' },
});

app.post('/api/auth/login', authLimiter, async (req, res) => {
  // ... authentication logic ...
});

// Also rate limit password reset and registration
app.post('/api/auth/forgot-password', authLimiter, async (req, res) => {
  // Always return the same response whether email exists or not
  res.json({ message: 'If that email exists, a reset link has been sent.' });
});
```
Rate limiting should be applied to every authentication-related endpoint: login, registration, password reset, MFA verification, and API key generation. Use a sliding window or token bucket algorithm, and key the limit on both the IP address and the target account to prevent distributed attacks. Return generic error messages that do not reveal whether a specific account exists.
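For illustration, the token bucket algorithm mentioned above fits in a few lines. This in-memory sketch is per-process only; a production system should back the state with Redis or similar so limits survive restarts and apply across instances:

```javascript
// Minimal token bucket: capacity tokens, refilled at refillPerSec tokens/sec.
class TokenBucket {
  constructor(capacity, refillPerSec) {
    this.capacity = capacity;
    this.refillPerSec = refillPerSec;
    this.tokens = capacity;
    this.last = Date.now();
  }
  // Returns true if a request may proceed, false if it should be rejected.
  tryRemove(now = Date.now()) {
    const elapsed = (now - this.last) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsed * this.refillPerSec);
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

Keying one bucket per IP and another per target account, as the text suggests, means a distributed botnet still cannot hammer a single account.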
8. Insecure File Upload Handling (High)
File upload functionality is one of the most dangerous features an application can offer. We regularly find upload endpoints that trust the client-provided filename, validate file type only by checking the extension or Content-Type header (both of which the attacker fully controls), store uploaded files in a directory that is directly accessible via the web server, and fail to scan for malicious content.
Why it happens
File uploads are a UX requirement for almost every application: profile pictures, document attachments, CSV imports, support ticket screenshots. Developers implement the feature to meet the functional requirement and move on. The file upload looks like it works perfectly in testing because nobody tries to upload a PHP webshell during QA. The validation checks the extension, which feels like security, but the attacker controls the extension too.
Real-world impact
If an attacker can upload a file with a .php, .jsp, or .aspx extension to a web-accessible directory, they can execute arbitrary code on the server, giving them full control of the machine. Even without code execution, path traversal in filenames can let an attacker overwrite critical configuration files. Uploaded HTML files can be used for convincing phishing attacks hosted on your domain. Uploaded SVG files can contain embedded JavaScript that triggers XSS.
Vulnerable code
```php
$filename = $_FILES['avatar']['name']; // Trusting client filename
$upload_dir = './uploads/'; // Web-accessible directory
// Only checks extension - attacker sends "shell.php.jpg" or "shell.pHp"
$ext = pathinfo($filename, PATHINFO_EXTENSION);
if (in_array($ext, ['jpg', 'png', 'gif'])) {
    move_uploaded_file($_FILES['avatar']['tmp_name'], $upload_dir . $filename);
}
```
The fix
```php
$allowed_types = ['image/jpeg', 'image/png', 'image/gif'];
// Check actual file content (magic bytes), not the client-provided type
$finfo = new finfo(FILEINFO_MIME_TYPE);
$mime = $finfo->file($_FILES['avatar']['tmp_name']);
if (!in_array($mime, $allowed_types)) {
    http_response_code(400);
    exit('Invalid file type');
}
// Generate random filename with correct extension from MIME
$ext_map = ['image/jpeg' => 'jpg', 'image/png' => 'png', 'image/gif' => 'gif'];
$safe_name = bin2hex(random_bytes(16)) . '.' . $ext_map[$mime];
// Store OUTSIDE web root, serve via a controller with proper headers
$upload_dir = '/var/uploads/'; // NOT in /var/www/html/
move_uploaded_file($_FILES['avatar']['tmp_name'], $upload_dir . $safe_name);
```
Secure file upload requires multiple layers: validate the MIME type from the file content using magic bytes (not headers or extensions), generate a random filename that the attacker cannot predict or control, store files outside the web root, serve them through a controller that sets Content-Disposition: attachment, and enforce a strict file size limit. For production applications, store uploads in a separate storage service like S3 with a different domain to prevent cookie leakage and session hijacking.
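The magic-byte check translates to any runtime. A Node sketch, covering only the three image signatures from the PHP example above (a real service would use a maintained detection library rather than a hand-rolled table):

```javascript
// Well-known leading bytes for JPEG, PNG, and GIF files.
const SIGNATURES = [
  { mime: 'image/jpeg', bytes: [0xff, 0xd8, 0xff] },
  { mime: 'image/png',  bytes: [0x89, 0x50, 0x4e, 0x47] },
  { mime: 'image/gif',  bytes: [0x47, 0x49, 0x46, 0x38] },
];

// Identify an upload by its content, never by its name or Content-Type header.
function sniffImageMime(buffer) {
  for (const { mime, bytes } of SIGNATURES) {
    if (bytes.every((b, i) => buffer[i] === b)) return mime;
  }
  return null; // unknown or disallowed type: reject the upload
}
```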
9. Race Conditions in Financial Operations (Critical)
Race conditions occur when two or more requests are processed concurrently and the result depends on the order of execution. In financial operations, this creates opportunities for double-spending, duplicate coupon redemption, balance manipulation, and inventory theft. These are time-of-check to time-of-use (TOCTOU) bugs, and they are among the hardest vulnerabilities to detect through automated testing or even manual testing that sends one request at a time.
Why it happens
Developers write code that is logically correct when requests arrive one at a time: check the balance, verify it is sufficient, deduct the amount. But when two identical requests arrive within milliseconds of each other, both pass the balance check before either has deducted the funds. The result is a double withdrawal from a single balance. This class of bug is invisible in normal functional testing because testers click buttons sequentially. It only manifests under concurrent load or deliberate exploitation with tools like Burp Suite's turbo-intruder.
Real-world impact
We have found race conditions that allowed users to redeem a single coupon code multiple times, transfer more money than their available balance, purchase items while paying only once, and claim signup bonuses repeatedly. In one fintech code review, we identified a race condition in the withdrawal flow that would have allowed an attacker to drain the company's entire operational float by sending parallel withdrawal requests faster than the balance could be decremented.
Vulnerable code
```php
$sender = getUser($sender_id);
$recipient = getUser($recipient_id);
$amount = (float) $_POST['amount'];
// CHECK: Does sender have enough balance?
if ($sender['balance'] < $amount) {
    exit('Insufficient funds');
}
// USE: Deduct and credit (TOCTOU gap here!)
updateBalance($sender_id, $sender['balance'] - $amount);
updateBalance($recipient_id, $recipient['balance'] + $amount);
// Two parallel requests both pass the check before either deducts
```
The fix
```php
$pdo->beginTransaction();
try {
    // SELECT ... FOR UPDATE locks the row until commit
    $stmt = $pdo->prepare(
        'SELECT balance FROM users WHERE id = ? FOR UPDATE'
    );
    $stmt->execute([$sender_id]);
    $balance = $stmt->fetchColumn();
    if ($balance < $amount) {
        $pdo->rollBack();
        exit('Insufficient funds');
    }
    // Atomic deduction - no TOCTOU gap
    $pdo->prepare('UPDATE users SET balance = balance - ? WHERE id = ?')
        ->execute([$amount, $sender_id]);
    $pdo->prepare('UPDATE users SET balance = balance + ? WHERE id = ?')
        ->execute([$amount, $recipient_id]);
    $pdo->commit();
} catch (Exception $e) {
    $pdo->rollBack();
    throw $e;
}
```
The fix for race conditions is atomicity. Use database transactions with SELECT ... FOR UPDATE to lock rows during critical operations. Use atomic SQL operations like balance = balance - ? instead of reading the balance, computing the new value in application code, and writing it back. For distributed systems, consider advisory locks, idempotency keys, or optimistic concurrency control with version columns. Test for race conditions by sending parallel requests with tools like turbo-intruder or a simple multi-threaded script.
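Idempotency keys deserve a sketch of their own. The in-memory Map below illustrates the contract (same key, at most one execution); in production the key belongs in a database column with a unique index so the guarantee holds across processes and restarts:

```javascript
const processed = new Map(); // idempotency key -> cached result

// Run `operation` at most once per key; a retried or raced duplicate
// request gets the original result back instead of executing again.
function withIdempotency(key, operation) {
  if (processed.has(key)) return processed.get(key);
  const result = operation();
  processed.set(key, result);
  return result;
}
```

The client generates the key (for example, one UUID per "Transfer" button press), so a double-click or a network retry can never become a double-spend.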
10. Insufficient Logging and Error Handling (Medium)
This finding is less about what an attacker can exploit directly and more about what happens after they do. We consistently find applications with no meaningful audit logs for security-relevant events, error handling that exposes stack traces and internal details to users in production, sensitive data like passwords, tokens, and credit card numbers written to log files in cleartext, and no alerting for suspicious activity patterns.
Why it happens
Logging and error handling are treated as afterthoughts. During development, verbose error output is helpful for debugging, and developers leave it enabled when deploying to production. Nobody writes user stories for "log all failed authentication attempts" or "redact sensitive fields from error responses." Security monitoring requires infrastructure that most startups do not build until after they have already been breached and need to figure out what the attacker accessed.
Real-world impact
Stack traces in error responses leak framework versions, file paths, database table names, and query structures, all of which is valuable reconnaissance information for an attacker mapping your system. Sensitive data in logs creates a secondary exposure point: anyone with access to log storage, whether it is a developer, a monitoring SaaS, or a cloud provider, can see passwords and tokens. And without audit logs, when a breach does happen, the incident response team is flying blind with no visibility into what the attacker accessed, when they accessed it, or how they got in.
Vulnerable code
```php
ini_set('display_errors', 1); // Stack traces visible to users
try {
    processPayment($card_number, $amount);
} catch (Exception $e) {
    // Logs full credit card number
    error_log("Payment failed for card $card_number: " . $e->getMessage());
    // Returns internal error details to the user
    echo "Error: " . $e->getMessage() . " in " . $e->getFile();
}
```
The fix
```php
ini_set('display_errors', 0);
ini_set('log_errors', 1);
try {
    processPayment($card_number, $amount);
} catch (Exception $e) {
    // Log safely: mask sensitive data, include request context
    $masked_card = '****' . substr($card_number, -4);
    error_log(json_encode([
        'event' => 'payment_failed',
        'card' => $masked_card,
        'error' => $e->getMessage(),
        'user_id' => $_SESSION['user_id'],
        'ip' => $_SERVER['REMOTE_ADDR'],
        'timestamp' => date('c'),
    ]));
    // Return a generic message to the user
    http_response_code(500);
    echo json_encode(['error' => 'An internal error occurred. Please try again.']);
}
```
Production error handling should follow three rules: never expose internal details to users, never log sensitive data in cleartext, and always log enough context for incident response. Use structured logging in JSON format so you can search, filter, and alert on specific event types. At a minimum, log all authentication events, authorization failures, input validation failures, and administrative actions with timestamps, user IDs, and source IP addresses.
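The redaction step belongs inside the logging helper itself, so no call site can forget it. A Node sketch (the field list is an example; extend it for your own data model):

```javascript
const SENSITIVE = new Set(['password', 'token', 'card_number', 'ssn']);

// Mask sensitive fields before a record is serialized anywhere.
function redact(record) {
  const out = {};
  for (const [key, value] of Object.entries(record)) {
    out[key] = SENSITIVE.has(key) ? '[REDACTED]' : value;
  }
  return out;
}

// One JSON line per event: structured, searchable, and safe to ship
// to a log aggregator.
function formatLogEvent(event, fields) {
  return JSON.stringify({ event, ...redact(fields) });
}
```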
All 10 Findings at a Glance
Here is a summary of every finding we covered, its severity rating, how often we encounter it in engagements, and whether static analysis tools (SAST) reliably catch it without a human reviewer.
| # | Finding | Severity | Frequency | SAST Catches It? |
|---|---|---|---|---|
| 1 | Broken Access Control | Critical | ~85% of reviews | Rarely |
| 2 | Hardcoded Secrets | Critical | ~70% of reviews | Often |
| 3 | SQL Injection | Critical | ~55% of reviews | Sometimes |
| 4 | Cross-Site Scripting (XSS) | High | ~50% of reviews | Sometimes |
| 5 | Insecure JWT Implementation | High | ~45% of reviews | Rarely |
| 6 | Mass Assignment | High | ~40% of reviews | Rarely |
| 7 | Missing Rate Limiting | Medium | ~60% of reviews | No |
| 8 | Insecure File Uploads | High | ~35% of reviews | Sometimes |
| 9 | Race Conditions | Critical | ~30% of reviews | Almost never |
| 10 | Insufficient Logging | Medium | ~75% of reviews | No |
Notice the SAST column. The findings that automated tools catch reliably, such as hardcoded secrets and some SQL injection patterns, are important, but they represent a minority of what we find. The most impactful vulnerabilities (broken access control, race conditions, insecure JWT implementations, and mass assignment) require a human reviewer who understands the application's business logic and data flow. This is why a security code review is not the same as running a scanner. Scanners are a useful first pass, but they are not a substitute for an experienced security engineer reading your code.
If you recognize three or more of these patterns in your codebase, it is time for a professional code review. These vulnerabilities do not exist in isolation. A codebase with broken access control almost always has hardcoded secrets and missing rate limiting too. The patterns compound, and so does the risk. A focused security code review can identify all of these issues in a single engagement and give your team a prioritized remediation plan that addresses the most dangerous findings first.
What to do next
If you are a developer, start with a self-audit. Search your codebase for the patterns described above: string concatenation in SQL queries, raw $_GET or req.body values passed directly to database operations, innerHTML assignments with user data, and any file that contains strings resembling API keys or passwords. Fix what you can and flag what you cannot for a professional review.
If you are a founder or engineering leader, understand that these findings are not a reflection of your team's competence. They are the natural result of building fast under pressure, using AI coding tools that optimize for functionality over security, and working without dedicated security expertise on staff. The fastest path to fixing them is to bring in someone who looks for exactly these patterns every day.
Want to understand more about how code reviews fit into your overall security program? Read our guide on choosing between a code review and a penetration test, explore the secure code review process in detail, or learn about the specific security risks in AI-generated code that are making several of these findings more common than ever.
Get these vulnerabilities out of your codebase
Our security engineers review code every day and know exactly where to look. A focused code review identifies these issues, prioritizes them by risk, and gives your team clear remediation guidance with working code examples.