
Cursor, Copilot, and Claude: Security Risks in AI Code Assistants

Lorikeet Security Team February 26, 2026 11 min read

AI code assistants have fundamentally changed how software gets built. Cursor, GitHub Copilot, and Claude (via API or through tools like Claude Code) are now part of the daily workflow for millions of developers. They write boilerplate, generate entire functions, scaffold applications, and autocomplete complex logic faster than any human could.

They also introduce security vulnerabilities at a rate that should concern every engineering leader.

This is not a theoretical risk. We have spent the past year reviewing dozens of AI-built applications, and the pattern is consistent: AI-generated code is functional but frequently insecure. The tools optimize for "does it work" and not "is it safe." They generate code that compiles, passes basic tests, and ships to production, complete with SQL injection vectors, missing authentication checks, hardcoded secrets, and broken access controls.

Here is what each tool gets wrong, what they get right, and how to use them without shipping critical vulnerabilities.


Why AI Code Assistants Generate Insecure Code

The fundamental issue is not that these tools are poorly built. They are remarkably capable. The issue is that they are trained on public code, and most public code is insecure.

GitHub alone has millions of repositories containing SQL queries built with string concatenation, API keys hardcoded in configuration files, authentication checks that validate on the client side only, and encryption using deprecated algorithms. When an AI model trains on this corpus, it learns the most common patterns, and the most common patterns are often the insecure ones.

The training data problem

AI models learn by statistical frequency. If 70% of the database query examples in the training data use string concatenation instead of parameterized queries, the model will default to string concatenation. It does not understand that parameterized queries are the secure pattern. It understands that string concatenation is the most frequently observed pattern.

This creates a subtle but dangerous dynamic. The AI consistently generates code that looks correct, follows recognizable patterns, and works functionally, but defaults to the insecure version of every security-critical operation unless explicitly prompted otherwise.

The context gap

Human developers understand context that AI assistants do not. When a human developer builds an API endpoint, they think about who should be able to access it, what data it exposes, and what happens if the input is malicious. An AI assistant generates the endpoint that satisfies the functional requirement described in the prompt. If the prompt says "create an endpoint that returns user data," the AI creates an endpoint that returns user data. It does not add authorization checks unless you ask for them, because authorization was not part of the request.

This is how we end up with applications that have functional CRUD operations but no access control. The developer prompted for features. The AI delivered features. Nobody prompted for security.


Cursor: Codebase-Aware but Still Pattern-Dependent

Cursor distinguishes itself by being codebase-aware. It indexes your entire repository and uses that context to generate code that matches your existing patterns and conventions. This is a double-edged sword from a security perspective.

What Cursor gets right

Because Cursor understands your codebase, it is less likely to generate code that conflicts with your existing security patterns. If your codebase consistently uses parameterized queries, Cursor will follow that pattern. If you have an authentication middleware, Cursor may suggest using it. The codebase context acts as a guardrail that other tools lack.

What Cursor gets wrong

Cursor inherits whatever security posture your codebase already has. If your codebase has inconsistent security patterns (and most do), Cursor will generate code that is inconsistently secure. We have seen Cursor-generated code that uses the ORM for some queries and raw SQL for others within the same application, because both patterns existed in the codebase.

Cursor also tends to be aggressive with code generation, sometimes producing entire files or feature modules in a single pass. The sheer volume of generated code can overwhelm code review. When a developer gets a 200-line file that looks correct and works functionally, the temptation to commit it without thorough security review is strong.

In our reviews of Cursor-generated applications, the most common findings are: missing input validation on user-controlled parameters, overly permissive CORS configurations, error messages that leak internal system details, and API endpoints that check authentication but not authorization (you are logged in, but that does not mean you should access this specific resource).


GitHub Copilot: The Volume Problem

Copilot is the most widely adopted AI code assistant, embedded directly in VS Code and other editors. Its inline suggestions and chat mode make it effortless to accept generated code, which is exactly the problem.

What Copilot gets right

GitHub has invested significantly in security features around Copilot. Secret scanning integration, code scanning alerts, and Copilot's awareness of security advisories are genuine improvements. The tool can detect when it is about to suggest a known-vulnerable dependency and will sometimes flag it.

What Copilot gets wrong

Copilot's autocomplete model encourages rapid acceptance of suggestions. A developer typing a function signature gets an entire implementation suggested inline, and accepting it is a single Tab keystroke. This low-friction acceptance model means insecure code enters the codebase with less scrutiny than code a developer typed manually.

Research from Stanford and other institutions has consistently shown that developers using Copilot are more likely to introduce security vulnerabilities than developers writing code manually, and less likely to recognize those vulnerabilities during review. The trust heuristic ("the AI suggested it, so it is probably fine") is a measurable phenomenon.

Common Copilot security issues we encounter include: hardcoded placeholder credentials that were never replaced (the classic password = "changeme" that shipped to production), JWT implementations with algorithm confusion vulnerabilities, file upload handlers without type or size validation, and redirect endpoints that accept arbitrary URLs (open redirect).
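The placeholder-credential problem is simple enough to catch mechanically. The sketch below is a minimal, illustrative scanner (the pattern list is ours, not exhaustive, and real secret scanners like GitHub's are far more thorough) that flags the classic values before they ship:

```python
import re

# Illustrative patterns only; a production scanner would use entropy checks
# and a much larger ruleset.
PLACEHOLDER_PATTERNS = [
    re.compile(r"changeme", re.IGNORECASE),
    re.compile(r"your[-_]?(api[-_]?key|secret[-_]?key)", re.IGNORECASE),
    re.compile(r"sk-your-api-key"),
]

def find_placeholder_secrets(config_text: str) -> list[str]:
    """Return the lines of a config file that still contain placeholder secrets."""
    hits = []
    for line in config_text.splitlines():
        if any(p.search(line) for p in PLACEHOLDER_PATTERNS):
            hits.append(line.strip())
    return hits

config = """
DB_PASSWORD=changeme
JWT_SECRET=your-secret-key
LOG_LEVEL=info
"""
print(find_placeholder_secrets(config))  # flags the first two lines
```

Wiring a check like this into a pre-commit hook turns "we meant to replace that" into a failed commit instead of a production incident.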


Claude: More Cautious, Still Imperfect

Claude (Anthropic's AI) tends to be more security-conscious in its code generation than Copilot or Cursor. It frequently includes security commentary, warns about potential vulnerabilities, and defaults to more defensive patterns. But "more cautious" is not the same as "secure."

What Claude gets right

Claude is more likely to add input validation, suggest parameterized queries, and include comments about security considerations in its generated code. When asked to build an authentication system, Claude typically includes rate limiting, password hashing with bcrypt, and session management, whereas other tools may generate a basic username/password check with plain text comparison.

Claude also tends to explain the security rationale for its choices, which educates the developer and makes security-relevant decisions more visible during code review.
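As a rough illustration of the defensive defaults described above, here is the password-storage piece, sketched with Python's stdlib `hashlib.scrypt` rather than bcrypt; the mechanics the article credits Claude with applying (unique salt, slow key derivation, constant-time comparison) are the same:

```python
import hashlib, hmac, os

def hash_password(password, *, salt=None):
    """Derive a slow, salted hash. scrypt is a stdlib stand-in for bcrypt here."""
    salt = salt or os.urandom(16)            # unique salt per password
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password, salt, expected):
    _, digest = hash_password(password, salt=salt)
    return hmac.compare_digest(digest, expected)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("guess", salt, digest))                         # False
```

The contrast with a plain-text comparison (`password == stored`) is the gap between the tools the article is describing.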

What Claude gets wrong

Claude's security awareness is inconsistent. It depends heavily on how the question is framed. When asked to "build a quick prototype," Claude will cut corners the same way a human developer would, omitting validation, skipping authorization, and using simplified error handling. The security guidance only appears consistently when the prompt signals that security matters.

Claude also struggles with application-specific security context. It knows that SQL injection is bad in general, but it does not know that your specific application uses a custom ORM that has its own injection vectors, or that your authentication system has a non-standard session management approach that requires specific handling.

In our reviews of Claude-generated code, the most common issues are: overly broad exception handling that catches and silences security-relevant errors, authorization logic that checks roles but not resource ownership (horizontal privilege escalation), and API rate limiting that is suggested in comments but not actually implemented in the code.
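The exception-handling finding is worth seeing concretely. A minimal sketch, with `PermissionError` standing in for any security-relevant failure such as a rejected token:

```python
def load_report_broad(fetch):
    try:
        return fetch()
    except Exception:       # swallows auth failures along with everything else
        return None

def load_report_narrow(fetch):
    try:
        return fetch()
    except FileNotFoundError:  # handle only the failure we actually expect
        return None            # PermissionError still propagates to callers

def denied():
    raise PermissionError("caller is not authorized")

print(load_report_broad(denied))   # None: the security error silently vanished
try:
    load_report_narrow(denied)
except PermissionError as e:
    print("surfaced:", e)          # the caller gets to handle (and log) it
```

The broad version is what AI assistants tend to emit because it "makes the error go away"; the narrow version is what review should insist on.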


Security Feature Comparison Across Tools

| Security Feature | Cursor | GitHub Copilot | Claude |
| --- | --- | --- | --- |
| Codebase context | Full repository indexing | Open file + limited context | Conversation context / project files |
| Secret detection | Limited (relies on linters) | GitHub secret scanning integration | Warns about hardcoded secrets in output |
| Security commentary | Minimal | Minimal | Frequently includes security notes |
| Default to secure patterns | Follows existing codebase patterns | Follows training data frequency | More likely to default secure, but inconsistent |
| Vulnerability awareness | Limited | Security advisory integration | General knowledge, not real-time |
| Data privacy | Code sent to AI provider | Code sent to GitHub/OpenAI (opt-out available) | Depends on deployment (API vs. enterprise) |
| Input validation habits | Inconsistent | Frequently omitted | More consistent but prompt-dependent |
| Auth/authz patterns | Copies from existing code | Basic auth, often missing authz | Includes both, but may miss edge cases |

The table makes it clear: no AI code assistant provides reliable security out of the box. Each has relative strengths, but none eliminates the need for security review. The tool that generates your code is not responsible for its security. You are.


Real Insecure Patterns We Find in AI-Generated Code

These are not theoretical vulnerabilities. These are patterns we find repeatedly during web application penetration tests and code reviews of applications built with AI assistance.

SQL injection through string interpolation

AI assistants frequently generate database queries using template literals or string concatenation instead of parameterized queries. SQL injection remains one of the most common vulnerabilities in web applications, and AI tools perpetuate it because string-built queries dominate their training data. The code works perfectly in testing because developers test with clean input. An attacker supplies crafted input and exfiltrates the entire database.

Missing authorization on API endpoints

The AI generates a REST API with full CRUD operations. Each endpoint checks that the user is authenticated (logged in). None of them check that the user is authorized to access the specific resource they are requesting. User A can read, modify, and delete User B's data by changing an ID in the URL. This is IDOR (Insecure Direct Object Reference), and it is endemic in AI-generated APIs.
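The missing check is a single ownership comparison. A minimal sketch (the `Document` model and IDs are illustrative, framework details omitted):

```python
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: int
    owner_id: int
    body: str

DOCS = {1: Document(1, owner_id=100, body="alice's notes"),
        2: Document(2, owner_id=200, body="bob's notes")}

def get_document_insecure(user_id, doc_id):
    # Authentication happened upstream; no ownership check here (IDOR).
    return DOCS[doc_id].body

def get_document_secure(user_id, doc_id):
    doc = DOCS[doc_id]
    if doc.owner_id != user_id:            # authorization: verify ownership
        raise PermissionError("not your resource")
    return doc.body

print(get_document_insecure(100, 2))  # alice (user 100) reads bob's document
```

The insecure version passes every functional test, because functional tests request your own resources. Only an adversarial test that swaps the ID exposes the hole.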

Hardcoded secrets in generated code

When AI assistants generate configuration files, database connection strings, or API integrations, they frequently include placeholder secrets: API_KEY=sk-your-api-key-here or DATABASE_URL=postgres://admin:password@localhost/mydb. These placeholders are intended to be replaced, but they often are not. They get committed to version control and deployed to production. We have found production applications with JWT_SECRET=your-secret-key still in the environment configuration.
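One defense is to load secrets from the environment and refuse to start when the value is missing or still a known placeholder. A hedged sketch (the placeholder checks are illustrative, not exhaustive):

```python
import os

def require_secret(name):
    """Read a secret from the environment; fail fast on missing or
    placeholder values rather than silently running with the default."""
    value = os.environ.get(name, "")
    if not value or "your-" in value or value == "changeme":
        raise RuntimeError(f"{name} is unset or still a placeholder")
    return value

os.environ["JWT_SECRET"] = "your-secret-key"   # simulating the shipped placeholder
try:
    require_secret("JWT_SECRET")
except RuntimeError as e:
    print(e)   # the app refuses to boot instead of signing tokens with it
```

A crash at startup in staging is cheap; a guessable JWT secret in production is not.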

Client-side only validation

AI tools generating frontend code will add input validation to forms: email format checks, password strength requirements, length limits. But the same AI generating the backend API does not add server-side validation for those same fields. An attacker bypasses the frontend entirely and sends raw requests to the API, which accepts whatever it receives.
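The fix is to mirror every frontend check on the server. A minimal sketch with invented field names and limits (the email regex is deliberately simple):

```python
import re

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_signup(payload):
    """Server-side validation that runs even when the attacker skips the
    browser and posts straight to the API."""
    errors = []
    if not EMAIL_RE.match(payload.get("email", "")):
        errors.append("invalid email")
    if len(payload.get("password", "")) < 12:
        errors.append("password too short")
    if len(payload.get("display_name", "")) > 64:
        errors.append("display name too long")
    return errors

print(validate_signup({"email": "not-an-email", "password": "short"}))
# ['invalid email', 'password too short']
```

Keeping the frontend checks is still worth it for user experience; they just cannot be the only line of defense.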

Overly permissive CORS

To make the application "work" during development, AI assistants frequently set CORS to allow all origins: Access-Control-Allow-Origin: *. This ships to production because nobody changes it, and it enables cross-site request attacks against your API from any website on the internet.
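The secure alternative is an explicit allowlist: echo the request's `Origin` only when it is known, and never emit `*`. A framework-agnostic sketch (origins are invented):

```python
ALLOWED_ORIGINS = {"https://app.example.com", "https://admin.example.com"}

def cors_headers(request_origin):
    """Return CORS response headers for an allowlisted origin, else none."""
    if request_origin in ALLOWED_ORIGINS:
        return {"Access-Control-Allow-Origin": request_origin,
                "Vary": "Origin"}           # keep caches origin-aware
    return {}  # no CORS headers: the browser blocks the cross-origin read

print(cors_headers("https://evil.example.net"))   # {}
print(cors_headers("https://app.example.com"))
```

The `Vary: Origin` header matters once you echo origins dynamically, so a cached response for one origin is not served to another.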

The pattern is consistent: AI code assistants generate code that works correctly for the happy path but fails catastrophically under adversarial input. They build the house but forget to lock the doors. This is why AI-generated code needs security review before it touches production.


What to Look for in Code Review of AI-Generated Code

Reviewing AI-generated code requires a different mindset than reviewing human-written code. With human code, you look for mistakes the developer made. With AI code, you look for security considerations the AI did not make.

The AI code review checklist

Treat AI-generated code the way you would treat code from a talented but security-unaware junior developer: assume it works but verify that it is safe.


How to Use AI Code Assistants Safely

The answer is not to stop using these tools. They are too productive to abandon. The answer is to use them with guardrails that catch the security gaps before they reach production.

Establish security-aware prompting patterns

When prompting AI tools, explicitly include security requirements: "Generate this endpoint with input validation, parameterized queries, and authorization checks that verify the requesting user owns the resource." The AI will not add these by default, but it will add them when asked. Build security prompting into your team's practices.

Automate what you can

SAST tools (Semgrep, CodeQL, SonarQube) catch a significant percentage of AI-generated vulnerabilities automatically. Run them in your CI/CD pipeline on every pull request. They will not catch everything (they miss business logic flaws and complex authorization issues), but they catch SQL injection, XSS, hardcoded secrets, and insecure configurations reliably.
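As one possible shape for that CI step, here is a Semgrep invocation; `p/owasp-top-ten` is a ruleset from the public Semgrep registry, and you would substitute whatever config your team standardizes on:

```shell
# Install and run Semgrep against the repository root.
pip install semgrep
# --error makes the command exit nonzero on findings, failing the build.
semgrep scan --config p/owasp-top-ten --error .
```

Running this on every pull request means the scanner, not a reviewer's attention span, is what stands between an AI-suggested string-built query and the main branch.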

Require manual security review for sensitive code

Any code that touches authentication, authorization, payment processing, personal data, or cryptography should receive a manual security review regardless of who (or what) wrote it. This is where an external penetration test or secure code review provides the most value: validating that the AI-generated security logic actually works under adversarial conditions.

Keep secrets out of the AI context

Never paste production credentials, API keys, or customer data into an AI assistant's context window. Use environment variable placeholders in your prompts. Configure your IDE's AI integration to exclude .env files, credential stores, and sensitive configuration from the AI's context.

Test like an attacker, not a user

AI-generated code passes user acceptance testing because it handles the expected inputs correctly. Security testing means sending unexpected inputs: SQL injection payloads, oversized inputs, missing required fields, negative numbers where positives are expected, and special characters in every field. If your testing only covers the happy path, you are not testing security.
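A cheap way to institutionalize this is a hostile-input list that every parser and validator gets run against. A sketch, where `parse_quantity` is a stand-in for any input handler in your codebase:

```python
HOSTILE_INPUTS = [
    "' OR '1'='1",                  # SQL injection payload
    "<script>alert(1)</script>",    # XSS payload
    "-1", "0", "9" * 10_000,        # negative, zero, oversized
    "", "NaN", "1e309",             # empty and pseudo-numeric junk
]

def parse_quantity(raw):
    """Accept a positive integer quantity in range; reject everything else."""
    if not raw.isdigit():
        raise ValueError("not a non-negative integer")
    value = int(raw)
    if not 1 <= value <= 10_000:
        raise ValueError("out of range")
    return value

for raw in HOSTILE_INPUTS:
    try:
        parse_quantity(raw)
        print("ACCEPTED:", raw[:20])   # any line here is a finding
    except ValueError:
        pass                           # rejected, as it should be

print(parse_quantity("3"))  # 3: the happy path still works
```

The same list can be parameterized into your test suite so every new endpoint inherits the adversarial cases for free.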

Building with AI? Let us check the security.

Lorikeet Security specializes in reviewing AI-generated applications for the vulnerabilities that tools like Cursor, Copilot, and Claude consistently introduce. Get a professional security assessment before your AI-built code reaches production.


Lorikeet Security Team

Penetration Testing & Cybersecurity Consulting

We've completed 170+ security engagements across web apps, APIs, cloud infrastructure, and AI-generated codebases. Everything we publish here comes from patterns we see in real client work.
