Something has fundamentally shifted in how software gets built. Tools like GitHub Copilot, Cursor, Replit Agent, and Claude are letting founders, solo developers, and small teams ship production applications in days instead of months. The barrier to going from idea to deployed product has never been lower.
This is genuinely great for innovation. But it has created a serious blind spot that we see exploited over and over again: AI coding tools optimize for functionality, not security. They will absolutely get your app working. They will not make sure it is safe.
We have reviewed dozens of AI-generated and "vibe-coded" codebases over the past year. The patterns are remarkably consistent. The same categories of vulnerability appear in nearly every single one. This post breaks down exactly what we find and what you can do about it before your launch turns into your first incident.
What "Vibe Coding" Actually Means for Security
The term "vibe coding" describes a development style where a developer describes what they want in natural language, and an AI assistant generates the implementation. The developer reviews the output at a high level, maybe tests the happy path, and moves on to the next feature. The focus is on velocity and iteration, not on line-by-line code review.
This workflow works surprisingly well for getting features built. The problem is that security vulnerabilities rarely show up in the happy path. A login page can work perfectly while being trivially bypassable. An API can return the right data to the right user while also returning anyone's data to anyone who asks. AI tools do not think adversarially. They build what you ask for, not what you need to defend against.
The Six Vulnerability Patterns We See in Every Codebase
After reviewing a significant number of AI-generated projects, we have identified six categories of issues that appear with near-universal consistency. These are not theoretical risks. These are real findings from real codebases that were headed for production.
1. Hardcoded API Keys and Secrets in Source Code
This is the most common issue we find, and it is often the most dangerous. AI tools will happily generate working code that includes your Stripe secret key, your database connection string, your AWS credentials, or your JWT signing secret directly in the source file. When prompted to "connect to the database" or "add Stripe payments," the generated code frequently inlines the credentials.
```javascript
// What AI tools often generate:
const stripe = require('stripe')('sk_live_51ABC123...');
const dbUri = 'mongodb+srv://admin:s3cretP4ss@cluster0.mongodb.net/prod';

// What it should generate:
const stripe = require('stripe')(process.env.STRIPE_SECRET_KEY);
const dbUri = process.env.DATABASE_URL;
```
If this code gets pushed to a public GitHub repository, or even a private one with overly broad access, those credentials are compromised. Automated bots scrape GitHub for exactly these patterns. We have seen live Stripe keys, AWS root credentials, and database passwords exposed this way.
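Beyond moving secrets into environment variables, it helps to fail fast at startup when one is missing, rather than crashing later with a confusing connection error. A minimal sketch (the `requireEnv` helper and variable names are illustrative, not a library API):

```javascript
// Read required secrets from the environment and fail fast when one is
// missing, instead of inlining credentials in source.
function requireEnv(names) {
  const missing = names.filter((name) => !process.env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(', ')}`);
  }
  return Object.fromEntries(names.map((name) => [name, process.env[name]]));
}

// Usage at startup, before anything connects to external services:
// const { STRIPE_SECRET_KEY, DATABASE_URL } =
//   requireEnv(['STRIPE_SECRET_KEY', 'DATABASE_URL']);
```

Pair this with a `.gitignore` entry for your `.env` file so the values never reach version control in the first place.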
2. Missing or Broken Authorization Checks on Routes
AI-generated backends frequently implement authentication (verifying who someone is) without implementing authorization (verifying what they are allowed to do). Routes get created that check whether a user is logged in but never check whether that user should have access to the specific resource they are requesting.
```javascript
app.get('/api/users/:id/billing', authMiddleware, async (req, res) => {
  const billing = await Billing.findOne({ userId: req.params.id });
  res.json(billing);
});

// Any authenticated user can access ANY other user's billing data
// by simply changing the :id parameter in the URL
```
This is a textbook Insecure Direct Object Reference (IDOR) vulnerability. It is one of the most exploited vulnerability classes on the web, and AI tools produce it constantly because they build what you describe (a billing endpoint) without considering who should be allowed to call it.
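The fix is a resource-level ownership check after authentication. A minimal sketch, assuming `authMiddleware` attaches a `req.user` object with `id` and `role` fields (names are illustrative):

```javascript
// Ownership check: only the account owner or an admin may read this resource.
function canAccessBilling(user, targetUserId) {
  return user.id === targetUserId || user.role === 'admin';
}

// Wired into the route:
// app.get('/api/users/:id/billing', authMiddleware, async (req, res) => {
//   if (!canAccessBilling(req.user, req.params.id)) {
//     return res.status(403).json({ error: 'Forbidden' });
//   }
//   const billing = await Billing.findOne({ userId: req.params.id });
//   res.json(billing);
// });
```

Keeping the check in a small named function makes it easy to test and to reuse across every route that touches the same resource.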
3. Insecure Default CORS Configurations
When an AI tool encounters a cross-origin request error during development, the standard fix it generates is the most permissive one possible: allow everything from everywhere.
```javascript
app.use(cors({ origin: true, credentials: true }));

// Or in raw headers, reflecting whatever origin the request sends:
res.setHeader('Access-Control-Allow-Origin', req.headers.origin);
res.setHeader('Access-Control-Allow-Credentials', 'true');
```
(Browsers refuse to combine a literal `*` with credentials, so the "fix" AI tools reach for is reflecting the request origin, which is a wildcard that credentialed requests actually work with.) This configuration tells the browser that any website on the internet can make authenticated requests to your API. An attacker can build a malicious page that, when visited by one of your users, silently makes requests to your API with that user's session cookies. This is the foundation for cross-site request forgery and data theft.
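The safer pattern is an explicit allowlist. A sketch using the callback form the `cors` middleware accepts for its `origin` option (the origins in `ALLOWED_ORIGINS` are placeholders for your real front-end domains):

```javascript
// Explicit origin allowlist instead of reflecting every origin.
const ALLOWED_ORIGINS = ['https://app.example.com', 'https://www.example.com'];

function corsOrigin(origin, callback) {
  // Allow non-browser/same-origin requests (no Origin header) and the allowlist.
  if (!origin || ALLOWED_ORIGINS.includes(origin)) {
    return callback(null, true);
  }
  callback(new Error('Origin not allowed by CORS'));
}

// app.use(cors({ origin: corsOrigin, credentials: true }));
```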
4. Verbose Error Messages That Leak Stack Traces
AI-generated applications almost never implement proper error handling for production environments. When something breaks, the default behavior is to return the full error object to the client, including stack traces, file paths, database query details, and sometimes even environment variables.
```javascript
app.use((err, req, res, next) => {
  res.status(500).json({ error: err.message, stack: err.stack });
});

// What an attacker sees in the response:
// "error": "relation \"users\" does not exist",
// "stack": "at /app/src/controllers/userController.js:47:12..."
```
This information is invaluable to an attacker. It reveals the database technology you are using, your file structure, which frameworks and versions you run, and sometimes even partial query logic. This turns a black-box attack into a much easier targeted one.
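The fix is to log full details server-side and return only a generic message to the client. A minimal sketch; `sanitizeError` is an illustrative helper, not an Express API, and the `NODE_ENV` check is a common convention rather than a requirement:

```javascript
// Decide what the client is allowed to see. Stack traces and raw error
// messages stay in server logs; production clients get a generic message.
function sanitizeError(err, isProd) {
  return {
    status: err.status || 500,
    body: { error: isProd ? 'Internal server error' : err.message },
  };
}

// Wired into Express as the last middleware:
// app.use((err, req, res, next) => {
//   console.error(err); // full details belong in logs, not responses
//   const { status, body } = sanitizeError(err, process.env.NODE_ENV === 'production');
//   res.status(status).json(body);
// });
```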
5. Outdated Dependencies with Known CVEs
AI models are trained on code that existed at a point in time. When they generate a package.json, requirements.txt, or Gemfile, they frequently pin versions that are months or years behind the current release. Those older versions often have publicly disclosed vulnerabilities with ready-made exploit code.
We routinely find projects with dependencies carrying critical or high-severity CVEs. These are not obscure risks. Tools like npm audit, pip audit, and Snyk will flag them immediately, but if nobody is running those checks, the vulnerabilities ship to production unnoticed.
6. Missing Security Headers
Security headers are the low-hanging fruit of web application defense. They instruct browsers to enforce protections such as blocking clickjacking, refusing mixed content, and restricting inline script execution. AI-generated applications almost universally ship without them.
The headers we most commonly find missing include:
- Content-Security-Policy (CSP): prevents cross-site scripting and data injection attacks
- Strict-Transport-Security (HSTS): forces HTTPS connections and prevents protocol downgrade attacks
- X-Frame-Options: prevents your application from being embedded in malicious iframes (clickjacking)
- X-Content-Type-Options: prevents browsers from MIME-type sniffing
- Referrer-Policy: controls how much referrer information is shared with third parties
- Permissions-Policy: restricts browser features like camera, microphone, and geolocation access
None of these are difficult to add. But AI tools do not add them unless you specifically ask, and most developers do not know to ask.
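A dependency-free sketch of a middleware that sets all six; the header values below are reasonable starting points, not one-size-fits-all (a real CSP in particular needs tuning for your app's scripts and assets), and the `helmet` package covers the same ground with well-maintained defaults:

```javascript
// Common security headers set in a plain Express-style middleware.
const SECURITY_HEADERS = {
  'Content-Security-Policy': "default-src 'self'",
  'Strict-Transport-Security': 'max-age=31536000; includeSubDomains',
  'X-Frame-Options': 'DENY',
  'X-Content-Type-Options': 'nosniff',
  'Referrer-Policy': 'strict-origin-when-cross-origin',
  'Permissions-Policy': 'camera=(), microphone=(), geolocation=()',
};

function securityHeaders(req, res, next) {
  for (const [name, value] of Object.entries(SECURITY_HEADERS)) {
    res.setHeader(name, value);
  }
  next();
}

// app.use(securityHeaders); // or: app.use(require('helmet')());
```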
Why a Full Pentest Is Not Always the Right Answer
Here is where we break from what most security companies will tell you. If you are an early-stage startup that just vibe-coded an MVP and you are preparing to launch, a full penetration test is probably not what you need right now.
Traditional pentests are scoped for mature applications with established architectures. They typically run one to three weeks, cost $7,500 or more, and produce 40-page reports that assume you have an engineering team ready to triage findings. If your entire application was built in a weekend by two people and an AI assistant, that level of engagement is mismatched.
That does not mean you should skip security entirely. It means you need the right kind of security engagement for where you are right now.
The goal is not to be perfectly secure before launch. The goal is to not have the obvious, exploitable vulnerabilities that will get you breached in the first week.
Most of the issues we find in vibe-coded apps can be identified and fixed in days, not weeks.
What We Offer Instead
At Lorikeet Security, we built our Vibe Coding Security Solutions service specifically for this situation. It is designed for founders, solo developers, and small teams who are shipping AI-generated code and need to validate that it is not going to get them into trouble.
Here is what that looks like in practice:
Targeted Code Reviews
We manually review your codebase with a focus on the vulnerability patterns that AI tools consistently introduce. We are not reading every line of code. We are hunting for the specific issues (hardcoded secrets, broken auth, dangerous defaults) that we know from experience will be there. You get a prioritized list of findings with clear remediation guidance, including code examples you can apply directly.
Configuration Reviews
Your application code is only half the picture. We review your cloud configurations, CI/CD pipeline settings, environment variable management, CORS policies, and deployment infrastructure. Misconfigurations here are often more dangerous than code-level bugs because they can expose your entire environment rather than a single endpoint.
Light Vulnerability Scans
We run external vulnerability scans against your deployed application to identify exposed services, missing security headers, SSL/TLS misconfigurations, and known vulnerabilities in your tech stack. This gives you a clear picture of what your application looks like from an attacker's perspective without the time and cost of a full penetration test.
The engagement typically takes two to five days, starts at $2,500, and gives you a security baseline you can build on as your product and team grow. When you are ready for a full pentest down the road, we do that too. But we will not sell you one before you need it.
The Bottom Line
AI coding tools are not going anywhere. They are going to get better, faster, and more widely adopted. That is a good thing. But the security gap they create is real and it is growing. Every week we see another vibe-coded application go live with hardcoded credentials, broken authorization, and wide-open CORS policies.
The fix is not to stop using AI tools. The fix is to add a security checkpoint before you launch. A few days of expert review can be the difference between a successful product launch and a data breach notification.
If you are building with AI tools and getting ready to ship, talk to us before you go live.
Get Your AI-Generated Code Reviewed
Our Vibe Coding Security Solutions start at $2,500. Targeted code reviews, config checks, and vulnerability scans, scoped for how you actually build.
Learn About Vibe Coding Security | Book a Consultation