We've spent the last six months reviewing code from startups that built their products with Lovable, Claude, Cursor, and Bolt. Some of these apps had paying customers. Some were processing credit cards. One was handling health data.

Almost all of them had critical vulnerabilities that would've taken a moderately skilled attacker less than an hour to find.

This isn't a hit piece on AI coding tools. We use them internally. They're fast, they're useful, and they're not going anywhere. But there's a gap between "this app works" and "this app is safe to put in front of real users with real data," and right now, AI tools are building the first kind and founders are shipping it as the second kind.

Here's what we actually found.


Some context first

Veracode put out their 2025 GenAI Code Security Report earlier this year. They tested over 100 LLMs across 80 coding tasks, and 45% of the generated code introduced OWASP Top 10 vulnerabilities. Java was the worst, with failure rates above 70%; Python and JavaScript fared better but still failed 38-45% of the time. The really ugly numbers: LLMs failed to produce code that defends against cross-site scripting 86% of the time and log injection 88% of the time.[1]

NYU's cybersecurity researchers found something similar back in 2021 when they first studied Copilot. Roughly 40% of the programs it generated across 89 security scenarios were exploitable.[2] That study is four years old now. The tools have gotten better at writing functional code. They haven't gotten meaningfully better at writing secure code.

We're seeing this play out in the real world every week.


The Supabase problem is worse than people think

If you've been paying attention to the vibe coding security space at all, you probably heard about CVE-2025-48757. Security researcher Matt Palmer found that Lovable-generated Supabase projects were shipping without Row Level Security policies, which meant the databases were fully exposed to anyone who knew how to make an API call. Over 170 apps were affected. Attackers could read, modify, or delete anything (user lists, payment records, API keys) without authenticating.[3]

The root cause is deceptively simple. Lovable puts the Supabase anon key in the frontend JavaScript, which is actually how you're supposed to use Supabase's client library. The key is designed to be public. The catch is that without RLS policies locking down who can access what, that public key gives everyone access to everything. Lovable was generating the client code but not the database security policies that make it safe.
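For reference, the missing piece is small. This is a minimal sketch assuming a `documents` table with a `user_id` column tied to Supabase auth (both names are illustrative); a real app also needs policies covering insert, update, and delete:

```sql
-- Without this line, policies don't apply and the anon key sees everything
alter table documents enable row level security;

-- Let users read only their own rows; auth.uid() is Supabase's
-- current-user helper
create policy "read own documents"
on documents for select
using (auth.uid() = user_id);
```

Two statements per table. That's the entire gap between "fully exposed" and "scoped to the logged-in user," and it's exactly what the generated client code never prompts you to write.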

We saw this exact pattern in 17 out of 20 Lovable apps we reviewed. In one engagement, a SaaS product storing customer financial data had RLS on some tables but not others. We pulled their entire users, billing, and documents tables using nothing but the anon key and Supabase's REST API. No auth required. The founder had no idea.

Somanath Balakrishnan documented a case where a developer built a full social media app in about six hours using Lovable and Supabase. Three days later it was compromised: user data leaked, API keys exposed. His take: "This isn't an isolated incident. It's becoming the norm."[4] The situation got bad enough that dedicated scanning tools like safevibe.codes popped up specifically to check vibe-coded apps for database exposures. Their tagline says it all: "Most AI-generated apps have at least one database exposure."[5]

Lovable has since added a built-in security scanner and improved defaults for new projects. But if your app was built before those changes and you haven't manually audited your RLS configuration, you should assume you're exposed.


Auth that looks real but isn't

This one is subtler and honestly more dangerous because it's harder to spot from the outside.

We kept finding apps where authentication only existed on the frontend. The login page works. The session management works. The redirect logic works. If you're a user clicking around the app normally, everything looks locked down. But the API endpoints behind the UI? No verification at all.

Snyk's analysis of vibe coding called this "authentication theater": systems that appear secure but contain fundamental flaws underneath.[5] That's exactly the right term for it. We saw it in 14 of 20 Lovable apps and 8 of 15 Claude/Cursor apps.

The worst example was a B2B product with paid tiers. The admin dashboard had a React component checking session.user.role === 'admin' before rendering. The API endpoints the dashboard called didn't check anything. We called them directly as a free-tier user and got full admin access: user management, billing data, config settings, everything.

A security guide for Lovable apps published earlier this year put it plainly: "anything in the frontend is public. Users can inspect, modify, or bypass any code that runs in their browser. All security decisions must happen on the backend."[6] AI tools consistently ignore this. They build auth flows that satisfy the developer's eye without satisfying actual security requirements.
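The fix is structural: every privileged endpoint re-checks the role on the server, no matter what the UI renders. Here's a framework-agnostic sketch; the `req.session` shape and handler signature are illustrative, and it assumes the session was already populated server-side from a verified cookie or JWT:

```javascript
// Wrap a handler so it refuses non-admin callers before doing any work.
// This is the check that belongs on the server, not in a React component.
function requireAdmin(handler) {
  return (req, res) => {
    const session = req.session; // verified server-side, never trusted from the client
    if (!session || session.user?.role !== "admin") {
      res.statusCode = 403;
      return res.end(JSON.stringify({ error: "forbidden" }));
    }
    return handler(req, res);
  };
}
```

The frontend role check can stay for UX, but it's decoration. This wrapper (or your framework's middleware equivalent) is the part that actually keeps a free-tier user out of the admin API.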


IDOR everywhere

We found Insecure Direct Object Reference vulnerabilities in 22 out of 35 apps we reviewed. Nearly two out of three. This was the most common issue across every AI tool, not just Lovable.

The pattern never varied. An API endpoint takes an ID as a parameter and returns data without checking if the requesting user should have access to it. Change /api/documents/12345 to /api/documents/12346 and you get someone else's document. IDs are usually sequential or easily guessable, so enumeration takes minutes.

In one engagement (a health-adjacent app) we pulled over 2,000 user records by incrementing an ID parameter on a single endpoint. Names, emails, and health questionnaire responses, all accessible to any authenticated user.

This makes sense when you think about how these tools work. You prompt the AI to build a documents API. It builds one that correctly creates, reads, updates, and deletes documents. It does not, on its own, add the check that says "only return this document if it belongs to the user making the request." That's a security requirement, not a functional one, and AI tools almost never infer it. Veracode's research backs this up: context-dependent security decisions are where models fail hardest because they can't see the broader application architecture.[1]
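The missing check is one line. A minimal sketch with an in-memory store standing in for the database (the store, names, and IDs are illustrative; in a real app the ownership condition goes in the query's WHERE clause):

```javascript
// Toy document store keyed by ID, standing in for a real database table.
const db = new Map([
  [12345, { id: 12345, ownerId: "alice", body: "alice's doc" }],
  [12346, { id: 12346, ownerId: "bob", body: "bob's doc" }],
]);

function getDocument(requestingUserId, documentId) {
  const doc = db.get(documentId);
  // The line AI-generated handlers tend to omit. Returning null for both
  // "not found" and "not yours" avoids confirming that the ID exists.
  if (!doc || doc.ownerId !== requestingUserId) return null;
  return doc;
}
```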


Stripe webhooks with no signature validation

Nine out of twelve apps we tested with Stripe integration had no webhook signature validation. That means anyone can forge a Stripe event and send it to the app's webhook endpoint. Send a fake checkout.session.completed event and congratulations, you just got a paid subscription for free.

We actually demonstrated this in three engagements. Crafted a fake webhook payload, sent it to the endpoint, and watched our test account get upgraded to a paid tier. No payment made. The app trusted the incoming data blindly.

The AI generates the handler perfectly: it parses the event JSON, switches on event types, updates the database. It just doesn't verify that the event actually came from Stripe. The security step is invisible in the happy path, so it never gets built.


Hardcoded secrets that shouldn't be there

We found genuinely sensitive secrets baked into source code in 11 of 35 apps. Not just Supabase anon keys (those are expected in frontend code), but Stripe secret keys, SendGrid API keys, OpenAI keys, database connection strings, and in two cases, AWS IAM credentials with way too many permissions.

Security analyses of Lovable apps specifically flag this pattern: the AI "sometimes generates code with API keys hardcoded in frontend JavaScript files."[3] The Hacktron research team found similar issues when they dug into Lovable Cloud's Supabase integration as part of their SupaPwn vulnerability chain.[7]

One app we reviewed had a .env file committed to a public GitHub repo with a Stripe secret key, a Supabase service role key (which bypasses RLS entirely), and a Resend API key. It had been public for three months. The developer had no idea because the AI handled the integration, it worked, and they moved on.

The inconsistency is what gets people. AI tools sometimes use environment variables properly and sometimes don't. It depends on the prompt, the context, and seemingly random factors. Developers see the AI handling secrets correctly in one file and assume it did the same everywhere else. It didn't.
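One cheap guardrail is a fail-fast check at startup: declare which secrets the app needs and refuse to boot if any are missing, so there's never a reason to fall back to a hardcoded value. An illustrative helper (the key names in the usage comment are examples, not a complete list):

```javascript
// Return the names of required secrets that are absent or blank in env.
function missingSecrets(env, required) {
  return required.filter((name) => !env[name] || env[name].trim() === "");
}

// Typical usage at boot:
//   const missing = missingSecrets(process.env, ["STRIPE_SECRET_KEY", "SUPABASE_SERVICE_ROLE_KEY"]);
//   if (missing.length) {
//     console.error(`Missing secrets: ${missing.join(", ")}`);
//     process.exit(1);
//   }
```

Pair this with a `.gitignore` entry for `.env` and a secret-scanning hook, and the three-months-public scenario above becomes much harder to stumble into.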


CORS wide open, rate limiting nonexistent

These are less dramatic than the findings above but they showed up in nearly every app we touched.

Open CORS policies (Access-Control-Allow-Origin: *) on authenticated API endpoints appeared in 18 of 35 apps. This lets any website on the internet make cross-origin requests to the API. Paired with the auth weaknesses above, that's a recipe for silent data exfiltration.
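The fix is an explicit allowlist: echo the origin back only when you recognize it, and send nothing otherwise so the browser blocks the cross-origin read. A sketch (the allowed origin is a placeholder):

```javascript
// Allowlist of origins permitted to call the API from a browser.
const ALLOWED_ORIGINS = new Set(["https://app.example.com"]);

function corsHeadersFor(requestOrigin) {
  // No CORS headers at all: the browser refuses to expose the response
  if (!ALLOWED_ORIGINS.has(requestOrigin)) return {};
  return {
    "Access-Control-Allow-Origin": requestOrigin, // echo the specific origin, never "*"
    "Vary": "Origin", // keep caches from serving one origin's headers to another
    "Access-Control-Allow-Credentials": "true",
  };
}
```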

Missing rate limiting on auth endpoints was almost universal: 31 of 35 apps had no rate limiting on login, registration, or password reset. That means brute force and credential stuffing attacks are trivially easy. Rate limiting sits at the infrastructure layer, outside what AI coding tools typically generate, so it just never gets built.
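For a single-process app, even a crude limiter on login and password reset raises the cost of credential stuffing enormously. Here's an illustrative in-memory fixed-window limiter; the limits are examples, and a multi-instance deployment would need a shared store such as Redis instead:

```javascript
// Fixed-window rate limiter: at most `limit` calls per `windowMs` per key
// (key = IP address, email, or both). State lives in this process only.
function createRateLimiter({ limit, windowMs }) {
  const hits = new Map(); // key -> { count, windowStart }
  return function allow(key, now = Date.now()) {
    const entry = hits.get(key);
    if (!entry || now - entry.windowStart >= windowMs) {
      hits.set(key, { count: 1, windowStart: now });
      return true; // fresh window
    }
    entry.count += 1;
    return entry.count <= limit;
  };
}

// e.g. 5 login attempts per IP per minute:
const allowLogin = createRateLimiter({ limit: 5, windowMs: 60_000 });
```

Call `allowLogin(req.ip)` at the top of the auth handler and return 429 when it's false. Ten lines, and brute force goes from trivial to tedious.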


The pattern underneath all of this

Every finding above comes back to the same root cause. AI coding tools optimize for functionality. They generate code that works, that looks right, that passes the demo. They do not optimize for security. The login page is beautiful. The payment flow completes. The API returns the right data. And anyone with basic security knowledge can walk through the front door.

Veracode's CTO, Jens Wessling: "The rise of vibe coding, where developers rely on AI to generate code, typically without explicitly defining security requirements, represents a fundamental shift in how software is built. The main concern is that developers do not need to specify security constraints to get the code they want, effectively leaving secure coding decisions to LLMs, which are making the wrong choices nearly half the time."[1]

That's what we're seeing in practice. The AI makes the wrong security choice about half the time, and nobody catches it because the app works fine on the surface.


What we tell our clients

If you're building with AI tools and you're about to go live with real user data, get a code review first. Not a full pentest, but a targeted review focused on auth architecture, database access controls, and secrets management. Those three areas account for the vast majority of critical findings in our engagements. At Lorikeet Security, these reviews start at $2,500 and take 2-3 business days. That's a small price to avoid becoming the next CVE.

If you're already in production, assume you have some of these issues. Audit your Supabase RLS policies. Check if your API endpoints validate auth server-side. Search your codebase for hardcoded keys. These are the three things most likely to get you breached.

And if you're building with AI tools long-term (which most of us are), treat the AI like a junior developer. It writes the first draft. A human with security knowledge needs to review it before it ships.

Sources

  1. Veracode, "2025 GenAI Code Security Report," July 2025. Tested 100+ LLMs across 80 coding tasks. veracode.com
  2. Pearce, H., Ahmad, B., Tan, B., Dolan-Gavitt, B., Karri, R., "Asleep at the Keyboard? Assessing the Security of GitHub Copilot's Code Contributions," NYU Center for Cybersecurity / Communications of the ACM, 2022. acm.org
  3. CVE-2025-48757 analysis and Lovable security issue documentation. vibeappscanner.com and superblocks.com
  4. Balakrishnan, S., "The Vibe Coding Stack (Lovable/Supabase/Replit etc): When AI-Driven Speed Becomes Your Biggest Liability," Medium, August 2025. medium.com
  5. Snyk, "The Highs and Lows of Vibe Coding," October 2025. snyk.io
  6. "Security Best Practices for Lovable Apps (2026)," Medium, January 2026. medium.com
  7. Hacktron AI, "SupaPwn: Hacking Our Way into Lovable's Office and Helping Secure Supabase," 2025. hacktron.ai

Shipping AI-Generated Code?

Our vibe coding security reviews start at $2,500. Targeted code reviews, config checks, and vulnerability scans, scoped for how you actually build.


Lorikeet Security Team

Penetration Testing & Cybersecurity Consulting

We've completed 170+ security engagements across web apps, APIs, cloud infrastructure, and AI-generated codebases. Everything we publish here comes from patterns we see in real client work.