We ran an experiment. We took Lovable, one of the most popular AI app builders on the market, and gave it a real-world prompt: build an investor relations portal for a cybersecurity startup raising a pre-seed round. The prompt was detailed. The result was functional. And then Lovable's own built-in security scanner told us the app had critical vulnerabilities.

It let us publish it anyway.

This isn't a takedown of Lovable. They've actually done more than most AI coding platforms by adding a security scanner at all. But this case study shows something important about the current state of vibe coding: even when the tool knows the code is insecure, the default path is still "ship it."


The prompt

We didn't use a vague, two-sentence prompt. We gave Lovable a detailed specification for a full investor relations website, the kind of thing a founder would actually build when raising capital.

The prompt was roughly 600 words, covering every section, every feature, and the tech stack. This is the kind of detailed brief that should produce a solid result, and functionally, it did.


What Lovable built

Credit where it's due: Lovable produced a working investor portal in minutes. The hero section, the navigation, the layout, all clean and professional. It connected to Supabase for the backend, set up authentication for the admin portal, and built the newsletter signup form with investor type categorization.

The finished investor portal for Lorikeet Security: clean design, professional layout, functional CTA buttons, and the "Redefining Attack Surface Management" hero tagline.

From a product standpoint, a founder could look at this and feel confident sharing it with investors. The design matches the cybersecurity aesthetic we asked for. The "Request the Deck" flow works. The admin portal loads. Blog posts render.

The Lovable editor with the portal deployed, deployment options open, and the security scan visible in the background.

But here's where it gets interesting.


The security scanner fires

Lovable has a built-in security scanner that runs automatically. When we went to publish, the scanner flagged the project with 2 errors, 4 warnings, and 2 informational findings. The errors were labeled as "critical problems that need your attention right away."

Lovable's security scanner: 2 critical errors ("Gated Resource URLs Exposed to Public" and "Storage Bucket Configured as Public"), 4 warnings, 2 informational findings.

Let's break down what the scanner found.

Error: Storage Bucket Configured as Public


The Supabase storage bucket used for investor resources was configured with public access. Anyone with the bucket URL could access uploaded files directly, bypassing any gating logic built into the frontend.

Remember, our prompt specifically asked for gated resources where "some resources can be gated (require email submission before accessing)." Lovable built the gating UI on the frontend, but the storage bucket itself was wide open. The gate was decoration. Any investor materials uploaded to the portal (pitch decks, financial projections, cap table summaries) would be accessible to anyone who found the storage URL.
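To make the failure mode concrete, here is a minimal sketch of why a public bucket defeats frontend gating. Supabase serves objects in public buckets at a predictable, unauthenticated URL; the project ref and file name below are hypothetical, and the helper is ours, not Lovable's code.

```typescript
// Sketch: Supabase exposes files in *public* buckets at a predictable URL.
// No auth token, no session, no gate. Project ref and paths are made up.
function publicObjectUrl(projectRef: string, bucket: string, path: string): string {
  return `https://${projectRef}.supabase.co/storage/v1/object/public/${bucket}/${encodeURIComponent(path)}`;
}

// Anyone who learns or guesses the path can fetch the file directly,
// skipping every gate the React frontend renders:
const url = publicObjectUrl("abcd1234", "investor-resources", "pitch-deck.pdf");
// url: https://abcd1234.supabase.co/storage/v1/object/public/investor-resources/pitch-deck.pdf
```

The point isn't the URL format itself; it's that once the bucket is public, the only thing standing between your pitch deck and the internet is whether someone finds the link.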

Error: Gated Resource URLs Exposed to Public

Resources marked as "gated" in the admin portal were still served via public URLs. The email-capture gate only existed in the React frontend and could be bypassed entirely.

This is the same pattern we documented in our review of dozens of AI-built apps: security logic lives in the frontend where it can be trivially bypassed. The AI builds what looks like access control but doesn't implement it where it matters.
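The pattern looks roughly like this sketch (our illustration, not Lovable's generated code; all names are hypothetical). The "gate" only controls what the UI renders; the URL it guards is the same public URL either way.

```typescript
// Sketch of frontend-only gating: the check decides whether the UI *shows*
// the link, but the underlying URL is public, so the check is cosmetic.
type Resource = { title: string; gated: boolean; publicUrl: string };

function resolveDownloadUrl(resource: Resource, emailSubmitted: boolean): string | null {
  // UI-level gate: hide the link until an email is captured...
  if (resource.gated && !emailSubmitted) return null;
  // ...but the URL returned here works for anyone who has it, gate or no gate.
  return resource.publicUrl;
}

const deck: Resource = { title: "Pitch Deck", gated: true, publicUrl: "https://example.test/deck.pdf" };
resolveDownloadUrl(deck, false); // null in the UI, yet deck.publicUrl still resolves for anyone
```

A real gate has to live server-side: keep the bucket private and mint short-lived signed URLs only after the email is captured and verified (Supabase's storage API supports signed URLs for exactly this).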

Warning: Subscriber Email Addresses Could Be Stolen by Competitors or Spammers

The newsletter-subscribers table contains email addresses, names, and investor types. While protected by RLS policies requiring admin access to view, the "Anyone can subscribe" INSERT policy allows anyone to submit data. If the is_admin() function has vulnerabilities or the policies are misconfigured, sensitive contact information could be exposed.

The subscriber email warning, with the database security check details and an "Ignore issue" button highlighted in red.

For an investor relations portal, this table contains some of the most sensitive data in the entire application: names, emails, and investor types of people considering investing in your company. The scanner correctly identified that the INSERT policy is too permissive and the read protection depends on proper RLS implementation, but the resolution offered was "Ignore issue."
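Even with the read side locked down, an unconstrained INSERT policy means the table accepts whatever anyone sends it. A minimal mitigation is to validate and constrain submissions server-side (in an edge function or API route) before they ever reach the table. This sketch is ours, with hypothetical field names and rules, not the app's actual code:

```typescript
// Sketch: server-side validation that a permissive "anyone can insert"
// policy skips entirely. Field names and allowed values are hypothetical.
type SubscribeRequest = { email: string; name: string; investorType: string };

const ALLOWED_TYPES = new Set(["angel", "vc", "strategic", "other"]);

function validateSubscriber(req: SubscribeRequest): string[] {
  const errors: string[] = [];
  // Coarse email shape check; a real handler would also verify via a
  // confirmation email before treating the address as a lead.
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(req.email)) errors.push("invalid email");
  if (req.name.trim().length === 0 || req.name.length > 120) errors.push("invalid name");
  if (!ALLOWED_TYPES.has(req.investorType)) errors.push("unknown investor type");
  return errors; // empty array = safe to insert (plus rate limiting upstream)
}
```

Validation doesn't fix a broken `is_admin()` check on the read path, but it stops the table from becoming a dumping ground and narrows what an attacker can write through the open policy.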

Think about what that data is worth. A competitor could harvest your investor pipeline. A spammer could scrape every VC and angel who expressed interest. For a startup in a competitive fundraising environment, a leak of that subscriber list could be genuinely damaging.


"Publish anyway"

Here's the moment that matters most. After the scanner identified critical vulnerabilities, including two errors that it categorized as "critical problems that need your attention right away," Lovable presented this dialog:

"Security issues found. Critical vulnerabilities were detected in your project." And right next to "Review security": "Publish anyway."

Two buttons, side by side. "Publish anyway" and "Review security." The text says "Critical vulnerabilities were detected in your project. Review them now to not risk leaking sensitive data or your secret keys." And then it gives you a button to ignore all of that and ship it live.

This is the fundamental tension in vibe coding right now. The tools are getting better at identifying security problems. They are not getting better at preventing you from shipping those problems to production. The path of least resistance is still "publish anyway."

We clicked "Publish anyway." The app went live. The storage bucket stayed public. The gated resources stayed exposed. The subscriber table stayed vulnerable. All of the critical findings the scanner identified were now in production, serving real URLs, accessible to anyone on the internet.


The irony is not lost on us

We're a cybersecurity company. We used an AI tool to build an investor portal for a cybersecurity company that specializes in Attack Surface Management, literally the practice of finding and securing exposed assets. And the tool exposed our assets.

The investor portal that's supposed to convince VCs we know what we're doing with security shipped with a publicly accessible storage bucket containing our pitch deck and financial projections. If this were a real deployment and not a controlled test, we'd be the exact kind of case study we write about in our other blog posts.

This is what happens when vibe coding meets the real world. The demo looks perfect. The functionality works. But under the surface, the security model is held together by frontend JavaScript and RLS policies that may or may not be configured correctly, and the platform's answer to its own security findings is a button that says "Ignore issue."


What Lovable gets right (and wrong)

To be fair, Lovable deserves credit for three things most AI coding platforms don't do:

  1. They have a security scanner at all. Most vibe coding tools (Bolt, Replit Agent, and others) don't flag security issues during deployment. Lovable at least surfaces them.
  2. The findings are accurate. Every issue the scanner flagged was a real vulnerability. The descriptions were clear and the severity ratings were appropriate.
  3. They provide context. The scanner explained why each finding was a problem and what the impact could be, like the subscriber email warning explaining how the data could be stolen.

But here's what they get wrong:

  1. "Publish anyway" shouldn't exist for critical findings. If your own scanner calls something a critical error, letting the user bypass it with a single click undermines the entire point of scanning.
  2. "Ignore issue" normalizes insecurity. Every finding in the scanner has an "Ignore issue" button. This trains developers to dismiss security warnings the same way people dismiss cookie banners, reflexively and without reading.
  3. The scanner runs at publish time, not build time. By the time you see these findings, you've already built the whole app. Fixing them means understanding Supabase RLS policies, storage bucket permissions, and backend security patterns, exactly the things you used Lovable to avoid having to understand.

What this means if you're building with AI tools

This case study isn't about Lovable specifically. It's about a pattern that's playing out across every AI coding platform. The tools build functional apps fast. The security layer is either absent, advisory, or bypassable. And founders ship the result because it works, looks professional, and the deadline was yesterday.

If you're building with Lovable, Cursor, Claude, Bolt, or any other AI tool and you're about to go live, at minimum:

  1. Run whatever security scan your platform offers, and treat "critical" findings as blockers, not suggestions.
  2. Check storage directly. If you can fetch a "gated" file from a private browser window with no login, so can everyone else.
  3. Review database access policies (in Supabase, the RLS policies) instead of trusting that the frontend enforces access control.
  4. Assume any secret or key the tool placed in frontend code is already public, and rotate it.
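One quick check anyone can run before launch: probe a "gated" resource URL without logging in and look at the status code. A tiny sketch of how to read the result (our helper, not a platform API):

```typescript
// Sketch: interpret the HTTP status from an *unauthenticated* request to a
// storage URL. A 200 on an anonymous request means the gate is cosmetic.
function interpretProbe(status: number): "exposed" | "protected" | "missing" {
  if (status === 200) return "exposed";                  // anyone can download it
  if (status === 401 || status === 403) return "protected"; // auth is enforced server-side
  return "missing";                                      // wrong path, deleted, or other
}

// Usage: fetch(gatedUrl).then(res => interpretProbe(res.status))
```

It's a crude test, but it catches exactly the class of bug this case study is about: access control that exists only in the frontend.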


The bottom line

Lovable built us a working investor portal in minutes. It looked good enough to share with investors. Its own security scanner found critical vulnerabilities that would expose our pitch deck, financial projections, and investor contact list to anyone on the internet. It let us publish anyway.

The "Publish anyway" button is a perfect metaphor for where vibe coding security stands in 2026. The tools know the code is insecure. They tell you it's insecure. And then they let you ship it because the alternative, actually making developers fix the problems, would slow things down.

Speed is the point of these tools. But speed without security is how startups end up in breach disclosures instead of boardrooms.

Built Something with AI? Let Us Check It.

Our vibe coding security reviews start at $2,500. We review your auth, database access controls, and secrets management before you go live.

Book a Consultation | Learn More

Lorikeet Security Team

Penetration Testing & Cybersecurity Consulting

We've completed 170+ security engagements across web apps, APIs, cloud infrastructure, and AI-generated codebases. Everything we publish here comes from patterns we see in real client work.