5 Security Holes I Find in Every Vibe-Coded App
After auditing dozens of AI-generated applications built with Cursor, bolt.new, Lovable, and other vibe coding tools, I've started to see the same vulnerabilities over and over again. Here's what I find almost every single time, and how to fix them.
I'm not here to bash vibe coding. The tools are genuinely impressive. I've watched founders build working prototypes in hours that would have taken weeks before. But there's a gap between "it works" and "it's safe to put in front of real users."
According to research from Stanford and Georgetown, 40-50% of AI-generated code contains security flaws. Nearly half. And the scary part? Most founders have no idea they're shipping vulnerable code.
Here are the five security holes I find in almost every vibe-coded app I audit.
1. Exposed API Keys in Frontend Code
This one takes the crown. In roughly 70% of the vibe-coded apps I review, I find sensitive API keys or tokens exposed in the client-side JavaScript bundle.
const supabase = createClient(
'https://xyzcompany.supabase.co',
'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...' // Service key, not anon key!
)
The AI often reaches for the service key because it "just works" with no RLS configuration needed. The problem is it works for everyone, including attackers.
The Fix
- Use the anon key in frontend code, never the service key
- Keep service keys in environment variables on the server only
- Set up proper Row Level Security policies
- Audit your bundle: run grep -r "eyJ" ./dist to find exposed JWTs
2. Missing or Misconfigured Row Level Security
This is the silent killer. Your app looks secure. Login works. Users can only see "their" data in the UI. But the database itself is wide open.
CREATE POLICY "Users can view own data" ON profiles
FOR SELECT USING (true); -- This allows EVERYONE to see EVERYTHING
-- What it should be
CREATE POLICY "Users can view own data" ON profiles
FOR SELECT USING (auth.uid() = user_id);
In one memorable audit, I found a SaaS app where any authenticated user could read every other user's payment information, personal details, and private documents. The UI prevented it. The database did not.
The Fix
- Enable RLS on every table: ALTER TABLE tablename ENABLE ROW LEVEL SECURITY;
- Test policies by logging in as different users
- Use auth.uid() in your policies, not client-provided user IDs
- Don't just rely on the UI to restrict access
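The policy's logic is simple enough to mirror in plain code. This sketch (function names are mine, purely illustrative) spells out what the correct USING clause enforces; the real enforcement must stay in Postgres, this is documentation-as-code only:

```javascript
// Mirror of the correct policy: USING (auth.uid() = user_id).
// A row is visible only to the authenticated user who owns it.
function canSelectProfile(row, authUid) {
  return authUid !== null && row.user_id === authUid;
}

// Mirror of the broken policy: USING (true).
// Every authenticated user sees every row -- exactly the bug.
function brokenPolicy(row, authUid) {
  return true;
}
```

Writing the predicate out like this also makes a good checklist when reviewing generated policies: if the clause doesn't mention auth.uid(), be suspicious.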
3. SQL Injection via Unparameterised Queries
AI models love to build dynamic queries by concatenating strings. It's intuitive. It's also one of the oldest vulnerabilities in the book.
Research shows up to 40% of AI-generated database queries are vulnerable to injection attacks. The AI understands the pattern but doesn't consistently apply security best practices.
The Fix
- Always use parameterised queries
- Validate and sanitise all user inputs
- Use an ORM that handles escaping automatically
- Never interpolate user input directly into SQL strings
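To make the contrast concrete, here's a sketch using the { text, values } query shape that node-postgres (pg) accepts (the users table and email column are hypothetical):

```javascript
// VULNERABLE: user input is spliced into the SQL string itself.
// email = "' OR '1'='1" turns this into a query matching every row.
function buildUnsafeQuery(email) {
  return `SELECT * FROM users WHERE email = '${email}'`;
}

// SAFE: the SQL text contains only a placeholder; the value travels
// separately, so the database never parses it as SQL. This object can
// be passed straight to pg, e.g. await pool.query(buildSafeQuery(email)).
function buildSafeQuery(email) {
  return {
    text: 'SELECT * FROM users WHERE email = $1',
    values: [email],
  };
}
```

The safe version isn't escaping the input, it's keeping the input out of the SQL entirely, which is why parameterisation beats sanitisation.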
4. Authentication Bypass via Client-Side Checks
This one's subtle. The AI builds a beautiful auth flow. Login works. Protected routes redirect to login. Looks bulletproof. But the actual API endpoints? Completely unprotected.
An attacker doesn't use your UI. They call your API directly. If the auth check only happens in the browser, it's not really a check at all.
The Fix
- Verify authentication on every API endpoint
- Use middleware to enforce auth globally
- Never trust the client. Always verify server-side
- Test endpoints directly with curl or Postman, not just through the UI
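A global middleware can look like this Express-style sketch. Everything here is illustrative: verifyToken is a placeholder for your real token check (e.g. supabase.auth.getUser(token)), and makeAuthMiddleware is my own name, not a library API:

```javascript
// Express-style middleware: every request must carry a valid bearer
// token, or it is rejected before any route handler runs.
function makeAuthMiddleware(verifyToken) {
  return async function requireAuth(req, res, next) {
    const header = req.headers['authorization'] ?? '';
    const token = header.startsWith('Bearer ') ? header.slice(7) : null;
    // verifyToken should return the user for a valid token, else null.
    const user = token ? await verifyToken(token) : null;
    if (!user) {
      res.statusCode = 401;
      return res.end('Unauthorized');
    }
    req.user = user; // downstream handlers can trust this
    next();
  };
}
```

Mounted with app.use(makeAuthMiddleware(verify)) before your routes, the check runs on every request, so a forgotten per-route guard can't leave an endpoint open.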
5. Hardcoded Secrets and No Environment Variables
I regularly find apps where every secret, API key, and database credential is hardcoded directly in the source files. Committed to Git. Sometimes even pushed to public repos.
Once it's in your Git history, it's there forever (unless you know how to properly scrub it). Bots actively scan GitHub for exposed credentials. The window between commit and compromise can be minutes.
The Fix
- Use environment variables for all secrets
- Add .env to .gitignore before your first commit
- Use a secrets manager for production (Vercel, Railway, etc. all have built-in options)
- Rotate any keys that have ever been committed
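Once secrets live in environment variables, a small startup guard makes a missing one fail loudly at boot instead of silently at request time. This is a sketch; the requireEnv name is mine, not a library API:

```javascript
// Fail fast at startup if any required secret is missing or blank.
// Pass process.env in real code; the env parameter is injectable so
// the function can be unit-tested without touching the real environment.
function requireEnv(keys, env = process.env) {
  const missing = keys.filter((k) => !env[k] || env[k].trim() === '');
  if (missing.length > 0) {
    throw new Error(`Missing required env vars: ${missing.join(', ')}`);
  }
  return Object.fromEntries(keys.map((k) => [k, env[k]]));
}
```

Typical use at the top of a server entry point: const { SUPABASE_SERVICE_KEY } = requireEnv(['SUPABASE_SERVICE_KEY']); -- server-side only, never in code that gets bundled for the browser.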
Run git log -p | grep -i "api_key\|secret\|password" to check if you've ever committed sensitive strings to your repo.
The Bottom Line
Vibe coding is powerful, but it's not magic. The AI doesn't understand your security requirements. It doesn't know that your users' data needs protecting. It just writes code that appears to work. If you're new to vibe coding, check out our complete guide to what vibe coding is and how it works.
The good news? These vulnerabilities are all fixable. Most take less than an hour to address once you know what to look for. The key is actually looking before your users (or attackers) find them for you.
If you're not sure whether your vibe-coded app has these issues, it probably does. I've yet to audit one that didn't have at least three of these five problems.
Not sure if your app is secure?
Book a free 30-minute consultation. We'll take a quick look at your codebase and tell you honestly what needs fixing. No sales pressure, no BS.
Book Free Consultation