Vibe coding, the practice of building apps by describing what you want to an AI and letting it write the code, is the fastest way to ship software in 2025. Cursor, Claude Code, Copilot, and Windsurf have made it possible to go from idea to deployed app in hours.
But there's a problem: AI coding tools optimize for functionality, not security.
We've scanned hundreds of vibe-coded projects with Ship Safe, and the same security patterns keep appearing. Here's what we found.
1. Hardcoded Secrets
The most common finding by far. AI assistants frequently fill in configuration with real-looking API keys, database URLs, and auth tokens.
```javascript
// AI-generated config
const stripe = require('stripe')('sk_live_51ABC...');
const db = new Pool({ connectionString: 'postgresql://admin:pass@...' }); // ship-safe-ignore — example code
```

Fix: Always use environment variables. Run `npx ship-safe scan .` to catch any that slip through.
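One way to apply the fix, sketched in TypeScript. The helper name `requireEnv` and the environment variable names are illustrative, not part of any library:

```typescript
// Hypothetical sketch: load secrets from the environment and fail fast if one
// is missing, so a misconfigured deploy errors out instead of running unkeyed.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// const stripe = require('stripe')(requireEnv('STRIPE_SECRET_KEY'));
// const db = new Pool({ connectionString: requireEnv('DATABASE_URL') });
```

Failing fast at startup is deliberate: a missing secret should crash the boot, not surface later as a confusing runtime auth error.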
2. API Routes Without Authentication
AI generates the endpoint logic beautifully but forgets the auth middleware.
```typescript
// AI-generated: "create an API endpoint to delete a user"
export async function DELETE(req: Request) {
  const { userId } = await req.json();
  await db.user.delete({ where: { id: userId } });
  return Response.json({ success: true });
}
// Anyone can delete any user
```

Fix: Always wrap state-changing routes with auth middleware. Ship Safe's AuthBypassAgent flags these automatically.
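A minimal sketch of the authorization check the route above is missing. The `Session` shape and helper name are illustrative, not a specific framework's API:

```typescript
// Hypothetical sketch: decide whether the caller may delete the target account.
type Session = { userId: string; role: "admin" | "user" } | null;

function canDeleteUser(session: Session, targetUserId: string): boolean {
  if (!session) return false;                 // unauthenticated: always deny
  if (session.role === "admin") return true;  // admins may delete any account
  return session.userId === targetUserId;     // users may only delete themselves
}
```

In the route handler, call `canDeleteUser` with the server-side session before `db.user.delete`, and return a 401/403 response when it fails.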
3. Raw SQL Queries
AI sometimes reaches for raw queries instead of parameterized ones, especially for complex filtering.
```python
# AI-generated: "search users by name"
@app.route('/search')
def search():
    name = request.args.get('name')
    results = db.execute(f"SELECT * FROM users WHERE name LIKE '%{name}%'")
    return jsonify(results)
```

Fix: Always use parameterized queries. Ship Safe's InjectionTester catches SQL injection, NoSQL injection, and command injection patterns.
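The parameterized version keeps user input out of the SQL string entirely. A sketch in TypeScript, assuming a driver like node-postgres that accepts a `{ text, values }` query object (the helper function is illustrative):

```typescript
// Hypothetical sketch: return the SQL text and the user-supplied value
// separately, so the database driver binds and escapes the value itself.
function buildSearchQuery(name: string): { text: string; values: string[] } {
  return {
    text: "SELECT * FROM users WHERE name LIKE $1", // $1 is a bound placeholder
    values: [`%${name}%`],                          // driver escapes this value
  };
}

// await pool.query(buildSearchQuery(userInput)); // input never reaches the SQL text
```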
4. Missing Input Validation
Server Actions, API routes, and form handlers that trust user input blindly. A common pattern: AI generates a form handler that passes `role` from the form directly to the database, letting users promote themselves to admin.
Fix: Use Zod schemas to validate all user input. Whitelist allowed fields explicitly.
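Whether you reach for Zod or hand-rolled checks, the core idea is an allow-list: copy only the fields you expect and drop everything else. A minimal sketch in plain TypeScript (the field names are illustrative):

```typescript
// Hypothetical sketch of the allow-list behind a validation schema:
// copy only permitted, correctly typed fields from the request body.
type UserUpdate = { name?: string; email?: string };

function sanitizeUserUpdate(body: Record<string, unknown>): UserUpdate {
  const update: UserUpdate = {};
  if (typeof body.name === "string") update.name = body.name;
  if (typeof body.email === "string") update.email = body.email;
  // body.role is deliberately ignored: privilege changes need their own audited path
  return update;
}
```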
5. Excessive LLM Agency
If you're building AI features, AI assistants often give the LLM too much power: direct database writes, shell commands, file system access, all without human approval.
Fix: Restrict destructive tools behind a human-in-the-loop approval step. Ship Safe's AgenticSecurityAgent checks for OWASP LLM04 (Excessive Agency).
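One way to sketch that approval step. The tool names and the `approve` callback are assumptions for illustration, not a real framework's API:

```typescript
// Hypothetical sketch: gate destructive LLM tool calls behind human approval.
const DESTRUCTIVE_TOOLS = new Set(["delete_record", "run_shell", "write_file"]);

type ToolResult = { status: "executed" | "denied"; tool: string };

function executeTool(
  name: string,
  args: unknown,
  approve: (prompt: string) => boolean, // e.g. a CLI prompt or dashboard button
): ToolResult {
  if (DESTRUCTIVE_TOOLS.has(name)) {
    const ok = approve(`LLM wants to call ${name}(${JSON.stringify(args)}). Allow?`);
    if (!ok) return { status: "denied", tool: name };
  }
  // Read-only tools and approved destructive tools run normally.
  return { status: "executed", tool: name };
}
```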
6. Docker Running as Root
AI generates a working Dockerfile, but usually without a non-root user. If the container is compromised, the attacker starts with root privileges inside it, which makes a container escape far more damaging.
Fix: Add a USER directive to your Dockerfile. Ship Safe's ConfigAuditor flags this.
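A minimal sketch of the fix (the base image, paths, and the `app` user name are illustrative):

```dockerfile
FROM node:20-slim
WORKDIR /app
COPY . .
RUN npm ci --omit=dev
# Create an unprivileged user and switch to it before the process starts
RUN addgroup --system app && adduser --system --ingroup app app
USER app
CMD ["node", "server.js"]
```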
7. Wildcard Dependencies
AI often adds dependencies without pinning versions, or uses * for quick setup. This is a supply chain attack vector.
Fix: Pin exact versions. Use `npx ship-safe audit .` to catch wildcard versions and known CVEs in your dependency tree.
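In `package.json`, that means replacing `"express": "*"` with an exact version (the package name and version number below are illustrative):

```json
{
  "dependencies": {
    "express": "4.19.2"
  }
}
```

Pair pinned versions with a committed lockfile so transitive dependencies are frozen too.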
The Fix: One Command After Every Vibe Coding Session
```shell
npx ship-safe audit .
```

18 agents, 80+ attack classes, 3 seconds. Free and open source.
Add it to your pre-commit hook to make it automatic:
```shell
npx husky init
echo "npx ship-safe diff --staged" > .husky/pre-commit
```

Ship fast. Ship safe.