Vibe coding security risks are no longer theoretical. A December 2025 study by Tenzai tested 15 applications built by the five most popular AI coding tools — Cursor, Claude Code, Replit, Devin, and OpenAI Codex — and found 69 security vulnerabilities across them. Every single tool introduced Server-Side Request Forgery. Zero of the 15 apps had CSRF protection. Zero set any security headers. If you shipped a vibe-coded app to production this year, it almost certainly has exploitable holes right now.
I have been building developer tools at PinusX for a while now, and the volume of insecure AI-generated code I see passing through our VibeScan security scanner has tripled in the last six months. This is not a niche problem anymore. This is the default state of how software gets built in 2026.
The Tenzai Study: 69 Vulnerabilities Across 5 AI Coding Tools
The research methodology was straightforward. Tenzai asked each of the five major AI coding tools to build three web applications — a task manager, an e-commerce app, and a social media clone. Standard web apps. Nothing exotic. Then they ran security audits on all 15 resulting codebases.
The results were ugly:
- Claude Code: 16 vulnerabilities, 4 critical
- Devin: 14 vulnerabilities
- Cursor: 13 vulnerabilities
- OpenAI Codex: 13 vulnerabilities
- Replit: 13 vulnerabilities
The distribution tells you something important: this is not one bad tool. Every AI coding assistant produced code with serious security flaws. The problem is systemic — it lives in the training data, the optimization targets, and the fundamental way these models understand "working code."
100% SSRF Rate
Every single AI coding tool — all five, across all three apps — introduced Server-Side Request Forgery vulnerabilities. That is a 100% failure rate on one of the most dangerous vulnerability classes in web applications.
SSRF lets an attacker make the server send requests to internal services, cloud metadata endpoints, and other resources that should never be reachable from the outside. In a cloud environment, that often means access to 169.254.169.254 — the instance metadata service — which hands over IAM credentials, API keys, and everything else needed to own the infrastructure.
Here is what AI-generated SSRF-vulnerable code typically looks like:
```javascript
// AI-generated: fetches a URL provided by the user
app.get("/api/fetch-url", async (req, res) => {
  const { url } = req.query;
  // No validation. No allowlist. Just fetches whatever you give it.
  const response = await fetch(url);
  const data = await response.text();
  res.send(data);
});
```
The fix requires URL validation, protocol restrictions, and ideally an allowlist of permitted domains:
```javascript
// Secure: validates URL before fetching
app.get("/api/fetch-url", async (req, res) => {
  const { url } = req.query;

  // Parse and validate
  let parsed;
  try {
    parsed = new URL(url);
  } catch {
    return res.status(400).json({ error: "Invalid URL" });
  }

  // Protocol allowlist
  if (!["https:", "http:"].includes(parsed.protocol)) {
    return res.status(400).json({ error: "Invalid protocol" });
  }

  // Block internal/metadata IPs
  const blocked = [
    "169.254.169.254", "127.0.0.1", "0.0.0.0",
    "localhost", "metadata.google.internal"
  ];
  if (blocked.includes(parsed.hostname)) {
    return res.status(403).json({ error: "Blocked host" });
  }

  const response = await fetch(parsed.toString());
  const data = await response.text();
  res.send(data);
});
```
Zero Security Headers. Zero CSRF Protection.
Not a single one of the 15 applications set Content-Security-Policy, Strict-Transport-Security, X-Frame-Options, or any other security header. None. This is the HTTP equivalent of leaving every door and window unlocked because you forgot buildings need locks.
Similarly, zero apps implemented CSRF protection. Forms could be submitted from any origin. That means any website could make authenticated requests on behalf of your users — transferring money, changing passwords, deleting accounts — with a simple hidden form.
Only 1 of the 15 apps even attempted rate limiting. And that one implementation was bypassable.
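Even a basic fixed-window limiter is better than nothing. The sketch below keeps per-key counters in memory; assume `key` is something like the client IP. This is illustrative only — a real deployment behind multiple server instances needs shared state (for example Redis), and most teams would reach for an existing package such as `express-rate-limit` rather than hand-rolling it:

```javascript
// Minimal in-memory fixed-window rate limiter: allows `limit` requests
// per `windowMs` per key. Returns a function you call on each request;
// it answers true if the request is allowed, false if throttled.
function createRateLimiter({ limit, windowMs }) {
  const hits = new Map(); // key -> { count, windowStart }
  return function allow(key, now = Date.now()) {
    const entry = hits.get(key);
    if (!entry || now - entry.windowStart >= windowMs) {
      hits.set(key, { count: 1, windowStart: now });
      return true;
    }
    entry.count += 1;
    return entry.count <= limit;
  };
}
```

Wire it into auth and API routes first — those are the endpoints attackers brute-force.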
The Broader Vibe Coding Security Crisis in Numbers
The Tenzai study is damning on its own, but it is consistent with a wave of research all pointing the same direction:
- 45% vulnerability rate — Veracode's 2025 GenAI Code Security Report found that 45% of AI-generated code across 80 coding tasks contained security vulnerabilities
- Only 10.5% is actually secure — Carnegie Mellon University found that while 61% of AI-generated code is functionally correct, only 10.5% passes a security review
- 2.74x more XSS — CodeRabbit's analysis showed AI-generated code contains 2.74 times more cross-site scripting vulnerabilities than human-written code
- 400+ exposed secrets — Escape.tech scanned 5,600 publicly deployed vibe-coded applications and found over 400 exposed API keys and secrets
- CVEs doubling annually — AI-related CVEs jumped from 168 in 2024 to 330 in 2025, nearly doubling year-over-year as agentic development scales
Read that Carnegie Mellon number again. Fewer than 11 out of every 100 AI-generated code snippets are secure. The other 89 either have exploitable vulnerabilities or fail to follow security best practices. And most developers never run a security scan before deploying.
Why AI Coding Tools Keep Producing Insecure Code
Understanding the vibe coding security risks requires understanding why the models fail in predictable ways. It is not random — the failures cluster around specific patterns.
Training Data Reflects Public Code (Which Is Mostly Insecure)
AI models learn from public repositories. Most code on GitHub is tutorial code, prototype code, or code written without security review. A March 2026 analysis from Wits University highlighted this directly: AI models absorb both secure and insecure patterns from public repos, perpetuating legacy practices and deprecated standards.
When you ask an AI to write a SQL query, it will default to string concatenation — because that is what most of the training examples look like:
```javascript
// What AI generates by default (SQL injection via string concatenation)
const query = "SELECT * FROM users WHERE email = '" + email + "'";
```

```javascript
// What you actually need: a parameterized query
const query = "SELECT * FROM users WHERE email = $1";
const result = await pool.query(query, [email]);
```
Optimized for "Works" Not "Secure"
AI coding tools are evaluated on functional correctness. Does the code run? Does it pass the tests? Does the app load? Security is almost never part of the evaluation loop. The result is code that works perfectly and is riddled with vulnerabilities — the software equivalent of a car with no brakes that accelerates beautifully.
Hallucinated Dependencies
The Wits University researchers highlighted another vibe coding security risk: hallucinated package names. AI models sometimes reference packages that do not exist. Attackers register those nonexistent package names on npm or PyPI and fill them with malicious code. When a developer installs dependencies from their AI-generated package.json, they pull in the attacker's payload. It is supply chain compromise via hallucination.
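One cheap defensive heuristic is to flag any dependency whose name is a single edit away from a well-known package — the typical shape of both typosquats and hallucinated names. The sketch below is my own illustrative check, not a published tool; the `known` list would need to be a real corpus of popular package names, and a single-edit match is a signal to investigate, not proof of malice:

```javascript
// Standard Levenshtein edit distance between two strings.
function editDistance(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,      // deletion
        dp[i][j - 1] + 1,      // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Flags dependency names that are one edit away from a known package
// but are not themselves in the known list.
function suspiciousDeps(deps, known) {
  return deps.filter(
    (name) => !known.includes(name) && known.some((k) => editDistance(name, k) === 1)
  );
}
```

Running this over the dependency names in an AI-generated `package.json` would surface entries like `expres` sitting next to the real `express`.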
CVEs in the Tools Themselves
It gets worse. The vibe coding security risks extend beyond generated code to the AI coding tools themselves. Over 30 vulnerabilities across 24 CVEs have been identified in the tools developers use to write code:
- CVE-2025-54135 (Cursor): An MCP-related vulnerability that could allow malicious Model Context Protocol servers to execute arbitrary actions through the Cursor IDE
- CVE-2025-55284 (Claude Code): A DNS exfiltration vulnerability where Claude Code could be tricked into leaking sensitive data through DNS lookups embedded in generated code
So the tools generating insecure code also have their own exploitable attack surfaces. If you are running an AI coding tool with MCP servers connected, you are extending your trust boundary to every MCP server in the chain.
How to Scan and Fix Vibe-Coded Applications
Here is the practical part. You have a vibe-coded app in production, or you are about to ship one. What do you do?
Step 1: Run an Automated Security Scan
Before anything else, run your codebase through a security scanner built specifically for AI-generated code patterns. You can scan your app for free at tools.pinusx.com/vibescan — it checks for the exact vulnerability classes that AI coding tools produce most frequently: SSRF, XSS, SQL injection, missing security headers, exposed secrets, and CSRF gaps.
VibeScan Pro goes deeper with full OWASP Top 10 coverage, dependency vulnerability scanning, and continuous monitoring so new vulnerabilities get flagged before they reach production.
Step 2: Add Security Headers
Since zero AI coding tools set security headers automatically, add them yourself. At minimum:
- `Content-Security-Policy`: prevents XSS by controlling which scripts can execute
- `Strict-Transport-Security`: forces HTTPS
- `X-Frame-Options: DENY`: prevents clickjacking
- `X-Content-Type-Options: nosniff`: prevents MIME-type sniffing
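In practice most Express apps add these with the `helmet` package in one line. To make the mechanics visible, here is a hand-rolled middleware sketch that sets the same four baseline headers (the CSP value shown is a restrictive starting point you would tune for your app):

```javascript
// Express-style middleware that applies the four baseline security headers.
// Illustrative; the `helmet` package sets these and several more.
const SECURITY_HEADERS = {
  "Content-Security-Policy": "default-src 'self'",
  "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
  "X-Frame-Options": "DENY",
  "X-Content-Type-Options": "nosniff",
};

function securityHeaders(req, res, next) {
  for (const [name, value] of Object.entries(SECURITY_HEADERS)) {
    res.setHeader(name, value);
  }
  next();
}
```

Register it before your routes with `app.use(securityHeaders)` so every response carries the headers.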
Step 3: Audit Authentication and API Endpoints
AI-generated auth code is particularly dangerous. Check for hardcoded secrets, missing token validation, and overly permissive CORS. If your app uses JWTs, decode them at tools.pinusx.com/jwt to verify the algorithm, expiration, and claims are configured correctly. I wrote about JWT security best practices separately — the short version is: never trust the algorithm header from the token itself.
Step 4: Test Your Endpoints
Use an API testing tool to manually probe your endpoints. Try sending requests without authentication tokens. Try sending requests with modified payloads. Try SSRF payloads against any endpoint that accepts URLs. If your app accepts webhook callbacks, verify those endpoints validate signatures and reject replayed requests.
Step 5: Lock Down Dependencies
Run npm audit or your language equivalent. Cross-reference your dependency list against known packages. If any dependency name looks unusual or has zero downloads, investigate — it may be a hallucinated package name that was squatted by an attacker.
The Vibe Coding Security Checklist
| Check | What to Look For | AI Failure Rate |
|---|---|---|
| SSRF Protection | URL validation, IP blocking on fetch/request endpoints | 100% fail |
| CSRF Tokens | Anti-CSRF tokens on all state-changing forms | 100% fail |
| Security Headers | CSP, HSTS, X-Frame-Options, X-Content-Type-Options | 100% fail |
| Rate Limiting | Request throttling on auth and API endpoints | 93% fail |
| SQL Injection | Parameterized queries, no string concatenation | High |
| XSS Prevention | Output encoding, CSP, sanitized user input | 2.74x vs human |
| Secret Management | No hardcoded keys, env vars properly loaded | 400+ exposed in 5,600 apps |
| Auth Implementation | Proper token validation, secure session handling | High |
Will AI Coding Tools Get More Secure?
Probably. Eventually. The tool vendors are aware of these reports and are investing in security guardrails. But the timeline for "AI generates secure code by default" is measured in years, not months. The models need security-focused fine-tuning, the evaluation benchmarks need to include security metrics, and the training data problem has no quick fix.
In the meantime, every developer using AI coding tools needs to treat the generated code exactly like code from a junior developer who has never heard of OWASP. Review it. Scan it. Test it. Do not assume that code which runs correctly is code that runs safely.
The 69 vulnerabilities across 15 apps were not edge cases. They were the norm. The question is not whether your vibe-coded app has security vulnerabilities — it is how many, and whether you find them before someone else does.
Start by scanning your codebase at tools.pinusx.com/vibescan. It takes less time than reading this article, and it might save you from a very bad day.
Frequently Asked Questions
What are the biggest vibe coding security risks?
The most critical vibe coding security risks are Server-Side Request Forgery (SSRF), missing CSRF protection, absent security headers, SQL injection via string concatenation, and cross-site scripting (XSS). The Tenzai study found that 100% of AI coding tools introduced SSRF and 0% of AI-generated apps included CSRF protection or security headers like Content-Security-Policy and Strict-Transport-Security.
How many vulnerabilities do AI coding tools produce?
Research shows consistently high vulnerability rates. The Tenzai study found 69 vulnerabilities across 15 apps built by five major AI tools. Veracode reported a 45% vulnerability rate across 80 AI coding tasks. Carnegie Mellon found that only 10.5% of AI-generated code is actually secure, even when 61% is functionally correct.
Is Cursor safe to use for coding?
Cursor is a capable coding tool but its generated code requires security review. The Tenzai study found 13 vulnerabilities in Cursor-generated apps. Additionally, CVE-2025-54135 identified a vulnerability in Cursor itself related to MCP server interactions. Use Cursor for productivity, but always run security scans on the output before deploying to production.
How do I scan my vibe-coded app for security vulnerabilities?
Use an automated security scanner designed for AI-generated code patterns. Tools like VibeScan check for the specific vulnerability classes AI tools produce most often — SSRF, XSS, SQL injection, missing security headers, exposed secrets, and CSRF gaps. You should also run npm audit for dependency vulnerabilities and manually test authentication endpoints.
Why does AI-generated code have more security vulnerabilities than human-written code?
AI models learn from public repositories where most code is written without security review — tutorials, prototypes, and hobby projects. They optimize for functional correctness rather than security. CodeRabbit's analysis found AI-generated code contains 2.74 times more XSS vulnerabilities than human-written code. The models also hallucinate package names, creating supply chain attack vectors.
Can AI coding tools introduce supply chain vulnerabilities?
Yes. AI models sometimes reference packages that do not exist. Attackers monitor for these hallucinated package names and register them on npm or PyPI with malicious code inside. When a developer installs dependencies from AI-generated configuration files, they unknowingly pull in the attacker's payload. Always verify that every dependency in your package.json or requirements.txt is a legitimate, well-known package.
What security headers should I add to my AI-generated web app?
At minimum, add Content-Security-Policy (prevents XSS), Strict-Transport-Security (forces HTTPS), X-Frame-Options set to DENY (prevents clickjacking), and X-Content-Type-Options set to nosniff (prevents MIME-type sniffing). The Tenzai study found that zero out of 15 AI-generated apps set any of these headers. Most web frameworks have middleware packages that add all of them in a few lines of code.