Vibe Coding Is Fun Until You Commit Your API Key
(An Architectural Post, Not a Moral One)
We’ve reached a point where you can build a working system faster than you can understand the system you just built.
This is not inherently bad. It is new.
AI-assisted development has removed friction at exactly the layer where architecture used to be learned: by breaking things slowly enough that you noticed why.
Now you can vibe-code straight past that phase.
And the first thing to go isn’t elegance. It’s security boundaries.
The Most Common Vibe-Coding Smell
You open a repo and see something like:
const OPENAI_API_KEY = "sk-live-…";
Or:
export const config = {
  dbPassword: "password123"
}
Or worse: secrets embedded in prompts because “it’s just a prototype.”
This isn’t a beginner mistake. It’s a systems mistake.
Because the problem isn’t where the secret is. It’s that the builder doesn’t yet distinguish between:
code
configuration
runtime environment
AI made it possible to ship without forming that mental model.
“It’s Fine, This Is Just Vibe Coding”
This is the architectural equivalent of:
“We’ll add tests later.”
Except here, “later” involves:
leaked keys
revoked credentials
broken builds
surprise invoices
and the sudden realization that the frontend can see everything
Vibe coding collapses layers:
frontend vs backend
build time vs runtime
local vs production
public vs private
If everything is “just code,” then everything becomes visible somewhere.
.env Files Exist Because Boundaries Exist
.env files are not a best practice. They are a boundary marker.
They say:
“This value exists outside the system.”
Hard-coding secrets says the opposite:
“This system owns its secrets.”
It doesn’t. The environment does.
The moment you put secrets into code:
you couple security to deployments
you make rotation a refactor
you guarantee leaks through logs, bundles, or screenshots
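The boundary can be made explicit in code: the repo names the secret, the environment supplies it. A minimal sketch in TypeScript (the variable names are illustrative, not tied to any real service):

```typescript
// Secrets live in the environment, not in code. The repo only ever
// contains the *name* of the boundary, never the value.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    // Fail fast at startup so a missing secret can't limp into production.
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Called once at boot; rotation is now an ops task, not a refactor.
function loadConfig() {
  return {
    openaiApiKey: requireEnv("OPENAI_API_KEY"),
    dbPassword: requireEnv("DB_PASSWORD"),
  };
}
```

Now rotating a key means changing the environment and restarting, not editing, reviewing, and redeploying code.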
None of this is theoretical. Every experienced engineer has cleaned this up at least once.
Frontend Environment Variables Are Not Secret Variables
Another common vibe-coding leap:
REACT_APP_SECRET_KEY=supersecret
This is a string shipped to every browser.
The prefix is not cosmetic. It is a warning label.
AI won’t stop you from doing this because syntactically, it’s fine.
Architecturally, it’s nonsense.
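To see why, here is a rough model of what a CRA/Vite-style bundler effectively does with these variables: a build-time find-and-replace, so the "secret" lands as a literal string in the JavaScript shipped to every browser. This is a deliberately simplified sketch, not the real bundler code:

```typescript
// Simplified model of build-time env inlining (not the actual bundler).
// Any matching variable becomes a hard-coded string in the output bundle.
function inlineEnv(source: string, env: Record<string, string>): string {
  return source.replace(
    /process\.env\.([A-Z0-9_]+)/g,
    (match, name) => (name in env ? JSON.stringify(env[name]) : match)
  );
}

const appSource = `fetch(api, { headers: { key: process.env.REACT_APP_SECRET_KEY } })`;
const bundled = inlineEnv(appSource, { REACT_APP_SECRET_KEY: "supersecret" });
// `bundled` now literally contains the string "supersecret".
```

There is no runtime lookup left to protect. The secret is part of the artifact.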
AI Didn’t Break Security — It Bypassed the Learning Loop
Security didn’t get harder. It got easier to ignore.
Before:
you had to understand why the server existed
you had to understand why config was external
you had to understand why secrets don’t live in Git
Now:
the system works
the demo runs
the prompt says “make it secure”
The problem is not ignorance. It’s missing feedback.
Nothing breaks until it’s already public.
This Is Why Architecture Still Matters
Vibe coding produces software. Architecture produces systems that survive contact with reality.
If you don’t know:
what runs where
what gets bundled
what executes at build time vs runtime
what the client can see
what the server must protect
Then you’re not moving fast. You’re skipping the map and hoping the road is straight.
Vibe coding is a powerful accelerator. But acceleration without boundaries is just speed toward exposure.
If your system can’t answer:
where secrets live
how they are injected
who can see them
how they rotate
Then the problem isn’t AI. And it isn’t vibes.
It’s that the architecture was never there to begin with.
The boundary rule
If it must be secret, trusted, rate-limited, or audited → it lives on the backend.
Frontend is untrusted by default (users can inspect, modify, replay, automate).
What belongs in the frontend (untrusted zone)
Frontend can safely handle:
UI state & presentation
prompts the user types
conversation rendering
client-side validation (UX only)
Non-sensitive configuration
public feature flags
public model names if they’re not tied to secrets
public analytics IDs (still be careful)
Auth session tokens (short-lived)
only if they are scoped and revocable (e.g., standard app sessions)
Direct-to-storage uploads (optional)
only via pre-signed URLs or scoped upload tokens issued by backend
Never treat frontend env vars as secret storage. Anything shipped to the browser is public.
What belongs in the backend (trusted zone)
Backend should own anything that is:
1) Secrets & credentials
LLM API keys (OpenAI/Anthropic/etc.)
vector DB keys
database credentials
webhook secrets
encryption keys
signing keys (JWT private keys)
2) Policy enforcement
rate limiting / quotas per user
prompt injection defenses (input sanitation + tool constraints)
content filtering / moderation policy
tool permissioning (“this user can call retrieval but not email”)
3) Data access & privacy
fetching from private databases
retrieval over proprietary docs
PII handling & redaction
audit logs / compliance trails
“right to delete” workflows
4) LLM orchestration
prompt templates (system prompts)
tool calling / function routing
memory rules
model selection & fallback
streaming responses (server streams to client)
The “three layers” architecture that keeps you safe
Layer A — Frontend (Browser/App)
Responsibility: collect user input, display output
Trust level: zero
Key pattern: frontend calls your backend, never the LLM provider directly (unless you really know what you’re doing).
Layer B — Backend API (Your control plane)
Responsibility: identity, policy, orchestration, secrets
Trust level: high
Key pattern: all LLM calls go through here.
Layer C — Data plane (Storage + Retrieval)
Responsibility: data persistence and retrieval
Trust level: high, but access must be brokered by backend
Key pattern: least-privilege access, scoped tokens, audit.
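A compressed sketch of Layer B as a single request handler, with the framework omitted for brevity. `callProvider` stands in for whatever LLM SDK you actually use, and the 30-requests-per-minute limit is an arbitrary example:

```typescript
type User = { id: string; requestsThisMinute: number };

// Identity check, policy check, server-side secret, server-side prompt
// assembly -- the four things the browser must never own.
async function handleChat(
  user: User | null,
  userMessage: string,
  callProvider: (apiKey: string, prompt: string) => Promise<string>
): Promise<{ status: number; body: string }> {
  if (!user) return { status: 401, body: "not signed in" };                // identity
  if (user.requestsThisMinute >= 30) return { status: 429, body: "rate limited" }; // policy
  const apiKey = process.env.LLM_API_KEY;                                  // secret never leaves the server
  if (!apiKey) return { status: 500, body: "server misconfigured" };
  // System prompt lives here; the client can neither read nor overwrite it.
  const prompt = `You are a careful assistant.\n\nUser: ${userMessage}`;
  return { status: 200, body: await callProvider(apiKey, prompt) };
}
```

The frontend sends a message and renders a reply. Everything else happens behind this function.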
Concrete mapping: where each thing should live
Prompts
Frontend: user-entered text, UI-only guidance
Backend: system prompt, safety policies, tool rules, template assembly
Tool calling (MCP, function calls, RAG, webhooks)
Frontend: never decides tool calls
Backend: tool routing + allowlists + parameter validation + timeouts
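The allowlist part can be as small as a lookup table checked before any tool executes. A sketch, with illustrative roles and tool names:

```typescript
// The backend, not the client or the model, decides which tools a user
// may invoke. Roles and tool names here are illustrative.
const TOOL_ALLOWLIST: Record<string, readonly string[]> = {
  viewer: ["retrieve_docs"],
  editor: ["retrieve_docs", "send_email"],
};

// Checked server-side before any tool call is routed; unknown roles get nothing.
function authorizeToolCall(role: string, tool: string): boolean {
  return (TOOL_ALLOWLIST[role] ?? []).includes(tool);
}
```

If the model (or a prompt-injected document) asks for `send_email` on behalf of a viewer, the call dies here, regardless of what the client claimed.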
RAG (retrieval)
Frontend: query input + display citations
Backend: retrieval execution, chunking rules, filters, access control
Data layer: vector store + document store
Files (PDFs, TMs, screenshots, etc.)
Frontend: upload UI
Backend: virus scan, content-type checks, storage token minting, redaction pipeline
Storage: object store (S3/Supabase Storage/etc.)
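The "storage token minting" step is the same idea behind S3 pre-signed URLs: the backend signs a scoped, expiring grant, and the client can hold it but not widen it. A sketch using an HMAC; the field layout is illustrative:

```typescript
import { createHmac } from "node:crypto";

// Server-side minting of a scoped, expiring upload grant. The signing
// secret never leaves the backend; the client only carries the token.
function mintUploadToken(
  secret: string,
  userId: string,
  contentType: string,
  ttlMs: number
): { payload: string; sig: string } {
  const expires = Date.now() + ttlMs;
  const payload = `${userId}|${contentType}|${expires}`;
  const sig = createHmac("sha256", secret).update(payload).digest("hex");
  return { payload, sig };
}

// The storage endpoint re-computes the signature and checks the expiry;
// any tampering with scope or lifetime invalidates the token.
function verifyUploadToken(secret: string, payload: string, sig: string): boolean {
  const expected = createHmac("sha256", secret).update(payload).digest("hex");
  const expires = Number(payload.split("|")[2]);
  return sig === expected && Date.now() < expires;
}
```

In production you would use the object store's own pre-signed URL API rather than rolling this yourself; the sketch just shows where the trust lives.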
Logging
Frontend: minimal (no secrets, no full prompts if sensitive)
Backend: structured logs + audit events + trace IDs
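Backend logging can enforce the boundary too: redact obvious secret shapes before anything reaches a log sink. A sketch with an illustrative key pattern:

```typescript
// Scrub anything that looks like a provider key before it can leak
// through a log aggregator. The pattern is illustrative, not exhaustive.
function redact(text: string): string {
  return text.replace(/sk-[A-Za-z0-9-]+/g, "sk-***");
}

// Structured, trace-correlated events; detail is redacted on the way in.
function logEvent(traceId: string, event: string, detail: string): void {
  console.log(JSON.stringify({ traceId, event, detail: redact(detail), ts: Date.now() }));
}
```

Redaction at the logging seam means a single careless `logEvent` call cannot re-leak a key that the rest of the architecture worked to contain.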
Red flags (classic vibe-coding boundary violations)
If you see any of these, you’ve crossed the streams:
Calling the LLM provider directly from the browser with an API key
Putting provider keys in .env files that get bundled (Vite/CRA public env)
Letting the client choose tools like "send_email": true
Passing raw database queries from frontend to backend
Storing full chat logs with secrets/PII without retention rules
“Auth” that’s just a userId in localStorage with no server verification
And just to be clear: I’m not a systems architect, a security researcher, or someone who enjoys arguing about zero-trust diagrams on the internet. But I have worked in software companies long enough to have learned one thing the hard way: security is not something you sprinkle on later like parsley. If you don’t respect boundaries from the start, the system will eventually teach you why they mattered, usually at the worst possible moment.
So this isn’t expertise speaking: it’s experience, a bit of common sense, and a healthy desire to sleep at night knowing I won’t be rotating keys over a pint.


