Vibe Coding, Agentic Engineering, and When AI Is Not Enough
Oh no, not another vibe coding post. Yes, Brenda, another one.
But it's actually a bit more nuanced than that: I am trying to see it as a tool rather than a total pile of rubbish. It has made me quicker and way more productive, and we excel together on small tasks. But there is a limit, and that is what this post is about.
I am a frontend-leaning developer with loads of experience in the UI layer, accessibility, component architecture, design systems, and shipping front-end code in production (including in fintech). A complex backend task was a different kind of problem, and that experience did not automatically translate into being able to own the feature end to end in that domain. So I tried to vibe code it.
I am a fire starter vibe coder
You know how it goes. Oh my god, I am such a tech bro, I just vibe coded my very first app and you can see it here http://localhost:8000/.
LOL. NO.
As actual developers, we don't really vibe code this way. It's not like YOLOing into a new codebase and building a new app in a few hours. In any business with complex architecture and multiple repos, you will not successfully develop a feature that way without a heap of problems.
I am a vibe coder agentic engineer
Addy Osmani has written about this distinction: vibe coding (prompt, accept, don't review, iterate on errors) versus what he and others call "agentic engineering," where you orchestrate AI agents but stay in the loop as architect and reviewer. Vibe coding has its place for prototypes and learning; the failure mode is when it demos great and then reality arrives and nobody understands what the code is doing. As one engineer put it in that piece: "This isn't engineering, it's hoping." The professional version starts with a plan, then you direct and review. That is the end of the spectrum I am aiming for.
The "hoping" was what I was doing. The task seemed straightforward and from a junior BE dev's perspective this was a prime opportunity to vibe code it. It's an empowerment tool, like having a senior developer on your shoulder fixing problems with you, right? Wrong.
The code was messy and inconsistent, and I was not confident in the changes I was making. Eventually I got something that did what we wanted: it worked. But the changes lived in repos that are key to our onboarding flows. The disruption they could have caused was significant, and I would not have been confident explaining them to someone else.
I had to stop and ask myself: do you fully understand this?
The honest answer was no.
As humans, do we need to understand?
Some say we no longer need to fully understand our systems now that AI can do the hard work. That holds up to a point. But for any business with complex architecture, and especially in fintech, I think it is downright dangerous. Systems that handle money, compliance, and customer onboarding are not the place to ship changes you cannot explain or reason about. AI has no context of business logic and doesn't know the motivations of customers and product owners, or why we build certain features and processes. AI is not a replacement for human understanding. I see it as a tool to help you understand and build better.
Question everything, simplify ruthlessly
We are often led to think that the AI's first approach is the best one. It often is not. AI leans towards over-engineering. Whether that is to make us use more tokens or simply because complex solutions look more "complete," I don't know. But stepping back and asking "can we do this more simply?" has served me well. Tearing things down and rebuilding with less is underrated, especially when you work on the front end and the answer might be "just use CSS." As a front-end developer, that is a technical habit worth keeping: prefer the simplest solution that meets the requirement.
So how do you use it well?
Start with a plan, and understand the architecture. Then break down the task, and use something like Cursor plan mode properly. Rushing to prompt and iterate without that foundation is what got me into the mess. Slowing down to understand first would have saved days and a lot of stress.
Plan
- Use planning mode, especially for larger changes or new features
- In Cursor, planning mode (rather than Agent mode) gets the AI to create a planning document before implementation
- Review the plan and make any desired changes before triggering implementation. If there is something you don't understand, ask the AI to explain it
- Plans are saved as files on disk by default
- Optionally use the Notion MCP server to allow it to reference documents for additional context (a config sketch follows this list)
- Optionally ask it to use TDD where appropriate
- Optionally install the official Linear MCP server to create projects, tasks and sub-tasks that match the plan for larger projects. This can also be used at the start of the planning stage so the AI can read the project and tickets for additional context (if you use Linear)
- I wrote about my first week with Figma MCP servers if you want a practical example of how MCP tooling fits into a workflow
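To make the MCP bullets above concrete, here is a minimal sketch of what registering those servers in Cursor can look like, assuming the project-level `.cursor/mcp.json` format with an `mcpServers` object. The Notion package name, the Linear URL, and the environment variable are from memory and should be treated as placeholders; check the official Notion and Linear MCP docs for the exact commands, endpoints, and auth setup.

```json
{
  "mcpServers": {
    "notion": {
      "command": "npx",
      "args": ["-y", "@notionhq/notion-mcp-server"],
      "env": { "NOTION_TOKEN": "<your Notion integration token>" }
    },
    "linear": {
      "url": "https://mcp.linear.app/sse"
    }
  }
}
```

Once the servers show up in Cursor's MCP settings, you can ask the plan-mode agent to pull the relevant Notion docs or Linear project before it writes the plan, which is exactly the extra context those bullets are after.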
Learn
- Ask AI to explain its own work. It can create and update documents, including diagrams, to do this
- Learning from AI's work applies to everyone, from the most junior to the most senior devs. It is especially important for less experienced devs, though, so they can grow their knowledge whilst still being productive with AI
Multi-repo
- Use multiple repos in the workspace
- When working on how services interact, add the key services and clients on both sides of that dependency to the same workspace (see the workspace file sketch after this list)
- Especially useful in combination with logs from those same repos, for debugging live issues
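A multi-root workspace file is one way to set this up. Cursor is a VS Code fork, so the sketch below assumes it honours VS Code's `.code-workspace` format; the repo names are hypothetical stand-ins for your own services. Save it as something like `onboarding.code-workspace` next to the checked-out repos.

```json
{
  "folders": [
    { "path": "onboarding-web" },
    { "path": "onboarding-api" },
    { "path": "identity-service" }
  ]
}
```

Open that file in Cursor and the agent can read, search, and cross-reference all of those repos in one session, which is what makes the log-driven debugging in the last bullet practical.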
Get on the train
When I went back and did it properly, planned the work, broke it down, questioned the output, and made sure I understood every change, I got the tasks over the line. The code was cleaner, I could explain it, and I learned more about backend systems in a few weeks than I had in months. That is the bit people skip over when they talk about AI: used well, it is genuinely one of the best learning tools I have ever had.
I am not one of those "it cannot replace me" people. You are more likely to be replaced by someone using AI, not by AI itself. So I am on the train (for now). That means using it well: understand the problem first, break work down, question the first suggestion, and know where it shines and where you have to lead. Get on the train, but steer it.
If you want another take on where AI falls short, I also wrote about AI image generation and when hype meets reality.