Building with AI can be an amazing, almost addictive experience. You type an idea and, less than an hour later, it's live on your screen. At least, that's how it feels at the start.
But once you move past basic prototypes, you realise that while AI can move at light speed, it doesn't always move in the right direction. The tools that helped most weren't the ones that moved fastest. They were the ones that stopped me most often.
The Reality Check: Speed vs Accuracy
AI has real flaws. One of the most dangerous is its tendency to agree with everything. If you make a mistake or ask for the wrong thing, it often won't flag it — it will just execute. Because it works so fast, a single typo in a prompt can trigger a host of unwanted changes before you even notice.
There are also the practical limits. When using Claude, you're constantly aware of session limits and the context window. Current models like Claude Sonnet 4.6 support up to 1 million tokens in higher tiers (200k is more common on standard plans), but long sessions still burn through the rolling 5-hour quota faster during peak hours. If a conversation goes on too long, the AI starts to lose focus and forget earlier decisions. You have to stay firmly in the driver's seat, keeping the big picture in mind while the AI handles the heavy lifting.
Despite these hurdles, AI let me build the entire Atmos Football app with next to no prior programming experience. A year ago, I would have said this was impossible.
How I Actually Use Claude
I almost never say "write the whole feature." Instead, I use Claude as a thinking partner and code reviewer.
Every new piece of work starts in docs/plans/. I (or Claude) draft a detailed plan document covering the problem, proposed solution, Firestore schema, architecture decisions, implementation phases, files that will change, and a smoke-test checklist. Only once the plan feels solid do I start a fresh session and ask: "Here is the plan. Implement Phase 1 exactly as written."
This planning-first workflow is closer to pair-programming than autopilot. I commit after every small step and never let one session touch more than one phase.
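To make this concrete, here is roughly what one of those plan documents looks like. The feature, file paths, and phase contents below are a hypothetical sketch for illustration, not copied from the real Atmos Football repo:

```markdown
# Plan: Head-to-Head Matrix (hypothetical example)

## Problem
Players want to see their win/loss record against every other player in the group.

## Proposed Solution
Compute the matrix client-side from existing match documents; no new writes.

## Firestore Schema
No changes. Reads from groups/{groupId}/matches.

## Architecture Decisions
Pure function over match docs, so it is testable without Firestore.

## Implementation Phases
1. Fold matches into a matrix keyed by player pair.
2. Table component that renders the matrix.
3. Wire into the stats page behind a feature flag.

## Files That Will Change
- src/lib/headToHead.ts (new)
- src/pages/Stats.tsx

## Smoke-Test Checklist
- [ ] Matrix is symmetric (A vs B mirrors B vs A)
- [ ] Players with no shared matches show a dash
```

Each phase of a document like this becomes one fresh session and one commit.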
What AI Is Actually Good At
Once the plan is locked, Claude excels at several things:
- Boilerplate and scaffolding — Firebase security rules, Capacitor plugin wrappers, and Recharts components were generated cleanly on the first try.
- Explaining trade-offs and catching plan mistakes — When I was designing the fitness data storage, Claude immediately flagged GDPR Article 9 implications and reminded me to include groupId for multi-group users.
- Tedious refactors — Adding a new column to the player stats table and updating every call site across eight files was handled reliably.
- Documentation — Every blog post (including this one), the CHANGELOG, and the About Elo page started as a rough draft. The grammatically nightmarish text I first wrote was turned into readable prose while I kept the honest tone.
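For a sense of the boilerplate in the first bullet above, Firestore security rules for a group-scoped app tend to look roughly like this. The collection names and the memberIds/adminIds membership check are hypothetical, not the actual Atmos Football rules:

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Hypothetical layout: match docs nested under a group,
    // with the group doc holding memberIds and adminIds arrays
    match /groups/{groupId}/matches/{matchId} {
      allow read: if request.auth != null
        && request.auth.uid in
           get(/databases/$(database)/documents/groups/$(groupId)).data.memberIds;
      allow write: if request.auth != null
        && request.auth.uid in
           get(/databases/$(database)/documents/groups/$(groupId)).data.adminIds;
    }
  }
}
```

Rules like these are exactly the kind of fiddly, well-documented boilerplate where a first-try generation tends to land clean.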
What AI Is Not Good At (and the Push-Back Factor)
The biggest lesson I learned is that an AI that agrees with everything is a warning sign, not confirmation. The most valuable moments come when Claude pushes back, flags a contradiction, or refuses a shortcut.
AI struggles with:
- Novel design decisions and taste — it doesn't know what a casual 5-a-side football app should feel like.
- Long-term coherence across a large codebase — context window limits mean it eventually forgets decisions from earlier sessions.
- Debugging tricky runtime issues without the exact error log.
- Coming up with truly original ideas or considering every real-world impact.
There is still a massive human element — and not just for the practical tasks AI can't access, like managing cloud accounts or keeping secrets out of version control. The deeper human element is judgement: knowing when the plan is wrong before the code is written, knowing when a technically correct solution misses the point of what you were building, knowing when to stop. AI has no stake in your project. It cannot care whether the app is actually good. That judgement never gets outsourced.
You also have to choose the right model for the job: Claude Sonnet 4.6 for logic-heavy tasks, smaller and faster models for simple text edits. And always watch token usage.
The Planning Workflow
The single biggest improvement AI forced on me was writing plans before touching code.
Why? Without a written plan, every new session starts with a blank context window and the AI has forgotten everything. The plan document forces clear thinking about edge cases, privacy, and future maintenance.
Every feature in Atmos Football — including the fitness sync, the head-to-head matrix, and the seasonal points graph — has its own Markdown plan in docs/plans/. I review it with Claude, refine until it's solid, then implement one phase at a time and commit. The plan is also the constraint that stops Claude from autocompleting its way past what I actually need. A session that starts with "here is the plan, implement Phase 1 exactly as written" produces something specific. A session that starts with "here is the problem, what should I do" produces something generic.
It feels slower than "just let the AI write everything," but the code actually works first time and I can still understand it months later.
The Surprising Part
The biggest surprise wasn't the code — it was how much AI improved the planning and documentation.
It reduced cognitive load dramatically. I no longer have to hold the entire system in my head. That sounds like a limitation being papered over. It isn't. Holding everything in your head produces decisions made under the pressure of what you remember most recently rather than what matters most. The plan document forces the decision to be made when you have time to think about it, not when you are mid-session and the AI is waiting. Features that would previously have taken two full weekends now take an evening. Without this workflow, the fitness sync feature, attack/defence profiles, and live team generator would still be on the "maybe one day" list.
Should You Use AI for Your Side Project?
Yes — but only if you already know how to build things and you're willing to stay in charge.
If you use AI as a shortcut to avoid thinking, you'll end up with a mess. But if you use it as a disciplined system — for planning, reviewing, and documentation — it becomes a genuine force multiplier for a solo developer.
The AI that agrees with everything produces worse software. The AI that pushes back and forces you to clarify your thinking is the one that helps you ship something you're proud of.
— James