
We’ve all been there. You hand your shiny new CLI AI assistant, like Claude Code or a Cursor-like tool, a simple task: “Refactor this function” or “Add a test for this component”. It works like magic. You feel like you have a super-powered pair programmer at your fingertips.
But then you try to level up. You give it an epic level task: “Implement full internationalisation (i18n) across the entire application, replacing our currently hardcoded copy”.
Suddenly, your super-powered assistant looks a lot less super. It might read a few files, make a half-hearted attempt, and then confidently present a solution that completely misses the mark. It tries to tackle a multi-file, foundational architectural shift in one go.
Why does this happen? It’s not because the AI is “dumb”. It’s because we’re asking it to be a product manager, an architect, and a developer all at once, without providing it with the right tools for each role.
CLI-based agents have an inherent and sensible bias: avoid reading too many files to conserve their precious context window. This makes them fantastic for small, focused stories, but terrible at epic planning.
To solve this, we need to stop treating our AI as a magical black box and start integrating it into a process we already know works: Agile Product Delivery.
The problem: the AI’s sprint planning failure
In an agile world, a developer doesn’t just grab an epic and start coding. There’s a process: backlog refinement, sprint planning, and breaking the epic down into smaller, manageable stories.
Your CLI AI agent fails at big tasks because it tries to skip right to the coding. It’s like a junior dev grabbing the “implement i18n” epic without a plan. These tasks are challenging for it because they require:
- Broad codebase context: Understanding how dozens of components interact.
- Multi-file discovery: Reading and cross-referencing information across many files and directories to see patterns.
- Strategic planning: Devising a plan before a single line of code is written.
- Large-scale changes: Modifying numerous files in a coordinated way.
The agent’s bias against loading a massive context means it never gets the big picture needed for the planning phase, that is, the vision for how to move from zero locales to a modular, efficient system. This is where we, as developers, need to step in and facilitate a proper planning session.
The solution
Part 1: Epic planning session with an AI architect
For the high-level planning phase, we need a different tool for the job. We need an AI with a massive context window that can act as our Solutions Architect. Our goal here is not to write code, but to create a detailed implementation plan for the new locale structure.
Our planning team:
- You: The Product Owner / Lead Developer.
- An AI architect: A model with a huge context window, like Gemini 1.5 Pro (with its 1M token window).
- A context engineering tool: A utility to easily bundle our codebase for the AI. Great options include RepoPrompt or free alternatives like prompt.16x.engineer, pastemax, repomax or open-repoprompt.
The workflow:
- Gather the project docs (build the context): Use a tool like PasteMax to select all the relevant folders for your task. For our i18n implementation, this means selecting the directories containing any hardcoded UI text. Crucially, deselect irrelevant folders and ensure you include the file map. This tool copies the entire relevant structure and content to your clipboard.
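Conceptually, these context tools do something quite simple: walk the selected directories, emit a file map, then append each file’s contents. Here is a minimal sketch of that idea; the directory names and file extensions are illustrative, not taken from any specific tool.

```typescript
// Minimal sketch of a context-bundling step: walk selected directories,
// emit a file map, then each file's contents. Extensions are illustrative.
import * as fs from "node:fs";
import * as path from "node:path";

function walk(dir: string, out: string[] = []): string[] {
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const full = path.join(dir, entry.name);
    if (entry.isDirectory()) walk(full, out);
    else out.push(full);
  }
  return out;
}

function bundleContext(dirs: string[], exts = [".ts", ".tsx", ".js", ".jsx"]): string {
  const files = dirs.flatMap((d) => walk(d)).filter((f) => exts.includes(path.extname(f)));
  // File map first, so the model sees the project structure before the content.
  const fileMap = files.map((f) => `- ${f}`).join("\n");
  const contents = files
    .map((f) => `=== ${f} ===\n${fs.readFileSync(f, "utf8")}`)
    .join("\n\n");
  return `FILE MAP:\n${fileMap}\n\n${contents}`;
}
```

Tools like PasteMax add niceties on top (token counts, per-file toggles, clipboard integration), but the output shape is essentially this: structure first, content second.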
- Hold the planning meeting (write the prompt): Now, head over to a powerful UI like Google AI Studio. Paste the context and write a prompt that is explicitly focused on planning, not implementing.
Here’s the exact prompt I used for the i18n planning:
> Read the files provided carefully, create a plan for how to structure locales in multiple files inside `/messages/en/…` `/messages/de/…`. We should store as little as possible (e.g.: “Update Record” should be “Update” and “Record” so that we can also do “Update Tool” without duplication). Give me the structure with some examples. Don’t give me the final locales yet.
Notice the key agile elements in this prompt:
- Goal: Planning the modular structure for the new locale files.
- Acceptance criteria: Componentisation to reduce duplication (“store as little as possible”).
- Definition of done: “Give me the structure with some examples”.
- Explicitly out of scope: “Don’t give me the final locales yet”. This is the most important part. We are defining the system architecture, not executing the migration.
- Review the plan: Gemini 1.5 Pro will ingest the entire context and, acting as an architect, provide a detailed plan, file structures, and examples of how common phrases can be broken down into reusable tokens. You can now review this plan, ask for revisions, and collaborate with the AI until you have a solid blueprint you’re happy with.
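To make the “store as little as possible” criterion concrete, here is a hypothetical sketch of the kind of token structure an architect plan might propose. The keys and values are illustrative; your actual plan will define its own.

```typescript
// Hypothetical modular locale structure: actions and entities live in separate
// token groups, so "Update Record" and "Update Tool" share one "Update" token.
const en = {
  actions: { update: "Update", create: "Create", delete: "Delete" },
  entities: { record: "Record", tool: "Tool" },
};

// Compose UI phrases from tokens instead of duplicating full strings.
function label(action: keyof typeof en.actions, entity: keyof typeof en.entities): string {
  return `${en.actions[action]} ${en.entities[entity]}`;
}

label("update", "record"); // "Update Record"
label("update", "tool");   // "Update Tool" — no duplicated copy
```

One caveat worth raising in your review of the plan: naive concatenation assumes every locale uses the same word order, which does not hold universally. A good plan will specify a fallback to full-phrase keys (or an ICU MessageFormat-style approach) for locales where composition breaks.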
Part 2: Sprint execution with an AI pair programmer
You now have a beautiful, AI-generated plan. The epic has been broken down into a series of well-defined stories. Now, it’s time to bring back your trusty CLI AI agent. Its biggest weakness, an inability to see the big picture, is no longer a problem, because you are providing the architectural roadmap.
The workflow:
- Pick a story: Look at the plan from your AI Architect. A story might be: “Create the new locale directory structure /messages/en/ and /messages/de/ and set up the base files based on the common actions structure” or “In the UserManagementPage.js file, replace all hardcoded strings with calls to the new localisation utility, using keys defined in the architect’s plan”.
- Execute with your CLI Agent: Now you can give your CLI agent a small, focused, and context-rich prompt.
- Iterate and review: Your CLI assistant will execute this small task perfectly. It has all the context it needs (the plan) and a clear, unambiguous goal (the file change). You review the change, commit it, and move on to the next story from your plan.
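For the `UserManagementPage.js` story, the change the CLI agent makes might look something like this. This is a minimal sketch of a localisation utility; a real project would typically use a library such as i18next or next-intl, and the keys below are hypothetical stand-ins for whatever the architect’s plan defines.

```typescript
// Minimal sketch of a localisation utility (keys are hypothetical examples).
const messages: Record<string, string> = {
  "actions.update": "Update",
  "entities.record": "Record",
};

function t(key: string): string {
  return messages[key] ?? key; // fall back to the raw key if a translation is missing
}

// Before the story: <button>Update Record</button>  (hardcoded copy)
// After the story:
const buttonLabel = `${t("actions.update")} ${t("entities.record")}`; // "Update Record"
```

Because the story names the exact file, the utility, and the keys, the agent never needs to rediscover the big picture: it just performs the substitution.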
The key is to work with your AI tools the way a well-oiled agile team works: the right tool for the right job.
The agile AI workflow
By splitting our process, we map our AI tools to the agile framework:
- Epic refinement & Sprint planning: Use a large-context model (Gemini 1.5 Pro) and a context tool (PasteMax) to analyse the entire scope and create a detailed plan.
- Sprint execution: Use a fast, efficient CLI agent (Claude Code, etc.) to execute the small, well-defined stories from the plan, one by one.
Stop asking your AI pair programmer to also serve as the product manager and lead architect. Instead, give it a clear, well-defined ticket. Use this hybrid, agile-inspired approach, and you’ll unlock a new level of productivity for even the most complex codebase refactors.
The Non-Technical Founders survival guide
How to spot, avoid & recover from 7 start-up scuppering traps.