The agile spike of the future

by Tomás Correia Marques

In any agile team, there’s a familiar challenge: the “monster epic.” It’s that large and complex feature on the horizon that’s hard to define, touches multiple parts of the system and carries a high degree of uncertainty. In the past, we’d tackle this with a “spike”: a time-boxed investigation to explore the problem, reduce technical risk and gain clarity before committing to a full build.

Spikes produce research documents, wireframes and sometimes a proof-of-concept. But what if you could supercharge that process? What if you could go from a collection of user stories and discovery notes to a functional, interactive prototype in a fraction of the time?

Based on a workflow we recently put into practice, an AI-enhanced agile process is emerging, one that shortens the path from uncertainty to clarity while staying true to agile principles. Let’s break down this powerful flow!

The challenge: a complex Web3 feature

Our client’s team was facing a classic monster epic: implementing a “multi-sig” functionality for their Web3 product. This wasn’t a small tweak: it would require changing major UI components and reworking user flows.

This is the ideal scenario for an agile spike. The goal isn’t to ship code, but to learn, align stakeholders and create a shared understanding of what needs to be built.

Step 1: Grounding in reality

Before a single AI prompt was written, the process started with core agile fundamentals. You can’t build a good prototype, with or without AI, from a vague idea. We gathered all the existing context, the “single source of truth” for the feature:

  • User stories: a collection of development tickets with their full descriptions. This is the current understanding of what needs to be built.
  • Discovery documents: the output from a previous discovery phase, containing research, analysis and early conclusions.
  • Existing user flows: to ensure the AI understood the current state of the application, it was fed the user flows of the live product. We recorded the main flows of the product in a video, gave it to Gemini in AI Studio and asked it to output the user flows and requirements. It was surprisingly accurate and thorough; a sketch of how this step can be scripted follows this list.
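We ran this through the AI Studio interface, but the same step can be scripted against the Gemini API. Below is a minimal sketch using Google’s @google/generative-ai SDK for Node; the model name, file path and prompt wording are illustrative assumptions, not the exact ones we used.

```typescript
// Sketch: upload a screen recording to Gemini and ask it to reverse-engineer
// the user flows. Model name, path and prompt are illustrative assumptions.
import { GoogleGenerativeAI } from "@google/generative-ai";
import { GoogleAIFileManager, FileState } from "@google/generative-ai/server";

const apiKey = process.env.GEMINI_API_KEY!;
const fileManager = new GoogleAIFileManager(apiKey);
const genAI = new GoogleGenerativeAI(apiKey);

async function extractUserFlows(videoPath: string): Promise<string> {
  // Upload the walkthrough video, then poll until Gemini finishes processing it.
  const upload = await fileManager.uploadFile(videoPath, { mimeType: "video/mp4" });
  let file = upload.file;
  while (file.state === FileState.PROCESSING) {
    await new Promise((resolve) => setTimeout(resolve, 5_000));
    file = await fileManager.getFile(file.name);
  }
  if (file.state === FileState.FAILED) throw new Error("Video processing failed");

  const model = genAI.getGenerativeModel({ model: "gemini-1.5-pro" });
  const result = await model.generateContent([
    { fileData: { mimeType: file.mimeType, fileUri: file.uri } },
    "This is a screen recording of our live product. List every user flow you " +
      "observe, step by step, and the requirements each flow implies.",
  ]);
  return result.response.text();
}
```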

The principle of “Garbage In, Garbage Out” is critical here; the quality of the AI’s output is directly proportional to the quality of the context it’s given.

Step 2: AI-powered synthesis

With all the context gathered, Gemini was again put to work. The prompt wasn’t “build me an app”; it was a strategic request to act as a product analyst.

“Based on the information below about the platform, give me the complete set of user flows and page outlines for the new version with multisig (including current and new/updated).”

It processed all the tickets, discovery notes and existing flows to generate a coherent plan for the new version of the app. It effectively connected the dots between what the app is and what it needs to become.

This was essentially an AI-assisted requirements analysis. Instead of a product manager spending days manually mapping out every new screen and interaction on a whiteboard, the AI produced a comprehensive first draft in seconds. This establishes a baseline understanding and a clear structure for the prototype.
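For teams that want to reproduce this step outside a chat window, here is a hedged sketch of how the same request might be assembled programmatically. The file names and context headings are hypothetical; only the quoted instruction comes from our actual workflow.

```typescript
// Sketch: concatenate the gathered context into a single prompt and ask
// Gemini to act as a product analyst. File names are hypothetical.
import { readFile } from "node:fs/promises";
import { GoogleGenerativeAI } from "@google/generative-ai";

async function synthesisePlan(): Promise<string> {
  const [stories, discovery, flows] = await Promise.all([
    readFile("context/user-stories.md", "utf8"),
    readFile("context/discovery-notes.md", "utf8"),
    readFile("context/current-user-flows.md", "utf8"), // output of the video step
  ]);

  const model = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!)
    .getGenerativeModel({ model: "gemini-1.5-pro" });

  const result = await model.generateContent(
    "You are a product analyst. Based on the information below about the platform, " +
      "give me the complete set of user flows and page outlines for the new version " +
      "with multisig (including current and new/updated).\n\n" +
      `## User stories\n${stories}\n\n## Discovery notes\n${discovery}\n\n` +
      `## Current user flows\n${flows}`
  );
  return result.response.text();
}
```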

Step 3: The first iteration

The newly generated flows and page outlines were then taken to an AI coding tool, in this case bolt.new. We fed this structured output into the tool, along with a high-level directive:

“Build a prototype for a web-based platform with a clean [UI]… You can simulate the db and web3 interactions, but make sure you include all flows and pages in the prototype.”

The result was a pretty good base. The tool scaffolded the application, creating the necessary pages, components and even simulated backend interactions.

This is the first loop in the Build-Measure-Learn cycle, executed at lightning speed. It’s the creation of a tangible artifact that can be seen, clicked and evaluated.

Step 4: Refinement and iteration

The AI-generated base wasn’t the final product. Using Claude Code, we refined and expanded it. This is where human expertise and nuance come in. More features were added, interactions were polished and the prototype was brought to life.

Crucially, the prototype wasn’t just a static collection of images. It kept local state that persisted data between interactions, making the experience interactive and more realistic for testing user flows.
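To make that concrete, here is a minimal sketch of what such simulated state can look like: a hypothetical in-memory store that mimics multi-sig approvals without touching a real chain. The types and threshold logic are illustrative, not the client’s actual model.

```typescript
// Hypothetical in-memory stand-in for the db and web3 layer: just enough
// state to click through multi-sig flows, with no real chain involved.
interface PendingTransaction {
  id: string;
  to: string;                // destination address (simulated)
  amountEth: number;
  approvals: Set<string>;    // signer addresses that have approved
  threshold: number;         // approvals required before "execution"
  executed: boolean;
}

const pending = new Map<string, PendingTransaction>();

function propose(id: string, to: string, amountEth: number, threshold: number): PendingTransaction {
  const tx: PendingTransaction = { id, to, amountEth, approvals: new Set(), threshold, executed: false };
  pending.set(id, tx);
  return tx;
}

function approve(id: string, signer: string): PendingTransaction {
  const tx = pending.get(id);
  if (!tx || tx.executed) throw new Error(`No pending transaction ${id}`);
  tx.approvals.add(signer);
  // Simulate on-chain execution once enough signers have approved.
  if (tx.approvals.size >= tx.threshold) tx.executed = true;
  return tx;
}
```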

The AI provided the 90% solution, and the developer provided the critical 10% of refinement, customisation and polish. This human-in-the-loop approach ensures the final output meets the specific needs of the project.

A prototype that drives agile forward

The final result was more than just a proof-of-concept; it was a powerful tool for the team.

  1. Facilitates stakeholder collaboration: the prototype provides a tangible reference point for discussions, replacing abstract descriptions with a shared visual model.
  2. Creates a shared vision: it gives the team and stakeholders a chance to see what the feature could become, building excitement and alignment around a significantly better UI and user experience.
  3. De-risks development: by previewing the new information architecture and user flows, the team can spot potential issues early.

This entire workflow, from a pile of tickets to an interactive, discussion-ready prototype, is the modern AI agile spike. It respects the core principles of gathering requirements and iterative building while using AI to collapse the timeline.

It’s a perfect blend of human-centred agile practices and machine-scale speed.
