How parallel AI development built three tools at once

by Pedro Janeiro

Earlier this month, we ran a simple experiment: could three Claude Code tabs, running in parallel, each produce a working (and useful) tool in a few hours? Not to spoil the ending, but the results were better than expected: three working, well-architected tools, each built in roughly 4 hours, hinting at what the future of hackathons might look like.

The experiment

The setup was relatively straightforward: we opened three separate Claude Code tabs, each dedicated to a different project. Each tab maintained its own context, constraints, and development trajectory, as if we had three junior development teams working simultaneously under our guidance.

For architectural decisions, we used Zen MCP to have critical choices reviewed by GPT, Grok, and Gemini, requiring consensus before locking anything in. This multi-model validation prevented common pitfalls of AI-generated code while maintaining velocity.

The sprint loop

Each tool followed an identical development pattern:

  1. Define the expected inputs and outputs
  2. Draft interfaces (see the sketch below)
  3. Generate scaffolding, tests, and build configuration
  4. Build core adapters with edge case handling
  5. Integrate AI enhancements for user-facing features

The AI handled boilerplate and refactoring at incredible speed, while we focused on architecture, integration, and validation.
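
As a concrete illustration of step 2, interfaces for a tool like Diagramer might be drafted before any implementation exists, giving the AI a stable contract to generate scaffolding and tests against. A minimal Go sketch; the names and shapes are hypothetical, not the actual codebase:

```go
package diagramer

// Resource is one node in the intermediate representation.
type Resource struct {
	ID   string            // e.g. "aws_s3_bucket.logs"
	Type string            // provider resource type
	Attr map[string]string // flattened attributes
}

// Parser turns raw IaC source (HCL, YAML, ...) into resources.
type Parser interface {
	Parse(src []byte) ([]Resource, error)
}

// Renderer turns a set of resources into diagram text.
type Renderer interface {
	Render(resources []Resource) (string, error)
}
```

Locking contracts like these down early gives steps 3 and 4 a fixed target to build against.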

Tool 1: Diagramer - Infrastructure visualisation with security context

Goal: Transform Terraform and Kubernetes manifests into architecture diagrams with embedded security analysis.

Diagramer parses infrastructure-as-code files through a sophisticated pipeline. Terraform HCL and Kubernetes YAML are normalised into a Resource Graph Intermediate Representation, with nodes representing resources and edges capturing dependencies. The security engine runs several policy checks, flagging issues such as public S3 buckets and privileged pods.
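
To make that concrete, here is a minimal sketch of what one policy check over the Resource Graph IR might look like; the Node and Finding shapes and the "acl" attribute key are assumptions for illustration, not Diagramer's actual schema:

```go
package policy

// Node is one resource in the graph IR (illustrative shape).
type Node struct {
	ID   string
	Type string
	Attr map[string]string
}

// Finding is a flagged security issue attached to a node.
type Finding struct {
	NodeID   string
	Severity string
	Message  string
}

// CheckPublicBuckets flags S3 buckets whose ACL grants public access.
func CheckPublicBuckets(nodes []Node) []Finding {
	var findings []Finding
	for _, n := range nodes {
		if n.Type != "aws_s3_bucket" {
			continue
		}
		if acl := n.Attr["acl"]; acl == "public-read" || acl == "public-read-write" {
			findings = append(findings, Finding{
				NodeID:   n.ID,
				Severity: "HIGH",
				Message:  "S3 bucket allows public access via ACL " + acl,
			})
		}
	}
	return findings
}
```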

The output combines visual diagrams (Mermaid/Graphviz) with comprehensive security reports, including compliance checks for CIS, SOC2, HIPAA, and PCI-DSS. The entire system (6,000+ lines of TypeScript) was operational within 4 hours.
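
The Mermaid half of that output is easy to picture: a renderer walks the graph's edges and emits one flowchart line per dependency. A minimal sketch, with an assumed Edge type:

```go
package render

import (
	"fmt"
	"strings"
)

// Edge is one dependency between two resources (illustrative shape).
type Edge struct{ From, To string }

// id sanitises resource addresses into valid Mermaid node IDs.
func id(s string) string { return strings.ReplaceAll(s, ".", "_") }

// ToMermaid emits a top-down flowchart, one line per dependency edge.
func ToMermaid(edges []Edge) string {
	var b strings.Builder
	b.WriteString("graph TD\n")
	for _, e := range edges {
		fmt.Fprintf(&b, "  %s --> %s\n", id(e.From), id(e.To))
	}
	return b.String()
}
```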

Architecture highlights:

  • Plugin system for parsers, renderers, and policy packs (registry sketch below)
  • Go core for parsing performance, TypeScript for visualisation
  • A 558-line security analyser with multi-framework compliance
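
The plugin system implies some form of registry keyed by format name. The exact mechanism in Diagramer is assumed here, but a minimal version might look like this:

```go
package plugin

import "fmt"

// Parser is the contract each input-format plugin implements;
// renderers and policy packs would get analogous registries.
type Parser interface {
	Parse(src []byte) (graph any, err error)
}

var parsers = map[string]func() Parser{}

// RegisterParser associates a constructor with a format name,
// e.g. "terraform" or "kubernetes".
func RegisterParser(name string, ctor func() Parser) {
	parsers[name] = ctor
}

// NewParser looks up a registered parser by format name.
func NewParser(name string) (Parser, error) {
	ctor, ok := parsers[name]
	if !ok {
		return nil, fmt.Errorf("no parser registered for %q", name)
	}
	return ctor(), nil
}
```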

Tool 2: Lowcoster - Pre-deployment cloud cost optimiser

Goal: Catch expensive infrastructure choices before deployment with AI-powered recommendations.

Lowcoster analyses Terraform plans to identify optimisation opportunities. It normalises resources into a provider-agnostic model, fetches real-time pricing from AWS/GCP/Azure APIs, and applies both rule-based and AI-enhanced analysis.

The rules engine suggests concrete changes - gp3 over gp2 EBS volumes, Graviton instances for compatible workloads, and rightsizing recommendations. The AI layer (currently supporting OpenAI and Anthropic) adds context-aware insights that go beyond simple rules, providing estimated savings with confidence scores.
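
As an illustration, a single rule in this style might look like the sketch below. The Resource and Suggestion shapes are assumptions, and the ~20% saving reflects gp3's lower per-GB price versus gp2 on AWS:

```go
package rules

import "fmt"

// Resource is a normalised, provider-agnostic resource (assumed shape).
type Resource struct {
	Type string            // e.g. "aws_ebs_volume"
	Attr map[string]string // normalised attributes
}

// Suggestion is one optimisation with a confidence score.
type Suggestion struct {
	Change     string
	Confidence float64 // 0..1; rule-based checks score high
}

// GP3OverGP2 flags gp2 volumes: gp3 offers the same baseline
// performance at roughly 20% lower cost per GB.
func GP3OverGP2(r Resource) (Suggestion, bool) {
	if r.Type == "aws_ebs_volume" && r.Attr["type"] == "gp2" {
		return Suggestion{
			Change:     fmt.Sprintf("switch volume %s from gp2 to gp3", r.Attr["id"]),
			Confidence: 0.95,
		}, true
	}
	return Suggestion{}, false
}
```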

Architecture highlights:

  • Go backend (3,860 lines) with a sophisticated pricing cache (sketched below)
  • Support for multiple AI providers with structured JSON responses
  • Sandboxed Terraform execution for security
  • Ready for pre-commit hooks and CI integration
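
The pricing cache in the first highlight is worth sketching: real-time pricing APIs are slow and rate-limited, so memoising per-SKU prices with a TTL keeps repeated runs fast. A minimal version, with an assumed key scheme:

```go
package pricing

import (
	"sync"
	"time"
)

type entry struct {
	price   float64
	fetched time.Time
}

// Cache memoises per-SKU prices with a time-to-live.
type Cache struct {
	mu  sync.Mutex
	ttl time.Duration
	m   map[string]entry
}

func NewCache(ttl time.Duration) *Cache {
	return &Cache{ttl: ttl, m: map[string]entry{}}
}

// Get returns the cached price for a key like "aws/us-east-1/gp3",
// calling fetch only when the entry is missing or stale.
func (c *Cache) Get(key string, fetch func() (float64, error)) (float64, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if e, ok := c.m[key]; ok && time.Since(e.fetched) < c.ttl {
		return e.price, nil
	}
	p, err := fetch()
	if err != nil {
		return 0, err
	}
	c.m[key] = entry{price: p, fetched: time.Now()}
	return p, nil
}
```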

Tool 3: Pulumizer - Multi-language infrastructure translation

Goal: Convert Terraform to Pulumi with idiomatic code generation in TypeScript, Python, Go, or C#.

Pulumizer tackles the complex task of cross-platform IaC translation. It parses Terraform HCL into an Abstract Syntax Tree, builds a semantic graph, applies provider-specific mappings, and generates clean, idiomatic code in the target language.

The system handles advanced Terraform constructs - dynamic blocks, count/for_each, complex expressions - while maintaining semantic equivalence. Where automatic translation isn’t possible, it generates TODOs with documentation links for manual review.
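
A minimal sketch of that fallback behaviour, translating what it can and emitting a reviewable TODO where no mapping exists; the Expr shape and the documentation URL are illustrative assumptions, not Pulumizer's actual types:

```go
package translate

import "fmt"

// Expr is a parsed Terraform expression (illustrative shape).
type Expr struct {
	Kind string // "literal", "ref", "function", ...
	Raw  string // original HCL source text
}

// ToTypeScript converts an expression to TypeScript source, falling
// back to a reviewable TODO when no mapping is known.
func ToTypeScript(e Expr) string {
	switch e.Kind {
	case "literal":
		return e.Raw
	case "ref":
		// Simplified: real reference rewriting consults the semantic graph.
		return e.Raw
	default:
		return fmt.Sprintf(
			"/* TODO: manual translation needed for %q; see https://www.pulumi.com/docs/ */",
			e.Raw)
	}
}
```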

Architecture highlights:

  • Three-phase architecture: parse → intermediate representation → code generation
  • Comprehensive resource mappings for AWS, Azure, GCP
  • Expression translation engine with 8+ expression types
  • AI-assisted mapping validation and edge case detection

Key takeaways: from technical patterns to a new workflow

The actual outcome of this experiment wasn’t just three tools, but a set of repeatable patterns that point toward a significant shift in the development process. Architecturally, a clear DNA emerged across all projects: extensible plugin systems, a polyglot core using Go for performance and TypeScript for ergonomics, and AI enhancement layers that augmented, rather than replaced, core deterministic logic. The AIs even generated comprehensive test suites, proving that rapid prototyping doesn’t have to mean sacrificing a quality foundation.

These technical patterns enable a change in the developer’s role from line-by-line coder to a conductor of AI agents. The primary skills shift toward high-level architecture, context management, and orchestrating parallel streams of work. Quality, too, finds a new mechanism through consensus; validating key decisions across multiple models like Claude, GPT, and Gemini proved effective at uncovering edge cases a single developer or AI might miss.

But this speed comes with a crucial caveat. Let’s be clear: this process generates MVPs, not production-ready code. There was technical debt: Pulumizer’s translations required manual fixes for some values, the AIs missed edge cases, and the code still needs refactoring and security audits. The difference, however, is that this debt is manageable. In 12 “parallel” hours, we built three working prototypes with a solid, testable architecture that would otherwise have taken weeks. The foundation is strong enough to build upon.

This points to the core lesson of the experiment: AI as an amplification factor. It doesn’t replace developers; it allows one person to achieve the output of a small team. We still design the architecture, validate the outputs, and course-correct when the AI goes off-track. The work itself, however, has shifted away from grinding through boilerplate and toward designing systems and focusing on user experience.

This changes how we can approach new projects:

  • Validate prototypes in hours, not days, allowing for early pivots when an approach is flawed;
  • Make prompt engineering a core part of the dev loop, with decision logs becoming as valuable as commit messages;
  • Create reusable AI building blocks, like plugin packs and policy rules, that accelerate future projects.

Ultimately, the real shift is about building better foundations from the start. The question for your next project is no longer whether to try this workflow; it’s what you’ll build first.

