Adoption Patterns
Waves adapts to where your project IS, not where the framework thinks it should be. Based on analysis of 5 production projects, three distinct adoption patterns emerge.
Three Ways Teams Adopt Waves
How to Choose
Is this a new product idea?
├─ YES → Do you need market validation?
│   ├─ YES → Pattern 1: Full Pipeline
│   └─ NO  → Pattern 2: Library / Component
└─ NO  → Does the project already exist?
    ├─ YES → Pattern 3: Mid-Flight Adoption
    └─ NO  → Pattern 2: Library / Component
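The decision tree above can be sketched as a small helper. The function and parameter names here are illustrative only, not part of Waves:

```python
def choose_pattern(new_product: bool,
                   needs_validation: bool = False,
                   project_exists: bool = False) -> str:
    """Map the adoption questions to a Waves pattern.

    Names are hypothetical; this just encodes the decision tree.
    """
    if new_product:
        # New product idea: validation need decides the pattern.
        if needs_validation:
            return "Pattern 1: Full Pipeline"
        return "Pattern 2: Library / Component"
    if project_exists:
        # Existing product with users and a backlog.
        return "Pattern 3: Mid-Flight Adoption"
    return "Pattern 2: Library / Component"
```

For example, an existing app with a JIRA board (`new_product=False`, `project_exists=True`) lands on Mid-Flight Adoption.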
You don't need to use all of Waves to benefit from it. Start where you are — the framework adapts to your project, not the other way around.
New Product from Scratch
Real example: Exobase — a SaaS construction marketplace built from zero.
What makes this pattern distinct
- Heavy upfront validation. Feasibility alone produced 43KB of market data. The go/no-go decision is backed by Monte Carlo simulations, not gut feeling.
- Foundation as bridge. Compacts thousands of simulations into an executive summary with reclassified capabilities and financial benchmarks.
- Multi-wave structure. Three waves planned before writing a line of code: W0 (design), W1 (backend), W2 (frontend/launch).
- Design rules, not code rules. In W0, project_rules focus on visual standards (color tokens, typography, spacing) because the first wave is design.
- Lower initial velocity is normal. Only 4.8% of W0 objectives completed. The upfront investment in feasibility and blueprint pays off when every implementation decision traces to validated business data.
When to use: You're building something new and don't know if it will work. You need data to convince investors or yourself. The product has multiple phases that depend on each other. You want every line of code to trace back to a business decision.
Libraries and Components
Real examples: llm_core (LLM abstraction, 116 objectives) and conversational_engine_ba (conversational AI backend, 181 objectives).
What makes this pattern distinct
- No feasibility unless uncertain. llm_core skipped it entirely. conversational_engine_ba ran it (54KB) because the AI orchestration approach had high architectural uncertainty.
- Single-wave focus. One wave, start to finish. No W0 design phase — these are backend libraries.
- High objective density. llm_core: 115/116 (99.1%) across 11 phases. conversational_engine_ba: 169/181 (93.4%) across 20 phases. Roughly 10 objectives per phase, with each phase completed in days.
- Non-code phases are natural. Phase 20 of conversational_engine_ba was pure analysis/documentation — no code changes, only gap analysis and architecture docs.
- Technical guide as living document. An 83KB technical guide evolved alongside the code, emerging from logbook context rather than upfront design.
When to use: You're building a package, library, or engine. The problem domain is understood. You want tight iteration cycles with clear phase boundaries. The component needs architectural documentation that evolves with the code.
Mid-Flight Adoption
Real examples: Enterprise Flutter App (16 JIRA tickets tracked) and Enterprise Android App (13 tickets with multi-stage decomposition).
What makes this pattern distinct
- Skip everything except what's needed NOW. The product exists, has users, and has a JIRA board. Waves adds a context preservation layer on top — it doesn't replace your existing tools.
- Logbooks map 1:1 to JIRA tickets. Each logbook gives the AI agent persistent context about the ticket across sessions. The logbook doesn't replace JIRA — it supplements it.
- Manifest is retrospective. Created AFTER months of development. An "as-is" architecture snapshot that gives new AI sessions full understanding of the codebase.
- Rules emerge from code. Extracted by analyzing existing patterns — "this is how we do it here." Rules were updated multiple times as patterns stabilized.
- Multi-stage logbooks for complex features. Large features get decomposed into sequential logbooks, each building on the previous one.
- Session continuity is the primary value. The AI agent picks up EXACTLY where the last session left off. No more "what was I working on?" The logbook replaces scattered Git history, commit messages, PR comments, and the developer's memory.
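To make the 1:1 ticket mapping and multi-stage decomposition concrete, here is a sketch of what one stage's logbook could contain. All field names and values are hypothetical — Waves' actual logbook schema may differ:

```python
import json

# Hypothetical logbook for one JIRA ticket (illustrative fields only).
logbook = {
    "ticket": "APP-1042",
    "stage": 2,                       # multi-stage decomposition: stage 2
    "builds_on": "APP-1042-stage-1",  # previous logbook in the sequence
    "recent_context": {
        "objectives": ["Wire the payment form to the backend"],
        "decisions": [
            {"what": "Reuse the existing retry helper",
             "why": "matches established codebase patterns"}
        ],
        "progress": "Form validation done; API call pending",
        "learnings": ["PaymentService expects amounts in cents"],
    },
}

# A new session reads this instead of reconstructing context from
# Git history, commit messages, and PR comments.
print(json.dumps(logbook, indent=2))
```

The `builds_on` field is what chains sequential logbooks for a large feature, each stage picking up where the previous one stopped.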
When to use: Your product already exists and has an established codebase. Your team uses JIRA, Linear, or another issue tracker. You want AI agent context preservation, not product redesign. You need to onboard AI agents to a codebase they've never seen.
Real Data from Production
Common Elements Across All Patterns
CLAUDE.md
The session bootstrap. Every project has it. It's the first file the agent reads.
Logbooks with recent_context
Objectives, decisions, progress, and learnings preserved across sessions.
Decision tracking
Resolved decisions documented with rationale. Institutional knowledge that survives team changes.
Schema-validated artifacts
JSON files that any AI agent can read, on any platform. No vendor lock-in.
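Because the artifacts are plain schema-validated JSON, any agent can parse and sanity-check them with nothing but a JSON library. A minimal validation sketch, assuming hypothetical field names (Waves' real schemas may differ):

```python
import json

# Illustrative required fields for a logbook artifact; not Waves'
# actual schema.
REQUIRED_FIELDS = {"ticket": str, "recent_context": dict}

def validate_artifact(raw: str) -> dict:
    """Parse a JSON artifact and check required fields and types."""
    data = json.loads(raw)
    for field, ftype in REQUIRED_FIELDS.items():
        if not isinstance(data.get(field), ftype):
            raise ValueError(f"missing or mistyped field: {field}")
    return data

artifact = validate_artifact('{"ticket": "APP-7", "recent_context": {}}')
```

In practice a full JSON Schema validator would replace the hand-rolled check, but the point stands: the artifact format is plain data, readable on any platform, with no vendor lock-in.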
Data sourced from 5 production projects: llm_core (116 objectives), conversational_engine_ba (181 objectives), Exobase (42 objectives), and 2 enterprise apps (29 tickets).