---
name: product-ooda
description: |
  Complete OODA-loop framework for agentic product development. Enables rapid iteration cycles from observation through execution with continuous learning capture. Use when: (1) Starting a new product/feature from idea to ship, (2) Running discovery or research phases, (3) Planning sprints or releases, (4) Executing development with quality gates, (5) Capturing learnings from shipped work, (6) Querying past decisions before making new ones. Trigger phrases: "product cycle", "OODA", "ship and learn", "what did we learn", "plan this feature", "run discovery", "capture retro", "advise me on".
---

# Product OODA: Agentic Product Development Framework

A self-evolving system for rapid product iteration built on the OODA loop (Observe → Orient → Decide → Act) with continuous learning capture.

## Philosophy

**Cycle speed wins.** The team that completes more learning cycles ships better products. This skill optimizes for:

1. **Tight feedback loops** - Every action generates observable signal
2. **Accumulated wisdom** - Learnings compound across cycles  
3. **Appropriate automation** - Machines do mechanical work, humans make commitments
4. **Evidence over opinion** - Decisions grounded in observation

## Quick Start

### Full Cycle (Guided)
```
/ooda:cycle
```
Runs complete OODA loop with human checkpoints at decision points.

### Modular Entry Points
```
/ooda:observe   - Ingest feedback, metrics, signals
/ooda:orient    - Synthesize context, score opportunities  
/ooda:decide    - Create execution plan with commitments
/ooda:act       - Execute with subagents and quality gates
/ooda:retro     - Capture cycle learnings
/ooda:advise    - Query past learnings before any decision
```

## State Management

All state lives in markdown files within your project:

```
.ooda/
├── cycles/           # Active and completed cycles
│   ├── active.md     # Current cycle state
│   └── archive/      # Completed cycles
├── learnings/        # Searchable knowledge base
│   ├── index.md      # Learning categories and links
│   ├── technical/    # Technical learnings
│   ├── product/      # Product/user learnings
│   └── process/      # Process learnings
├── plans/            # Execution plans
└── signals/          # Raw observations awaiting processing
```

Initialize with: `mkdir -p .ooda/{cycles/archive,learnings/{technical,product,process},plans,signals}`

---

## The OODA Cycle

### Phase 1: OBSERVE 🔭
**Automation: HIGH** - Mechanical ingestion, minimal human input needed.

**Purpose:** Gather raw signal from all available sources.

**Inputs:**
- User feedback (support tickets, interviews, NPS comments)
- Usage metrics (if available in markdown/CSV format)
- Competitor signals (news, product changes)
- Technical signals (errors, performance, debt markers)
- Team signals (retro notes, blocked items)

**Process:**
1. Scan for new signals since last cycle
2. Categorize by source and urgency
3. Surface anomalies and patterns
4. Write raw observations to `.ooda/signals/`

**Output:** `signals-YYYY-MM-DD.md` with timestamped observations

**Command:** `/ooda:observe` or conversationally: "What signals do we have?"

See [references/observe-phase.md](references/observe-phase.md) for detailed methodology.
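A minimal sketch of the write-out step, assuming signals arrive as `(source, urgency, text)` tuples. The directory and `signals-YYYY-MM-DD.md` filename follow the state layout above; the line format and urgency ordering are illustrative, not the skill's canonical template:

```python
from datetime import date, datetime
from pathlib import Path

def write_signals(signals, root=".ooda/signals"):
    """Write categorized observations to a dated signals file.

    signals: list of (source, urgency, text) tuples, e.g.
    ("support", "high", "3 tickets about login timeouts").
    """
    out = Path(root)
    out.mkdir(parents=True, exist_ok=True)
    path = out / f"signals-{date.today().isoformat()}.md"
    stamp = datetime.now().strftime("%H:%M")
    lines = [f"# Signals: {date.today().isoformat()}", ""]
    # Alphabetical sort would put "high" before "low" but after nothing useful,
    # so use an explicit urgency order to surface urgent items first.
    order = {"high": 0, "medium": 1, "low": 2}
    for source, urgency, text in sorted(signals, key=lambda s: order.get(s[1], 3)):
        lines.append(f"- [{stamp}] **{source}/{urgency}**: {text}")
    path.write_text("\n".join(lines) + "\n")
    return path
```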

---

### Phase 2: ORIENT 🧭
**Automation: MEDIUM** - AI synthesizes, human validates interpretation.

**Purpose:** Transform raw signals into actionable understanding.

**Inputs:**
- Raw signals from Observe phase
- Current cycle context
- Past learnings (via `/ooda:advise`)
- Strategic constraints (if defined)

**Process:**
1. **Advise check**: Query learnings for relevant past experience
2. **Pattern synthesis**: Group signals into themes
3. **Opportunity identification**: What could we do?
4. **Hypothesis generation**: What do we believe and why?
5. **Scoring**: RICE or weighted priority scoring
6. **Human validation**: Present synthesis for confirmation

**Output:** `orientation-YYYY-MM-DD.md` with:
- Situation summary
- Top opportunities (scored)
- Hypotheses to test
- Recommended focus

**Command:** `/ooda:orient` or conversationally: "Help me make sense of this"

See [references/orient-phase.md](references/orient-phase.md) for scoring frameworks.
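For the scoring step, RICE is mechanical enough to sketch directly. The formula is the standard one (Reach × Impact × Confidence ÷ Effort); the dict field names are assumptions for illustration:

```python
def rice_score(reach, impact, confidence, effort):
    """RICE = (Reach x Impact x Confidence) / Effort.

    reach: users affected per period; impact: typically a 0.25-3 scale;
    confidence: 0-1; effort: person-weeks (must be > 0).
    """
    return (reach * impact * confidence) / effort

def rank_opportunities(opportunities):
    """opportunities: list of dicts with name/reach/impact/confidence/effort.
    Returns (name, score) pairs sorted best-first."""
    scored = [
        (o["name"], rice_score(o["reach"], o["impact"], o["confidence"], o["effort"]))
        for o in opportunities
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```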

---

### Phase 3: DECIDE 🎯
**Automation: LOW** - Human makes commitments, AI structures the decision.

**Purpose:** Commit to specific actions with clear success criteria.

**Inputs:**
- Orientation output (opportunities, hypotheses)
- Resource constraints (time, capacity)
- Risk tolerance

**Process:**
1. **Scope definition**: What exactly are we committing to?
2. **Success criteria**: How will we know it worked?
3. **Measurement plan**: What will we observe?
4. **Task decomposition**: Break the work into small executable chunks (2-5 minutes each)

5. **Rollback plan**: What if it fails?
6. **Human commitment gate**: Explicit "go" required

**Output:** `plan-[name]-YYYY-MM-DD.md` with:
- Commitment statement
- Success criteria
- Task breakdown (obra/superpowers style)
- Measurement instrumentation
- Rollback triggers

**Command:** `/ooda:decide` or conversationally: "Let's commit to a plan"

See [references/decide-phase.md](references/decide-phase.md) for planning templates.
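The commitment gate can be partly mechanized: before asking for a "go", check that the plan file actually contains every required section. The section names mirror the output list above; the checker itself is an illustrative assumption, not part of the skill's bundled tooling:

```python
# Required plan sections, per the Decide output format above.
REQUIRED_SECTIONS = (
    "Commitment",
    "Success criteria",
    "Task breakdown",
    "Measurement",
    "Rollback",
)

def missing_sections(plan_markdown):
    """Return the required headings absent from a plan file.

    An empty list means the plan is structurally complete and can be
    presented at the human commitment gate.
    """
    text = plan_markdown.lower()
    return [s for s in REQUIRED_SECTIONS if s.lower() not in text]
```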

---

### Phase 4: ACT ⚡
**Automation: MEDIUM-HIGH** - Subagent execution with quality gates.

**Purpose:** Execute the plan with continuous verification.

**Inputs:**
- Committed plan from Decide phase
- Codebase context
- Test baseline

**Process:**
1. **Worktree setup** (if using git worktrees)
2. **Task dispatch**: One task at a time to isolated subagent
3. **Two-stage review** per task:
   - Stage 1: Spec compliance (does it match the plan?)
   - Stage 2: Code quality (is it well-implemented?)
4. **Checkpoint gates**: Human approval at defined intervals
5. **Measurement verification**: Are we capturing the signals we planned?

**Execution Modes:**
- **Guided**: Human approves each task (safest)
- **Batched**: Human approves every N tasks (balanced)
- **Autonomous**: Run until blocked or complete (fastest)

**Output:** Implemented code + `execution-log-YYYY-MM-DD.md`

**Command:** `/ooda:act` or conversationally: "Execute the plan"

See [references/act-phase.md](references/act-phase.md) for execution patterns.
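The three execution modes differ only in how often the human checkpoint fires. A sketch of the batched mode, where `execute` and `approve` are hypothetical callbacks standing in for subagent dispatch and the human gate (guided mode is `batch_size=1`; autonomous mode skips `approve` but keeps the failure circuit breaker):

```python
def run_batched(tasks, execute, approve, batch_size=3):
    """Dispatch tasks one at a time, pausing for human approval every
    batch_size completions.

    execute(task) -> bool: True if the task passed both review stages.
    approve(completed) -> bool: human go/no-go at the checkpoint.
    Returns (completed_tasks, status): "complete", "blocked", or "halted".
    """
    completed = []
    for i, task in enumerate(tasks, start=1):
        if not execute(task):
            return completed, "blocked"   # circuit breaker: stop on failed review
        completed.append(task)
        if i % batch_size == 0 and i < len(tasks) and not approve(completed):
            return completed, "halted"    # human vetoed at the checkpoint
    return completed, "complete"
```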

---

### Phase 5: LEARN 📚
**Automation: HIGH** - Claude extracts and indexes learnings automatically.

**Purpose:** Capture what happened for future cycles.

**Inputs:**
- Execution log
- Observations during Act phase
- Outcomes vs predictions
- Team reflections

**Process:**
1. **Outcome capture**: What actually happened?
2. **Hypothesis validation**: Were we right or wrong?
3. **Surprise identification**: What was unexpected?
4. **Pattern extraction**: What generalizes?
5. **Indexing**: Categorize for future retrieval
6. **Commit learnings**: Write to `.ooda/learnings/`

**Learning Categories:**
- **Technical**: Code patterns, architecture decisions, tool learnings
- **Product**: User behavior, feature reception, market signals
- **Process**: What worked/didn't in how we worked

**Output:** Learning files in `.ooda/learnings/[category]/`

**Command:** `/ooda:retro` or conversationally: "Capture what we learned"

See [references/learn-phase.md](references/learn-phase.md) for learning templates.
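The commit step maps directly onto the state layout above. A sketch, assuming one file per learning under `.ooda/learnings/<category>/`; the filename scheme and headings are illustrative rather than the skill's canonical template (see `templates/learning.md` for that):

```python
from datetime import date
from pathlib import Path

def capture_learning(category, title, body, root=".ooda/learnings"):
    """Write one learning into its category folder and return the path."""
    if category not in ("technical", "product", "process"):
        raise ValueError(f"unknown category: {category}")
    folder = Path(root) / category
    folder.mkdir(parents=True, exist_ok=True)
    # Date-prefixed slug keeps files sortable and greppable.
    slug = "-".join(title.lower().split())
    path = folder / f"{date.today().isoformat()}-{slug}.md"
    path.write_text(
        f"# {title}\n\n"
        f"**Category:** {category}\n\n"
        f"## What happened\n\n{body}\n"
    )
    return path
```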

---

## The Advise System

Before any significant decision, query accumulated learnings:

```
/ooda:advise [topic]
```

**How it works:**
1. Searches `.ooda/learnings/` for relevant past experience
2. Surfaces similar situations and their outcomes
3. Highlights failure patterns to avoid
4. Suggests successful approaches to consider

**Example:**
```
/ooda:advise authentication flow

Found 3 relevant learnings:
1. [2024-12] OAuth implementation: "Always handle token refresh edge case..."
2. [2024-11] Session management: "Redis sessions failed under load because..."
3. [2024-10] User onboarding: "2FA during signup reduced completion by 23%..."
```

**Key insight from research:** Teams at sionic-ai found that learnings documenting *failures* are more valuable than those documenting successes. "I tried X and it broke because Y" is the most useful sentence.
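At its simplest, the retrieval step is a keyword search over the learnings tree. A naive stand-in sketch; the bundled `scripts/knowledge_search.py` presumably searches the index more intelligently:

```python
from pathlib import Path

def advise(topic, root=".ooda/learnings"):
    """Rank learning files by keyword frequency, best match first."""
    terms = topic.lower().split()
    hits = []
    for path in Path(root).rglob("*.md"):
        text = path.read_text(encoding="utf-8").lower()
        score = sum(text.count(term) for term in terms)
        if score:
            hits.append((score, path))
    # Sort by descending score; ties keep filesystem order.
    return [path for score, path in sorted(hits, key=lambda h: -h[0])]
```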

---

## Cycle Modes

### 1. Full Cycle (`/ooda:cycle`)
Complete guided loop through all phases. Best for:
- New initiatives
- Major features
- When you have time to be thorough

### 2. Quick Cycle
Abbreviated loop for rapid iteration:
```
/ooda:quick [objective]
```
- Skips formal observation (uses conversation context)
- Lightweight orientation (bullet synthesis)
- Minimal planning (task list only)
- Immediate execution
- Fast retro (3 bullets: worked, didn't, learned)

### 3. Continuous Mode
For ongoing work, run phases independently as needed:
- Morning: `/ooda:observe` to check signals
- Planning: `/ooda:orient` + `/ooda:decide`
- Execution: `/ooda:act` in batches
- End of day/week: `/ooda:retro`

---

## Agent Roles

For complex cycles, specialized agents handle different concerns:

| Agent | Role | When Invoked |
|-------|------|--------------|
| **Strategist** | Long-term thinking, OKR alignment | Orient phase, major decisions |
| **Executor** | Task implementation, TDD | Act phase |
| **Guardian** | Quality gates, review | Between tasks |
| **Archivist** | Learning capture, retrieval | Learn phase, Advise queries |

See `agents/` directory for agent definitions.

---

## Self-Evolution

This skill system evolves through use:

### Automatic Evolution
- Each `/ooda:retro` adds to searchable knowledge
- Patterns that repeat get promoted to process improvements
- Failed approaches get flagged as anti-patterns

### Deliberate Evolution
When you notice a gap:
```
/ooda:evolve [description of improvement]
```

This triggers:
1. Analysis of current skill structure
2. Proposal for modification
3. Human approval
4. Skill file updates
5. Learning capture about the evolution itself

### Meta-Learning
The system tracks its own effectiveness:
- Cycle completion rates
- Time per phase
- Learning retrieval usefulness
- Prediction accuracy over time
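Those effectiveness metrics reduce to simple ratios over archived cycles. A sketch, assuming each archived cycle records a `completed` flag and a list of hypothesis outcomes (both field names are assumptions; `scripts/cycle_report.py` would own the real schema):

```python
def cycle_stats(cycles):
    """Summarize completion rate and prediction accuracy across cycles.

    cycles: list of dicts with completed (bool) and predictions
    (list of bool: was each hypothesis confirmed?).
    """
    total = len(cycles)
    completed = sum(1 for c in cycles if c["completed"])
    predictions = [p for c in cycles for p in c["predictions"]]
    return {
        "completion_rate": completed / total if total else 0.0,
        "prediction_accuracy": sum(predictions) / len(predictions) if predictions else 0.0,
    }
```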

---

## Integration with Existing Skills

Product OODA composes with other skills:

| Skill | Integration Point |
|-------|-------------------|
| **superpowers** | Act phase uses subagent-driven-development |
| **test-driven-development** | Executor agent enforces TDD |
| **systematic-debugging** | When Act phase hits blockers |
| **memory-report** | Periodic analysis of accumulated learnings |

---

## Anti-Patterns

**Skipping Observe:** Jumping to solutions without understanding signals leads to building the wrong thing.

**Over-Orienting:** Analysis paralysis. Set time boxes. "Good enough" orientation beats perfect orientation that takes forever.

**Uncommitted Decides:** Vague plans with no clear success criteria. Every Decide must answer: "How will we know?"

**Unsupervised Act:** Letting execution run without checkpoints. Even autonomous mode needs circuit breakers.

**Skipping Retro:** The most common failure. "We shipped, we're done." No. Capture the learning or it's lost forever.

---

## Files Reference

| File | Purpose |
|------|---------|
| `references/observe-phase.md` | Signal collection methodology |
| `references/orient-phase.md` | Synthesis and scoring frameworks |
| `references/decide-phase.md` | Planning templates and commitment gates |
| `references/act-phase.md` | Execution patterns and quality gates |
| `references/learn-phase.md` | Learning capture templates |
| `references/anti-patterns.md` | Common failure modes to avoid |
| `agents/strategist.md` | Long-term thinking agent |
| `agents/executor.md` | Implementation agent |
| `agents/guardian.md` | Quality gate agent |
| `agents/archivist.md` | Learning management agent |
| `scripts/knowledge_search.py` | Search learnings index |
| `scripts/cycle_report.py` | Generate cycle summary |
| `templates/cycle.md` | Active cycle template |
| `templates/learning.md` | Learning capture template |
| `templates/plan.md` | Execution plan template |
