DIDP Methodology Guide

This guide walks you through using DIDP (Deterministic Iterative Development Protocol) in your projects. By the end, you’ll understand how to set up DIDP, run iterations, and handle common scenarios.

What is DIDP?

DIDP is a phase-based development protocol designed for AI-assisted development. It solves a critical problem: AI assistants lose context between sessions.

Traditional development assumes continuous human memory. But when you’re working with an AI assistant that may:

  • Hit context limits and need compaction
  • Crash or disconnect
  • Be a different instance next session

…you need a way to restore state from artifacts, not memory.

DIDP provides:

  • Deterministic phases with clear entry/exit criteria
  • Artifact-first development where documents override memory
  • Compaction-safe workflows that survive context loss
  • Clear authority hierarchy when conflicts arise

Quick Start

1. Create Your Iteration State

Create .jellylabs.ai/didp/iteration_state.yaml in your project:

iteration:
  id: "my-first-iteration"
  goal: "Build feature X"
  created_at: "2025-01-15"

phase:
  name: planning
  entered_at: "2025-01-15"
  auto_advance: true
  locked: false

exit_criteria:
  - criterion: "Requirements documented"
    satisfied: false
  - criterion: "Scope agreed"
    satisfied: false

artifacts: []

handoff_notes:
  summary: "Starting new iteration"
  decisions_made: []
  risks_identified: []
  next_recommended_action: "Define requirements for feature X"

2. Add the Workflow Contract

Copy the Workflow Contract to .jellylabs.ai/didp/workflow_contract.md. This tells AI assistants how to behave.

3. Create a Bootstrap Prompt

Create .jellylabs.ai/didp/prompts/bootstrap.txt:

You are operating under DIDP.

MANDATORY FIRST ACTIONS:
1. Read .jellylabs.ai/didp/workflow_contract.md
2. Read .jellylabs.ai/didp/iteration_state.yaml
3. Identify current phase
4. Resume from handoff_notes.next_recommended_action

If any document is missing or inconsistent, STOP and report.

4. Start Your Session

Begin each AI session with the bootstrap prompt. The assistant will:

  1. Read the workflow contract
  2. Load current iteration state
  3. Understand what phase you’re in
  4. Resume from where you left off
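The mandatory first actions amount to a load-and-validate step. A minimal sketch in TypeScript, assuming the state file has already been parsed into an object (the field names mirror the YAML above; the specific validation rules are illustrative, not part of DIDP):

```typescript
// Shape of iteration_state.yaml, mirroring the Quick Start example.
interface IterationState {
  iteration: { id: string; goal: string; created_at: string };
  phase: { name: string; entered_at: string; auto_advance: boolean; locked: boolean };
  exit_criteria: { criterion: string; satisfied: boolean }[];
  artifacts: string[];
  handoff_notes: { summary: string; next_recommended_action: string };
}

// Per the bootstrap prompt: if the document is missing or
// inconsistent, STOP and report. A non-empty result means STOP.
function validateState(state: Partial<IterationState>): string[] {
  const problems: string[] = [];
  if (!state.iteration?.id) problems.push("iteration.id missing");
  if (!state.phase?.name) problems.push("phase.name missing");
  if (!state.handoff_notes?.next_recommended_action)
    problems.push("handoff_notes.next_recommended_action missing");
  return problems; // empty array: safe to resume
}
```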

The Phase Model

Overview

planning → analysis → spec_lock → implementation → testing → archive → merge → complete

Each phase has:

  • Purpose: What you’re trying to achieve
  • Allowed actions: What you can do
  • Exit criteria: How you know you’re done
  • Conversation style: How interactive it should be
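The linear phase order can be sketched as a small state machine. The Criterion shape mirrors the exit_criteria entries shown earlier; auto-advance and lock handling are omitted for brevity:

```typescript
// Phase order from the overview above.
const PHASES = [
  "planning", "analysis", "spec_lock", "implementation",
  "testing", "archive", "merge", "complete",
] as const;
type Phase = (typeof PHASES)[number];

interface Criterion { criterion: string; satisfied: boolean; }

// A phase may only advance to its immediate successor, and only
// when every exit criterion is satisfied. Returns null otherwise.
function nextPhase(current: Phase, exitCriteria: Criterion[]): Phase | null {
  const i = PHASES.indexOf(current);
  if (i < 0 || i === PHASES.length - 1) return null; // unknown or already complete
  const allMet = exitCriteria.every((c) => c.satisfied);
  return allMet ? PHASES[i + 1] : null;              // stay put until criteria pass
}
```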

Planning Phase

Purpose: Open exploration and scope definition

Allowed:

  • Brainstorming
  • Requirements gathering
  • Architecture discussions
  • Scope changes

Exit when:

  • Requirements documented
  • Scope agreed
  • Initial plan exists

Conversation: Back-and-forth expected

# Example exit criteria for planning
exit_criteria:
  - criterion: "Requirements documented in requirements.md"
    satisfied: true
  - criterion: "Scope agreed with stakeholder"
    satisfied: true
  - criterion: "High-level approach selected"
    satisfied: true

Analysis Phase

Purpose: Validate assumptions from planning

Allowed:

  • Feasibility checks
  • Risk assessment
  • Clarifying questions
  • Prototyping

Not Allowed:

  • Scope changes (triggers replan)

Exit when:

  • Assumptions validated
  • Risks documented
  • Approach confirmed feasible

Conversation: Clarifying questions OK

Spec Lock Phase

Purpose: Freeze the specification

Allowed:

  • Finalizing requirements
  • Creating implementation plan
  • Writing test plan

Not Allowed:

  • Re-opening planning discussions
  • Scope changes

Exit when:

  • Specification frozen
  • Implementation plan complete
  • Test plan complete

Conversation: Structured finalization only

Implementation Phase

Purpose: Execute the plan

Allowed:

  • Writing code
  • Creating artifacts
  • Execution-focused clarifications

Not Allowed:

  • Scope discussions
  • Requirement changes

Exit when:

  • All planned work complete
  • Code committed

Conversation: Minimal, execution-focused

Testing Phase

Purpose: Verify the implementation

Allowed:

  • Running tests
  • Defect documentation
  • Bug fixes

Not Allowed:

  • Feature additions
  • Scope changes

Exit when:

  • All tests pass
  • Defects addressed or documented

Conversation: Defect-focused only

Archive, Merge, Complete

These are mechanical phases:

  • Archive: Preserve iteration state
  • Merge: Integrate to main branch
  • Complete: Write retrospective

Handling Common Scenarios

Session Ends Mid-Work

  1. AI updates .jellylabs.ai/didp/iteration_state.yaml:

    handoff_notes:
      summary: "Implemented 3 of 5 endpoints"
      next_recommended_action: "Complete remaining endpoints: /users, /orders"
  2. Commit the state file

  3. Next session starts with bootstrap prompt, reads state, continues

AI Hits Context Limit

Same as above. DIDP is designed for this. The artifacts contain everything needed to resume.

Need to Change Scope During Analysis

If analysis reveals planning assumptions are wrong:

analysis_outcome:
  replan_required: true
  reason: "API we planned to use is deprecated. Need alternative approach."

This triggers the replan protocol:

  1. Phase resets to planning
  2. Reason is documented
  3. Planning resumes with new information

Conflicting Information

Use the authority hierarchy:

  1. .jellylabs.ai/didp/iteration_state.yaml wins
  2. Then .jellylabs.ai/didp/workflow_contract.md
  3. Then phase artifacts
  4. Then git state
  5. Conversation is lowest authority
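The hierarchy can be sketched as a lookup that walks sources from highest to lowest authority (the source names and the resolve helper are illustrative, not part of DIDP):

```typescript
// Authority hierarchy from the list above, highest first.
const AUTHORITY = [
  "iteration_state",   // .jellylabs.ai/didp/iteration_state.yaml
  "workflow_contract", // .jellylabs.ai/didp/workflow_contract.md
  "phase_artifacts",
  "git_state",
  "conversation",      // lowest authority
] as const;
type Source = (typeof AUTHORITY)[number];

// Returns the answer from the highest-authority source that has one.
function resolve(answers: Partial<Record<Source, string>>): string | undefined {
  for (const source of AUTHORITY) {
    const value = answers[source];
    if (value !== undefined) return value;
  }
  return undefined;
}
```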

Something Feels Wrong

If the AI is unsure:

  • It should STOP
  • Ask for clarification
  • Never guess or proceed with uncertainty

Best Practices

Update State Frequently

Don’t wait until end of session:

# After each meaningful progress
handoff_notes:
  summary: "Completed database schema design"
  decisions_made:
    - "Using PostgreSQL instead of MongoDB"
  next_recommended_action: "Implement User model"

Commit State Changes

git add iteration_state.yaml
git commit -m "Progress: completed database schema"

Use Clear Exit Criteria

Bad:

exit_criteria:
  - criterion: "Done with requirements"
    satisfied: false

Good:

exit_criteria:
  - criterion: "All user stories written in requirements.md"
    satisfied: false
  - criterion: "Each story has acceptance criteria"
    satisfied: false
  - criterion: "Stories reviewed by product owner"
    satisfied: false

Trust Artifacts Over Memory

If an AI assistant says “I remember we decided X” but .jellylabs.ai/didp/iteration_state.yaml says Y, trust Y.

Troubleshooting

“I don’t know what phase we’re in”

Read .jellylabs.ai/didp/iteration_state.yaml. The phase.name field is authoritative.

“Exit criteria seem wrong”

Update them in .jellylabs.ai/didp/iteration_state.yaml. They’re not immutable until spec_lock.

“AI keeps doing out-of-phase work”

Re-emphasize the workflow contract. The AI should refuse out-of-phase requests.

“Session recovery isn’t working”

Check:

  1. Is .jellylabs.ai/didp/iteration_state.yaml committed?
  2. Does bootstrap prompt include mandatory actions?
  3. Is handoff_notes.next_recommended_action clear?

Knowledge System

DIDP includes a knowledge system for automated spec change detection and normative rule extraction. This enables AI assistants to stay current with protocol updates.

Architecture

.jellylabs.ai/didp/
├── knowledge/
│   ├── index.json           # Hash manifest (content fingerprints)
│   ├── rules.json           # Extracted normative rules
│   └── history/             # Versioned snapshots
└── scripts/
    ├── hash-site.ts         # Generate content hashes
    ├── index-specs.ts       # Extract MUST/SHOULD/MAY rules
    └── doctor.ts            # Anti-pattern analyzer

Hash-Based Change Detection

On every build, DIDP hashes all content files to detect changes:

bun .jellylabs.ai/didp/scripts/hash-site.ts

Output:

╭─────────────────────────────────────────────╮
│  Hash Manifest Generated                    │
├─────────────────────────────────────────────┤
│  Files:         45                          │
│  Site Hash: fbe3737d36606c7d                │
├─────────────────────────────────────────────┤
│  Changes Detected:                          │
│    Modified:     2                          │
╰─────────────────────────────────────────────╯

The siteHash provides a single value for quick comparison. If it is unchanged, no further processing is needed.
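A minimal sketch of this scheme, assuming a manifest of per-file hashes plus a combined site hash (the shapes here are illustrative, not the actual index.json schema):

```typescript
import { createHash } from "node:crypto";

type Manifest = { files: Record<string, string>; siteHash: string };

function buildManifest(contents: Record<string, string>): Manifest {
  const files: Record<string, string> = {};
  // Sort paths so the combined hash is deterministic across runs.
  for (const path of Object.keys(contents).sort()) {
    files[path] = createHash("sha256").update(contents[path]).digest("hex").slice(0, 16);
  }
  const siteHash = createHash("sha256")
    .update(Object.values(files).join(""))
    .digest("hex")
    .slice(0, 16);
  return { files, siteHash };
}

// Paths whose fingerprint differs between two manifests.
function changedFiles(prev: Manifest, next: Manifest): string[] {
  return Object.keys(next.files).filter((p) => prev.files[p] !== next.files[p]);
}
```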

Normative Rule Extraction

Specs contain normative statements (MUST, SHOULD, MAY per RFC 2119). The indexer extracts these:

bun .jellylabs.ai/didp/scripts/index-specs.ts

Output:

╭─────────────────────────────────────────────╮
│  Rules Extracted                            │
├─────────────────────────────────────────────┤
│  Total Rules:   139                         │
├─────────────────────────────────────────────┤
│  By Type:                                   │
│    MUST:          79                        │
│    MUST-NOT:      22                        │
│    SHOULD:        12                        │
│    MAY:           26                        │
╰─────────────────────────────────────────────╯

Rules are stored in rules.json with source file, line number, and category.
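The extraction step can be sketched as a keyword scan over spec lines (the Rule shape is an assumption, not the actual rules.json schema):

```typescript
type RuleType = "MUST" | "MUST-NOT" | "SHOULD" | "MAY";
interface Rule { type: RuleType; text: string; line: number; }

// Scan each line for an RFC 2119 keyword; "MUST NOT" is tried
// before "MUST" so it is not misclassified.
function extractRules(spec: string): Rule[] {
  const rules: Rule[] = [];
  const pattern = /\b(MUST NOT|MUST|SHOULD|MAY)\b/;
  spec.split("\n").forEach((text, i) => {
    const m = text.match(pattern);
    if (m) {
      const type = (m[1] === "MUST NOT" ? "MUST-NOT" : m[1]) as RuleType;
      rules.push({ type, text: text.trim(), line: i + 1 });
    }
  });
  return rules;
}
```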

Anti-Pattern Detection

The doctor analyzes CLAUDE.md files for violations:

bun .jellylabs.ai/didp/scripts/doctor.ts

Built-in checks:

  • Missing DIDP bootstrap instructions
  • Phase skip permissions
  • Conversation prioritized over artifacts
  • Missing source of truth hierarchy
  • Auto-advance without exit criteria
  • Silent replan allowances

Options:

  • --verbose - Show matched lines and suggestions
  • --json - Output as JSON
  • --markdown - Output as Markdown report
  • --deep - Request agent evaluation for quality check

Update Hooks

The knowledge base refreshes automatically on DIDP commands:

Command         Hook Behavior
/didp-init      Hash + index + doctor gate
/didp-start     Hash + index + doctor gate
/didp-update    Hash + compare + index if changed

The hook:

  1. Compares current siteHash with previous
  2. If changed, runs index-specs.ts
  3. Skips if unchanged (fast path)
  4. Runs doctor.ts to check for errors
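The control flow above can be sketched as follows (runIndex and runDoctor are hypothetical helpers standing in for the real scripts):

```typescript
// Returns doctor findings; re-indexes only when content changed.
function updateHook(
  prevSiteHash: string,
  nextSiteHash: string,
  runIndex: () => void,
  runDoctor: () => string[],
): string[] {
  if (nextSiteHash !== prevSiteHash) {
    runIndex();       // content changed: re-extract rules
  }                   // unchanged: skip indexing (fast path)
  return runDoctor(); // always check for errors
}
```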

Error Gate

If doctor finds errors (not warnings), the command pauses:

╭─────────────────────────────────────────────╮
│  ⚠️  DIDP Doctor Found Errors               │
├─────────────────────────────────────────────┤
│                                             │
│  CLAUDE.md has issues that violate DIDP     │
│  protocol rules. These should be fixed      │
│  before proceeding.                         │
│                                             │
│  Options:                                   │
│  1. Fix the issues shown above              │
│  2. Type OVERRIDE to proceed anyway         │
│                                             │
╰─────────────────────────────────────────────╯

This prevents starting new iterations with known CLAUDE.md violations. Users must either:

  • Fix the issues (recommended)
  • Type OVERRIDE to acknowledge and proceed anyway

Warnings (severity: warning/info) do not trigger the gate.
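The gate condition reduces to a severity check (the Finding shape is illustrative):

```typescript
interface Finding { severity: "error" | "warning" | "info"; message: string; }

// Only error-severity findings trigger the gate; warnings and
// info-level findings pass through, as described above.
function shouldGate(findings: Finding[]): boolean {
  return findings.some((f) => f.severity === "error");
}
```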

Agent Evaluation

For deep quality checks, use --deep:

bun .jellylabs.ai/didp/scripts/doctor.ts --deep

This creates .jellylabs.ai/didp/knowledge/eval-request.json. Run /didp-eval to complete the evaluation with agent assistance.

The agent:

  • Validates pattern accuracy (true/false positives)
  • Checks for missed issues
  • Recommends fixes (human approval required for global changes)

Build Integration

Add to package.json:

{
  "scripts": {
    "postbuild": "bun .jellylabs.ai/didp/scripts/hash-site.ts",
    "didp:hash": "bun .jellylabs.ai/didp/scripts/hash-site.ts",
    "didp:index": "bun .jellylabs.ai/didp/scripts/index-specs.ts",
    "didp:doctor": "bun .jellylabs.ai/didp/scripts/doctor.ts"
  }
}

The postbuild hook ensures the hash manifest updates on every build.

Documentation Enforcement

DIDP requires documentation to stay synchronized with implementation. This section describes the enforcement mechanisms.

Why Documentation Enforcement?

Stale or missing documentation creates:

  • Agent misinformation — AI reads outdated docs and makes wrong decisions
  • Adopter confusion — External users can’t understand the system
  • Audit findings — Compliance gaps when docs don’t match reality

Requirements by Phase

Phase             Documentation Requirement
planning          Define documentation scope in iteration goals
spec_lock         Identify which docs will be affected
implementation    Update docs as features are built
testing           Verify docs match implementation
archive           Confirm all doc updates committed

Mandatory Exit Criteria

Implementation phase MUST NOT complete unless:

  • Public-facing changes have corresponding methodology.md updates
  • FTL entry exists and reflects current status
  • CLAUDE.md (local) reflects new capabilities/commands
  • Changelog updated if versioning requires it

Testing phase MUST NOT complete unless:

  • Documentation has been reviewed for accuracy
  • Cross-references validated (links work, versions match)
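The implementation-phase gate can be sketched as a boolean check (the DocStatus fields are illustrative, not a DIDP schema):

```typescript
interface DocStatus {
  methodologyUpdated: boolean; // methodology.md reflects public-facing changes
  ftlEntryCurrent: boolean;    // FTL entry exists and reflects current status
  claudeMdUpdated: boolean;    // local CLAUDE.md reflects new capabilities
  changelogUpdated: boolean;   // only required when versioning demands it
}

function implementationDocsComplete(s: DocStatus, versioned: boolean): boolean {
  return s.methodologyUpdated && s.ftlEntryCurrent && s.claudeMdUpdated &&
         (!versioned || s.changelogUpdated);
}
```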

Documentation Scope Matrix

Change Type         Required Documentation
New DIDP script     methodology.md, CLAUDE.md (local)
New command         Command .md file, methodology.md if public
Spec change         Spec file, changelog, potentially PSP proposal
Phase model change  phase-model.md, workflow-contract.md
New FTL entry       task-ledger.md

Verification

Documentation enforcement can be verified via:

  • doctor.ts --docs (future: FTL-052)
  • Manual review during testing phase
  • Cross-reference validation during archive phase

Agent Behavior

The agent MUST:

  • Prompt for documentation updates before phase advancement
  • Refuse to mark implementation complete if docs are stale
  • Include documentation status in handoff notes

The agent SHOULD:

  • Run doctor.ts to verify CLAUDE.md compliance
  • Check that FTL status reflects work state

Agent Transparency

DIDP requires agents to announce automated actions before and after execution. This ensures users always know what’s happening.

Feedback Pattern

For ANY automated action, the agent MUST:

  1. Announce intent before starting:

    • “Running knowledge base update…”
    • “Checking documentation compliance…”
    • “Idea triggered, please hold while I record this…”
  2. Report outcome after completion:

    • “Knowledge base updated. 3 files changed.”
    • “Documentation verified. No issues found.”
    • “That’s a good idea, saved it for later as FTL-XXX.”

Applies To

  • DIDP hooks (hash-site, index-specs, doctor)
  • FTL idea capture
  • Session startup checks
  • Background agents/tasks
  • Any automated file operations

Rationale

  • Builds trust through transparency
  • Helps user understand system behavior
  • Enables user to interrupt if needed

Deferred Decision Capture

When discussions produce actionable insights that aren’t in the current scope, they MUST be captured immediately as FTL Ideas.

Trigger

The agent creates an FTL Idea when:

  • User says “we should do X later”
  • Discussion produces actionable insight not in scope
  • User expresses interest but defers action
  • A decision is made but implementation is out of scope

Process

  1. Agent says: “Idea triggered, please hold while I record this…”
  2. Creates FTL entry (Status: Idea)
  3. Agent confirms: “That’s a good idea, saved it for later as FTL-XXX: [title]”

Session Startup

At session start, the agent surfaces recent FTL Ideas:

  • “Last session you mentioned [X] (FTL-XXX) - want to pursue this?”

This prevents valuable decisions from being lost to context compaction.


Next Steps