Foundation Objectives

True North

Enable AI-assisted work that is creative during exploration, disciplined during convergence, and trustworthy at publication—without ever losing human authority.

If something does not serve that sentence, it does not belong in the foundation.


Objective 0: Preserve Human Authority While Leveraging AI

Definition: Ensure that humans always retain final decision authority, responsibility for outcomes, and control over what becomes canonical.

Why it matters:

  • Prevents accidental delegation of judgment to tools
  • Prevents “the system decided” narratives
  • Keeps accountability real and explicit

Mechanisms: PSP v1, Human Authority Fallback, Safety Boundaries, Reader Responsibility Contract


Objective 1: Make Decisions Explicit Before Work Begins

Definition: No major direction, structure, or policy should exist without being consciously proposed, reviewed, and accepted.

Why it matters:

  • Prevents “we already built it, so now it’s real”
  • Avoids retroactive justification
  • Saves time by stopping misaligned work early

Mechanisms: PSP v1, Proposal Artifacts, Proposal Index, Task Ledger
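
For illustration only, a Proposal Index or Task Ledger entry could be captured as a small structured record before any work starts. The field names and statuses below are assumptions, not part of PSP v1 or any defined schema.

```python
from dataclasses import dataclass
from enum import Enum


class ProposalStatus(Enum):
    # Hypothetical lifecycle: nothing becomes canonical until ACCEPTED.
    DRAFT = "draft"
    UNDER_REVIEW = "under_review"
    ACCEPTED = "accepted"
    REJECTED = "rejected"


@dataclass
class ProposalEntry:
    """Illustrative ledger record; field names are assumed, not prescribed by PSP v1."""
    task_id: str                      # e.g. "FTL-010"
    title: str
    objective: str                    # which foundation objective it advances
    status: ProposalStatus = ProposalStatus.DRAFT
    accepted_by: str | None = None    # a named human, never a tool


# Example: work on a task may not begin until a human moves its entry to ACCEPTED.
entry = ProposalEntry(task_id="FTL-010", title="Conductor Pattern",
                      objective="Obj 2 (Convergence)")
```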


Objective 2: Enable Deterministic Convergence

Definition: Work should converge toward stable artifacts with clear stopping points.

Why it matters:

  • AI systems naturally encourage infinite refinement
  • Institutions require closure
  • Stability enables trust, citation, and reuse

Mechanisms: DIDP v1, Explicit Acceptance Criteria, State Locking, Task Acceptance Checklist
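
A minimal sketch of how explicit acceptance criteria and state locking could be made checkable, assuming an artifact carries its own criteria; the class and method names are hypothetical and not drawn from DIDP v1.

```python
from dataclasses import dataclass, field


@dataclass
class Artifact:
    """Hypothetical artifact carrying its own stopping conditions."""
    name: str
    acceptance_criteria: dict[str, bool] = field(default_factory=dict)
    locked: bool = False

    def converged(self) -> bool:
        # Done only when every stated criterion is satisfied -- no open-ended polishing.
        return bool(self.acceptance_criteria) and all(self.acceptance_criteria.values())

    def lock(self, approved_by_human: bool) -> None:
        # State locking is a deliberate human act, never an automatic side effect.
        if not approved_by_human:
            raise PermissionError("only a human may lock an artifact")
        if not self.converged():
            raise ValueError("acceptance criteria not yet met")
        self.locked = True

    def edit(self) -> None:
        if self.locked:
            raise RuntimeError("artifact is locked; open a proposal instead")
```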


Objective 3: Control What Escapes Into the World

Definition: Only curated, intentional artifacts should be published.

Why it matters:

  • Raw process ≠ understanding
  • Exploration ≠ canon
  • Leakage creates confusion, IP risk, and misinterpretation

Mechanisms: PPP v1, Specs vs Docs Split, Canonical Prompt Distillation, Redaction-by-Design
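
One way to picture a publication gate, assuming artifacts are tagged with the specs/docs split and carry curation and redaction flags; the field names and rule below are illustrative, not the PPP v1 definition.

```python
from dataclasses import dataclass


@dataclass
class Candidate:
    """Hypothetical item queued for publication."""
    path: str
    kind: str        # "spec" or "doc", assuming the specs/docs split
    curated: bool    # explicitly reviewed and selected for publication
    redacted: bool   # sensitive material removed up front, by design


def may_publish(item: Candidate) -> bool:
    # Only curated, intentional artifacts escape; raw process output never does.
    return item.kind in {"spec", "doc"} and item.curated and item.redacted


# A raw brainstorming note fails the gate regardless of how useful it was internally.
assert not may_publish(Candidate("notes/brainstorm.md", kind="doc",
                                 curated=False, redacted=False))
```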


Objective 4: Allow Learning Without Canonizing Noise

Definition: Make it safe to think out loud, brainstorm, and explore without fear that ideas will silently harden into doctrine.

Why it matters:

  • Creativity dies under premature formalization
  • Governance fails when everything feels binding
  • People stop experimenting if exploration is risky

Mechanisms: Docs as Non-Normative, AI Ideation Boundaries, Task Ledger, Proposal Gate


Objective 5: Make the System Understandable at a Glance

Definition: A new reader should be able to quickly answer: What is authoritative? What is explanatory? Where do decisions happen? How does work flow?

Why it matters:

  • Prevents misuse
  • Reduces onboarding cost
  • Signals maturity instantly

Mechanisms: System Overview Diagram, Authority Hierarchy, Index Pages, Explicit Scope Sections


Objective 6: Be Honest About Limits and Failure

Definition: Name what the system does not try to solve and where it may fail.

Why it matters:

  • Prevents overconfidence
  • Builds credibility
  • Stops others from “fixing” imaginary gaps

Mechanisms: Accepted Failure Modes, Safety Boundaries, Sunset Principles, No Immortality Assumptions


Objective 7: Support Long-Lived Evolution Without Drift

Definition: Allow the system to evolve deliberately without eroding its foundations.

Why it matters:

  • Time is the real adversary
  • Drift kills standards quietly
  • Governance must outlive its creators

Mechanisms: Proposal-First Changes, Versioning Discipline, Task Ledger, Living Docs Policy, Explicit Sunsetting


Open Task Mapping

Task                              Primary Objective         Secondary Objective
FTL-010: Conductor Pattern        Obj 2 (Convergence)       Obj 4 (Learning)
FTL-011: AI Planning Boundaries   Obj 0 (Human Authority)   Obj 4 (Learning)
FTL-012: Safety Boundaries        Obj 0 (Human Authority)   Obj 6 (Honesty)

Decision Filter

When evaluating whether to add something to the foundation, ask:

  1. Does it serve the True North?
  2. Which objective does it advance?
  3. Does it conflict with any objective?
  4. Is it the simplest solution that satisfies the need?

If the answer to #1 is no, stop. It doesn’t belong here.
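
Purely as an illustration of the filter's ordering, the four questions can be read as a short-circuit check; the function and argument names below are assumed, not part of the foundation.

```python
def passes_decision_filter(serves_true_north: bool,
                           advances_objective: str | None,
                           conflicts_with: list[str],
                           simpler_alternative_exists: bool) -> bool:
    """Hypothetical rendering of the four questions; all argument names are assumed."""
    if not serves_true_north:              # Question 1 is a hard stop.
        return False
    if advances_objective is None:         # Question 2: it must advance a named objective.
        return False
    if conflicts_with:                     # Question 3: any conflict blocks it.
        return False
    return not simpler_alternative_exists  # Question 4: prefer the simplest solution that works.
```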