ADR-016: Externalized Invariant Substrate

Status: Accepted
Version: 1.0
Date: 2025-12-21
Supersedes: N/A
Related ADRs: ADR-003, ADR-013, ADR-014
Related PRDs: N/A


Context

The most significant gap in LLM reasoning is not arithmetic or symbolic math per se. It is this:

LLMs lack a stable, externalized state transition system that preserves invariants across multi-step transformations.

LLMs can:

  1. Produce locally coherent reasoning steps
  2. Encode and decode logic and semantics in attention space
  3. Apply individual transformations correctly in isolation

But they cannot guarantee preservation of invariants across steps without external help.

This is the fundamental reason why multi-step math fails, chain-of-thought collapses, and agents contradict earlier decisions.

Decision

SEA-Forge™ is explicitly designed as an Externalized Invariant Substrate — the stable semantic layer that LLMs interpret rather than contain.

Core Insight

Reasoning must be externalized into a governed semantic substrate that LLMs interpret, not contain.

This is why SEA™ is not “LLM architecture.” It is architecture for cognition itself.

The Cipher Clarification

The attention mechanism can function as a “cipher” for encoding and decoding logic or semantics, but:

Ciphers require a stable substrate.

Attention space can encode logic or semantics — but it cannot preserve them reliably over time because attention is:

  1. Transient: recomputed on every forward pass, not persisted between calls
  2. Bounded: limited by the context window, which loses information at scale
  3. Ungoverned: bound by no explicit, enforceable invariants

Therefore, the cipher works for interpretation, not for execution.

This distinction is critical: the model interprets (reads and translates meaning), while the substrate executes (holds state and enforces invariant-preserving transitions).

What This Changes

| Dimension | Without Substrate | With SEA™ Substrate |
| --- | --- | --- |
| Math failures | Mysterious | Invariant violations (diagnosable) |
| Chain-of-thought | Collapses at depth | Preserved via external state |
| Agent decisions | Contradictory | Consistent via governance |
| Hallucination | Random noise | Detectable deviation |
| Corrections | Hope-based | Policy-driven |

Invariant Preservation Requirements

  1. Externalized State: Reasoning state lives in KGS, not in context windows
  2. Explicit Invariants: Declared via SBVR rules, CALM constraints, Policy primitives
  3. Validation Gates: Every transformation validates preserved invariants
  4. Provenance: Every step records what invariants were checked
  5. Back-projection: Results must validate against source constraints
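
The five requirements above can be sketched as a small state-transition gate. This is an illustrative stand-in, not the SEA-Forge™ API: `ExternalState` plays the role of the externalized KGS state, the `invariants` dict plays the role of declared SBVR/CALM/Policy rules, and every transition is validated and recorded with provenance.

```python
from dataclasses import dataclass, field

@dataclass
class ExternalState:
    data: dict                              # requirement 1: state lives outside the context window
    provenance: list = field(default_factory=list)

def transition(state, name, mutate, invariants):
    """Validation gate (requirement 3): apply a candidate mutation only
    if every declared invariant still holds on the resulting state."""
    candidate = dict(state.data)
    mutate(candidate)
    violated = [n for n, pred in invariants.items() if not pred(candidate)]
    if violated:
        raise ValueError(f"transition '{name}' rejected: {violated}")
    # Requirement 4: record which invariants were checked at this step.
    state.provenance.append({"step": name, "checked": sorted(invariants)})
    state.data = candidate
    return state

# Requirement 2: an explicit, declared invariant (total is conserved).
TOTAL = 100
invariants = {"conservation": lambda d: sum(d.values()) == TOTAL}

state = ExternalState({"a": 60, "b": 40})

# A conserving transfer passes the gate.
transition(state, "move_10", lambda d: d.update(a=d["a"] - 10, b=d["b"] + 10), invariants)

# A leaking mutation is rejected and the state is left untouched.
try:
    transition(state, "leak_5", lambda d: d.update(a=d["a"] - 5), invariants)
except ValueError as e:
    print(e)
```

The design point matches the ADR: the invariant is declared once, outside any prompt, and enforced mechanically on every transition rather than hoped-for in the token stream.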

Example: Multi-Step Math

Standard LLM approach (fails):

Step 1: Simplify sqrt(x² - 2x + 1)
Step 2: = sqrt((x-1)²)
Step 3: = x - 1  ← WRONG! Should be |x - 1|

SEA™ substrate approach (succeeds):

Step 1: Simplify sqrt(x² - 2x + 1)
        → Emit transformation candidate
Step 2: = sqrt((x-1)²)
        → Invariant check: sqrt(u²) = |u| over reals
Step 3: = |x - 1|
        → Validation: invariants preserved ✓

The difference: invariants are externalized and enforced, not hoped-for.
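
A numeric back-projection check (requirement 5) makes the difference concrete. This sketch simply validates each candidate rewrite against the source expression at sample points; it is an illustration of the validation idea, not the substrate's actual mechanism.

```python
import math

# Source expression and the two candidate rewrites from the steps above.
source    = lambda x: math.sqrt(x * x - 2 * x + 1)   # sqrt(x² - 2x + 1)
naive     = lambda x: x - 1                          # the unvalidated Step 3
corrected = lambda x: abs(x - 1)                     # the invariant-preserving Step 3

# Back-projection: a rewrite must agree with the source on sample inputs.
samples = [-3.0, -0.5, 0.0, 1.0, 2.5]
naive_ok     = all(math.isclose(source(x), naive(x)) for x in samples)
corrected_ok = all(math.isclose(source(x), corrected(x)) for x in samples)

print(naive_ok, corrected_ok)   # the naive rewrite fails for every x < 1
```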

Rationale

Why Not Just Better Prompting?

Prompts cannot guarantee invariant preservation because:

  1. Invariants are not explicit in the token stream
  2. Each token prediction is locally coherent but not globally bound
  3. Context windows lose information at scale
  4. No mechanism for back-projection validation

Why Not Pure Formal Methods?

Too rigid for:

  1. Human-AI co-creation (natural language as representation)
  2. Multi-representation reasoning (formal ↔ graph ↔ narrative)
  3. Continuous improvement (evolution, not static proofs)

Why SEA™ Works

SEA™ provides:

  1. Semantic Core — Meaning externalized, versioned, validated
  2. Knowledge Graph — State persisted with provenance
  3. Governance — Invariants as enforceable policies
  4. Isomorphic Projection — Multiple representations, single truth
  5. Feedback Loop — Runtime signals update the model, not patches
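
Item 4, Isomorphic Projection, can be illustrated in a few lines: one canonical record, several derived views, never three independently edited copies. The names and projection functions here are hypothetical, chosen only to show the "multiple representations, single truth" shape.

```python
# Single source of truth: one canonical fact in the semantic core.
canonical = {"subject": "Order", "predicate": "must_have", "object": "Customer"}

def as_sbvr(fact):
    """Formal rule view (SBVR-flavored phrasing, illustrative only)."""
    return f"It is obligatory that each {fact['subject']} has a {fact['object']}."

def as_graph_edge(fact):
    """Knowledge-graph view: a subject-predicate-object triple."""
    return (fact["subject"], fact["predicate"], fact["object"])

def as_narrative(fact):
    """Natural-language view for human-AI co-creation."""
    return f"Every {fact['subject'].lower()} is linked to a {fact['object'].lower()}."

# All three representations are projections of the same record, so
# editing the canonical fact updates every view consistently.
print(as_sbvr(canonical))
print(as_graph_edge(canonical))
print(as_narrative(canonical))
```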

Alternatives Considered

Larger Context Windows

Rejected — more tokens don't create invariant preservation; they just delay collapse.

Chain-of-Thought Prompting

Rejected — CoT is locally coherent but not globally obligated; invariants are not externalized.

Multi-Agent Without Substrate

Rejected — Agents without shared semantic state create contradictory reasoning.

Constraints

Quality Attributes

Bounded Contexts Impacted

Consequences

Positive

Negative

Additional Notes

The SEA™ Difference

SEA™ is not code describing systems.
SEA™ is the system describing code.

And:

SEA™ is not LLM architecture.
SEA™ is architecture for cognition itself.

Implementation status: Post-MVP (foundational principle, implemented through other components)

References