ADR-016: Externalized Invariant Substrate
Status: Accepted
Version: 1.0
Date: 2025-12-21
Supersedes: N/A
Related ADRs: ADR-003, ADR-013, ADR-014
Related PRDs: N/A
Context
The most significant gap in LLM reasoning is not arithmetic or symbolic math per se. It is this:
LLMs lack a stable, externalized state transition system that preserves invariants across multi-step transformations.
LLMs can:
- recognize transformations
- describe transformations
- imitate transformations
But they cannot guarantee preservation of invariants across steps without external help.
This is the fundamental reason why multi-step math fails, chain-of-thought collapses, and agents contradict earlier decisions.
Decision
SEA-Forge™ is explicitly designed as an Externalized Invariant Substrate — the stable semantic layer that LLMs interpret rather than contain.
Core Insight
Reasoning must be externalized into a governed semantic substrate that LLMs interpret, not contain.
This is why SEA™ is not “LLM architecture.” It is architecture for cognition itself.
The Cipher Clarification
The attention mechanism can function as a “cipher” for encoding and decoding logic or semantics, but:
Ciphers require a stable substrate.
Attention space can encode logic or semantics — but it cannot preserve them reliably over time because attention is:
- Contextual (re-evaluated every token)
- Lossy (not all information survives)
- Not referentially stable (identities can shift)
Therefore, the cipher works for interpretation, not for execution.
This distinction is critical:
- LLMs are interpreters of externalized reasoning
- SEA™ provides the substrate for that reasoning
- Invariants live in the substrate, not in the token stream
What This Changes
| Dimension | Without Substrate | With SEA™ Substrate |
| --- | --- | --- |
| Math failures | Mysterious | Invariant violations (diagnosable) |
| Chain-of-thought | Collapses at depth | Preserved via external state |
| Agent decisions | Contradictory | Consistent via governance |
| Hallucination | Random noise | Detectable deviation |
| Corrections | Hope-based | Policy-driven |
Invariant Preservation Requirements
- Externalized State: Reasoning state lives in KGS, not in context windows
- Explicit Invariants: Declared via SBVR rules, CALM constraints, Policy primitives
- Validation Gates: Every transformation validates preserved invariants
- Provenance: Every step records what invariants were checked
- Back-projection: Results must validate against source constraints
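A minimal sketch of how these requirements could fit together, in plain Python. The names (Invariant, TransformationStep, apply_step) and the dict-based state are illustrative assumptions, not the SEA-Forge or KGS API:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Invariant:
    """An explicitly declared invariant (Requirement 2)."""
    name: str
    check: Callable[[dict, dict], bool]  # (state_before, state_after) -> preserved?

@dataclass
class TransformationStep:
    """Externalized reasoning state (Requirement 1): lives outside the context window."""
    description: str
    before: dict
    after: dict
    invariants_checked: list = field(default_factory=list)  # provenance (Requirement 4)

def apply_step(state: dict, transform: Callable[[dict], dict],
               description: str, invariants: list) -> TransformationStep:
    """Validation gate (Requirement 3): a transformation is committed only
    if every declared invariant is preserved."""
    candidate = transform(state)
    step = TransformationStep(description, state, candidate)
    for inv in invariants:
        preserved = inv.check(state, candidate)
        step.invariants_checked.append((inv.name, preserved))  # record what was checked
        if not preserved:
            raise ValueError(f"Invariant violated: {inv.name} during '{description}'")
    return step  # persisted to the knowledge graph, not held in the token stream

# Usage: a declared invariant that every step must preserve.
non_negative = Invariant("balance_non_negative", lambda _, after: after["balance"] >= 0)
step = apply_step({"balance": 10}, lambda s: {"balance": s["balance"] - 3},
                  "debit 3", [non_negative])
print(step.invariants_checked)  # [('balance_non_negative', True)]
```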
Example: Multi-Step Math
Standard LLM approach (fails):
```
Step 1: Simplify sqrt(x² - 2x + 1)
Step 2: = sqrt((x-1)²)
Step 3: = x - 1          ← WRONG! Should be |x - 1|
```
SEA™ substrate approach (succeeds):
```
Step 1: Simplify sqrt(x² - 2x + 1)
        → Emit transformation candidate
Step 2: = sqrt((x-1)²)
        → Invariant check: sqrt(u²) = |u| over the reals
Step 3: = |x - 1|
        → Validation: invariants preserved ✓
```
The difference: invariants are externalized and enforced, not hoped-for.
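As a concrete illustration of that enforcement (a hedged sketch in plain Python; preserves_value and the sample-point check are illustrative, not the SEA-Forge validation mechanism), the naive Step 3 is rejected because it fails to validate against the source expression:

```python
import math

def preserves_value(original, candidate, samples):
    """Validation gate: a rewrite must agree with the source expression
    on every sample point, or the step is rejected."""
    return all(math.isclose(original(x), candidate(x)) for x in samples)

original = lambda x: math.sqrt(x**2 - 2*x + 1)  # sqrt(x² - 2x + 1)
naive    = lambda x: x - 1                      # Step 3 from the failing trace
guarded  = lambda x: abs(x - 1)                 # Step 3 with the invariant enforced

samples = [-2.0, 0.0, 0.5, 1.0, 3.0]
print(preserves_value(original, naive, samples))    # False -> step rejected
print(preserves_value(original, guarded, samples))  # True  -> step accepted
```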
Rationale
Why Not Just Better Prompting?
Prompts cannot guarantee invariant preservation because:
- Invariants are not explicit in the token stream
- Each token prediction is locally coherent but not globally bound
- Context windows lose information at scale
- No mechanism for back-projection validation
Why Not Formal Verification Alone?
Static proof systems are too rigid for:
- Human-AI co-creation (natural language as representation)
- Multi-representation reasoning (formal ↔ graph ↔ narrative)
- Continuous improvement (evolution, not static proofs)
Why SEA™ Works
SEA™ provides:
- Semantic Core — Meaning externalized, versioned, validated
- Knowledge Graph — State persisted with provenance
- Governance — Invariants as enforceable policies
- Isomorphic Projection — Multiple representations, single truth
- Feedback Loop — Runtime signals update the model, not patches
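To make "multiple representations, single truth" concrete, here is a hedged sketch in plain Python (the record fields and projection functions are illustrative assumptions, not the actual Isomorphic Projection mechanism): every representation is derived from one governed semantic record, so projections cannot drift apart independently.

```python
# One governed semantic record (the single source of truth).
fact = {"entity": "Order", "attribute": "total", "rule": "total >= 0"}

def to_rule_text(f: dict) -> str:
    """Narrative / rule projection derived from the same record."""
    return f"It is obligatory that each {f['entity']} satisfies: {f['rule']}."

def to_graph(f: dict) -> dict:
    """Graph projection derived from the same record."""
    return {"node": f["entity"], "property": f["attribute"], "constraint": f["rule"]}

print(to_rule_text(fact))
print(to_graph(fact))
```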
Alternatives Considered
Larger Context Windows
Rejected — More tokens do not create invariant preservation; they only delay the collapse.
Chain-of-Thought Prompting
Rejected — CoT is locally coherent but not globally bound; invariants are never externalized.
Multi-Agent Without Substrate
Rejected — Agents without shared semantic state create contradictory reasoning.
Constraints
- MUST externalize reasoning state into Knowledge Graph, not context windows
- MUST declare invariants explicitly via SBVR rules, CALM constraints, or Policy primitives
- MUST validate invariants at every transformation step
- MUST record provenance of invariant checks
- MUST support back-projection validation against source constraints
- MUST NOT rely on context windows alone for invariant preservation
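As a sketch of what an explicitly declared invariant could look like (the field names are assumptions for illustration; the normative SBVR, CALM, and Policy schemas are defined outside this ADR):

```python
# Illustrative only: structure and field names are assumptions, not the
# normative SBVR / CALM / Policy schema.
INVARIANT_SQRT_OF_SQUARE = {
    "id": "INV-MATH-001",
    "statement": "sqrt(u**2) rewrites to abs(u) over the reals",
    "declared_via": "policy-primitive",   # alternatively an SBVR rule or CALM constraint
    "scope": "algebraic-simplification",
    "validation": "back-projection",      # the result must validate against the source expression
    "provenance_required": True,          # every check is recorded, per the constraints above
}
```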
Quality Attributes
- Interpretability: LLMs act as semantic interpreters of externalized reasoning
- Diagnosability: hallucination surfaces as a detectable invariant violation, not a mystery
- Safety: agents are governed without neutering their capability
- Reliability: multi-step reasoning preserves invariants at depth
- Correctability: corrections are localized, actionable, and policy-driven
Bounded Contexts Impacted
- Semantic Core
- Knowledge Layer
- AI Agent Runtime
- Invariant Regime
- Governance Layer
Consequences
Positive
- LLMs become semantic interpreters, not reasoners
- Hallucination becomes an invariant violation, not a mystery
- Agent safety without neutering capability
- Multi-step reasoning becomes reliable
- Corrections become localized and actionable
Negative
- Requires explicit invariant declaration upfront
- Needs tooling for externalized state management
- LLMs cannot operate fully autonomously (by design)
Additional Notes
The SEA™ Difference
SEA™ is not code describing systems.
SEA™ is the system describing code.
And:
SEA™ is not LLM architecture.
SEA™ is architecture for cognition itself.
⏳ Post-MVP (foundational principle, implemented through other components)
References
- REF-012: Invariant Regime Specification
- SDS-050: Semantic Identity Provenance