- Status: Accepted
- Version: 1.0
- Date: 2025-12-21
- Supersedes: N/A
- Related ADRs: ADR-003, ADR-013
- Related PRDs: N/A
SEA-Forge™ requires a formal model for how AI agents can perform multi-step reasoning while preserving semantic correctness. The core problem is that LLMs operate in “semantic continuity space” (preserving plausibility) rather than “invariant space” (preserving truth). This leads to reasoning that is locally plausible but globally unsound: hallucinated steps, silent drift, and failures that cannot be localized.
We need an architectural pattern that treats reasoning as invariant-preserving transformation across representations, not as token generation.
Adopt the Ciphered Reasoning Loop (CRL) as the canonical pattern for governed reasoning in SEA-Forge™.
A Ciphered Reasoning Loop is a reversible, invariant-preserving transcompiler between representations:
```
Encode → Validate → Operate → Re-validate → Decode → Round-trip Check → Commit
```
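A minimal sketch of this loop as runnable Python, assuming nothing about SEA-Forge™ internals; `Transform`, `ciphered_reasoning_loop`, and all field names are hypothetical illustrations of the stages above, not an actual API.

```python
from dataclasses import dataclass
from typing import Callable, Generic, TypeVar

X = TypeVar("X")  # source representation (e.g. natural language)
Y = TypeVar("Y")  # target representation (e.g. symbolic form)

@dataclass
class Transform(Generic[X, Y]):
    """An encoder/decoder pair with the validators that guard it."""
    encode: Callable[[X], Y]        # E_{X→Y}
    decode: Callable[[Y], X]        # D_{Y→X}
    validate: Callable[[Y], bool]   # V_I: invariant check on Y
    equiv: Callable[[X, X], bool]   # Equiv_I: equivalence over promised invariants

def ciphered_reasoning_loop(x: X, t: Transform, op: Callable[[Y], Y]) -> X:
    y = t.encode(x)                                                    # Encode
    assert t.validate(y), "invariant broken at encode"                 # Validate
    y2 = op(y)                                                         # Operate
    assert t.validate(y2), "invariant broken at operate"               # Re-validate
    x2 = t.decode(y2)                                                  # Decode
    assert t.equiv(x, x2), "round-trip violated a promised invariant"  # Round-trip check
    return x2                                                          # Commit
```

Here Commit is simply returning `x2`; a real system would presumably also persist the step and its discharged obligations.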
An invariant is a predicate that must remain true across allowed transforms:
| Type | Description | Example |
|---|---|---|
| I₁ Identity | Symbols/variables refer consistently | α-equivalence allowed, capture not |
| I₂ Domain | Side-conditions preserved | Cancel f(x) → obligation f(x) ≠ 0 |
| I₃ Equivalence | Transformations preserve semantics | Equality, implication |
| I₄ Dependency | Each step cites prerequisites | No “magic” leaps |
| I₅ Reversibility Budget | What is allowed to be lossy | Rhetoric style, not structure |
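To keep the table concrete, here is a hedged sketch of how a few of these invariant classes might be expressed as testable predicates over a reasoning step; the `step` schema and all field names are invented for illustration, not a SEA-Forge™ schema.

```python
def i1_identity(step) -> bool:
    # I₁: every symbol the step mentions is bound; no accidental capture.
    return set(step["uses"]) <= set(step["bound"])

def i2_domain(step) -> bool:
    # I₂: every side-condition the step incurs (e.g. cancelling f(x)
    # incurs the obligation f(x) ≠ 0) is explicitly discharged.
    return set(step["obligations"]) <= set(step["discharged"])

def i4_dependency(step, committed: set) -> bool:
    # I₄: every cited prerequisite is an already-committed step; no magic leaps.
    return set(step["cites"]) <= committed

step = {
    "uses": {"f", "x"},
    "bound": {"f", "g", "x"},
    "obligations": {"f(x) != 0"},   # created by cancelling f(x)
    "discharged": {"f(x) != 0"},
    "cites": {"s1"},
}
assert i1_identity(step) and i2_domain(step) and i4_dependency(step, {"s1", "s2"})
```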
A transform is an encoder/decoder pair with explicit validation:
- `E_{X→Y}` — Encoder from representation X to Y
- `D_{Y→X}` — Decoder from representation Y to X
- `V_I` — Validator that checks invariants

```
Y  := E_{X→Y}(X)
assert V_I(Y)
Y' := Op(Y)
assert V_I(Y')
X' := D_{Y→X}(Y')
assert Equiv_I(X, X')   # over promised invariants
```

The “cipher key” is the invariant set.
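A toy instantiation of the pseudocode above, with integer-term lists standing in for real representations; every name here is invented for illustration. The promised invariant is the sum of the terms, and term order is deliberately spent from the I₅ reversibility budget.

```python
from collections import Counter

def encode(xs):                  # E_{X→Y}: list of terms → multiset
    return Counter(xs)

def decode(ys):                  # D_{Y→X}: multiset → sorted term list
    return sorted(ys.elements())

def v_i(ys):                     # V_I: every term must be an integer
    return all(isinstance(t, int) for t in ys)

def equiv_i(a, b):               # Equiv_I: the promised invariant is the sum
    return sum(a) == sum(b)

def op(ys):                      # Operate: cancel matching +n / -n pairs
    out = ys.copy()
    for n in [k for k in out if k > 0]:
        c = min(out[n], out.get(-n, 0))
        out[n] -= c
        out[-n] -= c
    return +out                  # unary + drops zeroed counts

x = [3, -3, 5, 2, -2, 5]
y = encode(x);     assert v_i(y)            # Encode, Validate
y2 = op(y);        assert v_i(y2)           # Operate, Re-validate
x2 = decode(y2);   assert equiv_i(x, x2)    # Decode, Round-trip check
print(x2)   # [5, 5]: the sum (10) survives; order is lost, by design
```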
Invariants are literally governance policies that force meaning to survive representation changes.
| Dimension | Before CRL | With CRL |
|---|---|---|
| Meaning | Implicit in text | Explicit, externalized |
| Correctness | Emergent | Enforced by invariants |
| Reasoning | Linear (token sequence) | Transformational (governed) |
| Explanation | Narrative | Proof-carrying |
| Agents | Autonomous guessers | Interpreters of state |
| Drift | Undetected | Detectable + localizable |
With invariants declared, every transformation must satisfy the round-trip property:

Encode → Decode → Validate

This gives failure localization: instead of asking “Why did the model hallucinate?”, we ask “Which invariant failed, and at which step?”
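A hedged sketch of what that localization could look like in practice: each stage runs its validators by name, so a failure surfaces as a (stage, invariant) pair rather than an unexplained wrong answer. The stage names, invariant names, and string-based checks are all invented for illustration.

```python
def run_stage(trace, stage, value, validators):
    # Run every named invariant check for this stage; record the outcome.
    for name, check in validators.items():
        if not check(value):
            trace.append((stage, name, "FAILED"))
            raise ValueError(f"invariant {name} failed at stage {stage!r}")
    trace.append((stage, "ok", "all invariants hold"))
    return value

validators = {
    "I2_domain":      lambda eq: "/ 0" not in eq,     # no division by a literal zero
    "I3_equivalence": lambda eq: eq.count("=") == 1,  # still a single equation
}

trace = []
eq = run_stage(trace, "encode", "2*x = 6", validators)
try:
    # A bad Operate step that mangles the equation's structure:
    run_stage(trace, "operate", eq + " = 6", validators)
except ValueError as e:
    print(e)      # invariant I3_equivalence failed at stage 'operate'
print(trace)      # the failure is localized to a stage and an invariant
```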
Existing approaches are partial ancestors: each supplies one ingredient. But none combine reversible encoding between representations, externalized invariants, and validation at every step. The novelty is the unified invariant-first worldview applied across math, language, agents, and architecture.
Prompt-only invariant tracking — Rejected: LLMs cannot reliably track invariant obligations across steps through prompting alone.
Purely formal systems — Rejected: too rigid for human-AI co-creation; they do not support natural language as a representation.
Chain-of-thought prompting — Rejected: CoT collapses because invariants are not externalized; each token is locally coherent but not globally obligated.
If you try to formalize everything, turning every preference into a mandatory check, you’ll kill flexibility. The winning posture: invariants should be few, explicit, and testable, not philosophical.
⏳ Post-MVP (advanced reasoning pattern)