ADR-014: Ciphered Reasoning Loop

Status: Accepted
Version: 1.0
Date: 2025-12-21
Supersedes: N/A
Related ADRs: ADR-003, ADR-013
Related PRDs: N/A


Context

SEA-Forge™ requires a formal model for how AI agents can perform multi-step reasoning while preserving semantic correctness. The core problem is that LLMs operate in “semantic continuity space” (preserving plausibility) rather than “invariant space” (preserving truth). This leads to reasoning steps that are locally plausible but globally wrong, semantic drift that goes undetected, and explanations that are narrative rather than proof-carrying.

We need an architectural pattern that treats reasoning as invariant-preserving transformation across representations, not as token generation.

Decision

Adopt the Ciphered Reasoning Loop (CRL) as the canonical pattern for governed reasoning in SEA-Forge™.

Definition

A Ciphered Reasoning Loop is a reversible, invariant-preserving transcompiler between representations:

Encode → Validate → Operate → Re-validate → Decode → Round-trip Check → Commit

Core Components

Representations (the “cipher alphabets”)

A representation is any encoding the loop can move between, for example natural language, mathematical notation, code, or Knowledge Graph structure. Each is an alternative alphabet for the same underlying meaning.

Invariant Contract

An invariant is a predicate that must remain true across allowed transforms:

| Type | Description | Example |
|------|-------------|---------|
| I₁ Identity | Symbols/variables refer consistently | α-equivalence allowed, capture not |
| I₂ Domain | Side-conditions preserved | Cancel f(x) → obligation f(x) ≠ 0 |
| I₃ Equivalence | Transformations preserve semantics | Equality, implication |
| I₄ Dependency | Each step cites prerequisites | No “magic” leaps |
| I₅ Reversibility Budget | What is allowed to be lossy | Rhetorical style, not structure |
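
To make the contract concrete, here is a minimal sketch of an invariant as a checkable predicate, assuming a Python codebase; the Invariant class and the I₂ example are illustrative, not part of SEA-Forge.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass(frozen=True)
class Invariant:
    """A named predicate that must remain true across allowed transforms."""
    name: str                      # e.g. "I2-domain"
    check: Callable[[Any], bool]   # predicate over a representation state

# I2 (Domain) example: cancelling f(x) is only valid if the transform
# recorded the side-condition f(x) != 0 as an explicit obligation.
def nonzero_obligation(state: dict) -> bool:
    return "f(x) != 0" in state.get("obligations", set())

I2_DOMAIN = Invariant(name="I2-domain", check=nonzero_obligation)

# A state that cancelled f(x) and discharged its obligation passes:
assert I2_DOMAIN.check({"expr": "g(x)", "obligations": {"f(x) != 0"}})
```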

Cipher Transform

A transform is an encoder/decoder pair with explicit validation:
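
As a sketch (the class and field names are assumptions, not the SEA-Forge API), a transform bundles the maps E_{X→Y} and D_{Y→X} with the validators V_I and Equiv_I:

```python
from dataclasses import dataclass
from typing import Callable, Generic, TypeVar

X = TypeVar("X")  # source representation
Y = TypeVar("Y")  # target representation

@dataclass(frozen=True)
class CipherTransform(Generic[X, Y]):
    encode: Callable[[X], Y]        # E_{X→Y}: re-encode X in the target alphabet
    decode: Callable[[Y], X]        # D_{Y→X}: map back to the source alphabet
    validate: Callable[[Y], bool]   # V_I: do the invariants hold in Y?
    equiv: Callable[[X, X], bool]   # Equiv_I: equivalence over promised invariants
```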

The Loop Execution

  1. Encode: choose a target representation. Y := E_{X→Y}(X)
  2. Validate: enforce the invariants. assert V_I(Y)
  3. Operate: do work in that representation (LLM step, tool, agent, rewrite). Y' := Op(Y)
  4. Re-validate: assert V_I(Y')
  5. Decode (round trip): X' := D_{Y→X}(Y')
  6. Round-trip check: assert Equiv_I(X, X') over the promised invariants
  7. Commit: store provenance + deltas in the Knowledge Graph
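
A minimal sketch of one pass through the loop, reusing the hypothetical CipherTransform above; operate stands in for any LLM step, tool call, agent action, or rewrite, and commit for the Knowledge Graph write:

```python
def ciphered_reasoning_step(x, t, operate, commit):
    y = t.encode(x)                                     # 1. Encode
    assert t.validate(y), "V_I failed after encode"     # 2. Validate
    y2 = operate(y)                                     # 3. Operate
    assert t.validate(y2), "V_I failed after operate"   # 4. Re-validate
    x2 = t.decode(y2)                                   # 5. Decode
    # 6. Round-trip check: x2 need not equal x, but must be equivalent
    #    over the promised invariants.
    assert t.equiv(x, x2), "round-trip check failed"
    commit(x, x2)                                       # 7. Commit provenance + deltas
    return x2
```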

Key Insight

The “cipher key” is the invariant set: invariants are governance policies that force meaning to survive representation changes.
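
Because the key is the invariant set, it can be declared as inspectable data; a hypothetical declaration (keys and wording mirror the contract table above) might look like:

```python
# Hypothetical: the "cipher key" declared as governance data.
CIPHER_KEY = {
    "I1-identity":    "symbols and variables refer consistently",
    "I2-domain":      "side-conditions are recorded as obligations",
    "I3-equivalence": "transforms preserve equality/implication",
    "I4-dependency":  "every step cites its prerequisites",
    "I5-budget":      "only rhetorical style may be lossy",
}
```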

Rationale

What This Changes

| Dimension | Before CRL | With CRL |
|-----------|------------|----------|
| Meaning | Implicit in text | Explicit, externalized |
| Correctness | Emergent | Enforced by invariants |
| Reasoning | Linear (token sequence) | Transformational (governed) |
| Explanation | Narrative | Proof-carrying |
| Agents | Autonomous guessers | Interpreters of state |
| Drift | Undetected | Detectable + localizable |

Core Implication: Reasoning Becomes Reversible

With invariants declared, every representation change must survive the round trip:

Encode → Decode → Validate, i.e. Equiv_I(X, D_{Y→X}(E_{X→Y}(X))) must hold over the promised invariants.

This gives localizable failure instead of inexplicable drift: rather than asking “Why did the model hallucinate?”, we ask “Which invariant failed, and at which step?”
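
As an illustrative sketch (assuming the hypothetical CipherTransform interface above), the round-trip law can be checked property-style over sample inputs, so a violation names the offending input rather than surfacing as vague drift:

```python
def check_round_trip(t, samples):
    """Assert Equiv_I(x, D(E(x))) for every sample x."""
    for x in samples:
        x2 = t.decode(t.encode(x))
        if not t.equiv(x, x2):
            raise AssertionError(f"round-trip law violated for input {x!r}")
```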

Why This Is Not Just Formal Methods

Existing approaches are partial ancestors: formal verification enforces invariants, transcompilers perform validated transformations between representations, and chain-of-thought externalizes intermediate steps. But none combine externalized invariant contracts, reversible transforms across heterogeneous representations, and provenance-carrying commits in a single loop.

The novelty is the unified invariant-first worldview applied across math, language, agents, and architecture.

Alternatives Considered

Prompt Engineering for Correctness

Rejected — LLMs cannot reliably track invariant obligations across steps through prompting alone.

Pure Formal Verification

Rejected — Too rigid for human-AI co-creation; does not support natural language as a representation.

Chain-of-Thought Without External State

Rejected — CoT collapses because invariants are not externalized; each token is locally coherent but not globally obligated.

Constraints

Quality Attributes

Bounded Contexts Impacted

Consequences

Positive

- Drift becomes detectable and localizable rather than silent.
- Explanations become proof-carrying rather than narrative.
- Every reasoning step carries provenance in the Knowledge Graph.

Negative

- Validation and round-trip checks add implementation and runtime overhead.
- Over-modeled invariants can kill flexibility (see Implementation Guidance).

Additional Notes

Implementation Guidance

Biggest Risk: Over-Modeling

If you try to formalize every representation, encode every nuance as an invariant, and validate every micro-step, you’ll kill flexibility.

Winning posture: invariants should be few, explicit, and testable, not philosophical.
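
As a sketch of what “few, explicit, and testable” means in practice, an invariant can be an ordinary predicate with an ordinary unit test; the names here are illustrative:

```python
def i4_dependency(step: dict) -> bool:
    """I4: every reasoning step must cite at least one prerequisite."""
    return len(step.get("cites", [])) > 0

def test_i4_rejects_magic_leaps():
    assert i4_dependency({"claim": "x > 0", "cites": ["step-3"]})
    assert not i4_dependency({"claim": "therefore QED", "cites": []})
```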

Scope: Post-MVP (advanced reasoning pattern).