Shared Epic
User Journey
The Shared bounded context provides cross-cutting infrastructure for reliable message delivery across all bounded contexts. It implements the outbox pattern, inbox processing, and dead letter queue handling to ensure event consistency, idempotency guarantees, and graceful failure recovery throughout the SEA™ platform.
Jobs to be Done & EARS Requirements
Job: Generate Automated Documentation
User Story: As a Developer, I want the system to automatically generate and update project documentation (e.g., README.md, API docs) based on code changes and architectural definitions, so that documentation always reflects the current state of the project.
EARS Requirement:
- WHEN a project’s codebase or specifications are updated, the shared context shall:
  - Trigger context-aware documentation analysis
  - Extract semantic context from multiple sources:
    - Code structure and patterns
    - SEA-DSL specifications
    - Architecture definitions
    - Knowledge Graph relationships
  - Generate comprehensive, production-ready documentation including:
    - README with project overview and setup
    - API documentation with endpoints and schemas
    - Architecture diagrams showing component relationships
    - Change logs tracking evolution
  - Ensure documentation reflects current state automatically
Job: Provide Context-Aware Documentation Analysis
User Story: As the Documentation Orchestrator, I want to receive rich contextual information from the Project Context Analyzer about the project’s architecture, business domain, technical stack, and dependencies, so that I can generate highly relevant and accurate documentation.
EARS Requirement:
- WHEN analyzing a project for documentation generation, the shared context shall:
  - Extract semantic context from multiple sources:
    - Code structure (files, classes, functions, modules)
    - SEA-DSL specifications (entities, flows, policies)
    - Architecture definitions (bounded contexts, integration points)
    - Knowledge Graph (business concepts and relationships)
  - Build holistic project context including:
    - Technical stack identification
    - Architectural patterns detection
    - Dependency mapping and analysis
    - Business domain understanding
  - Generate comprehensive documentation aligned with enterprise semantics
  - Ensure documentation is accurate, relevant, and complete
Job: Publish Reliable Outbox Events
User Story: As a bounded context, I want to publish domain events reliably, so that no events are lost due to transient failures.
EARS Requirement:
- WHILE processing domain operations, WHEN a domain event is emitted, the shared context shall:
  - Write event to outbox table within same transaction as domain state change
  - Ensure atomic write: both state change and outbox entry succeed or fail together
  - Store event metadata: event type, payload, timestamp, correlation ID
  - Mark outbox entry as “pending” for background processing
  - Return transaction commit confirmation
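The atomic write above can be sketched as follows. This is a minimal illustration using Python’s `sqlite3` in place of the platform’s PostgreSQL; the `orders` table, column names, and the `publish_with_outbox` helper are all hypothetical, chosen only to show the single-transaction guarantee:

```python
import json
import sqlite3
import uuid
from datetime import datetime, timezone

def publish_with_outbox(conn, entity_update, event_type, payload, correlation_id):
    """Apply a domain state change and enqueue its event in ONE transaction."""
    event_id = str(uuid.uuid4())
    with conn:  # commits on success; rolls back the WHOLE transaction on error
        conn.execute(*entity_update)
        conn.execute(
            "INSERT INTO outbox (id, event_type, payload, created_at,"
            " correlation_id, status) VALUES (?, ?, ?, ?, ?, 'pending')",
            (event_id, event_type, json.dumps(payload),
             datetime.now(timezone.utc).isoformat(), correlation_id),
        )
    return event_id

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id TEXT PRIMARY KEY, status TEXT)")
conn.execute("CREATE TABLE outbox (id TEXT PRIMARY KEY, event_type TEXT,"
             " payload TEXT, created_at TEXT, correlation_id TEXT, status TEXT)")
conn.execute("INSERT INTO orders VALUES ('o-1', 'new')")

publish_with_outbox(
    conn,
    ("UPDATE orders SET status = ? WHERE id = ?", ("confirmed", "o-1")),
    "OrderConfirmed", {"order_id": "o-1"}, "corr-1",
)
```

If either statement inside the `with conn:` block fails, both are rolled back, so the outbox can never disagree with the domain state.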
Job: Relay Outbox Events
User Story: As a platform operator, I want a background publisher to relay outbox events to the message broker, so that events are delivered reliably.
EARS Requirement:
- WHILE running the outbox relay, WHEN outbox rows are pending, the shared context shall:
  - Poll outbox rows with status “pending” using a lease/lock to avoid duplicate workers
  - Publish events to NATS/JetStream with idempotency/deduplication headers
  - Update outbox status to “published” on success or “failed” with last_error on failure
  - Retry failed events with exponential backoff and max_attempts
  - Route permanently failed events to a DLQ or alerting channel
  - Record attempt_count, last_attempt_at, and last_error metadata
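One polling pass of the relay can be sketched as below. Outbox rows are in-memory dicts standing in for SQL rows, the broker is a stub rather than JetStream, and `MAX_ATTEMPTS`/`BASE_DELAY` are hypothetical tuning values — the spec only requires exponential backoff, a max_attempts cap, and the recorded metadata:

```python
import time

MAX_ATTEMPTS = 5        # hypothetical cap
BASE_DELAY = 0.01       # hypothetical base backoff, in seconds

def relay_once(rows, publish, now=time.monotonic):
    """One polling pass: publish due rows, record attempts, back off on failure."""
    for row in rows:
        if row["status"] not in ("pending", "failed"):
            continue
        if row["attempt_count"] >= MAX_ATTEMPTS:
            row["status"] = "dead-lettered"                  # permanently failed
            continue
        backoff = BASE_DELAY * (2 ** row["attempt_count"])   # exponential backoff
        if row["last_attempt_at"] is not None and now() - row["last_attempt_at"] < backoff:
            continue                                         # not yet due for a retry
        row["attempt_count"] += 1
        row["last_attempt_at"] = now()
        try:
            publish(row)   # real relay: JetStream publish with a dedup message-ID header
            row["status"], row["last_error"] = "published", None
        except Exception as exc:
            row["status"], row["last_error"] = "failed", str(exc)

# Broker stub that fails twice then accepts; a fake clock keeps the demo deterministic.
calls = {"n": 0}
def flaky_publish(row):
    calls["n"] += 1
    if calls["n"] <= 2:
        raise ConnectionError("broker unavailable")

clock = {"t": 0.0}
rows = [{"status": "pending", "attempt_count": 0, "last_attempt_at": None, "last_error": None}]
while rows[0]["status"] not in ("published", "dead-lettered"):
    relay_once(rows, flaky_publish, now=lambda: clock["t"])
    clock["t"] += BASE_DELAY * 8    # advance the fake clock past any backoff window
```

After two simulated broker outages the third attempt succeeds, leaving `attempt_count` and `last_error` as the spec’s required audit metadata.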
Job: Process Inbox Messages
User Story: As a bounded context, I want to receive and process events from other contexts reliably, so that I can maintain consistency across the system.
EARS Requirement:
- WHILE consuming messages from the inbox, WHEN a message is received, the shared context shall:
  - Deliver message to target bounded context’s inbox
  - Validate message format and required fields
  - Acquire a processing lease/lock to prevent duplicate workers
  - Check for duplicate processing using message ID (idempotency):
    - If already processed, acknowledge and skip (idempotent operation)
    - If new, invoke message handler with message payload
  - Track processing status (pending, processing, completed, failed)
  - Acknowledge message only after successful processing
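The idempotent receive path can be sketched as follows. Here a plain dict stands in for the PostgreSQL inbox table, the lease/lock step is omitted, and `process_inbox_message` and its return values are hypothetical names for illustration:

```python
def process_inbox_message(inbox, message, handler):
    """Handle one delivered message; duplicates (same ID) are acknowledged and skipped."""
    entry = inbox.get(message["id"])
    if entry is not None and entry["status"] == "completed":
        return "ack-duplicate"                  # idempotent: already processed, skip
    inbox[message["id"]] = entry = {"status": "processing", "payload": message["payload"]}
    try:
        handler(message["payload"])
        entry["status"] = "completed"
        return "ack"                            # acknowledge only after success
    except Exception:
        entry["status"] = "failed"
        return "nack"                           # no ack, so the broker redelivers

inbox, seen = {}, []
msg = {"id": "m-1", "payload": {"kind": "OrderConfirmed"}}
first = process_inbox_message(inbox, msg, seen.append)
second = process_inbox_message(inbox, msg, seen.append)   # redelivery of the same message
```

The second delivery is acknowledged without re-invoking the handler, which is exactly the “same ID is processed at most once” guarantee.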
Job: Handle Failed Messages
User Story: As a system operator, I want failed messages to be routed to dead letter queue, so that they can be analyzed and replayed without blocking the system.
EARS Requirement:
- WHEN a message fails processing after retry attempts, the shared context shall:
  - Route failed message to dead letter queue via RouteToDeadLetter command
  - Store failure context:
    - Original message payload
    - Error details and stack trace
    - Retry count and timestamps
    - Source context and handler information
  - Mark original inbox message as “routed-to-dlq”
  - Emit alert for monitoring systems
  - Allow system to continue processing other messages
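The routing step can be sketched as below. The DLQ is an in-memory list, the alert emission is omitted, and the field names in the captured failure context are illustrative beyond those the requirement lists:

```python
import traceback
from datetime import datetime, timezone

def route_to_dead_letter(dlq, inbox_entry, error, source_context, handler_name):
    """RouteToDeadLetter: capture the failure context and unblock the inbox."""
    dlq.append({
        "payload": inbox_entry["payload"],                 # original message payload
        "error": repr(error),                              # error details
        "stack_trace": "".join(traceback.format_exception_only(type(error), error)),
        "retry_count": inbox_entry["retry_count"],
        "failed_at": datetime.now(timezone.utc).isoformat(),
        "source_context": source_context,                  # where the message came from
        "handler": handler_name,                           # which handler failed
        "status": "awaiting-replay",
    })
    inbox_entry["status"] = "routed-to-dlq"   # message no longer blocks the inbox

dlq = []
entry = {"payload": {"kind": "OrderConfirmed"}, "retry_count": 3, "status": "failed"}
route_to_dead_letter(dlq, entry, ValueError("schema mismatch"),
                     "ordering", "on_order_confirmed")
```

Because the failed message is moved aside rather than re-queued, the inbox worker can immediately continue with the next message.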
Job: Replay Dead Letter Messages
User Story: As a system operator, I want to replay messages from dead letter queue after fixing issues, so that I can recover from transient failures.
EARS Requirement:
- WHEN a ReplayDeadLetter command is received with message ID or “all” qualifier, the shared context shall:
  - Retrieve message from dead letter queue
  - Reset message to processing state
  - Re-submit message to target inbox for reprocessing
  - Track replay attempts and outcomes
  - Support selective replay (specific message) or bulk replay (all messages)
  - Update dead letter queue status on successful replay
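Selective and bulk replay can be sketched together, again against an in-memory DLQ; the `selector` parameter and status values are hypothetical names for the command’s “message ID or all” qualifier:

```python
def replay_dead_letters(dlq, submit_to_inbox, selector="all"):
    """ReplayDeadLetter: re-submit one message (by ID) or every eligible message."""
    replayed = 0
    for entry in dlq:
        if entry["status"] != "awaiting-replay":
            continue
        if selector != "all" and entry["id"] != selector:
            continue                                 # selective replay: skip others
        entry["replay_attempts"] = entry.get("replay_attempts", 0) + 1
        submit_to_inbox(entry["payload"])            # back to the target inbox
        entry["status"] = "replayed"                 # update DLQ status on success
        replayed += 1
    return replayed

resubmitted = []
dlq = [
    {"id": "m-1", "payload": {"n": 1}, "status": "awaiting-replay"},
    {"id": "m-2", "payload": {"n": 2}, "status": "awaiting-replay"},
]
count_one = replay_dead_letters(dlq, resubmitted.append, selector="m-2")  # selective
count_all = replay_dead_letters(dlq, resubmitted.append)                  # bulk: the rest
```

Replayed entries keep a `replay_attempts` counter, satisfying the “track replay attempts and outcomes” clause.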
Domain Entities Summary
Root Aggregates
- OutboxEvent: Represents a domain event awaiting publication with event type, payload, timestamp, correlation ID, and processing status
- InboxMessage: Represents a received message with message ID, source context, payload, processing status, and retry count
- DeadLetterMessage: Represents a failed message with original payload, error details, failure context, and replay eligibility
- DocumentationJob: Manages documentation generation lifecycle with source context, output format, and artifact references
Value Objects
- MessageMetadata: Contains correlation ID, causation ID, timestamp for distributed tracing
- FailureContext: Captures error details, stack trace, retry attempts for troubleshooting
- ProjectContext: Aggregates code structure, specs, architecture, and dependencies for documentation
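These value objects could be modeled as immutable (frozen) dataclasses; the field types shown are illustrative choices beyond the names listed above:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class MessageMetadata:
    """Tracing metadata attached to every message."""
    correlation_id: str     # groups all messages caused by one originating request
    causation_id: str       # the message that directly caused this one
    timestamp: datetime

@dataclass(frozen=True)
class FailureContext:
    """Troubleshooting detail captured when processing fails."""
    error: str
    stack_trace: str
    retry_attempts: int

@dataclass(frozen=True)
class ProjectContext:
    """Aggregated inputs for documentation generation."""
    code_structure: tuple   # files, classes, functions, modules
    specs: tuple            # SEA-DSL entities, flows, policies
    architecture: tuple     # bounded contexts, integration points
    dependencies: tuple

meta = MessageMetadata("corr-1", "cause-1", datetime.now(timezone.utc))
```

`frozen=True` gives the value-object property: instances compare by value and cannot be mutated after construction.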
Policy Rules
- AtomicOutboxWrite: Outbox entries must be written within same transaction as domain state change
- IdempotentProcessing: Messages with same ID must not be processed more than once
- OutboxPatternEnforcement: All cross-context communication must use outbox pattern
Integration Points
- All Bounded Contexts: Provides reliable messaging infrastructure for every context
- NATS/JetStream: Message broker for event distribution and persistence
- PostgreSQL: Outbox and inbox storage with transactional guarantees
- Documentation Context: Orchestrates documentation generation with rich project context
- Monitoring Systems: Alerts and metrics for dead letter queue and message processing