Integration UI Handbook Epic
User Journey
The Integration UI bounded context provides AI-powered development interfaces through LibreChat (OpenAI-compatible API) and Zed IDE (ACP over JSON-RPC) for accessing SEA™ capabilities. It exposes Agent Society, DomainForge™, GenAIOps, Projects/Cases, and VibesPro™ through a unified portal dashboard with live health indicators.
Jobs to be Done & EARS Requirements
Job: Access SEA via LibreChat Interface
User Story: As a developer, I want to use conversational AI through LibreChat to access SEA™ capabilities, so that I can explore and interact with the platform using natural language.
EARS Requirement:
- While LibreChat is running, when a chat message is sent to a SEA™ model, the integration UI context shall:
- Receive Request at the /v1/chat/completions endpoint with:
- Model selection (sea-default, sea-analysis, sea-governance)
- Messages array with role (system/user/assistant) and content
- Optional parameters (temperature, max_tokens, stream)
- Route to SEA™ Kernel:
- Forward request to appropriate SEA™ service based on model
- Include conversation context for multi-turn dialogue
- Apply model-specific behavior:
- Execute Capability:
- sea-default: General assistant for exploration and Q&A
- sea-analysis: Deep semantic analysis for code review and debt detection
- sea-governance: Policy-aware reasoning for compliance and audit
- Return Response in OpenAI format:
- id: Unique response identifier
- object: "chat.completion"
- created: Unix timestamp
- model: Model name used
- choices: Array with message (role, content) and finish_reason
- usage: Token counts (prompt_tokens, completion_tokens, total_tokens)
- Support Streaming (if stream: true):
- Establish SSE connection
- Stream chunks as they arrive
- Send data: [DONE] when complete
- Enforce Security:
- Validate API key in the Authorization: Bearer <token> header
- Apply per-API-key rate limiting (configurable)
- Validate request schema and sanitize inputs
- Enforce model access controls (403 for forbidden models)
- Log auth failures and return 401 for missing/invalid credentials
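The OpenAI-compatible response shape listed above can be sketched as follows. This is a minimal illustration using the field names from the requirement; the id, token counts, and content are made-up values, not output of a real SEA™ service.

```python
import time

def make_chat_completion(model: str, content: str) -> dict:
    """Build an OpenAI-compatible chat completion response (illustrative sketch)."""
    return {
        "id": "chatcmpl-example123",          # unique response identifier (hypothetical)
        "object": "chat.completion",
        "created": int(time.time()),          # Unix timestamp
        "model": model,                       # model name used (e.g. sea-default)
        "choices": [
            {
                "index": 0,
                "message": {"role": "assistant", "content": content},
                "finish_reason": "stop",
            }
        ],
        "usage": {
            "prompt_tokens": 12,              # illustrative counts only
            "completion_tokens": 7,
            "total_tokens": 19,
        },
    }

resp = make_chat_completion("sea-default", "Hello from SEA.")
```

A streaming response would deliver the same choice content as SSE chunks instead, terminated by a data: [DONE] event.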
Job: Access SEA via Zed IDE ACP Gateway
User Story: As a developer, I want to use Zed IDE to execute SEA™ editor actions (validate_specs, generate_code, query_knowledge), so that I can perform spec-to-code operations directly in my editor.
EARS Requirement:
- While Zed IDE agent is connected, when an editor action is invoked, the integration UI context shall:
- Accept ACP Request via JSON-RPC:
- JSON-RPC fields: jsonrpc, id, method, params
- Method: agent/sendMessage for chat
- Method: agent/executeAction for editor actions
- Parameters: action type, payload, requestId, auth token
- Require mutual TLS or bearer-token/OAuth2 authentication
- Conform to ACP JSON-RPC v0.1 method/param shapes (see Verification Checklist)
- Execute Editor Actions:
- validate_specs: Parse and validate SEA-DSL files via tree-sitter-sea (local grammar repo)
- generate_code: Execute spec-to-code pipeline with projections
- query_knowledge: Execute SPARQL queries on Knowledge Graph
- Enforce ACLs/scopes for agent/sendMessage and agent/executeAction
- Return ACP Response:
- JSON-RPC response with id, result, error
- Result data (validation report, generated code, query results)
- Error details with stack trace if failed
- Metadata (execution time, source references)
- Support Idempotency:
- If requestId is provided, check for prior execution
- Return cached result if already executed
- Enable safe retries for failed requests
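A request/response pair for the ACP flow above might look like this. The JSON-RPC envelope fields match the requirement; the action payload, requestId, and result contents are illustrative assumptions, not a fixed SEA™ wire format.

```python
import json

# Illustrative ACP request invoking the validate_specs editor action.
request = {
    "jsonrpc": "2.0",
    "id": 42,
    "method": "agent/executeAction",
    "params": {
        "actionName": "validate_specs",
        "payload": {"file": "orders.sea"},    # hypothetical spec file
        "requestId": "req-7f3a",              # enables idempotent retries
        "auth": {"token": "<bearer-token>"},
    },
}

# Matching success response: id echoes the request, result carries the report.
response = {
    "jsonrpc": "2.0",
    "id": 42,
    "result": {
        "report": {"errors": [], "warnings": []},
        "metadata": {"execution_time_ms": 118, "source_refs": ["orders.sea"]},
    },
}

wire = json.dumps(request)  # what actually travels over the transport
```

On failure, the result key would be replaced by an error object carrying the details and metadata described above.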
Job: Discover Available Capabilities
User Story: As a developer tool, I want to query available SEA™ capabilities and their input schemas, so that I can dynamically generate interfaces and validate inputs.
EARS Requirement:
- While exploring SEA™ UI, when capabilities are queried, the integration UI context shall:
- Expose discovery endpoint: GET /v1/capabilities
- Optional query: interface=librechat|zed
- Return capability list with:
- name: Capability identifier (chat, validate_specs, generate_code, query_knowledge)
- description: Human-readable capability description
- schema: JSON Schema for input validation
- available_in: Interfaces supporting capability (LibreChat, Zed, both)
- Include model-specific capabilities:
- sea-default: General assistant, Q&A, exploration
- sea-analysis: Code review, debt detection, semantic analysis
- sea-governance: Policy validation, compliance checking, audit support
- Support filtering by interface:
- ?interface=librechat → LibreChat-only capabilities
- ?interface=zed → Zed-only capabilities
- Example response shape:
{ "capabilities": [ { "name": "...", "description": "...", "schema": { ... }, "available_in": ["librechat"] } ] }
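The discovery endpoint and its interface filter can be sketched as below. The registry entries are hypothetical examples reusing the capability names from this section; a real implementation would load them from configuration.

```python
# Hypothetical in-memory capability registry; fields mirror the response shape above.
CAPABILITIES = [
    {"name": "chat", "description": "Conversational access to SEA models",
     "schema": {}, "available_in": ["librechat"]},
    {"name": "validate_specs", "description": "Validate SEA-DSL files",
     "schema": {}, "available_in": ["zed"]},
    {"name": "query_knowledge", "description": "SPARQL queries on the Knowledge Graph",
     "schema": {}, "available_in": ["librechat", "zed"]},
]

def list_capabilities(interface=None):
    """Return the GET /v1/capabilities response body, optionally filtered
    by the ?interface= query parameter."""
    caps = CAPABILITIES if interface is None else [
        c for c in CAPABILITIES if interface in c["available_in"]
    ]
    return {"capabilities": caps}
```

For example, list_capabilities("zed") keeps only the entries whose available_in list includes "zed".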
Job: Access Agent Society via UI
User Story: As a user, I want to interact with Agent Society multi-agent workflows through LibreChat or Zed, so that I can leverage AI agents for complex tasks.
EARS Requirement:
- While using UI interfaces, when agent interaction is requested, the integration UI context shall:
- LibreChat:
- Send request to the sea-default model with agent context
- Route to Agent Society kernel for orchestration
- Return agent responses in chat format
- Support multi-agent collaboration in conversation
- Zed IDE:
- Execute agent action with agent ID and task payload
- Return agent output structured for editor display
- Support specialist agent invocation (PM, Reviewer, Domain Specialist)
Job: Access DomainForge via UI
User Story: As a domain architect, I want to perform semantic modeling and validation through the UI, so that I can author and validate SEA-DSL models interactively.
EARS Requirement:
- While using UI interfaces, when DomainForge operations are requested, the integration UI context shall:
- LibreChat:
- Submit SEA-DSL code in chat for validation
- Return validation errors with line/column positions
- Provide suggestions for fixing issues
- Zed IDE:
- Execute the validate_specs action on the current file
- Return diagnostics inline with error squiggles
- Support quick-fix actions for common errors
- Enable SPARQL queries via the query_knowledge action
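A diagnostic returned by validate_specs with line/column positions and a quick-fix suggestion could look like this. The field names and the sample error are illustrative assumptions, not a fixed SEA™ diagnostics schema.

```python
# Hypothetical diagnostic record as a validate_specs action might return it.
def make_diagnostic(file, line, column, message, severity="error", fix=None):
    d = {
        "file": file,
        "line": line,          # 1-based position for editor squiggles
        "column": column,
        "message": message,
        "severity": severity,
    }
    if fix is not None:
        d["quick_fix"] = fix   # suggested edit for common errors
    return d

diag = make_diagnostic("orders.sea", 14, 7,
                       "unknown entity 'Ordr'",
                       fix="rename to 'Order'")
```

Zed can map such records directly onto inline error squiggles and its quick-fix menu.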
Job: Access GenAIOps via UI
User Story: As a governance officer, I want to manage LLM routing and policy gateway through the UI, so that I can configure AI governance interactively.
EARS Requirement:
- While using UI interfaces, when GenAIOps operations are requested, the integration UI context shall:
- LibreChat:
- Query current LLM provider configuration
- Request model recommendations for specific tasks
- Validate policy rules via chat interface
- Zed IDE:
- Execute policy validation actions
- Return compliance reports for current specs
- Support policy rule authoring assistance
Job: Access Project/Case Management via UI
User Story: As a project manager, I want to manage case lifecycle through the UI, so that I can create, monitor, and close cases interactively.
EARS Requirement:
- While using UI interfaces, when case operations are requested, the integration UI context shall:
- LibreChat:
- Create cases via conversational interface
- Query case status and progress
- Request PM-Agent orchestration actions
- Zed IDE:
- Execute case management actions (create, update, close)
- Return case details with stage and task status
- Support artifact attachment and CaseFile access
Job: Access VibesPro via UI
User Story: As a developer, I want to generate code from specs through the UI, so that I can execute spec-to-code pipeline from my editor or chat.
EARS Requirement:
- While using UI interfaces, when code generation is requested, the integration UI context shall:
- LibreChat:
- Submit spec content in chat for code generation
- Return generated code with syntax highlighting
- Support iteration and refinement in conversation
- Zed IDE:
- Execute the generate_code action on the current spec file
- Write generated code to specified output path
- Support preview with --dry-run before write
- Apply generator templates and customizations
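The dry-run behavior described above can be sketched as follows. The function, file names, and the stand-in "pipeline" are hypothetical; a real generate_code action would run the spec-to-code pipeline with its templates before deciding whether to write.

```python
def generate_code(spec_path, out_path, dry_run=False):
    """Sketch of the generate_code action with a --dry-run preview."""
    generated = f"// generated from {spec_path}\n"   # stand-in for pipeline output
    if dry_run:
        # Preview only: return the would-be output without touching disk.
        return {"preview": generated, "written": False}
    # A real implementation would write `generated` to out_path here.
    return {"path": out_path, "written": True}

preview = generate_code("orders.sea", "orders.ts", dry_run=True)
```

The editor can show the preview in a diff view and re-invoke with dry_run=False once the user confirms.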
Job: Monitor Service Health via Portal Dashboard
User Story: As a developer, I want to view live health indicators for all SEA™ services, so that I can quickly identify and troubleshoot issues.
EARS Requirement:
- While portal is running, when dashboard is accessed, the integration UI context shall:
- Display unified service dashboard at http://localhost:8000
- Show health status for each service:
- LibreChat: HTTP 3080 with live indicator
- Zed IDE ACP: Python process status
- Agent Society: Kernel runtime status
- DomainForge: Knowledge Graph connectivity
- GenAIOps: Policy Gateway status
- Projects/Cases: Case management API status
- VibesPro: Generator availability
- Refresh indicators by polling every 5 seconds (near real-time)
- Provide quick links to service interfaces
- Show error messages and logs for unhealthy services
- Restrict dashboard access to authenticated admin users and redact sensitive data
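The dashboard's polling loop can be sketched like this. The service list and check_service are stand-ins (the real probe would issue HTTP health requests to each service's endpoint); only the 5-second cadence and the status record shape come from the requirement.

```python
import time

# Stand-in service registry keyed by name (ports are illustrative).
SERVICES = {"librechat": 3080, "acp-gateway": None}

def check_service(name, port):
    """Pretend probe: a real dashboard would hit the service's health endpoint
    and report unhealthy states with error details."""
    return {"service": name, "status": "healthy", "last_check": time.time()}

def poll_once():
    """One dashboard refresh: probe every registered service."""
    return [check_service(name, port) for name, port in SERVICES.items()]

statuses = poll_once()
# In the portal this runs on a timer: time.sleep(5) between polls.
```

Unhealthy results would carry an error message and a log excerpt so the dashboard can surface them next to the indicator.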
Domain Entities Summary
Root Aggregates
- ChatSession: LibreChat conversation with message history, model selection, and context
- EditorAction: Zed IDE operation with action type, payload, result, and metadata
- CapabilityDefinition: Available SEA™ capability with name, description, schema, and interface support
- ServiceHealth: Live status indicator for SEA™ services with uptime, error rate, and last check timestamp
Value Objects
- ChatMessage: Message with role (system/user/assistant), content, and timestamp
- ChatCompletionResponse: OpenAI-compatible response with id, created, model, choices, usage
- ACPRequest: JSON-RPC request with method, params, and request ID
- ACPResponse: JSON-RPC response with result, error, and metadata
Policy Rules
- OpenAICompatibility: All chat endpoints conform to OpenAI API specification
- IdempotentExecution: Requests with same request ID return cached result
- ServiceHealthVisibility: All services expose health endpoint for dashboard aggregation
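The IdempotentExecution rule above can be sketched as a request-ID cache: a repeated requestId returns the stored result instead of re-executing. The plain dict cache and the helper names are illustrative; a real gateway would use a shared store with expiry.

```python
# In-memory idempotency cache keyed by requestId (illustrative only).
_results = {}

def execute_idempotent(request_id, action, *args):
    """Run `action` at most once per request_id; replays return the cached result."""
    if request_id in _results:
        return _results[request_id]      # safe retry: no re-execution
    result = action(*args)
    _results[request_id] = result
    return result

calls = []
def expensive(x):
    calls.append(x)                      # track real executions
    return x * 2

first = execute_idempotent("req-1", expensive, 21)
second = execute_idempotent("req-1", expensive, 21)   # served from cache
```

This is what makes client retries after a timeout safe: the second attempt observes the first attempt's result rather than running the action twice.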
Integration Points
- LibreChat: OpenAI-compatible /v1/chat/completions endpoint integration
- Zed IDE: ACP (JSON-RPC) agent integration for editor actions
- Portal Dashboard: Unified service discovery and health monitoring at http://localhost:8000
- Agent Society: Multi-agent orchestration via chat and editor actions
- DomainForge: SEA-DSL validation and Knowledge Graph queries
- GenAIOps: LLM routing and policy gateway management
- Project/Case Management: Case lifecycle operations via UI
- VibesPro: Spec-to-code generation from editor or chat
Success Metrics
- Portal Dashboard Uptime: >99.5% availability
- Response Time: <2 seconds for chat completions, <5 seconds for editor actions
- Interface Coverage: All SEA™ capabilities accessible via at least one interface
- Health Check Accuracy: dashboard status matches actual service state within one 5-second polling interval
Non-Functional Requirements
- NFR-001: LibreChat endpoint conforms to OpenAI Chat Completions API v1 (spec snapshot 2024-10-01)
- NFR-002: Zed IDE agent supports ACP JSON-RPC protocol v0.1 (SEA ACP subset)
- NFR-003: Portal dashboard updates service status every 5 seconds
- NFR-004: Support 1000 concurrent active users with p95 <=500ms, p99 <=1s, error rate <0.1% under sustained load (15 min k6 test); degradation = >20% latency increase or >0.1% error rate
Verification Checklist
- OpenAI Chat Completions API v1 (2024-10-01):
- Request/response fields: id, object, created, model, choices, usage, stream
- Auth: Authorization Bearer token + rate-limit headers
- Error handling: 401/403/429 with OpenAI-compatible error payloads
- ACP JSON-RPC v0.1 (SEA ACP subset):
- Required methods: agent/sendMessage, agent/executeAction, agent/getCapabilities, agent/ping
- Params: actionName, payload, requestId, auth, jsonrpc/id
- Responses: result, error, metadata (execution time, source refs)