LLM Provider Epic

User Journey

The LLM Provider bounded context provides a unified abstraction layer for accessing multiple Large Language Model providers (OpenAI, Anthropic, Ollama, OpenRouter, and 100+ others via LiteLLM). It enables chat completions and embedding generation with provider-agnostic interfaces, intelligent routing, fallback chains, policy governance integration, and comprehensive observability through OpenTelemetry.
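The routing-with-fallback behavior described above can be pictured as a simple chain: try each configured provider in order and surface the first success. The names below (`ProviderError`, `complete_with_fallback`) are illustrative sketches, not part of any real provider API.

```python
from typing import Callable

class ProviderError(Exception):
    """Raised by a provider adapter when a call fails (illustrative)."""
    def __init__(self, provider: str, reason: str):
        super().__init__(f"{provider}: {reason}")
        self.provider = provider
        self.reason = reason

def complete_with_fallback(
    prompt: str,
    providers: list[tuple[str, Callable[[str], str]]],
) -> tuple[str, str]:
    """Try each (name, call) pair in order; return (provider_name, completion).

    Raises RuntimeError only when every provider in the chain has failed.
    """
    failures: list[str] = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except ProviderError:
            failures.append(name)  # record and fall through to the next provider
    raise RuntimeError(f"all providers failed: {failures}")
```

A primary provider that times out simply falls through to the next entry in the chain, which is what lets callers stay provider-agnostic.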

Jobs to be Done & EARS Requirements

Job: Execute CompleteChat

User Story: As an application or service, I want to send chat messages and receive LLM completions through a unified interface, so that I can leverage multiple LLM providers without provider-specific code.

EARS Requirement:

When an application or service submits chat messages via CompleteChat, the LLM Provider shall route the request to a configured provider and return the completion in a provider-agnostic format; if the primary provider fails, the LLM Provider shall invoke the configured fallback chain.

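One way to make the provider-agnostic contract concrete: a pair of message/completion value types plus a single method every adapter implements. The type and class names below are a sketch of that shape, not the actual domain model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ChatMessage:
    role: str      # "system", "user", or "assistant"
    content: str

@dataclass(frozen=True)
class ChatCompletion:
    model: str
    content: str
    usage_tokens: int

class EchoProvider:
    """Toy adapter showing the interface shape; real adapters would wrap
    OpenAI, Anthropic, Ollama, etc. behind the same method signature."""

    def complete_chat(self, model: str, messages: list[ChatMessage]) -> ChatCompletion:
        last = messages[-1].content
        return ChatCompletion(model=model,
                              content=f"echo: {last}",
                              usage_tokens=len(last.split()))
```

Because callers depend only on the `complete_chat` signature and the two value types, swapping providers never touches application code.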

Job: Execute GenerateEmbedding

User Story: As an application or service, I want to generate vector embeddings for text input, so that I can perform semantic search and similarity operations.

EARS Requirement:

When an application or service submits text via GenerateEmbedding, the LLM Provider shall return a vector embedding of the configured dimensionality, or a provider-agnostic error if generation fails.

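The embedding contract can be demonstrated with a toy deterministic stand-in plus the cosine similarity used for semantic search. The hash-derived vector below only illustrates the contract (same input, same vector, fixed dimensionality); real providers return learned vectors, and the function names are assumptions.

```python
import hashlib

def generate_embedding(text: str, dim: int = 8) -> list[float]:
    """Deterministic stand-in embedding: hash-derived, unit-normalized."""
    digest = hashlib.sha256(text.encode("utf-8")).digest()
    vec = [b / 255.0 for b in digest[:dim]]
    norm = sum(v * v for v in vec) ** 0.5 or 1.0
    return [v / norm for v in vec]

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Dot product of two unit-length vectors equals their cosine similarity."""
    return sum(x * y for x, y in zip(a, b))
```

Unit-normalizing at generation time is a common design choice: it reduces similarity search to a plain dot product.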

Job: Retrieve ListAvailableModels

User Story: As an application or UI component, I want to query available models and their capabilities, so that I can display model options and validate compatibility.

EARS Requirement:

When an application or UI component invokes ListAvailableModels, the LLM Provider shall return the set of available models together with their capabilities and compatibility metadata.


Job: Retrieve GetProviderHealth

User Story: As a monitoring system or load balancer, I want to check provider health status and performance metrics, so that I can make routing decisions and detect issues.

EARS Requirement:

When a monitoring system or load balancer invokes GetProviderHealth, the LLM Provider shall return the provider's current health status and recent performance metrics.

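A minimal sketch of how health status could be derived for routing: track recent call outcomes and latencies in a sliding window and expose an error rate plus a coarse status. The class name, window size, and threshold are assumptions, not the actual implementation.

```python
from collections import deque

class ProviderHealthTracker:
    """Sliding-window health tracker (illustrative; thresholds are assumptions)."""

    def __init__(self, window: int = 100, error_threshold: float = 0.5):
        self._outcomes: deque[bool] = deque(maxlen=window)
        self._latencies_ms: deque[float] = deque(maxlen=window)
        self._error_threshold = error_threshold

    def record(self, ok: bool, latency_ms: float) -> None:
        """Record one call's outcome and latency; old samples age out."""
        self._outcomes.append(ok)
        self._latencies_ms.append(latency_ms)

    @property
    def error_rate(self) -> float:
        if not self._outcomes:
            return 0.0
        return 1.0 - sum(self._outcomes) / len(self._outcomes)

    @property
    def avg_latency_ms(self) -> float:
        if not self._latencies_ms:
            return 0.0
        return sum(self._latencies_ms) / len(self._latencies_ms)

    @property
    def status(self) -> str:
        if not self._outcomes:
            return "unknown"
        return "unhealthy" if self.error_rate >= self._error_threshold else "healthy"
```

A load balancer polling `status` can drop an "unhealthy" provider from the routing pool, and the bounded window means the provider recovers automatically once fresh calls succeed.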

Domain Entities Summary

Root Aggregates

Value Objects

Read Models

Policy Rules

Integration Points