Query Epic
User Journey
The Query bounded context enables natural language queries about policies using Retrieval-Augmented Generation (RAG) with integrated governance enforcement. It orchestrates query processing by retrieving semantically relevant policies, validating access through governance checks, and synthesizing accurate answers with source citations.
Jobs to be Done & EARS Requirements
Job: Process Natural Language Query
User Story: As a policy consumer, I want to ask questions about policies in natural language, so that I can get accurate answers with governance enforcement and source citations.
EARS Requirement:
- While the system is operational, when a ProcessQuery command is received with natural language query text and user context, the query context shall:
- Generate Query Embedding:
- Convert query text to 384-dimensional embedding vector
- Prepare for semantic similarity search
- Enforce Governance Check:
- Submit user context and query intent to governance-runtime for pre-check
- Retrieve only candidate policy IDs (not full content) for access evaluation
- Validate user has permission to access the candidate policies
- Block query if governance check fails (403 Forbidden)
- Log access attempt for audit trail
- Retrieve Relevant Policies:
- Query memory context for semantically similar policies using cosine similarity
- Retrieve top-K policy documents exceeding similarity threshold (approved IDs only)
- Include policy metadata and source references
- Synthesize Answer with RAG:
- Construct prompt with retrieved policy excerpts as context
- Execute LLM completion via llm-provider context
- Generate natural language answer based on retrieved context
- Provide Source Citations:
- Include source references for all policies used in answer
- Link citations to original policy documents
- Ensure transparency and verifiability
- Emit Observability Signals:
- Create OpenTelemetry trace for query pipeline
- Log query, retrieved policies, governance decision, and answer quality
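The pipeline above can be sketched as follows. This is a minimal illustration of the governance-first ordering (access is checked on candidate policy IDs before any content is retrieved), not a prescribed API: `embed`, `process_query`, the toy character-based embedding, and the ACL/store dictionaries are all illustrative assumptions.

```python
import math

# Hypothetical embedding: a real system would use a 384-dimensional
# sentence-embedding model; this deterministic toy stands in for it.
def embed(text: str, dim: int = 384) -> list[float]:
    vec = [0.0] * dim
    for i, ch in enumerate(text.lower()):
        vec[i % dim] += ord(ch)
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Vectors are pre-normalized, so cosine similarity reduces to a dot product.
    return sum(x * y for x, y in zip(a, b))

def process_query(query: str, user_perms: set[str],
                  policy_store: dict[str, str],
                  policy_acl: dict[str, str],
                  top_k: int = 3, threshold: float = 0.1) -> dict:
    """Governance-first RAG: filter candidate policy IDs by permission
    before retrieving any policy content."""
    q_vec = embed(query)
    # Governance pre-check on IDs only (no full content is exposed).
    approved = [pid for pid in policy_store if policy_acl[pid] in user_perms]
    if not approved:
        return {"status": 403, "answer": None, "citations": []}
    # Semantic retrieval restricted to approved IDs, top-K above threshold.
    scored = sorted(
        ((cosine_similarity(q_vec, embed(text)), pid)
         for pid, text in policy_store.items() if pid in approved),
        reverse=True)
    retrieved = [(pid, s) for s, pid in scored[:top_k] if s >= threshold]
    # A real implementation would now call the LLM provider with the
    # retrieved excerpts as context; here we assemble a grounded stub.
    citations = [pid for pid, _ in retrieved]
    answer = "Based on: " + ", ".join(citations) if citations else None
    return {"status": 200, "answer": answer, "citations": citations}
```

A denied user gets a 403 before any retrieval happens, which is the GovernanceFirst rule in miniature.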
Job: Track Query History
User Story: As a system administrator, I want to track past queries for improvement and auditing, so that I can understand user needs and system performance.
EARS Requirement:
- While the system is operational, when queries are processed, the query context shall:
- Store query history with:
- Query text and timestamp
- User identifier and context
- Retrieved policy references
- Governance decision (allow/deny)
- Generated answer
- Response time and quality metrics
- Enforce privacy controls for query_history:
- Configurable retention and auto-deletion (per GDPR/CCPA)
- Data classification and RBAC for access
- Encrypt at rest and in transit
- Redact/anonymize PII before storage
- Mask policy metadata on denied queries
- Support opt-out/consent flag for user identifiers
- Enable query history retrieval for analytics
- Support feedback collection on answer quality
- Provide data for continuous improvement of retrieval and synthesis
Job: Handle Follow-up Questions
User Story: As a policy consumer, I want to ask follow-up questions in conversation, so that I can explore policies iteratively with context maintained across turns.
EARS Requirement:
- While a conversation session is active, when a follow-up question is received, the query context shall:
- Maintain conversation context across multiple turns
- Re-evaluate governance for follow-up queries before policy retrieval
- Include previous Q&A in retrieval context for relevance
- Resolve pronouns and references to earlier queries
- Generate new answer considering conversation history
- Support multi-turn dialogue until session conclusion
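One minimal way to carry context across turns is to fold recent Q&A into the retrieval query before embedding, so references to earlier turns remain resolvable. The class and method names below are illustrative, and the string-concatenation approach is a deliberate simplification of real coreference resolution.

```python
from dataclasses import dataclass, field

@dataclass
class ConversationSession:
    """Multi-turn dialogue state; governance is re-checked on every turn."""
    session_id: str
    turns: list[tuple[str, str]] = field(default_factory=list)  # (Q, A)

    def contextualize(self, follow_up: str, max_turns: int = 3) -> str:
        """Prepend recent Q&A so the retrieval step sees enough context
        to interpret pronouns and references to earlier queries."""
        history = " ".join(f"Q: {q} A: {a}" for q, a in self.turns[-max_turns:])
        return f"{history} Q: {follow_up}".strip()

    def record(self, question: str, answer: str) -> None:
        self.turns.append((question, answer))
```

The expanded query produced by `contextualize` would then pass through the same governance pre-check and retrieval as a first-turn query.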
Domain Entities Summary
Root Aggregates
- QueryRequest: Represents a natural language query with text, user context, and session identifier
- QueryResponse: Contains synthesized answer, source citations, retrieved policy references, and metadata
- ConversationSession: Maintains multi-turn dialogue state with context history
Value Objects
- QueryEmbedding: 384-dimensional vector representation of query text
- RetrievedPolicy: Policy document with similarity score and metadata
- GovernanceDecision: Access control result with allow/deny status and rationale
- SourceCitation: Reference to original policy document with link and excerpt
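The value objects above map naturally onto immutable types, for example frozen dataclasses. Field names here are taken from the summaries; the 384-dimension check mirrors the QueryEmbedding definition, and everything else is a sketch rather than a fixed schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class QueryEmbedding:
    values: tuple[float, ...]
    def __post_init__(self) -> None:
        # Enforce the 384-dimensional contract at construction time.
        if len(self.values) != 384:
            raise ValueError("embedding must be 384-dimensional")

@dataclass(frozen=True)
class RetrievedPolicy:
    policy_id: str
    similarity: float

@dataclass(frozen=True)
class GovernanceDecision:
    allowed: bool
    rationale: str

@dataclass(frozen=True)
class SourceCitation:
    policy_id: str
    link: str
    excerpt: str
```

Freezing the dataclasses keeps value objects hashable and side-effect-free, which matches their role in the domain model.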
Policy Rules
- GovernanceFirst: All queries must pass governance check before policy retrieval
- SourceTransparency: All answers must include source citations
- AnswerGrounding: Answers must be grounded in retrieved context only
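The three policy rules can be enforced as a response validator run before a QueryResponse is returned. The function below is a hypothetical sketch of that check, not part of the specified API.

```python
def validate_response(governance_allowed: bool,
                      citations: list[str],
                      retrieved_ids: list[str]) -> list[str]:
    """Check the three policy rules; returns a list of violations
    (empty list means the response is compliant)."""
    violations = []
    if not governance_allowed:
        violations.append("GovernanceFirst: query was not approved")
    if not citations:
        violations.append("SourceTransparency: answer lacks citations")
    if any(c not in retrieved_ids for c in citations):
        violations.append("AnswerGrounding: citation outside retrieved context")
    return violations
```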
Integration Points
- Memory Context: Retrieves semantically similar policies via vector embedding search
- Governance Runtime Context: Validates user access to retrieved policies before synthesis
- LLM Provider Context: Generates natural language answers via chat completion
- Semantic Core Context: Provides policy parsing and structure for source references
- Cognitive Extension Context: Enables conversational follow-up and context management
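These integration points can be modeled as ports so the query context depends on interfaces rather than concrete contexts. The Protocol and method names below are assumptions chosen to mirror the list above; the in-memory stub shows how a test double would satisfy the governance port.

```python
from typing import Protocol

class MemoryPort(Protocol):
    """Memory Context: semantic search over policy embeddings."""
    def search(self, embedding: list[float], top_k: int) -> list[str]: ...

class GovernancePort(Protocol):
    """Governance Runtime Context: pre-check access on candidate IDs."""
    def pre_check(self, user_id: str, candidate_ids: list[str]) -> list[str]: ...

class LLMPort(Protocol):
    """LLM Provider Context: chat-completion answer synthesis."""
    def complete(self, prompt: str) -> str: ...

class InMemoryGovernance:
    """Test double satisfying GovernancePort structurally."""
    def __init__(self, grants: dict[str, set[str]]):
        self.grants = grants
    def pre_check(self, user_id: str, candidate_ids: list[str]) -> list[str]:
        allowed = self.grants.get(user_id, set())
        return [pid for pid in candidate_ids if pid in allowed]
```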