⚙️ GenAIOps Integration

LLM management, policy gateway, and observability via chat.


LLM Provider Management

Check Provider Status

You: What's the status of the LLM provider service?

SEA™: LLM Provider Health:
| Provider | Status | Models Available |
|----------|--------|------------------|
| Ollama (local) | ✅ Healthy | llama3.2, mistral |
| Anthropic | ✅ Connected | claude-sonnet, claude-haiku |
| OpenAI | ⚠️ Rate limited | gpt-4o (fallback) |

Active routing priority: Ollama → Anthropic → OpenAI
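
If you want the same signal outside chat, here is a minimal sketch of a local provider probe. It assumes Ollama's default port (11434) and its standard `/api/tags` model-listing route; the remote providers are omitted because their health depends on your API keys and plan.

```python
# Minimal sketch: probe the local Ollama daemon the same way the status
# check does. Assumes Ollama's default port (11434); /api/tags is Ollama's
# standard "list installed models" endpoint. Remote providers omitted.
import requests

OLLAMA_URL = "http://localhost:11434"

def ollama_health() -> dict:
    """Return health status and installed models for the local provider."""
    try:
        resp = requests.get(f"{OLLAMA_URL}/api/tags", timeout=2)
        resp.raise_for_status()
        models = [m["name"] for m in resp.json().get("models", [])]
        return {"provider": "ollama", "status": "healthy", "models": models}
    except requests.RequestException as exc:
        return {"provider": "ollama", "status": "unreachable", "error": str(exc)}

if __name__ == "__main__":
    print(ollama_health())
```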

Switch Models

You: Use claude-sonnet for all subsequent requests

SEA™: Model preference updated. Current configuration:
- Primary: claude-sonnet (Anthropic)
- Fallback: llama3.2 (Ollama)
- Policy gateway: enabled

Policy Gateway

Rate Limit Status

You: Show current rate limit status

SEA™: Policy Gateway Statistics:
| Metric | Value | Limit |
|--------|-------|-------|
| Tokens/minute | 45,000 | 100,000 |
| Requests/minute | 42 | 100 |
| Daily cost | $12.50 | $50.00 |

Status: ✅ Within limits
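
The limits in the table act as three independent budgets. The sketch below shows the kind of pre-flight check a gateway can perform before forwarding a request; the limit values mirror the table above, but the `RateWindow` counters and their storage are hypothetical stand-ins, not SEA™'s actual implementation.

```python
# Illustrative pre-flight budget check mirroring the limits shown above.
# RateWindow is a hypothetical stand-in for whatever counter store
# (Redis, in-memory, etc.) the gateway actually uses.
from dataclasses import dataclass

@dataclass
class RateWindow:
    tokens_per_min: int = 0
    requests_per_min: int = 0
    daily_cost_usd: float = 0.0

LIMITS = {"tokens_per_min": 100_000, "requests_per_min": 100, "daily_cost_usd": 50.00}

def within_limits(window: RateWindow, est_tokens: int, est_cost: float) -> bool:
    """Allow the request only if every budget still has headroom."""
    return (
        window.tokens_per_min + est_tokens <= LIMITS["tokens_per_min"]
        and window.requests_per_min + 1 <= LIMITS["requests_per_min"]
        and window.daily_cost_usd + est_cost <= LIMITS["daily_cost_usd"]
    )

# Example: the state reported above (45,000 tok/min, 42 req/min, $12.50 today)
print(within_limits(RateWindow(45_000, 42, 12.50), est_tokens=800, est_cost=0.02))  # True
```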

Content Filtering

The Policy Gateway automatically filters sensitive content before a request leaves for an external provider:

You: Send user data including SSN to external API

SEA™: ⛔ Request blocked by Policy Gateway:
- Policy: POL-LLM-003 (PII Protection)
- Action: Scrubbed sensitive fields
- Logged to audit trail

The sanitized request was processed without the SSN field.
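
Conceptually, the scrubbing step is a field-and-pattern filter applied to the payload before it is forwarded. Here is a minimal sketch; the field list and SSN regex are illustrative examples, not the actual POL-LLM-003 rule set.

```python
# Illustrative PII scrub: drop known-sensitive fields and mask SSN-shaped
# strings before a payload is forwarded. Field names and regex are examples,
# not the gateway's actual rule set.
import re

SENSITIVE_FIELDS = {"ssn", "social_security_number"}
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def scrub(payload: dict) -> dict:
    clean = {}
    for key, value in payload.items():
        if key.lower() in SENSITIVE_FIELDS:
            continue  # drop the field entirely
        if isinstance(value, str):
            value = SSN_PATTERN.sub("[REDACTED]", value)
        clean[key] = value
    return clean

print(scrub({"name": "A. User", "ssn": "123-45-6789", "note": "SSN 123-45-6789 on file"}))
# {'name': 'A. User', 'note': 'SSN [REDACTED] on file'}
```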

Observability Integration

View Traces

You: Show me the trace for request abc123

SEA™: Trace Analysis (trace_id: abc123):

    request → sea-api (12ms)
    ├── llm-provider (245ms)
    │   ├── policy-check (3ms) ✅
    │   ├── ollama/llama3.2 (240ms)
    │   └── response-filter (2ms) ✅
    └── response (5ms)

Total: 262ms | Status: Success | Tokens: 450 input, 320 output

View Metrics

You: Show LLM usage metrics for today

SEA™: Today's LLM Metrics:

| Hour | Requests | Tokens | Cost | P99 Latency |
|------|----------|--------|------|-------------|
| 09:00 | 156 | 45,000 | $2.30 | 1.2s |
| 10:00 | 203 | 62,000 | $3.10 | 0.9s |
| 11:00 | 178 | 51,000 | $2.60 | 1.1s |

Trend: Stable, within SLO

Semantic Context Attributes

All LLM requests include SEA™ semantic context (SDS-030):

# Automatically added to all telemetry
{
  "sea.platform": "sea-forge",
  "sea.domain": "governance",
  "sea.concept": "PolicyEvaluation",
  "sea.regime_id": "regime-001"
}
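
If you emit your own spans alongside SEA™ traffic, the same attributes can be attached with the OpenTelemetry Python SDK. The sketch below is illustrative: the span name is arbitrary, exporter configuration is omitted, and the attribute values are the same example values shown above.

```python
# Minimal sketch: attach the SEA™ semantic context attributes (SDS-030)
# to a span via the OpenTelemetry Python API. Exporter/provider setup is
# omitted; span name and values are illustrative.
from opentelemetry import trace

tracer = trace.get_tracer("sea.example")

with tracer.start_as_current_span("llm-request") as span:
    span.set_attribute("sea.platform", "sea-forge")
    span.set_attribute("sea.domain", "governance")
    span.set_attribute("sea.concept", "PolicyEvaluation")
    span.set_attribute("sea.regime_id", "regime-001")
    # ... call the LLM provider here ...
```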

Query by semantic context in OpenObserve:

SELECT * FROM traces 
WHERE sea_domain = 'governance' 
  AND duration_ms > 1000
ORDER BY timestamp DESC

Fallback Chain Configuration

You: Configure LLM fallback chain: Claude → GPT-4 → Ollama

SEA™: Updated fallback configuration:

```yaml
providers:
  - name: "anthropic"
    priority: 1
    models: ["claude-sonnet", "claude-haiku"]
    
  - name: "openai"
    priority: 2
    models: ["gpt-4o"]
    fallback: true
    
  - name: "ollama"
    priority: 3
    models: ["llama3.2"]
    localOnly: true
    fallback: true

```

Active on next request.
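
The YAML above is declarative; in effect the gateway walks the providers in priority order and returns the first successful response. A minimal sketch of that selection loop follows, where the `clients` dict and its `complete()` method are hypothetical placeholders for whatever SDKs are actually wired in.

```python
# Illustrative fallback walk over the providers declared above, in
# priority order. `clients` and `complete()` are hypothetical placeholders,
# not SEA™'s actual provider interface.
PROVIDERS = [
    {"name": "anthropic", "models": ["claude-sonnet", "claude-haiku"]},
    {"name": "openai", "models": ["gpt-4o"]},
    {"name": "ollama", "models": ["llama3.2"]},
]

def complete_with_fallback(prompt: str, clients: dict) -> str:
    last_error = None
    for provider in PROVIDERS:  # already sorted by priority
        client = clients.get(provider["name"])
        if client is None:
            continue
        try:
            return client.complete(model=provider["models"][0], prompt=prompt)
        except Exception as exc:  # rate limit, outage, etc. -> try the next provider
            last_error = exc
    raise RuntimeError(f"all providers failed: {last_error}")
```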


Best Prompts for GenAIOps

| Task | Prompt |
|------|--------|
| Provider health | “Check LLM provider status” |
| Usage report | “Show token usage for this week” |
| Cost analysis | “What’s my LLM cost breakdown?” |
| Trace debug | “Show trace for request ID X” |
| Policy check | “Will this request pass the policy gateway?” |



Last Updated: January 2026