SDS-009: User Feedback Processor


spec_id: SDS-009
title: User Feedback Processor
bounded_context: cognitive-extension
status: Draft
version: 1.2.0
date_created: 2025-12-30
last_updated: 2026-01-04
implements:

Addresses Requirements

MVP Status: MVP


Component Description

The User Feedback Processor captures and processes user interactions with cognitive artifacts and explicit feedback on AI recommendations. This data is then used to refine the Recommendation Algorithm and other AI models, ensuring continuous learning and adaptation.


Technical Details

Interfaces

Direction Description Format
Input User interaction events Events (accept/reject, modify)
Input Explicit feedback Ratings, comments
Output Processed feedback events Events to Recommendation Algorithm

Protocols


Invariants

The following invariants MUST be maintained:

  1. Idempotency: Feedback events with the same feedbackId MUST be processed exactly once.
  2. Deduplication: Events with identical dedupeKey values MUST be deduplicated before processing.
  3. Privacy Enforcement: If privacyFlags.anonymize is true, all PII MUST be scrubbed before storage.
  4. Channel Validation: feedbackChannel MUST be one of: explicit, implicit, correction.
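
Invariants 2 and 4 can be sketched as follows. This is an illustrative sketch, not the normative implementation; the hourly window format is inferred from the dedupeKey examples in the API section below.

```python
from datetime import datetime, timezone

VALID_CHANNELS = {"explicit", "implicit", "correction"}

def build_dedupe_key(user_id: str, artifact_id: str, feedback_type: str,
                     ts: datetime) -> str:
    # Composite key: userId + artifactId + feedbackType + hourly window,
    # matching the example keys in this spec
    # (e.g. "user-123:artifact-456:artifact-acceptance:2026-01-04T09").
    window = ts.astimezone(timezone.utc).strftime("%Y-%m-%dT%H")
    return f"{user_id}:{artifact_id}:{feedback_type}:{window}"

def validate_channel(channel: str) -> None:
    # Invariant 4: reject unknown channels (maps to 422 in the API).
    if channel not in VALID_CHANNELS:
        raise ValueError(f"invalid feedbackChannel: {channel!r}")
```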

Data Model

Feedback Event Schema

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "FeedbackEvent",
  "type": "object",
  "required": ["feedbackId", "userId", "sessionId", "feedbackChannel", "timestamp"],
  "properties": {
    "feedbackId": {
      "type": "string",
      "format": "uuid",
      "description": "Unique identifier for idempotent processing"
    },
    "dedupeKey": {
      "type": "string",
      "description": "Composite key for event deduplication (userId + artifactId + feedbackType + window)"
    },
    "userId": {
      "type": "string",
      "description": "User identifier (may be anonymized per privacyFlags)"
    },
    "sessionId": {
      "type": "string",
      "description": "Session identifier for context correlation"
    },
    "artifactId": {
      "type": "string",
      "description": "Identifier of the cognitive artifact receiving feedback"
    },
    "feedbackChannel": {
      "type": "string",
      "enum": ["explicit", "implicit", "correction"],
      "description": "How feedback was captured: explicit (user action), implicit (behavior), correction (user fix)"
    },
    "feedbackType": {
      "type": "string",
      "enum": [
        "artifact-acceptance",
        "rating",
        "comment",
        "rejection",
        "modification",
        "dwell-time",
        "scroll-depth"
      ],
      "description": "Specific type of feedback within the channel"
    },
    "data": {
      "type": "object",
      "properties": {
        "accepted": { "type": "boolean" },
        "rating": { "type": "integer", "minimum": 1, "maximum": 5 },
        "comment": { "type": "string", "maxLength": 2000 },
        "modifiedElements": {
          "type": "array",
          "items": { "type": "string" }
        },
        "timeSpent": { "type": "integer", "description": "Time in seconds" },
        "scrollPercentage": { "type": "number", "minimum": 0, "maximum": 100 }
      }
    },
    "correctionData": {
      "type": "object",
      "description": "Captures user corrections for learning",
      "properties": {
        "originalValue": { "type": "string" },
        "correctedValue": { "type": "string" },
        "correctionType": {
          "type": "string",
          "enum": ["content", "format", "accuracy", "completeness"]
        }
      }
    },
    "privacyFlags": {
      "type": "object",
      "description": "Privacy and anonymization settings",
      "properties": {
        "anonymize": {
          "type": "boolean",
          "default": false,
          "description": "If true, PII is scrubbed before storage"
        },
        "retentionDays": {
          "type": "integer",
          "default": 90,
          "description": "Number of days to retain this feedback"
        },
        "excludeFromTraining": {
          "type": "boolean",
          "default": false,
          "description": "If true, exclude from model training datasets"
        }
      }
    },
    "context": {
      "type": "object",
      "properties": {
        "taskType": { "type": "string" },
        "projectId": { "type": "string" },
        "agentId": { "type": "string" }
      }
    },
    "timestamp": {
      "type": "string",
      "format": "date-time",
      "description": "ISO 8601 timestamp of feedback capture"
    }
  }
}
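
A minimal required-field check against this schema might look like the following; a production service would validate with a full JSON Schema library instead, which is assumed here but not shown.

```python
REQUIRED = ["feedbackId", "userId", "sessionId", "feedbackChannel", "timestamp"]
CHANNELS = {"explicit", "implicit", "correction"}

def check_feedback_event(event: dict) -> list:
    # Returns a list of violations; an empty list means the basic checks
    # pass. Covers required fields, the channel enum, and the rating range.
    errors = [f"missing required field: {f}" for f in REQUIRED if f not in event]
    channel = event.get("feedbackChannel")
    if channel is not None and channel not in CHANNELS:
        errors.append(f"invalid feedbackChannel: {channel!r}")
    rating = event.get("data", {}).get("rating")
    if rating is not None and not (1 <= rating <= 5):
        errors.append("data.rating out of range [1, 5]")
    return errors
```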

Refer to SDS-001: Data Model Schemas for:


API Specification

Endpoints

Endpoint Method Description
/v1/feedback/submit POST Submit user feedback
/v1/feedback/events WS Stream feedback events
/v1/feedback/batch POST Submit multiple feedback events

Request/Response

Submit Feedback

POST /v1/feedback/submit
{
  "feedbackId": "f47ac10b-58cc-4372-a567-0e02b2c3d479",
  "userId": "user-123",
  "sessionId": "session-abc",
  "artifactId": "artifact-456",
  "feedbackChannel": "explicit",
  "feedbackType": "artifact-acceptance",
  "data": {
    "accepted": true,
    "modifiedElements": ["checkbox-1", "text-field-2"],
    "timeSpent": 45,
    "rating": 4,
    "comment": "Very helpful checklist"
  },
  "privacyFlags": {
    "anonymize": false,
    "retentionDays": 90
  },
  "context": {
    "taskType": "code-review",
    "projectId": "order-service"
  }
}

// Response: 202 Accepted
{
  "feedbackId": "f47ac10b-58cc-4372-a567-0e02b2c3d479",
  "dedupeKey": "user-123:artifact-456:artifact-acceptance:2026-01-04T09",
  "status": "queued"
}

Submit Correction

POST /v1/feedback/submit
{
  "feedbackId": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "userId": "user-123",
  "sessionId": "session-abc",
  "artifactId": "artifact-456",
  "feedbackChannel": "correction",
  "feedbackType": "modification",
  "correctionData": {
    "originalValue": "The function returns null",
    "correctedValue": "The function returns undefined",
    "correctionType": "accuracy"
  },
  "privacyFlags": {
    "anonymize": true
  }
}

// Response: 202 Accepted
{
  "feedbackId": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "dedupeKey": "user-123:artifact-456:modification:2026-01-04T09",
  "status": "queued"
}

Message Queue Events

Feedback Event (Published to Recommendation Algorithm)

{
  "eventType": "feedback.processed",
  "feedbackId": "f47ac10b-58cc-4372-a567-0e02b2c3d479",
  "dedupeKey": "user-123:artifact-456:artifact-acceptance:2026-01-04T09",
  "userId": "user-123",
  "artifactType": "checklist",
  "feedbackChannel": "explicit",
  "outcome": "accepted",
  "score": 4,
  "features": {
    "taskType": "code-review",
    "artifactComplexity": "medium",
    "userExperience": "intermediate"
  },
  "privacyFlags": {
    "anonymize": false,
    "excludeFromTraining": false
  }
}
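
Invariant 3 (privacy enforcement) applies to this published event as well. One way to sketch the scrub step, using a salted hash as a stand-in for whatever anonymization scheme the platform actually uses (the salt and hashing choice here are illustrative assumptions):

```python
import hashlib

def scrub_event(event: dict, salt: str = "rotate-me") -> dict:
    # If the source feedback was flagged anonymize=true, replace the
    # userId with a salted hash before the event leaves the processor.
    # The salted-SHA256 scheme is an illustrative assumption.
    if not event.get("privacyFlags", {}).get("anonymize", False):
        return event
    scrubbed = dict(event)
    digest = hashlib.sha256((salt + event["userId"]).encode()).hexdigest()
    scrubbed["userId"] = f"anon-{digest[:16]}"
    return scrubbed
```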

Learning Signal Generation

The User Feedback Processor generates reinforcement learning (RL) signals from processed feedback to enable continuous model improvement per ADR-010.

Learning Signal Schema

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "LearningSignal",
  "type": "object",
  "required": ["signalId", "signalType", "timestamp"],
  "properties": {
    "signalId": {
      "type": "string",
      "format": "uuid",
      "description": "Unique identifier for this learning signal"
    },
    "signalType": {
      "type": "string",
      "enum": ["reward", "policy_update", "exploration_hint"],
      "description": "Type of RL signal generated"
    },
    "sourceEvent": {
      "type": "object",
      "properties": {
        "feedbackId": { "type": "string", "format": "uuid" },
        "feedbackChannel": { "type": "string" },
        "feedbackType": { "type": "string" }
      }
    },
    "reward": {
      "type": "object",
      "description": "Reward signal for RL model",
      "properties": {
        "value": { "type": "number", "minimum": -1, "maximum": 1 },
        "confidence": { "type": "number", "minimum": 0, "maximum": 1 },
        "decay": { "type": "number", "default": 0.99 }
      }
    },
    "policyUpdate": {
      "type": "object",
      "description": "Suggested policy adjustment",
      "properties": {
        "targetModel": { "type": "string" },
        "adjustmentType": {
          "type": "string",
          "enum": ["weight_boost", "weight_decay", "feature_add", "feature_remove"]
        },
        "featureVector": {
          "type": "object",
          "additionalProperties": { "type": "number" }
        }
      }
    },
    "explorationHint": {
      "type": "object",
      "description": "Hint for exploration/exploitation balance",
      "properties": {
        "decreaseExploration": { "type": "boolean" },
        "reason": { "type": "string" }
      }
    },
    "context": {
      "type": "object",
      "properties": {
        "userId": { "type": "string" },
        "artifactType": { "type": "string" },
        "taskType": { "type": "string" }
      }
    },
    "timestamp": {
      "type": "string",
      "format": "date-time"
    }
  }
}

Signal Generation Rules

Feedback Type Signal Type Reward Value Policy Action
artifact-acceptance (accepted=true) reward +0.5 to +1.0 Boost similar features
artifact-acceptance (accepted=false) reward -0.5 to -1.0 Decay similar features
rating (4-5) reward +0.3 to +0.8 Reinforce current policy
rating (1-2) reward + policy_update -0.3 to -0.8 Adjust feature weights
modification policy_update N/A Learn from correction
High acceptance rate (>90%) exploration_hint N/A Decrease exploration
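
The rating rows of the table can be read as a mapping from the 1-5 scale into the stated reward ranges. The piecewise-linear interpolation below is an assumption consistent with the table, not a normative formula:

```python
def rating_to_reward(rating: int) -> float:
    # Piecewise-linear mapping consistent with the rules table:
    # 4-5 -> [+0.3, +0.8], 1-2 -> [-0.8, -0.3], 3 -> neutral.
    # The interpolation itself is an illustrative assumption.
    if not 1 <= rating <= 5:
        raise ValueError("rating must be in [1, 5]")
    if rating >= 4:
        return 0.3 + 0.5 * (rating - 4)   # 4 -> +0.3, 5 -> +0.8
    if rating <= 2:
        return -0.3 - 0.5 * (2 - rating)  # 2 -> -0.3, 1 -> -0.8
    return 0.0                            # 3 -> neutral
```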

Signal Publishing

Learning signals are published to dedicated NATS JetStream subjects:

Subject Description
learning.signals.reward Reward signals for model training
learning.signals.policy Policy adjustment suggestions
learning.signals.exploration Exploration/exploitation hints

Published Event Example:

{
  "eventType": "learning.signal.generated",
  "signalId": "sig-789",
  "signalType": "reward",
  "sourceEvent": {
    "feedbackId": "f47ac10b-58cc-4372-a567-0e02b2c3d479",
    "feedbackChannel": "explicit",
    "feedbackType": "artifact-acceptance"
  },
  "reward": {
    "value": 0.85,
    "confidence": 0.92,
    "decay": 0.99
  },
  "context": {
    "userId": "user-123",
    "artifactType": "checklist",
    "taskType": "code-review"
  },
  "timestamp": "2026-01-04T10:00:00Z"
}
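
A sketch of the publish step: choosing the JetStream subject from signalType and serializing the event. The actual publish call (via a NATS client) is elided; only the subject routing and payload encoding are shown.

```python
import json

SUBJECTS = {
    "reward": "learning.signals.reward",
    "policy_update": "learning.signals.policy",
    "exploration_hint": "learning.signals.exploration",
}

def prepare_publish(signal: dict):
    # Route the signal to its JetStream subject and encode the payload.
    # With a NATS client this would be followed by something like
    # js.publish(subject, payload) -- elided here.
    subject = SUBJECTS[signal["signalType"]]
    return subject, json.dumps(signal).encode("utf-8")
```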

Response Codes

Code Description
202 Accepted Feedback queued for processing
400 Bad Request Invalid feedback data or schema violation
409 Conflict Duplicate feedbackId (idempotency check)
422 Unprocessable Entity Invalid feedbackChannel or feedbackType
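
Server-side, the 409 path can be sketched with a seen-ID store. The in-memory set below stands in for whatever durable store (e.g. a database unique constraint on feedbackId) the service actually uses:

```python
class FeedbackIngest:
    # Minimal idempotency sketch: an in-memory set stands in for a
    # durable store (e.g. a DB unique constraint on feedbackId).
    def __init__(self):
        self._seen = set()

    def submit(self, event: dict):
        fid = event["feedbackId"]
        if fid in self._seen:
            # Invariant 1: the same feedbackId is processed exactly once.
            return 409, {"error": "duplicate_feedback_id", "feedbackId": fid}
        self._seen.add(fid)
        return 202, {"feedbackId": fid, "status": "queued"}
```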

Dependencies


Error Handling


Performance Considerations


Security Considerations


Testing Strategy


Personalization Model

User Preference Schema

User preferences are stored as versioned JSONB documents with full audit history.

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "UserPreference",
  "type": "object",
  "required": ["userId", "preferenceVersion", "createdAt", "updatedAt"],
  "properties": {
    "userId": {
      "type": "string",
      "description": "User identifier"
    },
    "preferenceVersion": {
      "type": "integer",
      "minimum": 1,
      "description": "Monotonically increasing version for optimistic concurrency"
    },
    "artifactPreferences": {
      "type": "object",
      "description": "Per-artifact-type preferences learned from feedback",
      "additionalProperties": {
        "type": "object",
        "properties": {
          "preferredComplexity": {
            "type": "string",
            "enum": ["simple", "moderate", "detailed"]
          },
          "preferredFormat": {
            "type": "string",
            "enum": ["bullet-list", "prose", "table", "code", "mixed"]
          },
          "averageRating": {
            "type": "number",
            "minimum": 1,
            "maximum": 5
          },
          "acceptanceRate": {
            "type": "number",
            "minimum": 0,
            "maximum": 1
          },
          "sampleCount": {
            "type": "integer",
            "minimum": 0
          }
        }
      }
    },
    "contextPreferences": {
      "type": "object",
      "description": "Context-specific preferences (by task type, project)",
      "additionalProperties": {
        "type": "object",
        "properties": {
          "preferredAgents": {
            "type": "array",
            "items": { "type": "string" }
          },
          "preferredResponseLength": {
            "type": "string",
            "enum": ["concise", "balanced", "verbose"]
          },
          "preferredCodeStyle": {
            "type": "string",
            "enum": ["minimal", "documented", "verbose"]
          }
        }
      }
    },
    "globalPreferences": {
      "type": "object",
      "properties": {
        "communicationStyle": {
          "type": "string",
          "enum": ["formal", "casual", "technical"],
          "default": "technical"
        },
        "detailLevel": {
          "type": "string",
          "enum": ["overview", "standard", "deep-dive"],
          "default": "standard"
        },
        "learningMode": {
          "type": "boolean",
          "default": true,
          "description": "If true, preferences are updated from feedback"
        }
      }
    },
    "createdAt": {
      "type": "string",
      "format": "date-time"
    },
    "updatedAt": {
      "type": "string",
      "format": "date-time"
    }
  }
}

Preference Storage

Field Type Description
user_id VARCHAR(255) Primary key
preference_version INTEGER Optimistic locking version
preferences JSONB UserPreference document
created_at TIMESTAMP Record creation time
updated_at TIMESTAMP Last modification time

Indexing Strategy:


Preference API Specification

Endpoints

Endpoint Method Description
/v1/preferences/{userId} GET Retrieve user preferences
/v1/preferences/{userId} PUT Update user preferences (requires version)
/v1/preferences/{userId}/reset POST Reset preferences to defaults
/v1/preferences/{userId}/history GET Get preference version history

Request/Response

Get Preferences

GET /v1/preferences/user-123

// Response: 200 OK
{
  "userId": "user-123",
  "preferenceVersion": 42,
  "artifactPreferences": {
    "checklist": {
      "preferredComplexity": "moderate",
      "preferredFormat": "bullet-list",
      "averageRating": 4.2,
      "acceptanceRate": 0.85,
      "sampleCount": 127
    }
  },
  "globalPreferences": {
    "communicationStyle": "technical",
    "detailLevel": "standard",
    "learningMode": true
  },
  "updatedAt": "2026-01-04T09:55:00Z"
}

Update Preferences

PUT /v1/preferences/user-123
{
  "preferenceVersion": 42,
  "globalPreferences": {
    "detailLevel": "deep-dive"
  }
}

// Response: 200 OK
{
  "userId": "user-123",
  "preferenceVersion": 43,
  "updatedAt": "2026-01-04T09:57:00Z"
}

// Response: 409 Conflict (version mismatch)
{
  "error": "version_conflict",
  "currentVersion": 43,
  "providedVersion": 42
}
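
A client handling the 409 path typically re-reads the current version and retries. A sketch with an in-memory stand-in for the preference service (the store class here is hypothetical, for illustration only):

```python
class PreferenceStore:
    # Hypothetical in-memory stand-in for the preference service.
    def __init__(self):
        self._version, self._prefs = 1, {}

    def get(self):
        return self._version, dict(self._prefs)

    def put(self, version: int, patch: dict):
        # Optimistic concurrency: reject stale versions with 409.
        if version != self._version:
            return 409, {"error": "version_conflict",
                         "currentVersion": self._version}
        self._prefs.update(patch)
        self._version += 1
        return 200, {"preferenceVersion": self._version}

def update_with_retry(store, patch: dict, max_attempts: int = 3) -> int:
    # Read-modify-write loop: on 409, reload the current version and retry.
    for _ in range(max_attempts):
        version, _prefs = store.get()
        code, body = store.put(version, patch)
        if code == 200:
            return body["preferenceVersion"]
    raise RuntimeError("preference update failed after retries")
```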

Adaptive Response Logic

Preference Application Flow

graph TD
    A[AI Request] --> B{User Has Preferences?}
    B -->|No| C[Use Defaults]
    B -->|Yes| D[Load UserPreference]
    D --> E[Extract Context]
    E --> F[Apply Artifact Prefs]
    F --> G[Apply Context Prefs]
    G --> H[Apply Global Prefs]
    H --> I[Generate Adapted Response]
    C --> I
    I --> J[Return Response]
    J --> K[Collect Feedback]
    K --> L{learningMode?}
    L -->|Yes| M[Update Preferences]
    L -->|No| N[Skip Update]
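
The layering in the flow above (defaults, then artifact, context, and global preferences) can be sketched as successive dict merges. The default values come from the schema defaults; the merge order is inferred from the diagram and everything else is illustrative:

```python
from typing import Optional

DEFAULTS = {"communicationStyle": "technical", "detailLevel": "standard"}

def resolve_preferences(user_prefs: Optional[dict], artifact_type: str,
                        task_type: str) -> dict:
    # Later layers override earlier ones, mirroring the flow:
    # defaults -> artifact prefs -> context prefs -> global prefs.
    resolved = dict(DEFAULTS)
    if user_prefs is None:
        return resolved  # "No preferences" branch: use defaults
    resolved.update(user_prefs.get("artifactPreferences", {}).get(artifact_type, {}))
    resolved.update(user_prefs.get("contextPreferences", {}).get(task_type, {}))
    resolved.update(user_prefs.get("globalPreferences", {}))
    return resolved
```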

Preference Update Algorithm

When feedback is processed, preferences are updated using weighted moving averages:

newRating = (oldRating * weight + feedbackRating) / (weight + 1)
weight = min(sampleCount, 100)  // Cap influence of old data
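
The update rule above, in runnable form; the cap of 100 comes directly from the pseudocode, and the rest mirrors it:

```python
def update_average_rating(old_rating: float, sample_count: int,
                          feedback_rating: float) -> float:
    # Weighted moving average with a capped weight so accumulated history
    # cannot drown out new feedback (cap of 100 per the rule above).
    weight = min(sample_count, 100)
    return (old_rating * weight + feedback_rating) / (weight + 1)
```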

Update Triggers:

Invariants

  1. Preference Versioning: Updates MUST use optimistic concurrency with preferenceVersion.
  2. Learning Mode Respect: If learningMode is false, preferences MUST NOT be auto-updated.
  3. Preference Isolation: User preferences MUST NOT influence other users’ responses.