What Is a User Feedback Service?
A user feedback service collects structured and free-form feedback from users across product surfaces, classifies and scores it, routes actionable items to the right product teams, and surfaces trends through aggregated dashboards. It replaces ad-hoc survey tools with a first-class internal platform.
Functional Requirements
- Collect feedback via in-app widgets, email surveys, or API
- Support NPS (0–10 scale), CSAT (1–5), and free-text response types
- Categorize feedback by product area, sentiment, and topic
- Route feedback items to the appropriate team or Jira/Linear project
- Run sentiment analysis on free-text responses
- Provide dashboards showing NPS/CSAT trends, volume, and category breakdown
Non-Functional Requirements
- Submission API must be low-latency and highly available (user-facing)
- ML enrichment (sentiment, topic) can be asynchronous
- Dashboard queries over millions of responses must return in seconds
- PII in free-text must be handled carefully (optional masking / access controls)
- Feedback must not be lost — durable ingestion pipeline
Core Entities
| Entity | Key Fields |
|---|---|
| Survey | id, name, type ('nps'|'csat'|'freeform'), product_area, active, created_at |
| Response | id, survey_id, user_id, score, text, submitted_at, source ('in_app'|'email'|'api') |
| Enrichment | id, response_id, sentiment ('positive'|'neutral'|'negative'), topics[], category, enriched_at |
| RoutingRule | id, survey_id, condition (JSON), destination_type ('team'|'jira'|'linear'), destination_id |
| RoutedItem | id, response_id, rule_id, external_ticket_id, status, created_at |
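The entity table above maps naturally onto typed records. A minimal Python sketch of the three hot-path entities (Survey, Response, Enrichment), assuming string IDs and optional score/text to cover freeform surveys:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Survey:
    id: str
    name: str
    type: str            # 'nps' | 'csat' | 'freeform'
    product_area: str
    active: bool
    created_at: datetime

@dataclass
class Response:
    id: str
    survey_id: str
    user_id: str
    submitted_at: datetime
    source: str          # 'in_app' | 'email' | 'api'
    score: Optional[int] = None   # absent for freeform surveys
    text: Optional[str] = None    # absent for pure-score responses

@dataclass
class Enrichment:
    id: str
    response_id: str
    sentiment: str       # 'positive' | 'neutral' | 'negative'
    topics: list[str]
    category: str
    enriched_at: datetime
```

RoutingRule and RoutedItem follow the same pattern; they are omitted here for brevity.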
High-Level Architecture
```
Client (in-app widget / email link / API)
  |
  v
Submission API (REST, stateless, autoscaled)
  |-- Validate & persist Response row (Postgres)
  |-- Publish event to Feedback Queue (Kafka)
  |-- Return 202 Accepted immediately

Enrichment Worker (consumes Feedback Queue)
  |-- Call Sentiment Analysis service (internal ML or AWS Comprehend)
  |-- Run topic classifier
  |-- Write Enrichment row
  |-- Publish enriched event to Routing Queue

Routing Worker (consumes Routing Queue)
  |-- Evaluate RoutingRules against response + enrichment
  |-- Create external tickets via Jira / Linear API
  |-- Write RoutedItem row

Analytics Aggregator
  |-- Consumes Feedback Queue (separate consumer group)
  |-- Writes to ClickHouse (or BigQuery) for OLAP queries
  |-- Maintains pre-aggregated materialized views: NPS by day/area, CSAT by cohort

Dashboard API
  |-- Reads from ClickHouse
  |-- Serves trend charts, category breakdowns, verbatim samples
```
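The submission path is the only user-facing hop: validate, persist, publish, return 202. A minimal Python sketch of that handler, using in-memory lists as stand-ins for the Postgres insert and the Kafka publish (both names are placeholders, not a real client API):

```python
import uuid
from datetime import datetime, timezone

# Score bounds per survey type, matching the functional requirements.
VALID_RANGES = {"nps": (0, 10), "csat": (1, 5)}

def handle_submission(payload, db, queue, survey_type):
    """Validate a feedback payload, persist it, publish an event, return 202."""
    score = payload.get("score")
    if survey_type in VALID_RANGES:
        lo, hi = VALID_RANGES[survey_type]
        if score is None or not (lo <= score <= hi):
            return {"status": 400, "error": f"score must be in [{lo}, {hi}]"}
    response = {
        "id": str(uuid.uuid4()),
        "survey_id": payload["survey_id"],
        "user_id": payload["user_id"],
        "score": score,
        "text": payload.get("text"),
        "submitted_at": datetime.now(timezone.utc).isoformat(),
        "source": payload.get("source", "api"),
    }
    db.append(response)      # stand-in for the Postgres INSERT
    queue.append(response)   # stand-in for the Kafka publish
    return {"status": 202, "id": response["id"]}
```

Enrichment, routing, and analytics all hang off the queued event, so none of them add latency to this path.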
NPS and CSAT Scoring
NPS = % Promoters (9–10) minus % Detractors (0–6). Passives (7–8) are excluded from the calculation. CSAT = average score across responses for a given survey and time window. Both metrics are computed server-side from raw Response rows and cached in materialized views refreshed on a schedule (e.g., every 15 minutes) or on-demand for smaller datasets.
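The two formulas above are small enough to sketch directly; this is the computation the materialized views would cache:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6).
    Passives (7-8) count toward the denominator but neither percentage."""
    if not scores:
        return 0.0
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

def csat(scores):
    """CSAT: mean score (1-5 scale) over the survey/time window."""
    return sum(scores) / len(scores) if scores else 0.0
```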
Sentiment Analysis
Free-text responses are sent asynchronously to a sentiment analysis service. Options: AWS Comprehend (managed, no infra), a fine-tuned BERT model deployed on internal inference servers, or a third-party API (e.g., OpenAI). Results are stored in the Enrichment table. The enrichment step also extracts topics (bug report, feature request, UX complaint, praise) using a multi-label classifier trained on historical labeled data.
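The enrichment contract can be illustrated with a toy keyword-based classifier. This is emphatically not the production approach (which the text says is an ML model or managed API); the keyword sets below are invented for illustration, and only the output shape (sentiment plus multi-label topics) matters:

```python
# Illustrative keyword sets -- a real system would call an ML model here.
NEGATIVE = {"broken", "crash", "slow", "bug", "hate"}
POSITIVE = {"love", "great", "fast", "awesome"}
TOPIC_KEYWORDS = {
    "bug_report": {"broken", "crash", "bug", "error"},
    "feature_request": {"wish", "add", "missing"},
    "praise": {"love", "great", "awesome"},
}

def enrich(text):
    """Return the sentiment + topics payload an Enrichment row would store."""
    words = set(text.lower().split())
    neg, pos = len(words & NEGATIVE), len(words & POSITIVE)
    sentiment = "negative" if neg > pos else "positive" if pos > neg else "neutral"
    topics = [t for t, kws in TOPIC_KEYWORDS.items() if words & kws]
    return {"sentiment": sentiment, "topics": topics}
```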
Routing to Product Teams
RoutingRules are evaluated after enrichment. A rule condition is a JSON predicate, e.g. `{"sentiment": "negative", "category": "payments", "score_lte": 3}`. The routing worker evaluates rules in priority order and creates a ticket in the matched destination. Rate limiting and deduplication prevent flooding a team's backlog (e.g., at most one ticket per user per 30 days for similar feedback). The routing worker is idempotent: if it crashes and reprocesses a message, it checks for an existing RoutedItem before creating a new ticket.
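The predicate evaluation and the idempotency check can be sketched together. The `_lte`/`_gte` suffix convention is an assumption about how the JSON conditions encode comparisons; plain keys require equality:

```python
def matches(condition, response, enrichment):
    """Evaluate a JSON predicate like {"sentiment": "negative", "score_lte": 3}
    against the merged response + enrichment facts."""
    facts = {**response, **enrichment}
    for key, expected in condition.items():
        if key.endswith("_lte"):
            value = facts.get(key[:-4])
            if value is None or value > expected:
                return False
        elif key.endswith("_gte"):
            value = facts.get(key[:-4])
            if value is None or value < expected:
                return False
        elif facts.get(key) != expected:
            return False
    return True

def route(rules, response, enrichment, routed_response_ids):
    """Return the first matching rule in priority order, skipping responses
    that already have a RoutedItem (the idempotency check from the text)."""
    if response["id"] in routed_response_ids:
        return None
    for rule in rules:
        if matches(rule["condition"], response, enrichment):
            return rule
    return None
```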
Aggregated Dashboards
ClickHouse is well-suited here: append-only inserts, columnar storage, fast GROUP BY aggregations over millions of rows. Pre-aggregate NPS and CSAT by (survey_id, product_area, date) in a materialized view. For verbatim samples, store a random reservoir sample of 1,000 responses per (category, sentiment, week) to avoid reading the full table for qualitative review. Dashboard filters (date range, product area, score band, sentiment) map directly to partition keys and WHERE clauses.
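The per-bucket reservoir sample mentioned above can be maintained with Algorithm R: the first k verbatims fill the reservoir, and each later one replaces a random slot with probability k/n, yielding a uniform sample without storing the full stream. A minimal sketch (the design uses k=1,000 per (category, sentiment, week) bucket):

```python
import random

def reservoir_add(sample, item, seen_count, k=1000):
    """Algorithm R step: maintain a uniform random sample of size k.
    seen_count is the number of items seen so far, including this one."""
    if len(sample) < k:
        sample.append(item)
    else:
        j = random.randrange(seen_count)
        if j < k:
            sample[j] = item
    return sample
```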
PII and Access Controls
- Free-text can contain names, emails, account numbers — run a PII detector before storing or mask in the analytics pipeline
- Raw responses (with user_id) accessible only to authorized roles; dashboards show anonymized aggregates by default
- Retention policy: raw responses deleted after N days per data governance policy; aggregates retained indefinitely
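The masking step in the first bullet can be sketched as regex redaction. These patterns are purely illustrative; a production detector would use a dedicated tool (e.g., Microsoft Presidio) or a trained NER model, since regexes miss names and catch false positives:

```python
import re

# Illustrative patterns only -- not a production-grade PII detector.
PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),
    (re.compile(r"\b\d{12,19}\b"), "<ACCOUNT_NUMBER>"),
    (re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"), "<PHONE>"),
]

def mask_pii(text):
    """Replace detected PII spans with typed placeholder tokens."""
    for pattern, token in PII_PATTERNS:
        text = pattern.sub(token, text)
    return text
```

Masking before the analytics pipeline means dashboards never see raw PII; the unmasked original stays in Postgres behind role-based access.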
Scaling Considerations
- Submission API: stateless, horizontally scalable; Postgres write throughput is sufficient at moderate rates; switch to Cassandra or Scylla if write volume exceeds tens of thousands per second
- Enrichment latency: async pipeline means users never wait for ML inference; SLA for enrichment is minutes, not milliseconds
- Dashboard query latency: ClickHouse materialized views + result caching in Redis for common queries
- Routing idempotency: exactly-once ticket creation via deduplication key in external API calls or a local lock table
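The deduplication key from the last bullet can be derived deterministically from the (response, rule) pair, so a worker that crashes and retries sends the same key and the external API (or a local lock table keyed on it) creates at most one ticket. A minimal sketch, assuming a 32-character key is acceptable to the downstream API:

```python
import hashlib

def ticket_dedup_key(response_id, rule_id):
    """Deterministic idempotency key: same (response, rule) pair -> same key,
    regardless of how many times the routing worker retries."""
    raw = f"{response_id}:{rule_id}".encode()
    return hashlib.sha256(raw).hexdigest()[:32]
```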
Interview Talking Points
- Why decouple enrichment and routing from the submission path? (latency, fault isolation, independent scaling)
- How do you keep NPS/CSAT numbers consistent as new responses arrive? (materialized views with refresh cadence)
- How do you prevent a surge of negative feedback from spamming a team's Jira? (rate limiting and dedup in routing worker)
- How would you support A/B testing different survey prompts and comparing their NPS? (survey_id as a dimension in all aggregations)
- How do you handle GDPR right-to-erasure for feedback data? (delete raw responses by user_id; aggregates are already anonymized)