Low Level Design: User Feedback Service

What Is a User Feedback Service?

A user feedback service collects structured and free-form feedback from users across product surfaces, classifies and scores it, routes actionable items to the right product teams, and surfaces trends through aggregated dashboards. It replaces ad-hoc survey tools with a first-class internal platform.

Functional Requirements

  • Collect feedback via in-app widgets, email surveys, or API
  • Support NPS (0–10 scale), CSAT (1–5), and free-text response types
  • Categorize feedback by product area, sentiment, and topic
  • Route feedback items to the appropriate team or Jira/Linear project
  • Run sentiment analysis on free-text responses
  • Provide dashboards showing NPS/CSAT trends, volume, and category breakdown

Non-Functional Requirements

  • Submission API must be low-latency and highly available (user-facing)
  • ML enrichment (sentiment, topic) can be asynchronous
  • Dashboard queries over millions of responses must return in seconds
  • PII in free-text must be handled carefully (optional masking / access controls)
  • Feedback must not be lost — durable ingestion pipeline

Core Entities

Entity        Key Fields
Survey        id, name, type ('nps'|'csat'|'freeform'), product_area, active, created_at
Response      id, survey_id, user_id, score, text, submitted_at, source ('in_app'|'email'|'api')
Enrichment    id, response_id, sentiment ('positive'|'neutral'|'negative'), topics[], category, enriched_at
RoutingRule   id, survey_id, condition (JSON), destination_type ('team'|'jira'|'linear'), destination_id
RoutedItem    id, response_id, rule_id, external_ticket_id, status, created_at

High-Level Architecture

Client (in-app widget / email link / API)
  |
  v
Submission API (REST, stateless, autoscaled)
  |-- Validate & persist Response row (Postgres)
  |-- Publish event to Feedback Queue (Kafka)
  |-- Return 202 Accepted immediately

Enrichment Worker (consumes Feedback Queue)
  |-- Call Sentiment Analysis service (internal ML or AWS Comprehend)
  |-- Run topic classifier
  |-- Write Enrichment row
  |-- Publish enriched event to Routing Queue

Routing Worker (consumes Routing Queue)
  |-- Evaluate RoutingRules against response + enrichment
  |-- Create external tickets via Jira / Linear API
  |-- Write RoutedItem row

Analytics Aggregator
  |-- Consumes Feedback Queue (separate consumer group)
  |-- Writes to ClickHouse (or BigQuery) for OLAP queries
  |-- Maintains pre-aggregated materialized views: NPS by day/area, CSAT by cohort

Dashboard API
  |-- Reads from ClickHouse
  |-- Serves trend charts, category breakdowns, verbatim samples

NPS and CSAT Scoring

NPS = % Promoters (9–10) minus % Detractors (0–6). Passives (7–8) are excluded from the calculation. CSAT = average score across responses for a given survey and time window. Both metrics are computed server-side from raw Response rows and cached in materialized views refreshed on a schedule (e.g., every 15 minutes) or on-demand for smaller datasets.
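The two formulas above are straightforward to compute from raw scores; a minimal sketch:

```python
def nps(scores):
    """NPS = % promoters (9-10) minus % detractors (0-6), on a -100..100 scale.
    Passives (7-8) count in the denominator but neither percentage."""
    if not scores:
        return None
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

def csat(scores):
    """CSAT as defined above: mean 1-5 score for a survey and time window."""
    return round(sum(scores) / len(scores), 2) if scores else None

# Two promoters (10, 9), two detractors (6, 3), two passives (8, 7):
assert nps([10, 9, 8, 7, 6, 3]) == 0
```

In production these run as ClickHouse aggregations over the materialized views rather than in application code, but the arithmetic is identical.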

Sentiment Analysis

Free-text responses are sent asynchronously to a sentiment analysis service. Options: AWS Comprehend (managed, no infra), a fine-tuned BERT model deployed on internal inference servers, or a third-party API (e.g., OpenAI). Results are stored in the Enrichment table. The enrichment step also extracts topics (bug report, feature request, UX complaint, praise) using a multi-label classifier trained on historical labeled data.
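A sketch of the enrichment worker's shape, assuming the queue consumer loop is elsewhere. The keyword lexicons here are toy stand-ins for whichever backend is chosen (Comprehend, a fine-tuned BERT, or a third-party API), and `extract_topics` is likewise a stub for the multi-label classifier.

```python
# Toy lexicons standing in for a real sentiment model.
NEGATIVE_WORDS = {"broken", "slow", "crash", "bug", "terrible"}
POSITIVE_WORDS = {"love", "great", "fast", "awesome"}

def classify_sentiment(text):
    words = set(text.lower().split())
    if words & NEGATIVE_WORDS:
        return "negative"
    if words & POSITIVE_WORDS:
        return "positive"
    return "neutral"

def extract_topics(text):
    """Stub for the multi-label topic classifier."""
    lowered = text.lower()
    topics = []
    if "crash" in lowered or "bug" in lowered:
        topics.append("bug_report")
    if "wish" in lowered or "please add" in lowered:
        topics.append("feature_request")
    return topics

def enrich(response, routing_queue):
    """Process one feedback event: build the Enrichment row, publish onward."""
    enrichment = {
        "response_id": response["id"],
        "sentiment": classify_sentiment(response.get("text") or ""),
        "topics": extract_topics(response.get("text") or ""),
    }
    routing_queue.append({"type": "feedback.enriched", "enrichment": enrichment})
    return enrichment
```

Swapping the stubbed classifiers for a real model call leaves the worker's write-then-publish structure unchanged.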

Routing to Product Teams

RoutingRules are evaluated after enrichment. A rule condition is a JSON predicate, e.g. {"sentiment": "negative", "category": "payments", "score_lte": 3}. The routing worker evaluates rules in priority order and creates a ticket in the first matched destination. Rate limiting and deduplication prevent flooding a team's backlog (e.g., at most one ticket per user per 30 days for similar feedback). The routing worker is idempotent: if it crashes and reprocesses a message, it checks for an existing RoutedItem before creating a new ticket.
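The predicate evaluation and idempotency check can be sketched as follows. `routed_items` is an in-memory stand-in for the RoutedItem table, and `create_ticket` abstracts the Jira/Linear API call; both names are illustrative.

```python
def matches(condition, response, enrichment):
    """Evaluate a JSON predicate such as
    {"sentiment": "negative", "category": "payments", "score_lte": 3}."""
    if "sentiment" in condition and enrichment.get("sentiment") != condition["sentiment"]:
        return False
    if "category" in condition and enrichment.get("category") != condition["category"]:
        return False
    if "score_lte" in condition:
        score = response.get("score")
        if score is None or score > condition["score_lte"]:
            return False
    return True

routed_items = {}  # (response_id, rule_id) -> ticket id; stand-in for RoutedItem

def route(response, enrichment, rules, create_ticket):
    """First matching rule in priority order wins; safe to reprocess."""
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if matches(rule["condition"], response, enrichment):
            key = (response["id"], rule["id"])
            if key not in routed_items:  # skip duplicate ticket on crash recovery
                routed_items[key] = create_ticket(rule["destination_id"], response)
            return routed_items[key]
    return None
```

Running `route` twice for the same message returns the same ticket id and calls `create_ticket` only once, which is the idempotency property the design calls for.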

Aggregated Dashboards

ClickHouse is well-suited here: append-only inserts, columnar storage, fast GROUP BY aggregations over millions of rows. Pre-aggregate NPS and CSAT by (survey_id, product_area, date) in a materialized view. For verbatim samples, store a random reservoir sample of 1,000 responses per (category, sentiment, week) to avoid reading the full table for qualitative review. Dashboard filters (date range, product area, score band, sentiment) map directly to partition keys and WHERE clauses.
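The random reservoir sample mentioned above is classically maintained with Algorithm R, which keeps a uniform k-item sample of a stream without knowing its length in advance. A minimal sketch:

```python
import random

def reservoir_add(sample, seen, item, k=1000):
    """Algorithm R: after n items, each item is in `sample` with probability k/n.
    Mutates `sample` in place and returns the updated item count."""
    seen += 1
    if len(sample) < k:
        sample.append(item)
    else:
        j = random.randrange(seen)  # uniform in [0, seen)
        if j < k:
            sample[j] = item        # replace a random slot
    return seen
```

One reservoir per (category, sentiment, week) bucket keeps the qualitative-review query bounded regardless of total response volume.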

PII and Access Controls

  • Free-text can contain names, emails, account numbers — run a PII detector before storing or mask in the analytics pipeline
  • Raw responses (with user_id) accessible only to authorized roles; dashboards show anonymized aggregates by default
  • Retention policy: raw responses deleted after N days per data governance policy; aggregates retained indefinitely
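A regex-based masking pass is the simplest form of the detector mentioned above; a production pipeline would more likely use a dedicated PII service (e.g., Microsoft Presidio or AWS Comprehend's PII detection) rather than hand-rolled patterns. The patterns below are illustrative.

```python
import re

# Illustrative patterns only: emails and card/account-like digit runs.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
LONG_NUMBER_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def mask_pii(text):
    """Replace detected PII with placeholder tokens before analytics storage."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = LONG_NUMBER_RE.sub("[NUMBER]", text)
    return text
```

Masking in the analytics pipeline while keeping the raw row under role-based access control gives dashboards safe verbatims without destroying the original for authorized reviewers.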

Scaling Considerations

  • Submission API: stateless, horizontally scalable; Postgres write throughput is sufficient at moderate rates; switch to Cassandra or Scylla if write volume exceeds tens of thousands per second
  • Enrichment latency: async pipeline means users never wait for ML inference; SLA for enrichment is minutes, not milliseconds
  • Dashboard query latency: ClickHouse materialized views + result caching in Redis for common queries
  • Routing idempotency: exactly-once ticket creation via deduplication key in external API calls or a local lock table

Interview Talking Points

  • Why decouple enrichment and routing from the submission path? (latency, fault isolation, independent scaling)
  • How do you keep NPS/CSAT numbers consistent as new responses arrive? (materialized views with refresh cadence)
  • How do you prevent a surge of negative feedback from spamming a team's Jira? (rate limiting and dedup in routing worker)
  • How would you support A/B testing different survey prompts and comparing their NPS? (survey_id as a dimension in all aggregations)
  • How do you handle GDPR right-to-erasure for feedback data? (delete raw responses by user_id; aggregates are already anonymized)

