User Settings Service Low-Level Design: Typed Settings Schema, Migration, and Bulk Export

A user settings service manages configuration that affects application behavior — email address, two-factor auth state, timezone, connected integrations. Unlike preferences (which are behavioral choices), settings often carry strict types and cross-cutting validation rules, and they must survive schema evolution as the product grows.

Requirements

Functional

  • Store typed settings grouped by feature namespace (account, security, integrations).
  • Validate values against a versioned schema on write.
  • Migrate existing settings when a schema version changes.
  • Support bulk export of all settings for portability (GDPR data download).
  • Provide per-namespace read endpoints for feature teams.

Non-Functional

  • Schema validation under 5 ms per write.
  • Bulk export completed within 2 seconds for typical users.
  • Zero downtime schema migrations with backward compatibility.

Data Model

settings_schema(
  namespace      VARCHAR(100),
  key            VARCHAR(200),
  schema_version INT,
  data_type      ENUM('bool', 'int', 'string', 'json'),
  constraints    JSONB,        -- min/max, regex, enum values
  default_value  TEXT,
  deprecated_at  TIMESTAMPTZ,
  PRIMARY KEY (namespace, key, schema_version)
)

user_settings(
  user_id        BIGINT,
  namespace      VARCHAR(100),
  key            VARCHAR(200),
  value          TEXT,
  schema_version INT,
  updated_at     TIMESTAMPTZ,
  PRIMARY KEY (user_id, namespace, key)
)

settings_changelog(
  id             BIGSERIAL PRIMARY KEY,
  user_id        BIGINT,
  namespace      VARCHAR(100),
  key            VARCHAR(200),
  old_value      TEXT,
  new_value      TEXT,
  changed_by     BIGINT,
  changed_at     TIMESTAMPTZ
)

Core Algorithms

Schema Validation

On every write the service looks up the active schema version for the given namespace/key pair. It deserializes the constraint JSONB into a validator object and runs the value through type coercion and constraint checks (range, pattern, enum membership). Validators are compiled once per schema version and cached in a concurrent hash map, so repeated validation of the same key is an O(1) hash lookup plus a lightweight constraint check.
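A minimal sketch of that compile-once, cache-forever pattern. The constraint names (min, max, pattern, enum) mirror the constraints JSONB column above; the function names and cache layout are illustrative, not a prescribed API.

```python
import re
from threading import Lock

# Validator cache keyed by (namespace, key, schema_version).
# Compiled once per schema version; reads afterwards are a hash lookup.
_VALIDATOR_CACHE: dict = {}
_CACHE_LOCK = Lock()

def compile_validator(data_type: str, constraints: dict):
    """Build a validation closure from the constraints JSONB."""
    pattern = re.compile(constraints["pattern"]) if "pattern" in constraints else None

    def validate(raw: str):
        # Type coercion first, then constraint checks.
        if data_type == "int":
            value = int(raw)  # raises ValueError on non-numeric input
            if "min" in constraints and value < constraints["min"]:
                raise ValueError(f"{value} below min {constraints['min']}")
            if "max" in constraints and value > constraints["max"]:
                raise ValueError(f"{value} above max {constraints['max']}")
        elif data_type == "bool":
            if raw not in ("true", "false"):
                raise ValueError(f"not a bool: {raw!r}")
            value = raw == "true"
        else:  # string / json payloads stay as text here
            value = raw
            if pattern and not pattern.fullmatch(raw):
                raise ValueError(f"pattern mismatch: {raw!r}")
            if "enum" in constraints and raw not in constraints["enum"]:
                raise ValueError(f"not in enum: {raw!r}")
        return value

    return validate

def get_validator(namespace: str, key: str, version: int,
                  data_type: str, constraints: dict):
    cache_key = (namespace, key, version)
    with _CACHE_LOCK:
        if cache_key not in _VALIDATOR_CACHE:
            _VALIDATOR_CACHE[cache_key] = compile_validator(data_type, constraints)
        return _VALIDATOR_CACHE[cache_key]
```

Because the closure is built once per schema version, the hot write path never touches the regex compiler or the constraint JSONB parser.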

Versioned Settings Migration

When a schema version increments, the service registers a migration function for each affected key. Migration is lazy: when a user setting is read and its stored schema_version is below the current version, the service applies chained migration functions (v1->v2->v3) to produce the current-version value, then writes the migrated value back. This avoids expensive bulk backfills while ensuring all reads return schema-current values. Migration functions are pure transformations registered in code and tested independently.
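The lazy read-path chain can be sketched as below. The timezone migration itself is a made-up example (offset string to IANA name to a normalized alias); only the chaining mechanism reflects the design above.

```python
# Migration registry: MIGRATIONS[(namespace, key)][from_version] is a pure
# function old_value -> new_value for the step from_version -> from_version + 1.
MIGRATIONS = {
    ("account", "timezone"): {
        # v1 -> v2: v1 stored UTC offsets; v2 stores IANA names (illustrative mapping).
        1: lambda v: {"+00:00": "Etc/UTC", "+02:00": "Europe/Berlin"}.get(v, "Etc/UTC"),
        # v2 -> v3: normalize the legacy alias.
        2: lambda v: "UTC" if v == "Etc/UTC" else v,
    },
}

def migrate(namespace: str, key: str, value, stored_version: int, current_version: int):
    """Apply chained migrations v_stored -> ... -> v_current on read.

    The caller then writes the migrated value (and current_version) back,
    so each record pays the migration cost at most once.
    """
    for version in range(stored_version, current_version):
        step = MIGRATIONS[(namespace, key)][version]
        value = step(value)
    return value
```

Keeping each step a pure function of the previous value is what makes the chain testable in isolation and safe to run inside the read path.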

Bulk Export

The export endpoint streams all rows for a given user_id from the user_settings table, groups them by namespace, and serializes to a structured JSON document. For users with thousands of settings (e.g., many integrations), the query uses a server-side cursor with page size 500 to avoid large result set memory pressure. The export includes human-readable key names and schema descriptions sourced from the in-memory schema map.
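A sketch of the grouping and serialization step. In production `rows` would be a server-side cursor fetching 500 rows at a time; here any iterable behaves the same way, and `schema_descriptions` stands in for the in-memory schema map.

```python
import json
from itertools import islice

def export_settings(rows, schema_descriptions, page_size=500):
    """Group (namespace, key, value, schema_version, updated_at) rows by
    namespace and serialize to a structured JSON document."""
    document: dict = {}
    rows = iter(rows)
    while True:
        # Mirrors cursor.fetchmany(page_size): bounded memory per page.
        page = list(islice(rows, page_size))
        if not page:
            break
        for namespace, key, value, schema_version, updated_at in page:
            document.setdefault(namespace, {})[key] = {
                "value": value,
                "schema_version": schema_version,
                "updated_at": updated_at,
                "description": schema_descriptions.get((namespace, key), ""),
            }
    return json.dumps(document, indent=2)
```

Recording schema_version per key in the export is what later lets an importer run the correct migration chain when restoring the document.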

API Design

  • GET /v1/settings/{namespace} — returns all settings in a namespace for the caller.
  • GET /v1/settings/{namespace}/{key} — single setting with current value and schema metadata.
  • PUT /v1/settings/{namespace}/{key} — set a value; validates against current schema version.
  • DELETE /v1/settings/{namespace}/{key} — reset to schema default.
  • GET /v1/settings/export — full settings export as JSON; triggers an async job for large accounts.
  • GET /v1/settings/schema/{namespace} — returns current schema for a namespace; used by UI to build dynamic forms.

Scalability and Fault Tolerance

Namespacing lets feature teams own their schema independently and deploy schema changes without coordinating with other teams. A settings registry service holds the canonical schema map and exposes it over an internal gRPC endpoint. Feature services register schemas at deploy time. The main settings service polls the registry every 30 seconds and hot-reloads changed schemas without restart.
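The poll-and-hot-swap loop might look like the following sketch, where `fetch_schemas` stands in for the registry's gRPC call; the class and method names are illustrative.

```python
import threading

class SchemaCache:
    """Polls the registry and hot-swaps the schema map without a restart."""

    def __init__(self, fetch_schemas, poll_interval=30.0):
        self._fetch = fetch_schemas          # stand-in for the gRPC call
        self._interval = poll_interval
        self._schemas = fetch_schemas()      # initial load at startup
        self._stop = threading.Event()

    def get(self, namespace):
        # Readers always see a consistent snapshot: refresh swaps the
        # whole dict reference rather than mutating it in place.
        return self._schemas.get(namespace, {})

    def refresh_once(self):
        fresh = self._fetch()
        if fresh != self._schemas:
            self._schemas = fresh            # atomic reference swap

    def run_forever(self):
        # Event.wait doubles as an interruptible sleep.
        while not self._stop.wait(self._interval):
            self.refresh_once()

    def stop(self):
        self._stop.set()
```

Swapping the whole map reference (instead of mutating it) means in-flight validations keep a coherent view of whatever schema version they started with.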

For write-heavy operations (e.g., bulk integration sync writing dozens of settings atomically), the service wraps the batch in a single DB transaction and publishes a single settings.bulk_updated event rather than one event per key. Downstream consumers receive the batch as a diff and apply it atomically.
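A sketch of the batched write path, assuming a DB-API style connection (SQLite-flavored placeholders and ON CONFLICT syntax here) and a `publish(event_name, payload)` hook; both are stand-ins, not a prescribed interface.

```python
def bulk_update(conn, publish, user_id: int, updates: dict):
    """Write many settings in one transaction, then emit one batch event.

    updates maps (namespace, key) -> new value. The diff of old/new values
    is collected inside the transaction so the event matches what committed.
    """
    diff = {}
    with conn:  # DB-API: commit on clean exit, roll back on exception
        cur = conn.cursor()
        for (namespace, key), value in updates.items():
            cur.execute(
                "SELECT value FROM user_settings "
                "WHERE user_id = ? AND namespace = ? AND key = ?",
                (user_id, namespace, key),
            )
            row = cur.fetchone()
            old = row[0] if row else None
            cur.execute(
                "INSERT INTO user_settings (user_id, namespace, key, value) "
                "VALUES (?, ?, ?, ?) "
                "ON CONFLICT(user_id, namespace, key) DO UPDATE SET value = excluded.value",
                (user_id, namespace, key, value),
            )
            diff[f"{namespace}.{key}"] = {"old": old, "new": value}
    # One settings.bulk_updated event for the whole batch, not one per key.
    publish("settings.bulk_updated", {"user_id": user_id, "diff": diff})
```

Publishing after the transaction commits avoids announcing changes that later roll back; consumers apply the diff atomically on their side.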

Interview Tips

  • Distinguish settings (configuring the system) from preferences (personalizing the experience) — interviewers sometimes conflate them.
  • Discuss how deprecated keys are handled: the service accepts writes to deprecated keys with a warning header but stops including them in exports after a sunset date.
  • Mention that bulk export is a natural GDPR portability artifact; the same endpoint can power account migration to another provider.
  • For migration, emphasize testing each migration function in isolation with property-based tests to avoid data corruption at scale.
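The property-based testing point in the last tip can be made concrete. This is a hand-rolled randomized check standing in for a Hypothesis-style test, and the offset-to-minutes migration pair is purely illustrative: for any valid v1 value, downgrading then re-migrating must round-trip losslessly.

```python
import random

def migrate_v1_to_v2(offset: str) -> int:
    """Illustrative migration: v1 stored '+HH:MM' offsets; v2 stores signed minutes."""
    sign = 1 if offset[0] == "+" else -1
    hours, minutes = offset[1:].split(":")
    return sign * (int(hours) * 60 + int(minutes))

def downgrade_v2_to_v1(total: int) -> str:
    """Inverse transformation, used only to generate test inputs."""
    sign = "+" if total >= 0 else "-"
    hours, minutes = divmod(abs(total), 60)
    return f"{sign}{hours:02d}:{minutes:02d}"

# Property: migrate(downgrade(x)) == x for every representable offset.
random.seed(0)
for _ in range(1000):
    total = random.randint(-14 * 60, 14 * 60)
    assert migrate_v1_to_v2(downgrade_v2_to_v1(total)) == total
```

A real suite would let the framework shrink failing inputs; the point is that each migration function is small and pure enough to exercise over its whole input domain before it ever touches user data.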

FAQ

How should a user settings service enforce typed schema validation?

Each setting is registered in a schema registry with its key, data type, allowed values or range, and default. On every write the service deserializes the incoming value and validates it against the registered schema before persisting. Using JSON Schema or a Protobuf/Thrift definition as the canonical contract lets both the backend and client SDKs share the same validation rules, eliminating a class of client-side bugs.

How does versioned settings migration work?

Each settings record stores a schema_version integer alongside the payload. When the service reads a record whose version is lower than the current schema version it applies a chain of migration functions — one per version increment — to transform the old data into the current shape before returning it. Migrations are written as pure functions and tested offline so the live read path stays fast and side-effect-free.

Why use per-feature namespacing in a settings service?

Namespacing (e.g., notifications.email.digest_enabled vs. privacy.profile_visibility) prevents key collisions as the number of owning teams grows, makes access-control grants coarser-grained and auditable at the namespace level, and allows bulk reads for a single feature area without fetching the entire settings blob. It also isolates failure: a corrupt namespace can be reset without touching unrelated settings.

What should a bulk export format for user settings include?

A GDPR-compliant bulk export should be a structured document (JSON or CSV) containing every namespace, key, current value, data type, last-modified timestamp, and the schema version at time of export. Including the schema version lets the importer apply the correct migration chain when restoring. Sensitive values (e.g., linked account tokens) should be redacted or encrypted under a user-controlled key before the file is delivered.

