What Is an A/B Test Assignment Service?
An A/B test assignment service deterministically buckets users into experiment variants, maintains consistent assignments across requests and devices, supports force-override for QA and debugging, and logs every assignment decision for statistical analysis. It is a foundational piece of experimentation infrastructure that product and engineering teams rely on to run hundreds of concurrent tests without interference or inconsistency.
Requirements
Functional Requirements
- Assign a user (or anonymous visitor) to a variant for a given experiment using hash-stable bucketing.
- Ensure the same user always receives the same variant for the same experiment (user-level consistency).
- Support force-override assignments for specific user IDs or device IDs, used by QA teams to validate variants.
- Support holdout groups: exclude a percentage of users from all experiments for clean baseline measurement.
- Log every assignment with experiment ID, variant, user ID, and timestamp for downstream statistical analysis.
- Allow experiments to target a traffic percentage less than 100%.
Non-Functional Requirements
- Assignment lookup under 5 ms P99 (served from in-process cache).
- Support 10,000 assignment requests per second per instance.
- Experiment config changes propagated to all service instances within 30 seconds.
Data Model
Experiment
- experiment_id UUID — primary key.
- name, description.
- traffic_percentage TINYINT — 0-100; what fraction of eligible users are enrolled.
- salt VARCHAR — unique random string mixed into the hash to prevent cross-experiment correlation.
- status ENUM: DRAFT, RUNNING, PAUSED, CONCLUDED.
- start_at, end_at — NULLABLE timestamps.
Variant
- variant_id UUID, experiment_id FK.
- name (e.g., control, treatment_a), weight INTEGER.
- Weights across all variants in an experiment sum to 100.
ForceOverride
- experiment_id FK, entity_id (user or device), entity_type ENUM.
- variant_id FK.
- created_by, expires_at — NULLABLE timestamp.
AssignmentLog
- log_id UUID, experiment_id, variant_id, user_id.
- assignment_source ENUM: HASH, OVERRIDE, HOLDOUT.
- assigned_at timestamp.
Core Algorithms
Hash-Stable Bucketing
For a user with ID U and experiment with salt S and traffic percentage T, compute bucket = murmurhash3(U + ":" + S) mod 10000. If bucket >= T * 100, the user is not enrolled (outside traffic percentage). Otherwise, assign the user to a variant by mapping the bucket value into the cumulative weight ranges of the variants. Because the hash is deterministic, the same user always lands in the same bucket for the same experiment, regardless of which service instance processes the request or how many times the assignment is computed.
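The bucketing computation above can be sketched as follows. The document specifies MurmurHash3; this sketch substitutes SHA-256 so it runs with only the standard library, and the variant names and weights are illustrative.

```python
import hashlib

BUCKETS = 10_000  # 0.01% granularity

def bucket_for(user_id: str, salt: str) -> int:
    # Hash the salted key and reduce to a bucket in [0, 10000).
    # SHA-256 stands in for MurmurHash3 to keep the sketch dependency-free.
    digest = hashlib.sha256(f"{user_id}:{salt}".encode()).digest()
    return int.from_bytes(digest[:8], "big") % BUCKETS

def assign(user_id: str, salt: str, traffic_pct: int,
           variants: list[tuple[str, int]]):
    """variants: list of (name, weight); weights sum to 100."""
    b = bucket_for(user_id, salt)
    if b >= traffic_pct * 100:
        return None  # outside the experiment's traffic allocation
    # Rescale the bucket into [0, 100) and walk cumulative weight ranges.
    point = b * 100 // (traffic_pct * 100)
    cum = 0
    for name, weight in variants:
        cum += weight
        if point < cum:
            return name
    return variants[-1][0]  # guard against rounding at the boundary
```

Because `assign` touches no external state, two instances computing the same (user, experiment) pair independently always agree, which is what makes per-user assignment storage unnecessary.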
Force-Override Resolution
Before hash bucketing, the service checks the ForceOverride table (loaded into an in-memory map at startup, refreshed every 30 seconds). If an override exists for the user ID or device ID for this experiment, the overridden variant is returned immediately and the assignment is logged with assignment_source=OVERRIDE. Overrides take priority over all other logic including holdouts, enabling QA engineers to test any variant on demand.
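A minimal sketch of that resolution order, assuming the overrides have already been loaded into a dict keyed by (experiment_id, entity_id); the `Assignment` type and `hash_assign` callback are illustrative names, not part of the source.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Assignment:
    variant: Optional[str]
    source: str  # "HASH", "OVERRIDE", or "HOLDOUT"

def resolve(experiment_id: str, user_id: str, device_id: str,
            overrides: dict, hash_assign: Callable) -> Assignment:
    # Overrides take priority over everything else, including holdouts,
    # so QA can force any variant on demand.
    for entity_id in (user_id, device_id):
        variant = overrides.get((experiment_id, entity_id))
        if variant is not None:
            return Assignment(variant=variant, source="OVERRIDE")
    # No override: fall through to the normal hash-bucketing path.
    return hash_assign(experiment_id, user_id)
```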
Holdout Groups
A global holdout is implemented as a special pseudo-experiment with its own salt and traffic percentage. Before evaluating any experiment, the service computes the holdout bucket. If the user falls in the holdout, they are excluded from all running experiments and receive a null assignment. This guarantees a clean control population for measuring the aggregate impact of the experimentation program.
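Since the holdout is just a pseudo-experiment, its check reuses the same bucketing primitive. A sketch under assumed values (the salt string and 5% figure are illustrative, and SHA-256 again stands in for MurmurHash3):

```python
import hashlib

HOLDOUT_SALT = "global-holdout-v1"  # assumption: dedicated salt for the holdout
HOLDOUT_PCT = 5                     # assumption: 5% of users held out

def in_holdout(user_id: str) -> bool:
    # Same bucketing scheme as a normal experiment, evaluated first.
    digest = hashlib.sha256(f"{user_id}:{HOLDOUT_SALT}".encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") % 10_000
    return bucket < HOLDOUT_PCT * 100
```

Giving the holdout its own salt keeps holdout membership uncorrelated with any individual experiment's buckets, which is the same cross-experiment independence property the per-experiment salts provide.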
API Design
- POST /v1/assign — body: user_id, experiment_ids array. Returns a map of experiment_id to variant assignment. Supports batch lookup for multiple experiments in a single call.
- GET /v1/experiments/{experiment_id}/assignments — paginated log of all assignments for analysis; restricted to internal callers.
- POST /v1/overrides — create a force-override; body: experiment_id, entity_id, entity_type, variant_id, expires_at.
- DELETE /v1/overrides/{experiment_id}/{entity_id} — remove a force-override.
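A hypothetical request/response pair for the batch POST /v1/assign endpoint, shown as Python literals; the experiment IDs, variant names, and the shape of each entry are illustrative assumptions, not a documented contract.

```python
# Hypothetical batch request for POST /v1/assign.
request_body = {
    "user_id": "u-123",
    "experiment_ids": ["exp-checkout", "exp-search"],
}

# One entry per requested experiment; a null variant means the user is
# outside the traffic allocation or in the global holdout.
response_body = {
    "exp-checkout": {"variant": "treatment_a", "source": "HASH"},
    "exp-search": {"variant": None, "source": "HOLDOUT"},
}
```

Batching matters here: a page render that touches ten experiments costs one round trip instead of ten, which keeps the 5 ms P99 target achievable from the caller's side as well.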
Scalability and Consistency
In-Process Config Cache
The full experiment and variant configuration is loaded into an in-process hash map at startup. A background refresh polls the database every 30 seconds and atomically swaps the map reference. All assignment computations read only from this in-memory structure, making them CPU-bound with no I/O, achieving sub-millisecond compute time per assignment.
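The atomic-swap pattern can be sketched as below; the `ConfigCache` class and `loader` callable are illustrative names. The key point is that the refresh builds the new map off to the side and then replaces a single reference, so readers never see a half-updated config.

```python
import threading
import time

class ConfigCache:
    """Serves an immutable experiment-config snapshot to assignment code."""

    def __init__(self, loader, refresh_seconds: int = 30):
        self._loader = loader            # callable returning a fresh config dict
        self._refresh_seconds = refresh_seconds
        self._snapshot = loader()        # initial load at startup

    def get(self) -> dict:
        # Reading one reference is atomic in CPython; no lock on the hot path.
        return self._snapshot

    def refresh_once(self) -> None:
        # Build the replacement map fully, then swap the reference atomically.
        self._snapshot = self._loader()

    def start_background_refresh(self) -> None:
        def loop():
            while True:
                time.sleep(self._refresh_seconds)
                self.refresh_once()
        threading.Thread(target=loop, daemon=True).start()
```

Assignment code calls `cache.get()` once per request and computes against that snapshot, so even a refresh landing mid-request cannot mix old and new experiment definitions.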
Assignment Logging
Assignment records are written asynchronously to a Kafka topic partitioned by experiment_id. A downstream consumer batches them into a columnar analytics store (BigQuery or Redshift) for statistical analysis. Logging is fire-and-forget from the request path; dropped log entries are acceptable as they represent a tiny fraction of assignments and do not affect variant serving.
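A minimal sketch of the fire-and-forget contract, with a bounded in-process queue standing in for the Kafka producer; the function names and batch size are illustrative. The essential behavior is that a full buffer drops the record instead of blocking the request path.

```python
import json
import queue
import time

# Bounded buffer standing in for the Kafka producer's send queue.
log_queue: "queue.Queue[str]" = queue.Queue(maxsize=10_000)

def log_assignment(experiment_id: str, variant_id: str,
                   user_id: str, source: str) -> None:
    record = json.dumps({
        "experiment_id": experiment_id,
        "variant_id": variant_id,
        "user_id": user_id,
        "assignment_source": source,
        "assigned_at": time.time(),
    })
    try:
        log_queue.put_nowait(record)  # never block the request path
    except queue.Full:
        pass                          # drop rather than add latency

def drain(batch_size: int = 500) -> list:
    """Downstream consumer: collect a batch for the analytics store."""
    batch = []
    while len(batch) < batch_size:
        try:
            batch.append(log_queue.get_nowait())
        except queue.Empty:
            break
    return batch
```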
Frequently Asked Questions

How does a hash-stable bucketing algorithm assign users to A/B variants?
The bucketing function hashes a salted key (e.g., SHA256(experiment_id + user_id)) and maps the output to a [0, 1) float by dividing by 2^32. Traffic allocation boundaries (e.g., control 0–0.5, treatment 0.5–1.0) determine the variant. The hash is deterministic, so the same user always maps to the same bucket without storing per-user assignments, making the approach stateless and horizontally scalable.

How is user-level assignment consistency guaranteed across services?
All services use the same bucketing library with the same salt and algorithm version. Assignment is computed at the edge (or in a shared SDK) and can optionally be cached per session to avoid re-hashing on every request. If an experiment changes its traffic split, existing users outside the new allocation are excluded rather than reassigned mid-experiment, preserving statistical validity.

How does force-override work for QA testing of A/B variants?
A force-override mechanism allows internal users or test accounts to bypass the hash bucketing and be assigned to a specific variant via a query parameter, cookie, or header (e.g., X-Force-Variant: treatment). The assignment service checks for overrides before evaluating the hash. Overrides are gated to allowlisted user IDs or IP ranges and excluded from analysis data to prevent contamination of experiment metrics.

How is assignment logging structured for statistical analysis?
Each assignment event is logged with user_id, experiment_id, variant, timestamp, and context metadata (platform, app version). Logs are streamed to a data warehouse where analysts join them with outcome metrics on user_id + timestamp to compute per-variant conversion rates, p-values, and confidence intervals. The logging layer deduplicates by (user_id, experiment_id) to count each user only once in the exposure table.