What Is an Ad Targeting Service?
An Ad Targeting Service is responsible for selecting the most relevant advertisements to show a specific user at a specific moment. Given a user context (device, location, browsing history, demographic signals) and an inventory slot (page, app, placement), the targeting service evaluates which active ad campaigns qualify to compete for that slot. It acts as the gatekeeping layer before the auction: only ads whose targeting criteria match the user enter the bidding process.
In high-volume systems such as Google Ads or Meta Ads, this component must evaluate millions of campaigns in single-digit milliseconds. The design must balance expressiveness of targeting rules with raw lookup speed.
Data Model
-- Campaign targeting rules
CREATE TABLE campaigns (
    campaign_id   BIGINT PRIMARY KEY,
    advertiser_id BIGINT NOT NULL,
    status        ENUM('active', 'paused', 'ended'),
    daily_budget  DECIMAL(12,4),
    start_date    DATE,
    end_date      DATE
);

CREATE TABLE targeting_rules (
    rule_id     BIGINT PRIMARY KEY,
    campaign_id BIGINT REFERENCES campaigns(campaign_id),
    dimension   VARCHAR(64),  -- e.g. country, device_type, age_bucket
    operator    VARCHAR(16),  -- IN, NOT_IN, GTE, LTE
    value_set   TEXT          -- JSON array or scalar
);

CREATE TABLE user_segments (
    user_id    BIGINT,
    segment_id BIGINT,
    added_at   TIMESTAMP,
    PRIMARY KEY (user_id, segment_id)
);

CREATE TABLE campaign_segments (
    campaign_id BIGINT REFERENCES campaigns(campaign_id),
    segment_id  BIGINT,
    PRIMARY KEY (campaign_id, segment_id)
);
Core Algorithm: Targeting Criteria Matching
When an ad request arrives, the service builds a user context object containing all known signals: geo, device type, OS, language, time-of-day bucket, and audience segment IDs derived from behavioral data. Matching proceeds in two phases:
- Coarse filtering (index lookup): An inverted index maps each (dimension, value) pair to the bitset of campaigns targeting that value. Each lookup is O(1); the service then intersects the bitsets with AND operations, whose cost is linear in the bitset length rather than in the number of matching campaigns. Segment membership is resolved via a Redis set lookup keyed by user ID.
- Fine filtering (rule evaluation): Each candidate campaign's full rule set is evaluated against the user context. Range conditions (age_bucket GTE 25) and exclusion rules (country NOT_IN ['XX']) are checked. Campaigns failing any rule are dropped.
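The two phases can be sketched in a few dozen lines of Python. The index layout, the wildcard handling, and the operator semantics below are illustrative assumptions, not the design of any particular ad server; campaign IDs are mapped to dense bit positions and a Python int serves as an arbitrary-width bitset.

```python
from dataclasses import dataclass, field

@dataclass
class TargetingIndex:
    """Inverted index: (dimension, value) -> bitset of campaign slots."""
    num_campaigns: int
    postings: dict = field(default_factory=dict)   # (dim, value) -> int bitset
    dimensions: set = field(default_factory=set)   # dimensions that are indexed

    def add(self, campaign_slot: int, dimension: str, value) -> None:
        self.dimensions.add(dimension)
        key = (dimension, value)
        self.postings[key] = self.postings.get(key, 0) | (1 << campaign_slot)

    def coarse_match(self, context: dict) -> int:
        """AND together the postings for every indexed context signal."""
        candidates = (1 << self.num_campaigns) - 1   # start with all campaigns
        for dimension, value in context.items():
            if dimension not in self.dimensions:
                continue   # signal not indexed; left to fine filtering
            exact = self.postings.get((dimension, value), 0)
            wildcard = self.postings.get((dimension, "*"), 0)
            candidates &= exact | wildcard
        return candidates

def _eval(rule, context) -> bool:
    """Evaluate one (dimension, operator, value_set) rule against the context."""
    dim, op, values = rule
    actual = context.get(dim)
    if op == "IN":
        return actual in values
    if op == "NOT_IN":
        return actual not in values
    if op == "GTE":
        return actual is not None and actual >= values
    if op == "LTE":
        return actual is not None and actual <= values
    return False

def fine_match(candidates: int, rules: dict, context: dict) -> list:
    """Walk the candidate bitset and evaluate each campaign's full rule list."""
    survivors, slot = [], 0
    while candidates:
        if candidates & 1 and all(_eval(r, context) for r in rules.get(slot, [])):
            survivors.append(slot)
        candidates >>= 1
        slot += 1
    return survivors
```

Note the wildcard postings: a campaign that does not restrict a dimension must appear under a `*` entry for that dimension, otherwise the AND would incorrectly drop it.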
The surviving set is passed to the auction layer. Campaigns with exhausted daily budgets are pre-filtered by a pacing service that marks them inactive in a shared cache before the request even arrives.
Failure Handling and Latency Requirements
The targeting service must respond within 10-15 ms end-to-end to leave budget for the auction. Key strategies:
- In-process cache: Campaign targeting rules are loaded into the service's memory and refreshed every 30-60 seconds. No database hit per request.
- Circuit breaker on segment store: If the Redis segment lookup times out, the service falls back to targeting only on context signals (geo, device), accepting lower relevance rather than missing the slot entirely.
- Graceful degradation: If the targeting service itself is unhealthy, the ad server falls back to run-of-network campaigns that require no targeting evaluation.
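The circuit-breaker fallback described above can be sketched as follows. This is a minimal hand-rolled breaker for illustration only; production services typically rely on a resilience library or a service-mesh policy, and all names here are assumptions.

```python
import time

class SegmentLookupBreaker:
    """Circuit breaker around the segment-store call. After repeated
    timeouts the breaker opens and the service targets on context
    signals only (degraded relevance, but the slot is still filled)."""

    def __init__(self, failure_threshold=3, cooldown_seconds=5.0,
                 clock=time.monotonic):
        self.failure_threshold = failure_threshold
        self.cooldown_seconds = cooldown_seconds
        self.clock = clock
        self.failures = 0
        self.opened_at = None

    def lookup_segments(self, fetch, user_id):
        """fetch(user_id) -> set of segment IDs; may raise TimeoutError.
        Returns (segments, degraded_flag)."""
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.cooldown_seconds:
                return set(), True        # open: skip the store entirely
            self.opened_at = None         # cooldown elapsed: try again
        try:
            segments = fetch(user_id)
            self.failures = 0             # success resets the failure count
            return segments, False
        except TimeoutError:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = self.clock()
            return set(), True            # degrade to context-only targeting
```

The caller treats `degraded_flag=True` as "evaluate only geo/device rules", which preserves fill rate at the cost of relevance.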
Scalability Considerations
The targeting fleet scales horizontally. Each node holds a full in-memory snapshot of active campaigns (typically tens of millions of rules compressed into bitsets). Updates are pushed via a change-data-capture (CDC) pipeline from the campaign database. Consistent hashing is not required since every node is identical; load balancers distribute requests round-robin. Segment resolution traffic on Redis is sharded by user ID. For very large segment sets (>10 billion user-segment pairs), Bloom filters can pre-screen membership before hitting the segment store.
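A minimal sketch of the Bloom-filter pre-screen, assuming the filter is built offline from the full user-segment table and shipped to serving nodes; the sizing and hashing scheme here are illustrative, not a production recipe:

```python
import hashlib

class BloomFilter:
    """Tiny Bloom filter for pre-screening (user_id, segment_id) membership
    before hitting the segment store. A 'no' is definitive; a 'yes' must be
    confirmed against the authoritative store."""

    def __init__(self, num_bits: int, num_hashes: int):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = 0   # Python int as an arbitrary-width bit array

    def _positions(self, key: bytes):
        # Derive k positions by salting a single hash function with the index.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(i.to_bytes(4, "big") + key).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, key: bytes) -> None:
        for pos in self._positions(key):
            self.bits |= 1 << pos

    def might_contain(self, key: bytes) -> bool:
        return all(self.bits >> pos & 1 for pos in self._positions(key))

def pair_key(user_id: int, segment_id: int) -> bytes:
    return user_id.to_bytes(8, "big") + segment_id.to_bytes(8, "big")
```

A negative answer lets the node skip the Redis round trip entirely; the false-positive rate is tuned by sizing `num_bits` and `num_hashes` against the expected number of pairs.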
Summary
The Ad Targeting Service is a read-heavy, latency-critical component that translates a user context into a qualified campaign set. Key design decisions are: inverted index for coarse filtering, in-memory rule cache for fine filtering, and graceful degradation paths that preserve fill rate when dependencies fail. Interview candidates should be comfortable discussing bitset intersection, segment resolution at scale, and pacing integration.
Frequently Asked Questions

What is the core challenge in designing an ad targeting system?
The core challenge is matching ads to users in real time with low latency while balancing relevance, advertiser budget constraints, and user experience. The system must process user signals such as demographics, interests, and behavior, and apply targeting criteria efficiently at massive scale.

How do you store and retrieve user targeting segments at scale?
User segments are typically stored in a distributed key-value store like Redis or a columnar store like Cassandra, keyed by user ID. Segments are pre-computed by an offline pipeline (e.g., Spark) and pushed to a low-latency serving layer. Bloom filters or bitset indexes can accelerate membership checks across millions of segments.

What data sources feed into an ad targeting pipeline?
Typical data sources include first-party behavioral data (clicks, searches, purchases), demographic attributes, contextual page signals, CRM data from advertisers, and third-party audience data. These are joined and aggregated in a feature store that is updated in near real-time via stream processing.

How do you handle privacy constraints like GDPR and CCPA in an ad targeting system?
Privacy compliance requires storing user consent flags alongside profile data and enforcing them at query time so that no targeting signal is used without valid consent. Data minimization, anonymization techniques such as differential privacy, and strict retention policies must be built into the pipeline from the start rather than added as an afterthought.