Low Level Design: Newsletter Service

What Is a Newsletter Service?

A newsletter service lets product teams send bulk email campaigns to segmented subscriber lists. It handles subscription management, campaign scheduling, batch delivery, tracking, and deliverability — at scale, reliably, and without exceeding the rate limits of the ESP (email service provider, e.g., SendGrid or SES).

Functional Requirements

  • Subscribe / unsubscribe / re-subscribe users
  • Create and schedule campaigns with rich HTML or plain-text content
  • Send campaigns to full lists or filtered segments
  • Track opens, clicks, bounces, and unsubscribes per campaign
  • Honor unsubscribes immediately (CAN-SPAM / GDPR)
  • Provide aggregated deliverability and engagement dashboards

Non-Functional Requirements

  • Send millions of emails per campaign within a reasonable time window
  • Idempotent sends — no duplicate delivery on retry
  • Unsubscribe processing must be near-real-time
  • Dashboard queries should be fast (pre-aggregated)
  • High availability for subscription API; eventual consistency for analytics

Core Entities

| Entity | Key Fields |
| --- | --- |
| Subscriber | id, email, status ('active' \| 'unsubscribed' \| 'bounced'), subscribed_at, list_ids |
| List | id, name, description, created_at |
| Campaign | id, list_id, subject, body_html, body_text, status ('draft' \| 'scheduled' \| 'sending' \| 'sent'), scheduled_at |
| Send | id, campaign_id, subscriber_id, status ('queued' \| 'delivered' \| 'bounced' \| 'failed'), sent_at |
| Event | id, send_id, type ('open' \| 'click' \| 'bounce' \| 'unsubscribe'), occurred_at, metadata |

High-Level Architecture

The system splits into three planes: the Subscription API, the Campaign Sender, and the Event Processor.

Client
  |
  v
Subscription API (REST)
  |-- Subscriber DB (Postgres)
  |-- List DB (Postgres)

Campaign Scheduler (cron / admin UI)
  |-- Campaign DB (Postgres)
  |-- Segment query -> Subscriber DB
  |-- Enqueue sends -> Send Queue (Kafka / SQS)

Send Workers (horizontally scaled)
  |-- Dequeue from Send Queue
  |-- Check unsubscribe / suppression list (Redis set)
  |-- Call ESP (SendGrid / SES) via HTTP
  |-- Write send status -> Send DB

ESP Webhook Receiver
  |-- Ingest bounce / open / click / unsub events
  |-- Publish to Event Queue (Kafka)

Event Consumer
  |-- Update Send status
  |-- Update Subscriber status (bounced / unsubscribed)
  |-- Increment campaign counters (Redis)
  |-- Write raw events to analytics store (ClickHouse / BigQuery)

Dashboard API
  |-- Read from analytics store
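The send-worker leg of the diagram can be sketched as a single loop iteration. This is a minimal sketch: `queue`, `esp`, and `send_db` are hypothetical interfaces, not real client APIs, and recording ESP acceptance as 'delivered' follows the Send entity above (true delivery confirmation arrives later via webhooks).

```python
# Sketch of one send-worker iteration; `queue`, `esp`, and `send_db` are
# hypothetical interfaces standing in for SQS/Kafka, an ESP client, and the Send DB.
def process_one(queue, suppression_set, esp, send_db):
    task = queue.dequeue()
    if task is None:
        return None  # queue empty
    if task["email"] in suppression_set:
        # Unsubscribed/bounced while queued: drop without calling the ESP.
        send_db.write(task["send_id"], "suppressed")
        return "suppressed"
    # Stable idempotency key so a retried task cannot double-send.
    key = f'{task["campaign_id"]}:{task["subscriber_id"]}'
    ok = esp.send(to=task["email"], idempotency_key=key)
    # ESP acceptance is recorded as 'delivered' per the Send entity;
    # bounces arriving later via webhook will overwrite this status.
    status = "delivered" if ok else "failed"
    send_db.write(task["send_id"], status)
    return status
```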

Subscriber List Management

Subscribers belong to one or more lists. Segmentation is a query-time filter (e.g., status = 'active' AND list_id = X AND country = 'US'). For very large lists, the campaign scheduler pages through the subscriber table with a cursor and writes batches of (campaign_id, subscriber_id) rows into the send queue rather than loading everything into memory.
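The cursor-paged fan-out can be sketched as follows, assuming a hypothetical `db`/`queue` pair; the SQL, helper names, and monotonically increasing integer ids are illustrative assumptions.

```python
# Cursor-based fan-out sketch: page through the subscriber table by id and
# enqueue send tasks in batches, never holding the full list in memory.
# `db` and `queue` are hypothetical helpers, not a real client API.
def fan_out(db, queue, campaign_id, list_id, batch_size=1000):
    cursor = 0  # last subscriber id seen; ids assumed monotonically increasing
    while True:
        rows = db.query(
            "SELECT id FROM subscribers "
            "WHERE list_id = %s AND status = 'active' AND id > %s "
            "ORDER BY id LIMIT %s",
            (list_id, cursor, batch_size),
        )
        if not rows:
            break
        queue.enqueue_batch(
            [{"campaign_id": campaign_id, "subscriber_id": r[0]} for r in rows]
        )
        cursor = rows[-1][0]  # advance the cursor past this batch
```

A keyset cursor (`id > ?` with `ORDER BY id`) stays O(batch) per page, unlike `OFFSET`, which rescans skipped rows on every page.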

Campaign Scheduling and Batch Sending

A cron job checks for campaigns where scheduled_at <= NOW() and status is 'scheduled'. It atomically transitions the campaign to 'sending' (optimistic lock or SELECT FOR UPDATE) and fans out send tasks to the queue. Workers pull tasks and call the ESP. Each task carries a unique idempotency key (campaign_id + subscriber_id) to prevent double-send on retry.
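A minimal sketch of the atomic claim and the idempotency key, assuming a psycopg-style connection object; the table and column names are illustrative.

```python
# Scheduler sketch: claim due campaigns atomically, derive idempotency keys.
def claim_due_campaigns(conn):
    """Flip due campaigns from 'scheduled' to 'sending' in one statement.

    Because the UPDATE ... RETURNING is a single atomic statement, two
    concurrent schedulers can never both claim the same campaign.
    """
    with conn.cursor() as cur:
        cur.execute(
            "UPDATE campaigns SET status = 'sending' "
            "WHERE status = 'scheduled' AND scheduled_at <= NOW() "
            "RETURNING id"
        )
        return [row[0] for row in cur.fetchall()]

def idempotency_key(campaign_id, subscriber_id):
    """Stable per-(campaign, subscriber) key passed to the ESP on every attempt."""
    return f"{campaign_id}:{subscriber_id}"
```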

Unsubscribe Handling

Every email contains a one-click unsubscribe link with a signed token (HMAC(subscriber_id, campaign_id, secret)). On click, the Subscription API flips the subscriber status to 'unsubscribed' and adds the email to a Redis suppression set. Send workers check the suppression set before calling the ESP, so even queued messages that haven't been sent yet are dropped.
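The signed token can be sketched with Python's standard `hmac` module; the payload layout and dot separator are assumptions (they require ids to contain no dots).

```python
import hashlib
import hmac

# Minimal signed-token sketch for one-click unsubscribe links.
# Assumes subscriber_id and campaign_id contain no '.' characters.
def make_unsub_token(subscriber_id: str, campaign_id: str, secret: bytes) -> str:
    payload = f"{subscriber_id}.{campaign_id}"
    sig = hmac.new(secret, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"

def verify_unsub_token(token: str, secret: bytes):
    """Return (subscriber_id, campaign_id) if the signature checks out, else None."""
    try:
        subscriber_id, campaign_id, sig = token.rsplit(".", 2)
    except ValueError:
        return None
    expected = hmac.new(
        secret, f"{subscriber_id}.{campaign_id}".encode(), hashlib.sha256
    ).hexdigest()
    # Constant-time comparison avoids leaking signature bytes via timing.
    if hmac.compare_digest(sig, expected):
        return subscriber_id, campaign_id
    return None
```

Signing means the unsubscribe endpoint needs no session or lookup before trusting the ids in the URL, and a token for one subscriber cannot be forged into a token for another.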

Open and Click Tracking

Opens are tracked by embedding a 1×1 pixel image whose URL encodes the send ID (/track/open/{send_id}). Clicks are tracked by rewriting links to a redirect URL (/track/click/{send_id}/{link_hash}). Both endpoints log the event and redirect (clicks) or return the pixel (opens). Events flow through a lightweight append-only event service into ClickHouse for analytics.
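The two endpoints can be sketched as framework-free handler functions; the `(status, headers, body)` return shape, `event_log`, and `link_table` are illustrative assumptions standing in for a real web framework and link store.

```python
import base64

# 1x1 transparent GIF: the classic tracking-pixel payload.
PIXEL_GIF = base64.b64decode(
    "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7"
)

def track_open(send_id, event_log):
    """Handler sketch for GET /track/open/{send_id}: log, then serve the pixel."""
    event_log.append({"type": "open", "send_id": send_id})
    return 200, {"Content-Type": "image/gif"}, PIXEL_GIF

def track_click(send_id, link_hash, link_table, event_log):
    """Handler sketch for GET /track/click/{send_id}/{link_hash}:
    log, then 302-redirect to the original URL looked up by its hash."""
    dest = link_table[link_hash]
    event_log.append({"type": "click", "send_id": send_id, "link": link_hash})
    return 302, {"Location": dest}, b""
```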

Deliverability Monitoring

Key metrics computed per campaign: delivery rate, open rate, click-through rate, bounce rate (hard vs. soft), and unsubscribe rate. Hard bounces automatically suppress the address. Soft bounces increment a counter; after N soft bounces the address is suppressed. Spam complaint rates from ESP feedback loops are monitored; high rates trigger campaign pausing and alerts.
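The hard/soft bounce policy above can be sketched as follows; the threshold value and the in-memory dict/set structures are illustrative stand-ins for the Subscriber DB and suppression set.

```python
SOFT_BOUNCE_LIMIT = 3  # "N" from the text; illustrative threshold

def handle_bounce(subscriber, bounce_type, suppression_set):
    """Bounce-policy sketch: hard bounces suppress immediately,
    soft bounces suppress only after SOFT_BOUNCE_LIMIT occurrences."""
    if bounce_type == "hard":
        subscriber["status"] = "bounced"
        suppression_set.add(subscriber["email"])
    elif bounce_type == "soft":
        subscriber["soft_bounces"] = subscriber.get("soft_bounces", 0) + 1
        if subscriber["soft_bounces"] >= SOFT_BOUNCE_LIMIT:
            subscriber["status"] = "bounced"
            suppression_set.add(subscriber["email"])
```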

Scaling Considerations

  • Fan-out bottleneck: Use cursor-based pagination when generating send tasks; never load a full list into memory.
  • ESP rate limits: Control concurrency of send workers per ESP account; use token-bucket throttling.
  • Multiple ESP accounts: Shard sends across accounts/domains to distribute reputation and throughput.
  • Analytics write throughput: Buffer events in Kafka before writing to ClickHouse; avoid write amplification on the campaign counters table.
  • Suppression list lookup: A Redis set or Bloom filter gives O(1) lookups at high throughput.
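The token-bucket throttle from the ESP rate-limit bullet can be sketched as follows; rate and capacity values are illustrative, and the injectable clock exists only to make the refill logic testable.

```python
import time

class TokenBucket:
    """Minimal token-bucket throttle for pacing ESP calls per account."""

    def __init__(self, rate, capacity, clock=time.monotonic):
        self.rate = rate          # tokens refilled per second (steady send rate)
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.clock = clock
        self.last = clock()

    def try_acquire(self, n=1):
        """Take n tokens if available; return False (caller backs off) otherwise."""
        now = self.clock()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False
```

Workers call `try_acquire()` before each ESP request; a shared bucket (e.g., backed by Redis) would enforce the limit across the whole worker pool rather than per process.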

Interview Talking Points

  • Why separate the send queue from the campaign scheduler? (decoupling, back-pressure, retry isolation)
  • How do you approach exactly-once delivery? (strict exactly-once is impossible across an external ESP; an idempotency key at the ESP call plus atomically written send status gives effectively-once)
  • How do you handle a campaign to 10M subscribers? (fan-out cursor, worker pool, multi-account ESP sharding)
  • How do you comply with CAN-SPAM / GDPR in real time? (suppression set checked before every send)
  • How do you prevent a noisy campaign from destroying deliverability? (circuit-breaker on complaint rate)
