Progress Tracker Service Low-Level Design: Completion Events, Milestone Detection, and Streak Calculation

What Is a Progress Tracker Service?

A progress tracker service ingests completion events from multiple upstream services, detects when users cross configurable milestones, calculates streaks based on activity patterns, and triggers achievement unlocks through a downstream pipeline. It provides a unified view of user progress across a platform — whether that is courses completed, workouts logged, or episodes watched — without requiring each upstream service to implement milestone logic independently.

Requirements

Functional Requirements

  • Ingest completion events (e.g., lesson complete, workout done, episode watched) from upstream services via event stream.
  • Maintain per-user, per-activity-type cumulative counters and timestamps.
  • Detect milestone crossings (e.g., 10th, 50th, 100th completion) and emit milestone events.
  • Calculate daily and weekly streaks: consecutive days or weeks with at least one qualifying completion.
  • Trigger achievement unlock pipeline when milestone or streak conditions are met.
  • Expose a progress summary API for profile pages and dashboards.

Non-Functional Requirements

  • Ingest up to 100,000 completion events per second at peak.
  • Streak calculations must be timezone-aware (user local time defines day boundaries).
  • Progress summary reads under 25 ms P99.
  • At-least-once event delivery; duplicate events must be processed idempotently.

Data Model

CompletionEvent (immutable log)

  • event_id UUID — idempotency key; unique index prevents duplicate processing.
  • user_id, activity_type (e.g., LESSON, WORKOUT, EPISODE).
  • content_id — the specific item completed.
  • completed_at timestamp — used for streak date logic.
  • source_service — which upstream service emitted the event.

ProgressCounter

  • user_id, activity_type — composite primary key.
  • total_count BIGINT — cumulative completions.
  • last_milestone_crossed INTEGER — highest milestone threshold already triggered.
  • updated_at timestamp.

StreakRecord

  • user_id, activity_type, streak_type (DAILY, WEEKLY) — composite PK.
  • current_streak INTEGER — current consecutive period count.
  • longest_streak INTEGER — all-time record.
  • last_activity_date DATE — in user local timezone.
  • streak_broken_at NULLABLE DATE — when the most recent streak ended.

Core Algorithms

Idempotent Event Processing

On receipt of a completion event, the processor first attempts to insert the event into the CompletionEvent log using an INSERT ... ON CONFLICT (event_id) DO NOTHING. If the insert affects zero rows, the event is a duplicate and processing stops. Provided the log insert and the counter increment share a single database transaction, this yields exactly-once counter updates regardless of how many times the event is redelivered from the upstream Kafka topic.
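The dedupe-then-increment flow can be sketched as follows. This is a minimal in-memory sketch, not the production schema: SQLite's INSERT OR IGNORE stands in for PostgreSQL's INSERT ... ON CONFLICT (event_id) DO NOTHING, and both writes run in one transaction.

```python
import sqlite3

# In-memory stand-in for the real tables; columns are trimmed for brevity.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE completion_event (
        event_id TEXT PRIMARY KEY,
        user_id TEXT, activity_type TEXT, completed_at TEXT
    );
    CREATE TABLE progress_counter (
        user_id TEXT, activity_type TEXT, total_count INTEGER NOT NULL,
        PRIMARY KEY (user_id, activity_type)
    );
""")

def process_event(event):
    """Insert the event and bump the counter in ONE transaction.

    Returns True if the event was new, False if it was a duplicate.
    """
    with conn:  # single transaction: log insert + counter increment
        cur = conn.execute(
            "INSERT OR IGNORE INTO completion_event VALUES (?, ?, ?, ?)",
            (event["event_id"], event["user_id"],
             event["activity_type"], event["completed_at"]),
        )
        if cur.rowcount == 0:
            return False  # duplicate delivery: stop, counter untouched
        conn.execute(
            """INSERT INTO progress_counter (user_id, activity_type, total_count)
               VALUES (?, ?, 1)
               ON CONFLICT (user_id, activity_type)
               DO UPDATE SET total_count = total_count + 1""",
            (event["user_id"], event["activity_type"]),
        )
        return True

evt = {"event_id": "e1", "user_id": "u1",
       "activity_type": "LESSON", "completed_at": "2024-01-01T10:00:00Z"}
assert process_event(evt) is True
assert process_event(evt) is False  # redelivery is a no-op
```

Because the log insert and the counter upsert commit atomically, a crash between them cannot leave a logged event with an unincremented counter.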

Milestone Detection

Milestone thresholds are stored in a sorted configuration list per activity type (e.g., [1, 5, 10, 25, 50, 100, 250, 500]). After incrementing the counter, the processor queries the next uncrossed threshold: the smallest value greater than last_milestone_crossed. If total_count >= next_threshold, a MILESTONE_CROSSED event is emitted to Kafka and last_milestone_crossed is updated. Multiple thresholds can be crossed in a single update (e.g., bulk import) — the processor iterates the threshold list until no more are crossed.
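The threshold scan described above can be sketched in a few lines. The threshold list is the example configuration from the text; everything else is illustrative.

```python
# Example per-activity-type milestone configuration (sorted ascending).
THRESHOLDS = [1, 5, 10, 25, 50, 100, 250, 500]

def crossed_milestones(total_count, last_milestone_crossed):
    """Return every threshold newly crossed by this counter update.

    Scanning the whole sorted list (rather than just the next threshold)
    handles bulk imports that jump the counter past several milestones;
    the caller emits one MILESTONE_CROSSED event per returned value and
    sets last_milestone_crossed to the largest one.
    """
    return [t for t in THRESHOLDS
            if last_milestone_crossed < t <= total_count]

# Normal single increment: counter reaches 10, one threshold crossed.
assert crossed_milestones(10, 5) == [10]
# Bulk import: counter jumps from 3 to 60, crossing four thresholds.
assert crossed_milestones(60, 1) == [5, 10, 25, 50]
```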

Streak Calculation

The streak algorithm converts completed_at to the user's local date using their stored timezone preference, then compares that date with last_activity_date on the StreakRecord. If the difference is 0 (same local day), nothing changes. If the difference is 1 (a consecutive day, for a DAILY streak), it increments current_streak and updates last_activity_date. If the difference is greater than 1, the streak is broken: set streak_broken_at to the previous last_activity_date, reset current_streak to 1, and update last_activity_date. Finally, longest_streak is raised whenever current_streak exceeds it.
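A minimal sketch of the DAILY case, using a plain dict in place of the StreakRecord row. The timezone conversion uses the standard-library zoneinfo module; the Tokyo example shows why local dates matter: an event late on March 2 UTC already falls on March 3 in the user's timezone.

```python
from datetime import date, datetime, timezone
from zoneinfo import ZoneInfo

def update_daily_streak(record, completed_at_utc, user_tz):
    """Apply one completion to a DAILY streak record (sketch).

    record keys: current_streak, longest_streak,
                 last_activity_date, streak_broken_at
    """
    # Convert the event timestamp to the user's local calendar date.
    local_date = completed_at_utc.astimezone(ZoneInfo(user_tz)).date()
    last = record["last_activity_date"]
    if last is not None:
        gap = (local_date - last).days
        if gap <= 0:
            return record  # same local day (or out-of-order event): no change
        if gap > 1:
            record["streak_broken_at"] = last  # streak broken
            record["current_streak"] = 0       # restart from 1 below
    record["current_streak"] += 1
    record["last_activity_date"] = local_date
    record["longest_streak"] = max(record["longest_streak"],
                                   record["current_streak"])
    return record

rec = {"current_streak": 3, "longest_streak": 5,
       "last_activity_date": date(2024, 3, 1), "streak_broken_at": None}
# 23:30 UTC on March 2 is already March 3 in Tokyo (UTC+9),
# so it counts as the consecutive day after March 1? No: gap is 2 days,
# which breaks the streak and restarts it at 1.
update_daily_streak(rec, datetime(2024, 3, 2, 23, 30, tzinfo=timezone.utc),
                    "Asia/Tokyo")
assert rec["current_streak"] == 1
assert rec["streak_broken_at"] == date(2024, 3, 1)
```

Out-of-order events (gap < 0) are simply ignored here; a production version would need a policy for late-arriving completions.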

API Design

  • GET /v1/progress/{user_id} — returns all ProgressCounters and StreakRecords for the user; used by profile and dashboard pages.
  • GET /v1/progress/{user_id}/{activity_type} — returns counter and streak for a specific activity type.
  • POST /v1/events — internal endpoint for upstream services to submit completion events; accepts batches of up to 100 events.
  • GET /v1/milestones/{user_id} — returns all milestones crossed with timestamps; used by achievement display pages.
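The batch limit on POST /v1/events can be enforced at the edge before any events reach Kafka. A minimal sketch, assuming a per-event status response; the in-memory deduper merely stands in for the ON CONFLICT-based idempotency check described later.

```python
MAX_BATCH = 100  # POST /v1/events accepts batches of up to 100 events

def submit_batch(events, process_event):
    """Validate a batch and return per-event status (sketch)."""
    if len(events) > MAX_BATCH:
        raise ValueError(f"batch too large: {len(events)} > {MAX_BATCH}")
    return [
        {"event_id": e["event_id"],
         "status": "accepted" if process_event(e) else "duplicate"}
        for e in events
    ]

seen = set()
def dedupe(event):
    """In-memory stand-in for the real idempotent event processor."""
    if event["event_id"] in seen:
        return False
    seen.add(event["event_id"])
    return True

batch = [{"event_id": "e1"}, {"event_id": "e1"}, {"event_id": "e2"}]
results = submit_batch(batch, dedupe)
assert [r["status"] for r in results] == ["accepted", "duplicate", "accepted"]
```

Reporting duplicates per event (rather than failing the whole batch) lets upstream services retry full batches safely.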

Scalability and Achievement Pipeline

Ingestion Layer

Completion events arrive via a Kafka topic partitioned by user_id, ensuring all events for a single user are processed in order by the same consumer. Each consumer maintains a local write buffer and flushes ProgressCounter and StreakRecord updates to PostgreSQL in micro-batches every 500 ms. This batching reduces database write amplification while keeping progress data fresh to within about one second.
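The write-buffer side of a consumer can be sketched as a coalescing micro-batcher: deltas for the same (user_id, activity_type) key merge in memory, and the flush callback would issue one batched UPDATE per key. Class and parameter names are illustrative.

```python
import time
from collections import defaultdict

class MicroBatcher:
    """Buffer per-key counter deltas; flush at most every `interval` seconds.

    flush_fn receives {(user_id, activity_type): delta} and, in the real
    system, would write the batch to PostgreSQL in one transaction.
    """
    def __init__(self, flush_fn, interval=0.5):
        self.flush_fn = flush_fn
        self.interval = interval
        self.buffer = defaultdict(int)
        self.last_flush = time.monotonic()

    def add(self, user_id, activity_type, delta=1):
        # Coalesce repeated updates to the same key in memory.
        self.buffer[(user_id, activity_type)] += delta
        if time.monotonic() - self.last_flush >= self.interval:
            self.flush()

    def flush(self):
        if self.buffer:
            self.flush_fn(dict(self.buffer))
            self.buffer.clear()
        self.last_flush = time.monotonic()

flushed = []
b = MicroBatcher(flushed.append, interval=0.5)
for _ in range(3):
    b.add("u1", "LESSON")   # three events for one user coalesce...
b.add("u2", "WORKOUT")
b.flush()                   # ...into a single delta of 3 at flush time
assert flushed == [{("u1", "LESSON"): 3, ("u2", "WORKOUT"): 1}]
```

A production consumer would also flush before committing Kafka offsets, so buffered deltas are never acknowledged before they are durable.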

Achievement Unlock Pipeline

MILESTONE_CROSSED and STREAK_ACHIEVED events are published to a dedicated Kafka topic consumed by the achievement service. The achievement service evaluates rule trees (e.g., complete 10 lessons AND maintain a 7-day streak) against the current progress state. When all conditions are met, it inserts an AchievementUnlock record and notifies the user via the notification service. Decoupling achievement evaluation from the progress tracker keeps each service focused on a single responsibility and allows achievement rules to be updated without touching the core progress logic.
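A rule tree like "complete 10 lessons AND maintain a 7-day streak" can be represented as nested all/any nodes over counter and streak leaves. This representation and its field names are an assumption for illustration; the design above does not specify a rule format.

```python
# Hypothetical rule tree: 10 LESSON completions AND a 7-day DAILY streak.
RULE = {"all": [
    {"counter": {"activity_type": "LESSON", "gte": 10}},
    {"streak": {"streak_type": "DAILY", "gte": 7}},
]}

def evaluate(rule, state):
    """Recursively evaluate a rule tree against current progress state."""
    if "all" in rule:
        return all(evaluate(r, state) for r in rule["all"])
    if "any" in rule:
        return any(evaluate(r, state) for r in rule["any"])
    if "counter" in rule:
        c = rule["counter"]
        return state["counters"].get(c["activity_type"], 0) >= c["gte"]
    if "streak" in rule:
        s = rule["streak"]
        return state["streaks"].get(s["streak_type"], 0) >= s["gte"]
    raise ValueError(f"unknown rule node: {rule}")

state = {"counters": {"LESSON": 12}, "streaks": {"DAILY": 7}}
assert evaluate(RULE, state) is True
state["streaks"]["DAILY"] = 3
assert evaluate(RULE, state) is False  # streak condition no longer met
```

Keeping rules as data (rather than code) is what allows the hot-reload of achievement definitions without redeploying the service.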

Frequently Asked Questions

How are completion events ingested in a progress tracker?

Clients emit completion events (e.g., lesson_completed, workout_finished) to an ingestion API that validates, deduplicates (idempotency key = user_id + activity_id + date), and publishes to a message queue. A stream processor (Flink or Kafka Streams) consumes events and updates the user's progress state in a key-value store, ensuring exactly-once processing via transactional writes.

How does milestone detection logic work in a progress tracker?

Milestones are defined as rules evaluated against aggregated progress counters (e.g., total_lessons_completed >= 10). After each event is processed, the system runs a lightweight rule engine against the user's current counters. Newly satisfied rules trigger milestone events published to a separate topic, consumed by the notification and achievement services. Rules are stored in config to allow hot-reload without deployment.

How is streak calculation handled with timezone differences?

Streaks are calculated in the user's local timezone, not UTC, to match intuitive day boundaries. Each completion event is converted to the user's timezone before extracting the calendar date. The streak counter checks whether the most recent activity date is today or yesterday (in local time); if yesterday, the streak continues; if earlier, it resets. Grace periods (e.g., 24 hours past midnight) can be applied to handle late-night completions near day boundaries.

How does the achievement unlock pipeline work?

Milestone events trigger an achievement evaluation job that checks which badge or reward definitions are satisfied by the user's current state. Unlocked achievements are written to a user_achievements table with an unlocked_at timestamp and published as events for downstream consumers (notifications, leaderboards, social sharing). Idempotency checks prevent duplicate unlocks if the pipeline retries. A backfill job can evaluate historical users when new achievement definitions are added.

