Low Level Design: Delivery Tracking Service

What Is a Delivery Tracking Service?

A delivery tracking service monitors the lifecycle of a shipment from pickup through final delivery, provides real-time location updates, notifies customers of status changes, and estimates time of arrival. It is a core component of e-commerce platforms and logistics companies.

Requirements

Functional Requirements

  • Create a shipment record when an order is dispatched.
  • Accept real-time location pings from delivery vehicles/couriers.
  • Transition shipment status through a well-defined state machine.
  • Calculate and update estimated delivery time (EDT) on each location update.
  • Trigger customer notifications (SMS, push, email) on status changes.
  • Handle exceptions: failed delivery attempt, address not found, package damaged.

Non-Functional Requirements

  • Location update ingestion at 50,000 pings/second.
  • Customer-facing tracking page loads in <200 ms.
  • Notification delivery within 10 s of a status change.
  • Audit trail of all status transitions and location history retained for 90 days.

Shipment Status State Machine

CREATED
    |
    v
PICKED_UP
    |
    v
IN_TRANSIT ----> AT_FACILITY (optional hub stop, can repeat)
    |
    v
OUT_FOR_DELIVERY
    |
    +--> DELIVERED (terminal)
    |
    +--> DELIVERY_ATTEMPTED --> OUT_FOR_DELIVERY (retry)
    |                       --> RETURNED_TO_SENDER (terminal)
    |
    +--> EXCEPTION (damage, lost) --> RESOLVED or RETURNED_TO_SENDER

Transitions are enforced server-side. Invalid transitions (e.g., DELIVERED -> IN_TRANSIT) are rejected with a 409 Conflict response.
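The transition rules above can be captured as a simple lookup table checked before any status write. A minimal Python sketch (names are illustrative; RESOLVED appears in the diagram but not in the Shipment status ENUM, so it is included here as a placeholder):

```python
# Allowed transitions for the shipment state machine above.
# Any transition not listed is illegal; the API maps a failed
# check to a 409 Conflict response.
ALLOWED_TRANSITIONS = {
    "CREATED": {"PICKED_UP"},
    "PICKED_UP": {"IN_TRANSIT"},
    "IN_TRANSIT": {"AT_FACILITY", "OUT_FOR_DELIVERY"},
    "AT_FACILITY": {"IN_TRANSIT", "OUT_FOR_DELIVERY"},  # hub stops can repeat
    "OUT_FOR_DELIVERY": {"DELIVERED", "DELIVERY_ATTEMPTED", "EXCEPTION"},
    "DELIVERY_ATTEMPTED": {"OUT_FOR_DELIVERY", "RETURNED_TO_SENDER"},
    "EXCEPTION": {"RESOLVED", "RETURNED_TO_SENDER"},
    "DELIVERED": set(),           # terminal
    "RETURNED_TO_SENDER": set(),  # terminal
}

def validate_transition(current: str, target: str) -> bool:
    """Return True if the status change is legal under the state machine."""
    return target in ALLOWED_TRANSITIONS.get(current, set())
```

Keeping the table as data rather than scattered `if` statements makes it easy to audit against the diagram and to extend per carrier.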

Core Entities

Shipment
--------
id                UUID
tracking_number   VARCHAR(32)  UNIQUE
order_id          UUID
origin_address    JSONB
destination_address JSONB
status            ENUM('CREATED','PICKED_UP','IN_TRANSIT','AT_FACILITY',
                       'OUT_FOR_DELIVERY','DELIVERED','DELIVERY_ATTEMPTED',
                       'EXCEPTION','RETURNED_TO_SENDER')
courier_id        UUID
estimated_delivery TIMESTAMP
created_at        TIMESTAMP
updated_at        TIMESTAMP

StatusEvent
-----------
id                UUID
shipment_id       UUID
status            ENUM(...)
lat               DOUBLE
lng               DOUBLE
note              TEXT
occurred_at       TIMESTAMP
source            ENUM('system','courier','scan')

LocationPing
------------
courier_id        UUID
shipment_id       UUID
lat               DOUBLE
lng               DOUBLE
ts                TIMESTAMP
speed_kmh         FLOAT

Real-Time Location Updates

  • Courier apps send a LocationPing every 10-30 s via HTTP POST or a persistent WebSocket.
  • Pings are published to a Kafka topic location-pings partitioned by courier_id.
  • A Location Consumer reads from Kafka, updates the courier's current position in Redis (SET courier:{id}:pos {lat,lng,ts}), and writes the ping to a time-series store (InfluxDB or TimescaleDB) for historical queries.
  • The customer tracking page polls GET /shipments/{tracking_number}/location every 15 s or subscribes via SSE for push-based updates.
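The consumer's per-ping work can be sketched as follows, with a plain dict and list standing in for Redis and the time-series store (the Kafka client and real datastore clients are omitted; this is a sketch, not the production consumer):

```python
import json

def handle_ping(ping: dict, kv_store: dict, history: list) -> None:
    """Process one LocationPing from the location-pings topic.

    kv_store stands in for Redis (SET courier:{id}:pos);
    history stands in for the TimescaleDB/InfluxDB insert.
    """
    # Overwrite the courier's current position for live tracking reads.
    key = f"courier:{ping['courier_id']}:pos"
    kv_store[key] = json.dumps(
        {"lat": ping["lat"], "lng": ping["lng"], "ts": ping["ts"]}
    )
    # Append the raw ping for historical trail queries.
    history.append(ping)
```

Because each ping fully replaces the previous position, the consumer is idempotent under Kafka replays, and consumers can scale by partition without coordination.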

Estimated Delivery Calculation

function update_edt(shipment_id):
    shipment = db.get(shipment_id)
    courier_pos = redis.get("courier:{shipment.courier_id}:pos")
    dest = shipment.destination_address.coordinates

    if shipment.status == OUT_FOR_DELIVERY:
        # call routing API for remaining distance and traffic-adjusted ETA
        route = routing_service.eta(courier_pos, dest)
        edt = now() + route.duration_seconds
    elif shipment.status in [IN_TRANSIT, AT_FACILITY]:
        # use historical facility-to-facility transit time model
        edt = ml_model.predict_edt(origin_facility, dest_zip, day_of_week)
    else:
        edt = shipment.estimated_delivery   # no change

    db.update(shipment_id, estimated_delivery=edt)
    cache.set("tracking:{shipment.tracking_number}", edt, ttl=60)

Customer Notification Triggers

  • A Notification Worker consumes from the status-events Kafka topic.
  • Each StatusEvent is matched against a notification rule table (which statuses trigger which channels for which customer preferences).
  • Notifications are dispatched to a downstream Notification Service (SMS via Twilio, push via FCM/APNs, email via SES).
  • Deduplication key = (shipment_id, status, date) to prevent duplicate alerts on replay.
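The deduplication key can be enforced with a single set-if-absent check before dispatch. A minimal sketch, with a Python set standing in for a Redis SETNX/SADD call:

```python
from datetime import date

def should_notify(sent: set, shipment_id: str, status: str, on: date) -> bool:
    """Idempotent notification gate keyed on (shipment_id, status, date).

    `sent` stands in for Redis; a replayed Kafka event produces the
    same key and is silently skipped instead of re-alerting the customer.
    """
    key = (shipment_id, status, on.isoformat())
    if key in sent:
        return False
    sent.add(key)
    return True
```

Including the date in the key still allows a legitimate repeat alert, e.g. OUT_FOR_DELIVERY on a second delivery attempt the next day.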

Sample Notification Rules

Status              Channel       Template
------              -------       --------
OUT_FOR_DELIVERY    Push + SMS    Your package is out for delivery today.
DELIVERED           Push + Email  Your package has been delivered.
DELIVERY_ATTEMPTED  SMS + Email   Delivery attempted. Reschedule at [link].
EXCEPTION           Email         There is an issue with your shipment. Contact support.

Exception Handling

  • Failed delivery attempt: Courier marks DELIVERY_ATTEMPTED with a note (no one home, access denied). System schedules retry or prompts customer to reschedule via a self-service link.
  • Address not found: Transitions to EXCEPTION. Triggers email to customer to confirm address. Ops team can update address and resume.
  • Package damaged: Courier uploads photo evidence. Status = EXCEPTION with sub-type DAMAGED. Claims workflow is triggered.
  • System-detected anomaly: If no location ping from courier for >2 h during OUT_FOR_DELIVERY, auto-flag for ops review.
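The system-detected anomaly check reduces to a timestamp comparison that a periodic sweep job can run over active shipments. A minimal sketch (names and the sweep mechanism are assumptions):

```python
def is_stale(last_ping_ts: float, now_ts: float, status: str,
             threshold_s: float = 2 * 3600) -> bool:
    """Flag a shipment for ops review when no location ping has arrived
    for more than 2 h while it is OUT_FOR_DELIVERY."""
    return status == "OUT_FOR_DELIVERY" and (now_ts - last_ping_ts) > threshold_s
```

In practice the sweep reads each courier's last ping timestamp from Redis, so it never touches the high-volume time-series store.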

System Architecture

Courier App
    |
    +-- LocationPing (HTTP/WebSocket) --> [Location Ingest API] --> Kafka: location-pings
    |
    +-- StatusUpdate (HTTP POST)      --> [Shipment Status API] --> Kafka: status-events
                                              |
                                              +-- validates state machine transition
                                              +-- persists StatusEvent to DB
                                              +-- updates Shipment.status in DB

Kafka: location-pings --> [Location Consumer] --> Redis (current pos) + TimescaleDB (history)
                                               --> [EDT Updater] --> DB + Cache

Kafka: status-events  --> [Notification Worker] --> Notification Service --> Customer (SMS/Push/Email)

Customer App
    |
    +-- GET /shipments/{tracking_number}          --> [Tracking API] --> DB + Redis cache
    +-- GET /shipments/{tracking_number}/location --> [Tracking API] --> Redis (live pos)

Tracking Page Performance

  • Cache shipment summary (status, EDT, last location) in Redis with 30 s TTL.
  • On cache miss, read from the primary DB replica.
  • Serve location separately from status to allow independent cache TTLs (location: 15 s, status: 60 s).
  • Use CDN for static assets; tracking page itself is server-rendered for SEO and initial load speed.
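The cache-aside read on the tracking page can be sketched as below, with a tiny in-process TTL cache standing in for Redis and a callable standing in for the DB replica read (both are stand-ins, not the production clients):

```python
import time

class TtlCache:
    """Minimal stand-in for Redis with per-key TTLs."""
    def __init__(self):
        self._data = {}

    def get(self, key):
        value, expires = self._data.get(key, (None, 0.0))
        return value if time.time() < expires else None

    def set(self, key, value, ttl):
        self._data[key] = (value, time.time() + ttl)

def get_tracking_summary(tracking_number, cache, db_read):
    """Cache-aside read: 30 s TTL on the summary, DB replica on miss."""
    key = f"tracking:{tracking_number}"
    cached = cache.get(key)
    if cached is not None:
        return cached
    summary = db_read(tracking_number)  # read replica in production
    cache.set(key, summary, ttl=30)
    return summary
```

The live location endpoint follows the same pattern with its own key and a shorter TTL, which is what lets the two refresh independently.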

Scaling Considerations

  • Location ingestion: Kafka absorbs bursts; location consumers scale independently of status API.
  • Time-series storage: Partition LocationPing table by courier_id and month. Use columnar compression. Archive data older than 90 days to cold storage.
  • Hot shipments: High-profile B2B shipments tracked by thousands of ops users simultaneously. Cache aggressively; consider pushing updates via WebSocket rather than polling.
  • Multi-carrier: Abstract the courier model behind a CarrierAdapter interface. Each carrier integration normalizes status codes into the internal state machine.
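The CarrierAdapter idea can be sketched as an abstract base class plus one mapping per integration. The carrier name and its status codes below are invented for illustration, not any real carrier's API:

```python
from abc import ABC, abstractmethod

class CarrierAdapter(ABC):
    """Normalizes a carrier's native status codes into the
    internal shipment state machine."""

    @abstractmethod
    def normalize_status(self, carrier_code: str) -> str: ...

class AcmeCarrierAdapter(CarrierAdapter):
    """Hypothetical carrier integration."""
    _MAPPING = {
        "PU": "PICKED_UP",
        "IT": "IN_TRANSIT",
        "OD": "OUT_FOR_DELIVERY",
        "DL": "DELIVERED",
    }

    def normalize_status(self, carrier_code: str) -> str:
        # Unknown carrier codes route to EXCEPTION for ops review.
        return self._MAPPING.get(carrier_code, "EXCEPTION")
```

Downstream code (state-machine validation, notifications) then only ever sees internal statuses, regardless of which carrier produced the event.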

Interview Tips

  • Draw the state machine first. It anchors the rest of the design and signals strong domain modeling.
  • Distinguish location update ingestion (high throughput, lossy-tolerant) from status events (low throughput, must be reliable).
  • Show how EDT calculation differs by shipment phase (en-route vs. out-for-delivery).
  • Mention the notification deduplication key proactively; interviewers often probe this.

Frequently Asked Questions

How does real-time delivery tracking work in system design?

A delivery tracking system ingests a stream of GPS location updates from drivers or couriers, associates each ping with the corresponding shipment, persists the latest position, and pushes updates to customers via WebSocket, SSE, or polling. The architecture typically separates the write path (high-throughput ingestion via a message queue like Kafka) from the read path (low-latency fan-out to customer-facing APIs). A position store — often Redis for recency and a time-series or columnar store for history — serves both live maps and post-delivery analytics.

How do you model shipment state machines in a delivery system?

Shipment lifecycle is naturally modeled as a finite state machine with states such as CREATED, PICKED_UP, IN_TRANSIT, OUT_FOR_DELIVERY, DELIVERED, and FAILED. Transitions are triggered by driver events (barcode scans, GPS geofence crossings, manual status updates) and validated server-side to prevent illegal jumps (e.g., going directly from CREATED to DELIVERED). The current state and full transition history are persisted in a relational or document store. An event-sourcing approach — storing every transition as an immutable event — makes auditing, debugging, and replaying state trivial.

How is estimated delivery time calculated in a tracking system?

ETA calculation combines routing engine output (road-network travel time from the courier’s current position to the destination) with learned corrections for stop dwell time, traffic patterns by time of day, courier speed profiles, and remaining stop count on the route. Machine learning models trained on historical delivery data can significantly outperform pure routing estimates. ETA is recomputed on every meaningful location update and pushed to the customer, so the displayed window narrows as the courier approaches.

How do you handle GPS location updates at high frequency in a delivery system?

Mobile clients typically emit GPS pings every 3–15 seconds, generating enormous write volume across a large fleet. Common strategies include: client-side dead-reckoning to suppress pings when the device hasn’t moved beyond a threshold; server-side rate limiting that accepts but throttles downstream fan-out; writing raw pings to a partitioned log (Kafka) and updating the live position cache asynchronously; and Douglas-Peucker or similar path-simplification algorithms to reduce stored points for the historical trail without losing visual fidelity on the map.
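For the path-simplification step, a textbook Douglas-Peucker implementation is short enough to sketch. This version works on planar (x, y) points; a production system would first project lat/lng coordinates and tune epsilon to map zoom level:

```python
def perpendicular_distance(p, a, b):
    """Distance from point p to the line through a and b."""
    (x, y), (x1, y1), (x2, y2) = p, a, b
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:
        return ((x - x1) ** 2 + (y - y1) ** 2) ** 0.5
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / (dx * dx + dy * dy) ** 0.5

def douglas_peucker(points, epsilon):
    """Reduce a GPS trail to the fewest points that stay within
    `epsilon` of the original path."""
    if len(points) < 3:
        return points
    a, b = points[0], points[-1]
    # Find the point farthest from the chord a-b.
    index, dmax = 0, 0.0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], a, b)
        if d > dmax:
            index, dmax = i, d
    if dmax > epsilon:
        # Farthest point matters: keep it and recurse on both halves.
        left = douglas_peucker(points[: index + 1], epsilon)
        right = douglas_peucker(points[index:], epsilon)
        return left[:-1] + right
    # Everything between the endpoints is within tolerance: drop it.
    return [a, b]
```

Running this over a near-straight GPS trail collapses hundreds of jittery pings into a handful of stored points without visibly changing the drawn route.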

