Low Level Design: Real-Time Location Service

A real-time location service ingests high-frequency position updates from mobile clients, indexes them for low-latency spatial queries, stores history for analytics and replay, and emits geofence events when entities cross boundaries. The design must handle write amplification (many entities updating frequently) while keeping nearby-entity queries fast.

Location Update Schema

Each update captures the full sensor payload from the client:

location_update {
  entity_id    VARCHAR(64),
  entity_type  ENUM('driver','rider','delivery','asset'),
  latitude     DECIMAL(9,6),
  longitude    DECIMAL(9,6),
  accuracy     FLOAT,          -- meters, from GPS sensor
  speed        FLOAT,          -- m/s
  heading      FLOAT,          -- degrees 0-360
  timestamp    TIMESTAMP
}

Accuracy is used downstream to filter out noisy fixes. Updates with accuracy worse than a configurable threshold (e.g., 50 meters) are accepted but flagged and excluded from geofence evaluation.
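The accuracy gate can be sketched in a few lines. This is a minimal illustration of the rule above; the 50-meter threshold comes from the text, while the function and field names are assumptions, not a fixed API:

```python
# Accuracy gate: accept every update, but mark noisy fixes so downstream
# geofence evaluation can skip them. Threshold is configurable per deployment.
ACCURACY_THRESHOLD_M = 50.0

def classify_update(update: dict) -> dict:
    """Return the update annotated with a flagged bit and geofence eligibility."""
    flagged = update["accuracy"] > ACCURACY_THRESHOLD_M
    return {**update, "flagged": flagged, "geofence_eligible": not flagged}
```

Flagged updates still flow into history storage, so analytics keeps the full picture while geofencing sees only trustworthy fixes.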

Write Path

Mobile clients send updates at 1–5 second intervals. To amortize per-request overhead, clients batch 5–10 updates into a single HTTP POST or stream them over a persistent WebSocket connection:

Client -> API Gateway -> Kafka topic: location.updates
                      (partitioned by entity_type)

Kafka Consumer:
  1. GEOADD entity_type:{entity_type} longitude latitude entity_id  (Redis GEO)
  2. ZADD entity_last_seen {unix_timestamp} {entity_id}  (feeds staleness cleanup)
  3. Write to InfluxDB measurement: location_history
     tags: entity_id, entity_type
     fields: lat, lon, accuracy, speed, heading
     timestamp: from payload

Redis GEO stores only the latest position per entity. InfluxDB stores the full time series. Kafka decouples ingestion from storage, absorbing bursts and allowing the consumer to lag without dropping updates.
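The consumer's two storage roles can be sketched with in-memory stand-ins; a real consumer would issue GEOADD and an InfluxDB write instead. The class and field names here are illustrative assumptions:

```python
from collections import defaultdict

class LocationConsumer:
    """In-memory stand-in for the consumer's writes: a latest-position
    upsert per entity (the Redis GEO role) and an append-only time series
    (the InfluxDB role)."""
    def __init__(self) -> None:
        self.latest = defaultdict(dict)   # entity_type -> {entity_id: (lon, lat)}
        self.history = defaultdict(list)  # entity_id -> time-ordered updates

    def handle(self, update: dict) -> None:
        # Latest-position upsert: each new fix overwrites the previous one.
        self.latest[update["entity_type"]][update["entity_id"]] = (
            update["longitude"], update["latitude"])
        # History append: every fix is retained for replay and analytics.
        self.history[update["entity_id"]].append(update)
```

The split matters: the upsert keeps reads O(1) on current position, while the append preserves the full trajectory.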

Nearby Entity Query

Redis commands for spatial lookup:

-- Redis 6.2+
GEOSEARCH entity_type:driver
  FROMMEMBER requester_entity_id
  BYRADIUS 5 km ASC
  COUNT 20
  WITHCOORD WITHDIST

-- Redis < 6.2 (GEORADIUS, since deprecated in favor of GEOSEARCH)
GEORADIUS entity_type:driver longitude latitude 5 km WITHCOORD WITHDIST COUNT 20 ASC

Results include entity_id, distance, and coordinates. The service layer joins with an entity metadata store to return name, status, and other attributes. Query latency is typically under 5 ms for sets up to several million entities.
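For illustration, the radius query's semantics can be reproduced with a haversine linear scan over an in-memory positions map. A real deployment relies on GEOSEARCH; this sketch (function names and the `(lat, lon)` tuple layout are assumptions) only mirrors the filter-sort-limit behavior:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6_371_000.0  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby(positions, origin, radius_m, count):
    """Linear-scan equivalent of GEOSEARCH ... BYRADIUS ... ASC COUNT n.
    positions: {entity_id: (lat, lon)}; origin: (lat, lon)."""
    hits = []
    for eid, (lat, lon) in positions.items():
        d = haversine_m(origin[0], origin[1], lat, lon)
        if d <= radius_m:
            hits.append((eid, d, (lat, lon)))
    hits.sort(key=lambda h: h[1])  # closest first, like ASC
    return hits[:count]
```

Redis avoids the full scan by restricting candidates to geohash ranges covering the search circle, which is why it stays fast at millions of members.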

Location History Query

Historical queries go to InfluxDB with a time range filter:

SELECT lat, lon, speed, heading FROM location_history
WHERE entity_id = '{id}'
  AND time >= '2026-04-17T08:00:00Z'
  AND time <= '2026-04-17T09:00:00Z'
ORDER BY time ASC

InfluxDB retention policies control how long history is kept: raw data at full resolution for 30 days, then downsampled (1-minute averages) for 1 year, then deleted.
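The 1-minute downsampling tier can be illustrated with a small bucketing function. In InfluxDB this runs as a continuous query or task; this pure-Python sketch (function name and tuple layout assumed) shows only the aggregation itself:

```python
from collections import defaultdict

def downsample_1m(points):
    """points: iterable of (unix_ts, lat, lon) at full resolution.
    Returns one averaged point per 1-minute bucket, oldest first."""
    buckets = defaultdict(list)
    for ts, lat, lon in points:
        buckets[ts // 60].append((lat, lon))
    out = []
    for minute in sorted(buckets):
        pts = buckets[minute]
        out.append((minute * 60,
                    sum(p[0] for p in pts) / len(pts),
                    sum(p[1] for p in pts) / len(pts)))
    return out
```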

Geofence Event Detection

Geofences are polygons or circles stored in a geofences table with a PostGIS geometry column. On each location update processed by the Kafka consumer, the service checks whether the entity has entered or exited any relevant geofences:

  1. Fetch active geofences for the entity’s region from a spatial index (R-tree or PostGIS GIST index)
  2. For each geofence, evaluate point-in-polygon with the new position
  3. Compare result with the entity’s last known geofence membership (stored in Redis as a set per entity)
  4. If membership changed, publish a GeofenceEntered or GeofenceExited event to a Kafka topic consumed by downstream services (dispatch, alerts, analytics)

Accuracy-flagged updates skip geofence evaluation to avoid false entry/exit events from GPS drift.
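Steps 2–4 above can be sketched with a ray-casting containment test and a set diff against the cached membership. This is an illustrative implementation, not the production PostGIS path; function names are assumptions:

```python
def point_in_polygon(lon, lat, poly):
    """Ray-casting point-in-polygon test; poly is a list of (lon, lat)
    vertices. Adequate for small polygons; PostGIS handles the general case."""
    inside = False
    j = len(poly) - 1
    for i in range(len(poly)):
        xi, yi = poly[i]
        xj, yj = poly[j]
        # Toggle on each polygon edge the horizontal ray crosses.
        if ((yi > lat) != (yj > lat)) and \
           (lon < (xj - xi) * (lat - yi) / (yj - yi) + xi):
            inside = not inside
        j = i
    return inside

def diff_membership(prev: set, current: set):
    """Compare new containment against last known membership and
    produce the events to publish."""
    entered = current - prev
    exited = prev - current
    return ([("GeofenceEntered", g) for g in sorted(entered)] +
            [("GeofenceExited", g) for g in sorted(exited)])
```

After publishing, the consumer overwrites the Redis membership set with `current` so the next update diffs against fresh state.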

Privacy Controls

Location sharing is controlled per entity type and requires explicit consent:

  • A location_consent table records which entity types a user has consented to share location for, and since when
  • The API gateway rejects updates for entity types without active consent
  • Location history is automatically purged after the consent-defined retention window (configurable per entity type, minimum 24 hours for operational use)
  • On consent withdrawal, the API gateway rejects further updates (consent is no longer active) and a cleanup job deletes history from InfluxDB and removes the entity from the Redis GEO index
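A minimal consent gate might look like the following in-memory sketch; the class name and methods are hypothetical stand-ins for the location_consent table and the gateway check:

```python
import time

class ConsentRegistry:
    """In-memory stand-in for the location_consent table: records which
    entity types a user has consented to share location for, and since when."""
    def __init__(self) -> None:
        self._consents = {}  # (user_id, entity_type) -> granted_at (unix ts)

    def grant(self, user_id: str, entity_type: str) -> None:
        self._consents[(user_id, entity_type)] = time.time()

    def withdraw(self, user_id: str, entity_type: str) -> None:
        # Withdrawal removes the record; the gateway check below then fails
        # and the cleanup job can purge history.
        self._consents.pop((user_id, entity_type), None)

    def allows(self, user_id: str, entity_type: str) -> bool:
        """Gateway-side check: reject updates without active consent."""
        return (user_id, entity_type) in self._consents
```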

TTL-Based Cleanup of Stale Entities

Entities that stop sending updates remain in the Redis GEO index indefinitely without cleanup. A background job runs every 5 minutes and removes entities whose last update timestamp (stored in a Redis sorted set entity_last_seen scored by Unix timestamp) is older than a configurable staleness threshold (e.g., 10 minutes for drivers, 24 hours for assets):

-- Find stale entity_ids
ZRANGEBYSCORE entity_last_seen 0 {cutoff_timestamp}

-- Remove from GEO index and last_seen set
ZREM entity_type:{entity_type} {entity_id}
ZREM entity_last_seen {entity_id}

This keeps the GEO index bounded and prevents stale data from appearing in nearby-entity query results.
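The cleanup job's logic maps directly onto those two commands. Below is a pure-Python sketch with dicts standing in for the sorted set and the GEO key (the function name is an assumption):

```python
def purge_stale(last_seen: dict, geo_index: dict, cutoff: float):
    """last_seen: entity_id -> last update unix ts (the entity_last_seen
    role); geo_index: entity_id -> position (the GEO key role). Removes
    every entity not seen since `cutoff` and returns the purged ids."""
    # ZRANGEBYSCORE entity_last_seen 0 {cutoff}
    stale = [eid for eid, ts in last_seen.items() if ts <= cutoff]
    for eid in stale:
        geo_index.pop(eid, None)   # ZREM from the GEO key
        last_seen.pop(eid, None)   # ZREM from entity_last_seen
    return stale
```

In production the two removals per entity would go through a pipeline or MULTI block so a crash mid-job cannot leave the two structures out of sync.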

Frequently Asked Questions

What is a real-time location service in system design?

A real-time location service ingests continuous position updates from mobile devices or IoT sensors, stores current and historical locations, and exposes APIs for nearby-entity queries, location history retrieval, and geofence event streaming. Common use cases include ride-sharing driver tracking, food delivery courier positioning, asset tracking, and social “friends nearby” features. Design concerns include high write throughput (thousands of updates per second per city), sub-second read latency for nearby queries, efficient storage of time-series location history, and reliable geofence trigger detection as entities enter or exit defined regions.

How does Redis GEO enable nearby entity queries?

Redis GEO stores latitude/longitude pairs as 52-bit geohash scores in a sorted set. The GEOADD command adds or updates an entity’s position, and GEORADIUS / GEOSEARCH returns all members within a given radius in O(N+log M) time where N is the number of results and M is the set size. Because positions are stored in a sorted set keyed by geohash, range queries on the geohash space translate efficiently to sorted set range operations. For a location service, each city or geographic shard has its own Redis GEO key (e.g., drivers:nyc). On each location update, GEOADD overwrites the previous position atomically. For nearby-driver queries, GEOSEARCH ... BYRADIUS 5 km ASC COUNT 10 returns the closest entities sorted by distance. This approach supports thousands of reads and writes per second per shard with millisecond latency.
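The geohash scoring idea can be demonstrated directly: quantize each axis, then interleave the bits into one integer so that nearby points share a common bit prefix. This sketch follows the standard geohash convention (longitude contributes the first bit); the exact quantization details of Redis's internal implementation may differ:

```python
def interleaved_geohash(lat: float, lon: float, bits_per_axis: int = 26) -> int:
    """Quantize lat/lon to `bits_per_axis` bits each, then interleave the
    bits (lon first) into a single sortable score. 26 bits per axis
    yields the 52-bit scores mentioned above."""
    lat_q = int((lat + 90.0) / 180.0 * (1 << bits_per_axis))
    lon_q = int((lon + 180.0) / 360.0 * (1 << bits_per_axis))
    score = 0
    for i in range(bits_per_axis - 1, -1, -1):
        score = (score << 1) | ((lon_q >> i) & 1)
        score = (score << 1) | ((lat_q >> i) & 1)
    return score
```

Because close points share high-order bits, a radius query becomes a small number of contiguous score ranges, which a sorted set answers efficiently.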

How do you store location history efficiently for time-range queries?

Location history is a time-series workload: high write rate, append-only, and queried by entity ID + time range. Efficient storage options: (1) Cassandra with a partition key of (entity_id, date_bucket) and a clustering key of timestamp DESC — wide rows give O(1) partition lookup and efficient range scans within a day bucket; (2) InfluxDB or TimescaleDB — purpose-built time-series databases with automatic partitioning and compression for numeric sensor data; (3) Object storage + columnar files — batch-flush location streams to Parquet files in S3 partitioned by entity_id/date for cheap long-term storage and analytical queries via Athena or BigQuery. Keep only the last N days in the hot store (Cassandra/Timescale) and archive older data to object storage. Apply coordinate quantization (store lat/lon at 5 decimal places ≈ 1.1 m precision) and delta encoding to reduce storage size.
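The quantization and delta-encoding step at the end can be shown concretely. This sketch (function names assumed) mirrors the idea behind polyline encoding: 5-decimal fixed-point integers, then deltas between consecutive fixes, which are small and compress well:

```python
def encode_track(points, precision: int = 5):
    """points: list of (lat, lon) floats. Quantize to `precision` decimals
    (~1.1 m at 5), then store deltas from the previous fix."""
    scale = 10 ** precision
    deltas, prev = [], (0, 0)
    for lat, lon in points:
        q = (round(lat * scale), round(lon * scale))
        deltas.append((q[0] - prev[0], q[1] - prev[1]))
        prev = q
    return deltas

def decode_track(deltas, precision: int = 5):
    """Inverse of encode_track: accumulate deltas, then rescale to floats."""
    scale = 10 ** precision
    out, lat_i, lon_i = [], 0, 0
    for dlat, dlon in deltas:
        lat_i += dlat
        lon_i += dlon
        out.append((lat_i / scale, lon_i / scale))
    return out
```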

How do you emit geofence entry and exit events in real time?

Geofence evaluation runs on each location update. For each incoming position, check which geofences (stored as polygons or circles in PostGIS or a spatial index like an R-tree) contain the point. Compare the result against the entity’s last known geofence membership (cached in Redis as a set per entity). If the entity is now inside a geofence it was not in before, emit an ENTERED event; if it was inside and is no longer, emit an EXITED event. Publish events to Kafka for downstream consumers (notification service, billing, analytics). Scalability considerations: (1) Spatial indexing — use an R-tree or geohash-bucketed index to limit geofence candidates to those in the entity’s vicinity rather than checking all geofences; (2) Debouncing — add a small buffer zone (hysteresis) around geofence boundaries to prevent flapping events from GPS jitter; (3) State partitioning — partition both location updates and geofence state by entity ID so a single worker owns all updates for a given entity, avoiding distributed state coordination.
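The debouncing point above is worth making concrete. A simple hysteresis scheme uses two thresholds: enter at the fence radius, but only exit once the entity is a buffer distance beyond it. The class and parameter names here are illustrative:

```python
class HysteresisGeofence:
    """Circular geofence with an exit buffer: an entity enters at
    `radius_m`, but is only considered exited once it is more than
    `radius_m + buffer_m` away, suppressing enter/exit flapping
    caused by GPS jitter near the boundary."""
    def __init__(self, radius_m: float, buffer_m: float = 25.0) -> None:
        self.radius_m = radius_m
        self.buffer_m = buffer_m

    def update(self, inside_before: bool, distance_m: float) -> bool:
        """Return the new inside/outside state given the current distance
        from the fence center."""
        if not inside_before:
            return distance_m <= self.radius_m                  # strict entry
        return distance_m <= self.radius_m + self.buffer_m      # lenient exit
```

Sizing the buffer around the typical GPS accuracy of the fleet (tens of meters) makes boundary jitter invisible to downstream consumers.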


