Geofencing Service Low-Level Design: Polygon Storage, Point-in-Polygon Check, and Entry/Exit Events

Geofencing Service: Overview and Requirements

A geofencing service stores geographic boundary polygons, evaluates whether a given coordinate falls inside any active fence, and publishes entry and exit events when a tracked entity crosses a boundary. It underpins location-based notifications, delivery zone enforcement, and fleet management features.

Functional Requirements

  • Create, update, and delete geofence polygons (convex and concave, with hole support).
  • Evaluate a (latitude, longitude) coordinate against all active fences within a region in real time.
  • Track entity state: record whether an entity was inside or outside each fence at its last known position.
  • Fire an ENTRY event when an entity transitions from outside to inside a fence.
  • Fire an EXIT event when an entity transitions from inside to outside a fence.
  • Support configurable notification channels per fence: webhook, push notification, or message queue.

Non-Functional Requirements

  • Evaluate a location update against up to 10,000 active fences within 50 ms.
  • Handle 100,000 location updates per second across all tracked entities.
  • Store geofence polygons totaling up to 10 million vertices across all fences.
  • Event delivery at-least-once with deduplication on the consumer side.
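Because delivery is at-least-once, the consumer must tolerate duplicates. A minimal sketch of consumer-side deduplication keyed on event_id; the class name and unbounded in-memory set are illustrative assumptions — a production consumer would more likely use a TTL'd Redis set or a database uniqueness constraint:

```python
class DedupingConsumer:
    """Drops duplicate geofence events by event_id (in-memory sketch)."""

    def __init__(self):
        self.seen = set()       # event_ids already processed
        self.processed = []     # events accepted exactly once

    def handle(self, event):
        """Return True if the event was newly processed, False if a duplicate."""
        if event["event_id"] in self.seen:
            return False        # duplicate redelivery: silently drop
        self.seen.add(event["event_id"])
        self.processed.append(event)
        return True
```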

Data Model

Geofence

  • fence_id — UUID.
  • name, description.
  • geometry — stored as PostGIS GEOMETRY(POLYGON, 4326) for spatial indexing.
  • bounding_box — precomputed GEOMETRY(POLYGON) of the minimum bounding rectangle (MBR) for fast candidate filtering.
  • owner_id — tenant or user that owns the fence.
  • status — ACTIVE, PAUSED, DELETED.
  • metadata — JSONB for application-specific attributes (zone type, speed limit, etc.).
  • created_at, updated_at.

Entity State

  • entity_id — identifier of the tracked object (vehicle, device, user).
  • fence_id — the fence this state entry tracks.
  • inside — boolean, true if the entity was last observed inside the fence.
  • last_location — GEOMETRY(POINT) of the most recent update.
  • last_updated_at — timestamp of the last location update that affected this state.

Geofence Event

  • event_id — UUID.
  • entity_id, fence_id.
  • event_type — ENTRY or EXIT.
  • location — GEOMETRY(POINT) at the moment of transition.
  • occurred_at.
  • delivered_at — set when the notification is confirmed delivered.
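The three records above can be sketched as Python dataclasses. This is an in-memory illustration only: geometry is held as a WKT string and the MBR as a plain tuple, standing in for the PostGIS column types described above.

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum
from typing import Optional

class FenceStatus(Enum):
    ACTIVE = "ACTIVE"
    PAUSED = "PAUSED"
    DELETED = "DELETED"

@dataclass
class Geofence:
    fence_id: str
    name: str
    geometry_wkt: str            # e.g. "POLYGON((...))", SRID 4326
    bounding_box: tuple          # (min_lon, min_lat, max_lon, max_lat) MBR
    owner_id: str
    status: FenceStatus = FenceStatus.ACTIVE
    metadata: dict = field(default_factory=dict)

@dataclass
class EntityState:
    entity_id: str
    fence_id: str
    inside: bool                 # last observed containment result
    last_location: tuple         # (lon, lat)
    last_updated_at: datetime

@dataclass
class GeofenceEvent:
    event_id: str
    entity_id: str
    fence_id: str
    event_type: str              # "ENTRY" or "EXIT"
    location: tuple              # (lon, lat) at the moment of transition
    occurred_at: datetime
    delivered_at: Optional[datetime] = None   # set on confirmed delivery
```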

Core Algorithms

Spatial Index and Candidate Filtering

Naively checking every incoming coordinate against every polygon costs O(F * V), where F is the number of fences and V is the average vertex count. Use a two-stage approach:

  • Stage 1 — Bounding-Box Filter: use the PostGIS GiST spatial index on the bounding_box column to retrieve fences whose MBR contains the input point. This fast index scan reduces the candidate set from thousands of fences to tens.
  • Stage 2 — Point-in-Polygon Test: apply the ray casting algorithm to the full polygon geometry for each candidate. A ray cast from the query point in the positive X direction counts edge crossings; an odd count means inside.

Point-in-Polygon via Ray Casting

For each candidate fence polygon, iterate over edges. For each edge from vertex A to vertex B, determine whether the horizontal ray from the query point intersects the edge. Count intersections. Odd count: inside. Even count: outside. Handle degenerate cases: point on vertex, point on edge (treat as inside), and horizontal edges (skip).
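The two-stage check can be sketched as follows. This standalone version substitutes a plain tuple MBR for the GiST index scan, and its boundary behavior follows ordinary floating-point comparisons rather than the exact on-vertex/on-edge rules above — a sketch of the technique, not the production path.

```python
def point_in_polygon(point, polygon):
    """Ray casting: cast a ray in +x from the point, count edge crossings.

    polygon is a list of (x, y) vertices; odd crossing count means inside.
    """
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge straddle the horizontal line through the point?
        # (Horizontal edges have y1 == y2, so this test also skips them.)
        if (y1 > y) != (y2 > y):
            # x coordinate where the edge crosses that horizontal line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:          # crossing is on the +x ray
                inside = not inside
    return inside

def mbr_contains(bbox, point):
    """Stage 1: cheap MBR containment (stands in for the index scan)."""
    min_x, min_y, max_x, max_y = bbox
    x, y = point
    return min_x <= x <= max_x and min_y <= y <= max_y

def candidate_fences(point, fences):
    """Filter fences (dicts with a 'bbox' tuple) by bounding box."""
    return [f for f in fences if mbr_contains(f["bbox"], point)]
```

Stage 2 then runs `point_in_polygon` against the full geometry of each survivor of `candidate_fences`.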

Entry/Exit State Machine

For each (entity_id, fence_id) pair, the state machine has two states: OUTSIDE and INSIDE. On each location update:

  • Retrieve the current state from Redis (fast path) or the entity_state table (cold start).
  • Evaluate point-in-polygon for this fence.
  • If result differs from stored state, record the transition, update stored state, and enqueue an event.
  • If result matches stored state, update last_location and last_updated_at only — no event.
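The update loop above can be sketched per (entity_id, fence_id) pair. The containment result is passed in as a boolean so the sketch stays self-contained; the choice to seed state silently on an entity's first sighting (rather than firing an event) is an assumption the source leaves open:

```python
def process_update(state_store, entity_id, fence_id, point, inside_now, now):
    """Apply one location update to one (entity, fence) state machine.

    state_store is any dict-like store (Redis hash in production).
    Returns an event dict on an OUTSIDE<->INSIDE transition, else None.
    """
    key = (entity_id, fence_id)
    prev = state_store.get(key)          # None on first sighting (cold start)
    event = None
    if prev is not None and prev["inside"] != inside_now:
        event = {
            "entity_id": entity_id,
            "fence_id": fence_id,
            "event_type": "ENTRY" if inside_now else "EXIT",
            "location": point,
            "occurred_at": now,
        }
    # Always refresh stored state, event or not.
    state_store[key] = {"inside": inside_now, "last_location": point,
                        "last_updated_at": now}
    return event
```

In practice a debounce layer (requiring N consecutive consistent readings before committing a transition) is worth adding in front of this, to prevent event flapping when an entity hovers at a polygon boundary.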

Scalability Design

  • Shard location update processing by entity_id using a Kafka topic. Each consumer partition handles a subset of entities, maintaining in-memory state for its assigned entities to avoid Redis round trips on every update.
  • Cache active fence geometries in memory on each evaluator node, refreshed every 60 seconds via a change-data-capture stream from PostgreSQL. This eliminates a database query per location update.
  • Store entity state in Redis as a hash keyed by entity_id with a nested field per fence_id. TTL of 24 hours after last update to evict state for idle entities.
  • Use H3 or S2 geospatial indexing as a pre-filter on the ingestor side: each fence is registered under the H3 cells it intersects; a location update first looks up fences registered in the cells covering the query point before running bounding-box and polygon checks.
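The cell pre-filter can be sketched with a plain lat/lon grid standing in for H3 or S2 cells; a real deployment would derive cell IDs from the H3 or S2 libraries (e.g. covering each fence polygon with cells at a fixed resolution), which this toy grid only approximates:

```python
from collections import defaultdict

class CellIndex:
    """Coarse grid pre-filter standing in for H3/S2 cell registration.

    A 'cell' here is just a lat/lon grid square identified by integer
    coordinates; fences are registered under every cell they intersect.
    """

    def __init__(self, cell_deg=0.1):
        self.cell_deg = cell_deg
        self.cell_to_fences = defaultdict(set)

    def _cell(self, lon, lat):
        d = self.cell_deg
        return (int(lon // d), int(lat // d))

    def register_fence(self, fence_id, cells):
        # cells: precomputed set of grid cells the fence polygon intersects
        for c in cells:
            self.cell_to_fences[c].add(fence_id)

    def candidates(self, lon, lat):
        """Fences registered in the cell covering this point."""
        return self.cell_to_fences.get(self._cell(lon, lat), set())
```

A location update consults `candidates()` first; only the returned fence IDs proceed to the bounding-box and full polygon checks.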

API Design

  • POST /v1/fences — create a geofence; accepts GeoJSON polygon; returns fence_id.
  • PUT /v1/fences/{fence_id} — update geometry or metadata; triggers cache invalidation.
  • DELETE /v1/fences/{fence_id} — soft delete; marks status DELETED.
  • POST /v1/location-updates — ingest a batch of entity location updates; returns event IDs for any transitions detected synchronously (optional low-latency mode).
  • GET /v1/entities/{entity_id}/state — return current inside/outside state for all fences the entity is being tracked against.
  • GET /v1/fences/{fence_id}/entities — return all entities currently inside a specific fence.
  • GET /v1/events?fence_id={id}&entity_id={id}&start={ts}&end={ts} — query historical entry/exit events.
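As a sketch of the create-fence request body: the API above only states that POST /v1/fences accepts a GeoJSON polygon, so the top-level field names here (name, owner_id, metadata) are assumptions. GeoJSON requires a polygon's ring to be explicitly closed, which the helper enforces:

```python
import json

def make_fence_payload(name, owner_id, ring, metadata=None):
    """Build a JSON body for POST /v1/fences (field names assumed).

    ring is a list of (lon, lat) pairs; the first point is appended at
    the end if the ring is not already closed, as GeoJSON requires.
    """
    coords = [list(p) for p in ring]
    if coords[0] != coords[-1]:
        coords.append(list(coords[0]))   # close the ring
    return json.dumps({
        "name": name,
        "owner_id": owner_id,
        "geometry": {"type": "Polygon", "coordinates": [coords]},
        "metadata": metadata or {},
    })
```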

Observability

  • Track point-in-polygon evaluation latency as a histogram to detect degradation when fence complexity increases.
  • Monitor the candidate fan-out ratio: average number of fences that pass bounding-box filter per location update. A high ratio suggests the spatial index is not selective enough and fence geometries may need to be split.
  • Alert when event delivery lag exceeds 5 seconds; this indicates a downstream notification channel is backing up and entities are crossing fences without timely notifications reaching consumers.
  • Emit a fence coverage metric per H3 cell to support capacity planning when new geographic regions see a spike in fence registrations.
