What Is an Offline-First Service?
An offline-first service is designed to function fully without network connectivity by storing data locally on the device, queuing mutations made offline, and reconciling with the server when connectivity is restored. Progressive web apps, mobile note-taking apps, and field data-collection tools are canonical examples. The design challenge is presenting a consistent UI during offline periods and reliably reconciling state without data loss or duplication on reconnect.
Requirements
Functional Requirements
- Read and write data without network connectivity using a local store
- Queue all offline mutations in a persistent sync queue
- Replay the sync queue on reconnect and reconcile with the server
- Reflect optimistic UI updates immediately on write, correcting on server response
- Detect and resolve conflicts between offline mutations and server changes
- Indicate sync status (synced, pending, conflict) per record in the UI
Non-Functional Requirements
- Zero data loss: queued mutations must survive app restart and device reboot
- Idempotent sync: retrying a queued operation must not produce duplicates
- Convergence: local and server state must eventually match
- Sync queue must drain in order per record to preserve causal consistency
Data Model
Local Store (SQLite on mobile / IndexedDB in browser)
- local_records: record_id, type, payload (JSON), sync_status (synced, pending, conflict), local_version, server_version, updated_at
- sync_queue: queue_id, record_id, operation (create, update, delete), payload (JSON), idempotency_key (UUID), created_at, attempt_count, last_attempt_at, status (pending, in_flight, failed)
Server Store
- records: record_id, user_id, type, payload (JSON), version (Lamport counter), is_deleted, created_at, updated_at
- sync_cursors: device_id, user_id, last_server_version
The idempotency_key on sync_queue entries is a client-generated UUID created when the mutation is first queued. The server uses this key to deduplicate replayed operations.
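The local-store tables above can be sketched as SQLite DDL. This is a minimal sketch, not a prescribed schema: the column types, CHECK constraints, and index name are assumptions layered on the column list in the data model.

```python
import sqlite3

# Minimal local-store schema sketch; names follow the data model above,
# types and constraints are assumptions.
DDL = """
CREATE TABLE IF NOT EXISTS local_records (
    record_id      TEXT PRIMARY KEY,
    type           TEXT NOT NULL,
    payload        TEXT NOT NULL,              -- JSON
    sync_status    TEXT NOT NULL DEFAULT 'synced'
                   CHECK (sync_status IN ('synced', 'pending', 'conflict')),
    local_version  INTEGER NOT NULL DEFAULT 0,
    server_version INTEGER,
    updated_at     TEXT NOT NULL
);

CREATE TABLE IF NOT EXISTS sync_queue (
    queue_id        INTEGER PRIMARY KEY AUTOINCREMENT,
    record_id       TEXT NOT NULL,
    operation       TEXT NOT NULL CHECK (operation IN ('create', 'update', 'delete')),
    payload         TEXT,                      -- JSON
    idempotency_key TEXT NOT NULL UNIQUE,      -- client-generated UUID
    created_at      TEXT NOT NULL,
    attempt_count   INTEGER NOT NULL DEFAULT 0,
    last_attempt_at TEXT,
    status          TEXT NOT NULL DEFAULT 'pending'
                    CHECK (status IN ('pending', 'in_flight', 'failed'))
);

-- Supports draining the queue in created_at order within each record_id.
CREATE INDEX IF NOT EXISTS idx_queue_record
    ON sync_queue (record_id, created_at);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(DDL)
```

The UNIQUE constraint on idempotency_key guards against the same mutation being queued twice locally; the server-side dedup is separate.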
Core Algorithms
Optimistic UI
On local write: immediately update local_records with the new payload and set sync_status=pending. Insert a sync_queue entry. Render the UI from local_records without waiting for server confirmation. This gives the user instant feedback. When the server response arrives, update local_version to match server_version and set sync_status=synced. If the server returns a conflict, update sync_status=conflict and surface the discrepancy.
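The local write path can be sketched as a single transaction that updates the record and enqueues the mutation together, so a crash between the two steps cannot strand an unsynced edit. The `open_store` and `optimistic_write` names and the simplified table shapes are illustrative, not part of the design above.

```python
import json
import sqlite3
import uuid
from datetime import datetime, timezone

def open_store() -> sqlite3.Connection:
    # Simplified local-store schema for the sketch.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE local_records (
            record_id TEXT PRIMARY KEY, payload TEXT,
            sync_status TEXT, local_version INTEGER, updated_at TEXT);
        CREATE TABLE sync_queue (
            queue_id INTEGER PRIMARY KEY AUTOINCREMENT, record_id TEXT,
            operation TEXT, payload TEXT, idempotency_key TEXT UNIQUE,
            created_at TEXT, attempt_count INTEGER DEFAULT 0, status TEXT);
    """)
    return conn

def optimistic_write(conn, record_id: str, payload: dict, operation: str = "update"):
    now = datetime.now(timezone.utc).isoformat()
    body = json.dumps(payload)
    with conn:  # one transaction: record update and queue entry commit together
        conn.execute(
            """INSERT INTO local_records (record_id, payload, sync_status, local_version, updated_at)
               VALUES (?, ?, 'pending', 1, ?)
               ON CONFLICT(record_id) DO UPDATE SET
                   payload = excluded.payload,
                   sync_status = 'pending',
                   local_version = local_version + 1,
                   updated_at = excluded.updated_at""",
            (record_id, body, now),
        )
        conn.execute(
            """INSERT INTO sync_queue (record_id, operation, payload, idempotency_key, created_at, status)
               VALUES (?, ?, ?, ?, ?, 'pending')""",
            (record_id, operation, body, str(uuid.uuid4()), now),
        )

conn = open_store()
optimistic_write(conn, "note-1", {"title": "Draft"})
status, = conn.execute(
    "SELECT sync_status FROM local_records WHERE record_id = 'note-1'").fetchone()
# status is 'pending' until the server confirms the mutation
```

The UI renders from `local_records` immediately after the transaction commits, which is what makes the update feel instant.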
Sync Queue Management
The sync worker processes queue entries in order of created_at within each record_id. It sends each operation to the server with the idempotency_key. On 2xx response, delete the queue entry and update local_records. On 409 Conflict, fetch the server version, run the merge strategy (LWW or field-level merge), apply the merged result locally, re-enqueue as an update with a new idempotency_key. On 5xx or network error, back off exponentially and increment attempt_count; move to failed after max_attempts.
Ordering per record is critical: if operation A and operation B on the same record are both queued, B must not be sent before A commits, otherwise the server may apply them out of causal order. Process queue entries per record_id sequentially; parallelize across different record_ids.
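The per-record ordering rule can be sketched as a drain loop that groups queue entries by record_id and stops replaying a record's operations at the first non-success, so a later operation never overtakes an earlier one. `send` is a stand-in for the HTTP call (which would carry the idempotency_key); real code would parallelize across the groups and delete completed rows.

```python
import itertools
from dataclasses import dataclass

MAX_ATTEMPTS = 5  # assumed retry budget before marking an entry failed

@dataclass
class QueueEntry:
    record_id: str
    operation: str
    idempotency_key: str
    created_at: int
    attempt_count: int = 0
    status: str = "pending"

def drain(entries, send):
    # Strict created_at order within each record_id.
    entries.sort(key=lambda e: (e.record_id, e.created_at))
    for record_id, group in itertools.groupby(entries, key=lambda e: e.record_id):
        for entry in group:
            result = send(entry)  # stand-in for the server round-trip
            if result == "ok":
                entry.status = "done"          # delete the queue row in practice
            elif result == "conflict":
                entry.status = "conflict"      # hand off to the merge strategy
                break                          # don't replay later ops on a stale base
            else:                              # 5xx / network error: back off and retry
                entry.attempt_count += 1
                entry.status = "failed" if entry.attempt_count >= MAX_ATTEMPTS else "pending"
                break                          # preserve ordering: stop this record

q = [
    QueueEntry("note-1", "create", "k1", created_at=1),
    QueueEntry("note-1", "update", "k2", created_at=2),
]
responses = iter(["ok", "retry"])
drain(q, send=lambda e: next(responses))
# the create commits; the update stays pending with one recorded attempt
```

Breaking out of a record's group on any failure is what enforces causal order: the queue never sends B while A is unresolved.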
Server Reconciliation on Reconnect
On reconnect: first pull all server changes since last_server_version for the device (GET /sync?since=cursor), merge into local_records (server wins for synced records; for pending records, record the conflict). Then drain the sync queue. Update the sync cursor to the latest server version after pull completes. This pull-before-push order prevents sending stale data that overwrites newer server state.
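The pull-before-push sequence can be sketched with an in-memory client. `FakeClient`, `pull_changes`, and `drain_queue` are hypothetical stand-ins for the real sync client and its server round-trips; the point is the ordering of the three steps.

```python
from dataclasses import dataclass, field

@dataclass
class FakeClient:
    cursor: int = 0
    records: dict = field(default_factory=dict)
    queue: list = field(default_factory=list)
    # server_changes: (server_version, record_id, payload) tuples
    server_changes: list = field(default_factory=list)

    def pull_changes(self, since):
        fresh = [c for c in self.server_changes if c[0] > since]
        new_cursor = max((c[0] for c in fresh), default=since)
        return fresh, new_cursor

    def drain_queue(self):
        self.queue.clear()  # pretend every queued op was accepted

def reconnect_sync(client: FakeClient):
    # 1. Pull everything the server accepted since our cursor.
    changes, new_cursor = client.pull_changes(since=client.cursor)
    for version, record_id, payload in changes:
        local = client.records.get(record_id)
        if local and local["sync_status"] == "pending":
            local["sync_status"] = "conflict"  # offline edit collides with server change
        else:
            client.records[record_id] = {"payload": payload, "sync_status": "synced"}
    # 2. Push: drain the queue only after local state is rebased on the pull.
    client.drain_queue()
    # 3. Advance the cursor once the pull has been fully applied.
    client.cursor = new_cursor

c = FakeClient(
    cursor=2,
    records={"a": {"payload": "local-edit", "sync_status": "pending"}},
    queue=["queued-op"],
    server_changes=[(3, "a", "server-a"), (4, "b", "server-b"), (1, "c", "old")],
)
reconnect_sync(c)
# "a" becomes a conflict, "b" arrives synced, "c" (version <= cursor) is skipped
```

Advancing the cursor last means a crash mid-sync re-pulls the same window; combined with idempotent application, that is safe.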
API Design
- GET /sync?since=cursor&device_id= — pull server changes since cursor; returns records and new cursor
- POST /records — create with idempotency_key header
- PUT /records/{id} — update with If-Match: server_version for optimistic locking
- DELETE /records/{id} — soft delete; propagates as tombstone
- POST /sync/batch — send multiple queued operations in one request for efficiency
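The If-Match optimistic-locking check behind PUT /records/{id} can be sketched as a version comparison on the server. The `put_record` handler and dict-based store are assumptions for illustration, not a real framework.

```python
def put_record(store: dict, record_id: str, payload: dict, if_match: int):
    """Apply an update only if the client's known server version is current."""
    current = store.get(record_id)
    if current is None:
        return 404, None
    if current["version"] != if_match:
        return 409, current            # stale: client must pull, merge, and retry
    updated = {"payload": payload, "version": current["version"] + 1}
    store[record_id] = updated
    return 200, updated

store = {"r1": {"payload": {}, "version": 3}}
stale_code, _ = put_record(store, "r1", {"x": 1}, if_match=2)
# stale_code is 409: the client's base version is behind the server
code, body = put_record(store, "r1", {"x": 1}, if_match=3)
# code is 200 and the stored version advances to 4
```

Returning the current record with the 409 saves the client a follow-up GET before running its merge strategy.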
Scalability Considerations
The local SQLite or IndexedDB store is bounded by device storage. Implement a cache eviction policy for old synced records not accessed recently, keeping only a configurable recency window locally. Records evicted from local cache are re-fetched from the server on demand.
The server sync pull query uses WHERE user_id = ? AND version > ? with an index on (user_id, version). A Redis-based Lamport counter per user assigns version numbers atomically.
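The pull query and its covering index can be sketched against SQLite; the production version-assignment step (a Redis INCR per user) is outside this sketch, so the sample rows carry preassigned versions.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE records (
        record_id TEXT PRIMARY KEY, user_id TEXT, payload TEXT,
        version INTEGER, is_deleted INTEGER DEFAULT 0);
    -- Covering index for the cursor-based pull.
    CREATE INDEX idx_records_user_version ON records (user_id, version);
""")
rows = [("r1", "u1", "{}", 1), ("r2", "u1", "{}", 5), ("r3", "u2", "{}", 2)]
conn.executemany(
    "INSERT INTO records (record_id, user_id, payload, version) VALUES (?, ?, ?, ?)",
    rows)

def pull(conn, user_id, since):
    """Return this user's changes after the cursor, plus the new cursor."""
    cur = conn.execute(
        "SELECT record_id, payload, version FROM records "
        "WHERE user_id = ? AND version > ? ORDER BY version",
        (user_id, since))
    changes = cur.fetchall()
    new_cursor = changes[-1][2] if changes else since
    return changes, new_cursor

changes, cursor = pull(conn, "u1", since=1)
# only r2 (version 5) qualifies; the cursor advances to 5
```

Ordering by version and taking the last row's version as the new cursor keeps the cursor monotone even when a pull returns nothing.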
For apps with large binary attachments (images, files), sync only metadata locally and download attachment bytes lazily via presigned URLs. Store attachment sync state separately to avoid blocking text record sync on large downloads.
Summary
An offline-first service keeps a local SQLite or IndexedDB store as the primary read/write target, uses a persistent sync queue with idempotency keys to replay mutations reliably, and applies optimistic UI to give instant feedback. Reconnect reconciliation pulls server changes before draining the queue, and conflicts are resolved per-record using merge strategies surfaced to the user when automatic resolution is ambiguous.
FAQ
How do you structure a local SQLite or IndexedDB store for offline-first apps?
Mirror the server's core data model locally, adding sync metadata columns: server_id, local_id, sync_status (synced/pending/conflict), updated_at (local logical clock), and server_updated_at. Use SQLite on mobile (via expo-sqlite or SQLCipher for encryption) and IndexedDB on web (via Dexie.js or idb). Keep the schema minimal: avoid joins that are expensive in local stores, and denormalize where read performance matters.
How do you manage a sync queue for offline edits?
Append every local mutation to a durable outbox table: (id, operation, entity_type, entity_id, payload, idempotency_key, attempts, created_at). On reconnect, process the outbox in order, sending each operation to the server. On success, remove the row; on a retriable error, increment attempts with exponential backoff; on a terminal error (4xx), mark the entry as failed and surface it to the user. The idempotency key prevents duplicate server mutations on retry.
What are optimistic UI updates and when do they go wrong?
Optimistic UI applies the mutation to local state immediately, before server confirmation, so the UI feels instant. If the server rejects the mutation (conflict, validation error, auth failure), roll back the local state and notify the user. Rollback is safe only if you snapshot the pre-mutation state. Failures are rare on well-validated inputs, so optimistic updates are appropriate for most CRUD operations. Avoid them for irreversible actions like payments.
How does server reconciliation work on reconnect?
On reconnect, the client sends its high-water mark (last synced server timestamp or sequence number). The server returns all changes after that mark. The client applies server changes to local state, skipping records already in the outbox (those will be sent up). After flushing the outbox, run a final reconciliation pass to resolve any conflicts between server-applied changes and pending local edits. Mark sync_status=synced on all reconciled rows.