Problem Overview
Game state synchronization keeps all clients and the authoritative server in agreement about the game world in real time. The challenge is that network packets arrive out of order, get dropped, or are delayed — yet players expect smooth, responsive gameplay. The design must hide latency while guaranteeing correctness at the server.
Requirements and Constraints
Functional Requirements
- Server is authoritative: all game state changes originate from server simulation
- Clients send input commands; server applies them and broadcasts state
- Clients predict local state immediately on input for responsiveness
- Server reconciles client predictions and corrects divergence
- Support up to 64 players per game session at 20 Hz tick rate
Non-Functional Requirements
- State update payload per tick under 1,400 bytes (single UDP datagram) per client
- Tolerate up to 5% packet loss without visible artifacts
- Handle player RTT of 20–200 ms
- CPU budget for server simulation: under 8 ms per 50 ms tick
Core Data Model
Game State Snapshot
struct EntityState {
    entity_id: u32,          // 4 bytes
    position: Vec3f,         // 12 bytes
    velocity: Vec3f,         // 12 bytes
    orientation: Quaternion, // 16 bytes
    health: u16,             // 2 bytes
    flags: u16,              // 2 bytes: alive, crouching, firing, etc.
} // 48 bytes per entity
struct WorldSnapshot {
    tick: u32,
    timestamp_ms: u64,
    entity_count: u16,
    entities: EntityState[],
}
Input Command
struct InputCommand {
    client_tick: u32,  // 4 bytes
    move_forward: i8,  // -100 to 100
    move_right: i8,
    turn_yaw: i16,
    turn_pitch: i16,
    buttons: u16,      // jump, fire, crouch bitmask
} // 12 bytes
Key Algorithms and Logic
Delta Compression
Instead of sending a full world snapshot every tick, the server sends only entities that changed since the last acknowledged snapshot for each client. Each state update is tagged with the baseline tick it diffs against. The server maintains a circular buffer of the last 32 snapshots per client. If a client's last ACKed snapshot is older than 32 ticks (1.6 seconds at 20 Hz), the server sends a full snapshot to resync.
Quantization further reduces payload: positions are quantized to 0.01-unit precision (16-bit fixed point for coordinates within a 655m world cube), reducing Vec3 from 12 bytes to 6 bytes. Orientation uses a smallest-three quaternion encoding (3 × 15-bit components = 6 bytes). A 64-entity delta update fits comfortably within a 1,400-byte MTU.
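The position quantization above can be sketched as follows. This is a minimal illustration, assuming coordinates lie in [0.0, 655.35] world units; the function names are illustrative, not part of the wire format:

```rust
// 16-bit fixed-point quantization at 0.01-unit precision.
// One u16 step equals 0.01 world units, covering a 655.35-unit axis.

fn quantize(coord: f32) -> u16 {
    // Clamp into the representable range, then scale by 100.
    (coord.clamp(0.0, 655.35) * 100.0).round() as u16
}

fn dequantize(q: u16) -> f32 {
    q as f32 / 100.0
}

fn main() {
    let original = 123.456_f32;
    let restored = dequantize(quantize(original));
    // Round-trip error is bounded by half a quantization step (0.005).
    assert!((original - restored).abs() <= 0.005);
    println!("{original} -> {restored}");
}
```

The same clamp-scale-round pattern applies per component, turning each 12-byte Vec3 into three u16s (6 bytes).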
Client-Side Prediction
When the player presses a key, the client immediately simulates the movement locally using the same physics code as the server. The client stores a history of unacknowledged input commands in a ring buffer indexed by client tick. This allows instant visual feedback with no waiting for a server round trip.
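A minimal sketch of that input history, stored as a fixed-size ring buffer indexed by client tick modulo capacity. Struct and method names here are illustrative; the buffer size of 64 assumes inputs are acknowledged well within 64 ticks:

```rust
const BUFFER_SIZE: usize = 64;

#[derive(Clone, Copy, Debug, PartialEq)]
struct InputCommand {
    client_tick: u32,
    move_forward: i8, // -100 to 100, as in the wire format above
    move_right: i8,
}

struct InputBuffer {
    slots: [Option<InputCommand>; BUFFER_SIZE],
}

impl InputBuffer {
    fn new() -> Self {
        Self { slots: [None; BUFFER_SIZE] }
    }

    // Record an input at its tick slot. An entry 64 ticks old is
    // overwritten, which is assumed safe because it should have been
    // acknowledged long before then.
    fn store(&mut self, cmd: InputCommand) {
        self.slots[cmd.client_tick as usize % BUFFER_SIZE] = Some(cmd);
    }

    // Look up the input for a tick, if it is still in the window.
    fn get(&self, tick: u32) -> Option<InputCommand> {
        self.slots[tick as usize % BUFFER_SIZE].filter(|c| c.client_tick == tick)
    }
}

fn main() {
    let mut buf = InputBuffer::new();
    buf.store(InputCommand { client_tick: 5, move_forward: 100, move_right: 0 });
    assert!(buf.get(5).is_some());
    assert!(buf.get(6).is_none());
}
```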
Server Reconciliation
The server's state update packet includes the last processed client tick (ack_client_tick). On receipt, the client:
- Discards all buffered inputs with tick <= ack_client_tick.
- Sets the local player position to the server-authoritative position.
- Re-simulates all remaining buffered inputs on top of that position.
If the server-corrected position differs from the predicted position by more than a threshold (e.g., 0.5 units), apply a smooth correction lerp over 100 ms rather than snapping, to avoid jarring visual jumps.
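The reconciliation steps above can be sketched for a simplified 1-D position. This is a hedged illustration, assuming each input moves the player by `move_forward * SPEED` per tick; the constant and names are placeholders, not the game's actual movement model:

```rust
const SPEED: f32 = 0.05; // units per tick per input unit (assumed)

#[derive(Clone, Copy)]
struct Input {
    tick: u32,
    move_forward: i8,
}

// Shared movement step: the same code runs in prediction and replay.
fn apply(x: f32, input: &Input) -> f32 {
    x + input.move_forward as f32 * SPEED
}

// Drop acknowledged inputs, reset to the server-authoritative position,
// and re-simulate everything the server has not yet processed.
fn reconcile(pending: &mut Vec<Input>, server_x: f32, ack_tick: u32) -> f32 {
    pending.retain(|i| i.tick > ack_tick);
    pending.iter().fold(server_x, |x, i| apply(x, i))
}

fn main() {
    let mut pending: Vec<Input> =
        (1..=4).map(|tick| Input { tick, move_forward: 100 }).collect();
    // Server has applied through tick 2 and places us at x = 10.0.
    let corrected = reconcile(&mut pending, 10.0, 2);
    println!("corrected x = {corrected}, {} inputs replayed", pending.len());
}
```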
Lag Compensation
When a player fires a shot, the server receives the input command at time T + RTT/2. The target entity has moved since the shooter aimed. Lag compensation rewinds the server's entity history to the tick corresponding to the shooter's view time, performs the hit detection against the historical positions, then unwinds. This makes hits feel fair: if the crosshair was on the enemy at fire time on the client, the server confirms the hit.
History depth: store 1 second of entity state snapshots server-side per session (20 snapshots × N entities). Hit validation is capped at 200 ms rewind to prevent abuse by high-latency players.
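The rewind lookup can be sketched as below, assuming one snapshot per tick and a 4-tick rewind cap (200 ms at 50 ms per tick). Entity states are elided; names are illustrative:

```rust
const MAX_REWIND_TICKS: u32 = 4; // 200 ms at 50 ms per tick

struct Snapshot {
    tick: u32,
    // entity positions elided for brevity
}

// Returns the historical snapshot to validate a shot against, or None
// if the claimed fire tick is outside the allowed rewind window.
fn rewind(history: &[Snapshot], current_tick: u32, shot_tick: u32) -> Option<&Snapshot> {
    if shot_tick > current_tick || current_tick - shot_tick > MAX_REWIND_TICKS {
        return None; // reject future or overly stale shot claims
    }
    history.iter().find(|s| s.tick == shot_tick)
}

fn main() {
    // 1 second of history: ticks 80..=99, with 99 as the current tick.
    let history: Vec<Snapshot> = (80..100).map(|tick| Snapshot { tick }).collect();
    assert!(rewind(&history, 99, 97).is_some());
    assert!(rewind(&history, 99, 90).is_none()); // 450 ms back: beyond the cap
}
```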
Entity Relevance and Interest Management
Not all entities are relevant to every client. The server uses a relevance system: entities within a configurable radius or in the client's field of view are flagged relevant. Irrelevant entities are excluded from delta updates, drastically reducing bandwidth for large maps. Relevance is re-evaluated every 5 ticks to amortize cost.
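The radius test in that relevance check might look like the following sketch (field-of-view filtering omitted; the tuple representation is just for illustration):

```rust
// An entity is included in a client's delta update only if it lies
// within `radius` of the client's position.
fn is_relevant(client: (f32, f32, f32), entity: (f32, f32, f32), radius: f32) -> bool {
    let dx = client.0 - entity.0;
    let dy = client.1 - entity.1;
    let dz = client.2 - entity.2;
    // Compare squared distances to avoid a sqrt per entity per client.
    dx * dx + dy * dy + dz * dz <= radius * radius
}

fn main() {
    let client = (0.0, 0.0, 0.0);
    assert!(is_relevant(client, (3.0, 4.0, 0.0), 5.0)); // exactly on the boundary
    assert!(!is_relevant(client, (30.0, 40.0, 0.0), 5.0));
}
```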
Transport Layer
Use UDP with a thin reliability layer implemented in userspace. Game inputs use unreliable delivery — missing inputs are covered by the client re-sending the last N inputs in every packet (redundant transmission). State updates use unreliable delivery with sequence numbers; the client ACKs the latest received tick. Critical events (player kill, item pickup) use a lightweight reliable channel: a sequenced queue with retransmit on timeout, piggybacked on the next outgoing packet.
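Server-side handling of the redundant input transmission can be sketched as below: given the batch of recent inputs from one packet, keep only ticks the server has not yet applied. Names are illustrative:

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
struct InputCommand {
    client_tick: u32,
    buttons: u16,
}

// Filter a packet's redundant inputs down to the ones not yet applied,
// sorted so they can be simulated in tick order. Any tick lost in a
// dropped packet is recovered from a later packet's redundant copy.
fn fresh_inputs(redundant: &[InputCommand], last_applied: u32) -> Vec<InputCommand> {
    let mut fresh: Vec<InputCommand> = redundant
        .iter()
        .copied()
        .filter(|c| c.client_tick > last_applied)
        .collect();
    fresh.sort_by_key(|c| c.client_tick);
    fresh
}

fn main() {
    // Packet carries ticks 5..=8; the server has applied through tick 6.
    let redundant = vec![
        InputCommand { client_tick: 8, buttons: 0 },
        InputCommand { client_tick: 5, buttons: 0 },
        InputCommand { client_tick: 7, buttons: 1 },
        InputCommand { client_tick: 6, buttons: 0 },
    ];
    let fresh = fresh_inputs(&redundant, 6);
    println!("{} fresh inputs", fresh.len());
}
```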
API Design (Server Internal)
// Client -> Server (UDP)
InputPacket {
    session_id: u64,
    redundant_inputs: InputCommand[8], // last 8 inputs for loss recovery
    ack_server_tick: u32
}

// Server -> Client (UDP)
StateUpdatePacket {
    server_tick: u32,
    ack_client_tick: u32,
    baseline_tick: u32,
    delta_entities: DeltaEntityState[],
    reliable_events: GameEvent[]
}
Scalability Considerations
- Tick budget: Profile entity simulation, collision detection, and broadcast serialization separately. Use spatial hashing for collision to keep it O(n) rather than O(n²).
- Session isolation: Each game session runs in its own goroutine or OS thread; no shared state between sessions eliminates lock contention.
- Spectator mode: Spectators receive a delayed (2-second buffered) stream to allow smooth playback at minimal extra server cost.
- Cheat prevention: Server never trusts client position — only client inputs. Movement speed validation rejects inputs that imply super-human velocity.
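One plausible form of the movement validation mentioned above: each axis is already bounded to [-100, 100] by the wire type, but a modified client could send (100, 100) for a ~1.41x diagonal speed-up, so the server bounds the combined magnitude instead. This is a sketch, not the full validation:

```rust
// Reject inputs whose combined move vector exceeds the per-axis cap.
fn input_is_plausible(move_forward: i8, move_right: i8) -> bool {
    let f = move_forward as f32;
    let r = move_right as f32;
    // Compare squared magnitudes; max legitimate speed is 100 units.
    f * f + r * r <= 100.0 * 100.0
}

fn main() {
    assert!(input_is_plausible(100, 0));    // full forward: fine
    assert!(!input_is_plausible(100, 100)); // diagonal speed hack: rejected
}
```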
Failure Modes and Mitigations
- Packet loss burst: Client plays the last known state with extrapolation (dead reckoning) for up to 500 ms before showing a “connection lost” indicator.
- Clock desync: Use NTP-like clock synchronization at session start: client measures RTT over 10 samples, calculates server clock offset, stores it for timestamp translation.
- Player disconnect: Server freezes the disconnected player's entity for 10 seconds awaiting reconnect, then removes it from simulation.
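The clock-offset estimate in the clock-desync mitigation can be sketched as below. This assumes each ping sample records the client send time, the server timestamp in the reply, and the client receive time; picking the sample with the smallest RTT filters out queueing delay. Names are illustrative:

```rust
struct PingSample {
    client_send_ms: u64,
    server_time_ms: u64,
    client_recv_ms: u64,
}

// offset ≈ server_time - midpoint(client_send, client_recv), taken from
// the lowest-RTT sample; returns 0 if no samples were collected.
fn estimate_offset(samples: &[PingSample]) -> i64 {
    samples
        .iter()
        .min_by_key(|s| s.client_recv_ms - s.client_send_ms)
        .map(|s| s.server_time_ms as i64 - ((s.client_send_ms + s.client_recv_ms) / 2) as i64)
        .unwrap_or(0)
}

fn main() {
    let samples = vec![
        PingSample { client_send_ms: 1000, server_time_ms: 2050, client_recv_ms: 1100 },
        PingSample { client_send_ms: 1200, server_time_ms: 2400, client_recv_ms: 1500 },
    ];
    let offset = estimate_offset(&samples);
    println!("server clock offset = {offset} ms");
}
```

The client stores the resulting offset and adds it to local timestamps whenever it needs to interpret server time.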