System Design: Gaming Leaderboard at Scale
Designing a leaderboard is a popular system design question at gaming companies (Roblox, Riot Games, Epic Games) and general tech companies (Google, Meta, Amazon). It tests your understanding of sorted data structures, real-time updates, and scaling challenges. The core challenge: ranking millions of players with low-latency reads (sub-millisecond at the data store, under 100ms end to end at p99) while handling thousands of score updates per second.
Requirements Clarification
- Functional: Update player score; get global rank for a player; get top-K players globally; get players around a given rank (rank ±50); support multiple leaderboards (weekly, all-time, regional)
- Non-functional: 10M players; 10K score updates/second; read latency <100ms at p99; write latency <50ms; 99.9% availability
- Scope out: Authentication, friend leaderboards (much more complex), anti-cheat
Data Model
```sql
-- PostgreSQL schema for persistent storage
CREATE TABLE players (
    player_id  BIGINT PRIMARY KEY,
    username   TEXT NOT NULL,
    created_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE TABLE leaderboard_scores (
    leaderboard_id TEXT NOT NULL,    -- 'global', 'weekly_2026_w15', 'region_na'
    player_id      BIGINT NOT NULL,
    score          BIGINT NOT NULL DEFAULT 0,
    last_updated   TIMESTAMPTZ DEFAULT NOW(),
    PRIMARY KEY (leaderboard_id, player_id)
);

-- Index for ranking queries
CREATE INDEX idx_lb_score ON leaderboard_scores (leaderboard_id, score DESC);
```
Core Data Structure: Redis Sorted Set
Redis Sorted Sets (ZSET) are the ideal data structure for leaderboards. They maintain elements sorted by score with O(log N) operations.
```python
import redis
from typing import List, Optional, Tuple


class LeaderboardService:
    """
    Redis-backed leaderboard using Sorted Sets (ZSET).

    Redis ZSET commands:
    - ZADD: add/update member with score — O(log N)
    - ZREVRANK: get 0-indexed rank (highest score first) — O(log N)
    - ZREVRANGE: get members in rank range — O(log N + K)
    - ZREVRANGEBYSCORE: get members in score range — O(log N + K)
    - ZCARD: get total members — O(1)
    - ZSCORE: get score for member — O(1)

    A ZSET can hold up to 2^32 - 1 members.
    At 10M players, each ZSET ≈ 300MB (manageable on a single Redis node).
    For 100M+ players: shard the ZSETs (see the scaling section below).
    """

    def __init__(self, redis_client: redis.Redis):
        self.r = redis_client

    def _key(self, leaderboard_id: str) -> str:
        return f"lb:{leaderboard_id}"

    def update_score(
        self,
        leaderboard_id: str,
        player_id: int,
        new_score: int,
    ) -> None:
        """
        Set a player's score. The GT flag (Redis 6.2+) only applies the
        update if new_score > current score, so scores never go backward.
        (NX, by contrast, would only add the member if it doesn't exist yet.)
        Time: O(log N)
        """
        key = self._key(leaderboard_id)
        self.r.zadd(key, {str(player_id): new_score}, gt=True)

    def increment_score(
        self,
        leaderboard_id: str,
        player_id: int,
        delta: int,
    ) -> int:
        """
        Atomically add delta to a player's score.
        Use for games where score accumulates (kills, points earned).
        Time: O(log N). Returns the new score.
        """
        key = self._key(leaderboard_id)
        return int(self.r.zincrby(key, delta, str(player_id)))

    def get_rank(
        self,
        leaderboard_id: str,
        player_id: int,
    ) -> Optional[int]:
        """
        Get 1-indexed rank (rank 1 = highest score).
        ZREVRANK returns a 0-indexed rank (highest score = 0).
        Time: O(log N)
        """
        key = self._key(leaderboard_id)
        rank = self.r.zrevrank(key, str(player_id))
        return (rank + 1) if rank is not None else None

    def get_score(self, leaderboard_id: str, player_id: int) -> Optional[int]:
        """Get a player's current score. Time: O(1)"""
        score = self.r.zscore(self._key(leaderboard_id), str(player_id))
        return int(score) if score is not None else None

    def get_top_k(
        self,
        leaderboard_id: str,
        k: int = 100,
    ) -> List[Tuple[int, int, int]]:
        """
        Get the top-K players with their scores and ranks.
        Returns: [(rank, player_id, score), ...]
        Time: O(log N + K)
        """
        key = self._key(leaderboard_id)
        results = self.r.zrevrange(key, 0, k - 1, withscores=True)
        return [
            (rank + 1, int(player_id), int(score))
            for rank, (player_id, score) in enumerate(results)
        ]

    def get_players_around(
        self,
        leaderboard_id: str,
        player_id: int,
        window: int = 5,
    ) -> List[Tuple[int, int, int]]:
        """
        Get players within ±window ranks of the given player.
        Used for the "players near you" feature.
        Time: O(log N + 2*window)
        """
        key = self._key(leaderboard_id)
        rank_0idx = self.r.zrevrank(key, str(player_id))
        if rank_0idx is None:
            return []
        start = max(0, rank_0idx - window)
        end = rank_0idx + window
        results = self.r.zrevrange(key, start, end, withscores=True)
        return [
            (start + i + 1, int(pid), int(score))
            for i, (pid, score) in enumerate(results)
        ]

    def get_total_players(self, leaderboard_id: str) -> int:
        """Total number of players on the leaderboard. Time: O(1)"""
        return self.r.zcard(self._key(leaderboard_id))

    def percentile_rank(
        self,
        leaderboard_id: str,
        player_id: int,
    ) -> Optional[float]:
        """
        Player's percentile: the share of players they outrank.
        A return of 95.0 means "You're in the top 5% of players!"
        """
        rank = self.get_rank(leaderboard_id, player_id)
        total = self.get_total_players(leaderboard_id)
        if rank is None or total == 0:
            return None
        return round((1 - rank / total) * 100, 1)
```
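To sanity-check the rank arithmetic without a live Redis, here is a pure-Python model of the ZSET semantics the service relies on. `FakeZSet` is a hypothetical stand-in written for this article, not part of redis-py; it mirrors ZADD-with-GT, ZREVRANK, and ZREVRANGE behavior on a plain dict.

```python
from typing import Dict, List, Optional, Tuple


class FakeZSet:
    """Minimal in-memory model of a Redis sorted set (illustrative only)."""

    def __init__(self) -> None:
        self.scores: Dict[str, int] = {}

    def zadd_gt(self, member: str, score: int) -> None:
        # Mirrors ZADD ... GT: only update if the new score is greater
        if score > self.scores.get(member, float("-inf")):
            self.scores[member] = score

    def _sorted(self) -> List[Tuple[str, int]]:
        # Highest score first; ties broken by member name for determinism
        return sorted(self.scores.items(), key=lambda kv: (-kv[1], kv[0]))

    def zrevrank(self, member: str) -> Optional[int]:
        for i, (m, _) in enumerate(self._sorted()):
            if m == member:
                return i
        return None

    def zrevrange(self, start: int, end: int) -> List[Tuple[str, int]]:
        return self._sorted()[start:end + 1]  # end is inclusive, as in Redis


zset = FakeZSet()
zset.zadd_gt("alice", 500)
zset.zadd_gt("bob", 900)
zset.zadd_gt("alice", 300)  # ignored: GT means scores never go backward

assert zset.zrevrank("bob") == 0         # 0-indexed: rank 1 overall
assert zset.zrevrank("alice") + 1 == 2   # convert to the 1-indexed rank
assert zset.zrevrange(0, 1) == [("bob", 900), ("alice", 500)]
```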
Write Path: Score Update Pipeline
"""
Score Update Architecture:
Game Server → [Score Update API] → [Kafka Topic: score-events]
|
┌──────────┴──────────────┐
│ │
[Redis Consumer] [Postgres Consumer]
(real-time rank) (persistent storage)
| |
Redis ZSET PostgreSQL table
(fast reads, p99 None:
"""
Process a score update event.
event: {player_id, leaderboard_id, new_score, event_type}
event_type: 'set' | 'increment'
"""
if event['event_type'] == 'set':
self.lb.update_score(
event['leaderboard_id'],
event['player_id'],
event['new_score']
)
elif event['event_type'] == 'increment':
self.lb.increment_score(
event['leaderboard_id'],
event['player_id'],
event['delta']
)
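A property worth calling out in an interview: 'set' events applied with GT semantics tolerate Kafka replays and reordering, while 'increment' events do not. This pure-Python sketch (no Kafka or Redis needed; `apply_event` is a hypothetical helper written for illustration) demonstrates the difference.

```python
from typing import Dict


def apply_event(scores: Dict[int, int], event: dict) -> None:
    pid = event["player_id"]
    if event["event_type"] == "set":
        # Mirrors ZADD ... GT: replaying or reordering is harmless
        scores[pid] = max(scores.get(pid, 0), event["new_score"])
    elif event["event_type"] == "increment":
        # Mirrors ZINCRBY: replaying the same event double-counts
        scores[pid] = scores.get(pid, 0) + event["delta"]


scores: Dict[int, int] = {}
events = [
    {"player_id": 1, "event_type": "set", "new_score": 800},
    {"player_id": 1, "event_type": "set", "new_score": 800},   # replay: no effect
    {"player_id": 2, "event_type": "increment", "delta": 50},
    {"player_id": 2, "event_type": "increment", "delta": 50},  # replay: double-counts
]
for e in events:
    apply_event(scores, e)

assert scores[1] == 800  # idempotent under replay
assert scores[2] == 100  # 50 counted twice: increments need exactly-once delivery
```

This is why accumulating-score games usually pair ZINCRBY with deduplication (e.g. an event ID check) on the consumer side.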
Scaling Beyond a Single Redis Node
"""
At 10M players: single Redis node handles it (ZSET ~300MB, fast ops).
At 100M players: need sharding.
Sharding strategy: score-range based sharding
- Shard 0: scores 0–999,999
- Shard 1: scores 1,000,000–1,999,999
- etc.
Tradeoff:
- Most players are in lower score ranges (hotspot on shard 0)
- Need "scatter-gather" for global top-K: query all shards, merge results
Better: player-id based sharding for writes, read replicas for reads
- Each shard owns a subset of player IDs
- For top-K: query all shards in parallel, merge sorted results (K-way merge)
Read replicas:
- Redis Sentinel or Redis Cluster replication
- Multiple read replicas for top-K queries (high QPS from leaderboard pages)
- Eventual consistency: replica may lag 1-2ms behind primary (acceptable)
Caching top-K:
- Cache top-100 globally every 60 seconds (most users only care about top)
- Cache invalidated on any change in top 100 (via Redis keyspace notifications)
- For real-time top-K: skip cache, query primary directly (1K QPS threshold)
"""
Weekly Leaderboard Reset
```python
class WeeklyLeaderboardManager:
    """
    Manage weekly leaderboard rotation without downtime.

    Key requirements:
    1. New week starts at midnight Sunday UTC
    2. Previous week's leaderboard preserved for 4 weeks (prizes, disputes)
    3. Zero downtime during transition
    4. Notify players of final rankings before reset
    """

    def __init__(self, redis_client, leaderboard_service: LeaderboardService):
        self.r = redis_client
        self.lb = leaderboard_service

    def rotate_weekly_leaderboard(self, iso_week: str) -> None:
        """
        Rotate to a new week. iso_week: e.g., '2026-W15'
        The old ZSET is renamed; the new one starts empty.
        Redis RENAME is atomic — zero downtime.
        """
        current_key = "lb:weekly_current"
        archive_key = f"lb:weekly_{iso_week}"
        # Archive the current week and keep it for 4 weeks (prizes, disputes)
        if self.r.exists(current_key):
            self.r.rename(current_key, archive_key)
            self.r.expire(archive_key, 28 * 24 * 3600)
        # The new week starts fresh: no ZADD needed, since ZADD/ZINCRBY
        # create the ZSET and the member on first write.
        print(f"Weekly leaderboard rotated. Archive: {archive_key}")
```
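The rotation above takes an `iso_week` string; one way to derive it (a sketch using the standard library's `isocalendar`, with a helper name invented for this article) is:

```python
from datetime import datetime, timezone
from typing import Optional


def current_iso_week(now: Optional[datetime] = None) -> str:
    """Build the archive-key suffix, e.g. '2026-W15', via ISO week numbering."""
    now = now or datetime.now(timezone.utc)
    iso = now.isocalendar()  # (ISO year, ISO week, ISO weekday)
    return f"{iso[0]}-W{iso[1]:02d}"


# ISO weeks run Monday–Sunday; April 8, 2026 falls in week 15
assert current_iso_week(datetime(2026, 4, 8, tzinfo=timezone.utc)) == "2026-W15"
```

Using the ISO year (not the calendar year) matters at year boundaries: the last days of December can belong to week 1 of the following ISO year, and vice versa.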
Key Design Trade-Offs Summary
| Decision | Choice | Rationale |
|---|---|---|
| Primary data structure | Redis Sorted Set | O(log N) insert/rank, O(1) score lookup |
| Persistence | Redis + Postgres dual write | Fast reads; durable storage for analytics |
| Write path | Kafka buffering | Handles bursts, multiple consumers, replay |
| Sharding | Player ID sharding | Even distribution; scatter-gather for top-K |
| Cache | Cache global top-100 | Most traffic is for top players; 60s freshness acceptable |
| Weekly reset | Redis RENAME (atomic) | Zero downtime transition |
Follow-Up Questions
- How would you design a friend leaderboard (rank among your 500 friends)?
- How do you prevent cheating / score manipulation?
- How would you add regional leaderboards (North America, Europe, Asia)?
- How do you handle players with the same score (tie-breaking)?
- The leaderboard needs to support 1 billion players. What changes?
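For the tie-breaking question, one common approach (an assumption for discussion, not part of the design above) is to pack a tiebreaker into the low bits of the ZSET score, so that among equal game scores, the player who reached the score first ranks higher:

```python
# Composite score: high bits = game score, low bits = inverted timestamp,
# so equal game scores rank the earlier achiever first.
TS_BITS = 32
TS_MAX = (1 << TS_BITS) - 1


def composite_score(score: int, unix_ts: int) -> int:
    # Caveat: Redis stores ZSET scores as 64-bit doubles, so the combined
    # value must stay under 2^53 to remain exact — fine for game scores
    # that fit in ~20 bits.
    return (score << TS_BITS) | (TS_MAX - unix_ts)


def game_score(combined: int) -> int:
    # Recover the original game score for display
    return combined >> TS_BITS


a = composite_score(1000, unix_ts=1_700_000_000)  # reached 1000 earlier
b = composite_score(1000, unix_ts=1_700_000_500)  # reached 1000 later
assert a > b                   # earlier player ranks higher on the tie
assert game_score(a) == 1000   # original score is still recoverable
```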
This design is also relevant for: Roblox experience leaderboards, Steam achievements, Google Play games, Fortnite Arena mode rankings. See our System Design interview library for more problems.