Coding Interview: System Design Estimation Cheat Sheet — Powers of 2, Latency Numbers, Throughput, Storage

Back-of-envelope estimation is expected in every system design interview. Having key numbers memorized — latency, throughput, storage sizes, and powers of two — lets you make quick calculations that ground your design in reality. This cheat sheet compiles every number you need, organized for fast reference during interview prep and the interview itself.

Powers of Two

Memorize these:
- 2^10 = 1,024 (~1 thousand, 1 KB)
- 2^20 = 1,048,576 (~1 million, 1 MB)
- 2^30 = 1,073,741,824 (~1 billion, 1 GB)
- 2^40 ~ 1 trillion (1 TB)
- 2^50 ~ 1 quadrillion (1 PB)

Quick conversions (powers of 10 are close enough for estimation): 1 KB = 10^3 bytes, 1 MB = 10^6, 1 GB = 10^9, 1 TB = 10^12, 1 PB = 10^15; likewise 1 million = 10^6 ~ 2^20 and 1 billion = 10^9 ~ 2^30.

Common item sizes:
- ASCII: 1 byte per character; UTF-8: 1-4 bytes (English text is mostly 1 byte per character)
- A typical English word: 5 characters = 5 bytes
- A tweet (280 characters): ~280 bytes
- A UUID: 16 bytes in binary, 36 bytes as a hyphenated string
- A timestamp: 8 bytes (64-bit Unix epoch)
- An IPv4 address: 4 bytes; an IPv6 address: 16 bytes
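Since decimal prefixes are close enough for interviews, a tiny helper is handy for sanity-checking size math while practicing. A minimal Python sketch (the function name and formatting are my own choices, not from any standard library):

```python
def humanize(n_bytes: float) -> str:
    """Render a byte count using decimal units (steps of 10^3)."""
    for unit in ("B", "KB", "MB", "GB", "TB", "PB"):
        if n_bytes < 1000:
            return f"{n_bytes:.0f} {unit}"
        n_bytes /= 1000
    return f"{n_bytes:.0f} EB"

# 1 billion users, each storing a 36-byte UUID string:
print(humanize(1_000_000_000 * 36))  # → 36 GB
```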

Latency Numbers Every Programmer Should Know

- L1 cache reference: 1 ns
- L2 cache reference: 4 ns
- Main memory reference: 100 ns
- SSD random read: 16 microseconds (16,000 ns)
- HDD random read: 2 ms (2,000,000 ns)
- Sequential read of 1 MB: memory ~0.003 ms, SSD ~0.05 ms, HDD ~2 ms
- Network round trip, same datacenter: 0.5 ms
- Network round trip, US East to West coast: 40 ms; US to Europe: 80 ms; US to Asia: 150 ms

Key insight: memory is roughly 100x faster than SSD, SSD is roughly 100x faster than HDD, and a same-datacenter round trip is roughly 100x faster than a cross-continent one. These ratios guide architecture decisions: cache in memory to avoid SSD/HDD reads, use a CDN to avoid cross-continent latency, and put latency-sensitive databases on SSD rather than HDD.
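The 100x ratios fall straight out of the table above. A small Python sketch using this section's figures (the dictionary keys and helper name are illustrative):

```python
# Latency figures (nanoseconds) from the cheat sheet above.
LATENCY_NS = {
    "L1 cache": 1,
    "L2 cache": 4,
    "main memory": 100,
    "SSD random read": 16_000,
    "HDD random read": 2_000_000,
    "same-DC round trip": 500_000,
    "US-Europe round trip": 80_000_000,
}

def slowdown(fast: str, slow: str) -> float:
    """How many times slower `slow` is than `fast`."""
    return LATENCY_NS[slow] / LATENCY_NS[fast]

print(slowdown("main memory", "SSD random read"))     # → 160.0
print(slowdown("SSD random read", "HDD random read")) # → 125.0
```

Both come out near the "roughly 100x" rule of thumb, which is all the precision an interview needs.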

Throughput Numbers

- Database: a single PostgreSQL or MySQL instance handles 5,000-50,000 simple queries/sec, depending on query complexity and hardware. Read replicas multiply read throughput roughly linearly.
- Cache: Redis handles 100,000-300,000 operations/sec on a single instance; Memcached: 200,000-500,000 ops/sec.
- Message queue: Kafka handles 1-10 million messages/sec per cluster; SQS: effectively unlimited (managed scaling); RabbitMQ: 10,000-50,000 messages/sec.
- Web server: a single Nginx instance serves 10,000-100,000 requests/sec of static content. A single application server (Node.js, Go, Java) handles 1,000-10,000 requests/sec depending on processing complexity.
- Network: a standard server NIC is 1 Gbps (125 MB/sec) or 10 Gbps (1.25 GB/sec). A 10 Gbps NIC saturates at approximately 10,000 responses/sec with 100 KB responses.
- DNS: a single authoritative DNS server handles 50,000-200,000 queries/sec.
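Turning these throughput figures into a machine count is the usual next step: divide peak RPS by per-server capacity, leaving some headroom. A hedged Python sketch (the 70% headroom factor and function names are my own assumptions, not an industry constant):

```python
import math

def servers_needed(peak_rps: float, per_server_rps: float,
                   headroom: float = 0.7) -> int:
    """Servers required if each runs at `headroom` of its max throughput."""
    return math.ceil(peak_rps / (per_server_rps * headroom))

def nic_saturation_rps(nic_gbps: float, response_kb: float) -> float:
    """Responses/sec at which a NIC is fully saturated (decimal units)."""
    return nic_gbps * 1e9 / 8 / (response_kb * 1e3)

print(servers_needed(30_000, 5_000))  # → 9
print(nic_saturation_rps(10, 100))    # → 12500.0
```

The exact NIC figure (12,500/sec for 100 KB responses on 10 Gbps) lands near the ~10,000/sec rule of thumb above; protocol overhead eats the difference in practice.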

Storage Sizes

- Text: a tweet (280 chars) = ~280 bytes; a typical database row = 100-1,000 bytes; a JSON API response = 1-10 KB; a web page (HTML) = 50-200 KB.
- Images: a compressed JPEG = 100 KB - 5 MB; a thumbnail = 5-50 KB; an avatar/profile picture = 10-100 KB.
- Audio: a 3-minute MP3 = 3-5 MB; a 3-minute high-quality OGG = 5-10 MB.
- Video: 1 minute of 720p = 50-100 MB; 1080p = 100-200 MB; 4K = 300-500 MB; 1 hour of HD video = 3-6 GB.
- Logs: a log entry = 100-500 bytes; a server generates 1-10 GB of logs per day.
- Quick estimates: 1 million users * 1 KB per record = 1 GB. 1 billion users * 1 KB = 1 TB. 100 million images * 1 MB = 100 TB. 10 million videos * 1 GB = 10 PB.
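The quick estimates are just multiplication plus a unit conversion, which is worth making mechanical. A small Python sketch using decimal units as in this section (helper name is illustrative):

```python
KB, MB, GB, TB, PB = 1e3, 1e6, 1e9, 1e12, 1e15

def total_storage(item_count: float, item_size_bytes: float) -> str:
    """Total raw storage for `item_count` items, in the largest sensible unit."""
    total = item_count * item_size_bytes
    for unit, name in ((PB, "PB"), (TB, "TB"), (GB, "GB"), (MB, "MB")):
        if total >= unit:
            return f"{total / unit:g} {name}"
    return f"{total:g} bytes"

print(total_storage(1e6, 1 * KB))  # → 1 GB
print(total_storage(1e8, 1 * MB))  # → 100 TB
print(total_storage(1e7, 1 * GB))  # → 10 PB
```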

Time Conversions

- Seconds per day: 86,400 (round to 100,000 for estimation)
- Seconds per month: ~2.6 million (round to 2.5 million)
- Seconds per year: ~31.5 million (round to 30 million)
- Requests per second from daily volume: daily_requests / 86,400. Example: 100 million requests per day = 100M / 100K = 1,000 RPS.
- Peak multiplier: 2-5x average, so 1,000 average RPS means a peak of 2,000-5,000 RPS.
- QPS (queries per second) = RPS in database contexts.
- Data growth per year: daily_writes * 365. Example: 10 GB/day * 365 = 3.65 TB/year; over 5 years, 18.25 TB; with 3x replication, ~55 TB.

These conversions should be automatic during the interview. Practice converting between daily/monthly/yearly volumes and per-second rates.
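These conversions are easy to script while practicing. A minimal Python sketch of the formulas in this section (the default peak multiplier of 3x is one assumption within the stated 2-5x range):

```python
SECONDS_PER_DAY = 86_400

def avg_rps(daily_requests: float) -> float:
    """Average requests/sec from a daily request count."""
    return daily_requests / SECONDS_PER_DAY

def peak_rps(daily_requests: float, peak_multiplier: float = 3.0) -> float:
    """Peak requests/sec, assuming peak traffic is a multiple of average."""
    return avg_rps(daily_requests) * peak_multiplier

def yearly_storage_gb(daily_write_gb: float, replication: int = 3) -> float:
    """Raw storage growth per year including replication."""
    return daily_write_gb * 365 * replication

print(round(avg_rps(100e6)))   # → 1157
print(round(peak_rps(100e6)))  # → 3472
print(yearly_storage_gb(10))   # → 10950.0
```

Note the exact figure (1,157 RPS) versus the rounded estimate (1,000 RPS): the difference never matters in an interview, which is why rounding 86,400 to 100,000 is safe.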

Estimation Template

Use this for every system design question:
1. State assumptions: DAU, actions per user, read:write ratio, record size.
2. Traffic: RPS = DAU * actions / 86,400. Peak = RPS * 3. Split into read and write RPS.
3. Storage: daily = writes/day * record_size. Total = daily * 365 * years * replication.
4. Bandwidth: outbound = read_RPS * response_size; inbound = write_RPS * request_size.
5. Cache: hot_data = daily_unique_reads * record_size * 20% (Pareto principle).
6. Infrastructure decisions: a CDN if outbound > 1 Gbps; a cache if read RPS > DB capacity; sharding if storage exceeds a single server or write RPS > DB capacity; async processing if processing time > acceptable latency.

Show your math on the whiteboard and round aggressively: 86,400 becomes 100,000. The interviewer evaluates the process and the reasonableness of the results, not exact numbers. Spend 3-5 minutes maximum on estimation before moving to architecture.
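As a worked example, here is the template applied end-to-end to a hypothetical photo-sharing app. Every input (10M DAU, 20 reads and 2 uploads per user per day, 500 KB per photo) is an illustrative assumption, not a benchmark:

```python
# Step 1: assumptions (all hypothetical).
DAU = 10_000_000
READS_PER_USER, WRITES_PER_USER = 20, 2
RECORD_BYTES = 500_000            # ~500 KB per uploaded photo
PEAK, REPLICATION, YEARS = 3, 3, 5

# Step 2: traffic.
read_rps = DAU * READS_PER_USER / 86_400
write_rps = DAU * WRITES_PER_USER / 86_400

# Step 3: storage.
daily_storage_gb = DAU * WRITES_PER_USER * RECORD_BYTES / 1e9
total_storage_tb = daily_storage_gb * 365 * YEARS * REPLICATION / 1e3

# Step 4: bandwidth (outbound dominates for a read-heavy app).
outbound_gbps = read_rps * RECORD_BYTES * 8 / 1e9

# Step 5: cache sizing (20% of daily reads are "hot").
hot_cache_gb = DAU * READS_PER_USER * RECORD_BYTES * 0.2 / 1e9

print(f"read RPS ~{read_rps:,.0f}, peak ~{read_rps * PEAK:,.0f}")
print(f"write RPS ~{write_rps:,.0f}")
print(f"storage over {YEARS} years ~{total_storage_tb:,.0f} TB")
print(f"outbound ~{outbound_gbps:.1f} Gbps")  # > 1 Gbps: step 6 says use a CDN
print(f"hot cache ~{hot_cache_gb:,.0f} GB")
```

The outbound figure alone (well over 1 Gbps) triggers the CDN decision in step 6, which is exactly the kind of conclusion the estimation is supposed to produce.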
