Social Network Feed: Low-Level Design

A social network feed aggregates posts from accounts a user follows, ranked by relevance or recency. At Instagram or Twitter scale (billions of posts per day, hundreds of millions of users), feed generation is one of the hardest distributed systems problems: naive approaches fail due to the write amplification of fan-out (one post by a celebrity triggers millions of writes) and the read complexity of fan-in (aggregating from thousands of followed accounts).

Pull vs. Push Feed Generation

Pull (fan-in on read): when a user opens their feed, query each account they follow for recent posts, then merge and rank the results. Simple — no precomputation needed. Problem: if a user follows 1,000 accounts, every feed load requires 1,000 database queries merged in memory. With N concurrent users, this is O(N × follows) database operations — untenable at scale.

Push (fan-out on write): when a user posts, write the post to every follower’s feed inbox. Each user has a pre-built feed ready to read in O(1). Problem: a celebrity with 10M followers generates 10M writes per post. Instagram avoids this by not pushing posts from accounts with > 1M followers (celebrities) — those are pulled at read time.

Hybrid: push for regular accounts, pull for high-follower accounts.
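The hybrid read path can be sketched in a few lines. This is a minimal in-memory model — the dictionaries stand in for the Redis inbox, the celebrity post store, and the follow graph, and all names here (`PREBUILT_INBOX`, `CELEBRITY_POSTS`, `read_feed`, the 1M-follower celebrity cutoff) are illustrative assumptions, not a real API:

```python
import heapq

# Hypothetical in-memory stand-ins; in production these would be Redis
# sorted sets (pushed inbox) and per-author queries (pulled celebrity posts).
PREBUILT_INBOX = {}   # user_id -> list of (score, post_id), written at fan-out time
CELEBRITY_POSTS = {}  # author_id -> list of (score, post_id), pulled at read time
FOLLOWS = {}          # user_id -> set of followed author_ids
CELEBRITIES = set()   # author_ids above the push threshold (e.g. > 1M followers)

def read_feed(user_id, limit=50):
    """Hybrid read: merge the pushed inbox with pulled celebrity posts."""
    candidates = list(PREBUILT_INBOX.get(user_id, []))
    # Pull model only for followed accounts that are too big to push.
    for author in FOLLOWS.get(user_id, set()) & CELEBRITIES:
        candidates.extend(CELEBRITY_POSTS.get(author, []))
    # Highest score first; nlargest avoids sorting the full candidate set.
    return [post_id for _, post_id in heapq.nlargest(limit, candidates)]
```

The key property is that the expensive pull work is proportional to the (small) number of celebrities a user follows, not to their total follow count.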

Feed Inbox Architecture

Each user has a feed inbox — a sorted list of post IDs ordered by publish time or relevance score. Store it in Redis as a sorted set: ZADD feed:{user_id} {timestamp} {post_id}. On post creation: fan out the post_id to all followers’ sorted sets via async workers. On feed read: ZREVRANGE feed:{user_id} 0 49 (the top 50 posts by score — Redis ranges are inclusive). Use ZREVRANGEBYSCORE with the last-seen score as the cursor for pagination. The inbox stores only post IDs — the full post content is fetched from the post store by ID (potentially cached). This keeps the inbox compact (post IDs are 8 bytes each; 1,000 posts = 8KB per user).
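The sorted-set semantics the inbox relies on can be modeled without a Redis server. The class below is a toy stand-in (the name `FeedInbox` and the 1,000-entry cap are assumptions) that mimics the ZADD / ZREVRANGE behavior described above, including the size cap that keeps each inbox bounded:

```python
import bisect

class FeedInbox:
    """Toy stand-in for a Redis sorted set holding one user's feed inbox.
    Stores (score, post_id) pairs in ascending order, capped in size."""

    def __init__(self, max_size=1000):
        self.entries = []          # sorted ascending by (score, post_id)
        self.max_size = max_size

    def zadd(self, score, post_id):
        bisect.insort(self.entries, (score, post_id))
        if len(self.entries) > self.max_size:
            # Evict the lowest-scored post (ZREMRANGEBYRANK feed:{uid} 0 0).
            self.entries.pop(0)

    def zrevrange(self, start, stop):
        # Like ZREVRANGE: highest score first, inclusive indices,
        # so zrevrange(0, 49) returns up to 50 post IDs.
        rev = self.entries[::-1]
        return [post_id for _, post_id in rev[start:stop + 1]]
```

In production the cap matters: without trimming, a 10-year-old account's inbox grows without bound, and the 8KB-per-user arithmetic above breaks down.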

Ranking

Chronological feeds (newest first) are simple but produce lower engagement than ranked feeds. Ranked feeds compute a relevance score per post: score = f(recency, engagement, user_affinity_to_author, content_type_preference). Recency decay: score decreases as the post ages (exponential decay — a post from 10 minutes ago scores higher than an identical post from yesterday). Engagement: likes, comments, shares boost score. User affinity: posts from close friends score higher than posts from distant accounts. This scoring happens during fan-out: when pushing post_id to a follower’s sorted set, compute and store the relevance score as the sorted set score. Periodically re-rank the inbox as engagement signals update.
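A scoring function of this shape might look like the sketch below. The weights, the 6-hour half-life, and the multiplicative form are illustrative assumptions — real systems learn these parameters from engagement data — but the structure (engagement weighted by affinity, then exponentially decayed by age) matches the description above:

```python
# Illustrative parameters; production systems learn these from data.
HALF_LIFE_HOURS = 6.0
W_LIKE, W_COMMENT, W_SHARE = 1.0, 3.0, 5.0

def relevance_score(age_hours, likes, comments, shares, affinity):
    """Hypothetical relevance score: engagement signals weighted by
    user-author affinity, decayed exponentially with post age."""
    engagement = 1.0 + W_LIKE * likes + W_COMMENT * comments + W_SHARE * shares
    # Exponential recency decay: score halves every HALF_LIFE_HOURS.
    decay = 0.5 ** (age_hours / HALF_LIFE_HOURS)
    return engagement * affinity * decay
```

This is the value stored as the sorted-set score at fan-out time; periodic re-ranking recomputes it as the engagement counts change.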

Handling Celebrities (High Fan-Out)

A user with 10M followers creates a fan-out of 10M write operations per post. Async workers with a message queue handle this: publish the post_id to a Kafka topic; fan-out workers consume it and write to follower inboxes at a rate that does not overwhelm the Redis cluster. A 10M-follower post might take 30-60 seconds to fully fan out — acceptable because followers are reading their already-built inboxes, not waiting for the fan-out. For extreme celebrities: merge celebrity posts at read time (pull model) rather than pushing. On feed read, fetch the user’s pre-built inbox plus the last 50 posts from accounts with > 1M followers that the user follows, then merge and rank.
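A fan-out worker's inner loop can be sketched as draining follower IDs in fixed-size batches. This is a simplified in-process model — the function name, the 500-entry batch size, and the plain-dict inbox store are assumptions; in production the queue is a Kafka partition and each batch becomes one pipelined set of ZADDs:

```python
from collections import deque

def fan_out(post_id, score, follower_ids, inboxes, batch_size=500):
    """Sketch of a fan-out worker: write one post into many follower
    inboxes in batches, so no single post monopolizes the inbox store.
    Returns the number of batches issued (a proxy for pipeline round-trips)."""
    pending = deque(follower_ids)
    batches = 0
    while pending:
        batch = [pending.popleft() for _ in range(min(batch_size, len(pending)))]
        for follower in batch:
            # In production: one ZADD feed:{follower} {score} {post_id}
            # per follower, grouped into a single Redis pipeline per batch.
            inboxes.setdefault(follower, []).append((score, post_id))
        batches += 1
    return batches
```

Batching is what makes rate limiting possible: the worker can sleep or yield between batches to keep the Redis cluster within its write budget.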

Cache Warm/Cold Problem

The Redis inbox expires (TTL) for inactive users. When an inactive user returns after 30 days, their inbox is cold — no pre-built feed. On cache miss: fall back to the pull model (query each followed account for recent posts, merge), rebuild the inbox from the result, and cache it with a fresh TTL. This fallback is expensive but acceptable for infrequent cold-start users — they represent a small fraction of traffic. Monitor cold-start rate as a metric — a spike indicates a problem (mass cache eviction, outage) rather than normal inactive users returning.
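The cold-start fallback is a one-shot version of the pull model. In the sketch below, `recent_posts_by_author` stands in for the per-author "recent posts" queries, and the function name and 1,000-post limit are assumptions; the result is what gets written back to Redis with a fresh TTL:

```python
import heapq

def rebuild_inbox(user_id, follows, recent_posts_by_author, limit=1000):
    """Cold-start fallback: pull recent posts from every followed account,
    merge, and return the top entries by score (highest first). In
    production the result is cached back into the Redis inbox with a TTL."""
    merged = []
    for author in follows.get(user_id, set()):
        # Stand-in for a per-author query against the post store.
        merged.extend(recent_posts_by_author.get(author, []))
    # Keep only the top `limit` entries to match the inbox size cap.
    return heapq.nlargest(limit, merged)
```

Note the cost profile: this does O(follows) queries for one user, which is exactly the pull-model cost the push architecture avoids — fine as a rare fallback, pathological if the cold-start rate spikes.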
