The Core Problem
A follower feed must show a user the recent posts from everyone they follow, sorted by recency, with low latency on read. The fundamental tension is between write cost and read cost — two opposing architectures represent the extremes.
Fanout-on-Write
When a user publishes a post, the system immediately pushes the post_id into every follower's feed list. If Alice has 1,000 followers, publishing one post triggers 1,000 Redis writes. Feed reads are then O(1) — just fetch the pre-computed list.
Advantages: Feed reads are fast and cheap. No computation at read time.
Disadvantages: Write amplification proportional to follower count. A celebrity with 10M followers publishing a post causes 10M Redis writes — latency is unacceptable and write throughput is a bottleneck. Unfollow cleanup is also complex.
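The push model can be sketched in a few lines. This is a minimal in-memory stand-in (a real system would issue Redis writes); the names `feeds`, `followers_of`, `publish`, and `read_feed` are illustrative, not from any real API:

```python
from collections import defaultdict

# In-memory stand-in for per-user feed lists: feeds[user_id] -> [(timestamp, post_id), ...]
feeds = defaultdict(list)

def publish(author_id, post_id, timestamp, followers_of):
    """Fanout-on-write: push the new post into every follower's feed.
    Write amplification = number of followers."""
    for follower_id in followers_of[author_id]:
        feeds[follower_id].append((timestamp, post_id))
        feeds[follower_id].sort(reverse=True)  # keep newest first

def read_feed(user_id, limit=20):
    """Cheap read: the feed is already materialized."""
    return [post_id for _, post_id in feeds[user_id][:limit]]
```

The `for` loop is exactly where the celebrity problem lives: its iteration count is the author's follower count.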
Fanout-on-Read
When a user loads their feed, the system queries all followees, fetches their recent posts, merges by timestamp, and returns the result. No pre-computation on write.
Advantages: Writes are instant — no fan-out. Always reflects the latest posts, including from recently followed accounts.
Disadvantages: Read is expensive. If a user follows 2,000 accounts, loading their feed requires 2,000 queries (or a large IN query), merging results, and sorting — all on the critical read path. Latency degrades for power users with large followee lists.
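The pull model's read path is a k-way merge of per-author timelines. A minimal sketch, assuming each author's post list is already sorted newest-first (`followees` and `posts_by_author` are hypothetical inputs standing in for the social-graph and post services):

```python
import heapq
import itertools

def read_feed_on_demand(user_id, followees, posts_by_author, limit=20):
    """Fanout-on-read: merge each followee's timeline by timestamp at read time.
    posts_by_author maps author_id -> [(timestamp, post_id), ...], newest first.
    heapq.merge does a lazy k-way merge, so we stop after `limit` items."""
    streams = [posts_by_author.get(f, []) for f in followees[user_id]]
    merged = heapq.merge(*streams, reverse=True)  # newest first across authors
    return [post_id for _, post_id in itertools.islice(merged, limit)]
```

The merge itself is cheap; the expensive part this sketch hides is fetching 2,000 followees' timelines before the merge can start.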
The Hybrid Approach
Production systems (Twitter, Instagram) use a hybrid: fanout-on-write for regular users, fanout-on-read for celebrities.
- Regular users (follower count < threshold, e.g., 100K): use fanout-on-write. Their posts are pushed to all followers' feed lists at publish time.
- Celebrity users (follower count ≥ threshold): skip the write fanout. When a follower loads their feed, the system fetches celebrity posts separately and merges them in at read time.
The merge at read time for celebrities is cheap because there are few celebrities in a user's followee list, and celebrity posts can be cached globally (not per-follower).
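The hybrid read path is then a two-way merge: the user's pre-computed feed plus the (few, globally cached) celebrity timelines. A sketch under the same newest-first convention; both inputs are assumed already fetched:

```python
import heapq
import itertools

def read_hybrid_feed(precomputed, celebrity_posts, limit=20):
    """Merge the materialized feed with celebrity posts pulled at read time.
    precomputed:     [(timestamp, post_id), ...] from the user's feed, newest first.
    celebrity_posts: same shape, from the global celebrity-post cache.
    """
    merged = heapq.merge(precomputed, celebrity_posts, reverse=True)
    return [post_id for _, post_id in itertools.islice(merged, limit)]
```

Because a user typically follows only a handful of celebrities, `celebrity_posts` stays small and the merge adds little latency to the read path.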
Feed Storage with Redis Sorted Sets
The pre-computed feed for each user is stored as a Redis sorted set:
Key: feed:{user_id}
Score: Unix timestamp of the post
Member: post_id (as string)
On fanout-on-write: ZADD feed:{follower_id} {timestamp} {post_id} for each follower. On feed read: ZREVRANGEBYSCORE feed:{user_id} +inf -inf LIMIT 0 20 to get the 20 most recent post IDs.
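To make the semantics concrete, here is a toy in-memory mirror of the two commands above. A real deployment would issue ZADD / ZREVRANGEBYSCORE against Redis (e.g., via redis-py); this stand-in only reproduces the behavior described in the text:

```python
import bisect

class MiniSortedSet:
    """Toy stand-in for one Redis sorted set (one feed:{user_id} key)."""

    def __init__(self):
        self._entries = []  # kept sorted ascending by (score, member)

    def zadd(self, score, member):
        """Like ZADD: insert, replacing any existing score for this member."""
        self._entries = [(s, m) for s, m in self._entries if m != member]
        bisect.insort(self._entries, (score, member))

    def zrevrangebyscore(self, max_score, min_score, offset=0, count=20):
        """Like ZREVRANGEBYSCORE key max min LIMIT offset count."""
        hits = [m for s, m in reversed(self._entries) if min_score <= s <= max_score]
        return hits[offset:offset + count]

feed = MiniSortedSet()
feed.zadd(100, "post:1")   # ZADD feed:{follower_id} 100 post:1
feed.zadd(200, "post:2")
# ZREVRANGEBYSCORE feed:{user_id} +inf -inf LIMIT 0 20
recent = feed.zrevrangebyscore(float("inf"), float("-inf"))
```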
Feed Pagination with Cursor
Offset-based pagination (LIMIT 20 OFFSET 40) breaks on feeds that are being updated in real time — new posts shift offsets. Use cursor-based pagination instead.
The cursor is the score (timestamp) of the last seen post. To fetch the next page: ZREVRANGEBYSCORE feed:{user_id} ({cursor} -inf LIMIT 0 20 — the ( prefix is Redis's exclusive-bound syntax, so the last-seen post is not repeated. This is stable even as new posts are inserted at the top of the sorted set. One caveat: any post whose score exactly equals the cursor is skipped along with it, so use millisecond timestamps (or fold the post ID into the score) to make collisions rare.
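Cursor paging can be sketched over a plain newest-first list; the strictly-less-than filter plays the role of the exclusive upper bound (`next_page` is an illustrative name, not a real API):

```python
def next_page(entries, cursor=None, page_size=20):
    """Cursor-based pagination over (timestamp, post_id) pairs, newest first.
    cursor is the score of the last post the client saw (None = first page).
    s < cursor mirrors ZREVRANGEBYSCORE key ({cursor} -inf (exclusive bound)."""
    if cursor is not None:
        entries = [(s, p) for s, p in entries if s < cursor]
    page = entries[:page_size]
    new_cursor = page[-1][0] if page else None
    return [p for _, p in page], new_cursor
```

Note that inserting a new post at the top of the list does not disturb a client mid-scroll, which is exactly the property offset pagination lacks.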
Feed Capacity and Trimming
Unbounded growth of feed sorted sets wastes Redis memory. Cap each feed at a maximum size (e.g., 1,000 entries). After each ZADD, run ZREMRANGEBYRANK feed:{user_id} 0 -1001 to trim entries beyond the cap. Users who scroll past 1,000 items fall back to a database query (acceptable edge case).
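The write-then-trim step looks like this in the same in-memory sketch (a real system would pipeline ZADD with ZREMRANGEBYRANK feed:{user_id} 0 -1001; the cap here is a parameter so the test can use a small one):

```python
import bisect

def zadd_and_trim(feed, score, member, cap=1000):
    """Insert, then drop everything below the top `cap` scores — the
    in-memory analogue of ZADD followed by ZREMRANGEBYRANK key 0 -(cap+1).
    feed is a list of (score, member) pairs kept sorted ascending."""
    bisect.insort(feed, (score, member))
    del feed[:-cap]  # remove lowest-ranked entries beyond the cap
    return feed
```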
Unfollow Cleanup
When a user unfollows someone, their pre-computed feed may still contain that person's posts. Two approaches:
- Eager cleanup: remove the unfollowed user's posts from the feed sorted set on unfollow. Expensive if that user has many recent posts.
- Lazy filtering: keep the posts in the feed but filter them at read time by checking if a follow relationship still exists. Cheaper on write, small overhead on read. Most production systems use this approach.
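Lazy filtering is a one-pass check on the read path. A sketch assuming the feed service can look up a post's author and the reader's current followee set (`author_of` and `current_followees` are hypothetical inputs standing in for the post and social-graph services):

```python
def read_with_lazy_filter(feed_post_ids, author_of, current_followees, limit=20):
    """Drop stale entries at read time: keep a post only if its author is
    still followed. Stale entries stay in storage until trimming evicts them."""
    page = []
    for post_id in feed_post_ids:
        if author_of[post_id] in current_followees:
            page.append(post_id)
        if len(page) == limit:
            break
    return page
```

One subtlety: because filtering happens after the Redis fetch, a page can come back short; production read paths typically over-fetch (e.g., request 30 IDs to fill a page of 20).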
Feed Hydration
The feed sorted set stores only post_id values, not full post content. After fetching post IDs from Redis, the feed service must hydrate them by calling the post service (or hitting a post cache) to retrieve title, author, media, like counts, etc. Use a multi-get (pipeline or batch fetch) to retrieve all post details in one round trip. Post data is typically cached in Redis or Memcached with a short TTL.
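The hydration step can be sketched as one batch cache lookup plus one batch fallback for misses (in Redis the cache multi-get would be MGET or a pipeline; `post_cache` and `fetch_from_db` are illustrative stand-ins for the post cache and post service):

```python
def hydrate(post_ids, post_cache, fetch_from_db):
    """Turn a page of post IDs into full post objects with two batch calls:
    one multi-get against the cache, one batch DB fetch for the misses."""
    found = {pid: post_cache.get(pid) for pid in post_ids}
    misses = [pid for pid, post in found.items() if post is None]
    if misses:
        found.update(fetch_from_db(misses))  # backfill misses in one round trip
    return [found[pid] for pid in post_ids]  # preserve feed order
```

Keeping the result in the original `post_ids` order matters: the feed's recency ordering comes from the sorted set, not from the cache.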
Feed Freshness for Celebrity Posts
Under the hybrid model, celebrity posts are not in a follower's pre-computed feed. They are fetched and merged at read time. This introduces a small latency overhead but is acceptable. Celebrity posts can be cached at the global level (not per-follower) since all followers receive the same posts. Cache with a TTL of 30–60 seconds.
Summary
- Fanout-on-write for regular users, fanout-on-read for celebrities — determined by follower count threshold.
- Redis sorted set per user, keyed by feed:{user_id}, scored by timestamp.
- Cursor-based pagination using score as cursor (not offset).
- Cap feeds at 1,000 entries, trim on each write.
- Lazy unfollow filtering at read time.
- Hydrate post IDs via batch fetch from post cache on read.