Infinite scroll is one of the most common machine-coding rounds at frontend interviews. The candidate has 45-60 minutes to build a list that loads more items as the user scrolls, with optional virtualization to keep the DOM lean. The exercise tests scroll-event handling, loading state management, and at senior+ level, virtualization techniques that prevent the page from grinding to a halt at thousands of rows.
This piece walks through the full implementation, the senior-level virtualization addition, and what interviewers grade.
The typical prompt
- “Build a feed component that loads more posts as the user scrolls.”
- “Implement an infinite-scroll list backed by this paginated API.”
- “Build a Twitter-like timeline. Show 20 items at a time; load more on scroll.”
- (Senior+) “Now make it work for 100,000 items without slowing down.”
What interviewers grade
- Working scroll loading. The list extends as the user scrolls.
- Loading state. Show a spinner or “loading…” while a request is in flight.
- End-of-data handling. When there are no more items, stop fetching and show an end indicator.
- Race condition handling. Don’t fire duplicate requests; don’t apply stale results.
- (Senior) Virtualization. Render only visible rows when the list is large.
- Accessibility. Screen readers must announce new content; focus management.
- Communication. Narrating decisions out loud.
Implementation: the basic version
Step 1: state and structure
import { useEffect, useRef, useState } from 'react';

function InfiniteList({ fetchPage }) {
  const [items, setItems] = useState([]);
  const [page, setPage] = useState(1);
  const [loading, setLoading] = useState(false);
  const [hasMore, setHasMore] = useState(true);
  const sentinelRef = useRef(null);

  return (
    <div>
      {items.map(item => <Item key={item.id} {...item} />)}
      {loading && <div className="loading">Loading...</div>}
      {!hasMore && <div className="end">You've reached the end</div>}
      <div ref={sentinelRef} /> {/* IntersectionObserver target */}
    </div>
  );
}
Step 2: IntersectionObserver-based loading
The modern approach replaces scroll-event handlers with IntersectionObserver, which lets the browser report visibility changes asynchronously instead of running your handler on every scrolled pixel.
useEffect(() => {
  if (!sentinelRef.current || !hasMore) return;
  const observer = new IntersectionObserver(
    entries => {
      if (entries[0].isIntersecting && !loading) {
        loadMore();
      }
    },
    { threshold: 0 }
  );
  observer.observe(sentinelRef.current);
  return () => observer.disconnect();
}, [hasMore, loading]);
async function loadMore() {
  if (loading || !hasMore) return;
  setLoading(true);
  try {
    const result = await fetchPage(page);
    if (result.items.length === 0) {
      setHasMore(false);
    } else {
      setItems(prev => [...prev, ...result.items]);
      setPage(p => p + 1);
    }
  } finally {
    setLoading(false);
  }
}
The IntersectionObserver fires when the sentinel scrolls into view, calling loadMore. The cleanup function disconnects the observer whenever the effect re-runs or the component unmounts.
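The component assumes fetchPage resolves to an object with an items array, where an empty array signals end-of-data. When practicing, a minimal in-memory stub works as the backend; the data set and page size below are illustrative:

```javascript
// Hypothetical paginated data source for exercising InfiniteList.
// Resolves { items: [...] } — the shape loadMore expects — and
// returns an empty array once the data runs out.
const PAGE_SIZE = 20;
const ALL_POSTS = Array.from({ length: 45 }, (_, i) => ({
  id: i + 1,
  text: `Post #${i + 1}`,
}));

async function fetchPage(page) {
  // Simulate network latency so loading states are visible.
  await new Promise(resolve => setTimeout(resolve, 10));
  const start = (page - 1) * PAGE_SIZE;
  return { items: ALL_POSTS.slice(start, start + PAGE_SIZE) };
}
```

With 45 items and a page size of 20, page 3 returns a short final page of 5 and page 4 returns an empty array, which flips hasMore to false.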
Step 3: race condition handling
The loadMore function above has a subtle race condition. setLoading(true) doesn't change the loading variable the observer callback closed over until the next render, so if the observer fires again before that render commits, the if (loading) guard still sees false and a second fetch starts in parallel.
A cleaner pattern: use a ref to track in-flight state synchronously.
const inFlightRef = useRef(false);

async function loadMore() {
  if (inFlightRef.current || !hasMore) return;
  inFlightRef.current = true;
  setLoading(true);
  try {
    const result = await fetchPage(page);
    // ... same as before
  } finally {
    setLoading(false);
    inFlightRef.current = false;
  }
}
The ref is updated synchronously, preventing the race. Senior candidates note this distinction; junior candidates often don’t.
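The same synchronous-flag idea extends to the other half of the race-condition requirement: not applying stale results. A latest-wins guard tags each request with a monotonically increasing ID and ignores any response that is no longer the newest. A framework-free sketch (the createLatestWins name is illustrative):

```javascript
// Latest-wins guard: each call bumps a shared counter synchronously
// (the same role a ref plays in React), and only the response whose
// ID still matches the counter is applied.
function createLatestWins(fetcher, apply) {
  let latestId = 0;
  return async function load(arg) {
    const id = ++latestId;
    const result = await fetcher(arg);
    if (id !== latestId) return false; // superseded by a newer call
    apply(result);
    return true;
  };
}
```

If a slow request is overtaken by a fast one, the slow response arrives with a stale ID and is silently dropped instead of overwriting newer data.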
The senior-level addition: virtualization
Once a list grows into the hundreds or thousands of rows, keeping every row mounted in the DOM hurts scroll performance and memory. Virtualization renders only the visible portion.
The simple virtualization recipe:
function VirtualList({ items, itemHeight, height, renderItem }) {
  const [scrollTop, setScrollTop] = useState(0);
  const containerRef = useRef(null);

  const visibleStart = Math.floor(scrollTop / itemHeight);
  const visibleCount = Math.ceil(height / itemHeight) + 2; // overscan
  const visibleEnd = Math.min(visibleStart + visibleCount, items.length);
  const totalHeight = items.length * itemHeight;
  const offsetY = visibleStart * itemHeight;

  return (
    <div
      ref={containerRef}
      onScroll={e => setScrollTop(e.currentTarget.scrollTop)}
      style={{ height, overflow: 'auto' }}
    >
      {/* Spacer with the full list height keeps the scrollbar honest */}
      <div style={{ height: totalHeight, position: 'relative' }}>
        <div style={{ transform: `translateY(${offsetY}px)` }}>
          {items.slice(visibleStart, visibleEnd).map((item, i) => (
            <div key={visibleStart + i} style={{ height: itemHeight }}>
              {renderItem(item)}
            </div>
          ))}
        </div>
      </div>
    </div>
  );
}
The container has a fixed height; inside, a spacer div has the full height of all items combined. Visible items are rendered in a translated wrapper. Only the visible items (plus a small overscan) live in the DOM at any time.
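The windowing arithmetic can be pulled out into a pure helper, which is easy to unit-test in the interview and keeps the component thin (the computeWindow name is illustrative):

```javascript
// Pure windowing math from VirtualList above: given scroll position,
// fixed row height, viewport height, and item count, compute which
// slice of items to render and how far to translate it.
function computeWindow(scrollTop, itemHeight, viewportHeight, itemCount) {
  const visibleStart = Math.floor(scrollTop / itemHeight);
  const visibleCount = Math.ceil(viewportHeight / itemHeight) + 2; // overscan
  return {
    start: visibleStart,
    end: Math.min(visibleStart + visibleCount, itemCount),
    offsetY: visibleStart * itemHeight,
    totalHeight: itemCount * itemHeight,
  };
}
```

For a 400px viewport with 40px rows, each window holds 12 rows (10 visible plus 2 overscan), and the end index is clamped so scrolling to the bottom never slices past the array.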
Senior candidates know the limitations: this pattern requires fixed-height items. Variable-height virtualization is meaningfully harder (TanStack Virtual or react-virtuoso solve it; rolling your own in 45 minutes is the senior+ stretch).
Common pitfalls
- Scroll-event handlers without throttling. An unthrottled handler runs on every scroll frame and janks the main thread. Prefer IntersectionObserver.
- No race condition handling. Multiple in-flight requests interleave incorrectly.
- Loading the next page before the current finishes. Causes double-loads.
- No end-of-data check. Infinite loop fetching empty pages.
- Forgetting to disconnect IntersectionObserver. Memory leak.
- Re-creating the observer on every render. Stale closures and observer churn.
- For virtualization: forgetting overscan. Causes flicker at scroll boundaries.
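If an interviewer does require a raw scroll handler (the first pitfall above), a throttle keeps it from running every frame. A minimal leading-edge sketch; production code would typically use lodash.throttle, which also handles trailing calls:

```javascript
// Minimal leading-edge throttle: fn runs at most once per `ms`.
// Calls inside the cooldown window are dropped, not queued.
function throttle(fn, ms) {
  let last = 0;
  return function throttled(...args) {
    const now = Date.now();
    if (now - last >= ms) {
      last = now;
      fn(...args);
    }
  };
}

// Usage sketch: container.addEventListener('scroll', throttle(onScroll, 100));
```

Because dropped calls are never replayed, pair this with a final check (or use a trailing-edge variant) so the handler still runs after scrolling stops.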
Stretch goals
- Pull-to-refresh on mobile.
- Bidirectional infinite scroll (load older items when scrolling up).
- Optimistic insertion (e.g., add a new post to the top instantly).
- Variable-height virtualization.
- Error state with retry.
Time budget for a 45-minute round
- 0-5 min: clarify requirements, ask about virtualization expectation.
- 5-15 min: basic structure with IntersectionObserver.
- 15-25 min: race condition handling, loading state, end-of-data.
- 25-35 min: virtualization (if asked).
- 35-40 min: accessibility (announce new content via ARIA live region).
- 40-45 min: error handling, stretch goals.
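For the accessibility step in the budget above, a visually hidden polite live region announces each appended page without stealing focus. A sketch; the message wording and function name are illustrative:

```javascript
// Render a polite live region alongside the list:
//   <div aria-live="polite" role="status" className="visually-hidden">
//     {announcement}
//   </div>
// After each successful page load, set `announcement` to a short
// summary so screen readers hear that new content arrived.
function formatLoadAnnouncement(loadedCount, totalCount, hasMore) {
  if (!hasMore && loadedCount === 0) {
    return `End of list, ${totalCount} posts total`;
  }
  return `Loaded ${loadedCount} more posts, ${totalCount} total`;
}
```

Keeping the announcement terse matters: a polite region queues behind the user's current reading position, so long messages become noise.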
Frequently Asked Questions
Should I use a library?
For the round, no. Build it from scratch. In production, use TanStack Virtual or react-virtuoso for serious virtualization needs.
IntersectionObserver vs scroll events?
IntersectionObserver is the modern default. More performant, doesn’t fire on every pixel of scroll. Some interviewers may still ask you to demonstrate scroll-event handling for legacy compatibility.
How do I handle the race condition cleanly?
A ref-based in-flight flag (synchronous) plus a state-based loading flag (for UI rendering). Two flags handle both purposes correctly.
Is virtualization always required?
For senior+ rounds, often yes if the prompt mentions large datasets. For junior or mid-level rounds, basic infinite scroll without virtualization is usually fine.
How does this round differ from autocomplete?
Different patterns. Autocomplete tests debouncing, race conditions on user input, keyboard nav. Infinite scroll tests loading state, scroll-driven loading, and (senior) virtualization.