Performance interviews for senior+ frontend roles increasingly center on Core Web Vitals, Google’s set of three metrics that quantify page experience. Candidates are expected to articulate what each metric measures, where it sits in the rendering pipeline, and how to optimize it. Generic answers about “making the page faster” do not score well; precise explanations of LCP, INP, and CLS, plus the techniques to improve each, are the bar.
This piece covers what each metric is, the optimization techniques, and what senior+ interviewers actually probe.
The three Core Web Vitals
| Metric | What it measures | Good threshold |
|---|---|---|
| LCP (Largest Contentful Paint) | Time until largest visible content element renders | ≤ 2.5 seconds |
| INP (Interaction to Next Paint) | Latency between user interaction and next visible response | ≤ 200 ms |
| CLS (Cumulative Layout Shift) | Largest burst of unexpected layout shifts during the page lifecycle | ≤ 0.1 |
INP replaced FID (First Input Delay) in 2024 as a core metric because INP captures interaction latency throughout the page lifecycle, not just the first interaction.
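The thresholds in the table can be sketched as a small classifier. The “good” boundaries come from the table above; the “poor” boundaries (4 s, 500 ms, 0.25) follow Google’s published ratings. The function and constant names here are illustrative, not from any library:

```javascript
// Classify a metric value against Google's published Core Web Vitals
// boundaries. Values between "good" and "poor" are "needs-improvement".
const THRESHOLDS = {
  LCP: { good: 2500, poor: 4000 }, // milliseconds
  INP: { good: 200, poor: 500 },   // milliseconds
  CLS: { good: 0.1, poor: 0.25 },  // unitless layout-shift score
};

function rateVital(name, value) {
  const t = THRESHOLDS[name];
  if (!t) throw new Error(`Unknown metric: ${name}`);
  if (value <= t.good) return 'good';
  if (value <= t.poor) return 'needs-improvement';
  return 'poor';
}
```

These are the same ratings the web-vitals library reports on its metric objects, so knowing the boundaries cold lets you sanity-check any tool’s output.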
LCP optimization
LCP is dominated by:
- Time to First Byte (server response).
- Resource load delay (when the browser discovers the LCP resource).
- Resource load duration (download time).
- Element render delay (paint after resource loads).
Optimization techniques:
- Reduce server response time. Server-side rendering, caching, CDN, edge functions.
- Preload the LCP element. If the LCP is an image, add <link rel="preload" as="image" href="..."> in the HTML head.
- Modern image formats. WebP and AVIF are smaller than JPEG and PNG. Use the picture element with type fallbacks.
- Responsive images. srcset and sizes attributes serve the right image for the viewport.
- Don’t lazy-load the LCP image. Lazy-loading the LCP defeats the optimization. Mark it loading="eager" or omit the attribute (which defaults to eager).
- Critical CSS inline. Inline the CSS needed for the above-the-fold content; defer the rest.
- Eliminate render-blocking JavaScript. Use defer or async on scripts; only inline truly critical JS.
- Font loading. Use font-display: swap; consider preloading critical fonts.
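Several of these techniques combine in the document head and hero markup. A minimal sketch (file paths and the font are placeholders):

```html
<head>
  <!-- Let the browser fetch the LCP image and critical font before the
       parser reaches them -->
  <link rel="preload" as="image" href="/hero.avif">
  <link rel="preload" as="font" type="font/woff2" href="/fonts/body.woff2" crossorigin>
</head>
<body>
  <!-- Modern format with fallbacks; explicit dimensions; eager-loaded
       and high-priority because it is the LCP element -->
  <picture>
    <source type="image/avif" srcset="/hero.avif">
    <source type="image/webp" srcset="/hero.webp">
    <img src="/hero.jpg" width="1200" height="600" alt="Hero"
         loading="eager" fetchpriority="high">
  </picture>
</body>
```

The fetchpriority="high" hint tells the browser to prioritize this fetch over other images, which pairs well with the preload.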
INP optimization
INP reports the slowest interaction observed across the page session (strictly, a high-percentile value for pages with many interactions). Each interaction’s latency has three components:
- Input delay (time before the event handler runs).
- Processing time (time the handler takes).
- Presentation delay (time to paint after the handler finishes).
Optimization techniques:
- Break up long tasks. Any JavaScript task over 50 ms blocks the main thread. Break work into smaller chunks with scheduler.yield() or setTimeout.
- Move work off the main thread. Web Workers for heavy computation. The main thread should be free for rendering and input handling.
- Use transitions. React’s useTransition marks updates as non-urgent; the browser can interrupt them for higher-priority work.
- Defer non-critical work. Analytics, tracking, non-essential third-party scripts should run after the page is interactive.
- Optimize event handlers. Don’t do expensive work in click handlers; defer to a microtask or animation frame.
- Avoid forced synchronous layout. Reading layout properties (offsetTop, scrollHeight) after writing causes layout thrashing.
- Reduce JavaScript bundle size. Less JS to parse and execute = faster everything.
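Breaking up long tasks can be sketched as a chunked loop that yields back to the event loop between chunks, so input events can be handled mid-way. The helper names and chunk size are illustrative; scheduler.yield() is used where available, with a setTimeout fallback:

```javascript
// Yield control back to the event loop so pending input can run.
// Prefers scheduler.yield() (Chromium) and falls back to setTimeout.
function yieldToMain() {
  if (globalThis.scheduler?.yield) return globalThis.scheduler.yield();
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Process a large list in chunks, yielding between chunks so no single
// task exceeds the 50 ms long-task threshold.
async function processInChunks(items, handleItem, chunkSize = 100) {
  const results = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      results.push(handleItem(item));
    }
    if (i + chunkSize < items.length) await yieldToMain();
  }
  return results;
}
```

The tradeoff: total work takes slightly longer, but each slice stays under the long-task threshold, so interactions stay responsive.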
CLS optimization
CLS is caused by elements moving unexpectedly. Common culprits:
- Images without dimensions specified, causing reflow when they load.
- Ads, embeds, iframes injected into the layout without reserved space.
- Web fonts swapping in (FOUT) and changing text dimensions.
- Dynamic content inserted above existing content.
Optimization techniques:
- Always specify width and height on images and videos. Or use aspect-ratio CSS.
- Reserve space for ads and embeds. Set a fixed container size before the content loads.
- Font loading strategy. Use font-display: optional (no swap) or carefully match metrics with size-adjust.
- Avoid inserting content above existing content. If you must (e.g., notifications, banners), animate them in rather than abruptly inserting.
- Handle skeleton loading carefully. Skeletons should match the dimensions of real content to avoid shift when real content arrives.
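The space-reservation techniques above can be sketched in markup and CSS (file paths, class names, and the font are placeholders):

```html
<!-- Explicit dimensions let the browser reserve the box before load -->
<img src="/product.jpg" width="800" height="600" alt="Product">

<style>
  /* Fluid image that keeps a stable box: height follows the declared
     aspect ratio even before the file arrives */
  img { max-width: 100%; height: auto; }

  /* Reserve the slot for a 300x250 ad before the ad script injects it */
  .ad-slot { min-height: 250px; }

  /* font-display: optional avoids the swap entirely: if the web font
     isn't ready almost immediately, the fallback is kept for this view */
  @font-face {
    font-family: "BodyFont";
    src: url("/fonts/body.woff2") format("woff2");
    font-display: optional;
  }
</style>
```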
Common interview questions
“How would you optimize a slow page’s LCP?”
Strong answer: measure first (DevTools Performance panel, web-vitals library, RUM). Identify the LCP element. Check whether it’s a server response delay, resource load delay, or render delay. Apply targeted optimization based on the bottleneck.
“Why is this button click feeling janky?”
Strong answer: probably long task on the main thread. Profile the click handler in DevTools. Identify the slow work; move it off-main-thread or break it up.
“Walk me through the difference between LCP and FCP.”
Strong answer: FCP is when any content first paints (could be header text). LCP is when the largest content element paints (often the hero image or main heading). Both measure rendering speed but FCP is earlier and less indicative of user-perceived load.
“How do you measure performance in production?”
Strong answer: real user monitoring (RUM) with the web-vitals JavaScript library, or via a third-party RUM service (Datadog RUM, Sentry Performance, New Relic). Lab data (Lighthouse, WebPageTest) for synthetic monitoring. Both are needed because lab tests miss real-world conditions.
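A RUM setup with the web-vitals library typically serializes each final metric and beacons it to an analytics endpoint. A sketch (the /analytics endpoint and function names are hypothetical; the metric fields are the ones web-vitals exposes):

```javascript
// Serialize the fields of a web-vitals metric object for reporting.
function buildBeaconPayload(metric) {
  return JSON.stringify({
    name: metric.name,     // 'LCP' | 'INP' | 'CLS'
    value: metric.value,   // ms for LCP/INP, unitless score for CLS
    rating: metric.rating, // 'good' | 'needs-improvement' | 'poor'
    id: metric.id,         // unique per page load
  });
}

function reportMetric(metric) {
  const body = buildBeaconPayload(metric);
  // sendBeacon survives page unload; keepalive fetch is the fallback.
  if (!(navigator.sendBeacon && navigator.sendBeacon('/analytics', body))) {
    fetch('/analytics', { method: 'POST', body, keepalive: true });
  }
}

// Wiring in a real app:
//   import { onCLS, onINP, onLCP } from 'web-vitals';
//   onCLS(reportMetric); onINP(reportMetric); onLCP(reportMetric);
```

Note that CLS and INP can keep changing until the page is hidden, which is why beaconing on unload matters.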
Bundle analysis
Senior interviews often probe bundle size and code splitting:
- Bundle analyzer tools. webpack-bundle-analyzer, rollup-plugin-visualizer, or built-in tools in Vite/Next.
- Code splitting. React.lazy, dynamic imports, route-based splitting.
- Tree shaking. ES modules + production build = unused code eliminated.
- Modern build tools. esbuild, swc, and Turbopack are faster than webpack but come with tradeoffs.
- Vendor splitting. Separate vendor bundles for cacheability.
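Code splitting in its simplest form is a dynamic import behind a user action. A sketch (the module path, renderChart, and the selector are hypothetical):

```javascript
// Split a heavy dependency out of the initial bundle with a dynamic
// import. Bundlers such as webpack and Vite emit the dynamically
// imported module as a separate chunk fetched on demand.
async function openChartPanel() {
  const { renderChart } = await import('./charts.js'); // chunk fetched here
  renderChart(document.querySelector('#chart-panel'));
}
```

React.lazy wraps the same mechanism for components; route-based splitting applies it at the router level so each page loads only its own code.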
What scores well
- Articulating the rendering pipeline and where each metric sits.
- Naming specific optimization techniques and explaining when they apply.
- Distinguishing lab vs RUM data.
- Knowing the actual thresholds (≤2.5s LCP, ≤200ms INP, ≤0.1 CLS).
- Familiarity with at least one performance profiling tool (DevTools Performance, Lighthouse, WebPageTest).
What scores poorly
- Generic “make it faster” without specifics.
- Confusing FID with INP (FID was deprecated; INP is current).
- Suggesting techniques without understanding the bottleneck.
- Recommending optimization tools without knowing what they measure.
- Treating performance as a one-time audit rather than continuous monitoring.
Frequently Asked Questions
How heavily is performance tested in 2026?
For senior+ frontend roles, very. Performance is one of the most-tested dimensions because it directly affects business metrics and user experience.
Should I memorize the thresholds?
Yes for the three Core Web Vitals. Knowing 2.5s / 200ms / 0.1 cold is part of the bar.
Is the Performance panel in DevTools enough?
For lab analysis yes. For production performance, real user monitoring is also needed because lab tests run on fast networks and devices.
What about bundle size — what’s a reasonable target?
Varies by app. Modern frontend apps target initial JS under 200KB compressed for the critical bundle; deeper pages can be larger if code-split. The HTTP Archive’s Web Almanac has reference data.
How does AI affect performance interviews?
AI tools can suggest optimizations, but the candidate needs to verify and explain. The interviewer is grading the underlying understanding, not the AI’s suggestions.