Frontend Performance Budgets and Real-User Monitoring

Frontend performance is one of the easiest things to claim and one of the hardest things to maintain. The modern discipline pairs performance budgets enforced in CI with real-user monitoring, and senior interviews probe whether you understand the full loop. This guide describes the state of the practice in 2026.

The metrics that matter

  • LCP (Largest Contentful Paint): when the largest content element renders. Target < 2.5s.
  • INP (Interaction to Next Paint): response to user input. Target < 200ms. Replaced FID in 2024.
  • CLS (Cumulative Layout Shift): visual stability. Target < 0.1.
  • TTFB (Time to First Byte): server response time. Target < 800ms.
  • FCP (First Contentful Paint): when the first content renders. Target < 1.8s.

The performance-budget concept

A budget is a hard limit that engineering commits to:

  • JS bundle size: 200KB compressed for the initial page
  • CSS: 50KB
  • Image weight: 500KB
  • Time-based: LCP under 2.5s on 4G mobile
  • If a PR pushes past a limit, CI fails (a minimal check is sketched after this list)
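
A minimal sketch of the enforcement step, assuming the build emits a gzipped entry bundle at dist/main.js.gz (the path and limit here are illustrative):

    // budget-check.ts: fail the build when the initial JS bundle exceeds budget.
    import { statSync } from "node:fs";

    const BUDGET_BYTES = 200 * 1024; // 200KB compressed initial JS

    const actual = statSync("dist/main.js.gz").size;
    if (actual > BUDGET_BYTES) {
      console.error(`Initial JS is ${actual} bytes; budget is ${BUDGET_BYTES} bytes.`);
      process.exit(1); // non-zero exit fails the CI job, which fails the PR
    }
    console.log(`Initial JS OK: ${actual} / ${BUDGET_BYTES} bytes.`);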

Lighthouse CI

  • Run Lighthouse on every PR against the deployed preview
  • Compare results against the budgets; fail the PR on a miss
  • Track score over time
  • Most teams use Lighthouse CI or a third-party service (Calibre, SpeedCurve); a minimal config is sketched below
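
A minimal Lighthouse CI configuration along these lines might look as follows (a lighthouserc.js sketch; the preview URL is a placeholder, and the assertion keys are Lighthouse CI's documented audit and resource-summary names):

    // lighthouserc.js: run Lighthouse against the preview and assert budgets.
    module.exports = {
      ci: {
        collect: {
          url: ["https://preview.example.com/"], // deployed PR preview (placeholder)
          numberOfRuns: 3, // median out run-to-run noise
        },
        assert: {
          assertions: {
            // Time-based budget: lab LCP under 2.5s.
            "largest-contentful-paint": ["error", { maxNumericValue: 2500 }],
            // Size-based budgets, in bytes.
            "resource-summary:script:size": ["error", { maxNumericValue: 200 * 1024 }],
            "resource-summary:stylesheet:size": ["error", { maxNumericValue: 50 * 1024 }],
            "resource-summary:image:size": ["error", { maxNumericValue: 500 * 1024 }],
          },
        },
        upload: { target: "temporary-public-storage" }, // keep reports for trend tracking
      },
    };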

The lab vs field gap

  • Lab: Lighthouse runs in a controlled environment
  • Field: real users on real networks and devices
  • The gap is real — lab can show “good” while field shows “needs improvement”
  • Both are required; neither is sufficient alone

Real-user monitoring (RUM)

  • web-vitals library reports Core Web Vitals from the browser
  • Send to your analytics (Sentry, Datadog RUM, Google Analytics, custom)
  • Segment by device, country, page
  • Watch the p75 (75th percentile) — Google’s threshold for “good”
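
A minimal RUM setup with the web-vitals library, assuming a custom /vitals collection endpoint (the endpoint and payload shape are illustrative):

    import { onCLS, onINP, onLCP, onTTFB, type Metric } from "web-vitals";

    // Beacon each metric to the collection endpoint; sendBeacon survives unload.
    function sendToAnalytics(metric: Metric) {
      const body = JSON.stringify({
        name: metric.name,   // "LCP" | "INP" | "CLS" | "TTFB"
        value: metric.value, // milliseconds (unitless for CLS)
        id: metric.id,       // unique per page load, for deduplication
        page: location.pathname,
      });
      navigator.sendBeacon("/vitals", body);
    }

    onLCP(sendToAnalytics);
    onINP(sendToAnalytics);
    onCLS(sendToAnalytics);
    onTTFB(sendToAnalytics);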

What RUM actually shows

  • Mobile users on slow networks have much worse metrics than desktop
  • Specific pages (heavy media, complex layouts) show up as outliers
  • Geographic distribution — users far from the CDN edge
  • Browser-specific issues (Safari INP regressions are real)
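
On the aggregation side, those segments fall out of a simple grouped percentile; a nearest-rank p75 per device class, for example (the sample shape is illustrative):

    type Sample = { value: number; device: "mobile" | "desktop" };

    // Nearest-rank p75: the value that 75% of samples fall at or below.
    function p75(values: number[]): number {
      const sorted = [...values].sort((a, b) => a - b);
      return sorted[Math.ceil(sorted.length * 0.75) - 1];
    }

    function p75BySegment(samples: Sample[]): Record<string, number> {
      const groups = new Map<string, number[]>();
      for (const s of samples) {
        if (!groups.has(s.device)) groups.set(s.device, []);
        groups.get(s.device)!.push(s.value);
      }
      return Object.fromEntries([...groups].map(([seg, vals]) => [seg, p75(vals)]));
    }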

The regression discipline

  • Watch RUM week-over-week
  • If LCP regresses by 100ms, find the change that caused it
  • Common culprits: new JS dependency, unoptimized image, blocking script
  • Bundle-size analyzer (webpack-bundle-analyzer, rollup-plugin-visualizer) shows what grew
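
Wiring webpack-bundle-analyzer into the build looks roughly like this (the plugin options are real; the surrounding config is elided):

    // webpack.config.js: emit a static treemap report of what is in each chunk.
    const { BundleAnalyzerPlugin } = require("webpack-bundle-analyzer");

    module.exports = {
      // ...entry, output, loaders...
      plugins: [
        new BundleAnalyzerPlugin({
          analyzerMode: "static", // write report.html instead of starting a server
          openAnalyzer: false,    // do not open a browser in CI
        }),
      ],
    };

Diffing the report before and after a suspect PR usually points straight at the dependency that grew.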

The new bundle reality

  • RSC (React Server Components) and streaming SSR dramatically reduce shipped JS
  • Code-split aggressively: route-level, component-level for large components
  • Defer scripts that are not critical (analytics, A/B testing libraries)
  • Tree-shake unused exports — measure with bundle analyzer
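
Route-level splitting in React, for instance (component names are placeholders):

    import { lazy, Suspense } from "react";

    // Each lazy() boundary becomes its own chunk, fetched only when rendered.
    const SettingsPage = lazy(() => import("./routes/SettingsPage"));

    export function App() {
      return (
        <Suspense fallback={<p>Loading…</p>}>
          <SettingsPage />
        </Suspense>
      );
    }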

Image optimization

  • WebP / AVIF formats with JPEG fallback
  • Responsive sizes (srcset)
  • Lazy-load below-the-fold images (loading="lazy")
  • Modern CDN with image transforms (Cloudinary, imgix, Vercel Image)
  • Avoid CLS: specify width and height attributes
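
Putting those together in JSX, for a below-the-fold image (file names are placeholders; an above-the-fold LCP image should not be lazy-loaded):

    // Modern formats with JPEG fallback, responsive sizes, and explicit
    // dimensions so the browser reserves layout space (prevents CLS).
    export function ArticleImage() {
      return (
        <picture>
          <source type="image/avif" srcSet="/img/photo-800.avif 800w, /img/photo-1600.avif 1600w" />
          <source type="image/webp" srcSet="/img/photo-800.webp 800w, /img/photo-1600.webp 1600w" />
          <img
            src="/img/photo-800.jpg"
            srcSet="/img/photo-800.jpg 800w, /img/photo-1600.jpg 1600w"
            sizes="(max-width: 800px) 100vw, 800px"
            width={800}
            height={450}
            loading="lazy"
            alt="Descriptive text"
          />
        </picture>
      );
    }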

Font loading

  • Self-host or use a CDN with preload
  • font-display: swap to avoid FOIT (flash of invisible text)
  • Variable fonts to ship one file with multiple weights
  • Preload critical fonts via <link rel="preload">
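
A sketch, assuming a self-hosted variable font at /fonts/inter-var.woff2 (the path and family name are placeholders):

    // In the document head: preload the critical font so it races the CSS.
    export const fontPreload = (
      <link
        rel="preload"
        as="font"
        type="font/woff2"
        href="/fonts/inter-var.woff2"
        crossOrigin="anonymous"
      />
    );

    // Companion CSS, shown as a string for illustration: one variable-font file
    // covers weights 100-900, and font-display: swap avoids invisible text (FOIT).
    export const fontFaceCss = `
      @font-face {
        font-family: "Inter";
        src: url("/fonts/inter-var.woff2") format("woff2");
        font-weight: 100 900;
        font-display: swap;
      }
    `;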

Third-party script discipline

  • Each third-party script costs LCP and INP: network bytes plus main-thread time
  • Audit regularly — every quarter, list every third-party tag
  • Move to async or defer where possible
  • Lighthouse flags blocking scripts; fix them
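
One common pattern is to inject non-critical tags only once the main thread is idle (the script URL is a placeholder):

    // Load a non-critical third-party script without competing with startup.
    function loadWhenIdle(src: string) {
      const inject = () => {
        const s = document.createElement("script");
        s.src = src;
        s.async = true; // execute whenever ready; never block parsing
        document.head.appendChild(s);
      };
      if ("requestIdleCallback" in window) {
        requestIdleCallback(inject, { timeout: 5000 }); // idle, or at most 5s
      } else {
        setTimeout(inject, 2000); // fallback where requestIdleCallback is missing
      }
    }

    loadWhenIdle("https://example.com/analytics.js");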

INP regressions

INP is the trickiest of the Core Web Vitals:

  • Long tasks block the main thread; INP suffers
  • Common causes: large synchronous JSON.parse calls, heavy React renders, hydration cost
  • Mitigation: scheduler.yield(), break long tasks, defer hydration
  • Measure with INP RUM, not just Lighthouse
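
A sketch of breaking up a long task, using scheduler.yield() where the browser supports it and a setTimeout fallback elsewhere (the work callback is a placeholder):

    // Process a large list without blocking input for more than ~50ms at a time.
    async function processInChunks<T>(items: T[], work: (item: T) => void) {
      let deadline = performance.now() + 50;
      for (const item of items) {
        work(item);
        if (performance.now() >= deadline) {
          // Yield so pending clicks and keypresses can be handled first.
          const scheduler = (globalThis as any).scheduler;
          if (scheduler?.yield) {
            await scheduler.yield(); // prioritized continuation where supported
          } else {
            await new Promise((resolve) => setTimeout(resolve, 0));
          }
          deadline = performance.now() + 50;
        }
      }
    }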

Performance budget by route

  • Marketing pages: aggressive budget (small bundle, fast LCP)
  • Logged-in product: looser, but still measured
  • Critical user paths (signup, checkout): tightest budgets
  • Different budgets per route prevent overgeneralization
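
Lighthouse CI expresses this with assertMatrix, mapping URL patterns to their own assertions (URLs and limits here are placeholders):

    // lighthouserc.js: a tighter budget on checkout than on content pages.
    module.exports = {
      ci: {
        assert: {
          assertMatrix: [
            {
              matchingUrlPattern: ".*/checkout",
              assertions: {
                "largest-contentful-paint": ["error", { maxNumericValue: 2000 }],
                "resource-summary:script:size": ["error", { maxNumericValue: 150 * 1024 }],
              },
            },
            {
              matchingUrlPattern: ".*/blog/.*",
              assertions: {
                "largest-contentful-paint": ["error", { maxNumericValue: 2500 }],
              },
            },
          ],
        },
      },
    };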

The “page speed score is a vanity metric” warning

Senior engineers know:

  • Lighthouse score is one input, not the goal
  • RUM p75 of LCP is what users actually feel
  • Conversion rate / engagement are downstream metrics that matter most
  • Optimize for the user, not the score

Common interview questions

  • “How do you set a performance budget?”
  • “Walk me through diagnosing an LCP regression.”
  • “What is INP and how do you optimize it?”
  • “What is the gap between Lighthouse and field data?”

What separates senior from staff

Senior candidates know the metrics and tools. Staff candidates run the discipline — budgets, regression watch, route-specific tuning. Principal candidates link performance to business outcomes (conversion, retention, revenue) and influence cross-team performance culture.

Frequently Asked Questions

Should I optimize for Lighthouse 100?

No. Optimize for real users. Lighthouse 95 with great RUM beats 100 with poor field metrics.

How do I report performance to stakeholders?

Show RUM trends weekly. Tie to business metrics where possible. Avoid presenting Lighthouse alone; it does not move executives.

What about Edge / streaming SSR impact?

LCP often improves significantly. INP can regress if hydration is heavy — measure both.
