Core Web Vitals Explained: What Actually Matters to Users

Blyxo Team
7 min read
Performance · Core Web Vitals · User Experience · Testing · Metrics

You've probably heard about Core Web Vitals. Maybe you've even seen your scores in Google Search Console or PageSpeed Insights. But do you actually understand what they measure—and more importantly, why they matter?

Here's the problem: most developers optimize for metrics they don't understand. They chase a green Lighthouse score without knowing what makes it green. They fix LCP without understanding what LCP actually measures.

This guide explains what Core Web Vitals actually are, what they measure, and how to optimize for metrics that genuinely improve user experience.

Why Core Web Vitals exist

Before Core Web Vitals, performance metrics were all over the place:

  • Load time: When does the page finish loading? (But users don't wait for "finished"—they start interacting as soon as content appears)
  • Time to Interactive: When can users interact? (But this doesn't measure how fast interactions actually respond)
  • First Contentful Paint: When does something appear? (But that "something" might just be a background color, not useful content)

None of these directly measured user experience. You could have a perfect score on all of them and still have a site that feels slow.

Google's Core Web Vitals focus on three aspects that users actually care about:

  1. Loading: How quickly does useful content appear?
  2. Interactivity: How quickly does the page respond to interactions?
  3. Visual stability: Does the page jump around while loading?

Let's break down each metric.

1. Largest Contentful Paint (LCP): Loading performance

What it measures: How long until the largest image or text block visible in the viewport finishes rendering.

Why it matters: Users perceive a page as "loaded" when they can see the main content—not when every asset has finished downloading. LCP measures this perceived load time.

Target: Less than 2.5 seconds (good), 2.5-4 seconds (needs improvement), over 4 seconds (poor)
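
These thresholds are easy to encode. A minimal sketch (the function name and shape are our own, not a Blyxo or web-vitals API):

```javascript
// Classify an LCP value (milliseconds) against Google's published thresholds:
// up to 2500 ms is "good", up to 4000 ms "needs improvement", beyond that "poor".
function rateLCP(ms) {
  if (ms <= 2500) return "good";
  if (ms <= 4000) return "needs improvement";
  return "poor";
}
```

The same shape works for INP and CLS; only the threshold pair changes.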

What counts as the largest contentful element?

Usually one of these:

  • Hero images
  • Large text blocks
  • Video thumbnails
  • Background images (if used via CSS background on a visible element)

Doesn't count: Headers, footers, sidebars, or anything outside the initial viewport.

Common LCP issues and how Blyxo detects them

Issue 1: Unoptimized images

  • A 2MB hero image that takes 5 seconds to download on 3G
  • Blyxo flags: Images over 200KB, images without width/height attributes, below-the-fold images without lazy loading

Issue 2: Render-blocking resources

  • CSS and JavaScript files that block rendering
  • Blyxo flags: Stylesheets over 50KB, synchronous scripts in <head>, unused CSS

Issue 3: Slow server response

  • Server takes 2+ seconds to respond
  • Blyxo measures: Time to First Byte (TTFB) and flags slow API calls

Issue 4: Client-side rendering

  • JavaScript frameworks that render everything client-side
  • Blyxo detects: Long JavaScript execution time, delayed LCP compared to FCP

How to improve LCP

Optimize images:

  • Compress images (use WebP or AVIF)
  • Serve responsive images (srcset)
  • Use a CDN for faster delivery
  • Add width/height to prevent layout shifts
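
A hero image that applies all four tips might be generated like this (sketched as a small HTML-string builder; the function and file names are illustrative, and real templating would escape attribute values):

```javascript
// Build an <img> tag string with the attributes that help LCP and CLS:
// srcset for responsive sizes, explicit width/height to reserve layout space.
function heroImg({ src, widths, width, height, alt }) {
  const srcset = widths.map((w) => `${src}?w=${w} ${w}w`).join(", ");
  return `<img src="${src}" srcset="${srcset}" ` +
         `width="${width}" height="${height}" alt="${alt}">`;
}

const tag = heroImg({
  src: "/hero.webp",        // compressed WebP, ideally served from a CDN
  widths: [480, 960, 1920], // candidate sizes for the browser to pick from
  width: 1920,
  height: 1080,
  alt: "Product hero",
});
```

Note the hero image itself should not be lazy-loaded; lazy loading is for images below the fold.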

Reduce render-blocking resources:

  • Inline critical CSS
  • Defer non-critical CSS
  • Use async or defer for scripts
  • Code-split JavaScript

Improve server response:

  • Use server-side rendering (SSR) or static generation (SSG)
  • Optimize database queries
  • Use caching (CDN, browser cache, server cache)

How Blyxo helps: Identifies which resources are delaying LCP, shows waterfall charts to visualize blocking resources, and suggests specific optimizations (e.g., "Compress hero.jpg by 78%").

2. Interaction to Next Paint (INP): Interactivity

What it measures: How long between a user interaction (click, tap, keypress) and the next visual update.

Why it matters: Users expect instant feedback. If they click a button and nothing happens for 500ms, they'll click again—or leave.

INP replaced First Input Delay (FID) in 2024 because FID only measured the first interaction. INP measures all interactions throughout the page lifecycle.

Target: Less than 200ms (good), 200-500ms (needs improvement), over 500ms (poor)

Common INP issues and how Blyxo detects them

Issue 1: Long JavaScript tasks

  • Heavy JavaScript execution blocking the main thread
  • Blyxo flags: Tasks over 50ms, long event handlers, expensive render cycles

Issue 2: Heavy event handlers

  • Click handlers that do too much work synchronously
  • Blyxo detects: Event handlers that take over 100ms

Issue 3: Large DOM size

  • Pages with 5,000+ DOM nodes slow down rendering
  • Blyxo flags: DOM size over 1,500 nodes (warning) or 3,000 nodes (critical)

Issue 4: Unoptimized third-party scripts

  • Analytics, ads, chat widgets blocking interactions
  • Blyxo identifies: Third-party scripts with long execution time

How to improve INP

Optimize JavaScript:

  • Code-split to reduce bundle size
  • Use web workers for heavy computation
  • Debounce/throttle event handlers
  • Use requestIdleCallback for non-critical work
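
A minimal throttle helper along these lines (the injectable `now` parameter is only there so the behavior can be demonstrated without real timers; the names are ours):

```javascript
// Throttle: run `fn` at most once per `waitMs`, dropping calls in between,
// so a noisy event (scroll, resize, keypress) can't flood the main thread.
function throttle(fn, waitMs, now = Date.now) {
  let last = -Infinity;
  return (...args) => {
    if (now() - last >= waitMs) {
      last = now();
      return fn(...args);
    }
  };
}
```

In a page you would wrap an expensive handler, e.g. `addEventListener("scroll", throttle(update, 100))`.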

Reduce DOM size:

  • Virtualize long lists
  • Lazy-load off-screen content
  • Simplify DOM structure
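
Virtualizing a long list boils down to rendering only the rows that intersect the viewport. A sketch of the index math, assuming a fixed row height (libraries handle variable heights, but the idea is the same):

```javascript
// Given scroll position, compute which rows of a fixed-height list to render.
// Only these rows become DOM nodes; everything else stays virtual.
// `overscan` renders a few extra rows so fast scrolling doesn't show gaps.
function visibleRange(scrollTop, viewportHeight, rowHeight, totalRows, overscan = 2) {
  const first = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const last = Math.min(
    totalRows - 1,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan
  );
  return { first, last };
}
```

With 10,000 rows and a 600px viewport, only a few dozen nodes exist at any time instead of 10,000.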

Optimize third-party scripts:

  • Load non-critical scripts asynchronously
  • Use facades for heavy widgets (e.g., show a static image instead of embedded YouTube until user clicks)

How Blyxo helps: Shows which JavaScript tasks are blocking interactions, identifies slow event handlers, and measures INP across different pages and user flows.

3. Cumulative Layout Shift (CLS): Visual stability

What it measures: How much the page layout shifts unexpectedly during loading.

Why it matters: Ever tried to click a button, but the page shifted and you clicked an ad instead? That's layout shift—and it's infuriating.

CLS sums unexpected layout shifts grouped into short session windows; your score is the worst window during the page lifecycle, not a running total of every shift.

Target: Less than 0.1 (good), 0.1-0.25 (needs improvement), over 0.25 (poor)

How it's calculated: each individual shift scores (Impact Fraction) × (Distance Fraction)

  • Impact Fraction: How much of the viewport was affected
  • Distance Fraction: How far elements moved
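
A single shift's contribution can be sketched in code (simplified to one element shifting vertically; the field names are illustrative, not the browser's Layout Instability API):

```javascript
// Score one layout shift: impact fraction (share of the viewport touched by
// the element's before-and-after positions) times distance fraction
// (how far it moved, relative to viewport height).
function shiftScore({ elementHeight, shiftY, viewportHeight }) {
  const impactFraction = Math.min(1, (elementHeight + shiftY) / viewportHeight);
  const distanceFraction = shiftY / viewportHeight;
  return impactFraction * distanceFraction;
}
```

For example, a 300px element shifting down 100px in an 800px viewport scores 0.5 × 0.125 = 0.0625, already well over half the "good" budget from a single shift.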

Common CLS issues and how Blyxo detects them

Issue 1: Images without dimensions

  • Browser doesn't reserve space, so when image loads, it pushes content down
  • Blyxo flags: <img> tags without width/height attributes

Issue 2: Ads, embeds, or iframes without reserved space

  • Third-party content loads and shifts layout
  • Blyxo detects: iframes without explicit dimensions

Issue 3: Dynamically injected content

  • Banners, notifications, or popups that push content
  • Blyxo measures: Total layout shift score and identifies which elements cause shifts

Issue 4: Web fonts causing FOIT/FOUT

  • Flash of Invisible Text (FOIT) or Flash of Unstyled Text (FOUT) when fonts load
  • Blyxo flags: Missing font-display property, large font files

How to improve CLS

Set explicit dimensions:

  • Add width/height to images and videos
  • Reserve space for ads and embeds
  • Use aspect ratio boxes for dynamic content
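
Reserving space is plain arithmetic: if you know the rendered width and the intrinsic aspect ratio, the height is fixed before the asset ever loads. A sketch:

```javascript
// Height to reserve for an element of known aspect ratio at a given rendered
// width, so late-loading content (image, embed, ad) causes no layout shift.
function reservedHeight(renderedWidth, aspectW, aspectH) {
  return Math.round(renderedWidth * (aspectH / aspectW));
}
```

This is exactly what CSS `aspect-ratio: 16 / 9`, or width/height attributes on an `<img>`, compute for you automatically.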

Optimize font loading:

  • Use font-display: swap
  • Preload critical fonts
  • Use system fonts as fallback

Avoid inserting content above existing content:

  • Load dynamic content below the fold
  • Use overlays instead of inline banners

How Blyxo helps: Highlights elements causing layout shifts, measures CLS over time, and shows before/after comparisons when you fix issues.

The bigger picture: How Core Web Vitals work together

Here's the reality: optimizing one metric can hurt another.

Example: You improve LCP by lazy-loading images. But now users scroll down and wait 2 seconds for images to load. Your LCP improved, but user experience got worse.

Example: You reduce CLS by reserving space for ads. But now you have a huge blank space if the ad doesn't load, hurting perceived performance.

The best performance optimizations improve all three metrics—or at least don't hurt the others.

Real-world example: E-commerce product page

Before optimization:

  • LCP: 4.2s (large hero image, not optimized)
  • INP: 380ms (heavy JavaScript, analytics blocking interactions)
  • CLS: 0.18 (images without dimensions, late-loading reviews section)

After optimization:

  • LCP: 1.8s (compressed WebP image, preloaded, served from CDN)
  • INP: 120ms (code-split JavaScript, deferred analytics)
  • CLS: 0.05 (explicit image dimensions, reserved space for reviews)

Business impact: 12% increase in conversions (users could see and interact with the page faster, fewer accidental clicks).

Why automated testing matters

Here's the problem with Core Web Vitals: they vary by device, network, and location.

Your LCP might be 1.5s on your MacBook Pro but 6s on a low-end Android phone. Your INP might be 150ms on WiFi but 800ms on 3G.

You can't manually test every combination.

How Blyxo automates Core Web Vitals testing

  • Real devices and networks: Tests from multiple locations with realistic device and network emulation
  • Historical tracking: See how metrics change over time and catch regressions
  • Threshold alerts: Get notified when any page exceeds your performance budgets
  • Element-level insights: See which specific elements cause LCP delays or CLS shifts
  • CI/CD integration: Test every deployment before it goes live

Example workflow:

  1. Developer pushes code
  2. Blyxo runs performance tests automatically
  3. If LCP exceeds 2.5s or CLS exceeds 0.1, the build fails
  4. Developer gets specific feedback: "Hero image increased LCP by 1.2s—compress or lazy-load"
  5. Fix is deployed, metrics improve
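
Step 3 of that workflow is a simple budget check. A sketch of what such a gate might look like (the budget numbers and names are ours for illustration, not Blyxo's API):

```javascript
// Compare measured Core Web Vitals against a performance budget and return
// the list of violations; a CI step fails the build if any exist.
const BUDGET = { lcpMs: 2500, inpMs: 200, cls: 0.1 };

function checkBudget(metrics, budget = BUDGET) {
  return Object.entries(budget)
    .filter(([key, limit]) => metrics[key] > limit)
    .map(([key, limit]) => `${key}: ${metrics[key]} exceeds budget ${limit}`);
}
```

An empty result means the deployment passes; anything else becomes the "specific feedback" shown to the developer.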

What good scores actually mean

Here's what many developers miss: good Core Web Vitals scores don't guarantee a fast site. They just mean you're passing Google's thresholds.

A site with 2.4s LCP, 195ms INP, and 0.09 CLS is technically "good" but still feels slower than a site with 1.2s LCP, 80ms INP, and 0.02 CLS.

Don't optimize for green scores. Optimize for user experience.

The best way to do this:

  • Test from where your users actually are (see Part 1: Testing From Where Your Users Actually Are)
  • Set realistic performance budgets based on your users' devices and networks
  • Track metrics over time and catch regressions early
  • Prioritize optimizations based on real impact

The reality of performance optimization

You won't fix everything overnight. That's okay.

Start with the biggest impact:

  1. Identify your worst pages: Which pages have the highest traffic and worst metrics?
  2. Fix low-hanging fruit: Compress images, defer scripts, add dimensions to images
  3. Measure impact: Did conversions improve? Did bounce rate decrease?
  4. Repeat: Tackle the next priority

Performance is a journey, not a destination. What matters is continuous improvement.


Want to track Core Web Vitals across all your pages automatically? Blyxo measures LCP, INP, and CLS from real locations with realistic devices and networks, alerting you to regressions before they impact users.

Continue reading: Part 1: Testing From Where Your Users Actually Are | Part 3: Performance Optimization Workflow

Ready to improve your website's performance?

Blyxo helps teams find and fix performance issues with AI-powered testing and developer-friendly recommendations.