Google has spent five years training site owners to obsess over Core Web Vitals. Most of that obsession has been wasted on numbers that don't matter. Here's what actually moved when Google replaced FID with INP in March 2024, what 2026's thresholds are, and the three specific changes that fix most failing pages.
What changed (and what didn't)
The current set of three metrics is:
| Metric | What it measures | "Good" threshold (2026) |
|---|---|---|
| LCP — Largest Contentful Paint | When the biggest above-the-fold element finishes rendering | ≤ 2.5 s |
| INP — Interaction to Next Paint | How long the page takes to respond to any user interaction | ≤ 200 ms |
| CLS — Cumulative Layout Shift | How much the page jumps around as it loads | ≤ 0.1 |
LCP and CLS are unchanged from the original 2020 spec. INP replaced FID (First Input Delay) in March 2024 and has stuck. The replacement matters because FID only measured the delay before the first response — INP measures the worst response across the entire visit. That's a much harsher test.
Google's official position is that Core Web Vitals are a tie-breaker — they decide between pages with otherwise equivalent relevance. Internal experiments at agencies (including ours) suggest the actual weighting is more aggressive than that for competitive queries. A site moving from "poor" to "good" on all three metrics commonly sees 8–20% organic traffic lift inside 60 days, holding all else constant.
LCP — the hero image problem
Largest Contentful Paint usually fails because of one element: the hero image on your landing page. The diagnosis pattern is consistent across the dozens of audits we run each year.
Common offenders:
- Unoptimised hero image. A 3 MB JPEG hosted off-CDN takes 2–4 seconds on a 4G connection. Convert to WebP or AVIF (60–80% smaller at the same quality), serve responsive sizes via `srcset`, and put it on a CDN.
- Render-blocking CSS or fonts. A `<link>` to a 200 KB CSS file at the top of `<head>` blocks LCP until the entire stylesheet downloads and parses. Inline critical CSS for above-the-fold content; defer the rest.
- Late image discovery. If the hero image URL is constructed by JavaScript (common in React apps without proper SSR), the browser can't preload it. Use `<link rel="preload" as="image">` for the LCP element and make sure it's in the SSR'd HTML.
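Taken together, the fixes above look roughly like this in the head and markup — a sketch with placeholder file paths, not your actual asset URLs:

```html
<head>
  <!-- Discover the LCP image early, before stylesheets finish parsing.
       imagesrcset/imagesizes let the browser preload the right responsive size. -->
  <link rel="preload" as="image"
        href="/img/hero-1280.avif"
        imagesrcset="/img/hero-640.avif 640w, /img/hero-1280.avif 1280w"
        imagesizes="100vw">
  <style>/* inline critical above-the-fold CSS here; defer the rest */</style>
</head>

<!-- The same srcset on the element itself, with explicit dimensions. -->
<img src="/img/hero-1280.avif"
     srcset="/img/hero-640.avif 640w, /img/hero-1280.avif 1280w"
     sizes="100vw"
     width="1280" height="640" alt="Hero">
```

The preload and the `<img>` must agree on the candidate list, or the browser fetches the image twice.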
The single highest-leverage fix is image format. Switching a hero image from JPEG to WebP on a Next.js site usually cuts LCP by 800–1200 ms. The Next.js `<Image>` component handles the format conversion automatically — but only if you remember to set the `priority` prop on the LCP image and let it serve AVIF where supported.
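As a sketch, the hero component looks like this (component name, image path, and dimensions are placeholders):

```tsx
// Mark the hero as the LCP element in Next.js.
import Image from "next/image";

export default function Hero() {
  return (
    <Image
      src="/img/hero.jpg" // next/image serves AVIF/WebP to supporting browsers
      width={1280}
      height={640}
      alt="Product hero"
      priority // preloads the image and opts out of lazy-loading
    />
  );
}
```

Without `priority`, `next/image` lazy-loads by default — which is exactly wrong for the LCP element.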
INP — the metric most sites are now failing
INP is the new pain point. Sites that scored "good" on FID are routinely failing INP, because INP looks at every interaction during the session, not just the first one.
The interactions that typically blow the budget:
- Cookie consent banners. Especially the ones that load a heavy JS framework just to render a banner. We've seen banners that single-handedly add 400 ms to INP.
- Third-party analytics + chat scripts. Microsoft Clarity, GA4, Meta Pixel, Intercom, Drift — every script you add competes for the main thread, and its long parse-and-execute tasks delay the handler for whatever the user clicks next.
- Heavy client-side state libraries. React apps with deeply nested context providers can take 300+ ms to respond to a click on a slow phone.
The fixes are less glamorous than LCP:
- Audit your third-party scripts. Anything not strictly needed before user interaction goes behind `<Script strategy="lazyOnload">` (Next.js) or its equivalent.
- Defer non-critical work with `requestIdleCallback`. Newsletter signup logic, exit-intent listeners, analytics setup — none of it needs to run during the first interaction.
- Profile with the Chrome DevTools Performance panel. Click around. Find the slow interaction. Look for the long task. Often it's one third-party script you didn't realise was that heavy.
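The `requestIdleCallback` deferral can be as small as this — a sketch in which the two setup functions are hypothetical placeholders, and the `setTimeout` fallback covers browsers (notably Safari) that don't implement the API:

```javascript
// Hypothetical non-critical setup that should stay off the interaction path.
function initAnalytics() { /* load GA4, Clarity, etc. */ }
function bindExitIntent() { /* attach exit-intent listeners */ }

// Run work when the main thread is idle; fall back to a timeout
// where requestIdleCallback is unavailable (e.g. Safari).
function deferToIdle(fn) {
  if (typeof requestIdleCallback === "function") {
    requestIdleCallback(fn, { timeout: 2000 }); // run within 2 s even if never idle
  } else {
    setTimeout(fn, 200);
  }
}

deferToIdle(() => {
  initAnalytics();
  bindExitIntent();
});
```

The `timeout` option matters: without it, a busy page may never go idle and the work never runs.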
Cookie consent in particular needs scrutiny. GDPR-compliant consent management can be fast — implementations under 5 KB with no third-party dependencies exist. If yours is over 30 KB and includes its own jQuery-like helper library, swap it. We did this for a UK client and recovered 280 ms of INP, moving them from "needs improvement" to "good".
CLS — the easy one to fix, the easy one to forget
Cumulative Layout Shift is the simplest metric to debug and the easiest to regress on. Every time a developer adds a new section, ad slot, or dynamically loaded component, CLS can spike.
The recurring causes:
- Images without explicit dimensions. Always set `width` and `height` attributes (or the `aspect-ratio` CSS property). The browser needs to reserve space before the image loads.
- Web fonts swapping in (FOUT). Use `font-display: optional` for non-critical fonts, or preload your primary font in `<head>`. Without this, every page render shifts text around as the brand font loads.
- Late-loading banners and notifications. Cookie banners, promotional bars, "subscribe to newsletter" widgets — any element that injects above existing content shifts the entire page down.
- Ads. If you run display ads, reserve fixed dimensions for the slot. Otherwise the page jumps every time an ad renders.
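In markup, the first two fixes amount to one attribute pair and one descriptor — a sketch with placeholder file names:

```html
<!-- Explicit dimensions let the browser reserve the box before the image loads. -->
<img src="/img/card.webp" width="600" height="400" alt="Card">

<!-- Preload the primary font, and opt out of late swaps that shift text. -->
<link rel="preload" as="font" type="font/woff2"
      href="/fonts/brand.woff2" crossorigin>
<style>
  @font-face {
    font-family: "Brand";
    src: url("/fonts/brand.woff2") format("woff2");
    font-display: optional; /* skip the font rather than swap it in late */
  }
</style>
```

For ad slots, the same principle applies: give the container a fixed `min-height` matching the ad size so the render never moves surrounding content.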
We've seen CLS scores of 0.35+ on otherwise well-built sites — caused entirely by a header notification bar that loads JS-side after the rest of the page. Moving the bar to render server-side, with placeholder height reserved, drops CLS to 0.02 without any other change.
The 80/20 fix list
If you have 4 hours to spend on Core Web Vitals and want maximum return:
- Hour 1 — image audit. Convert hero images to WebP/AVIF. Add `priority` to LCP images. Set explicit dimensions on every `<img>`. (Fixes LCP and CLS.)
- Hour 2 — third-party script audit. Move every non-critical script to `lazyOnload`: GA4, Clarity, Meta Pixel, LinkedIn Insight, Intercom, anything else. (Fixes INP, marginally improves LCP.)
- Hour 3 — font loading. Preload your primary brand font. Use `font-display: optional` (or `swap`, if the brand font must always render, at a small CLS cost). (Fixes CLS, helps LCP.)
- Hour 4 — measure and verify. Run PageSpeed Insights against your top 5 landing pages. Compare to the previous 28-day field data. Watch for regressions over the next week.
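For the Hour 2 script audit, the `lazyOnload` pattern in Next.js looks like this — a sketch in which the measurement ID and component name are placeholders:

```tsx
// Third-party tags load during browser idle time, after everything else.
import Script from "next/script";

export default function ThirdPartyTags() {
  return (
    <>
      <Script
        src="https://www.googletagmanager.com/gtag/js?id=G-XXXXXXX" // placeholder ID
        strategy="lazyOnload" // fetched after the page's load event, off the critical path
      />
      <Script id="clarity-init" strategy="lazyOnload">
        {`/* Microsoft Clarity inline snippet goes here */`}
      </Script>
    </>
  );
}
```

Anything genuinely needed before interaction (a consent check, say) stays on `afterInteractive`; everything else moves here.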
Most sites we audit see all three metrics move from "poor" or "needs improvement" to "good" with this exact sequence. The sites that don't usually have a deeper architectural problem — SSR not running, hydration mismatches, or a heavy SPA framework loading megabytes before any content appears.
Field data vs. lab data — read it right
PageSpeed Insights shows two numbers per metric:
- Lab data: what Lighthouse measures in a single simulated run from Google's data center.
- Field data (CrUX): what real users in the last 28 days experienced, broken down by mobile vs. desktop.
Google uses field data for rankings. Lab data is for debugging. A site with terrible lab scores but good field data (because most of its visitors have fast connections and devices) won't be penalised. A site with perfect lab scores but poor field data — because its users are on 4G mobiles in tier-3 cities in India — will be.
If your field data is missing entirely, you don't have enough traffic for the Chrome User Experience Report to include you. Lab data is the only available signal until traffic picks up.
What's coming next
Google has hinted at adding two more metrics beyond the current three by mid-2026 — one likely to be a measure of responsiveness during scroll, the other a measure of animation smoothness. Neither is in the ranking algorithm yet, but they're already in Lighthouse as informational metrics. Worth tracking.
Want a Core Web Vitals audit?
We run free 24-hour audits on Core Web Vitals + technical SEO. We pull your CrUX data, compare it to industry benchmarks, identify the specific files and scripts that are hurting you, and prioritise fixes by impact. No obligation.
Black Arrow Technologies builds conversion-focused websites and runs technical SEO programmes for UK, UAE, and India clients. We took a London fintech from page 3 to position 2 in 90 days, in part by fixing Core Web Vitals first. We don't chase algorithms — we build foundations.