AI summary

The four numbers your website provider has to hit on mobile, why each one matters more in 2026 than it did three years ago, and the common provider failure modes that quietly cost dealerships ranking and AI-search citations. Part of our [Powersports Website Playbook](/blog/powersports-dealership-website-playbook-seo-geo-ai-search-visibility).

Powersports buyers research on mobile. Roughly seventy to eighty percent of vehicle research traffic on a powersports dealer site arrives on a phone, usually a mid-range Android, often on 4G or a weak LTE signal in the garage or at the trailhead. The site's job is to render fast on that device, on that connection, on the page the buyer actually lands on (almost never the homepage, usually a VDP or a filtered SRP).

Core Web Vitals are how Google measures whether you're meeting that bar. They're also a strong signal for AI search engines, which pull sources in real time and skip the slow ones in favor of faster sources, even ones with thinner content. Speed is no longer just user experience. It's literal eligibility to be cited.

This guide walks through each of the four metrics that matter for a dealership website: the threshold to hit, the most common reasons provider-template sites miss it, and how to actually test.

Why Core Web Vitals matter more in 2026 than they did in 2022

When Google rolled Core Web Vitals into its ranking signals in 2021, the practical effect on most categories was modest. A slow site got a slight ranking demerit. A fast site got a slight bump. The signal was real but not decisive.

Two things changed that.

The first: AI search engines became a meaningful slice of buyer research. ChatGPT, Perplexity, Claude, Gemini, and Google's AI Overviews don't pre-index the web the way classic search engines do. They retrieve sources at query time, which means a slow source delays the response and gets dropped in favor of a faster one. The threshold isn't just "good enough to rank," it's "fast enough to be retrieved before the LLM finishes generating the response."

The second: mobile became the dominant research surface for vehicles. Industry data consistently shows that 70%+ of dealership-site sessions are mobile. Buyers researching a side-by-side at lunch, a snowmobile on the chairlift, a PWC on the dock: every one of those sessions is on a phone, often on a marginal connection.

Those two shifts mean Core Web Vitals have moved from "ranking signal" to "eligibility threshold." A page that doesn't hit the thresholds doesn't just rank lower. It gets skipped.

Largest Contentful Paint (LCP) under 2.5 seconds

What it measures. The time from page-load start to the moment the largest visible element renders. On a powersports VDP, that's almost always either the hero gallery's first image or the model headline. On a category SRP, it's usually the first listing tile.

The threshold. Google publishes the Core Web Vitals thresholds at three tiers: "good" (under 2.5s), "needs improvement" (2.5–4.0s), and "poor" (over 4.0s). For 2026 ranking and citation eligibility, you want "good" on a representative VDP measured on a mid-range Android device on 4G, not on a desktop fiber connection.

Why it matters. LCP is the single biggest determinant of perceived load speed. A buyer who lands on a VDP and waits four seconds for the hero image is a buyer who's already started the back-button gesture. An AI engine pulling that page as a source is an engine that's moved on to the next candidate.

Common provider failure modes.

  • Hero image weight. Provider templates regularly serve hero images at 600KB–1.5MB as JPEGs. The 2026 standard is WebP or AVIF, under 150KB, with srcset for responsive sizes.
  • No preload on the hero. The browser doesn't know the hero is critical until it parses the HTML and CSS. A `<link rel="preload" as="image" href="…">` tag in the head fixes this.
  • Hero gallery JavaScript blocking the first paint. Carousel libraries that block first paint until the full gallery script loads add a full second of LCP delay on slow phones.
  • Render-blocking CSS in the head. Large unoptimized stylesheets, common on multi-tenant provider templates, block the browser from rendering until the entire stylesheet is parsed.
  • Slow CDN or no CDN. Provider sites still hosted on a single regional origin server, with no edge CDN, pay 200–400ms of TTFB on every request from outside that region.

Quickest wins. Preload the hero image. Convert hero images to WebP/AVIF and cap weight at 150KB. Move non-critical JavaScript below the fold or behind defer/async. Inline the critical CSS for above-the-fold content; defer the rest. Ship from a CDN edge close to your buyers.
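
Here's a minimal sketch of the preload-plus-responsive-hero pattern. The file paths, widths, and dimensions are placeholders, not any provider's actual assets:

```html
<!-- Hypothetical hero markup; paths, widths, and dimensions are
     illustrative. Substitute the template's real assets. -->
<head>
  <!-- Declare the hero critical before the parser discovers it -->
  <link rel="preload" as="image"
        href="/img/vdp-hero-800.webp"
        imagesrcset="/img/vdp-hero-400.webp 400w,
                     /img/vdp-hero-800.webp 800w,
                     /img/vdp-hero-1200.webp 1200w"
        imagesizes="100vw">
</head>
<body>
  <!-- width/height reserve layout space (which also helps CLS);
       fetchpriority pushes the hero ahead of other images -->
  <img src="/img/vdp-hero-800.webp"
       srcset="/img/vdp-hero-400.webp 400w,
               /img/vdp-hero-800.webp 800w,
               /img/vdp-hero-1200.webp 1200w"
       sizes="100vw" width="1200" height="800"
       fetchpriority="high"
       alt="2026 side-by-side, front three-quarter view">
</body>
```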

Interaction to Next Paint (INP) under 200 milliseconds

What it measures. INP replaced First Input Delay (FID) in early 2024. It measures the longest delay between any user interaction and the next visual update, across the entire page session. Tap the gallery → next paint. Tap a filter → next paint. Tap the financing widget → next paint. INP is the worst of those.

> Core Web Vitals have moved from ranking signal to eligibility threshold. A page that misses the thresholds doesn't rank lower, it gets skipped.

The threshold. Under 200ms is "good." Over 500ms is "poor." The middle band is "needs improvement."

Why it matters. INP is the metric that catches heavy client-side JavaScript on filtered SRPs and configurator-style VDPs. Buyers tap a filter, the page hangs for half a second while the bundle re-renders, the buyer thinks the site's broken. Both Google and AI engines treat slow interaction as a quality signal.

Common provider failure modes.

  • Heavy filter logic on the main thread. SRP filter libraries that re-compute the entire result set on the main thread block any other interaction during the work.
  • Third-party scripts on the main thread. Live chat widgets, marketing tags, A/B test SDKs, all of them tend to run on the main thread by default. Stack five of them and INP collapses.
  • Inventory data fetched per interaction. Filter taps that re-fetch from an API, with no debouncing or caching, blow past 200ms regularly.
  • Synchronous third-party iframes. Embedded financing or trade-in widgets in synchronous iframes block the main thread on every interaction.

Quickest wins. Move filter logic into a Web Worker. Lazy-load third-party scripts after the page is interactive. Debounce or batch filter changes. Replace synchronous iframes with async iframes or in-page components. Audit your tag manager; most setups have at least one tag that should be deferred.
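
As a sketch of the debounce-plus-Worker pattern: the file name, handler names, and timing below are hypothetical, not any provider's API.

```html
<!-- Hypothetical SRP filter handler. filter-worker.js, onFilterTap,
     and renderResults are placeholder names. -->
<script>
  // Heavy result-set computation lives off the main thread.
  const filterWorker = new Worker("/js/filter-worker.js");
  filterWorker.onmessage = (e) => renderResults(e.data);

  let pending = null;
  function onFilterTap(filterState) {
    // The tap's visual feedback (checkbox state) paints immediately;
    // the expensive work is batched, so a burst of taps triggers one
    // worker round trip instead of five.
    clearTimeout(pending);
    pending = setTimeout(() => filterWorker.postMessage(filterState), 150);
  }

  function renderResults(results) {
    // Cheap DOM update on the main thread; the filtering math is done.
  }
</script>
```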

Cumulative Layout Shift (CLS) under 0.1

What it measures. The total amount the page layout shifts during load. Images that load without reserved space, ads or third-party widgets that inject after first paint, async-rendered components, all of them push CLS up.

> A buyer who lands on a VDP and waits four seconds for the hero image is a buyer who's already started the back-button gesture.

The threshold. Under 0.1 is "good." Over 0.25 is "poor."

Why it matters. CLS is the most user-visible failure of the four. When a buyer is about to tap a CTA and the page jumps because a banner just loaded, they tap the wrong thing. They lose trust. They leave. Both Google and AI engines treat layout shift as a clear quality signal.

Common provider failure modes.

  • Images without width and height attributes. The browser doesn't know how much space to reserve, so the layout shifts when the image arrives.
  • Webfonts without font-display: swap and a sized fallback. When the webfont arrives, the rendered text resizes, shifting everything below it.
  • Third-party widgets that inject after page load. Live chat bubbles, cookie banners, retargeting popups, all classic CLS offenders.
  • Async ads or promoted-listing slots without reserved height. Common on category SRPs where promoted units render above the organic listings.

Quickest wins. Set width and height (or aspect-ratio) on every image. Reserve space for any embedded widget, even if it's empty until it loads. Use font-display: swap with a system fallback that's close in metrics to the webfont. Move late-injecting elements to fixed positions that don't push other content.
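
A minimal sketch of these patterns together; the class names and font file are illustrative, not from any real template:

```html
<!-- Hypothetical CLS-safe patterns. "DealerSans", .chat-slot, and
     .listing-photo are placeholder names. -->
<style>
  /* swap shows fallback text immediately; choose a system fallback
     whose metrics are close to the webfont so the swap barely moves
     the layout */
  @font-face {
    font-family: "DealerSans";
    src: url("/fonts/dealersans.woff2") format("woff2");
    font-display: swap;
  }

  /* Reserve the widget's slot even while it's empty */
  .chat-slot { min-height: 64px; }

  /* aspect-ratio reserves image space when the template can't set
     width/height attributes */
  .listing-photo { width: 100%; height: auto; aspect-ratio: 4 / 3; }
</style>

<img class="listing-photo" src="/img/unit-123.webp"
     width="800" height="600" alt="Unit photo">
<div class="chat-slot"><!-- live chat injects here after load --></div>
```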

Time to First Byte (TTFB) under 600 milliseconds

What it measures. The time from the browser's request to the first byte of the response. TTFB sits behind every other metric: if the server is slow to respond, LCP is slow by definition.

The threshold. Google's CrUX dataset treats TTFB under 800ms as "good," but 600ms is the safer working target. Under 200ms is excellent and increasingly common on properly cached, edge-served sites.

Why it matters. TTFB is the cleanest signal of how the site is hosted and rendered. A site rendered server-side from an edge CDN responds in under 200ms. A site rendered client-side from a single origin under load can be 1–2 seconds. The difference is invisible to a buyer on fiber and brutal to a buyer on the lake on a 1-bar LTE signal.

> Roughly seventy to eighty percent of vehicle research traffic on a powersports dealer site arrives on a phone.

Common provider failure modes.

  • Single-origin hosting, no CDN. Provider hosts everything from a single data center. Buyers far from that region pay the round trip on every request.
  • Client-side rendering of inventory. The first HTML response is a near-empty shell; the inventory data loads after a JavaScript round trip. TTFB looks fine, LCP is awful.
  • Database-bound rendering with no caching. Every page render hits the database for inventory and content. Under load, response times balloon.
  • No HTTP/2 or HTTP/3. Older provider stacks still on HTTP/1.1 with no connection multiplexing.

Quickest wins. Edge cache every public page. Server-side render for first paint, hydrate client-side after. Use a CDN with global PoPs. Upgrade to HTTP/2 or HTTP/3. Cache database query results that don't change per request.
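
To make the client-side-rendering failure mode concrete, here are two hypothetical first-HTML responses for the same VDP; the markup and unit are illustrative, not any provider's actual output:

```html
<!-- Client-rendered shell: TTFB can look fine, but the buyer stares
     at a blank page until app.js downloads, parses, and fetches
     inventory over another round trip. -->
<body>
  <div id="app"></div>
  <script src="/js/app.js"></script>
</body>

<!-- Server-rendered first paint: the unit is in the HTML itself, so
     rendering starts as soon as the first (edge-cached) byte lands,
     and hydration happens after. -->
<body>
  <main>
    <h1>2026 Example 450 Side-by-Side, $11,499</h1>
    <img src="/img/unit-123-800.webp" width="800" height="600"
         alt="2026 Example 450">
  </main>
  <script src="/js/app.js" defer></script>
</body>
```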

Get the four numbers in 30 seconds

If you don't know your site's LCP, INP, CLS, and TTFB right now, you don't know whether you're eligible to be cited, by Google or by AI engines. The Website Grader pulls all four metrics on a representative page in roughly 30 seconds, then runs an AI-search visibility audit and a traditional SEO health pass alongside them. Useful as a starting point before you spend hours in PageSpeed Insights and Search Console.

How to test: the actual workflow

Three tools, in this order:

1. PageSpeed Insights (pagespeed.web.dev). Free, runs Lighthouse on Google's infrastructure, gives you both lab data (a single test in a controlled environment) and field data (real-world Core Web Vitals from actual Chrome users on your site, when there's enough traffic for the CrUX dataset). Field data is what matters for ranking; lab data is what matters for diagnosis.

2. Chrome DevTools, Lighthouse panel and Performance panel. Reproduce specific issues. Throttle CPU to 4x and network to "Slow 4G." Run the page. Watch the Performance panel for what's blocking the main thread. The Lighthouse panel gives you the same scores PageSpeed Insights gives you, plus a list of opportunities.

3. Real User Monitoring (RUM), via your provider's tool or a third party. Field data on your actual traffic, segmented by page type, device, and region. If your provider can't show you RUM data, that's a tell.
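
If they can't, you can collect a basic version yourself with Google's open-source web-vitals library. A minimal sketch, assuming a placeholder /rum endpoint on your own domain:

```html
<!-- Field-measurement sketch using the web-vitals library.
     The /rum reporting endpoint is a placeholder. -->
<script type="module">
  import { onLCP, onINP, onCLS, onTTFB }
    from "https://unpkg.com/web-vitals@4?module";

  function report(metric) {
    // metric.rating is "good" | "needs-improvement" | "poor"
    navigator.sendBeacon(
      "/rum",  // placeholder endpoint
      JSON.stringify({
        name: metric.name,
        value: Math.round(metric.value),
        rating: metric.rating,
        page: location.pathname,
      })
    );
  }

  onLCP(report);
  onINP(report);
  onCLS(report);
  onTTFB(report);
</script>
```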

> A six-second page on mobile is effectively invisible to AI search.

The pages you test:

  • The homepage.
  • A representative VDP: pick the most-trafficked unit type, ideally one with a heavy gallery.
  • A representative SRP: the highest-traffic category SRP, with default filters applied.
  • A filtered SRP: apply two or three filters and re-test. INP usually breaks here.
  • A location page.

For each, capture LCP, INP, CLS, TTFB on mobile (the default in PageSpeed Insights). Test on desktop too, but the mobile numbers are the ones that decide ranking and citation eligibility.

What to ask your website provider

Three questions, in this order:

  1. What are your current Core Web Vitals on a representative sample of VDPs and SRPs, broken out by mobile and desktop, from your RUM data over the last 30 days? If they don't have RUM data, they don't know.
  2. What's your roadmap if any of those metrics are above the "good" threshold? Specific, dated, measurable. Not "we're working on it."
  3. What's your testing process before each release to make sure these metrics don't regress? Performance budgets in CI, automated Lighthouse runs on representative pages, alerts on regression.

Vagueness on any of these is the answer.

The 30-day Core Web Vitals fix sequence

If you want a concrete sequence to run with your provider:

Week 1. Run PageSpeed Insights on the five page types listed above. Capture mobile scores. Get RUM data from your provider for the last 30 days, segmented by page type. Document everything that's not "good."

Week 2. Hit the easy wins. Convert hero images to WebP/AVIF and cap weight. Add width/height to every image. Add preload for the hero image. Set font-display: swap. Defer non-critical JavaScript.

Week 3. Hit the structural wins. Server-side render the first paint if you're not already. Move filter logic to a Web Worker if INP is the issue. Edge-cache every public page.

Week 4. Re-test. Compare. Document. Set up Lighthouse CI or your provider's equivalent so regressions are caught before they ship.

A typical provider-template powersports site can move from "needs improvement" or "poor" on most metrics to "good" in 30 days if the provider takes it seriously. If they can't, that's the answer to whether you stay with them.

This guide is part of our [Powersports Website Playbook](/blog/powersports-dealership-website-playbook-seo-geo-ai-search-visibility): the full strategic frame, audit, 90-day plan, and provider questions for ranking and getting cited by AI search in 2026.

Frequently asked questions

What's the most important Core Web Vital for a powersports dealership website?

LCP (Largest Contentful Paint) is the single most important metric. It measures how fast the largest visible element renders, which on a powersports VDP is usually the hero image or model headline. The threshold to hit is under 2.5 seconds on mobile. Every second past 2.5 measurably reduces both Google ranking and AI-search citation eligibility.

How do I test my dealership site's Core Web Vitals?

Use Google's PageSpeed Insights at pagespeed.web.dev for a free first pass. It gives you both controlled-test data and real-world data from actual Chrome users on your site. For diagnosis, use Chrome DevTools' Lighthouse and Performance panels with CPU and network throttling enabled. For ongoing monitoring, your website provider should offer Real User Monitoring (RUM) data segmented by page type and device.

Why does mobile performance matter so much for powersports dealers?

Roughly 70–80% of vehicle research traffic on a powersports dealer site arrives on a phone, often a mid-range Android on a marginal cellular connection. Building for the desktop-fiber experience and assuming the phone will adapt produces sites that fail in real-world conditions. Both Google's ranking signals and AI search engines weight mobile performance more heavily for buyer-intent queries.

Can my current website provider fix these metrics, or do I need to migrate?

It depends on the underlying architecture. Easy wins (image optimization, preload directives, deferred JavaScript) can be done on most platforms. Structural wins (server-side rendering, edge caching, Web Worker filter logic) often require platform changes the provider may not support. If the provider can't commit to specific, dated improvements with measurable thresholds, the underlying architecture is usually the constraint and a provider migration is the more direct fix.

Why do Core Web Vitals affect AI-search citations?

AI search engines like ChatGPT, Perplexity, and Claude pull sources at query time rather than from a pre-built index. A slow page is a page the engine skips in favor of a faster source, even one with thinner content. The thresholds aren't just about ranking; they're about whether your site is fast enough to be retrieved before the AI finishes generating its answer. A six-second page on mobile is effectively invisible to AI search.