I’ve been using Cache Everything for the last couple of months and yes, I can certainly see an improvement. Particularly with TTFB, which is down from nearly a second to 40-50 ms.
My Core Web Vitals field and origin data have improved too, up from "poor" to "needs improvement". So that's a win as well.
But I have noticed one odd result. When I bypass the cache using Development Mode, my Core Web Vitals are much better.
For instance, Largest Contentful Paint is 2.4 sec with the cache but drops to 1.9 sec with the dynamic (uncached) responses in Dev Mode.
First Contentful Paint drops from 1.7 to 1.4 sec, and Speed Index from 2.1 down to 1.9.
Cumulative Layout Shift and Time To Interactive are about the same.
I have to say that, from the tests I have done, my site is definitely much faster now. So I'm pleased.
But I am intrigued as to why there is such a difference in the Core Web Vitals results between Cache Everything and the dynamic cache.
If anyone has some knowledge about this, I would really like to understand what produces the difference in the results.
If you can share your domain, we can look into this and check a few things.
A full list of your Cloudflare settings/configuration would also be helpful, but only share what you're comfortable sharing.
Depending on the environment you're testing from, can you confirm that you've disabled the browser cache in your lab tests? In Chrome DevTools, for example, the Lighthouse tab has an option to clear storage and disable the cache between runs, and testing in an Incognito window ensures no extensions are running at test time. These details matter for accurate reporting.

As I'm sure you're aware, it's RUM (Real User Monitoring), represented by the field data, that ultimately counts. Lab tests will always show a lot of variance, and different labs can be calibrated to different settings. But if you're comparing apples to apples, achieving better scores without Cache Everything would be concerning.
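A related sanity check on the Cloudflare side is to read the `cf-cache-status` response header (visible via `curl -I` or the DevTools Network panel), which tells you which variant each test run actually measured. As a rough sketch, assuming the documented header values, a small helper could map them to what happened:

```javascript
// Interpret Cloudflare's cf-cache-status response header. The values below
// (HIT, MISS, DYNAMIC, BYPASS) are the documented ones; this mapping is just
// an illustrative helper, not part of any Cloudflare API.
function describeCacheStatus(value) {
  switch ((value || '').toUpperCase()) {
    case 'HIT':
      return 'served from the Cloudflare edge cache';
    case 'MISS':
      return 'not in the edge cache; fetched from the origin';
    case 'DYNAMIC':
      return 'not cached (response not eligible for caching)';
    case 'BYPASS':
      return 'cache deliberately bypassed (e.g. Development Mode or a bypass rule)';
    default:
      return 'no Cloudflare cache status (request may not have gone through Cloudflare)';
  }
}
```

Comparing the header between a Cache Everything run (`HIT`) and a Development Mode run (`BYPASS` or `DYNAMIC`) confirms you really are measuring the two different paths.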
Thanks so much for getting back to me.
You’re right. In the end, it’s only field data that counts. At present, I’m doing okay there, but just missing LCP by a whisker. Google Search Console is showing 74% for mobile. So that’s why I’m hunting for the last drop of performance.
But the only guides to performance are lab tests. I use Google PageSpeed Insights, GTmetrix and WebPageTest. All of them give me similar results: all green, with scores of 96-98% for all pages.
When I test with DevTools, the disable-cache option produces the expected results: always better with the cache enabled.
The only measure I use that comes close to field data is Cloudflare Browser Insights. The small difference is that it records hits from all browsers (including my own uncached hits), so it's not quite the same as the CrUX data that Google uses. But it's the closest.
But the reason I posted my question was that I accidentally noticed a big difference in PageSpeed Insights scores when my site was in Development Mode on Cloudflare: LCP dropped from 2.4 to 1.9 sec. As this is the metric I'm trying to improve, it got me curious.
So I suppose my question should be: what changes when I'm in Development Mode that could affect these results?
Potentially, a lot can be different. The caching policy (and other policies) at your origin can be set differently, for example. If you're using Workers to create a new Response object, you can also optimize the headers (in case you aren't already doing that). One of the things I intend to do in this regard is to graduate preload from a link element in the HTML to a Link directive in the page's response headers.

If Cloudflare isn't making things faster, and is actually making them worse, that is puzzling. One test you could do is bypass Cloudflare for a lengthy period, long enough to see which direction your scores take in Browser Insights, and ultimately what happens to your CrUX metrics.

The big wild card for variance is your audience. If your site is busy enough to have field data, great. But what does that audience consist of? They could have high-end devices, giving the impression your site is fast, or they could be connecting from regions with slow connectivity, ruining what would otherwise be great scores. The way to evaluate this is to use the device-simulation and throttling options in the lab.

Hopefully this helps you figure it out. My own fear sometimes is that my Workers are actually lowering scores instead of improving them, so I spend an inordinate amount of time re-evaluating and double-checking metrics to make sure I'm helping and not harming.
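To make the Workers idea above concrete, here is a minimal sketch of a Worker that wraps the origin response in a new Response and promotes a preload from a link element to an HTTP Link header. The asset path (`/fonts/title.woff2`) is a hypothetical placeholder, not something from this thread:

```javascript
// Sketch: rewrite an origin response's headers in a Cloudflare Worker,
// adding a Link preload header so the browser can start fetching the asset
// before it parses the HTML. The font path below is a made-up example.
function withPreloadHeader(response) {
  const headers = new Headers(response.headers); // copy, since originals are immutable
  headers.append(
    'Link',
    '</fonts/title.woff2>; rel=preload; as=font; crossorigin'
  );
  return new Response(response.body, {
    status: response.status,
    statusText: response.statusText,
    headers,
  });
}

// In a Worker this would be wired up roughly like:
// export default {
//   async fetch(request) {
//     const originResponse = await fetch(request);
//     return withPreloadHeader(originResponse);
//   },
// };
```

Copying the headers into a fresh `Headers` object is the usual pattern because the headers on a fetched response are immutable until you construct a new Response.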
Thanks again for taking the time to respond.
I’m using WP Cloudflare Super Page Cache, but not using the workers option.
On my staging site, I’m not connected to Cloudflare. So between the two, I can sort of see what’s going on.
After doing a few more tests, it's clear that my live site is always faster in almost all respects. It's only LCP that is different. My staging site always has a lower LCP time, but of course its TTFB and server speed are much slower, so it fails the CWV lab test.
On most pages, the LCP element is the page title, so it's nothing to do with large images. What could make a line of text affect LCP? I have no idea. I suppose that's what I'm chasing.
As for performance in Browser Insights, the poor performer is Windows Chrome browser. Other browsers and operating systems are pretty much fine.
So I suppose the best avenue for me to take is to maintain my current setup, but keep looking for any small improvements I can make.
I don’t know if you can help with this.
But I noticed today that Browser Insights is redirecting to Web Analytics, which is now a Pro feature.
Hi, as mentioned in another thread, there was a bug where the toggle was missing. I'll keep you posted once the toggle to enable/disable the Beacon JS injection is available.
This topic was automatically closed 15 days after the last reply. New replies are no longer allowed.