I’m thinking about using Workers Sites to deploy a static site to the edge. If I do so, will that site benefit from Argo? It looks like both tiered caching and smart routing to the origin won’t really apply, since the site will always be served from the edge, i.e. the PoP closest to the visitor.
EDIT - Qn 2) If my understanding of Workers Sites is correct, it would be equivalent to having a 100% cache hit rate, served from the PoP closest to the visitor?
EDIT 2: Qn 3) (post title edited) Does the edge cache still provide a performance improvement for a Workers site?
Thanks. About the caching, I’m confused by the information in the following link. It mentions that Workers Sites will also store static assets in the cache. Is this only to minimize the reads/lookups from KV for cost reasons, or does the edge cache still improve performance even for a Workers site?
I am not really too familiar with how Sites works; however, from the article I understand that content (served from KV instead of the origin) is still subject to the cache as well.
The question is what you consider an improvement. With KV and the cache on the same edge, you won’t experience any better network performance. If anything, I’d assume the file cache will return data faster than anything database-related.
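If it helps to picture how that plays out in a Worker, here’s a minimal sketch of the “edge cache in front of KV” pattern the article seems to describe. This is not the actual Workers Sites implementation, and the STATIC_ASSETS binding name is made up: try the cache first, fall back to a KV read, then write the result back to the cache, all without ever touching an origin.

```ts
// Minimal sketch, assuming a KV namespace bound as STATIC_ASSETS (hypothetical name).
// Not the real Workers Sites code -- just the cache-first / KV-fallback idea.
export default {
  async fetch(request: Request, env: { STATIC_ASSETS: KVNamespace }, ctx: ExecutionContext): Promise<Response> {
    const cache = caches.default;

    // 1. Try the edge cache first: the fastest path, and no KV read is billed.
    const cached = await cache.match(request);
    if (cached) return cached;

    // 2. Fall back to KV, which also lives inside Cloudflare's network
    //    (no round trip to an origin server).
    const key = new URL(request.url).pathname.slice(1) || "index.html";
    const body = await env.STATIC_ASSETS.get(key, "arrayBuffer");
    if (body === null) return new Response("Not found", { status: 404 });

    const response = new Response(body, {
      headers: { "Cache-Control": "public, max-age=3600" }, // example TTL only
    });

    // 3. Populate the cache asynchronously so later requests hit step 1.
    ctx.waitUntil(cache.put(request, response.clone()));
    return response;
  },
};
```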
I guess I was wondering what measurable difference, if any, Argo or edge caching would make on a Workers site. More out of curiosity than anything else.
If I understand Argo correctly, it doesn’t affect the connection between the PoP and the end user, only between the PoP and the origin. With Workers, you’re eliminating the latter connection, so Argo shouldn’t have any effect–it’s not coming into play.
@cs-cf - Thanks for the update. A couple of questions, please.
I am trying to decide whether the performance difference is worth the effort of setting up a Workers site for a low-traffic site - maybe 1,000 visitors a month (EDIT: split across ~100+ variations of my landing page). So regular caching (non-Workers site) will probably not work well, because I expect the assets would expire from the cache often. All my visitors would be from the US, and the origin server is in northern California. On Google Analytics, I see most page views load within 2-3 seconds, but for about 10% of them it goes very high, sometimes exceeding 7-10 seconds. I’m trying to reduce that.
If I’m using a Workers site, would edge cache expiry/deletion work exactly the same way, or would the assets stay in the edge cache longer than they would for a non-Workers site?
For a Workers site, if the object is retrieved from KV instead of the cache, it would still be faster than having to retrieve it from the origin, right? How much of a difference can I expect?
I understand it’s impossible to accurately predict the performance improvement without running a test site, but I was hoping you could give me an indication.
We don’t treat objects retrieved from KV any differently than any other object we might cache. So assuming the same cache attributes are applied, they would be stored for the same length of time.
It’s almost certainly faster. How much faster? How long is a piece of string? Too many variables to provide a meaningful estimate.
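For what it’s worth, on a Workers Site those “cache attributes” are, as far as I can tell, controlled by the cacheControl options you pass to getAssetFromKV from @cloudflare/kv-asset-handler. Something roughly like this (the TTL values are just examples, and the exact option names/defaults should be checked against the package README):

```ts
// Rough sketch: tuning how long Sites assets stay in the edge and browser caches.
// Values are illustrative; verify the option names against the kv-asset-handler docs.
import { getAssetFromKV } from "@cloudflare/kv-asset-handler";

addEventListener("fetch", (event: FetchEvent) => {
  event.respondWith(handle(event));
});

async function handle(event: FetchEvent): Promise<Response> {
  try {
    return await getAssetFromKV(event, {
      cacheControl: {
        edgeTTL: 60 * 60 * 24 * 7, // keep assets in the edge cache for up to ~7 days
        browserTTL: 60 * 60,       // let browsers cache for 1 hour
        bypassCache: false,        // set to true to always read straight from KV
      },
    });
  } catch {
    return new Response("Not found", { status: 404 });
  }
}
```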
Using a tool like WebPageTest, does it break down the source/cause of that delay? I wouldn’t expect a request to KV to ever take 7-10 seconds, so it’d (with 99.99% certainty) be faster, assuming the cause of that delay is an asset you control and not a third-party resource (something that slow is often a third-party resource).
Got it, thanks! Though it should be possible to roughly estimate the difference in server response time: cache vs KV on the same PoP, closest cache vs origin, and KV vs origin. Anyway, it’s not a “need to know”, just idle curiosity.
The tests on WPT, GTmetrix, etc. are all much faster, 2 seconds or less on 3G mobile from multiple US locations. I think the outliers showing in Google Analytics might be either a measurement error or because those users are in a very slow network area. Interestingly, though, these outliers appear on both desktop and mobile.
Last question: if the cache is much faster than KV, then for a low-traffic Workers site, Argo’s tiered caching should provide some benefit, right?
That will be hard to estimate, simply because there are too many factors to take into account.
Roughly, I’d say “Cache > KV >>>> Origin”. The cache will likely be faster than KV, but I assume this difference will dissolve into network latency. Both will certainly be (somewhat) faster than requests to the origin, simply because there is no additional network activity involved.
Again, Argo should not play any role, as you don’t have the origin involved.
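If you want to put a rough number on the cache-vs-KV part yourself, one option is to expose the observed KV read time in a Server-Timing header and compare it against a cache hit in the browser devtools or WebPageTest. A hedged sketch (STATIC_ASSETS is again a hypothetical binding, not the Sites internals):

```ts
// Hedged sketch: expose the measured KV read time via a Server-Timing header so
// a cache hit and a KV read can be compared in devtools / WebPageTest.
// Note: in Workers the clock only advances across I/O, which is exactly the part
// being measured here.
export default {
  async fetch(request: Request, env: { STATIC_ASSETS: KVNamespace }): Promise<Response> {
    const key = new URL(request.url).pathname.slice(1) || "index.html";

    const start = Date.now();
    const body = await env.STATIC_ASSETS.get(key, "arrayBuffer");
    const kvMillis = Date.now() - start;

    if (body === null) return new Response("Not found", { status: 404 });

    return new Response(body, {
      headers: {
        "Cache-Control": "public, max-age=86400",
        // Shows up under "Server Timing" in the browser's Network panel.
        "Server-Timing": `kv;dur=${kvMillis}`,
      },
    });
  },
};
```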
Well, if cache vs KV makes a difference then tiered caching (Argo) would be a factor, too. Though I agree with you the difference would probably be very small, maybe even insignificant.
Tiered caching only makes sense when a PoP can fetch data more quickly from a fellow PoP than from the origin. With KV that should never be the case, as the KV data should always be local.
So only one PoP per region is the master for KV, and all the others purge on request? In that case tiered caching might offer an advantage, assuming the tiered cache is closer than the master KV.