"Always lowest latency"

Hi, would someone from CF please clarify what exactly is meant by "Always lowest latency" as mentioned here https://workers.cloudflare.com/ for the Bundled plan. I've been testing the free plan out extensively over the last few days. The problem that I see is a larger delay on the 1st call, between 900 ms and 1.5 s, which then drops to anything between 90 and 500 ms. But even on subsequent calls this sometimes goes up to 1.2 s!

Would this problem be mitigated by switching over to the paid (Bundled) plan? I'm happy to do that, BTW, but would like to get a confirmation first.

As a side note, I would like to add that the overall worker response time is very high, about the same as going to my origin server (India to the US), and in many cases the origin server responds in <500 ms. The worker is simply returning a value from KV.

Are you in India? And is your worker running on a domain route?

If I'm remembering correctly, the free plan used to run on under-utilized servers and fewer PoPs than the paid plan. Someone from Cloudflare might be able to confirm that.

> Are you in India? And is your worker running on a domain route?

Yes to both

> If I'm remembering correctly, the free plan used to run on under-utilized servers and fewer PoPs than the paid plan. Someone from Cloudflare might be able to confirm that.

Yes, it would be great if someone from CF clarifies. But that's not what the documentation leads me to believe. See further down on that page (https://workers.cloudflare.com/) for the difference between Free and Bundled.

I've done a lot of distributed load testing on Workers, and on KV especially. A spike on the first request is normal (usually ~500 ms), even on the paid plan. However, if constant traffic is kept up, say a couple of requests a minute, then the KV entries stay warm and you'll consistently see sub-100 ms requests. (No, you can't set a trigger for this, since KV entries are cached on the local PoP; if you're not updating the KV often, then you can use the CF Cache instead.)

Timing depends on location, ISPs, DNS resolution, and even SSL handshakes, all of which are typical with serverless. Until we see tests from production cases that we can compare against, it's really going to be a guessing game.


I can live with 500 ms; I'm seeing spikes up to 1.8 s. And if this also happens on the paid plan then, much as I hate to say it, isn't CF lying about "0 ms cold starts", "always lowest latency", etc.?

I didn't follow this. The whole point of using KV is that it's not updated often; that's what CF touts as the main use case.

The reason I'm not using the CF cache is the minimum browser TTL of 30 minutes. I see no way to get around that except on the Enterprise plan, and that's a deal breaker for my requirements. I would love to be able to set browser TTL = 0; the edge cache TTL could be 30 days or some such, I don't care, as long as the cache can be invalidated when required (which of course CF allows).

Thank you Thomas for your comments and feedback!

You still can do that. The limit is on the edge cache TTL, not the browser cache. You can set a header for it from your server (or from a Worker) and configure Cloudflare to respect the origin's headers. Then you can set the Edge Cache TTL to 1 month.
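Roughly, that header could be emitted from a Worker like the sketch below. The helper name and the TTL values are illustrative, not a Cloudflare API; the assumption is that `max-age` governs the browser while `s-maxage` governs shared/edge caches, which Cloudflare honours when "Respect Existing Headers" is enabled.

```javascript
// Build a Cache-Control value that lets the edge cache the response
// while telling browsers to revalidate on every request.
// max-age  -> browser TTL (0 = always revalidate)
// s-maxage -> shared/edge cache TTL in seconds
function edgeOnlyCacheControl(edgeSeconds) {
  return `public, max-age=0, s-maxage=${edgeSeconds}`;
}

// Inside a Worker, the header would be attached roughly like this
// (fetch/Response are runtime globals; new Response(body, upstream)
// is the usual trick to get mutable headers):
async function handleRequest(request) {
  const upstream = await fetch(request);
  const response = new Response(upstream.body, upstream);
  response.headers.set("Cache-Control", edgeOnlyCacheControl(30 * 24 * 3600));
  return response;
}
```

The same `Cache-Control` value could just as well be set at the origin server instead of in a Worker.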

KV values are already cached when read frequently, but putting them in the proper cache ensures that keys are not re-fetched until the cache is cleared, either manually or automatically.

Routing a request through ISPs is not predictable, even in the best of situations; spikes and latency can occur for more reasons than the endpoint being slow. I'd suggest running proper distributed load testing from your location so you can get better real-world data.


I don't think so :) Please see https://support.cloudflare.com/hc/en-us/articles/200168276:

  1. The Cloudflare UI and API both prohibit setting Browser Cache TTL to 0 for non-Enterprise domains.
  2. Cloudflare overrides any Cache-Control or Expires headers with values set via the Browser Cache TTL option in the Cloudflare Caching app if the value of the Cache-Control header from the origin web server is less than the Browser Cache TTL Cloudflare setting, or …

I hope I’ve not misinterpreted the above.

True, and I confirmed that myself on my Free and Enterprise plans.

On the Free plan there is the option "Respect Existing Headers", which does exactly what I said. You can add a Cache-Control header and set it to whatever you like. This needs to be done from the server or from the Worker.

Ok, I'm going to try what you said and report back. I guess this is the key line: "Unless specifically set in a Page Rule, Cloudflare does not override or insert Cache-Control headers if you set Browser Cache TTL to Respect Existing Headers".


Ok, I hear you

Can you please point me to documentation on this? From all the reading I've done earlier, KV values live in a central store and then get cached at the edge, just like Workers. But I understood this to be fully automated, under the hood: there is no control over it, and infrequently read keys will drop off the edge.

You're right that KV values are cached locally, but the first read always has more latency than subsequent requests, and this is per data center. You can get around that by using the Cache API, or by caching the value in a "global" inside the Worker; that way you have more control over the KV store and how long values should be cached.

(Globals are simply variables you declare outside the JS functions in the Worker, i.e. at the top of the script. The value persists until the Worker is restarted. Just beware of storing large chunks of data, because then the Worker will run out of RAM and constantly restart.)
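A minimal sketch of that global-variable pattern; `cachedGet`, the `Map`, and the TTL are made-up names/values, and the KV binding is assumed to expose an async `get(key)`:

```javascript
// Module-scope ("global") cache: survives across requests handled by
// the same isolate, resets whenever the Worker is evicted or redeployed.
const memo = new Map(); // key -> { value, expires }

// Read a key through the in-isolate cache; only fall back to KV when
// the cached copy is missing or older than ttlMs.
async function cachedGet(kv, key, ttlMs) {
  const hit = memo.get(key);
  if (hit && hit.expires > Date.now()) return hit.value;
  const value = await kv.get(key);
  memo.set(key, { value, expires: Date.now() + ttlMs });
  return value;
}
```

Inside a fetch handler you'd call it as `await cachedGet(MY_KV, "some-key", 60_000)` (binding name illustrative), so repeated requests to a warm isolate never touch KV at all.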


Ok, step 1, just to confirm this works: I have set the Caching app to Browser Cache TTL = Respect Existing Headers, and then I have a page rule with Cache Everything + Edge Cache TTL = 1 month.

With this I can see that the response does NOT have a Cache-Control header, I get a HIT on the CF cache, and a ~200 ms response time, so this is good.

Now I need to check with a Worker talking to the cache, like @thomas4 mentioned, and then maybe I can get a wee bit more predictability.
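For reference, a rough sketch of what that Worker-to-cache step might look like. The `cache` and `kv` objects are injected as parameters so the helper can be exercised with stubs; in a real Worker you'd pass `caches.default` and the KV binding. The helper name, key, and 30-day `s-maxage` are illustrative, not anything Cloudflare prescribes.

```javascript
// Serve a KV value through the Cache API so repeated requests hitting
// the same PoP are answered from cache instead of re-reading KV.
async function serveFromCache(cache, kv, request, key) {
  let response = await cache.match(request);
  if (!response) {
    const value = await kv.get(key);
    response = new Response(value, {
      headers: { "Cache-Control": "public, max-age=0, s-maxage=2592000" },
    });
    // put() consumes the body it is given, so store a clone and
    // return the original to the client.
    await cache.put(request, response.clone());
  }
  return response;
}
```

Cache invalidation would then go through the usual purge mechanisms rather than waiting for the KV entry to propagate.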