Worker response is slow for first hit

The docs say the paid plan offers “Always lowest latency”, but I’m on the bundled (paid) plan and I’m seeing increased latency on the first hit to a worker instance. Can anybody clarify this?

What is your Worker doing?

It could be that any setup done in the global scope delays startup.
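To illustrate, here is a sketch of the kind of thing that matters (all names here are hypothetical): everything outside the handler runs when the isolate starts, so it is paid for by the first request.

```javascript
// Hypothetical example: work in the global scope runs once at isolate
// startup, so its cost lands on the first request that instance serves.
const lookupTable = buildLookupTable(); // runs at isolate startup

function buildLookupTable() {
  // Stand-in for expensive setup (parsing config, building indexes, ...)
  const table = new Map();
  for (let i = 0; i < 1000; i++) table.set(i, i * i);
  return table;
}

// In a real Worker this object would be the module's default export.
const worker = {
  async fetch(request) {
    // Per-request work only; the table already exists by the time
    // any request arrives.
    return new Response(String(lookupTable.get(42)));
  },
};
```

Moving the `buildLookupTable()` call inside `fetch` (or doing it lazily on first use) shifts that cost out of startup.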

As stated here, “Always lowest latency” is a bit misleading and doesn’t mean your worker is always loaded at every edge location. Once it’s evicted from an edge location, the first request will incur a small cold start, but a shorter one than without the plan.

There is no global setup being done in the worker, only responding to the fetch event.

I’ve just been testing it, and it seems that if it’s getting kicked, it’s happening after just two or three minutes of inactivity. I suppose this means that I’ll have to ping it regularly in order to ensure that my users are always seeing the best performance.

That would only help if your ping server is in the exact location your users hit. But your users won’t even notice the ~200ms cold start. And a worker can already get evicted after 30 seconds of inactivity; it depends on various factors.

The problem is that 200ms is the absolute minimum I’ve seen. Usually it averages around 450ms. This isn’t an app that is going to be hit by millions of people, it’s highly specialised. The reality, therefore, is that every time a user hits the page, they will see this cold start.

I’m honestly finding it a bit disingenuous that Workers are touted as having zero cold start due to them being isolates, whereas the reality is that not only is there a cold start, it’s actually longer than those of some of their main competitors spinning up node instances.

If I have to deal with a 450ms cold start, then I have to reconsider where I host this app, because “no cold start” was in fact the primary reason I chose Cloudflare.

Hey,

That sounds like a way longer time than expected even for a cold start.
Can you share the domain?

Just wondering if you’ve taken the TCP setup and TLS handshake into account?

I created a “Hello World” worker in my own (free, with workers paid) account out of curiosity (I don’t work on anything related to workers) and I’m definitely not seeing anywhere near the latency that you are. Are you doing any subrequests?

Yes, there are sub-requests, but I don’t see why that would only affect the first response.

Well… if those sub-requests take time… then that’s time spent waiting for a response.

That would affect every response, not just the first.

Not necessarily. Depends on caching and stuff.

Again, can you share your domain? Guessing games aren’t fun

An easy way to confirm whether it’s related: test without each sub-request and measure the cold-start/first-request response time 🙂
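One way to do that measurement without deleting code is to wrap each stage in a small timer. A sketch, with hypothetical names:

```javascript
// Hypothetical helper: logs how long each awaited stage takes, so the
// first-request log shows which sub-request carries the cold-start cost.
async function timed(label, fn) {
  const start = Date.now();
  const result = await fn();
  console.log(`${label}: ${Date.now() - start}ms`);
  return result;
}

// Usage inside a fetch handler (kvStore and backend are placeholders):
//   const settings = await timed('kv read', () => kvStore.get('settings'));
//   const data = await timed('service call', () => backend.fetch(request));
```

Comparing the logged timings for the first hit against a warm hit should show which stage is responsible.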

It would not. For example, the DNS lookup required for the subrequest would be cached after the first hit.

The sub-requests are just to other services, and as I understand it, those don’t hit the network.

The domain is runestone.io. There are sub-requests, but they are all to services, so they don’t hit the network. Each top-level request hits the KV store once, so perhaps it could also be something to do with KV caching.

KV calls do hit the network, as KV data isn’t replicated to every Cloudflare location.
KV reads are then cached internally, which would explain why your first request takes longer.

Indeed, if you’re using service bindings, those go directly to the worker and don’t hit the network. If the data isn’t modified frequently, you can reduce the impact of KV cold reads — see KV · Cloudflare Workers docs
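A minimal sketch of that, assuming a KV namespace bound as `CONFIG` (the binding name and key are made up for illustration): the `cacheTtl` option keeps the value cached at the serving location, so only the first read in a while pays the cross-network cost.

```javascript
// Sketch of a cached KV read. CONFIG and 'settings' are hypothetical;
// cacheTtl caches the value at the serving location so repeat reads
// within the TTL skip the network round trip.
async function readSettings(env) {
  // Minimum cacheTtl is 60 seconds; a longer TTL means fewer cold
  // reads but slower propagation of updated values.
  return env.CONFIG.get('settings', { cacheTtl: 3600 });
}
```

This trade-off only makes sense for data that changes rarely, as the reply above notes.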
