Hi All!
I am using Workers to try to deliver resources and answer requests as quickly as possible to our users
everywhere in the world.
As such, I have been investigating some latencies we are seeing from our Workers.
After all, the whole point of using Workers was to be closer to the end user and hence to be able to reply
as fast as possible.
Now take the Hello World worker below:
addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

/**
 * Respond to the request
 *
 * @param {Request} request
 */
async function handleRequest(request) {
  return new Response('hello world', {status: 200})
}
If I benchmark it by calling it 1000 times from place X in the world, I get a median of 20ms to call it
and get back an answer.
Terrific, fantastic performance.
Now here is the catch, and the reason for this post.
If, instead of benchmarking it, I call it once every X seconds, I get a median of 300ms to call it
and get back an answer. Of those 300ms, 142ms are spent in connect time.
As our users are everywhere, the chances that they hit the same worker are slim, which
explains why we see the 300ms latency far more often than the 20ms one.
Last but not least, I am currently trying the Enterprise plan and still see the same outcome:
One request every 10 or more seconds -> 300ms
1000 requests back to back -> median of 20ms
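For reference, the numbers above can be gathered with something like the Node sketch below (it is only a sketch, not my exact harness; WORKER_URL is a placeholder, and Node 18+ is assumed). It uses the https module rather than fetch so the TLS connect time of each request is visible, and agent: false so every request opens a fresh connection:

// Minimal measurement sketch, assuming Node 18+.
// WORKER_URL is a placeholder; adjust COUNT and DELAY_MS to switch between
// the burst scenario (DELAY_MS = 0) and the one-request-every-X-seconds scenario.
const https = require('https');
const { performance } = require('perf_hooks');

const WORKER_URL = 'https://example.workers.dev/';
const COUNT = 20;        // number of requests to send
const DELAY_MS = 10000;  // 0 for a burst, 10000 for one request every 10 s

// Issue one request and record TLS connect time and total time.
function timedRequest(url) {
  return new Promise((resolve, reject) => {
    const start = performance.now();
    let connect = 0;
    // agent: false forces a fresh connection, so the handshake is measured every time.
    const req = https.get(url, { agent: false }, (res) => {
      res.resume();
      res.on('end', () => {
        resolve({ connect, total: performance.now() - start });
      });
    });
    req.on('socket', (socket) => {
      // 'secureConnect' fires once the TLS handshake has completed.
      socket.on('secureConnect', () => {
        connect = performance.now() - start;
      });
    });
    req.on('error', reject);
  });
}

function median(values) {
  const sorted = [...values].sort((a, b) => a - b);
  return sorted[Math.floor(sorted.length / 2)];
}

(async () => {
  const results = [];
  for (let i = 0; i < COUNT; i++) {
    results.push(await timedRequest(WORKER_URL));
    if (DELAY_MS > 0) await new Promise((r) => setTimeout(r, DELAY_MS));
  }
  console.log('median connect (ms):', median(results.map((r) => r.connect)).toFixed(1));
  console.log('median total   (ms):', median(results.map((r) => r.total)).toFixed(1));
})();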
So, dear Worker experts: is this the expected behaviour, and is there some way to get a consistent 20ms,
which I can see Cloudflare is able to deliver?
I am guessing the 20ms is the result of everything being warm from bombarding the worker, but at the same time,
when reading about Cloudflare Workers I had the feeling that those 20ms should be reachable thanks to
being at the edge and Cloudflare's use of V8 isolates to bootstrap a worker quickly.
It would be great if one of you experts could explain the expected flow and performance, and whether there is anything one can do.
Thanks to all in advance.