Worker Time To First Byte (TTFB) - is 30-40ms normal?

To evaluate Worker performance, I’ve written a very simple Worker that just waits for a request and responds with “hello world” and some headers.
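For reference, a minimal sketch of that kind of Worker (assuming the service-worker syntax; the custom header name is purely illustrative) looks roughly like this:

```javascript
// Minimal "hello world" Worker sketch. The handler does almost no work,
// so its own execution time is single-digit milliseconds at most.
function handleRequest(request) {
  return new Response("hello world", {
    headers: {
      "content-type": "text/plain",
      // Illustrative custom header; any headers work here.
      "x-demo": "hello-world-worker",
    },
  });
}

// In a deployed Worker this line wires the handler to incoming requests:
// addEventListener("fetch", (event) => event.respondWith(handleRequest(event.request)));
```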

After hitting refresh a few times to warm up the worker, I can see that Time To First Byte (TTFB) is around 30 - 40 milliseconds.

Now these are very impressive response times compared to going back to the origin, but it still seems quite slow for a function that takes single-digit milliseconds to execute.

Is a TTFB of 30 - 40 milliseconds typical for a simple “hello world” type setup? I’m on the free plan, which perhaps makes a difference? Or is it possible to tune this down even further?

I appreciate 30 - 40ms for TTFB is exceptionally good compared to other server options, but I’m curious to know just how far we can push the responsiveness of our web applications on Cloudflare Workers…

Well, the function takes a fraction of a second to run, but you cannot ignore network latency. So yes, that is a pretty decent value.

What’s the URL the worker is mapped to?


After all, it is still a network request that gets composed by your browser, sent through a network stack, routed through public networks, and handled on Cloudflare’s side, where the response makes it back through these very same layers. That’s a lot more than calling a function from within an executable, which takes single-digit milliseconds.

True, I hadn’t considered the network round trip latency. Here is the latency from my development PC to the worker URL taken from 25 ICMP pings:

min: 10.713ms
avg: 13.267ms
max: 18.309ms

Add all the other layers to it and you are at 30. That’s a good number.

My record is 17ms, but I’m relatively close to the LAX datacenter. Generally, though, it’s in the 25ms+ range.

To add, you can get a better look at performance by opening a new incognito tab, opening the network tab in developer tools, then looking at the timings for all of the other steps (which were cut off since you already had a keep-alive or h2 connection open).

You can also view direct TTFB to a nearby Cloudflare server by going to /cdn-cgi/trace and looking at the timings for that; comparing that to your worker TTFB should give you a good idea of how much the worker is doing (± a millisecond with your bits traversing inside a CF DC). Also, if your worker does some fetch() calls, you should expect to see longer TTFBs, as it’s waiting on those to complete.
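As a rough sketch, you can also measure time-to-headers from a script instead of devtools (assumptions: Node 18+ or a browser with global fetch and performance; the workers.dev URL below is illustrative). fetch resolves once the response headers arrive, before the body is read, so this approximates TTFB but still includes DNS/TCP/TLS setup on a cold connection:

```javascript
// Rough TTFB proxy: time from issuing the request until the response
// headers arrive. The fetch implementation is injectable for testing.
async function timeToHeaders(url, fetchImpl = fetch) {
  const start = performance.now();
  const res = await fetchImpl(url);
  return { status: res.status, ms: performance.now() - start };
}

// e.g. compare (hypothetical URLs):
//   await timeToHeaders("https://your-worker.example.workers.dev/");
//   await timeToHeaders("https://your-worker.example.workers.dev/cdn-cgi/trace");
```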

TTFB will always depend on geographical distance due to networking and speed-of-light limitations. So <40ms is typical, but it can vary with the network and geographical factors between your test client and the CF edge server.

I typically see around 20-50ms TTFB from my CF worker cached responses.

Here’s my WordPress blog’s index page served from the CF cache via a CF Worker, from my Brisbane, Australia location to my VPS server on the US West Coast (San Jose) behind Cloudflare.

But Cloudflare doesn’t cache dynamic HTML by default unless you tell it to via “cache everything” page rules or the equivalent in CF Workers. So if you’re testing a dynamic HTML asset, it is not cached at the CF edge and the request goes back to your origin itself.
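In a Worker, that “cache everything” equivalent is roughly a cache-then-origin pattern. A sketch, assuming the Workers Cache API: in a real Worker you would pass caches.default as the cache, but here it is a parameter so the logic is testable, and real code would usually also set Cache-Control on the stored copy:

```javascript
// Sketch of caching dynamic HTML at the edge from a Worker.
// cache: an object with match()/put() (e.g. caches.default in Workers).
// originFetch: the function called on a cache miss (fetch by default).
async function cachedFetch(request, cache, originFetch = fetch) {
  const hit = await cache.match(request);
  if (hit) return hit; // served from the edge: fast TTFB
  const res = await originFetch(request); // miss: full origin round trip
  await cache.put(request, res.clone()); // store a copy for the next visitor
  return res;
}
```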

Also, if testing from a local PC’s browser devtools, consider your local network and your PC’s workload too. I posted an example of how Google Core Web Vitals metrics for LCP can drastically differ if a person is watching a video on the same PC at the time of page-speed measurement at Performance Tutorials - Google PageSpeed & - render time for an H1 tag could blow out from 2 seconds to 19 seconds in such cases!