Cache in front of Worker

Currently, no matter what, a Cloudflare Worker runs in front of CF’s cache. This makes Workers unappealing for serving static content (obtained via some complicated scheme only Workers can do), since you are billed for every request. In that situation it makes much more sense to build on another serverless platform like Lambda or Cloud Functions: it can serve the static content while CF caches the responses, at a fraction of what it would cost with Workers, and that cost difference grows with the number of requests you get.

This feature request is a request to change this. It would be amazing if Workers could be toggled to run behind the cache to save on billing costs.

+1, although a workaround for the cost issue would be to exclude cached Worker requests from Worker billing and only charge for non-cached Worker requests :slight_smile:


+1 This can help significantly reduce billing costs. This is the only thing that’s stopping me from moving to a serverless platform like Cloudflare Workers.

Please correct me if I’m wrong…

Can’t you do that by adding options to the cf object?

Setting Cloudflare cache rules (i.e. operating on the cf object of a request)
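For reference, a minimal sketch of what those `cf` options look like (the field names follow the documented Workers `fetch()` options; the TTL value is an arbitrary example). Note these options apply to subrequests a Worker makes, not to the request that invoked the Worker:

```javascript
// Build the cf cache options for a subrequest made from inside a Worker.
// Field names (cacheEverything, cacheTtl) are the documented fetch() cf options.
function cacheOptions(ttlSeconds) {
  return {
    cacheEverything: true, // cache even content CF wouldn't cache by default
    cacheTtl: ttlSeconds,  // edge cache TTL for this subrequest
  };
}

// Inside a Worker you would pass it along on a subrequest:
//   const response = await fetch(request, { cf: cacheOptions(3600) });
```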


I haven’t tried it, but from the docs I got the impression you can interact with CF’s cache without using the Cache API, which incurs a Worker hit.

Also, if you use a cloud function (e.g. AWS Lambda), wouldn’t a significant portion of requests still go back to origin whenever a request hits a new data center without a cached version of the response?

For comparison:

AWS Lambda@Edge costs $0.60 per 1M requests, plus compute time, memory, and network. You are also paying for idle time while waiting on the DB, fetch, etc. There are also CloudFront charges for HTTPS, I believe.

Google Cloud Functions cost $0.40 per 1M requests, plus compute time, memory, and network. They do not run at the edge.

Cloudflare Workers run at the edge and cost $0.50 per 1M requests, with no extra charges for CPU time, memory, or bandwidth.
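The per-request fees above can be compared with a little arithmetic (request fees only; compute, memory, and egress are excluded, so this understates the Lambda and Cloud Functions totals):

```javascript
// Per-request fees only, using the figures quoted above (USD per 1M requests).
const PRICE_PER_MILLION_REQUESTS = {
  lambdaAtEdge: 0.60,
  cloudFunctions: 0.40,
  workers: 0.50,
};

// Request fee for a given platform and monthly request count.
function requestFee(platform, requests) {
  return PRICE_PER_MILLION_REQUESTS[platform] * (requests / 1e6);
}

// e.g. 10M requests/month on Workers: requestFee("workers", 10e6) → 5 (USD)
```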


You can set caching for fetch subrequests you make within a Worker; however, you can’t arrange for your Worker to run only if the request URL [that would otherwise have triggered the Worker] isn’t already in CF’s cache.


When a URL served by a Worker is requested, Cloudflare should first check whether the content is available in cache and still valid before sending the request to the Worker.

When the Worker responds, Cloudflare should inspect the cache headers in the response to determine whether the response should be cached, and for how long.

Effectively this is the standard origin behaviour in Cloudflare but applied to Workers.

Implementing this feature would primarily improve performance for visitors (content can be served from Cloudflare’s cache instead of the Worker), with the additional side benefit of reduced billing, as fewer requests need to be processed by the Worker for static content.
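The "inspect the cache headers" step could be sketched as a small, hypothetical helper that turns a response’s `Cache-Control` header into an edge TTL, roughly mirroring standard origin caching behaviour:

```javascript
// Hypothetical helper: derive an edge-cache TTL (seconds) from a response's
// Cache-Control header, roughly mirroring standard origin caching behaviour.
function ttlFromCacheControl(cacheControl) {
  const value = cacheControl || "";
  // Explicitly uncacheable at the edge.
  if (/\b(no-store|no-cache|private)\b/.test(value)) return 0;
  // Prefer s-maxage (shared caches) over max-age when present.
  const match = /\b(?:s-maxage|max-age)=(\d+)/.exec(value);
  return match ? Number(match[1]) : 0; // no directive → don't cache
}
```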

Unfortunately, that would prevent a CF Worker from driving CF caching itself, which is what I use CF Workers for: cache bypass on cookie, with per-URL-path and time-of-day cache TTL values (off-peak traffic gets higher cache TTLs, peak-hour traffic lower ones).
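That bypass-on-cookie, TTL-by-time-of-day pattern might look something like this (the cookie name, off-peak window, and TTL values are all illustrative assumptions, not taken from the post above):

```javascript
// Sketch of the pattern described above: bypass cache when a session cookie
// is present, otherwise pick a TTL by URL path and (assumed) off-peak hours.
function cacheTtlFor(pathname, hourUtc, cookieHeader) {
  // Logged-in users bypass the cache entirely (cookie name is an assumption).
  if (/(^|;\s*)session=/.test(cookieHeader || "")) return 0;
  const offPeak = hourUtc >= 1 && hourUtc < 7;     // assumed off-peak window (UTC)
  if (pathname.startsWith("/static/")) return 86400; // long TTL for static assets
  return offPeak ? 14400 : 600;                    // higher TTL off-peak
}
```

In a Worker, the returned value would feed the `cacheTtl` option on the subrequest (or `Cache-Control` on a `cache.put`).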

So, I think this thread is conflating technical architecture with pricing…

A Worker can use the Cache API to implement arbitrary caching logic. A Worker that makes good use of the Cache API should be able to achieve the same performance as if the cache ran in front of the Worker. Hence, there is not a technical reason for Workers to run behind cache.
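The cache-first pattern being described can be sketched as follows. In a real Worker, `cache` would be `caches.default` and `origin` would be `fetch`; here they are parameters so the control flow stands on its own:

```javascript
// Minimal cache-first sketch. In a Worker, `cache` is caches.default and
// `origin` is fetch(); both are parameters here so the logic is self-contained.
async function cacheFirst(request, cache, origin) {
  const cached = await cache.match(request);
  if (cached !== undefined) return cached; // cache hit: no origin work
  const response = await origin(request);  // miss: do the real work
  await cache.put(request, response);      // populate cache for next time
  return response;
}
```

In an actual Worker you would `clone()` the response before `cache.put`, since a Response body can only be read once.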

But it sounds like the motivation here is to avoid being billed for requests that hit cache. This assumes that if cache ran in front of Workers, then we wouldn’t bill for a workers request when the request was served by cache.

However, that isn’t necessarily true. It’s entirely possible that in this scenario, Workers pricing would still be based on the number of requests received, including those that hit cache and therefore never reached the Worker.

But why would that be? If clever caching saved the expense of running a Worker, shouldn’t we pass that savings on to the customer? Well that’s the thing: Workers are really, really fast. The cost of executing a Worker (if it’s already in memory) is actually much cheaper than doing a cache lookup. The expensive part of Workers is distributing the code to the edge and keeping a huge number of different Workers in memory at the same time. Clever caching that eliminates 90% of requests doesn’t necessarily reduce that cost, because the Worker still needs to be loaded to handle the other 10% of requests.

So, as it turns out, between a Worker that makes good use of the Cache API, vs. putting cache in front of the Worker, the cost to Cloudflare is not that different. And hence, I don’t know that we’d necessarily want to charge less for the latter. (Disclaimer: This is entirely hypothetical. We haven’t actually talked about this internally.)

Now you might ask, if our costs aren’t necessarily tied to request volume, why do we bill on requests in the first place? Well, if we tried to break down our real costs and charge directly for them, our pricing would be incredibly complicated and hard for you to predict. You’d have to think about things like how many colos your Worker is likely to run in (which requires knowing how your users are distributed), whether your traffic patterns are spread out vs. bursty, etc. You probably would have a hard time calculating these things, but you probably do know roughly how many requests you get. So charging on requests makes it easy for you to know how much Workers will cost. And if the pricing doesn’t actually match our own costs to deliver the service, that’s our problem to deal with, not yours.

With that said, a big problem with this pricing model is that it means we’ve had to put strict limits on CPU time. As you know, with Workers Unbound, we are introducing a new pricing model that has a much lower base price per request, but also charges for duration, thus allowing us to remove those limits. But this is also good for fast workers: if your Worker makes good use of the Cache API such that most requests return quickly from cache, then I would expect you will in fact end up paying less under Workers Unbound.
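As a rough illustration of why a cache-heavy Worker could come out cheaper under a duration-based model (all prices and parameters below are assumptions for the sake of the arithmetic, not figures quoted in this thread):

```javascript
// Illustration only; all prices are assumptions, not quotes from this thread.
const BUNDLED_PER_M = 0.50;       // assumed: $ per 1M requests, flat model
const UNBOUND_PER_M = 0.15;       // assumed: $ per 1M requests, duration model
const UNBOUND_PER_M_GBS = 12.50;  // assumed: $ per 1M GB-seconds of duration

function bundledCost(millionsOfRequests) {
  return BUNDLED_PER_M * millionsOfRequests;
}

function unboundCost(millionsOfRequests, avgSeconds, memoryGb) {
  const millionGbSeconds = millionsOfRequests * avgSeconds * memoryGb;
  return UNBOUND_PER_M * millionsOfRequests + UNBOUND_PER_M_GBS * millionGbSeconds;
}

// A Worker that mostly answers from cache in ~5 ms at 128 MB accrues almost
// no duration charge, so the low base price dominates.
```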

TL;DR: Putting cache in front of Workers would neither improve performance nor reduce cost compared to a worker that makes effective use of the cache API. OTOH, Workers that make good use of cache are likely to get cheaper under Workers Unbound.


This is really useful to understand. Thank you for taking the time and for being very thorough.

I think you are right that this is a billing question. People do not feel they should be charged for cache hits. Or at least, not charged as much as they are for worker hits.

As touched on by a poster above, I think this stems from a couple of patterns any enterprising developer can implement that bring quite large cost savings when manipulating static assets:

  1. Put a dumb cache in front of the worker.
  2. Use CloudFlare as a dumb cache pointing at a different serverless platform.

The feature request is really to bring option 1 inside the Cloudflare ecosystem.

Cloudflare’s own costs are mostly irrelevant to the customer.

From the perspective of a customer, using Cloudflare’s cache to cache static resources transformed by Lambda@Edge will often be several orders of magnitude cheaper than transforming them with CF Workers, which offer no option of putting CF’s cache in front, due to the drastically reduced number of calls that end up getting billed.
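The billing arithmetic behind that claim is simple: with a cache in front, only misses are billed as invocations.

```javascript
// With a cache in front of the serverless platform, only cache misses
// become billable invocations.
function billedRequests(totalRequests, cacheHitRatio) {
  return totalRequests * (1 - cacheHitRatio);
}

// 100M requests at a 99% hit ratio leave about 1M billable invocations —
// two orders of magnitude fewer than without the cache.
```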

Workers Unbound makes Worker calls appreciably cheaper, but it’s nowhere close to even a single order of magnitude difference, so the economics don’t favour Workers Unbound either when compared to CF Cache in front of Lambda@Edge.

The fact that it’s more economically feasible to use CF Cache + Lambda@Edge than a fully vertically integrated CF solution (CF Cache + Workers) is the problem Cloudflare should be looking to address here. Either way, customers need to spend money somewhere on a serverless platform for transforming assets, but in the current landscape it’s more prudent to spend that money on AWS than on Cloudflare.


Cloudflare’s costs determine how much they need to charge the customer to still make a profit, so yes, they are relevant to the customer. CF Cache + Lambda@Edge is cheaper because CF’s cache is basically free; if you were to use CloudFront + Lambda@Edge, it would be more expensive than either of those scenarios.


Hi @user2442,

You raise a fair point about the pricing of Workers. All I’m saying is that this is fundamentally a pricing issue, not a technical issue, and therefore the proposed technical solution of running workers post-cache may not be the right way to address it.

We periodically re-evaluate our pricing and will take your feedback into account when we do.


Would love to see the Workers $5 subscription include the first 100 million requests and then charge above that. That would definitely encourage more folks to try Workers, with less worry about overages from traffic spikes :slight_smile:

To add to the discussion: it’s not just the cost of the calls but also the cost of egress bandwidth that seems to be another dealbreaker here, since Workers Unbound using cache would still only save on CPU time, while still paying the rather expensive $0.09/GB bandwidth cost.
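Using that $0.09/GB figure, egress quickly dominates per-request fees (the average response size below is an assumption for illustration):

```javascript
// Egress-dominated cost, using the $0.09/GB figure from the post above.
const EGRESS_PER_GB = 0.09;

// Egress cost for 1M responses at a given average response size (in MB).
function egressCostPerMillion(avgResponseMb) {
  const totalGb = (avgResponseMb * 1e6) / 1024; // 1M responses, MB → GB
  return EGRESS_PER_GB * totalGb;
}

// At a 1 MB average response, bandwidth alone is roughly $88 per million
// requests, dwarfing any per-request fee.
```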

In my application, I’m currently running a monolith server + Cloudflare cache, and I see >95% of requests served from cache. I could easily swap the monolith for a Kubernetes cluster to scale up with demand, just as I could with serverless.

Since the limiting factor for Cloudflare is the cost of keeping cold-start times minimal at dozens of edge locations throughout the world, it doesn’t make economic sense for them to offer CF cache in front of a Worker.

In applications with a high degree of caching, the benefit of Workers being deployed to edge servers diminishes. Hence the best solution, letting both Cloudflare and its users benefit, would be a new Workers+Cache model that deploys the Worker to a single location and lets users reap the benefit of economical static-asset caching on top of it.

+1 on this one

@KentonVarda : Thank you so much for your in-depth response, and you are right, this is a “pricing issue”; I guess that most of us would love to avoid being billed for requests that hit cache.

What about some new page rules that would allow triggering a Worker only under certain conditions, such as presence in CF’s cache before hitting the origin, or the HTTP status code of the origin server?

Utopian example, using a yet-to-be-released cf-cache-status-before-origin:

  cf-cache-status-before-origin != 'FOUND'
  Forwarding URL (Status Code: 302, Url: https://${function}.${subdomain})

Example using the origin HTTP status code :

  origin-http-code != 200
  Forwarding URL (Status Code: 302, Url: https://${function}.${subdomain})

Ideally, the Worker would have a way to retrieve the URL originally requested by the client before the forward; I guess an automatically added header exposing the initial URL would do the trick.

Anyway, thank you for the great work at Cloudflare, and please consider this post as a kind of Christmas wishlist :wink:

I’ve never directly dealt with this problem, since only my employees use my CF Workers (low request counts). But has anyone tried TWO domains/TWO zones: a public zone/domain with no Workers, with a CNAME to another, undocumented zone/domain that has Workers connecting to the origin?

That won’t work, since Workers intercept a request only when the request hostname is added to a Worker route. When you CNAME a DNS record to a Worker route, the request hostname doesn’t match that route, so the Worker won’t intercept the request, and it responds with a 522 connection timeout error.

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.