How do you do the equivalent of the page rule cache_level: "bypass" in a worker fetch?

Background: I’m trying to do a small (<20 MB) range request on a large (>512 MB) file with a cacheable file extension. On the origin side, Cloudflare is (sometimes, but not always) stripping the Range header from the Worker fetch request and requesting the full object.

This wastes a lot of bandwidth on the origin server and the client for an otherwise uncacheable request (the file is too large for Cloudflare to cache). It also slows delivery of the requested range, since Cloudflare frequently (but not always) has to stream all the initial bytes of the file first, even for a small range at the end. For example, if I’m requesting 10 MB at the end of the file, Cloudflare may request potentially gigabytes from the origin server, since it sometimes asks for the full file.

I’ve tried various Cache-Control settings in the request header, but Cloudflare seems to ignore anything I put there. (As an aside, it also overwrites any Accept-Encoding setting in the request header.)

I want something like the opposite of cf: { cacheEverything }. Something like a cacheNothing or bypassCache option.
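
To make the setup concrete, here’s a minimal sketch of the kind of Worker fetch I mean (the URL and byte offsets are placeholders; cacheEverything is a real fetch option, while bypassCache is the hypothetical one I’m wishing for):

```js
export default {
  async fetch(request) {
    // Small (~20 MB) range near the end of a >512 MB object.
    const upstream = await fetch("https://objectstorage.example.com/big-file.bin", {
      headers: { Range: "bytes=536870912-557842431" },
      // cf: { cacheEverything: true }, // real option, the opposite of what I need
      // cf: { bypassCache: true },     // hypothetical option I wish existed
    });
    // Sometimes Cloudflare strips the Range header above, and the origin
    // sees a request for the entire object instead of the 20 MB slice.
    return upstream;
  },
};
```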

When a client specifies the Range header in a request for a cacheable resource, Cloudflare fetches the entire resource and serves the specified range. This has two advantages:

  1. Cloudflare only makes a single request to the origin. When the client requests the next chunk, Cloudflare can simply serve it from cache.
  2. Since a 206 Partial Content response is not cacheable, all requests with Range headers would otherwise go to the origin, significantly increasing load (see the sketch below).
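
To illustrate point 2 with a minimal sketch (placeholder URL, not production code): the Workers Cache API itself refuses partial responses, so a 206 from the origin could never be stored and re-served.

```js
export default {
  async fetch(request) {
    const upstream = await fetch("https://example.com/big-file.bin", {
      headers: { Range: "bytes=0-1048575" },
    });
    try {
      // cache.put() rejects when the response status is 206 Partial Content,
      // which is why Cloudflare fetches the whole object and slices it itself.
      await caches.default.put(request.url, upstream.clone());
    } catch (e) {
      // Expected for a 206: partial responses are not cacheable.
    }
    return upstream;
  },
};
```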

I believe Cloudflare also transforms HEAD into GET requests for the same reason.

With that said, I can totally see how this is a disadvantage in your case. Unfortunately I don’t think there’s a way to bypass cache in a Worker. Instead you must ensure the resource is not cacheable. This leaves you with two options:

  1. Create a Page Rule with Cache Level: Bypass matching the affected URLs.
  2. Rename your resources to a non-cacheable extension.

Thanks for your suggestions, albert.

The cache_level: bypass Page Rule doesn’t work here because the resource is on a non-Cloudflare domain (oraclecloud.com in this case), so I can’t attach a Page Rule to it. I can attach one to the Worker script’s URL, but that doesn’t seem to do anything.

I found the non-cacheable workaround yesterday; that’s how I narrowed down the problem. I was having the same Range-dropping issue when I tried it initially two days ago, even with a non-cacheable extension. It’s possible the Content-Type plays a part as well, or something was fixed in the meantime. I had originally just made a duplicate of the file with a non-cacheable extension, which would have preserved the Content-Type as something cacheable.

In the more recent tests yesterday, I created a new test file from scratch which had a non-cacheable Content-Type from the beginning, and that seems to be working reliably (hopefully). Since I wasn’t quite sure how reliable it was going to be, I went with another option: slicing the file into pieces, since the range I need isn’t dynamic (see the sketch below). I’ll switch back to a single file if Range requests prove reliable as long as the URL never previously had a cacheable Content-Type.
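
For anyone else who ends up here, this is roughly what the slicing approach looks like (a sketch with hypothetical chunk names and a hypothetical 20 MB chunk size, not my exact code):

```js
// The large file is pre-split on the origin into fixed-size pieces named
// big-file.bin.000, big-file.bin.001, ... (hypothetical naming scheme).
const CHUNK_SIZE = 20 * 1024 * 1024; // 20 MB per piece

// Fetch the byte range [start, end] by pulling whole chunk objects,
// so no Range header is needed and Cloudflare can't rewrite it.
async function fetchRange(baseUrl, start, end) {
  const first = Math.floor(start / CHUNK_SIZE);
  const last = Math.floor(end / CHUNK_SIZE);
  const parts = [];
  for (let i = first; i <= last; i++) {
    const res = await fetch(`${baseUrl}.${String(i).padStart(3, "0")}`);
    parts.push(new Uint8Array(await res.arrayBuffer()));
  }
  // Concatenate the chunks, then trim to the exact requested range.
  const total = parts.reduce((n, p) => n + p.length, 0);
  const whole = new Uint8Array(total);
  let offset = 0;
  for (const p of parts) {
    whole.set(p, offset);
    offset += p.length;
  }
  const base = first * CHUNK_SIZE;
  return whole.slice(start - base, end - base + 1);
}
```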

Assuming Range requests work reliably on non-cacheable extensions (and the associated Content-Type), the non-cacheable-extension workaround is usable, but still less than ideal. In my case I would need to duplicate the (largish) files under both extensions if I want to allow for direct download.

It’s also an issue for the more general case where workers need to use Range requests on a URL they don’t control.

This is probably not a widely known fact, but any Page Rule you create applies to the current zone and to any domain that does not use Cloudflare. So you can simply create a Page Rule (on the zone running the Worker) that matches oraclecloud.com/* with Cache Level: Bypass, and it will work without issues. See this example using Forwarding URL:

[screenshot: a Page Rule on the Worker’s zone matching an external hostname, shown with a Forwarding URL action]
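
If you’d rather script it than click through the dashboard, the same rule can be created through the Page Rules API (a sketch; ZONE_ID and API_TOKEN are placeholders, and as noted below the zone needs SSL for SaaS enabled before a pattern on an outside hostname is accepted):

```js
// Sketch: create the Cache Level: Bypass Page Rule via the Cloudflare API.
async function createBypassRule(ZONE_ID, API_TOKEN) {
  const res = await fetch(
    `https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/pagerules`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${API_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        targets: [
          {
            target: "url",
            constraint: { operator: "matches", value: "oraclecloud.com/*" },
          },
        ],
        actions: [{ id: "cache_level", value: "bypass" }],
        status: "active",
      }),
    }
  );
  return res.json();
}
```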


Perhaps that requires a higher tier of account? I get the error “Your URL should reference the domain ‘XXXX.XXX’ in some way” when I try it.

Hmm. Apparently you need to have SSL for SaaS enabled to match hostnames outside your zone. You can enable this for free by going to https://dash.cloudflare.com/?to=/:account/:zone/ssl-tls/custom-hostnames.


Enabling SSL for SaaS and using a Page Rule with Cache Level: Bypass worked! Thanks!! That takes care of my issue of needing to duplicate the file!

It might be helpful to add the SSL for SaaS requirement for third-party domains to your original solution post, in case people don’t read the entire thread.

It would still be nice to have a fetch option for the general case, though, since creating a Page Rule or using a non-cacheable file extension/Content-Type may not be realistic in every situation (e.g. more dynamic cases).

