Cloudflare Workers Beta Feedback

Hi @saul! Stay tuned for the invite in the next few days!

Hi @dan42! We are looking into it.

I am not able to save a script - 403 response from the API

{code: 10015, message: "edgeworkers.api.error.not_entitled"}

Hi @z.knops! It looks like you don’t have access to the beta yet. Did you receive an email to opt-in to the beta?

Yes, you confirmed it on the 23rd

5 posts were split to a new topic: Not possible to override the Host header on Workers requests

Hey there,

It looks like right now a Worker is executed on every request to a route, before the edge cache is checked, is that correct?

Are there any plans to allow workers to run between the edge cache and the origin? So the worker would only run if the route wasn’t already cached in the edge cache, and the worker’s response would be cached in the edge cache?



Hi @mike6,

That’s correct, Workers run “before” cache, so that subrequests can hit cache. We’ve found that this is preferable for most use cases, because one of the main use cases for Workers is to transform and break up queries in order to improve edge cache hit rate. Moreover, most workers run very quickly (less than 1ms), and so there’s not much benefit to caching the output.
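For illustration, the kind of query transformation described above might look something like the following sketch. The helper name and the allow-listed parameter names are made up for this example, not part of any Cloudflare API:

```javascript
// Illustrative only: collapse a query string down to an allow-list of
// parameters so that more requests share a cache key.
function normalize(urlString) {
  const url = new URL(urlString)
  const keep = ['id', 'page']  // hypothetical allow-list
  const params = new URLSearchParams()
  for (const k of keep) {
    if (url.searchParams.has(k)) params.set(k, url.searchParams.get(k))
  }
  url.search = params.toString()
  return url.toString()
}
```

A worker could normalize the incoming URL this way before issuing its subrequest, so that, say, tracking parameters no longer fragment the edge cache.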

With that said, there are obviously some use cases where caching the output of a worker could be useful, e.g. if it performs a CPU-intensive operation like resizing an image. To that end, we plan to implement the standard Cache API. However, we don’t currently have any estimate when that might become available.
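To make the idea concrete, here is a minimal compute-or-cache sketch written against a generic map-like cache, since the eventual Cache API surface isn't settled yet; every name in it is an assumption, not a shipped interface:

```javascript
// Sketch only: `cacheLike` is any object with has/get/set (a Map works),
// standing in for whatever cache interface eventually ships.
async function cachedFetch(cacheLike, key, compute) {
  // Serve from cache when possible; otherwise compute once and store.
  if (cacheLike.has(key)) return cacheLike.get(key)
  const value = await compute()
  cacheLike.set(key, value)
  return value
}
```

With a real cache, the key would typically be the request URL and `compute` the expensive operation (e.g. the image resize mentioned above).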

Do you have a use case that would benefit from caching output?


I’m about to start playing heavily tomorrow, testing different scenarios, and was wondering a few things:

  1. During the beta, are there any guidelines on public writeups of performance benchmarks? I’m assuming something like “run it by us first to make sure you’re not doing something stupid that would falsely represent how kickass our product is.”
  2. I see that Workers run before the cache is checked. Can you clarify where they run with respect to things like a) Page Rules, b) WAF rules, c) IP Firewall, d) Rate Limiting?



@nick6 If you do performance benchmarks I’d love to see what you end up with. I’m pretty happy with what we have so far, but I also expect there’s lots of room for improvement. So, if you see a performance number you’re disappointed with, it’s likely we can improve it.

The rule of thumb for workers is that they run “after security but before everything else”. Note that whether they come before or after page rules depends on the rule – a page rule disabling WAF runs before workers, but most page rules run after.


Thanks. So would the following be a reasonable expectation:

CF DNS > WAF (IP or Rules) > Workers > Rate Limiting > Load Balancing > Argo

Assuming we were doing more filtering like that, and we also had Rate Limiting enabled: if we 403’d the request in the Worker, would it be correct to assume it would not impact Rate Limiting, Argo, or Load Balancing metrics/spend?

I have a suggestion for the docs. When giving an example that involves modifying the request, always include ‘body’ in the init parameters, e.g.

const init = {
  method: request.method,
  headers: request.headers,
  body: request.body
}
const modifiedRequest = new Request(request.url + suffix, init)

Because if the request happens to be a POST, it’s gonna need its body!

This is something that tripped me up a little; while the bug was easy enough to identify, it happened in part because I based my worker code on one of the examples.

Looks like there can only be one script per zone ID. I feel that this is quite limiting. A given zone can contain many different sites with drastically different requirements. I know you can add a conditional at the beginning of the script to handle this case, but it introduces unnecessary complexity and increases the likelihood of bugs (via shared global scope, etc.). So what I would like to see is the ability to add multiple scripts per zone, with routes dictating when each script executes, and each script running in its own worker instance.

script 1 => route:*
script 2 => route:*

If you have multiple matching routes, only the first one executes.

Hi @tom5,

The trouble is, each script needs to be set up in its own sandbox, which has some fixed overhead. Although our implementation is very efficient, it nevertheless tends to be the case that a site with two scripts will take twice the resources compared to the same site with one script containing all the logic. We figure most customers would rather merge their code than pay twice the price. We do, however, allow enterprise customers to use multiple scripts – but this is somewhat of a temporary measure until we can come up with better tools.

Note that you can keep your code organized by splitting it into multiple event handlers. For example, you can write:

addEventListener("fetch", event => {
  // This handler is for foo.example.com only (placeholder hostname).
  if (new URL(event.request.url).hostname != "foo.example.com") return;

  // ... handle /foo ...
})

addEventListener("fetch", event => {
  // This handler is for bar.example.com only (placeholder hostname).
  if (new URL(event.request.url).hostname != "bar.example.com") return;

  // ... handle /bar ...
})
All event handlers will run on every request, but at most one can call event.respondWith(). If none call respondWith() then the request is sent to the origin as normal.
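A different way to keep one script organized, sketched here with made-up hostnames and handler names, is a single dispatch table instead of several listeners:

```javascript
// Illustrative alternative (all names are placeholders): one route table
// consulted by a single fetch handler.
function handleFoo(request) { /* ... handle foo.example.com ... */ return 'foo' }
function handleBar(request) { /* ... handle bar.example.com ... */ return 'bar' }

const routes = new Map([
  ['foo.example.com', handleFoo],
  ['bar.example.com', handleBar],
])

// Returns the matching handler, or null to fall through to the origin.
function pickHandler(url) {
  return routes.get(new URL(url).hostname) || null
}
```

A single listener can then call `pickHandler(event.request.url)` and only invoke `event.respondWith()` when a handler matched, preserving the fall-through-to-origin behavior described above.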

@nick6 I think Rate Limiting is considered a “security feature” and thus comes before Workers, though I could be wrong. We should have better documentation on this soon.

Regarding how Workers affect usage-based billing for other features, unfortunately we have not worked out the details yet. At present, please assume the other features will be charged regardless of what the Worker does. I agree, though, that that’s probably not the right answer in the long term; we just need some more time to figure it out.

@dan42 I agree, but unfortunately the code you wrote will currently throw an exception for GET requests (which have no body). This is actually a bug in our implementation; under the spec it’s supposed to work. We’ll fix it soon.

I’ve also proposed to the Service Worker spec authors that we really ought to have a nicer way to rewrite a request URL that doesn’t require listing all the members of Request. method, headers, and body are the most important ones, but technically, to make a perfect copy, you also want redirect and possibly other fields.
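A helper along the lines described might look like the sketch below. The name `copyWithUrl` is invented for illustration, and note that (as mentioned above) passing `body` through currently trips a Workers bug for GET requests, even though it is spec-correct:

```javascript
// Illustrative helper: copy a Request to a new URL, carrying over the
// fields that matter for a faithful re-issue.
function copyWithUrl(request, newUrl) {
  return new Request(newUrl, {
    method: request.method,
    headers: request.headers,
    body: request.body,         // needed for POST/PUT bodies; null for GET
    redirect: request.redirect, // 'follow' by default
  })
}
```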


I have the following listener set up:

addEventListener('fetch', event => {

This is an adaptation of the first footnote in the Reference section of the docs.

In the editor the response is as expected; a bunch of XML is spat out. But when requesting the URL directly - for example in my browser or using curl - an “Error 1101” (rendering error) is returned.

Is there something that I’m missing?

After some more investigation it appears that this is occurring because the response is XML.
I tried:

addEventListener('fetch', event => {

And that works just fine. So I tried editing the headers:

addEventListener('fetch', event => {
  event.respondWith(fetchAndApply(event.request))
})

async function fetchAndApply(request) {
  const response = await fetch('')
  const modifiedHeaders = new Headers(response.headers)

  modifiedHeaders.set('Content-Type', 'text/xml')

  const init = {
    status: response.status,
    statusText: response.statusText,
    headers: modifiedHeaders
  }
  const modifiedResponse = new Response(response.body, init)
  return modifiedResponse
}

And that works fine. I guess the assumption I was making was that the headers from the fetch('') would be preserved when responding.

The actual URL I’m trying to retrieve is which works in the editor using the httpbin example code block but not with this Google News RSS…

Hi @kieran.hunt92,

I think what you are observing might be browser-side behavior. Using Google Chrome, I see something different from what you described: In the preview, I see just the unstyled text “Wake up to WonderWidgets! Overview Why WonderWidgets are great Who buys WonderWidgets” – this appears to be the textual content of the XML file with all the styling removed. But when deployed to prod, I see Chrome’s XML renderer, which gives me a tree view of the XML AST. It looks like the difference is because XML in an iframe is rendered differently from XML outside an iframe. The preview is shown inside an iframe, but if I extract the frame’s address and load that directly in a new tab, I see exactly what I see when the script is deployed.

What browser are you using? Could this “Error 1101” actually be your browser complaining that it doesn’t know how to render the XML? But in an iframe, it decides to dump it as text instead?

Note that the feed returns a Content-Type of application/xml, not text/xml. text/xml is not the correct MIME type for XML, so when you change it to that, your browser’s XML rendering behavior is probably disabled, hence “fixing” the problem.

Hey Kenton,

Using Google Chrome, I see something different from what you described

I think the issue was actually the RSS feed I was trying to consume. I’ve now run through a whole bunch of other feeds that all work perfectly.

But when deployed to prod, I see Chrome’s XML renderer, which gives me a tree view of the XML AST

So I see the same behavior with most of the feeds with the exception of the Google News one. That one renders without any styling in the editor preview but results in an error when deployed to prod.

This code will cause my prod deployment to return that Error 1101:

addEventListener('fetch', event => {

Here’s the screenshot of the page (which is, I’m pretty sure, Cloudflare complaining):

What browser are you using?

I’ve tried this on both Firefox 57 and Chrome 63.