Transfer-Encoding header being ignored by Cloudflare Workers

I’m trying to generate a chunked gzip response with Cloudflare Workers, but it seems the “Transfer-Encoding” header is simply being ignored, even when the 'Cache-Control': 'no-transform' header is set. Here’s the code:

const bytes = new Uint8Array([0x00, 0x00, 0x13, 0x0d, 0x0a, 0x31, 0x36, 0x0d, 0x0a, 0xf3, 0x48, 0xcd, 0xc9, 0xc9, 0x57, 0x08, 0xcf, 0x2f, 0xca, 0x49, 0x51, 0x04, 0x00, 0xa3, 0x1c, 0x29, 0x1c, 0x0c, 0x00, 0x00, 0x00, 0x0d, 0x0a, 0x30, 0x0d, 0x0a, 0x0d, 0x0a])

addEventListener('fetch', event => {
  event.respondWith(new Response(bytes, {
    headers: {
      'Cache-Control': 'no-transform',
      'Content-Encoding': 'gzip',
      'Transfer-Encoding': 'chunked'
    }
  }))
})

This worker should produce a “Hello world” gzip response, but because Cloudflare decides to drop the Transfer-Encoding header, it doesn’t work.

How can I create a response (or, more specifically, a streaming response) with Transfer-Encoding set properly? Right now it seems impossible to generate any compressed response from a Worker, which unfortunately limits the platform quite a lot.

Could you provide an example of a Cloudflare Worker that returns a generated, chunked gzip response? How can I ensure Transfer-Encoding is set properly?

(This is a very old question but someone asked me about it so I’m going to post an answer!)

First, regarding chunked encoding: You do not need Transfer-Encoding: chunked to use gzip. Transfer-Encoding: chunked is only needed in cases where the body size is not known in advance. Since the Workers platform always knows whether it has the body size in advance, it automatically chooses between Content-Length and Transfer-Encoding: chunked, and it will ignore whatever you specified.
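To see this in action, here is a minimal sketch (mine, not part of the original thread): the body below is a stream whose length the platform cannot know up front, so it will be sent with Transfer-Encoding: chunked automatically, without you setting any header.

addEventListener('fetch', event => {
  // A body stream of unknown length: the platform picks chunked encoding on its own.
  const { readable, writable } = new TransformStream()
  event.respondWith(new Response(readable))

  // Write the body asynchronously, after the response has been returned.
  event.waitUntil((async () => {
    const writer = writable.getWriter()
    const encoder = new TextEncoder()
    await writer.write(encoder.encode('Hello '))
    await writer.write(encoder.encode('world'))
    await writer.close()
  })())
})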

Now, the real issue with your example code is that because it specifies Content-Encoding: gzip, the platform thinks you are saying: “Please gzip-compress my bytes for me.” So, Cloudflare dutifully applies gzip compression on top of the bytes you give it. But since your bytes are already gzip-compressed, you end up with double compression. On the receiving end, your user-agent decodes the outer layer of compression, but not the inner layer, so it ends up presenting you with bytes that are still compressed.

In order to suppress this auto-compressing behavior, you need to use encodeBody: "manual":

addEventListener('fetch', event => {
  event.respondWith(new Response(bytes, {
    headers: {
      'Cache-Control': 'no-transform',
      'Content-Encoding': 'gzip'
    },
    // Pass the body through as-is; it is already gzip-compressed.
    encodeBody: "manual"
  }))
})

(Note that bytes must now be a plain gzip stream on its own — the hand-rolled chunked framing mixed into the question’s byte array has to go, since the platform handles the chunking itself.)

This tells the system to assume that the body bytes are already encoded as specified and don’t need to be encoded again.
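To answer the original ask directly — a generated, compressed, streamed response — here is a minimal sketch using the standard CompressionStream API (supported in current Workers; this part is my addition, not from the original reply). The gzipped stream has no known length, so the platform will deliver it chunked on its own:

addEventListener('fetch', event => {
  // Generate a body, then gzip it on the fly with the standard CompressionStream API.
  const gzipped = new Response('Hello world').body
    .pipeThrough(new CompressionStream('gzip'))

  event.respondWith(new Response(gzipped, {
    headers: {
      'Cache-Control': 'no-transform',
      'Content-Encoding': 'gzip'
    },
    // The body is already gzip-compressed; tell the platform not to compress it again.
    encodeBody: "manual"
  }))
})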

You might wonder why manual encoding is not the default, since it seems more intuitive. The problem is, the fetch() API is defined, by standard, to automatically decompress response bodies. Therefore, respondWith() must recompress them. Otherwise, respondWith(fetch(url)) would have the inadvertent effect of decompressing the response body. (Note that, under the hood, the system doesn’t actually decompress and recompress the same data in this case – it notices what is happening and optimizes out the round trip.)
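Concretely (my illustration, not from the original reply), a plain pass-through proxy depends on that default behavior:

addEventListener('fetch', event => {
  // fetch() hands the script a logically decompressed body, and respondWith()
  // re-applies the Content-Encoding on the way out, so the client still
  // receives a compressed response (the round trip is optimized away internally).
  event.respondWith(fetch(event.request))
})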
