When using a browser that supports zstd, responses that Cloudflare compresses with zstd are being randomly truncated
What steps have you taken to resolve the issue?
We disabled Cloudflare’s CDN for the production version of our website.
What are the steps to reproduce the issue?
Use Chrome or Firefox to hard refresh https://staging.turbowarp.org/ (bypassing caches) several times. Eventually one of the JS files will be truncated and you'll see an error. I've seen this on Windows, macOS, Linux, and Android on several networks. This also happens on https://zstd-bug-demo.turbowarp.org/testfile.js, which is just an nginx static file server behind Cloudflare, and even on https://packagerdata.turbowarp.org/vendors~editor~embed~fullscreen~player.7a5a1df87d0f937ad022.js, which is a Cloudflare R2 custom domain. (It does seem less likely to happen on these, but responses are still being truncated randomly.)
In Chrome, if I go into chrome://flags and disable "Zstd Content-Encoding", the issue is fixed. In Firefox, if I go into about:config and set network.http.accept-encoding.secure to "gzip, deflate, br" to disable zstd, the issue is fixed. If I bypass Cloudflare and connect directly to our backend, then gzip, brotli, and zstd all work fine. This all leads me to believe the problem is in Cloudflare's zstd implementation.
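If anyone wants to verify which encoding a given hop actually picks, a quick curl check like this works (just a sketch; substitute whichever URL you are testing):

# Request zstd only and print the Content-Encoding header the server chose.
curl -s -o /dev/null -D - -H "Accept-Encoding: zstd" \
  "https://staging.turbowarp.org/" | grep -i '^content-encoding'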
I can skip the browser entirely and reproduce this with the shell script below, which makes requests until an unexpected response is received. On very fast connections, adding a rate limit seems to surface the bug more often, but I'm not sure; it's very inconsistent. From this script I've seen that uncompressed, gzip, and brotli responses always work, while zstd responses are being randomly truncated. Example output when zstd failed on my R2 custom domain: GitHub gist 1aa48d57629b9be8945dcac3b28e37a3
#!/bin/bash
# Repeatedly fetch the same file and compare its hash against a known-good value.
url="https://staging.turbowarp.org/js/pentapod/vendors~editor~embed~fullscreen~player.7a5a1df87d0f937ad022.js" # Node.js behind nginx behind Cloudflare
#url="https://zstd-bug-demo.turbowarp.org/testfile.js" # nginx behind Cloudflare
#url="https://packagerdata.turbowarp.org/vendors~editor~embed~fullscreen~player.7a5a1df87d0f937ad022.js" # R2 custom domain
expected="ff8c6e6a7b30948b613ad74a41b5e9b724593ecb2d6e42874b813926aaaca8e2"
while true; do
    clear
    # Uncomment one line to test a specific Accept-Encoding; only zstd fails.
    #hash=$(curl --limit-rate 1M -s --verbose "$url" | tee output.txt | sha256sum | head -c 64)
    #hash=$(curl --limit-rate 1M -s --verbose -H "Accept-Encoding: gzip" "$url" | gzip -d -c | tee output.txt | sha256sum | head -c 64)
    #hash=$(curl --limit-rate 1M -s --verbose -H "Accept-Encoding: br" "$url" | brotli -d -c | tee output.txt | sha256sum | head -c 64)
    hash=$(curl --limit-rate 1M -s --verbose -H "Accept-Encoding: zstd" "$url" | zstd -d -c | tee output.txt | sha256sum | head -c 64)
    if [[ "$hash" != "$expected" ]]; then
        echo "FAILED: expected $expected but got $hash"
        exit 1
    fi
    echo "Passed, will try again"
    sleep 1
done
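To point the script at a different file, the expected hash can be computed from an uncompressed fetch first (a sketch, assuming the identity-encoded response is intact):

expected=$(curl -s "$url" | sha256sum | head -c 64)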
@muffins I have given you 5 Compression Rules - we will be backfilling all plans to include Compression Rules soon and making it simpler to deploy a rule like the one below.
You can now create a rule like the one below. Once you confirm the error is fixed, can you exclude your test host so we still have a place to replicate? i.e. add an "and host is not staging.turbowarp.org" condition when ready.
Filter - this is our default of what we compress:
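For example, the exclusion could look something like this in the rule expression editor (a sketch only; combine it with the default filter above):

(existing default filter) and http.host ne "staging.turbowarp.org"

with the rule's action set to compress using gzip and Brotli only, so zstd is never selected.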
The compression rule works great in my testing, thank you. I've exempted staging.turbowarp.org and the other domains I mentioned so the bug can still be reproduced. I'll re-enable Cloudflare for production traffic, probably sometime tomorrow, and will let you know if there are any continuing issues.
Hi there. We seem to be experiencing a similar issue at www.zetland.dk at the moment (since yesterday).
I can’t seem to find where to configure compression rules in the Cloudflare console. Maybe it is not included in our current plan. What would you advise we do as a workaround until a fix is out?
When I wrote to you about an hour ago, we were able to replicate the issue on our main site within 2 to 10 hard refreshes. I am now no longer able to reproduce it.
Did you change anything in our site’s configuration, or did you just unlock the compression rules feature for us to try out this fix?
Around the same time I wrote in this thread earlier, we also deployed some changes to our site relating to compression headers sent by the web application itself, removing a workaround that had been in our codebase for a few years. One theory, in case you did not change anything in our site's deployment, is that this old workaround, combined with changes rolled out on your end yesterday, caused the undesired behavior.