PDFs Downloaded from Workers Site Corrupted

I’m in the process of finishing up a site that is hosted by Workers Sites, and I’m having an issue with PDFs that are embedded on the site. For whatever reason, the PDF file becomes corrupt after you refresh the page on mobile. This only occurs when the PDF is served by Cloudflare, and doesn’t happen when the site is hosted locally.

The full PDF is only about 400 KB, which I would think is well within the acceptable size range for Workers to deliver. The issue also occurs with a smaller PDF that is only 200 KB. It seems to be fine in desktop browsers, but when you visit the URL of the PDF on mobile, the corrupted file that Cloudflare serves is 25 KB, and when viewed with a hex editor its contents are only the last bits of the actual file.

In the Workers dashboard there have been zero errors in the last 24 hours.

To replicate the issue you can try visiting this page, which has the embeds on it, on your phone: https://www-twenty.willbicks.com/projects/crow-research/ You should notice it works perfectly the first time, but if you refresh once or twice it will stop working and show a red corrupted-file warning. Once it’s stopped working, you can copy the direct URL to the PDF (https://www-twenty.willbicks.com/projects/crow-research/2021-01-22_Methods_for_Counting_Roosting_Birds.pdf) and paste it into a new browser tab, and you’ll get the corrupted partial file from the Cloudflare Worker. I tried this on my phone, and also had a friend with a separate internet connection and a separate phone try it; he encountered the exact same issue, and the corrupted 25 KB file he was served had the exact same hash.

This seems to occur with the Chrome app on Android but not the Firefox app, and I’m curious how it extends to other browsers.

Any help in nailing down this annoying issue would be very much appreciated!

Hey, I’ve just tested it with Chrome and Safari on iOS and didn’t notice any errors; I refreshed the page many times and the PDFs still embed nicely. I also didn’t see any errors with the PDF itself. What exactly are you seeing?

Seems like it’s specific to Chrome on Android then, for whatever reason. Thanks for the fast testing.

Here’s what the page looks like after you refresh, which is the standard error PDF.js spits out when it can’t read the PDF file. Perhaps more problematic though is here is what is downloaded when you go directly to the PDF url.

To further narrow down the source of this issue, I pushed the exact same website to a server running Apache, proxied through Cloudflare. The site running on Apache is fine, but the site running on Workers Sites serves corrupt PDFs after a few refreshes in Chrome.

Here are the links to both sites:
Apache, works perfectly: Will Bicks | Crow Research
Wrangler / CF Worker, breaks after a few refreshes: Will Bicks | Crow Research

It took many more refreshes, but I also have the exact same thing happening in Chrome on Windows, so it seems this isn’t necessarily specific to Android.

When you clear the browser cache the issue disappears initially, but after enough refreshes it returns. Based on the above comparison I’m left with the impression that this must be caused by something in Wrangler / CF Workers, but I’m coming to the community to see if anyone can help identify the cause of this bug!


Hey, are you using a specific template for your worker? Or have you written your own?

I noticed that Chrome on Android sends a Range header when requesting the PDF document. Are you respecting this header? E.g. 'range': 'bytes=0-65535'
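For reference, here’s a minimal sketch (my own helper, not kv-asset-handler code) of what parsing that single-range Range header looks like:

```javascript
// Hypothetical helper: parse a simple single-range "bytes=start-end" header.
// Returns { start, end } (either may be undefined for open-ended ranges like
// "bytes=500-" or suffix ranges like "bytes=-500"), or null if the header
// isn't a single byte range.
function parseByteRange(rangeHeader) {
  const match = /^bytes=(\d*)-(\d*)$/.exec(rangeHeader || '');
  if (!match || (match[1] === '' && match[2] === '')) return null;
  const start = match[1] === '' ? undefined : Number(match[1]);
  const end = match[2] === '' ? undefined : Number(match[2]);
  return { start, end };
}

// The header Chrome on Android sends for the first 64K:
console.log(parseByteRange('bytes=0-65535')); // { start: 0, end: 65535 }
```

A server that receives such a header and can satisfy it is expected to reply with 206 Partial Content and a matching Content-Range; ignoring it and returning a plain 200 with the full body is also legal, but mixing the two is not.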

I’m using the default page handler in Workers Sites to serve the PDF, and a cursory search of the source doesn’t turn up anything that interprets the Range header.

I’ve also recently confirmed that this is happening on Chrome for Windows as well, and on a different computer I got the corrupted file after only one refresh.

You did encourage me to dig deeper into the Range header and the getAssetFromKV function, which turned up this bug from a year ago: https://github.com/cloudflare/kv-asset-handler/issues/60

However, it would appear this has been fixed, and I’m using CloudFlare/[email protected] on GitHub Actions, which should be using a recent version of kv-asset-handler.

I believe the issue has something to do with the If-None-Match header. I think your worker is incorrectly caching an empty response, which is why it works on the initial request and then fails on subsequent ones.
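For background, the conditional-request dance looks roughly like this (a hand-written sketch, not the Workers Sites code): the server compares the If-None-Match value against the asset’s ETag and returns 304 with an empty body only on a match.

```javascript
// Hypothetical sketch of If-None-Match handling. The danger: a 304 tells the
// browser its cached copy is still valid, so if a broken (partial) body was
// ever cached under this ETag, the 304 keeps that broken copy alive.
function checkConditional(requestHeaders, assetEtag) {
  const clientEtag = requestHeaders['if-none-match'];
  if (clientEtag && clientEtag === assetEtag) {
    return { status: 304, body: null }; // revalidated: browser reuses its cache
  }
  return { status: 200, body: 'full asset bytes' };
}

console.log(checkConditional({ 'if-none-match': '"abc123"' }, '"abc123"').status); // 304
console.log(checkConditional({}, '"abc123"').status); // 200
```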

Interesting theory, and I do see the If-None-Match header added to the requests that fail, but I’d imagine that must be an issue with Wrangler or kv-asset-handler. I just upgraded from kv-asset-handler v0.0.11 to v0.0.12 but haven’t noticed a difference after re-deploying.

Any suggestions on how to resolve the issue if the problem is as you suggest?


Hi @bicks.will,

I take it you’re using Workers Sites, without any modification to the generated Workers script? To put it another way, if I were to reproduce your setup, would it just be something as simple as this?

wrangler generate --site mysite
cd mysite
# copy content into public/
wrangler publish

I can reproduce the issue you describe on your site using Chrome on Android, and also using Chrome on Linux using mobile device simulation (I used the Pixel 2XL preset).

When using mobile device simulation, I looked at the network traffic in devtools. On the last successful render before the buggy behavior, Chrome issues three requests:

  • Request A: regular GET request that completes normally.
  • Request B: range GET request for the first 64K. This completes with a 0-length 200 with no Content-Range header, but with an ETag header containing a strong ETag. I’m not sure how to explain this one.
  • Request C: range GET request for the last 24K-ish bytes. This completes with a 200 status code with a Content-Range header, and with the same strong ETag as the previous response. This is totally wrong: a Content-Range header is meaningless on a 200 response (they’re supposed to go on 206 responses), so the browser caches the last 24K-ish bytes as the full representation of the asset. This is the proximate cause of the bug.

When the buggy behavior occurs, I only see one request:

  • Request D: conditional GET request (if-none-match) using the strong ETag from requests B and C, that completes with a 304 response, telling the browser to use its broken cached copy from request C.

So, theory time: kv-asset-handler appears to mishandle range requests, replying with a 200 plus a Content-Range header instead of a proper 206 Partial Content. Chrome caches that partial body as the complete resource, and because the response carries a strong ETag, subsequent conditional requests revalidate with a 304, so the broken cached copy never gets replaced.

Presumably this should be an easy fix, but I’m not an expert on this codebase, so I’d like to get a second opinion. I’d encourage you to file a support ticket, if you haven’t already. (If you have, could you share the ticket number?) This will help get resources allocated to fixing this. I’ll file a kv-asset-handler issue.

Edit: ticket here: https://github.com/cloudflare/kv-asset-handler/issues/165




Thanks so much for the detailed and thoughtful analysis! I’ll admit I have never delved into caching enough to fully understand ETags and conditional if-none-match requests, but your insight into the matter is very helpful.

I have opened ticket number 2073705 now that it seems that this is an issue not unique to me.

I did add a bit of extra logic to the Workers site being tested above, so I created a new Workers site as you suggested to help eliminate that variable. I published the site using the code automatically generated by Wrangler, with one modification: to allow the PDF viewer to be embedded in an iframe, I commented out the following line in index.js.

//response.headers.set('X-Frame-Options', 'DENY')

You can view that new site at www-twenty-r1.willbicks.com, and I have observed the same bug occurring.

I will eagerly follow the issue #165 in the kv-asset-handler repository! Thanks again.


Hey @bicks.will @harris - thanks for diving into this! I put in a PR just now: https://github.com/cloudflare/kv-asset-handler/pull/166