For Workers & Pages, what is the name of the domain?
n/a (this is for R2)
What is the issue or error you’re encountering?
When I upload objects using aws-sdk-go, the uploads are performed as AWS chunked uploads. This leaves each uploaded object with Content-Encoding set to “aws-chunked”, which does not make sense for retrieval. AWS S3 itself ignores aws-chunked in the Content-Encoding field on uploads, which is how it’s supposed to work.
What steps have you taken to resolve the issue?
I have tried disabling the request and response checksums (as described in the Cloudflare docs for using aws-sdk-go with R2), but it made no difference. As a workaround I could probably do a CopyObject after uploading to make a copy of the object without the header, but that’s a bit of a pain. I guess it must also be possible to disable chunked uploading somehow, but I’m not sure how to do that with aws-sdk-go.
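For reference, the checksum-disabling setup looked roughly like this (a sketch assuming aws-sdk-go-v2; the account ID in the endpoint is a placeholder, and credentials come from the environment):

```go
import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func newR2Client(ctx context.Context) (*s3.Client, error) {
	cfg, err := config.LoadDefaultConfig(ctx,
		config.WithRegion("auto"),
		// Per the Cloudflare docs: only calculate/validate checksums
		// when an operation actually requires them.
		config.WithRequestChecksumCalculation(aws.RequestChecksumCalculationWhenRequired),
		config.WithResponseChecksumValidation(aws.ResponseChecksumValidationWhenRequired),
	)
	if err != nil {
		return nil, err
	}
	return s3.NewFromConfig(cfg, func(o *s3.Options) {
		o.BaseEndpoint = aws.String("https://<ACCOUNT_ID>.r2.cloudflarestorage.com") // placeholder
	}), nil
}
```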
What are the steps to reproduce the issue?
1. Enable public access to your R2 bucket.
2. Upload an object to R2 via aws-sdk-go with default settings.
3. curl the object from its public URL and observe that it is served with Content-Encoding: aws-chunked (which is wrong: that encoding only makes sense for uploads, and I never set ContentEncoding in s3.PutObjectInput anyway, so this header should not be sent); see the example below.
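Concretely, step 3 looks something like this (the public bucket URL is a placeholder):

```
$ curl -sI https://pub-xxxxxxxxxxxx.r2.dev/example.js | grep -i content-encoding
content-encoding: aws-chunked
```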
I have found a workaround that avoids the copy. If you specify an object checksum manually, the AWS Go SDK doesn’t do a chunked upload, I guess because it wouldn’t be able to calculate the checksum of the chunks up front? Anyway, R2 seems to support CRC32 checksums, so if your object body is in an io.Reader named r, you can do something like:
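(Sketch of the idea, assuming aws-sdk-go-v2, where s3.PutObjectInput has a ChecksumCRC32 field; client, bucket, and key are placeholders.)

```go
import (
	"bytes"
	"context"
	"encoding/base64"
	"encoding/binary"
	"hash/crc32"
	"io"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func putWithCRC32(ctx context.Context, client *s3.Client, bucket, key string, r io.Reader) error {
	// Buffer the whole body so the checksum can be computed up front
	// (this is the "entire object in memory" caveat mentioned below).
	body, err := io.ReadAll(r)
	if err != nil {
		return err
	}

	// S3-style CRC32 checksums are the big-endian 4-byte IEEE CRC32,
	// base64-encoded.
	sum := make([]byte, 4)
	binary.BigEndian.PutUint32(sum, crc32.ChecksumIEEE(body))

	// Supplying the checksum ourselves means the SDK has no trailing
	// checksum to send, so it skips the aws-chunked upload encoding.
	_, err = client.PutObject(ctx, &s3.PutObjectInput{
		Bucket:        aws.String(bucket),
		Key:           aws.String(key),
		Body:          bytes.NewReader(body),
		ChecksumCRC32: aws.String(base64.StdEncoding.EncodeToString(sum)),
	})
	return err
}
```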
And then there will be no Content-Encoding header on the object. But this is a bit hacky and requires you to read the entire object into memory! So it would be nice to have a proper solution here. (I think that would require changes on the R2 server side though?)
I’m also experiencing this issue on my site. Until recently, my static files (like JS and CSS) were served with a Content-Encoding of gzip or br, but now they’re served with Content-Encoding: aws-chunked instead.
I’m using aws s3 sync to upload static assets (images, JS, CSS, etc.) to my R2 bucket. This is done using the awscli Python package (version 1.38.29) along with boto3 (version 1.37.29).
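For reference, the invocation is roughly this (local directory, bucket name, and account ID are placeholders):

```
aws s3 sync ./assets s3://my-bucket \
  --endpoint-url https://<ACCOUNT_ID>.r2.cloudflarestorage.com
```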
According to the Cloudflare docs, they should be served compressed, right? There could also be a recent change in the AWS CLI that I may have missed.