I’m trying to understand whether something is broken or working as expected. I created a test R2 bucket a few days ago and today set it to public. The UI confirms it’s public and gives me a public URL like https://pub-klsdfklsdklfjsdkljfklsdjklfj.r2.dev (I’m definitely not using the S3 API URL). The help doc Public buckets · Cloudflare R2 docs also says: “You can now access the bucket and its objects using the Public Bucket URL.”
I am indeed able to access files directly if I know their names and paths, but directory listings return a 404 page.
Given the ambiguous wording on the help site, can someone confirm definitively whether public access should allow open directory listings, and whether the current behavior (404 on listings) is by design and will not change? Or is public access only for direct file access, where you have to know the file name and path? And can some files then be set to private using the S3 API?
In my case, I’d like to make some files (images) available without protection, and others accessible only with a hash that expires after, say, an hour. I do NOT want anyone to be able to list the files.
I don’t see why a Worker would be needed. The logic would be: make the bucket public; when uploading image files, upload normally; when uploading other files, use the S3 API to set them private, then generate signed links with a predetermined expiration using something like Configure `aws-sdk-php` for R2 · Cloudflare R2 docs. Why would I need a Worker here?
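For reference, the signed-link half of that logic doesn’t strictly need aws-sdk-php; presigning is just local SigV4 math. Here’s a minimal sketch in Python using only the standard library (in practice you’d use boto3 or aws-sdk-php). The account ID, bucket, key, and credentials are all placeholders, and R2 expects region `auto`:

```python
import hashlib
import hmac
import urllib.parse
from datetime import datetime, timezone

def presign_get(account_id, bucket, key, access_key, secret_key, expires=3600):
    """Build a SigV4 query-presigned GET URL for an R2 object (placeholder creds)."""
    host = f"{account_id}.r2.cloudflarestorage.com"
    now = datetime.now(timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    scope = f"{datestamp}/auto/s3/aws4_request"  # R2 uses region "auto"

    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    # Canonical query string: params sorted by name, fully URL-encoded
    canonical_query = "&".join(
        f"{urllib.parse.quote(k, safe='')}={urllib.parse.quote(v, safe='')}"
        for k, v in sorted(params.items())
    )
    canonical_request = "\n".join([
        "GET",
        f"/{bucket}/{urllib.parse.quote(key)}",
        canonical_query,
        f"host:{host}\n",      # canonical headers block ends with a newline
        "host",                # signed headers list
        "UNSIGNED-PAYLOAD",
    ])
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256",
        amz_date,
        scope,
        hashlib.sha256(canonical_request.encode()).hexdigest(),
    ])
    # Derive the signing key: date -> region -> service -> "aws4_request"
    k = ("AWS4" + secret_key).encode()
    for part in (datestamp, "auto", "s3", "aws4_request"):
        k = hmac.new(k, part.encode(), hashlib.sha256).digest()
    signature = hmac.new(k, string_to_sign.encode(), hashlib.sha256).hexdigest()
    return (f"https://{host}/{bucket}/{urllib.parse.quote(key)}"
            f"?{canonical_query}&X-Amz-Signature={signature}")
```

The resulting URL expires after `expires` seconds, enforced server-side by R2, so the link goes dead on schedule without any Worker involved.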
Doh! I wasn’t aware of this. In that case, we’ll continue using HMAC verification through the CF WAF. Functionally it should be the same, I think, as long as nobody ever finds out the public bucket URL? Is it possible for anyone to deduce or discover it somehow?
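For anyone following along, the WAF-side HMAC scheme means the origin issues timed tokens that a firewall rule (e.g. one using Cloudflare’s `is_timed_hmac_valid_v0()`) verifies. A rough token-issuing sketch, with the secret and path as placeholders:

```python
import base64
import hashlib
import hmac
import time
import urllib.parse

def hmac_token(secret, path, now=None):
    """Return 'timestamp-mac' to append as e.g. ?verify= on the protected path."""
    ts = str(int(time.time()) if now is None else now)
    # MAC over path + timestamp, base64-encoded then URL-encoded
    mac = hmac.new(secret.encode(), (path + ts).encode(), hashlib.sha256).digest()
    return f"{ts}-{urllib.parse.quote_plus(base64.b64encode(mac).decode())}"

# Usage: /downloads/report.pdf?verify=<token>; the WAF rule recomputes the MAC
# and rejects requests whose timestamp is older than the configured max age.
```

Unlike S3 presigning, expiry here is enforced by the WAF rule’s age check rather than by the storage layer, so the token lifetime lives in the firewall rule, not in the URL.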
Since I’ve attached a domain to the bucket: as long as we use the CNAME, everything will flow through Cloudflare and get CF-cached as well as CF-WAFed, right? And if we use the public bucket URL, CF WAF and cache are bypassed. But if we use the CNAME, do we lose the free R2 egress and have to pay for the bandwidth?
Right, but to serve the images from R2 with the benefit of browser caching, I need the bucket to be public; otherwise the signing params would be regenerated on every page load and the files would never be cached by the browser.
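One possible workaround for the caching problem, if the bucket stays private: pin the signing timestamp to a fixed window so every page load within the window produces the identical signed URL, which the browser can then cache. A sketch of that idea (the helper and window size are hypothetical, not an R2 feature):

```python
import time

def window_start(window_seconds=3600, now=None):
    """Current time rounded down to the start of the signing window."""
    t = int(time.time()) if now is None else now
    return t - (t % window_seconds)

# Sign using window_start() as the "issued at" time and a lifetime of
# window_seconds * 2, so a URL issued near the end of one window remains
# valid through the next; the URL only changes once per window.
```

The trade-off is that a leaked URL stays usable for up to two windows, so the window size bounds both cache reuse and exposure.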
CNAME or pub-<id>.r2.dev is the same pricing- and feature-wise on R2’s end. The CNAME adds your account’s Cloudflare features on top.
To configure a CNAME (which is not actually a CNAME you set up in the DNS tab; it’s the custom domain in the bucket’s config), you don’t need to allow the public URL (pub-<id>.r2.dev). They are separate setups.
The signing will be limited to just the paths you set up in the WAF; all other paths will be freely accessible.
I already configured this CNAME there, and it also shows up in the DNS entries pointing to public.r2.dev, which is good since it doesn’t expose the full pub-<id>.r2.dev subdomain. I realize that I don’t need to enable public access to enable this CNAME, but I still need to enable it in order to serve images without S3 auth hashes, unless I’m misunderstanding something…
Update: After more testing, I now see what you meant… the CNAME mapping is effectively the same as the public setting when it comes to file access: the bucket acts as a public one when accessed via the CNAME, and all WAF rules apply on top. And the bucket’s public setting can be disabled without affecting the CNAME. Wow, this is so confusing and ambiguous, borderline misleading.
I think this is the way I’ll go for now. The only downside is that all the transfers will count toward our CF Enterprise bandwidth allocation. I was hoping to drive that down to almost nothing by serving directly from R2 thanks to free egress, but then either all files would require auth or none would, or we’d have to use two buckets. But if all this bandwidth now costs nothing, I’d be able to renegotiate the CF contract down and save a bunch of $$$. Am I understanding this correctly?