S3 bucket for images served via CloudFlare

Hello, I outsourced my images to an S3 bucket. Now I would like to prettify the domain and deliver them through Cloudflare.

I'm having problems with the bucket address. To point the CNAME record at the bucket, the bucket needs a name in the format img.mydomain.com.

But when I create such a bucket, S3 only shows me a URL path like s3.region.amazonaws.com/img.mydomain.com/image.jpg instead of a subdomain format like img.mydomain.com.s3.region.amazonaws.com/image.jpg.

When I create a bucket without dots (.) in the name, I get my target bucket format, but the CNAME doesn't work with it.

Edit: When I try to point the CNAME to img.mydomain.com.s3.region.amazonaws.com, I get error 526 "Invalid SSL certificate".

Is there a way to get the desired result? What can I do on the S3 or CF end?

Thank you very much!

That’s a question on how to configure Amazon and something you need to clarify with them. Cloudflare will just proxy through whatever it gets.

I imagine you will need to configure your domain properly on Amazon's side and make sure that it responds with a valid certificate as well, but again, that's something to ask Amazon.

For now it will be best to unproxy the record; that way you hit the server directly and can fix your configuration there. Once that is all working on HTTPS, you can proxy the record again and it should work out of the box.

Use Workers.

The above example is based on Google Cloud CDN, but I believe it will work for Amazon S3 as well.
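A minimal sketch of that idea, assuming a dot-free bucket named my-images-bucket in eu-central-1 (both are placeholders, not taken from the linked example):

```ts
// Minimal sketch: proxy requests for img.example.com/<key> straight to an
// S3 bucket. "my-images-bucket" and the region are placeholder assumptions.
export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    // Swap the hostname for the bucket's virtual-hosted-style endpoint. A
    // bucket name without dots stays covered by AWS's
    // *.s3.eu-central-1.amazonaws.com wildcard certificate.
    url.hostname = "my-images-bucket.s3.eu-central-1.amazonaws.com";
    return fetch(new Request(url.toString(), request));
  },
};
```

With the Worker routed on img.example.com/*, the browser only ever sees Cloudflare's certificate, and the Worker fetches from a hostname the AWS wildcard certificate actually covers.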

I guess this is expected, as you have multi-level subdomains with S3 - their SSL cert only covers *.s3.region.amazonaws.com, but not *.*.*.s3.region.amazonaws.com.
No matter what you are trying to do, @sandro will always advise you not to go with other SSL encryption modes (except Full (Strict), which is what is recommended), although Full mode should work with less security, which is NOT recommended.
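To illustrate the certificate point: a TLS wildcard matches exactly one DNS label, so the extra levels in a dotted bucket name fall outside it. A toy sketch of that matching rule (illustration only, not real certificate validation):

```ts
// Toy illustration of the single-label wildcard rule: "*" covers exactly one
// DNS label, so additional subdomain levels fail the match.
function matchesWildcard(host: string, pattern: string): boolean {
  const hostLabels = host.split(".");
  const patternLabels = pattern.split(".");
  if (hostLabels.length !== patternLabels.length) return false;
  return patternLabels.every((label, i) => label === "*" || label === hostLabels[i]);
}

console.log(matchesWildcard(
  "mybucket.s3.eu-central-1.amazonaws.com",
  "*.s3.eu-central-1.amazonaws.com"
)); // true

console.log(matchesWildcard(
  "img.mydomain.com.s3.eu-central-1.amazonaws.com",
  "*.s3.eu-central-1.amazonaws.com"
)); // false - "img.mydomain.com" is three labels, the wildcard covers only one
```

Browsers and Cloudflare's Full (Strict) check apply the same single-label rule, which is where the 526 comes from.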

I just read through AWS documentation, and this is correct.

One other thing that would work is a CNAME record, but in that case you'd still need a valid certificate for that Amazon hostname (not for your own domain, though).

As always, if you had provided the actual hostnames it would be a lot easier :wink:

If his bucket name does not contain any period (.), then HTTPS will work just fine.

But in order for the CNAME to work, the bucket name must be the domain name the CNAME is created for. And here comes the issue: whenever you have a period (.) in your bucket name, your S3 URL becomes something like www.example.com.s3.region.amazonaws.com, and this is where SSL certificate validation fails, because AWS only serves a *.s3.region.amazonaws.com wildcard SSL certificate to the user.

Now, in order for a custom domain to work, there are a few options:

  1. Use Cloudflare Workers to rewrite the URL (as I mentioned in the previous reply). If you have a lot of requests, you might need to pay a bit for the Workers subscription.
  2. Set up a Cloudfront distribution - from there you can use your own domain name in front of the S3 bucket (see the sketch below) - but then what's the point of using Cloudflare if you already have Cloudfront as the CDN?
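For option 2, a rough sketch of what that could look like with the AWS CDK in TypeScript - the construct names and the dot-free bucket are assumptions, and a custom domain would additionally need domainNames plus an ACM certificate in us-east-1:

```ts
import { App, Stack } from "aws-cdk-lib";
import * as s3 from "aws-cdk-lib/aws-s3";
import * as cloudfront from "aws-cdk-lib/aws-cloudfront";
import * as origins from "aws-cdk-lib/aws-cloudfront-origins";

const app = new App();
const stack = new Stack(app, "ImageCdnStack");

// A plain, dot-free bucket name sidesteps the certificate problem entirely,
// because clients only ever talk to the distribution, never to S3 directly.
const bucket = new s3.Bucket(stack, "ImagesBucket");

new cloudfront.Distribution(stack, "ImagesDistribution", {
  defaultBehavior: { origin: new origins.S3Origin(bucket) },
  // For img.example.com you would also set domainNames plus an ACM
  // certificate issued in us-east-1 (omitted here).
});

app.synth();
```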

If HTTPS works fine there is not much to do. Set up a CNAME and be done with it. Of course it still needs to accept the domain in question.

Not with a CNAME, but again the host still needs to accept the domain as well.

The OP is really best off trying this with an unproxied record first and switching only once everything else checks out, but we are getting into Amazon specifics at this point.

Rewriting the URL with a Worker is certainly an option, but it would be a paid one if it exceeds the free limit.

Again, if we knew details we wouldn’t need to guess.

Thank you both for your replies. The Workers option would be a solution, even though I'm not familiar with that technique.

@sandro I created a CNAME for img.xyz.net and pointed it to img.xyz.net.s3.eu-central-1.amazonaws.com. As you recommended, I disabled the proxy for now. So I can see images from the bucket, but without SSL.

Yeah, there is no valid certificate configured on that machine for either hostname.

You’d need to get a valid HTTPS setup running first before even considering Cloudflare.

I remember there were some threads in the context of Amazon and they all suggested that Cloudfront is a requirement for HTTPS on Amazon, but that's something you'd need to discuss with Amazon, I'm afraid.

As it is right now you could not have a secure setup as that host does not return any valid certificate. Even the Worker option would not work as there is nothing you could rewrite that would be valid in this context.

If you can rename your bucket to something without an FQDN (as mentioned earlier), you would be covered by one of the wildcard certificates, and in that case you could set up a CNAME record pointing to WHATEVER.s3.eu-central-1.amazonaws.com.

That might be the only way to get a valid SSL connection working.
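For completeness, that record can also be created through the Cloudflare v4 API rather than the dashboard; a sketch, with placeholder zone ID, token and hostnames:

```ts
// Sketch: create the CNAME via the Cloudflare v4 API. The zone ID, API token
// and both hostnames below are placeholders.
const ZONE_ID = "<your-zone-id>";
const API_TOKEN = "<your-api-token>";

const res = await fetch(
  `https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records`,
  {
    method: "POST",
    headers: {
      Authorization: `Bearer ${API_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      type: "CNAME",
      name: "img.example.com",
      content: "WHATEVER.s3.eu-central-1.amazonaws.com",
      proxied: false, // keep it unproxied while testing, as suggested above
    }),
  }
);
console.log(await res.json());
```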

https://docs.aws.amazon.com/AmazonS3/latest/userguide/VirtualHosting.html#VirtualHostingCustomURLs

Your bucket name must be the same as the CNAME. For example, if you create a CNAME to map images.awsexamplebucket1.net to images.awsexamplebucket1.net.s3.us-east-1.amazonaws.com, both http://images.awsexamplebucket1.net/filename and http://images.awsexamplebucket1.net.s3.us-east-1.amazonaws.com/filename will be the same.

This is what I already mentioned in post #6. It's impossible not to use an FQDN as the bucket name if he wants to CNAME to it.

In that case SSL itself won’t work. @kontakt26, either look into the Cloudfront option or use some alternative hosting.

I guess Cloudfront is a more expensive option. Go with Workers instead.

But yeah, we'd need to see the estimated number of requests and bandwidth usage of the OP's website.

As mentioned earlier, Workers won't work here either, as it will be impossible to establish a valid SSL connection. If that were possible, a CNAME record would also do.

If he's going to use Workers to rewrite the URL, then there's no need to use his FQDN as the bucket name. Just use a normal name without a period (.) as the bucket name. :smirk:

If there's a way to configure Amazon like Google and route it to a hostname with a valid SSL setup, only requiring an additional path element, then yes, that would work.

This is definitely working:

[screenshot]

And this will not work:

[screenshot]

So I think it's clear now. @kontakt26 may go with either option (Cloudfront or Workers).

Sure, there’s the wildcard, but can you still reference the right location?

Do you mean referencing the correct S3 bucket? Yes, if the OP uses the same S3 bucket name.