Transform Rules Cannot Set Host Header?

I am attempting to create a transform rule that modifies the host header for requests sent to my origin, but I receive the following error when trying to save the rule, both via the console and the API:
[screenshot of the error message]

The documentation does not indicate that this header is restricted from modification:

Important

  • You cannot modify or remove HTTP request headers whose name starts with cf- or x-cf-, except for the cf-connecting-ip HTTP request header, which you can remove.
  • If you modify the value of an existing HTTP request header using an expression that evaluates to an empty string ("") or an undefined value, the HTTP request header is removed.
  • Currently, there is a limited number of HTTP request headers that you cannot modify. Cloudflare may remove restrictions for some of these HTTP request headers when presented with valid use cases. Create a post in the community for consideration.
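For reference, the documented restrictions above can be expressed as a small validator. This is an illustrative sketch only, not an official SDK; the `action`/`action_parameters` field names follow my reading of Cloudflare's Rulesets API and should be treated as assumptions. Note that `Host` passes these documented checks, which is exactly why the server-side rejection was surprising.

```python
# Sketch of a Transform Rule payload builder enforcing only the
# restrictions the documentation actually states. Field names are
# assumed from the Rulesets API, not verified.

RESERVED_PREFIXES = ("cf-", "x-cf-")

def header_operation(name, value):
    """Return the per-header action entry, applying documented rules."""
    lower = name.lower()
    if lower.startswith(RESERVED_PREFIXES):
        # Only cf-connecting-ip may be touched, and only to remove it.
        if lower != "cf-connecting-ip" or value is not None:
            raise ValueError(f"header {name!r} cannot be modified")
    if value in (None, ""):
        # An empty or undefined value removes the header.
        return {"operation": "remove"}
    return {"operation": "set", "value": value}

def transform_rule(expression, headers):
    """Assemble a 'rewrite' rule body for the given header edits."""
    return {
        "action": "rewrite",
        "expression": expression,
        "action_parameters": {
            "headers": {n: header_operation(n, v) for n, v in headers.items()}
        },
    }
```

Per these rules alone, `header_operation("Host", "bucket.example")` would be accepted, so the rejection must come from the undocumented server-side restriction being discussed.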

I searched the community for other references to transform rules modifying headers, and the closest I found was this thread, but that concerned the Authorization header, which was eventually allowed to be modified. I would like to confirm whether this is intentional or a bug, or whether there is a different, preferred way to modify the host header sent to the origin on the Pro plan?

Thanks!

See - Not possible to override the Host header on Workers requests - This is not allowed to prevent abuse of Cloudflare’s systems, and I imagine the same reasoning applies to transform rules as it does to Workers.

Workers were changed to allow the target Host header to be set to sites on the same account, so maybe that could be applied to transform rules as well. Regardless, you should explain your use case for needing to change the Host header at all. The most common use case is to get a free SaaS subdomain without the SaaS provider knowing about it or getting paid for it, which is not behavior CF is trying to encourage.

2 Likes

A potential use case for this would be hooking up an s3-compatible cloud storage bucket behind Cloudflare using a CNAME. These providers usually require the host header to be set to a provider-specific value so they can identify the bucket from the subdomain.
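To make the above concrete, here is a toy illustration of virtual-hosted-style addressing: the provider derives the bucket name from the Host header, so a proxy in front must present that exact value. The endpoint name is AWS's public S3 endpoint; the helper functions themselves are invented for this sketch.

```python
# Why the Host header matters for S3-style storage: the bucket name is
# encoded in the Host, so an unrecognized Host resolves to no bucket.

S3_ENDPOINT = "s3.amazonaws.com"

def bucket_host(bucket):
    """Host header value a proxy must present to reach a given bucket."""
    return f"{bucket}.{S3_ENDPOINT}"

def resolve_bucket(host):
    """Provider-side view: recover the bucket name from the Host
    header, or None for an unrecognized (e.g. custom) domain."""
    suffix = "." + S3_ENDPOINT
    return host[: -len(suffix)] if host.endswith(suffix) else None
```

A request arriving with `Host: cdn.example.com` resolves to no bucket at all, which is why the CNAME alone is not enough without a Host rewrite.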

In a way, that is very similar to my use case - only I did not need the host header, as our provider is not using an s3-compatible scheme, and can deliver files based on path only.

I don’t understand how the potential attack described in the post is relevant. You won’t know their origin IP, making it moot in most cases. If you just hook them up as a CNAME, Cloudflare would be sending requests to itself, which surely doesn’t work as an attack either. You could also use a unique client certificate for each zone with authenticated origin pulls. In fact, you could do that right now using a custom origin pull certificate, unless that’s another one of those annoying enterprise-only features (the documentation doesn’t say).

But if you’re still worried about that, why not introduce a whitelist of values we can freely set the host header to, along with a process for people to get new common cloud storage providers included after some manual review?
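The whitelist idea could be as simple as a suffix check. This is a toy version of the proposal; the suffix list is invented for illustration, and no such list exists today.

```python
# Proposed allowlist sketch: only accept Host overrides whose value
# ends with an approved storage-provider suffix. Suffixes are
# hypothetical examples, not an actual Cloudflare feature.

APPROVED_SUFFIXES = (".s3.amazonaws.com", ".storage.googleapis.com")

def override_allowed(host):
    """True if the requested Host override targets an approved provider."""
    return host.endswith(APPROVED_SUFFIXES)
```

Under this scheme, pointing at a known storage provider would pass while pointing at an arbitrary third-party site would not, which addresses the abuse concern without an enterprise plan.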

1 Like

This applies to every server, not only Cloudflare. With a free-for-all host header you could literally impersonate any website.

2 Likes

You can also do that without using Cloudflare - any reverse proxy could. What’s the extra risk I’m missing?

The only difference is that the request will be coming from Cloudflare, which would matter to the target if they’re also a Cloudflare customer, but they can identify whether requests are for their zone using a custom authenticated pull cert.

The difference is that Cloudflare would be fronting it: with massive amounts of bandwidth and servers available, blocking the traffic is harder (millions of IPs versus a few), especially if the third party is also using Cloudflare.

Sure, but it’s an additional step to configure, which may not be possible.

1 Like

If it’s just about impersonating a website, you can actually do that right now using Cloudflare Workers - just fetch() their content using their public URL and return it. I don’t think that is relevant to the attack this restriction is intended to protect against.

A noteworthy attack can only happen if three conditions are true at the same time:

  • they are using Cloudflare
  • you have discovered their origin IP behind Cloudflare
  • they have not used a custom pull certificate

You’d be quite surprised just how many people don’t do any custom stuff with Cloudflare, simply adding their website and leaving it at that, even with just Flexible SSL.

You’d also be surprised at just how far attackers are willing to go - fraud is rampant, and anyone with a stolen credit card/identity and $5 can buy a domain and sign up for CF.

In terms of being able to modify Host in transform rules, the attack goes like this:

  1. Malicious actor has domain malware.example on Cloudflare and creates the subdomain target with the IP address of the target server.
    1a. This target did their due diligence by setting a default vhost and blocking IPs other than Cloudflare’s, but doesn’t use per-zone authenticated origin pulls (they have not gone through the trouble of setting up custom authenticated origin certs). This is the most common configuration for competent sysadmins who use CF primarily for DDoS protection, and it has been the standard for a long time.
  2. Malicious actor adds transform rule to set Host to target.example
  3. Malicious actor hits his own subdomain with malicious requests

In this scenario, CF doesn’t apply the target zone’s firewall rules or trigger the WAF, since the request never hits that zone - just the IP, with a custom Host header. Short of some mechanism where Cloudflare introspects the hostname to see if it’s an active zone and blocks the transform rule if so, this would cause issues.
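The introspection mechanism described above could look something like the following sketch. Nothing here is a real Cloudflare feature; the function names and the "same account" rule (which Workers reportedly applies) are assumptions for illustration.

```python
# Hypothetical safeguard: before accepting a Host override, find which
# registered zone the target hostname falls under, and reject overrides
# pointing at a zone owned by a different account.

def zone_of(hostname, active_zones):
    """Longest-suffix match of a hostname against registered zones."""
    parts = hostname.split(".")
    for i in range(len(parts)):
        candidate = ".".join(parts[i:])
        if candidate in active_zones:
            return candidate
    return None

def override_permitted(target_host, account_zones, active_zones):
    """Allow the override unless it targets someone else's zone."""
    zone = zone_of(target_host, active_zones)
    if zone is None:
        return True  # not a zone on the platform; no customer to protect
    return zone in account_zones  # same-account overrides only
```

Note that this check only stops the origin-IP attack: an override pointing at a hostname that is not on Cloudflare at all (like the SaaS-subdomain case below) would still pass.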

Another attack is simply about spam, where the malicious actor attacks non-Cloudflare hostnames from Cloudflare IP addresses. With a custom Host header, there’s no way for website admins to see who is sending these requests, other than that they come from Cloudflare’s primary IP ranges.

Finally, another attack is like the one I originally described, where a customer on a budget signs up for a free SaaS company’s service (e.g. Freshdesk, uptime robot, etc.) whose plan only includes a public subdomain like company.freshdesk.com, not a custom domain. These customers could just use a transform rule to have Cloudflare pull from company.freshdesk.com with the Host header set to the same thing, so the SaaS provider has no idea about the custom domain and it looks like all incoming requests are for the subdomain. This is generally just bad manners, and I’m sure SaaS companies wouldn’t appreciate Cloudflare enabling this behavior.

Also note that DNS history is very easy to find - securitytrails, for instance, provides DNS history for free. Another situation is when the website owner doesn’t do their due diligence or doesn’t take hiding their server seriously - you can find many websites with Cloudflare origin certificates or self-signed certificates on shodan or censys because they didn’t block IPs outside of CF, or because they don’t have a default server that serves a fake certificate.

6 Likes

In this scenario, CF Workers hit another Cloudflare zone. It’s fast since it stays within the datacenter, but in this configuration the Worker does hit the frontend load balancer within that DC, so all rules, Workers, etc. are applied before CF makes the request to the target domain’s origin server.

2 Likes

Thanks, I see now why it’s potentially an issue. But, perhaps there could be some process for non-problematic usages to get approved?

I was lucky to find a cloud storage provider with sensible pricing that does NOT use an s3-compatible scheme, otherwise this would have been a massive problem.

In addition, Workers requests do come from a specific IP inside Cloudflare’s systems and set a header, allowing you to block them specifically.

The use case is serving objects from an AWS S3 bucket that was created years ago and does not have a bucket name compliant with accessing it via a CNAME alias, just like @zeblote mentioned. Using Cloudflare to proxy S3 is mentioned in the following 5 places in documentation and other Cloudflare articles that I’ve found, so I did not get the impression that it was something that was discouraged:

  1. Adding vendor-specific DNS records to Cloudflare - There is a section specific to Amazon S3

    Add a CNAME record for the AWS bucket in Cloudflare DNS

  2. Configuring an Amazon Web Services static site to use Cloudflare

  3. Per Origin Host Header Override - Though the restrictions mentioned at the bottom seem a bit confusing -

    We allow fully qualified domain names (FQDN) and IP addresses that can be resolved by public DNS.

    followed by

    The FQDN in the Host header must be a subdomain of a zone associated with the account, which is applicable for partial zones and secondary zones.

    In the blog post above, this example is referenced:

    To reach example.com hosted on Heroku, you would set the heroku url example.herokuapp.com as the origin Host header "Host: example.herokuapp.com"

    but the documentation states that you must create a custom Heroku subdomain on your existing domain to be able to use it.

  4. The all plans comparison page under Customization & Optimization lists this as an option for enterprise customers:

    Header Rewrites - Used to rewrite headers if customers don’t have a way of controlling their headers. For example, if your Amazon S3 bucket requires a specific host header set in order to fetch assets.

  5. Using Page Rules to Re-Write Host Headers

    A common use case for this functionality is when your content is hosted on an Amazon S3 bucket. Amazon has designed their system to only accept host headers that have the same name as the bucket hosting your content. In this way, a request to "Host: your-domain.com" must be re-written to "Host: your-bucket.s3.amazonaws.com", or else the request will be denied.
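The rewrite described in that last excerpt amounts to a simple lookup. A toy version, assuming a hypothetical mapping table (on Cloudflare this is configured in a page rule, not coded):

```python
# Toy version of the page-rule Host rewrite: replace the incoming Host
# with the bucket host the origin expects. The mapping is illustrative.

HOST_OVERRIDES = {
    "your-domain.com": "your-bucket.s3.amazonaws.com",  # hypothetical
}

def rewrite_host(headers):
    """Return a copy of the request headers with Host rewritten, if mapped."""
    out = dict(headers)
    host = out.get("Host", "")
    out["Host"] = HOST_OVERRIDES.get(host, host)
    return out
```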

I get that the last two are specific to enterprise plans, but our organization does not need any features beyond the Pro plan, which is why, when I came across the new Modify HTTP request headers with Transform Rules functionality, I was hopeful it would fit the use case mentioned in the places above. Unfortunately, after testing, I was left under the impression that changing the host header with a transform rule was intentionally prevented in order to drive people to the enterprise plan. I had not considered the implications this ability poses for those with malicious intent, but it does make sense, and that is most certainly NOT the case here. (I am a bit curious whether safeguards/restrictions similar to the ones mentioned for the origin host header override, or Workers, are imposed on this capability in enterprise page rules?)

Because the error message when attempting to save this transform rule was somewhat vague, and this is a new feature, I thought I would ask for clarification. I’d be curious to know whether this capability with the host header would ever be considered for common domains like amazonaws.com?