Cloudflare Tunnel vs Application - too easy to make a costly mistake?

I’m currently evaluating Cloudflare Zero Trust, and I have a concern about the separation of the “Tunnels” and “Applications” concepts.

In my mind, the concepts are loosely translated as:

  • “Applications”: Access policies for a hostname pattern.
  • “Tunnels”: the network connection between my resource and the Cloudflare backend via the cloudflared daemon.

I expected to be able to access my applications only after I had defined the proper policy for them in Applications.

However, it looks like the situation at the moment is the opposite. When I create a Public Hostname in a tunnel config, the CNAME record is created automatically and my backend becomes immediately available to the public. This is because hostnames that are not covered by any Application policy can be accessed from the public web.

This caught me off guard a few times in my tests when I changed the URL in the tunnel but forgot to adjust it in the Application. As a result, the application was publicly accessible, but I didn’t realise that immediately since I assumed it was my login cookie that was keeping me logged in.

To avoid potential issues like this in the future, I created a “catch-all” application with a single “block everyone” rule for *.acme.com. Hopefully this minimizes the impact of similar mistakes on my side.
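In case it helps anyone, here is a rough sketch of how that catch-all could be scripted against the Access API. The account ID, API token, names, and exact field layout below are placeholders from memory, so check them against the current API docs before relying on this:

```bash
# Sketch only: create a catch-all Access application for *.acme.com
# and attach a single "block everyone" policy to it.
# CF_ACCOUNT_ID and CF_API_TOKEN are assumed to already be set in the environment.

APP_ID=$(curl -s -X POST \
  "https://api.cloudflare.com/client/v4/accounts/$CF_ACCOUNT_ID/access/apps" \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{"name":"Catch-all deny","domain":"*.acme.com","type":"self_hosted","session_duration":"24h"}' \
  | jq -r '.result.id')

# Deny everyone on that application; more specific applications can still allow access.
curl -s -X POST \
  "https://api.cloudflare.com/client/v4/accounts/$CF_ACCOUNT_ID/access/apps/$APP_ID/policies" \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{"name":"Block everyone","decision":"deny","include":[{"everyone":{}}],"precedence":1}'
```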

However, the question still stands: why is this designed this way? It seems quite error-prone.
Ideally, I would expect the actual CNAME-to-Tunnel binding to happen at the Application config stage, or, alternatively, an option to block any access that is not covered by an Application policy.

Did I miss anything? What do you think?

Cheers!

Because tunnels are designed to connect resources to Cloudflare’s edge. They have nothing to do with Zero Trust Application policies, or at least they are not designed exclusively for that use case.

Lots of companies use tunnels to expose public resources like www, and lots of companies put access policies in front of resources without using tunnels to expose them.

Setting a default deny is most likely the right approach for your use case, so you are on the right track.


That doesn’t answer the why. Yes, tunnels and policies are two different things. But there is no reason why a tunnel definition in the UI should open your resource to the public web by default - which doesn’t happen when using cloudflared and a config file.


Dashboard management is just an abstraction of local management - locally, you’d have your config file (or pass in --url on the command line) and also run cloudflared tunnel route dns <tunnel-name> <hostname>.
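Roughly, the locally managed equivalent looks like this (the tunnel name, hostname, and paths are just examples, not a prescription):

```bash
# ~/.cloudflared/config.yml (shown here as comments) would contain something like:
#
#   tunnel: my-tunnel
#   credentials-file: ~/.cloudflared/<tunnel-UUID>.json
#   ingress:
#     - hostname: app.acme.com
#       service: http://localhost:8080
#     - service: http_status:404   # fallback for any hostname not listed above

# Point the hostname at the tunnel with a CNAME, then run it:
cloudflared tunnel route dns my-tunnel app.acme.com
cloudflared tunnel run my-tunnel
```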

Sure, Tunnels and Access pair well, but by no means are Tunnels specific to services that shouldn’t be publicly accessible. They’re great for ensuring origin security (a firewall that allows nothing inbound is better than a firewall that only allows Cloudflare) and also help in situations where you don’t have control over inbound NAT, since cloudflared establishes long-lived outbound connections and uses them for ingress instead.

Local management will always have more flexibility; dashboard management is just the easier route to entry when starting off with Tunnels, and it makes sense to make it as easy as possible.

It’s a lot easier than telling people ‘create the public hostname, now go back to the Tunnels page and copy the UUID under the tunnel name, and then create a CNAME for that public hostname which points to UUID.cfargotunnel.com’.
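If you do want to go the manual route, that CNAME step is just an ordinary proxied DNS record pointing at <UUID>.cfargotunnel.com, e.g. via the DNS API (zone ID, token, hostname, and UUID below are placeholders):

```bash
# Sketch: manually create the proxied CNAME for a public hostname served by a tunnel.
curl -s -X POST \
  "https://api.cloudflare.com/client/v4/zones/$CF_ZONE_ID/dns_records" \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{"type":"CNAME","name":"app.acme.com","content":"<tunnel-UUID>.cfargotunnel.com","proxied":true}'
```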

The best practice you’ve arrived at is one I have been recommending to Cloudflare customers in this forum for several years (and the UX is just a few months old). There’s nothing special about the UX vs a config file.

Tunnels map a hostname to an origin. There is nothing magic or special about them from a security perspective except what you are ascribing to them.

The better question would be: why do :orange: (proxied) records allow anyone to access the website by default?