Architecture for Large Enterprise

What is your current architecture for a large enterprise fronting web traffic via Cloudflare? Do you use tunnels or on-prem appliances for load balancing/reverse proxying? We are looking to revisit our current structure, with Netscalers fronting dedicated reverse proxy appliances in the DMZ that forward requests to our internal servers where the applications sit.

Is Cloudflare Tunnels something you can recommend for enterprise datacenter traffic, with applications receiving a total of 200 million requests amounting to 20-30 TB? If so, is there enterprise support for it? I do not see any tier differences for that. Is it free except for Load Balancing?

It depends.

If you are using those appliances to provide load balancing, then it gets trickier with tunnels. In some instances you may be able to use equivalent LB features in Cloudflare with equal or better performance, but there are sometimes router-specific features that are either really specialized (and maybe could be replicated using custom code in Workers) or do local network things that can’t really be done anywhere else.

But roughly 80% of the time, when I’ve worked with Enterprise customers who were thinking of dropping dedicated appliances or substituting Cloudflare’s LB for another cloud provider’s, it was pretty straightforward to do so (the devil is always in the details, though).

Some of Cloudflare’s largest customers (1M+ RPS) utilize tunnels exclusively to front their architecture. Being able to incorporate tunnels into infrastructure as code also allows for some pretty cool increases in agility and service delivery.

I would say start with a non-critical app and/or dev/staging to get the tooling, monitoring and processes down and then expand from there. If you’re on an Enterprise Plan you can also speak with your SE/CSE about more detailed specifics regarding your architecture and use case(s).

Using Cloudflare LB + Tunnels has (in my experience) been significantly less costly than using Cisco/Citrix hardware.


Thanks for the reply, this is very helpful. Our environment basically uses Netscaler LBs to front traffic and send it, by host, to the correct appliances that do the reverse proxying.

The appliances then forward traffic to the servers/farms using a load-balancing method. They are also used for URL rewrites, cert termination, redirects, and such.

Well, rewrites, redirects, and such are potentially possible with tunnels (or Workers), but the use cases would need to be individually vetted. If something can’t be done with Cloudflare, a container running a simple nginx server could potentially be a viable alternative to a $$$ appliance for the corner cases of internal transformations that Cloudflare can’t solve.
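To make the Workers option a bit more concrete, here is a minimal sketch of a Worker that handles one redirect and one path rewrite before passing the request on to the origin. All hostnames and paths are made-up placeholders, not anything from your environment:

```typescript
// Minimal Cloudflare Worker sketch (module syntax, types from @cloudflare/workers-types).
// Handles an example redirect and an example path rewrite, then proxies to the origin.
export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);

    // Example redirect: send an old path to its new home with a 301.
    if (url.pathname === "/old-landing") {
      url.pathname = "/new-landing";
      return Response.redirect(url.toString(), 301);
    }

    // Example rewrite: strip a legacy prefix before the request reaches the origin.
    if (url.pathname.startsWith("/legacy/")) {
      url.pathname = url.pathname.replace("/legacy/", "/");
    }

    // Pass the (possibly rewritten) request through to the origin.
    return fetch(new Request(url.toString(), request));
  },
};
```

Whether a given transformation belongs in a Worker, in the tunnel’s origin config, or in a small nginx container is exactly the kind of per-use-case vetting mentioned above.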

Cert termination… well, you /can/ direct traffic that was received over HTTPS at Cloudflare’s edge to an HTTP port on the origin. If you’ve installed the tunnel in a container where the traffic is really not in a position to be sniffed, then awesome.

But in general I’m against traffic transiting any network unencrypted. I know people do it for performance reasons, but from a security perspective, trusting your trusted network to actually be secure is not a design principle I endorse. YMMV.
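For what it’s worth, keeping the hop from the tunnel to the origin on HTTPS is mostly an ingress-rule detail. A hedged sketch of a locally managed rule, where the hostname, port, and CA path are placeholders rather than anything from a real setup:

```yaml
# Sketch of a cloudflared ingress rule that keeps the tunnel-to-origin hop on HTTPS.
# Hostname, port, and CA path are placeholders.
ingress:
  - hostname: app.example.com
    service: https://localhost:8443
    originRequest:
      # Trust the origin's internal/self-signed certificate chain.
      caPool: /etc/cloudflared/origin-ca.pem
  # Required catch-all for requests that match no rule above.
  - service: http_status:404
```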


Thanks again.

By container, I believe you’re referring to the open source NGINX, correct, or the F5 one with enterprise support? One of the solutions I was looking at was NGINX Plus for reverse proxy/LB after fronting it with the Cloudflare WAF.

I have another question about the tunnel. Does it live on a system within my network (I mean the YAML configuration file and the daemon), with the ingress config telling the tunnel where to go within my network? I can configure it from my system, but how does that config stay alive? I don’t have to deploy it to every server, right? The documentation seems a bit confusing.

Unless your company requires enterprise support, most folks I know run the open source version or some other proxy equivalent (NGINX was really just a placeholder example).

Well, it’s read by the local tunnel instance; where the configuration lives and is managed has a lot of possible answers. There’s an older Terraform example of provisioning tunnels as code, and it can also be managed in the dashboard (and, one assumes, via the API to the backend as well). You don’t have to map tunnels 1:1 with origins, and where they live in the network is a design choice. If a tunnel is going to be talking to origin servers in plain text, then my design choice is as close to the server as possible… probably on it. If the data is still encrypted, then where the tunnels live becomes more flexible in my mind, and it is usually a discussion between developers and infrastructure teams about what is easiest to manage long term.
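To ground the “where does the config live” question: for a locally managed tunnel, the YAML file sits on whatever host runs the cloudflared daemon (it does not need to go on every origin server), and the daemon is typically installed as a system service, which is what keeps the config “alive.” A minimal sketch, with the tunnel UUID, credentials path, hostnames, and ports all as placeholder assumptions:

```yaml
# /etc/cloudflared/config.yml on the host running the cloudflared daemon.
# Tunnel UUID, credentials path, hostnames, and ports are placeholders.
tunnel: 6ff42ae2-765d-4adf-8112-31c55c1551ef
credentials-file: /etc/cloudflared/6ff42ae2-765d-4adf-8112-31c55c1551ef.json

ingress:
  # Public hostname -> internal service the tunnel should reach.
  - hostname: app.example.com
    service: http://10.0.0.21:8080
  - hostname: api.example.com
    service: https://10.0.0.30:8443
  # Required catch-all for requests that match no rule above.
  - service: http_status:404
```

With a remotely managed tunnel, the same ingress rules live in the Cloudflare dashboard (or API) instead of this file, and the host just runs cloudflared with a token.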
