Argo Tunnel for NGINX ingress controller

Hello,

Is it possible to have an Argo tunnel for a set of nginx ingress controllers? We have a large number of microservices which we would like to expose using our nginx ingress and have that connected to Cloudflare with an Argo tunnel. Please advise.

Thanks,
Paul

We don’t have a recipe for an nginx ingress controller, but this may work for you:

We have an Argo Tunnel ingress controller that you can use with or without our load balancer.
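
For illustration, here is a minimal sketch of what a per-service Ingress for that controller might look like, assuming it picks up Ingress resources whose class annotation is argo-tunnel (the class name, hostname, and backend service are assumptions for illustration, not taken from this thread):

```yaml
# Hypothetical per-microservice Argo Tunnel Ingress.
# The ingress class value, hostname, and backend are illustrative assumptions;
# the apiVersion may differ depending on your cluster version.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-microservice
  annotations:
    kubernetes.io/ingress.class: argo-tunnel
spec:
  rules:
    - host: my-microservice.example.com
      http:
        paths:
          - backend:
              serviceName: my-microservice
              servicePort: 80
```

With a large number of microservices, that works out to roughly one tunnel entry per hostname, which is what the rest of this thread is trying to avoid.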

The problem is the number of origin endpoints and the need to load balance them separately. We don’t need to load balance every single origin, just the nginx ingress controllers that aggregate all of our services. Is it possible to do this? Maybe through a wildcard Argo ingress controller entry pointing to the nginx ingress controller service?

You could create a service that points to your nginx controller pods, then make an argo ingress pointing to that service.

But that would mean you’d have to configure the ingress twice: once for your ingress-nginx and once for your ingress-argo.

In theory, it should work.
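
As a rough sketch of that idea, assuming the Argo ingress controller watches the argo-tunnel ingress class and accepts a wildcard host (the names, namespace, labels, and hostnames below are all assumptions for illustration):

```yaml
# Hypothetical Service selecting the ingress-nginx controller pods.
# The selector must match whatever labels your controller pods actually carry;
# many installs already create such a Service, which you could reuse instead.
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-frontend
  namespace: ingress-nginx
spec:
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
---
# A single Argo Tunnel Ingress with a wildcard host pointing at that Service,
# so all microservice hostnames go through one tunnel and are routed by nginx.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: argo-to-nginx
  namespace: ingress-nginx
  annotations:
    kubernetes.io/ingress.class: argo-tunnel
spec:
  rules:
    - host: "*.example.com"
      http:
        paths:
          - backend:
              serviceName: nginx-ingress-frontend
              servicePort: 80
```

Your per-service Ingress resources would keep their nginx class as they do today; only this one wildcard entry would be visible to the Argo side.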

Is the goal of not load balancing every origin to reduce the number of origins you have to pay for in Cloudflare? Is there a disadvantage to load balancing between all of them other than that?

Do you know if anyone ever tried this? Are there any examples out there?

Hello Zack,

It’s a combination of things. One is that this is for a proof-of-concept client demo where we are investigating replacing Traffic Manager and Azure App Gateways with Cloudflare Argo Tunnels with Load Balancing. We have a free tier account and don’t have a problem paying for more origins, but going enterprise is unfeasible due to the timeline of the client demo (early July).

The second thing is load and utilization. It is less about performance and load balancing, and more about automatic failover. We would like to have a single health check for all of the microservices and have them fail over to another cluster at the same time. Since this is for a client demo/PoC, traffic utilization would be minimal (requests in the hundreds per day at most). Keeping the cluster origins together would help keep our multi-master DBs together and the data consistent. Our code is still immature (as most PoC code is) and we cannot guarantee health check failures for all services when we trigger an outage by taking our DBs offline. Keeping the origins together and having a single health check would be the best way to ensure a simple failover for demo purposes. That said, for production this should be revisited; it might make sense to use an API gateway type solution (Istio/Envoy, Ambassador, Kong, etc.), but that is impossible given the timeline.
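
One hedged sketch of what that single health check could look like: expose one health path through the nginx ingress and point a Cloudflare Load Balancer monitor at it, so the whole cluster fails over together. The hostname, path, and backend service below are made-up examples; the monitor itself would be configured on the Cloudflare side, not in Kubernetes.

```yaml
# Hypothetical: route a dedicated health hostname/path through the nginx ingress
# to one service whose health represents the whole cluster. A Cloudflare Load
# Balancer monitor probing https://health.example.com/healthz would then mark
# the entire cluster origin unhealthy and trigger failover in one place.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: cluster-health
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: health.example.com
      http:
        paths:
          - path: /healthz
            backend:
              serviceName: cluster-health-service   # hypothetical aggregate health endpoint
              servicePort: 80
```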

The best solution for now is having a single nginx ingress (which would help with a single health check and keep failover simple); the second best would be increasing the number of origins to either 8 or 12. However, creating an enterprise account for a demo environment for 2 weeks would be a hassle for us as well as for Cloudflare, not to mention the additional complication of ensuring all services fail and trigger a full failover.

Thanks,
Paul

Did you ever get anything working with ingress-nginx and Argo Tunnels?

Were you able to solve this?