503 Service Unavailable: the origin has been unregistered from Argo Tunnel (after 12h)


I’ve set up Argo Tunnel as a k8s ingress to one of my cluster deployments. Everything works fine right after setup; however, after some time (6–12h) I get a lot of instability.

Recently the Argo Tunnel was unregistered (without me doing anything). What happened? Where does this instability come from?

Hello floran,

Are you seeing any errors from Cloudflared? What platform are you running on? Are you using an ingress controller or running Cloudflared by itself? Is it running as a service or through the CLI? What version are you running?


Hi Joaquin,

Thank you for getting back to me.

I do not see any errors in Cloudflared, nor in the Cloudflare-controller pod. I’m running on Ubuntu 16.04, using the ingress controller and procedure described here (https://github.com/Cloudflare/Cloudflare-ingress-controller), since my backend runs on a Kubernetes cluster.

I created a Kubernetes service, a deployment of my backend, and an ingress as described there. Cloudflared’s version is 2018.5.4 (built 2018-05-14-2343 UTC).
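For context, the Ingress follows the shape from that repo; the names and hostname below are placeholders rather than my actual manifest:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: backend
  annotations:
    # This is the annotation the controller watches for (it also
    # shows up in the controller logs below).
    kubernetes.io/ingress.class: argo-tunnel
spec:
  rules:
    - host: backend.example.com      # placeholder hostname
      http:
        paths:
          - backend:
              serviceName: backend   # the Service fronting the deployment
              servicePort: 80
```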

What puzzles me is that it works for a while and then unregisters itself.

Here is a snippet of the controller logs (even after unregistration):

I0519 08:45:54.929032       1 controller.go:699] Validation ok for running default/backend/backend with 1 endpoint(s)
I0519 08:46:02.831740       1 reflector.go:428] github.com/Cloudflare/Cloudflare-ingress-controller/pkg/controller/controller.go:320: Watch close - *v1.Service total 0 items received
I0519 08:46:02.835286       1 round_trippers.go:436] GET 200 OK in 3 milliseconds
I0519 08:46:51.629027       1 reflector.go:286] github.com/Cloudflare/Cloudflare-ingress-controller/pkg/controller/controller.go:322: forcing resync
I0519 08:46:51.629127       1 controller.go:185] Annotation kubernetes.io/ingress.class=argo-tunnel
I0519 08:46:51.629136       1 controller.go:185] Annotation kubernetes.io/ingress.class=argo-tunnel
I0519 08:46:53.332200       1 reflector.go:286] github.com/Cloudflare/Cloudflare-ingress-controller/pkg/controller/controller.go:320: forcing resync
I0519 08:46:53.332301       1 controller.go:246] Watching service default/backend
I0519 08:46:53.332327       1 controller.go:699] Validation ok for running default/backend/backend with 1 endpoint(s)

It’s also worth mentioning that I had to use a workaround to create the LB pool myself on Cloudflare, as seen in this post: Kubernetes ingress controller errors - #7 by barry1

In order to better understand what is happening, could you explain under what conditions an Argo Tunnel unregisters itself?

Update: I’m using Flask for my web app, and on its own it works well. As soon as I run this app under either Gunicorn or uWSGI, I receive empty JSON data, which creates an error and unregisters the tunnel.

Has anyone successfully used argo-tunnel with Gunicorn/uWSGI (sending JSON data)?

Final update: argo-tunnel seems to send only chunked data (Transfer-Encoding: chunked), which makes it very hard to work with WSGI web servers (it basically fails without a workaround).
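For anyone hitting the same thing, my understanding of the failure is that the WSGI spec (PEP 3333) has the app read exactly CONTENT_LENGTH bytes of body, and a chunked request carries no Content-Length header at all, so a plain WSGI read comes back empty. A rough sketch of the effect:

```python
import io

def read_body(environ):
    """Read a request body the way most WSGI frameworks do:
    exactly CONTENT_LENGTH bytes. A chunked request has no
    Content-Length, so this returns b"" even though data arrived."""
    try:
        length = int(environ.get("CONTENT_LENGTH") or 0)
    except (TypeError, ValueError):
        length = 0
    return environ["wsgi.input"].read(length)

# A chunked request as the app sees it: the body bytes are in the
# stream, but CONTENT_LENGTH is absent.
environ = {
    "wsgi.input": io.BytesIO(b'{"hello": "world"}'),
    "HTTP_TRANSFER_ENCODING": "chunked",
}
print(read_body(environ))  # b'' -- the JSON payload is never read
```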

@floran, do you have a copy of the app on GitHub? We’ll work on recreating it, but I want to make sure it’s as close as possible to what you’re running.

Hi @joaquin

Sorry for the delayed reply.

Unfortunately not, the code is private. What I run is essentially a Python Flask web server with a Gunicorn WSGI HTTP server in front. It runs as a Docker container on a Kubernetes cluster.

I then have an ingress controller for the argo-tunnel (I followed this guide: GitHub - cloudflare/cloudflare-ingress-controller: A Kubernetes ingress controller for Cloudflare's Argo Tunnels).

Hope this helps, let me know if you need additional information.

I’m really struggling to find any logs that relate to the unregistration of the tunnel. I have the feeling it just happens randomly (I might be wrong, though…).

Any idea where I could find this information? The argo-controller pod unfortunately doesn’t output anything useful in this regard, as far as I can see.

Hi @floran,

I was able to replicate the error you were having with Gunicorn and Flask. While I’m trying to find another way to get WSGI support for chunked transfer encoding, a good workaround for now is to put an Nginx reverse proxy in front of Gunicorn, and configure it so that traffic goes to Nginx first and then to Gunicorn. You can then set chunked_transfer_encoding to off in the Nginx config.
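A minimal sketch of that Nginx config (the listen port and the Gunicorn address are assumptions; adjust them to your setup):

```nginx
server {
    listen 8080;

    location / {
        # Stop Nginx from producing chunked responses itself.
        chunked_transfer_encoding off;

        # Nginx also buffers the incoming request body by default,
        # so Gunicorn receives a plain Content-Length request rather
        # than a chunked one.
        proxy_pass http://127.0.0.1:8000;  # assumed Gunicorn bind address
    }
}
```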

This is a helpful tutorial on how to set up an Nginx -> Gunicorn -> Flask stack.



I have a workaround in place that aggregates chunks while I’m setting up NGINX.
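For reference, the chunk-aggregating workaround is essentially a small WSGI middleware along these lines (the class name is mine, not a library's; it assumes the server, e.g. Gunicorn, de-chunks the stream so a bare read() drains the whole body):

```python
import io

class ChunkAggregationMiddleware:
    """Sketch of a WSGI middleware: when a request arrives with
    Transfer-Encoding: chunked (and therefore no CONTENT_LENGTH),
    buffer the whole body up front and set CONTENT_LENGTH so that
    frameworks like Flask can read it normally."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        if not environ.get("CONTENT_LENGTH") and \
           "chunked" in environ.get("HTTP_TRANSFER_ENCODING", "").lower():
            # Drain the (already de-chunked) input stream entirely,
            # then replace it with a seekable buffer of known length.
            body = environ["wsgi.input"].read()
            environ["wsgi.input"] = io.BytesIO(body)
            environ["CONTENT_LENGTH"] = str(len(body))
        return self.app(environ, start_response)
```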

Thanks for the links.