Problem with Argo Tunnel

When I issue the tunnel list command (./cloudflared tunnel list), I see:
"You have no tunnels, use 'cloudflared tunnel create' to define a new tunnel"
However, when I run:
./cloudflared --origincert /root/.cloudflared/cert.pem --no-tls-verify --hostname ssh.mydomain.com --url ssh://localhost:22

I see:
2021-02-16T17:47:00Z INF Cannot determine default configuration path. No file [config.yml config.yaml] in [~/.cloudflared ~/.cloudflare-warp ~/cloudflare-warp /etc/cloudflared /usr/local/etc/cloudflared]
2021-02-16T17:47:00Z INF Version 2021.2.2
2021-02-16T17:47:00Z INF GOOS: linux, GOVersion: devel +11087322f8 Fri Nov 13 03:04:52 2020 +0100, GoArch: amd64
2021-02-16T17:47:00Z INF Settings: map[hostname:ssh.mydomain.com no-tls-verify:true origincert:/root/.cloudflared/cert.pem url:ssh://localhost:22]
2021-02-16T17:47:00Z INF Environmental variables map[TUNNEL_ORIGIN_CERT:/root/custom/cloudflared/cert.pem]
2021-02-16T17:47:00Z INF cloudflared will not automatically update when run from the shell. To enable auto-updates, run cloudflared as a service: https://developers.cloudflare.com/argo-tunnel/reference/service/
2021-02-16T17:47:00Z INF Initial protocol h2mux
2021-02-16T17:47:00Z INF Starting metrics server on 127.0.0.1:33965/metrics
2021-02-16T17:47:01Z INF Connection established connIndex=0 location=ATL
2021-02-16T17:47:02Z ERR Register tunnel error from server side error="There is already an active tunnel for ssh.mydomain.com. To distribute requests between multiple origins, use Cloudflare Load Balancer. Existing tunnels not using the Load Balancer will need to be registered again" connIndex=0
2021-02-16T17:47:02Z INF Tunnel server stopped
2021-02-16T17:47:02Z ERR Initiating shutdown error="There is already an active tunnel for ssh.mydomain.com. To distribute requests between multiple origins, use Cloudflare Load Balancer. Existing tunnels not using the Load Balancer will need to be registered again"
2021-02-16T17:47:03Z INF Metrics server stopped
There is already an active tunnel for ssh.mydomain.com. To distribute requests between multiple origins, use Cloudflare Load Balancer. Existing tunnels not using the Load Balancer will need to be registered again

Hello,

Did you perform cloudflared tunnel login as per the Setup instructions? https://developers.cloudflare.com/argo-tunnel/getting-started/setup

You may also be interested in reading this Tutorial for an end-to-end view of securing an SSH server with Tunnel and Access: https://developers.cloudflare.com/cloudflare-one/tutorials/zero-trust-security/gitlab
(more tutorials at https://developers.cloudflare.com/argo-tunnel/learning/tutorials)
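In case it helps, the named-tunnel flow those tutorials walk through is roughly: cloudflared tunnel login, then cloudflared tunnel create <name>, then cloudflared tunnel route dns <name> <hostname>, and finally cloudflared tunnel run <name> driven by a config file. A sketch of what that config.yml looks like (the UUID and credentials path below are placeholders — use whatever tunnel create prints for you):

```yaml
# Sketch of ~/.cloudflared/config.yml for a named tunnel.
# Replace <TUNNEL-UUID> with the UUID printed by `cloudflared tunnel create`.
tunnel: <TUNNEL-UUID>
credentials-file: /root/.cloudflared/<TUNNEL-UUID>.json

ingress:
  - hostname: ssh.mydomain.com
    service: ssh://localhost:22
  - service: http_status:404   # required catch-all rule at the end
```

Named tunnels register by UUID rather than by hostname, which is also why they avoid the "There is already an active tunnel for ssh.mydomain.com" error you hit with the legacy per-hostname registration.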

Hi DotMrCode,

Thanks for your response. Yes, I did perform the 'cloudflared tunnel login', which got me the cert.pem certificate. I will go through the CF tutorials tomorrow and will post an update afterwards.

Following the CF tutorial, I've gotten further, but I'm still not there.

I've created a tunnel and can start it manually; however, I need the tunnel to start on boot. Since I'm running unraid, I can't do 'cloudflared service install'.
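One option I'm considering, since unraid runs its startup script from flash at every boot: append the run command to /boot/config/go. A sketch of that fragment, assuming the binary and a named-tunnel config.yml live on persistent storage under /root/custom/cloudflared and the tunnel is named "mytunnel" (both are placeholders for my setup):

```shell
# Hypothetical lines appended to unraid's /boot/config/go (runs once at boot).
# Paths and the tunnel name "mytunnel" are assumptions — adjust for your setup.
/root/custom/cloudflared/cloudflared tunnel \
    --config /root/custom/cloudflared/config.yml run mytunnel \
    >> /var/log/cloudflared.log 2>&1 &
```

The trailing & backgrounds cloudflared so the rest of the boot sequence isn't blocked, but nothing restarts it if it dies — hence the interest in supervisord below.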

I saw a post where someone is using supervisord to start the tunnel:
/root/custom/cloudflared/supervisord -c /root/custom/cloudflared/supervisord.conf -d
Contents of the supervisord.conf:

[program:gitlab]
command = /root/custom/cloudflared/cloudflared --origincert /root/.cloudflared/cert.pem --no-tls-verify --hostname gitlab.mydomain.com --url http://localhost:9080
autostart = true
autorestart = true
startsecs = 20
startretries = 100
redirect_stderr = true
stdout_logfile = /var/log/cloudflared_gitlab.log
stdout_logfile_maxbytes = 2M
stdout_logfile_backups = 0
stopsignal = INT

When I try to start the tunnel using supervisord, I get flooded with failure errors at boot:

INFO[2021-02-17T15:12:46-05:00] fail to wait for program exit program=gitlab
INFO[2021-02-17T15:12:46-05:00] fail to wait for program exit program=gitlab_ssh
ERRO[2021-02-17T15:12:54-05:00] fail to start program because retry times is greater than 100 program=gitlab

That's encouraging. My lazy approach to keeping such a process running was a script that uses pgrep to check whether the process is still alive and, if not, starts it.
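A minimal sketch of that pgrep watchdog, reusing the paths and flags from the supervisord example above (the pattern, script path, and log location are assumptions — adjust to your setup):

```shell
#!/bin/sh
# Hypothetical watchdog: start cloudflared if no matching process is running.
# pgrep -f matches against the full command line, so the pattern below
# distinguishes this tunnel from any other cloudflared instances.
PATTERN="cloudflared.*gitlab.mydomain.com"
BIN=/root/custom/cloudflared/cloudflared

if ! pgrep -f "$PATTERN" > /dev/null 2>&1; then
    # Not running (or died): relaunch in the background, appending to the log.
    "$BIN" --origincert /root/.cloudflared/cert.pem --no-tls-verify \
        --hostname gitlab.mydomain.com --url http://localhost:9080 \
        >> /var/log/cloudflared_gitlab.log 2>&1 &
fi
```

A crontab entry such as `* * * * * /root/custom/cloudflared/watchdog.sh` would then bring the tunnel back within a minute of it dying — cruder than supervisord, but with no retry-limit flooding at boot.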