Problem configuring Cloudflare Tunnel with Kubernetes on GCP

I want to use Cloudflare Tunnel as an SSL ingress proxy for n8n, which I'm running on a Google Cloud Kubernetes cluster. I followed the Cloudflare setup instructions here: https://github.com/cloudflare/argo-tunnel-examples/tree/master/named-tunnel-k8s

As part of the instructions I set up my Cloudflare DNS to point to n8n.mydomain.com using the command `cloudflared tunnel route dns n8n n8n.mydomain.com`

I confirmed the name of the n8n-service on GCP (it is exposed via a LoadBalancer in the default namespace) using `kubectl get svc -A`
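For anyone checking their own setup: the origin URL used in the tunnel config is just the service name and namespace assembled into the cluster-internal DNS form. A quick sketch (names taken from this post; substitute your own):

```shell
# Construct the in-cluster origin URL that cloudflared will proxy to.
# Kubernetes services resolve as <service>.<namespace>.svc.cluster.local.
SVC="n8n-service"   # from: kubectl get svc -A
NS="default"
PORT="5678"         # n8n's default port
ORIGIN="http://${SVC}.${NS}.svc.cluster.local:${PORT}"
echo "${ORIGIN}"
```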

I modified cloudflared.yaml to add an ingress rule that points to the n8n service, then deployed it using `kubectl apply -f cloudflared.yaml`, based on this file: https://github.com/cloudflare/argo-tunnel-examples/blob/master/named-tunnel-k8s/cloudflared.yaml

    # Name of the tunnel you want to run
    tunnel: n8n
    credentials-file: /etc/cloudflared/creds/credentials.json
    # Serves the metrics server under /metrics and the readiness server under /ready
    metrics: 0.0.0.0:2000
    no-autoupdate: true
    ingress:
    - hostname: n8n.mydomain.com
      service: http://n8n-service.default.svc.cluster.local:5678
    # Catch-all rule: cloudflared requires the last rule to match all
    # requests, otherwise it refuses to start
    - service: http_status:404
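For context, in the linked example this config isn't a standalone file: it lives under the `config.yaml` key of a ConfigMap that gets mounted into the cloudflared pod. A minimal sketch of that wrapper (metadata names follow the example manifest; adjust to your setup):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cloudflared
  namespace: default
data:
  config.yaml: |
    tunnel: n8n
    credentials-file: /etc/cloudflared/creds/credentials.json
    metrics: 0.0.0.0:2000
    no-autoupdate: true
    ingress:
    - hostname: n8n.mydomain.com
      service: http://n8n-service.default.svc.cluster.local:5678
    # cloudflared requires a catch-all rule as the last entry
    - service: http_status:404
```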

Problem: I can’t load the site when I visit http://n8n.mydomain.com:5678 or https://n8n.mydomain.com:5678. However, I can reach the site when I go to http://EXTERNAL-IP:5678

Any ideas what I might want to consider doing to get this working? Thanks kindly!

Just a follow-up to share how I self-resolved this issue a few days ago.

Restarted Cloudflared Pods: cloudflared only reads its configuration at startup, so changes to the ConfigMap are not picked up by existing pods. To ensure the pods use the latest configuration, delete the current cloudflared pods to force a restart: `kubectl delete pods -l app=cloudflared`
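The delete-and-recreate step can be scripted; if cloudflared runs as a Deployment (as in the example repo), `kubectl rollout restart` is a gentler alternative. A sketch, where the `app=cloudflared` label and the deployment name assume the example manifests:

```shell
# Recreate the cloudflared pods so they re-read the mounted ConfigMap.
# cloudflared only loads its config at startup, so editing the ConfigMap
# alone is not enough.
restart_cloudflared() {
  kubectl delete pods -l app=cloudflared
  # Rolling alternative if cloudflared is a Deployment (no hard downtime):
  # kubectl rollout restart deployment/cloudflared
}

# Guard so the restart only runs where kubectl is actually available.
if command -v kubectl >/dev/null 2>&1; then
  restart_cloudflared
fi
```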

Kubernetes will automatically create new pods with the updated configuration.

Checked for Successful Pod Restart: Ensure that the replacement cloudflared pods come back up in the Running state: `kubectl get pods -l app=cloudflared`

Reviewed Logs Again: After the pods have restarted, check the cloudflared logs again for errors: `kubectl logs -l app=cloudflared`

In my case, everything just worked.