Connecting through Cloudflare Access using kubectl issues - bad handshake


I am having trouble using a cloudflared tunnel to connect to my Kubernetes clusters. I have multiple existing k8s clusters hosted in AWS EKS with cloudflared running, and the tunnels in each cluster currently route to various HTTP services, all of which work as expected. However, when following the instructions in this article provided by Cloudflare, I see the error ERR failed to connect to origin error='websocket: bad handshake' originURL= when attempting to connect from a client machine.

To Reproduce
Steps to reproduce the behavior:
I have followed all of the steps in the document provided above. From the start, I have:

  1. Created a zero trust policy for my application
  2. Installed the cloudflared deployment on my cluster
  3. Created and configured a tunnel using ingress rules
  4. Created a DNS record to route traffic to the tunnel
  5. Run the tunnel

The last step, attempting to connect via cloudflared access tcp, produces the error mentioned above. My ingress config looks like this:

tunnel: redacted
credentials-file: /etc/cloudflared/creds/cloudflared
no-autoupdate: true
ingress:
  - hostname: redacted
  - hostname: redacted
  - hostname: redacted
    service: tcp://kubernetes.docker.internal:6443
    originRequest:
      noTLSVerify: true
      proxyType: socks
  # Catch-all: any traffic that didn't match a previous rule responds with HTTP 404.
  - service: http_status:404
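For context, cloudflared evaluates ingress rules top to bottom and routes to the first rule whose hostname matches, falling back to the final catch-all. A rough Python sketch of that first-match logic (this is a simplified model, not cloudflared's actual code, and the hostnames are placeholders):

```python
from fnmatch import fnmatch

# Simplified model of cloudflared's first-match ingress evaluation.
# A rule without a hostname acts as the catch-all.
RULES = [
    {"hostname": "app-a.example.com", "service": "http://web-a:80"},
    {"hostname": "app-b.example.com", "service": "http://web-b:80"},
    {"hostname": "k8s.example.com", "service": "tcp://kubernetes.docker.internal:6443"},
    {"service": "http_status:404"},  # catch-all for unmatched traffic
]

def match_ingress(hostname: str, rules=RULES) -> str:
    """Return the service of the first rule whose hostname matches."""
    for rule in rules:
        pattern = rule.get("hostname")
        if pattern is None or fnmatch(hostname, pattern):
            return rule["service"]
    raise ValueError("no catch-all rule configured")

print(match_ingress("k8s.example.com"))      # -> tcp://kubernetes.docker.internal:6443
print(match_ingress("unknown.example.com"))  # -> http_status:404
```

The `ingressRule=2` in the error logs below is consistent with this: the request is matching the third (zero-indexed) rule, the tcp:// one.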

Attempting to connect from another machine looks like this:

First, create the connection to Cloudflare: cloudflared access tcp --hostname --url
Then, in another terminal window: env HTTPS_PROXY=socks5:// kubectl get po

Expected behavior
Expected behavior is to have the ability to run kubectl commands via this configuration.

Environment and versions
The local machine attempting to connect runs macOS Monterey v12.1. The Kubernetes clusters are hosted in AWS using EKS managed node groups.

Logs and errors
Aside from the errors mentioned above, I can see these logs from the cloudflared pod:

ERR  error="dial tcp: lookup kubernetes.docker.internal on no such host" cfRay=7149a9a43f5d7dd2-LAX ingressRule=2 originService=tcp://kubernetes.docker.internal:6443
ERR Failed to handle QUIC stream error="dial tcp: lookup kubernetes.docker.internal on no such host" connIndex=2

I believe the IP corresponds to the default service kubernetes.default.svc.cluster.local.
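The "no such host" error suggests the pod's resolver simply cannot resolve kubernetes.docker.internal, which only exists on Docker Desktop, not on an EKS node. A hedged sketch of a check you could run from inside a pod to see which candidate origin names resolve (the resolver is injectable so the helper can also be exercised outside the cluster; the candidate list is an example):

```python
import socket

# Candidate origins for the k8s API server. kubernetes.docker.internal
# only resolves on Docker Desktop, not inside an EKS cluster.
CANDIDATES = [
    "kubernetes.docker.internal",
    "kubernetes.default.svc.cluster.local",
]

def resolvable(host: str, resolver=socket.gethostbyname) -> bool:
    """Return True if the resolver can turn the hostname into an IP."""
    try:
        resolver(host)
        return True
    except OSError:
        return False

def first_resolvable(candidates, resolver=socket.gethostbyname):
    """Return the first candidate the resolver can look up, else None."""
    for host in candidates:
        if resolvable(host, resolver):
            return host
    return None
```

Run inside a pod, I would expect only the in-cluster service name to resolve, which matches the lookup failure in the logs above.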

Additional context
Any help with this is greatly appreciated, as the Cloudflare docs are limited and there doesn’t seem to be much information about this particular issue online.

An update on this post:

I’ve noticed that while the Cloudflare documentation says to use tcp://kubernetes.docker.internal:6443 as the target service in the ingress config, I believe for AWS this should instead be the EKS API server endpoint - in my case (though the scheme needs to be tcp://). However, using tcp:// as the target service gives me the error Unable to connect to the server: EOF.

Since there is basically no feedback loop when debugging this, I’m not sure whether this address is correct, whether the port is correct (AWS documentation says the endpoint is served on port 443, not 6443 like the usual k8s default), or whether I’m even making it through the tunnel, since I no longer see logs in the cloudflared pod. I may be moving in the wrong direction, but I’m trying everything I can at the moment, since it seems I am a guinea pig for this particular use case.
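One way to get a feedback loop on the address/port question is a plain TCP reachability check from a machine (or pod) that should be able to reach the endpoint. A minimal sketch, assuming you substitute your own EKS API endpoint (the hostname in the comment is hypothetical; EKS typically serves the API on 443):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example usage with a placeholder EKS endpoint name:
# for port in (443, 6443):
#     print(port, port_open("ABCDEF1234.gr7.us-west-2.eks.amazonaws.com", port))
```

If 443 connects and 6443 does not, that at least rules the port question in or out before involving the tunnel.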

Were you able to fix it? I’m having the same issue

No, I was never able to fix this. I ended up using an EC2 instance in the same network as a bastion host and tunneled through it, which let us connect to our database. The initial plan was to use the tunnel to connect to the cluster and port-forward to a pod to reach the database, but with no help from the CF community or the GitHub CF issues, I pivoted to a bastion host instead. A compromise, but a solution nonetheless.
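For anyone landing here, the bastion approach can be as simple as an SSH local port forward; a sketch with placeholder names (the bastion address, key path, and EKS endpoint below are all hypothetical):

```shell
# Forward local port 6443 through the bastion to the EKS API endpoint.
# All hostnames, the user, and the key path are placeholders.
ssh -i ~/.ssh/bastion.pem -N \
    -L 6443:ABCDEF1234.gr7.us-west-2.eks.amazonaws.com:443 \
    ec2-user@bastion.example.com

# Then point kubectl at the forwarded port, e.g. by setting the cluster's
# server to https://localhost:6443 in kubeconfig. Note TLS verification
# will complain unless the API server's certificate covers the name used.
```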

Ha! I am doing exactly the same now (bastion host) for a similar scenario. I was able to follow the kubectl-with-zero-trust blog post, but I couldn’t use it without the desktop WARP client (which is of course a problem for a CI pipeline, for example). It is a pity that there is not much help from CF in terms of better docs.


Nice! How are you connecting to the bastion hosts? We use the WARP client rather than having devs SSH into the servers, so we don’t have to distribute .pem files.

For devs I use the WARP client connecting directly to the k8s API (without a bastion). In the pipeline I’m planning to connect through the bastion host, and may remove the connection through the CF tunnel entirely, as I cannot find a way to do that with warp-cli.