I was running cloudflared on two separate machines for separate hosts; I'm not sure how that caused a collision, but I'll attempt it again tomorrow.
I’m also having issues with the following:
Setting up my CNAME to route FROM "*" TO "@".
E.g. I have my ingress controller running on "example.com" and I want a wildcard CNAME "*.example.com" that resolves to "example.com". This used to be achievable - I would enable the CNAME to be proxied to avoid leaking the resolvable host IP - but now when I enter the "*" wildcard into the CNAME it says "DNS only" and won't allow me to proxy the CNAME?
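For reference, this is the record layout I'm after (zone-file notation; 203.0.113.10 is a placeholder for the real origin IP):

    ; apex record, proxied through Cloudflare
    example.com.    300  IN  A      203.0.113.10
    ; wildcard CNAME pointing every subdomain back at the apex
    *.example.com.  300  IN  CNAME  example.com.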
Secondly, I've been attempting to follow the https://developers.cloudflare.com/argo-tunnel/reference/sidecar/ documentation.
I've built the Docker image that I'm using from multi-arch-images/Dockerfile (main branch) in the raspbernetes/multi-arch-images repository on GitHub.
I’m running it as a sidecar container in my deployment:
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "12"
    helm.fluxcd.io/antecedent: kube-system:helmrelease/nginx-ingress
  creationTimestamp: "2020-04-27T01:43:03Z"
  generation: 13
  labels:
    app: nginx-ingress
    app.kubernetes.io/component: controller
    chart: nginx-ingress-1.36.3
    heritage: Helm
    release: nginx-ingress
  name: nginx-ingress-controller
  namespace: kube-system
  resourceVersion: "3211025"
  selfLink: /apis/apps/v1/namespaces/kube-system/deployments/nginx-ingress-controller
  uid: b3042bb6-2846-4598-9ff9-ca699f43569c
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx-ingress
      release: nginx-ingress
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx-ingress
        app.kubernetes.io/component: controller
        component: controller
        release: nginx-ingress
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - --default-backend-service=kube-system/nginx-ingress-default-backend
        - --election-id=ingress-controller-leader
        - --ingress-class=nginx
        - --configmap=kube-system/nginx-ingress-controller
        - --default-ssl-certificate=kube-system/acme-crt-secret
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller-arm:0.30.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: nginx-ingress-controller
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          limits:
            memory: 600Mi
          requests:
            cpu: 25m
            memory: 500Mi
        securityContext:
          allowPrivilegeEscalation: true
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - ALL
          runAsUser: 101
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      - args:
        - --url=192.168.1.155
        - --hostname=raspbernetes.com
        - --origincert=/etc/cloudflared/cert.pem
        - --no-autoupdate
        - --proxy-connect-timeout=60s
        - --proxy-tls-timeout=60s
        - --no-tls-verify
        - --loglevel=debug
        command:
        - cloudflared
        - tunnel
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: raspbernetes/cloudflared:local
        imagePullPolicy: IfNotPresent
        name: cloudflared
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/cloudflared
          name: tunnel-secret
          readOnly: true
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: nginx-ingress
      serviceAccountName: nginx-ingress
      terminationGracePeriodSeconds: 60
      volumes:
      - name: tunnel-secret
        secret:
          defaultMode: 420
          secretName: raspbernetes.com-cloudflared-cert
I am able to run my main container successfully without the cloudflared sidecar. I have also run the cloudflared daemon on my node, outside the Kubernetes cluster, configured it against my static IP, and that worked.
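For comparison, the invocation on the node was roughly the following (reconstructed from the sidecar's flags above, so treat it as an approximation rather than the exact command I ran):

    cloudflared tunnel \
      --url 192.168.1.155 \
      --hostname raspbernetes.com \
      --origincert /etc/cloudflared/cert.pem \
      --no-autoupdate \
      --no-tls-verify \
      --loglevel debug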
However, when running it as a sidecar container inside my cluster, it would not work.
Errors look along these lines:
logs
time="2020-04-27T13:50:38Z" level=warning msg="Cannot determine default configuration path. No file [config.yml config.yaml] in [~/.cloudflared ~/.Cloudflare-warp ~/Cloudflare-warp /usr/local/etc/cloudflared /etc/cloudflared]"
time="2020-04-27T13:50:38Z" level=warning msg="At debug level, request URL, method, protocol, content legnth and header will be logged. Response status, content length and header will also be logged in debug level."
time="2020-04-27T13:50:38Z" level=info msg="Version 2020.4.0"
time="2020-04-27T13:50:38Z" level=info msg="GOOS: linux, GOVersion: go1.14.2, GoArch: arm64"
time="2020-04-27T13:50:38Z" level=info msg=Flags hostname=raspbernetes.com loglevel=debug no-autoupdate=true no-tls-verify=true origincert=/etc/cloudflared/cert.pem proxy-connect-timeout=1m0s proxy-dns-upstream="https://1.1.1.1/dns-query, https://1.0.0.1/dns-query" proxy-tls-timeout=1m0s url=192.168.1.155
time="2020-04-27T13:50:39Z" level=info msg="Starting metrics server" addr="127.0.0.1:39611/metrics"
time="2020-04-27T13:51:36Z" level=info msg="Proxying tunnel requests to http://192.168.1.155"
time="2020-04-27T13:51:38Z" level=debug msg="Giving connection its new address" connID=0 function=GetAddr subsystem=edgediscovery
time="2020-04-27T13:51:53Z" level=error msg="Tunnel creation failure" connectionID=0 error="h2mux handshake with edge error: Handshake error: 1000 handshake timeout"
time="2020-04-27T13:51:53Z" level=error msg="Quitting due to error" error="h2mux handshake with edge error: Handshake error: 1000 handshake timeout"
time="2020-04-27T13:51:53Z" level=info msg="Metrics server stopped"
I have my suspicions this may be some kind of race condition between the sidecar and the service starting up and being exposed. Could cloudflared in the sidecar approach be attempting to connect to the service before it has become available? One way I could test that is sketched below.
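If startup ordering is the culprit, holding cloudflared back until the origin answers should make the error go away. Something like the following wrapper in the sidecar spec could do that (a sketch only - it assumes the image ships a shell and wget, which I haven't verified, and the tunnel flags would have to move from args into the wrapper):

    # hypothetical sidecar command: poll the origin until it responds,
    # then exec cloudflared so it never races the nginx container
    command:
    - sh
    - -c
    - |
      until wget -q -O /dev/null http://192.168.1.155; do
        echo "waiting for origin..."
        sleep 2
      done
      exec cloudflared tunnel --url 192.168.1.155 --hostname raspbernetes.com \
        --origincert /etc/cloudflared/cert.pem --no-autoupdate --no-tls-verify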
Any insights into these issues would be welcome, as I'm sure others will run into similar problems going forward.