3 identical Kubernetes clusters, 2 can be accessed, 1 cannot
I have three identical Kubernetes clusters which differ only in their names and the IP addresses of the ingress LBs that serve them. On 29/7/2020 these services were all accessible; that evening, TXT records were added to provide for a new mail service. On 30/7/2020 a new TXT record had been auto-generated for cluster 0, and access to that route was gone:
$ curl -k https://my.host.name.0/healthcheck
curl: (6) Could not resolve host: my.host.name.0
The other services, on my.host.name.1 and my.host.name.2, are still available.
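The failure is visible outside curl as well; a minimal comparison against a public resolver (1.1.1.1 here is arbitrary, any recursive resolver should show the same):

# Empty answer for host 0 in this situation:
$ dig +short my.host.name.0 A @1.1.1.1
# The ingress LB IP for host 2:
$ dig +short my.host.name.2 A @1.1.1.1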
The clusters are all running on GKE with external-dns providing the service discovery, and I can see it is sending the same messages it always has to Cloudflare for name resolution. Neither host 1 nor host 2 has associated TXT records. I deleted the TXT records associated with host 0 and they were auto-generated again, leading me to believe that Cloudflare is creating TXT records on receipt of a name resolution request from the cluster(?). Why this would affect only one service I do not know.
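(Worth noting: external-dns's TXT registry, --registry=txt and on by default, creates exactly this kind of ownership record itself, so the records may well originate from the cluster rather than from Cloudflare.) A sketch of how to see what external-dns is reconciling for the affected host; the namespace and deployment name are assumptions that depend on the install:

# Inspect external-dns's view of the broken hostname.
# "kube-system" and "external-dns" are placeholders for the actual install.
$ kubectl logs -n kube-system deploy/external-dns | grep my.host.name.0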
All DNS records are held in the same project, in the same zone. I have checked using Cloudflare diagnostics, and this also reports that the hostnames associated with this cluster cannot be resolved.
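The zone contents can also be queried directly against the Cloudflare API to confirm whether the A record is actually present; a sketch, where $ZONE_ID and $CF_API_TOKEN are placeholders for the zone ID and a token with DNS read permission:

# List every record Cloudflare holds for the broken hostname;
# an empty "result" array means the A record is missing from the zone itself.
$ curl -s "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records?name=my.host.name.0" \
       -H "Authorization: Bearer $CF_API_TOKEN"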
None of the records are proxied.
I can use kubectl to access the various pods running on the cluster.
I can hit endpoints directly via the LB IP address.
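For that last check, curl's --resolve pins the hostname to the LB IP so the request bypasses DNS while still presenting the correct Host header and SNI; the IP below is a placeholder:

# 203.0.113.10 stands in for the actual ingress LB address.
$ curl -k --resolve my.host.name.0:443:203.0.113.10 https://my.host.name.0/healthcheck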
(names have been changed to protect the innocent)