Cloudflare and Synology DDNS resolution quagmire

I installed Cloudflare Zero Trust to test its implementation and encountered a peculiar situation.

My initial test install used a Docker container on a Synology NAS (DSM 6.2.4-25556 Update 6) for cloudflared. The test network was a UniFi setup with a default 1.1.1.1 WAN DNS setting and ports 443, 8080, and 3448 (among others) open and firewalled to the WAN. The Internet connection was non-NAT, static, and used for testing. Port 443 was used to test HTTPS access to Synology DDNS reverse-proxied apps. I also wanted to examine how Cloudflare handled local DNS resolution, so I implemented (and later removed) a test pi-hole/unbound server providing local DNS resolution, as the UniFi gear was responsible only for DHCP (24-port switch) and firewall (USG gateway).
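
(For anyone retracing that DNS test, here is a minimal sketch of how to compare what the local pi-hole and 1.1.1.1 each return for a name. It assumes dnspython ≥ 2 is installed, and 192.168.1.53 is a hypothetical pi-hole address, not from my actual setup; substitute your own.)

```python
# Requires dnspython >= 2 (pip install dnspython).
import dns.exception
import dns.resolver

# 192.168.1.53 is a hypothetical pi-hole address -- substitute your own.
RESOLVERS = {"pi-hole": "192.168.1.53", "Cloudflare": "1.1.1.1"}
NAME = "hostname.synology.me"  # placeholder DDNS name

for label, server in RESOLVERS.items():
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [server]
    try:
        answer = resolver.resolve(NAME, "A")
        print(f"{label} ({server}): {[rr.to_text() for rr in answer]}")
    except dns.exception.DNSException as e:
        print(f"{label} ({server}): lookup failed: {e}")
```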

I got the tunnel running, which involved setting up the Zero Trust gateway DNS location and so on, on the network side of things, and did some testing with WARP and the Users and Devices policies. Great stuff for those looking to secure pretty much any local app, service, device, browser, user, or network connected to the Internet :slight_smile: Everything was good and accessible via the tunnel at the Cloudflare FQDN myapp.testdomain.com, while the Synology DDNS URLs became inaccessible via myapp.synology.me. Great, I thought! An easier and more secure way to access Synology or other hosted apps and services without the hassle. (Note: outside of the Docker container, the only change I made was pointing the UniFi gateway WAN DNS at the locations provided by Cloudflare. No changes were made to the firewall rules etc. All certificates were downloaded, installed, verified, and worked as-is. All configuration was done through the Cloudflare UI on the website. Like I said, good stuff :slight_smile: )
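
(A side note for anyone testing a similar setup: a quick way to see that split is to resolve both names side by side. A stdlib-only sketch, using the hostnames from my test above; the expectation, not something I captured at the time, is that the tunneled FQDN resolves to Cloudflare edge IPs while the DDNS name returns the WAN IP.)

```python
import socket

# Names from the test setup above; substitute your own.
NAMES = ["myapp.testdomain.com", "myapp.synology.me"]

for name in NAMES:
    try:
        ips = sorted({info[4][0] for info in socket.getaddrinfo(name, None, socket.AF_INET)})
        print(f"{name} -> {ips}")
    except socket.gaierror as e:
        print(f"{name} -> lookup failed: {e}")
```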

Upon completion, and happy with how initial testing and deployment went, I revoked access from the devices, removed the devices and certificates, deleted the tunnel, removed any split-tunnel and domain-fallback entries, removed the local IP DNS location from Zero Trust, and basically returned Zero Trust and all participating devices to default. On the network side, I reset the local DNS IPs to what the original network was using, 1.1.1.1 (though I later tried 8.8.8.8), reset the devices, and even deleted the domain from Cloudflare before destroying the Docker container, as this was for testing and not live deployment.

Twenty-four hours later I discovered that the test Synology and its DDNS service remained inaccessible from outside via their hostname.synology.me URL. On the inside, the name resolves via nslookup and responds to ping; from outside the network, nslookup still returns the IP, but no ping is possible.
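
For those following along, the inside-vs-outside test boils down to the sketch below: run it once from the LAN and once from an external host. Stdlib only; the hostname is a placeholder and the ping flags are for Linux/macOS.

```python
# Stdlib-only reproduction of the nslookup + ping test.
import socket
import subprocess

HOST = "hostname.synology.me"  # placeholder for the actual DDNS name

# Step 1: name resolution (the nslookup part).
try:
    ips = sorted({info[4][0] for info in socket.getaddrinfo(HOST, None, socket.AF_INET)})
    print(f"{HOST} resolves to {ips}")
except socket.gaierror as e:
    print(f"DNS lookup failed: {e}")

# Step 2: ICMP reachability (the ping part). Flags are Linux/macOS;
# on Windows use -n instead of -c.
result = subprocess.run(["ping", "-c", "3", HOST], capture_output=True, text=True)
print("ping succeeded" if result.returncode == 0 else "ping failed")
```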

Figuring I had missed something, I tried another host that was reverse proxied via port 443: same result. So I spent the better part of the day rebooting and resetting the various network connections and devices to ensure I had not overlooked something on my end; however, ALL devices involved were restored to their previously working and pingable configurations.

On the local network, nslookup resolves and ping succeeds against the DNS services, and all services are available via the reverse proxy, even when using other DNS servers (e.g. 8.8.8.8); from outside, however, anything served via port 443 is still no dice.
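
To take DNS out of the picture and test the 443 path itself, a TCP connect plus TLS handshake reproduces what a browser does when reaching the reverse-proxied apps. Another stdlib-only sketch with a placeholder hostname:

```python
# Test the port-443 path directly: TCP connect plus TLS handshake,
# which is what a browser does before the reverse proxy sees a request.
import socket
import ssl

HOST = "hostname.synology.me"  # placeholder for the actual DDNS name
PORT = 443

try:
    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        context = ssl.create_default_context()
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            cert = tls.getpeercert()
            print(f"Connected to {HOST}:{PORT}; certificate subject: {cert.get('subject')}")
except OSError as e:  # ssl.SSLError is a subclass of OSError
    print(f"{HOST}:{PORT} unreachable from here: {e}")
```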

While you can remove a website from Cloudflare, there is no way to reset the Cloudflare Zero Trust service, and while I could be wrong, it seemed the service had a residual firewall or other entry that was not releasing the WAN IP from Cloudflare.

I am open to and appreciative of any ideas for restoring the default Synology DDNS resolution. I have already reset the firewall and gateway to their previously backed-up configs (just in case they, or any lingering cache entries, were the issue) and nada. As a “free tier” customer I don’t have the option of asking Cloudflare to review their internal entries to release or reset any offending ones, though I attempted it via Customer Support :frowning: Is this a case of Hotel Cloudflare: you can check out any time you like, but you can never leave…

Solved

I reset the ISP connector device and requested a DNS cache/IP table flush, and it also turned out that UniFi had decided to update device software (firmware), resulting in a corrupted update with no visible alerts. Reset and resolved with a Ubiquiti technician.

For users on the latest UniFi OS: be advised that you may need to add an Echo Request ICMP rule to your WAN LOCAL ruleset, as the new update removes the default ping rules. (The old version of the software had three ICMP rules: All, Request, and Reply.)