I have been using a Cloudflare Zero Trust tunnel on my Synology NAS (running in a Docker container) to successfully expose HTTP services located on the NAS itself via public hostnames mapped to http://localhost:[port] routes. This has been working without issue.
I want to use that same tunnel to expose an HTTP service located on a different machine in the same LAN. For example, I want to use a public hostname to route to http://192.168.1.100:3000. Is this possible? I have attempted to do so but I cannot seem to make the connection. Do Cloudflare tunnels have some sort of localhost limitation? As far as I am aware, I do not have any sort of firewall that would be blocking this connection. Could it be that I also need to create a private network for the tunnel that allows connection to the 192.168.1.100 address? Any help or guidance on this configuration would be greatly appreciated.
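For context, cloudflared itself has no localhost-only restriction: an ingress rule can point at any address the connector can reach, including other machines on the LAN. A minimal `config.yml` sketch of what this looks like for a locally-managed tunnel (the hostnames and tunnel ID here are placeholders, not your actual values):

```yaml
# config.yml for cloudflared (locally-managed tunnel)
tunnel: <tunnel-id>
credentials-file: /etc/cloudflared/<tunnel-id>.json

ingress:
  # Service running on the NAS itself
  - hostname: nas-service.example.com
    service: http://localhost:8080
  # Service on a different machine on the same LAN
  - hostname: lan-service.example.com
    service: http://192.168.1.100:3000
  # Catch-all rule (required as the last entry)
  - service: http_status:404
```

If the tunnel is managed from the Zero Trust dashboard instead, the equivalent is simply entering `http://192.168.1.100:3000` as the service URL for the public hostname. A private network route is only needed for WARP-based private routing, not for public hostname mappings.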
Not exactly the same case, but recently I managed something similar: I connected my laptop to a robotics arm with an Ethernet cable. The arm was configured on IP 172.16.16.2 and the laptop's Ethernet adapter was 172.16.16.1, while the laptop itself was connected to the public/work Internet via WiFi (the local network was 192.168.x.x).
Using the cloudflared tunnel on that particular Windows machine, I exposed the robotics arm (it ran Nginx with a web interface to manage it) through that second network adapter (wired Ethernet, different IP), so I could control it over the Internet at a subdomain like robotics-arm.mydomain.com, and protected access with Cloudflare Access.
Happy to use Cloudflare and those great possibilities to achieve awesome results!
Thanks for the help but unfortunately I still cannot connect. My network is as follows. I believe that the issue may lie in the Desktop Workstation trying to act as both a server and a client machine used to access the service that it is hosting. Is there any sort of workaround for this? I can potentially go back to hosting a secondary tunnel on the desktop workstation itself (which I did manage to get working but with some quirks) but that seems like bad practice and it always shows the tunnel status as “degraded”.
Synology NAS: 192.168.1.100
Running cloudflared in a Docker container using the bridge network option.
Public hostname set up to point to an HTTPS service on the Desktop Workstation (see below), with the public hostname https://test-service.my-tunnel.com mapped to https://192.168.1.101:3000.
Desktop Workstation: 192.168.1.101
Running the Cloudflare WARP client and logged into my Zero Trust account (this is my daily work PC).
Running a local instance of an HTTP service that I am developing exposed as https://192.168.1.101:3000 (supports HTTPS connections using self-signed cert).
I cannot navigate to https://test-service.my-tunnel.com (fake address for the sake of this conversation) due to a 502 - Bad Gateway error.
I also cannot navigate to https://192.168.1.101:3000 from the desktop workstation or any other machine on my network due to an automatic redirect to the https://test-service.my-tunnel.com public hostname mapping which then fails due to 502 - Bad Gateway.
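One thing worth checking for the 502: since the origin uses a self-signed certificate, cloudflared will refuse the TLS handshake to it unless certificate verification is disabled for that origin, and a failed handshake surfaces as exactly this kind of Bad Gateway error. In the dashboard this is the "No TLS Verify" toggle under the public hostname's TLS settings; in a config file the equivalent would look like this (using the hostname from this thread as a stand-in):

```yaml
ingress:
  - hostname: test-service.my-tunnel.com
    service: https://192.168.1.101:3000
    originRequest:
      # Accept the origin's self-signed certificate
      noTLSVerify: true
  - service: http_status:404
```

This only relaxes verification between cloudflared and the origin on your LAN; the public side of the connection stays on Cloudflare's certificate.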
Hey there, I’m curious about your mileage using a Cloudflare Zero Trust tunnel vs. Synology DDNS. I was investigating the feasibility of something similar regarding the Docker container host. I was successful with the Cloudflare tunnel, but wasn’t able to fully test a situation where both the Cloudflare tunnel and the Synology DDNS hostnames played well together (Unifi had other ideas). The Synology has the RAM and resources to handle multiple containers, as yours does. Do you use the VPN Server feature of the Synology (or another)? Do you let Cloudflare handle all your VPN traffic (if used) and all ports (thus no ports exposed via router/firewall), and have you abandoned the Synology DDNS service/reverse proxy?
I also configured a Split Tunnel on my Warp Client profile (Zero Trust | Settings | Warp Client | Profile).
This way, when I connect to WARP (from my laptop or my Android phone) I have access to my LAN, like a VPN would. This is interesting because I have some services I want to access by IP address, not by a Public Hostname tunnel (NAS shares for example).
Also consider that you can use Zero Trust Access | Applications to put any Public Hostname behind authentication. In my case some of my Public Hostnames will be only accessible if authenticated (via Office 365 SAML) or connected via WARP (Gateway posture).
Ok. Went for it again. Just a few random issues I’m encountering in testing; don’t know your mileage on this, but can’t hurt to ask lol. I encountered a problem with Syno Contacts. When using the CF tunnel public hostname, I can see and access the web app/portal of Syno Contacts in the browser fine, but when adding the CardDAV account in iOS using the CardDAV link from Syno Contacts, it is unable to verify. Similarly, the App Launcher links in the Syno Contacts web portal still reference the *.synology.me:portno configuration for Calendar and Drive, e.g. xyz.synology.me:5001/?launchApps=SYNO.SDS.Drive.Application. Do you have this issue? Any recommendations?
I was able to get Syno Calendar to work and resolve the Contacts/Calendar/Drive App Launcher error by adding the CF domain in the Syno Application Portal (Applications). It didn’t seem to play well with just the HTTPS port. However, I am still unable to add Syno Contacts via the macOS Accounts CardDAV option. I think I’m overlooking a setting somewhere.
Ok, so CF Application Access authentication does not play well with CardDAV on Synology. Using Application Access with the Identity Provider (default: One-time PIN) did not work even though the app/user/session was authenticated.
Solution: I had to remove the CF Application Access policy on the subdomain, and was then able to successfully add the CardDAV link with no issue on the local client. Not sure if this is the right solution to the problem, nor whether another option exists to secure the subdomain while still permitting access to authorized users. As it stands now, the Syno Contacts subdomain is without the additional layer of protection. I didn’t use CF Application Access for the Syno Calendar service, and may have to skip it for the Syno Contacts service as well until a better solution presents itself. Again, I might be missing something, but this works for the moment and still protects the host Syno IP using CF.