I’m not really sure how that would be possible due to networking (and to some extent, physics), although if you can do it, I would be… Impressed.
I have 1Gb/s minimum across my wired network, with a bit of port bonding where I find a connection is bottlenecked, so 1G ethernet is the limitation we actually hit. Wired desktops are all connected at 1Gb/s, and laptops and modern iPhones get a comfortable 400Mb/s over Wi-Fi (and I have docking stations, so if you plug in to use desktop monitors and keyboards you're already on wired ethernet).
My ISP connection offers me 800Mb down (real speed), but only about 40Mb upstream. And the 800Mb doesn't matter here: if traffic needs to go back out and in again, we're talking about sharing a 40Mb uplink across the entire internal network.
I’m hitting Cloudflare in a different country, about 40ms away. To compare latency, internal on the LAN:
rtt min/avg/max/mdev = 0.071/0.165/0.213/0.044 ms
External to Cloudflare:
rtt min/avg/max/mdev = 33.740/41.518/53.519/6.056 ms
Cloudflare does have a DC in my city, but it doesn’t peer with either of the major ISPs here, so that doesn’t actually help on a daily basis, and 40ms is a reasonable number. I’m old enough to remember when 40ms didn’t even get me to my ISP’s routers and Gb/s was definitely a typo.
The user experience of accessing an internal webapp bounced through Cloudflare is quite usable much of the time, often nearly identical to being fully internal. Often, though, not always: even at only 40ms to Cloudflare I can spot the difference with some applications, but it remains perfectly usable.
But even basic things like uploading photos from my iPhone to the NAS so that I can work on them from a computer are substantially slower (40Mb instead of 400Mb, which matters when you are holding the device and waiting…)
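The back-of-envelope math on that hairpin is stark. A quick sketch (the 5 GB batch size is my assumption for illustration, not a figure from above):

```python
# Back-of-envelope: ideal transfer time for a photo batch at each link speed.
# Ignores protocol overhead and contention, so real times are a bit worse.
def transfer_seconds(size_gigabytes: float, link_megabits_per_sec: float) -> float:
    """Time to move size_gigabytes over a link of the given speed."""
    megabits = size_gigabytes * 8 * 1000  # GB -> Gb -> Mb (decimal units)
    return megabits / link_megabits_per_sec

lan = transfer_seconds(5, 400)  # Wi-Fi straight to the NAS
wan = transfer_seconds(5, 40)   # hairpinned out and back through the 40Mb uplink
print(f"LAN: {lan:.0f}s, via WAN: {wan:.0f}s")  # → LAN: 100s, via WAN: 1000s
```

Under two minutes versus closer to twenty, while you stand there holding the phone.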
And if we are talking about routing all IP traffic for laptops too, not just HTTPS? We use a bit of SMB2, where 40ms of latency is noticeable, but even on SMB3, where a bit of latency is okay, shared 40Mb instead of 1Gb is… A big downgrade.
Ultimately (and vaguely sarcastically, sorry), unless Warp and cloudflared tunnels can shove 1Gb/s through a 40Mb/s link, I’m going to be using split DNS/routing to keep internal traffic internal.
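For what it’s worth, the split-DNS half of that is just a couple of override lines in whatever resolver the LAN already runs. A dnsmasq sketch, with made-up names and addresses (`app.example.com` and `10.0.0.20` are placeholders, not my actual setup):

```
# /etc/dnsmasq.conf on the internal resolver (hypothetical names/addresses)
# LAN clients asking for the app get the internal reverse proxy directly...
address=/app.example.com/10.0.0.20
# ...everything else falls through to a normal upstream resolver, so the
# same hostname still resolves to Cloudflare from outside the network.
server=1.1.1.1
```

Internal clients never leave the building for internal traffic; everyone else gets the tunnel.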
Now if I could get local clients to realize they’re on the LAN and talk to a local cloudflared instead of running internal traffic out and back in, we might have something usable for mobile devices. This would put everyone on a shared 1Gb/s instead of shared 40Mb/s. Still a downgrade for wired devices used to having dedicated 1Gb to their switch, but nothing on Wi-Fi would notice.
Or if you can brow-beat my internet provider into offering symmetric connectivity at a reasonable price point? This is not your job, but we work with the internet connection we have, not the internet connection we can’t afford.
Split DNS works well enough and is the next best thing for anyone who is using Warp and tunnels widely but still needs a fast internal network.
When I was routing all traffic through Warp+, I had issues with iOS connectivity dropping for far too many seconds when switching between networks: every time my iPhone hopped on or off Wi-Fi, it was several seconds before I had working data again. It got to the point that I kept it disabled on mobile networks and only enabled it on foreign Wi-Fi, but that may well have been addressed by now.
I run my own WireGuard VPN and the same thing happens periodically, but it is infrequent and definitely not on every transition in or out of Wi-Fi range. And of course I have it configured to turn off on the SSID at HQ, which is the most common network change.
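The split-routing side of that is just the `AllowedIPs` line in the client config. A sketch, with placeholder keys, hostname, and address ranges (adjust to your own subnets):

```
# wg0.conf on a roaming client (keys, endpoint, and ranges are placeholders)
[Interface]
PrivateKey = <client-private-key>
Address = 10.8.0.2/32
DNS = 10.0.0.1            # internal resolver, so split DNS keeps working remotely

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
# Only internal ranges go through the tunnel; everything else stays local.
AllowedIPs = 10.0.0.0/8
PersistentKeepalive = 25
```

The turn-off-on-this-SSID part lives in the client app rather than this file (the iOS WireGuard app calls it on-demand activation), so the config above only controls what gets routed, not when the tunnel is up.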