So I’m debating making the leap to the Business plan so that I could leverage Railgun for more performance benefits. I have an ecommerce site (rossirovetti.com) based in large part on classic ASP (stop laughing) and on IIS + MS SQL Server (again, stop laughing). One thought I had was to add a Hyper-V VM to the server, install a light Ubuntu image, add Railgun, and connect it to a loopback adapter so that it gets dynamic content as fast as possible. But I’m trying to figure out what Railgun is actually doing for me to understand whether this will help my site… is it like an nginx reverse proxy that has an Argo-like tunnel to Cloudflare? Most of my heavy content is quite cacheable, but being an ecommerce site, we have quite a bit of dynamic stuff going on, and I’m trying to figure out ways to speed this booger up even more than it is without breaking the bank. Any thoughts?
Maybe @matteo has dug into this, but it sounds like an always-on connection that caches your site and only sends compressed diffs of your content, giving the visitor what performs like locally cached (at Cloudflare POP) content.
Have you seen their description?
Thanks for the quick reply… their description doesn’t seem to break down the “how” from a technical perspective. In that link, halfway through the description, it’s marketing-speak: “Railgun uses a collection of techniques…” — “collection of techniques” is a nice giveaway that the person writing the article works in marketing. I understand; most decision makers probably need less techy stuff to make their decision. But I’m hoping to get a little more information on how it actually works, because it’d help me evaluate (based on how my site is architected) whether I’d see a return on investment by bumping to the tier where it’s offered. Does that make sense?
What about the help content on the Speed tab on the CF dashboard?
Ah, going back to the original link already mentioned (which I hadn’t bothered with, since you said it was mostly marketing), they specifically say what I pasted, and more. Basically a local cache “on-prem”, with delta calculation, sending just the delta, and reconstructing the full response from that delta on Cloudflare’s servers. Basically it saves you transferring large responses to a customer far away, who instead gets the full response from a nearby server.
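To check my own understanding, here’s a toy sketch of that idea in Python (purely illustrative — Railgun’s actual delta algorithm and wire protocol are proprietary, and all the names below are mine):

```python
# Purely illustrative sketch, not Railgun's real protocol. Railgun keeps a
# persistent encrypted connection between an on-prem "listener" and the
# Cloudflare edge and uses its own binary delta compression; this toy version
# just shows the idea: both ends cache the previous response, only a
# compressed delta crosses the WAN, and the edge rebuilds the full page.
import json
import zlib
from difflib import SequenceMatcher

def make_delta(previous: str, current: str) -> bytes:
    """Listener side (next to the origin): encode what changed since the cached copy."""
    ops = []
    for tag, i1, i2, j1, j2 in SequenceMatcher(None, previous, current).get_opcodes():
        if tag == "equal":
            ops.append(["copy", i1, i2])             # bytes the edge already has
        else:
            ops.append(["insert", current[j1:j2]])   # only the new bytes get shipped
    return zlib.compress(json.dumps(ops).encode("utf-8"))

def apply_delta(previous: str, delta: bytes) -> str:
    """Edge side (Cloudflare PoP): rebuild the current page from cached copy + delta."""
    out = []
    for op in json.loads(zlib.decompress(delta)):
        out.append(previous[op[1]:op[2]] if op[0] == "copy" else op[1])
    return "".join(out)

# Same template, slightly different dynamic content (think: cart count changes).
old_page = "<html><body>Cart: 0 items " + "x" * 5000 + "</body></html>"
new_page = "<html><body>Cart: 3 items " + "x" * 5000 + "</body></html>"
delta = make_delta(old_page, new_page)
assert apply_delta(old_page, delta) == new_page
print(len(new_page), "byte page ->", len(delta), "bytes over the wire")
```

If that’s roughly what it does, then for pages that share most of their markup between requests, only a tiny delta should ever cross the long-haul link.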
If you’re highly cached anyway (a full-page JS app, for example) and only transfer API requests and their responses, there’s probably not much difference; it’s the same data that would need to be transferred. If, however, you always render full (and different) pages for every single user coming to your site, then the farther away your end users are (see Bandwidth-delay product - Wikipedia), the better experience this should give…
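A quick back-of-the-envelope with made-up numbers (the link speed and RTT below are just assumptions) shows why the round trip dominates for far-away users:

```python
# Assumed numbers only: the point is that a long round trip keeps a lot of
# data "in flight", and a cold TCP connection needs several of those round
# trips to deliver one big dynamic page -- which is exactly what responding
# from a nearby PoP avoids.
link_mbps = 100   # assumed end-user downlink
rtt_ms = 150      # assumed round trip, e.g. Australia <-> a US origin

bdp_bytes = (link_mbps * 1_000_000 / 8) * (rtt_ms / 1000)
print(f"Bandwidth-delay product: ~{bdp_bytes / 1024:.0f} KiB in flight")
# ~1831 KiB -- far more than a typical initial TCP congestion window
# (~14 KiB), so the first load of a large page pays for multiple 150 ms
# round trips before the pipe is even full.
```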
P.S. I don’t work for Cloudflare. Just a networking guy…
Ahh! Very helpful; thank you. There’s one aspect, though, that I’m trying to figure out, which is whether it saves any work on the part of the web server. It sounds like it may not? For example, when a dynamic page gets built, the web server is still working to create that page before the delta can be computed. So while this could speed up the user experience, there doesn’t appear to be a reduction in server workload, meaning there’s still an issue of scaling at the server level. Does that analysis sound right? Thanks for your help!
That sounds correct. Much like an NGINX reverse proxy, it’ll still hit your ASP/IIS server to generate the page before it’s diffed and compressed for delivery to the Cloudflare PoP.
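Roughly, the flow looks like this (my own simplification, reusing make_delta() from the sketch above — none of this is actual Railgun code):

```python
# The listener is just a middleman: the origin still renders every dynamic
# page in full (ASP runs, SQL queries execute) -- Railgun only shrinks what
# travels between the listener and the Cloudflare PoP.
import urllib.request

page_cache: dict[str, str] = {}   # previous response per URL, logically mirrored at the edge

def handle_edge_request(url: str) -> bytes:
    # 1. Origin does all of its usual work to build the full HTML.
    current = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")

    # 2. Only the WAN transfer gets cheaper: send a delta when a baseline exists.
    previous = page_cache.get(url)
    page_cache[url] = current
    if previous is None:
        return current.encode("utf-8")      # first request: full page crosses the wire
    return make_delta(previous, current)    # make_delta() from the earlier sketch
```

So your ASP/SQL scaling story is unchanged; what improves is the time it takes the rendered page to reach a distant visitor.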
I do not know who the Railgun guru is here at Cloudflare to call upon. Maybe @cs-cf or @cloonan know.
I believe @eva2000 is using it on his website together with Argo.
I am sure he has stats! He has stats on everything!
Yes, I am using Cloudflare Argo + Railgun on my forums and did a write-up on my Argo experience at https://community.centminmod.com/threads/Cloudflare-argo-smart-routing-in-action.17517/
Pretty much what Railgun does for dynamic non-cacheable pages.
If you’re upgrading to the Business plan, enabling Railgun is a no-brainer, though.