HTTP/2 and origin server?

It appears that Cloudflare does not support HTTP/2 to the origin server.
While reading this forum I found confirmation of this, and the suggested answer is to use CF’s Railgun product instead.

Is the decision not to use HTTP/2 a technical design choice, or a business one to push users towards Railgun?

I am disappointed that H2 is not supported, as it is clearly a faster experience in general. Perhaps it’s a feature that will come one day.


I am pretty sure it will come one day, though by then it will probably have been superseded by HTTP/3, I guess.

To be honest, I somewhat doubt all that superiority of HTTP/2. Sure, in theory it performs better, and there is a good chance it will do so in synthetic benchmarks as well, but the real world can be a bit different.

Just a quick test loading the same page over HTTP/1.1 and HTTP/2:

I am not saying all the ideas of HTTP/2 are bad, just that it (along with HTTP/3) is a tad too hyped for my taste :slight_smile:
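If you want something more reproducible than a one-off browser test, this kind of comparison can be scripted. Below is a minimal Go sketch (assuming Go 1.14+ for `httptest.Server.EnableHTTP2`; `negotiatedProto` is a made-up helper name) that stands up a local TLS test server and reports which protocol the stock client actually negotiates:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// negotiatedProto (hypothetical helper) starts a local TLS test server,
// optionally with HTTP/2 enabled, and returns the protocol the default
// Go client ends up speaking to it via ALPN.
func negotiatedProto(enableH2 bool) string {
	srv := httptest.NewUnstartedServer(http.HandlerFunc(
		func(w http.ResponseWriter, r *http.Request) {
			// Echo the protocol the server saw for this request.
			io.WriteString(w, r.Proto)
		}))
	srv.EnableHTTP2 = enableH2
	srv.StartTLS()
	defer srv.Close()

	resp, err := srv.Client().Get(srv.URL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return string(body)
}

func main() {
	fmt.Println("h2 enabled: ", negotiatedProto(true))  // HTTP/2.0
	fmt.Println("h2 disabled:", negotiatedProto(false)) // HTTP/1.1
}
```

Against a real site you would instead inspect `resp.Proto` from a plain `http.Get`; the local server just keeps the experiment self-contained.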

As I understand it, HTTP/2 also requires some server-side optimizations to get the full benefit of the protocol, and not all of them will be realized by a caching proxy sitting in the middle.


You referring to ? Honestly, Railgun and HTTP/2 do different things (though both attempt to optimise the connection between CF and origins), so I’m not sure why the documentation suggests Railgun as a replacement for CF-to-origin communication over HTTP/2 @cloonan ?? Edit: OK, Railgun’s multiplexed connection, which resembles HTTP/2, might be what they’re referring to.

Railgun requests are multiplexed onto the same connection and can be handled asynchronously. This means that Railgun is able to handle many simultaneous requests without blocking, maximizing the use of the TCP connection.

Railgun uses a collection of techniques to accelerate and cache these previously uncacheable web pages so that, even when the origin server must be consulted, web pages are delivered quickly. That even works for rapidly changing pages like news sites, or for personalized content.

Railgun works on dynamic, uncacheable web pages (i.e. the WordPress-generated HTML page itself), while HTTP/2 accelerates static, cacheable assets served to an HTTP/2-capable web browser. As CF edge servers aren’t really a web browser, the benefits of HTTP/2 might not be fully utilised between a CF edge server and your origin.

Optimise CF to Origin connections via TLSv1.3

If you want to optimise the CF edge server to origin server connection right now, the best move is to ensure your origin server supports HTTPS over TLSv1.3 and has an SSL certificate in place, so you can enable Cloudflare Full/Full (Strict) SSL instead of Flexible. CF edge servers can then connect to your origin over HTTPS via HTTP/1.1 with TLSv1.3 instead of TLSv1.2.

Using HTTP/1.1 with TLSv1.3 on the origin side saves 1 RTT per new connection compared with TLSv1.2. On slow 3G mobile connections that 1 RTT can be 300 milliseconds or more!

However, the only place I am aware of where HTTP/2 is used between CF and origins is if you enable Argo and deploy Argo Tunnels.

The connection between cloudflared and the Cloudflare edge is a long-lived persistent HTTP2 connection encrypted with TLS. To keep the connection alive, cloudflared sends a heartbeat to the edge in the form of a ping frame over HTTP2. If the connection is dropped, the cloudflared client re-establishes the connection with Cloudflare. cloudflared connects to Cloudflare on port 7844.

For TLS negotiation between Cloudflare and the cloudflared client, the cloudflared client downloads a certificate for the requested domain (*.requested-domain.tld) from Cloudflare before starting. The initial login with your Cloudflare credentials through cloudflared generates this certificate. This certificate is a self-signed certificate from Cloudflare’s Origin CA service.

All packets between Cloudflare and the tunneled web server use stream multiplexing over HTTP2. In HTTP2, each request/response pair is called a Stream and given a unique Stream ID so that these streams can be “multiplexed” or sent asynchronously over the same connection.

FYI, for additional insight, here’s Nginx’s reasoning on why they didn’t implement reverse proxying over HTTP/2 to origins:

There is almost no sense to implement it, as the main HTTP/2
benefit is that it allows multiplexing many requests within a
single connection, thus [almost] removing the limit on number of
simultaneous requests - and there is no such limit when talking to
your own backends. Moreover, things may even become worse when
using HTTP/2 to backends, due to single TCP connection being used
instead of multiple ones.

On the other hand, implementing HTTP/2 protocol and request
multiplexing within a single connection in the upstream module
will require major changes to the upstream module.

Due to the above, there are no plans to implement HTTP/2 support
in the upstream module, at least in the foreseeable future. If
you still think that talking to backends via HTTP/2 is something
needed - feel free to provide patches.

Maxim Dounin


This topic was automatically closed after 30 days. New replies are no longer allowed.