Brotli over HTTP not in use

I’ve been in touch with CloudFlare support directly about this already and was recommended I go here to have it discussed.

The issue is that Brotli isn’t being used over http connections, only https: (re-)encoding to br on the CF edge only occurs if a browser/client is using TLS to connect.
I am aware that five years ago, when Brotli was first introduced for web traffic, there was an interop issue with some middleware boxes trying to use shared compression dictionaries (SDCH) and not being smart enough to recognize that responses were already compressed with Brotli (or any other non-deflate/gzip encoding); https was pretty much passthrough for those boxes because it was end-to-end encrypted. But SDCH has long since been deprecated.
In that situation the decision to enable Brotli only over https was of course made to prevent breakage, but it has since also been used as a vessel to disadvantage plaintext connections “to further the encrypted web”.
While I have no issue with promoting encryption where desired, I do think there’s no reason whatsoever to continue denying people using http the denser compression and bandwidth savings that Brotli offers.

In my talks with support, which went into some depth (thanks, Gabe!), some concerns were brought up by CF engineers, but I don’t really see an issue with any of them. I’d like to engage all you fine folks here and see what you think, and hopefully the CF engineers will also take notice and revise their position on it.

  1. There was concern about compression side-channel attacks, specifically CRIME and BREACH, if Brotli were enabled on http alongside https traffic. However, these attacks work by measuring compressed stream sizes for injected data, so they are equally usable regardless of which compression method is in play (provided the server side is set up in a vulnerable way that makes this a concern). Brotli is in the exact same position as gzip, deflate, or any other compression method: there is literally no greater or smaller security risk in providing brotli-compressed content over HTTP than in providing gzipped content over HTTP (which you do). See [1] for a white paper outlining these attacks and the general applicability to any compressed response; any compression that results in differing compressed sizes for guessed data may enable exploitation of vulnerable servers. Please note that for these attacks to work, some prerequisites must also be met: the attacker must be able to inject data into a user’s requests consistently (both http and https!), and the server must serve critical, session-unique data in an equally consistent manner. A plaintext MitM alone will certainly not allow this, and forward-secrecy cipher suites on the TLS connection being attacked make it considerably harder still.
  2. Brotli is also not significantly different from other compression techniques. Brotli is a general-purpose, lossless data compression algorithm that uses a variant of the LZ77 algorithm, Huffman coding, and second-order context modelling. gzip/deflate is likewise LZ77-based with Huffman coding, and the resulting streams are very similar, albeit that thanks to the context modelling (leveraged during compression) the data will be more densely packed in the case of Brotli. [2][3]
  3. I’ve also been in touch with Mozilla about this (who were one of the entities pushing Google-sourced Brotli to the web), echoing other users who have asked about it. However, Mozilla has clearly indicated that its continued use of Brotli only over https is very much a way to “prefer advancing [the] encrypted web”[4], by explicitly denying users the benefit of denser compression when content is transferred over http. Mozilla has also categorically refused to reconsider this approach, although users have requested it more than once. There’s even an open bug (submitted by a Mozilla employee, at that) for enabling it over plaintext connections, with a patch attached, that has gone stale. [5][6]
  4. Enabling this on CloudFlare’s end will not change anything for end users who run Chrome or Firefox with default settings for content-encoding, precisely because it, like TLS, needs client-server agreement on which compression is used. As long as clients (browsers) do not indicate in their requests that they accept Brotli encoding (which is currently the case for Chrome-based browsers and Firefox over http), the Brotli compression method will never be negotiated, and nothing will change for them. But by refusing the option on its side, CF is enforcing the smallest common set and preventing more efficiency (and bandwidth reduction) in clients that do accept it: no matter what a client indicates it supports, br will never be used without agreement from the server side.
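To make the point in item 1 concrete: the size oracle behind CRIME/BREACH is a property of compression itself, not of Brotli specifically. Here is a minimal sketch in plain Python (stdlib zlib standing in for any compressor; the secret, filler, and guesses are all illustrative):

```python
import zlib

# Hypothetical page: a session-unique secret plus attacker-injected data
# reflected into the same compressed response.
SECRET = "secret=1234567890"
FILLER = "<p>some ordinary page content to pad the response a little</p>"

def compressed_size(injected: str) -> int:
    """Size of the DEFLATE-compressed response for a given injection."""
    body = f"<html>{injected}{FILLER}{SECRET}</html>".encode()
    return len(zlib.compress(body, 6))

# A guess sharing a long prefix with the secret compresses smaller than a
# non-matching guess, because LZ77 back-references replace the repetition.
# The size difference is the side channel; which compressor produced the
# stream (gzip, deflate, brotli, ...) is irrelevant to the principle.
matching = compressed_size("secret=12345678")
wrong = compressed_size("secret=abcdefgh")
print(matching, wrong)
```

The same experiment with a Brotli library in place of zlib shows the same size gap, which is why the attack surface is identical for any content-encoding.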

Despite these points there still seems to be a lot of resistance to enabling Brotli over http, which I am trying to understand. There is no security issue with doing so that I am aware of (no more than having gzip/deflate enabled, which is already the case!), there will be no impact on mainstream browsers because both server and client must agree to use it, and the restriction just seems arbitrary at this point.
Please help me understand this reluctance to have better performance and efficiency over http.

Thanks for reading to the end. :slight_smile:

[1] BREACH attack white paper (PDF)
[2] RFC 7932; see e.g. its heavy reliance on [3] and that it is a direct derivative (§1.2)
[3] RFC 1951
[4] Bugzilla bug 1719155
[5] Bugzilla bug 1670675
[6] Bugzilla bug 1675054


Cloudflare aside, to support and provide Brotli on your own web server, the server needs to support both HTTPS and HTTP/2, as far as I remember; at least that was the case with Nginx.
Furthermore, web browsers nowadays push HTTPS-only more and more.

It can be discussed.

I remember a few topics here about this “compression level” that Cloudflare uses. Kindly, may I ask you to try to find some more information using :search:

In my opinion, you could disable HTTPS on Cloudflare and go HTTP-only with the Brotli option enabled. I haven’t tried that combination yet, so I cannot confirm it works.
Regarding compression, nowadays I really do not care that much. Compress it on the end host, not over the network; why bottleneck the network for it?

Not criticising, rather trying to understand more of it.


With every line of code there is a security risk. From OpenSSL down, there is a history of security risks being introduced, including in redundant elements of the code base that nobody uses.

There is no browser that supports Brotli that does not also support at least TLS 1.2. The web is predominantly TLS at this stage, and moving slowly towards defaulting to HTTPS.

You have said that neither Chrome nor Firefox support this anyway, and in my brief test Safari does not either. I struggle to see where the users who want Brotli over HTTP are coming from.

I do not think that this is the case. Code always has risk, the difficulty is in identifying and quantifying the scale of the risk. But in this case it does not matter how small the risk is, the benefit is essentially zero.

So you are essentially asking “why will Cloudflare not spend developer resources working on a feature that nobody will use, and accept the unknown risks associated with doing that”. I think the whole industry has agreed on the answer.


This has nothing to do with the link between CF and the origin server. This is purely about the connections between the edge and web clients.

Also, CF simply doesn’t offer brotli from the edge over http. Nor does it offer brotli for origin requests (they could if they wanted to…), so disabling https will do nothing; CF will just use gzip everywhere (if the client offers it). CF recompresses to brotli on the cloud edge, but only for https. And that is my issue here.

it can be discussed

Absolutely! I’ve already researched this at length because I’m well-versed in network security and wanted to know if there was any problem with brotli streams from a security point of view – and I have not been able to find any. gzip and brotli are both LZ77+Huffman compressed streams and their structure is almost identical. Please provide me with reference material showing how gzip streams are secure where brotli streams are not, and I’ll be happy to revise my stance.

Generic broad-stroke assertions about security versus code size really aren’t the question here. Brotli is open source and has, in the broadest sense, been audited and approved by Google (who created it) and Mozilla (who uses it). Your issue seems to be with the security of the brotli compression library, which is completely irrelevant here (unless you think that compressing or decompressing the same data over different encapsulating network protocols is somehow significantly different…?).

The question here is about network security, not binary vulnerabilities in the programs using the compression (which are already exposed over https anyway), so it’s really beside the point.

The question is also not whether browsers that support Brotli also support https; I’m not even sure why you think that is relevant. To the best of my knowledge, all browsers on the market that support brotli and TLS 1.2 already have all the plumbing and capabilities to support brotli over http (I know this for a fact about Firefox at least, which needs only a single preference change to enable it – you can use this right now).
On top of that, CF doesn’t have to invest anything in terms of developer resources, because they already use nginx’s brotli encoding capability. There, too, it is only a configuration change to enable it over http. I know this for a fact because I have specifically tested it with an installation of nginx built with brotli support; in both Firefox and the fork I am developing, employing Brotli over http was a matter of a few minutes of editing the configuration and checking the results.
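For reference, with nginx built against the ngx_brotli module this really is just configuration. A hedged sketch of a plain-http server block (the server name and paths are placeholders, and directive defaults may differ per build):

```nginx
# Sketch: nginx built with the ngx_brotli module (google/ngx_brotli).
# Server name and root are placeholders.
server {
    listen 80;
    server_name example.test;

    brotli on;                  # compress responses with br on the fly
    brotli_comp_level 6;
    brotli_types text/css application/javascript application/json image/svg+xml;
    brotli_static on;           # serve pre-compressed .br files if present

    root /var/www/html;
}
```

Nothing in the module itself is tied to TLS; whether br is offered on port 80 is purely a policy choice in the configuration.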

“The whole industry” clearly hasn’t agreed on the answer, but the answer will be enforced if the biggest market players and the biggest CDNs make using brotli over http impossible and a non-starter.
What I am asking of CF is to enable the possibility of using brotli compression over http. Maybe Chrome and Firefox will follow, maybe they won’t, but from all the information available to me there is no reason besides arbitrary decisions (driven not least by agendas rather than technological reasoning) to have this segregation between http and https when it comes to compression efficiency.
If the server side doesn’t allow it, then it indeed becomes difficult to see where users will come from, because it’s hard-blocked on that side. However, I am certainly planning to enable it by default on the client side in my own general-use browsers (an estimated million active users or so), because I see no reason to pass on brotli whenever a server offers it. Any other application (browser or otherwise) that uses http transport could easily do the same if it supports brotli.

How widespread https is isn’t really relevant either. I can look at my own CF zone analytics and see that of the traffic flowing through CF to my websites (which is not everything), still about 15–20% is not using https, regardless of my employing upgrade-insecure-requests and similar tactics to enable secure connections where desired. That is not nothing. So yes, all that traffic is currently being denied up to roughly 20% bandwidth savings.

That is not the point I was trying to make. My point is that adding any code to an application might introduce a vulnerability. In this case there are no browsers that support the feature, and as a result you would invest resources in something that would get no traffic.

Because at present, if Cloudflare did Brotli over HTTP no browser would use it. But I think you are looking from the perspective that if you can get one of the major players to flip the switch then that argument would disappear. Which is true, but as HTTP is on a significant downward path, the feature would have limited life in any case.

Changing the shade of blue on the dashboard requires a non-zero amount of resources. It all needs to be tested, data gathered etc.

While Cloudflare have used Nginx heavily, that dependency is reducing over time. I cannot find a reference at this point, but their CTO has stated it several times.

All the major browsers exhibit the same behaviour, and without them nothing can happen on the server side (and vice versa). The Mozilla bugs you linked earlier seem to say that the target use case is developers working on server-side Brotli implementations.

Cloudflare Radar currently says HTTP stands at 16%, and Google says about 4% of the requests they see are HTTP, so the figures vary significantly. But the trend is clear regardless of the source or the numbers.

Looking at Can I Use, upgrade-insecure-requests appears to align closely with Brotli support, so most of your users who don’t process your CSP directive are not using br anyway. Perhaps try “Always Use HTTPS” to get that 20% moving in the right direction.


Nobody has to add any code. The code is already there; it is already being used over https.

Be that as it may, there’s no reason at all not to use what is, for all intents and purposes, a free win for the people being served (and who actually pay CF for their service).

Oh, you mean Chrome, Chrome, chrome chrome, chrome and chrome? Oh and the Google-funded controlled opposition (Mozilla)? Well, yeah. But CF is also serving other browsers and applications.
In a monopoly-dominated market you really shouldn’t treat what the largest marketshare holders do as the determining factor for what options you offer, since you’d just end up blindly serving only the biggest stakeholders. But that’s actually a discussion for a different thread.

My concern here is that there is demand for this feature (or this discussion wouldn’t exist) and there is no technical reason why not to enable it.

When you’re talking about a feature that is already enabled and in use, doesn’t require UX considerations, and doesn’t require data gathering, this kind of reasoning really doesn’t apply. But yeah, sure, it’s non-zero. Someone actually does have to make the config change and test that it’s working as intended.

This actually clearly underlines my point, thank you, that Brotli was introduced at the time when the push for https was made in earnest, and in fact was and still is being used as a discriminating factor against plain http by making it part of the “exclusive https club”. :stuck_out_tongue_winking_eye:
The point is, once again, that users who aren’t using https despite the directive are doing so by choice, and should be able to use br, but because of CF’s decision to refuse it, can’t.

This entire discussion needs some further clarification, as there are 3 places that Brotli can be used:

  1. Brotli can be used as stream compression in the TLS transport. As we are discussing HTTP, I am assuming that we are not talking about transport.

  2. Brotli can be used for HTTP transfer-encoding. This is message body encoding. This is a hop-by-hop header, and it is up to any middleboxes along the way to implement and negotiate it. Browsers likely shouldn’t be using this unless they can guarantee they are talking directly to the server. Cloudflare can do whatever they want with this.

  3. Brotli can be used for HTTP content-encoding (not to be confused with content-type!). This is also message body encoding. This is a negotiation between the server and the client. Cloudflare shouldn’t be messing with this!

Which is the case in discussion? If it’s 1 or 2, then there is no problem. If it’s 3, then that’s effectively request tampering.

@matt89 This is in fact about 3, content-encoding. (the http accept-encoding/content-encoding negotiation)

In CF’s case, the CF edge acts as a server to the web clients and is therefore free to use a different content encoding between itself and the connecting client than between itself and the origin server (where CF acts as a client). It changes neither the nature nor the content of what is served if it simply swaps one compression scheme for another.
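The swap itself is mechanically trivial. A sketch in Python, using stdlib gzip and raw deflate as stand-ins for the origin-side and client-side encodings (the principle is identical with br; the body text is illustrative):

```python
import gzip
import zlib

# Origin responded with Content-Encoding: gzip.
origin_body = gzip.compress(b"<html>hello from origin</html>")

# The edge decodes the payload once...
payload = gzip.decompress(origin_body)

# ...and re-encodes it per the connecting client's Accept-Encoding.
# (Raw deflate stands in for brotli here; the payload is untouched.)
client_body = zlib.compress(payload)

# Swapping encodings never changed the content itself.
assert zlib.decompress(client_body) == payload
```

The client-facing and origin-facing encodings are negotiated on two independent connections, which is exactly why the edge could offer br to clients regardless of what the origin sent.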

If you consider any changes to the request and response to be tampering, then yes, it is, and CF does this by design by being a sanctioned man-in-the-middle.

In that case there is no reason why it should be disabled for http. If anything, the security issues were around using content/transfer encoding in conjunction with TLS, i.e. the security issue is for https, not http.

I’d also be interested in Cloudflare’s basis for this. Maybe they only use a common (e.g. chunked) content-encoding to prevent storing the same assets multiple times, once per encoding?

What if the file is excluded from caching? What is the content-encoding header sent by Cloudflare to the origin? It’s definitely weird for them to do this, especially as they support it on https (where the security issue lies).

Maybe they only use a common (e.g. chunked) content-encoding to prevent storing the same assets multiple times, once per encoding?

Then they also shouldn’t be doing this for https, but they clearly do. Client to CF-edge traffic will be encoded uncompressed, deflated, gzipped, or br (https only) depending on what the client indicates, and that is completely disconnected from the requests the CF edge makes to the origin server, which negotiate their own content-encoding. I also don’t think a request whose accept-encoding differs from a cached copy’s would be a cache miss (recompressing to an accepted encoding would be cheaper than making an origin request), but even so, that shouldn’t be a reason.
CF support already indicated they never use br for origin pulls, so any br-encoded content has to be (re-)encoded at the edge anyway.

Sounds even more like they intended to disable brotli for https (given the BREACH vuln) and inadvertently disabled it for http instead… oops :slight_smile:

Had another thought: the stated reason was that proxies/middleboxes don’t support the content encoding. Why is this an issue? Because they are inspecting content! These middleboxes/proxies are basically doing deep packet/content inspection (otherwise they wouldn’t even be looking at the body and wouldn’t have a problem).

By disabling the content encoding that makes these snooping agents choke, Cloudflare is effectively facilitating the snooping on non-TLS requests. I wonder if a government agency requested them to do this. The baseless reasoning given seems to suggest something underhand, IMHO.

If CF wanted to give people access to data they could do so directly, as CF has full access to all data flowing through them (http and https alike).
But I don’t think it’d be very productive here to wildly speculate about why CF has made this decision or why there is so much resistance to what to me seems a clear win for everyone.

Let’s focus on the technical details instead of potential conspiracies, shall we?

I’m 100% sure this wasn’t a mistake. Especially since it’s been fervently defended by their engineering team as a deliberate choice.

No, CF specifically says they don’t do that in their T&C. They even go so far as to say that they simply cannot access the requests as they pass through the black box. This is their “guarantee” that they aren’t snooping on our traffic.

The technical details ONLY point to facilitating MITM content inspection of non-TLS endpoints. They even state that in their email response to you.

So either Cloudflare engineers are idiots, or they have their hands tied. Given how awesome they are with everything else, I’m expecting the latter.

Don’t confuse “legally cannot” with “technically cannot”, please. The fact that CF is a sanctioned MitM and there is no end-to-end connection (nor encryption, for that matter) from the client to the origin server means they have access to all data flowing through the CF network. Period.

But, like I said, the reasoning you’re poking at is irrelevant here. That kind of speculation is nice but has no bearing at all on the objective reasons I brought up for enabling brotli. If “their hands are tied” by some unknown overseeing entity forcing inspection of their traffic (https or not), then CF’s team can be as willing as they want, but I still could not trust them to the level I need to, because there’s no transparency.

So far I’ve yet to see any technical, objective reason why we can’t have brotli enabled over http. They aren’t considering it right now and will not tell me what circumstances would have them re-evaluate it, aside from “staying informed of the community discussions and discussions about it on the broader web”.
For the latter I don’t know what else I can do, because I’m not a media influencer, Mozilla has already slammed the door in my face, and Google certainly won’t be listening to anything I have to say.

I cannot agree on that one due to:

Full (strict) ensures a secure connection between both the visitor and your Cloudflare domain and between Cloudflare and your origin web server. Full (strict) support SSL hostname validation against CNAME targets.


Strict (SSL-Only Origin Pull) instructs Cloudflare’s network to always connect to your origin web server using SSL/TLS encryption (HTTPS).

And of course, a good proportion of people still do not have an SSL certificate and an established HTTPS connection to their origin, meaning Cloudflare’s only way to connect to the origin host/server is via HTTP, while the requests to the visitor are represented as “secure” over HTTPS (that’s the consumer’s responsibility, who controls this option via the Cloudflare dashboard → SSL/TLS tab for his/her domain).

Cool, but the NSA and other agencies “over the ocean” don’t? :grinning_face_with_smiling_eyes:
We could sniff it out too, but would end up with encrypted, unintelligible data. Could be I am wrong.

Patience; maybe in some time you will get the right answer from the Cloudflare team about this.

What about a web server running Apache and “deflate”?

For example, a user-agent accepts something like Accept-Encoding: gzip, deflate, br

Should Cloudflare send three requests to the origin, and then cache that file in all three encodings for the different Web browsers and user-agents?

I also think there should be some correlation with the client’s user agent (again, here we are talking about Web browsers), which makes the request containing the Accept-Encoding header.

What about Content-Encoding and Content-Type?
What if someone tries to abuse this, sending content in one MIME format while browsers “recognize” it as another, and it is actually malware or malicious code, even over HTTP?

Would you serve the file from your web server as a static file, or leave it for the end-user to decide, or not?

Again, do you have an example of running and serving Brotli over HTTP without being proxied via Cloudflare?
Is it working?

I am sorry, but I have to warn you to please be kind and stay in the good tone due to the ToS and Community violation rules which are being applied here:

I see you already posted it here:

If you are having trust issues as stated in the above topic, then just do not use Cloudflare.

For being a community regular, you seem to have very little understanding of how CF actually works.
Full (strict) means that CF’s origin pulls need a fully verifiable, authenticated TLS connection to the origin server (i.e. using a publicly accepted, CA-signed certificate chain).
It does not mean that the client connects directly to the origin server in any way. That’s not how CF works and CF would not be able to perform its function that way.

In the tls/tls situation? Not in decrypted form, no. But CF does.
CF makes a request to an origin server. That data is sent to CF’s servers (either over http or https), and CF will have the decrypted form of that data available to serve to any client requesting it from the cloud edge. Only one origin request is necessary for it to be cached at the cloud edge. From the edge it will then be served to end users (with any choice of compression, over http or https, depending on how it’s requested). CF is a sanctioned man-in-the-middle: it requests from the origin as a client, and serves to the end user as a server. Inside CF, that data is accessible.

This is also why certificates on the origin server and on the cloud edge can be different as data will be re-encrypted by the cloud edge to be served over https (or not, depending on the client request). Similarly, it will be served with a content-encoding that the end user specifies it supports and CF agrees on using.

Nope, UA sniffing isn’t needed. See http accept-encoding header spec.
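For completeness, the negotiation really is plain header parsing. A minimal, hand-rolled sketch of a server picking an encoding from Accept-Encoding with q-values (not a full RFC 7231 §5.3.4 implementation; edge cases like `*` and `identity;q=0` are skipped for brevity):

```python
def pick_encoding(accept_encoding: str, supported=("br", "gzip", "deflate")) -> str:
    """Pick the highest-q encoding the server also supports.

    Minimal sketch of Accept-Encoding negotiation; no UA sniffing needed.
    """
    candidates = []
    for part in accept_encoding.split(","):
        token, _, params = part.strip().partition(";")
        q = 1.0
        if params.strip().startswith("q="):
            q = float(params.strip()[2:])
        if token in supported and q > 0:
            candidates.append((q, token))
    if not candidates:
        return "identity"
    # Highest q wins; server preference order breaks ties.
    best_q = max(q for q, _ in candidates)
    for enc in supported:
        if (best_q, enc) in candidates:
            return enc
    return "identity"

print(pick_encoding("gzip, deflate, br"))      # server prefers br at equal q
print(pick_encoding("gzip;q=1.0, br;q=0.5"))   # client's q-values downrank br
```

Nothing about the underlying transport (http vs https) appears anywhere in the negotiation; whether br is offered on plaintext connections is purely server policy.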

They are 2 entirely different things.

Absolutely, on both counts. Here, poke at a test server if you insist: test server link that will serve br-encoded content over http (remember to enable it in your browser first if you use e.g. Firefox). That’s a bog-standard nginx with br enabled.
If that same site would be served through CF, the most you’d get is gzip (over http).

Please watch your tone. While you might not be factually wrong and Cloudflare does not offer end-to-end encryption (and the whole SSL topic on Cloudflare is a joke anyhow, so much for making the Internet more secure), it still is not necessary to phrase criticism in that way. Particularly towards a forum regular - as you rightly observed - who volunteers his free time here on the forum to help others.

There was most likely a misunderstanding and @fritexvz was rather pointing to the fact that it is technically possible (even if it is only a fraction of setups) to have a properly secure setup where all transport is encrypted.

Yes, (unless you are on Spectrum) there is an obligatory decryption taking place on the proxies, and that’s where I agree with you, but it’s not only what you say but also how you say it.

Anyhow, this thread seems to have gone a tad off-topic, from a question about why a certain compression algorithm requires SSL to something a bit more tin-foilish about whether Cloudflare has black sites where it interrogates each byte :wink:


It was not my intention to offend. It was just an observation and my expression of genuine surprise.
As for volunteering free time to help others, I commend that, absolutely, but if they do, I do expect them to at least understand the basics of the service; otherwise the well-intended help is just misinformation that helps nobody.

But yes, I’m all for returning to the topic at hand and not getting bogged down in touchy subjects about data integrity or speculation around it.