A little history
We've been (very) early adopters of CF, and actually helped them grow and become known through positive posts, blog entries and the like. We've been with them since pretty much the beginning, and have seen (and helped) them grow into one of the largest CDNs and the Internet traffic behemoth they are today. A few nits aside (primarily dealing with their "firewall" and the necessary balance they must strike between providing the advertised protection and not reducing functionality for proxied websites), cooperation has always been favourable. Unfortunately, over the past few years they've started focusing more on "trendy" technologies pushed primarily by the largest players in the market (like DoH and other strongly-centralizing technologies), and they seem to have lost sight of the many thousands of smaller clients they have and what those clients need, regardless of how instrumental they were to their growth in the past. We've butted heads a few times before (e.g. about the Camellia cipher), but the topic at hand here was the straw that broke the camel's back.
What is Brotli and why is it important?
Now, compression on the Web is nothing new and has been an important part of content delivery for many years. Compression happens on a per-request basis, so each file transferred from a server to a browser can be compressed or uncompressed, depending on whether it would be economical to do so (more on that later).
The methods used for many years have been the old compress (obsolete), deflate and gzip. Brotli is extremely similar to gzip but provides significantly denser compression thanks to what is called "context modelling" (I'll spare you the technical details here; feel free to look it up!). It was introduced as an optimal compression method for downloadable fonts in the woff2 font format. After that, it was adopted in 2015 by many browsers (including us) for compression of web content. You can see, I hope, why it would be important for the web to have better compression with very few drawbacks, and why Brotli is important in that context to (significantly) save on the bandwidth used for downloading web content.
When Brotli was first introduced, it conflicted with some middleware boxes (rarely used, primarily by Google) that attempted to perform transparent compression of data outside of the servers that served the content, using a method called SDCH ("Sandwich"). Unfortunately the middleware was not smart enough to recognise the new compression method and would cause breakage by "double-compressing" data that was already Brotli-compressed. Of course Google, wanting to prevent breakage on their own sites, had to deal with this. So, Brotli was restricted to data that those middleware boxes could not touch anyway, i.e. to https (the end-to-end encryption of https prevents the boxes from modifying the data). This all made good sense as a temporary measure, of course, but that temporary reason has since been repeated (now as the definitive reason!) for why Brotli-over-http would not be usable. So yes, instead of fixing the middleware to recognise Brotli, the compression was limited to https instead. Of note: SDCH (which also needs support from the browser) has since been deprecated and removed from all web clients, so these boxes are effectively RIP.
Now, why didn't that open Brotli up to http, you might ask? Cue the push for the "encrypted web". At the time Brotli was introduced, there was also an earnest push to get https everywhere, for everything. Of course, if https natively has better compression, it looks extremely good for promoting things like http/2 and other "exclusive https club" features that were being developed. I'm pretty sure a lot of the comparative studies displaying "faster https" would not have considered disabling Brotli on https for an apples-to-apples comparison with http. In my recent contact with Mozilla it became painfully clear that "advancing [the] encrypted web" is currently the only reason Brotli is not enabled in Firefox for http.
I'm pretty sure that Google has the same approach for Chrome, if not just for the fact that they tend to never go back to established implementations to change them.
Of note: "market leading" browsers deliberately don't indicate that they support Brotli over HTTP, so enabling it on a server would make no difference to them: a server has to pick from the compression methods a browser indicates it supports, and that list would exclude Brotli.
Now, I don't want to make this a discussion about http vs https, because that is completely beside the point and has been fully discussed multiple times elsewhere. The point is that in this implementation, http is unnecessarily disadvantaged compared to https in terms of efficient use of your bandwidth. I did my research here when I discovered that this unnecessary split also existed in our own implementation (inherited from Mozilla when we forked UXP), which has led to our current browser releases enabling Brotli over http when servers support it (which is pretty easy to set up, actually).
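To illustrate just how easy that server-side setup is: assuming an nginx origin server built with the third-party ngx_brotli module (Apache users have mod_brotli for the same purpose), it comes down to a handful of directives in the configuration; this is a sketch, not a drop-in recipe for every setup:

```nginx
# Assumes nginx was built or loaded with the ngx_brotli module.
brotli            on;      # compress responses on the fly (text/html always included)
brotli_comp_level 6;       # 0-11; higher is denser but slower to compress
brotli_static     on;      # serve pre-compressed .br files from disk if present
# Only compress types where it pays off; images/archives are already dense.
brotli_types text/plain text/css application/javascript
             application/json application/xml image/svg+xml;
```

These directives apply to plain-http and https server blocks alike; any client that advertises br in its Accept-Encoding header then gets Brotli, and everyone else gets whatever they asked for.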
How does CF handle Brotli?
Compression of content is handled on a request-by-request basis, per the HTTP specification. For each request, a browser indicates what compression schemes it supports for the request, and the server determines which of the options to use to deliver the content from the options presented. If there is no match, the content will be sent without compression.
This allows a server to be very efficient in choosing what, if any, compression to use for each file requested, and all a browser has to do is indicate what it supports for the server to "take its pick" from.
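The negotiation described above can be sketched in a few lines. This is a simplified illustration (the function names are mine, and wildcard `*` entries and file-type heuristics are left out), not how any particular server implements it:

```python
def parse_accept_encoding(header: str) -> dict:
    """Parse an Accept-Encoding header into {encoding: q-value}.

    Simplified: ignores '*' wildcards and malformed parameters.
    """
    prefs = {}
    for part in header.split(","):
        part = part.strip()
        if not part:
            continue
        if ";q=" in part:
            enc, q = part.split(";q=", 1)
            prefs[enc.strip()] = float(q)
        else:
            prefs[part] = 1.0  # no q-value means "fully acceptable"
    return prefs


def choose_encoding(header: str, server_preference: list) -> str:
    """Pick the client's highest-q encoding that the server also supports.

    server_preference lists the server's supported encodings in its own
    order of preference, which breaks ties between equal q-values.
    """
    prefs = parse_accept_encoding(header)
    best, best_q = "identity", 0.0  # no match: send uncompressed
    for enc in server_preference:
        q = prefs.get(enc, 0.0)
        if q > best_q:
            best, best_q = enc, q
    return best


# A Brotli-capable browser gets Brotli; one that doesn't advertise it
# (e.g. Chrome/Firefox on plain http) silently falls back to gzip.
print(choose_encoding("gzip, deflate, br", ["br", "gzip"]))  # br
print(choose_encoding("gzip, deflate", ["br", "gzip"]))      # gzip
```

Note how this makes the whole scheme fail-safe: a browser that never advertises Brotli can never receive it, which is exactly why enabling it server-side costs "market leading" browsers nothing.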
CF has started offering Brotli as a compression method on their cloud edge. The way they offer it is that they request data from origin servers and then (re-)compress it on their edge servers to serve Brotli-compressed web content to web clients. They do not request Brotli-compressed data from origin servers (for unknown reasons) and use at most gzip for that. What CF doesn't do, however, is offer that same compression to clients on http connections, which was the source of my inquiry asking them to enable it, as it seemed like a mistake.
So, CF decides which compression is being used for content served to actual users. They decide not to respond to browsers indicating they support Brotli if the connection is HTTP, but do respond if the browsers indicate they support Brotli when it's HTTPS.
What I asked was for them to also enable the existing compression (already in use!) on http. This should be as straightforward as simply enabling already-existing plumbing for http connections. Maybe even a one-liner in configuration.
The discussion that ensued initially came back with just "it breaks the web because middleware boxes", with links to old announcements - ignoring the fact that those statements were from 2015 when, yes, indeed, it was a problem in the initial phases. I asked them to take current data into account, i.e. the fact that nobody uses SDCH and the boxes talked about have long since been retired.
Apparently this got the attention of their engineering team, as I got a more elaborate forwarded response citing several things that seemed intended to end the discussion there and then, like "security issues" (initially without any detail, so I had to ask for specifics) - citing compression side-channel attacks (things like BREACH) if compression is offered over http as well as https. But I had already done my research in that respect (people who somewhat know me know that web security tends to always be at the forefront of my considerations), and these kinds of attacks (when a server is vulnerable) are effective regardless of which compression is in use, including gzip, which everyone supports everywhere. In other words, it's not specific to Brotli, and when I asked in what way Brotli would be more susceptible, I was instead asked to provide evidence: "Can you help me prove that Brotli over HTTP does not pose a greater risk of content decryption?". As anyone knows, it's not possible to conclusively prove that something doesn't happen. However, I still provided research literature (which was also requested) clearly demonstrating the very similar position Brotli takes to gzip in all respects, including security.
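To make concrete why these side channels are not Brotli-specific: BREACH-class attacks exploit the fact that when attacker-controlled input is compressed together with a secret in the same response, the compressed output gets shorter when the guess matches the secret. Any LZ-family compressor leaks this way. Here is a toy illustration of the principle (my own example, not an actual exploit) using plain DEFLATE from the Python standard library, i.e. the same family as gzip:

```python
import zlib

SECRET = "session_token=hunter2"  # hypothetical secret embedded in the page

def response_length(attacker_input: str) -> int:
    """Length of a compressed response that reflects attacker input
    alongside a secret, as the BREACH attack model assumes."""
    body = f"<p>search: {attacker_input}</p><!-- {SECRET} -->"
    return len(zlib.compress(body.encode(), 6))

# A guess matching the secret's prefix compresses better (shorter output)
# than a non-matching guess of the same length, leaking one bit per probe.
matching = response_length("session_token=h")
wrong = response_length("xqzvjwkpyfgbdmn")
print(matching < wrong)
```

The leak comes from the compressor finding a back-reference to the secret, which gzip, deflate and Brotli all do; so restricting Brotli specifically does nothing for a server that is vulnerable to this class of attack in the first place.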
At that point I was directed to the "community" to discuss it. Unfortunately that is a common tactic for big companies when dealing with "difficult questions" they don't really want to admit they are wrong about (Mozilla also had a hand in that kind of forwarding, to Google Groups, where those topics were pretty much sent to fizzle out). So... despite my better judgement (and keeping the direct ticket open) I started this discussion:
Please feel free to go and get involved in the discussion. CF has indicated they "continue to monitor the discussions on the Cloudflare community as well as the broader web" to "prioritize improvements" so it would be important to keep this topic alive (CF community discussions will be closed after 2 weeks of nobody posting. Pretty small window if you ask me).
Being shoved off
After the community discussion got to the point where I had clearly made very solid arguments for enabling Brotli (my last reply as of this writing), I was basically shoved off in the support ticket with the same old argument they started out with:
Ella, CloudFlare wrote:
We escalated this to our engineering team and they reviewed the request - the feeling is that there is a tradeoff here. The tradeoff between bandwidth efficiency and the potential for request failures due to middle boxes. Since those devices are outside the control of HTTP endpoints and very difficult to measure, observe and debug at Internet scale, we lack strong data to inform the tradeoff. These are the kinds of factors taken into consideration when prioritising development and support efforts.

...which I think isn't an unreasonable request to make; the scope of this would impact all their clients, so if they feel they need to research this more before they can enable it, then that's fine. To which I replied:

Moonchild wrote:
I'm really disappointed (but not entirely surprised...) that your engineering team keeps clinging to their initial statement. The risk for middleware boxes causing issues is effectively nil, because SDCH compression was removed from Google Chrome, and other Chromium products, in version 59 (2017-06-05).
So if your engineering team is going by statements from 2015, they are basing their conclusions on technology that is no longer present in any web client. We don't support it, Mozilla doesn't, Google doesn't, Apple doesn't. So the tradeoff is between bandwidth efficiency and a non-existing use case; I wouldn't call that much of a trade-off at all.
If you feel more research is necessary then please perform the research.
"Intent to Unship: SDCH".
https://groups.google.com/a/chromium.or ... Ql0ORHy7sw
However, the response to that was basically a flat-out refusal to even consider it, regardless of my valid points:

Ella, CloudFlare wrote:
While we acknowledge you make valid points we unfortunately can't commit to supporting Brotli on HTTP at this point in time: we have to weigh the pros and cons of implementation, risk of breakage with the potential benefits. As it stands we will not support Brotli in the near-term but we will continue to actively follow any discussion on this topic in the community. If circumstances change we are happy to revisit in the future.

and

Unfortunately we can't comment on exactly which circumstances would make us re-evaluate the priority of brotli for HTTP.

So why is that? Why hold off on this and basically tell me to go away? What is the real (undisclosed to me) reason they won't consider even the possibility of Brotli being enabled for clients who support it? Remember, if browsers don't indicate their support - and neither Chrome nor Firefox do so - then it won't be in use even if the server has that support.
I can only conclude there's some agenda here that I'm being kept in the dark about, if they aren't even considering looking into this concern at all...
Conclusion: trust issues
This leads to my final conclusion of this very long post (sorry if you feel it's too much, but I wanted to get all of this out, and I'm still being rather concise; see the CF community discussion for more detail). CF, being in the position they are as a sanctioned MitM "reverse proxy", has access to all traffic flowing through palemoon.org hosts that have CF enabled (including https). CF also has full control over our DNS zones, which is pivotal to being reachable on the web, as well as the key cornerstone for many security measures like CAA, SPF and DKIM, and for our e-mail for the domain. Thankfully I've been smart enough to not also use them as my registrar or as the CA for my SSL certificates.
That is a position that requires a massive amount of trust in CF that they are doing everything above board. A position where they can effectively make or break the project if they decided to.
That level of trust requires that no secrets are kept. It requires a lot of transparency from them that they are clearly unwilling to provide any longer (CF used to be extremely transparent in their operations, but clearly that is no longer the case).
As a result, I no longer feel confident that CF can be trusted with control over the project's web presence (or any of my owned domains, for that matter), and we're leaving them.
While some people dislike CF for various (socio-)political reasons, as you can see this had nothing to do with my decision here. It's a simple matter of being treated poorly and eroding my trust to the point where I simply cannot justify continuing to use them in such an essential and pivotal position to the project.
Yes, this will likely result in less performance (e.g. when downloading a new release update). So be it.
CF has refused to consider our request to enable Brotli over http, despite overwhelming (and undisputed) evidence that it would be perfectly safe and beneficial for everyone on the Internet as a whole. What's more, the way this was handled displayed a severe lack of objective insight and clear favouritism for private agendas over technological advancement.
The end result is a loss of trust in a company that requires full trust from its clients due to the nature of the services they provide, and we'll stop using them.