Compression makes some files bigger

I use Cloudflare on all of my sites, and I recently started optimizing images (JPG/PNG/GIF/WebP) myself to get the best results.

I measured the output and saw that Cloudflare compresses every file, no matter whether compression makes the file bigger or smaller.

If it makes the file smaller, great.

If it makes the file bigger, then people who already optimize their sites natively are getting worse results with Cloudflare than without.

A little example of what I'm talking about:
[screenshot: CF Compression]

Uncompressed, this file (the one on top) is 2,132 bytes (100%).
Compressed, the same file is now 2,215 bytes (103.893%).
That means compression made this file 3.893% bigger.

This happens with all the .png files I have already optimized to the maximum.
It also happens with all .woff2 files, as they are already maximally compressed.

I think that for every file Cloudflare compresses, it should do a simple check whether the result is actually smaller.

These are the options:

  1. smaller
  2. same
  3. bigger

Case 1
If (Brotli) compression makes the file smaller, use the smaller (compressed) one.
Result: cache the compressed one

Case 2
If (Brotli) compression neither gains nor loses any bytes, use the uncompressed one, as it takes less power (on the client side) to handle uncompressed files.
Result: cache the UNcompressed one

Case 3
If (Brotli) compression makes the file bigger, use the original (uncompressed) one anyway.
Result: cache the UNcompressed one

I hope Cloudflare will implement this, as it would make every page as fast as possible when it comes to compression.
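The three cases above boil down to a size comparison before caching. A minimal sketch (using Python's stdlib gzip as a stand-in for Brotli, which needs a third-party module; the function name is made up for illustration):

```python
import gzip
import os

def choose_encoding(raw: bytes) -> tuple[str, bytes]:
    """Serve the compressed body only if compression actually shrinks it;
    otherwise serve/cache the original bytes (cases 2 and 3)."""
    compressed = gzip.compress(raw, compresslevel=9)
    if len(compressed) < len(raw):
        return "gzip", compressed   # case 1: smaller -> cache compressed
    return "identity", raw          # case 2/3: same or bigger -> original

# Already-dense data (like a maximally optimized PNG) only gains
# container overhead, so the original wins:
enc_dense, _ = choose_encoding(os.urandom(125))
# Repetitive text compresses well, so the compressed body wins:
enc_text, body_text = choose_encoding(b"<div>hello</div>" * 500)
```

The check costs one extra comparison per file, which is exactly the point of the proposal: the encoder already produced both candidates, so picking the smaller one is nearly free.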


#EDIT:

Another feature would be:
If Brotli (level 9) can't compress with any benefit, do this:

  1. Try gzip (level 9); if no benefit:
  2. Try Brotli (level 11); if no benefit:
  3. Cache the original file!
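That fallback chain amounts to trying several encoders per file and keeping the best result, falling back to the original when nothing wins. A rough sketch (stdlib gzip and raw deflate stand in for the two Brotli levels, which aren't in the stdlib; names are illustrative):

```python
import gzip
import os
import zlib

def best_encoding(raw: bytes) -> tuple[str, bytes]:
    """Try each candidate encoder in order; keep whichever output is
    smallest, defaulting to the original file (step 3 above)."""
    candidates = {
        "gzip": lambda d: gzip.compress(d, compresslevel=9),
        "deflate": lambda d: zlib.compress(d, 9),
    }
    name, body = "identity", raw
    for cand_name, encode in candidates.items():
        out = encode(raw)
        if len(out) < len(body):
            name, body = cand_name, out
    return name, body

# Dense data: no candidate beats the original.
best_dense, _ = best_encoding(os.urandom(125))
# Redundant text: one of the compressors wins.
best_text, text_body = best_encoding(b"lorem ipsum " * 1000)
```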

Your examples are files in the 1-2K range, and it’s less than 100 bytes bigger when compressed. Is this happening on larger files too?

This was one example; I will provide some more.
And yes, this also happens with bigger files (70 KB WOFF2 fonts).

But no matter how big/small the difference is, if it can't compress with a benefit, just use the original file!? Also, it's not just about how small/big the file is, but about the percentage by which things get better/worse.
Comparing more than one compression method (Brotli & gzip) and using the better one on a per-file basis would also be good.

I just made some more tests.
Turns out: if you optimized your images beforehand (JPEG, PNG, WebP), Cloudflare will compress them and add 1-2% to EVERY single file. This is not related to the size they already have but to the level of optimization of the original file.

If you want, I can append some (20-30) screenshots which show this.

On the other hand:
when it comes to compression of text (HTML, JS, CSS, SVG, XML) it performs very well, shrinking files to roughly a third of their unoptimized size.

I prefer optimizing my static assets (images) myself and letting CF optimize the dynamically generated assets (HTML, but for me also CSS and JS).
For me, CSS and JS are also dynamically generated assets (though statically stored on my server), as they change whenever I add more content elements to my site. So I don't have to optimize them; CF does a very good job there, but messes it up with (already optimized) images.

Yes, especially with small files it sometimes loses so badly that it makes them three times as big!? Why do I even optimize them… am I a joke to you? :joy::man_shrugging:

[Screenshot (22)]
125 bytes = 100%
236 bytes = 188.8%

[Screenshot (23)]
43 bytes = 100%
137 bytes = 318.60%

I’m more awake now… Are you seeing a content-encoding header on those files? My CSS and JS files are being compressed, with either gzip or Brotli, and they’re shrinking. Cloudflare doesn’t compress images unless you’re using Mirage or Polish. If you are using one of those two features, there will be a special header, such as “cf-polished: qual=85, origFmt=jpeg, origSize=205167”.

There’s a support article that covers compression and links to another article or two on the subject:


I do not use Mirage or Polish.

I see content-encoding:

Example (favicon.ico)
/favicon.ico
content-encoding: br
Original size: 125 bytes
Transferred: 236 bytes

CF cookie: 52 bytes

125 + 52 = 177 bytes (not 236 bytes!)
Where do these extra 59 bytes (47.2%) come from?

This, by the way, happens with every file.
FYI: the reported values are screenshots from GTmetrix.

A little story:

When I generated my favicon (128x128), it was about 1.4 KB big.
Then I redesigned it with the constraint of using only whole pixels and only 100% transparency or 100% color (even though PNG supports much more), exported it (1.4 KB), and optimized it down to (lossless!!) 125 bytes.

Now CF makes 236 bytes out of it. I know it's just a few bytes, but this happens with every image. CF somehow makes them bigger?
When I load it from my origin server, the file is 125 bytes big.

If you do not use Mirage or Polish, then Cloudflare isn't touching or optimising your images AFAIK. But yes, I tested my site in GTmetrix and it does report compressed files being larger than uncompressed for my CF free plan WordPress blog at https://servermanager.guide/

Check how GTmetrix reports compressed size: if you drill in with the + sign to expand the image and inspect the content-length, it reports the smaller size.

i.e. here 3388 bytes is the uncompressed size, and the compressed response header's reported content-length matches at 3388 bytes

[image]

now to verify use webpagetest.org to test https://www.webpagetest.org/result/191113_PH_232ab4a5522f3a61455ef1154a2319cd/1/details/#waterfall_view_step1

response header content length = 3388 bytes

[image]

Check in the Opera browser network devtools: same 3.3 KB sized image, but 3.5 KB transferred over the network due to headers and cookies attached to the request.

[image]

It seems GTmetrix is reporting compressed transfer size including cookies + headers?

I wrote a guide on using webpagetest.org at https://community.centminmod.com/threads/how-to-use-webpagetest-org-for-page-load-speed-testing.13859/ :slight_smile:

edit: double-checking with an h2load HTTP/2 test for my image file to see the breakdown of data transfers

uncompressed h2load request

h2load https://servermanager.guide/wp-content/uploads/2019/07/servermagaer-feature1.png
starting benchmark...
spawning thread #0: 1 total client(s). 1 total requests
TLS Protocol: TLSv1.2
Cipher: ECDHE-ECDSA-AES128-GCM-SHA256
Server Temp Key: ECDH P-256 256 bits
Application protocol: h2
progress: 100% done

finished in 100.83ms, 9.92 req/s, 37.70KB/s
requests: 1 total, 1 started, 1 done, 1 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 1 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 3.80KB (3893) total, 429B (429) headers (space savings 32.33%), 3.31KB (3388) data
                     min         max         mean         sd        +/- sd
time for request:    51.84ms     51.84ms     51.84ms         0us   100.00%
time for connect:    48.84ms     48.84ms     48.84ms         0us   100.00%
time to 1st byte:   100.64ms    100.64ms    100.64ms         0us   100.00%
req/s           :       9.93        9.93        9.93        0.00   100.00%

particular traffic sent

traffic: 3.80KB (3893) total, 429B (429) headers (space savings 32.33%), 3.31KB (3388) data

versus compressed request

h2load -H 'Accept-Encoding:gzip' https://servermanager.guide/wp-content/uploads/2019/07/servermagaer-feature1.png
starting benchmark...
spawning thread #0: 1 total client(s). 1 total requests
TLS Protocol: TLSv1.2
Cipher: ECDHE-ECDSA-AES128-GCM-SHA256
Server Temp Key: ECDH P-256 256 bits
Application protocol: h2
progress: 100% done

finished in 73.00ms, 13.70 req/s, 61.63KB/s
requests: 1 total, 1 started, 1 done, 1 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 1 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 4.50KB (4607) total, 428B (428) headers (space savings 32.49%), 4.01KB (4103) data
                     min         max         mean         sd        +/- sd
time for request:    41.65ms     41.65ms     41.65ms         0us   100.00%
time for connect:    31.22ms     31.22ms     31.22ms         0us   100.00%
time to 1st byte:    72.81ms     72.81ms     72.81ms         0us   100.00%
req/s           :      13.71       13.71       13.71        0.00   100.00%
traffic: 4.50KB (4607) total, 428B (428) headers (space savings 32.49%), 4.01KB (4103) data

does look larger hmm

  • uncompressed request = traffic: 3.80KB (3893) total, 429B (429) headers (space savings 32.33%), 3.31KB (3388) data
  • compressed request = traffic: 4.50KB (4607) total, 428B (428) headers (space savings 32.49%), 4.01KB (4103) data

@ryan @cloonan curious why the data size is larger at 4103 bytes vs 3388 bytes. Though browsers, GTmetrix and webpagetest shouldn't be making compressed requests for images, so the h2load compressed request wouldn't happen in real life.

So the uncompressed h2load request has 429 bytes of headers + 3388 bytes of data. In theory, compressed headers + cookies + the uncompressed 3388 data bytes would equate to the 3.5 KB transferred over the network that GTmetrix and the Opera browser report.

Note this is the same for network transfer size and GTmetrix compressed vs uncompressed on non-Cloudflare sites too, so it should just be normal network transfer overhead/cookies/headers which GTmetrix is including in its compressed size numbers.


Thanks, that proves Cloudflare makes files bigger by compressing them.

Also, I don't get why we ‘have to’ set cookies on static resources like images/fonts.
Yes, I read https://support.cloudflare.com/hc/en-us/articles/200170156 but I don't need it. Actually, is disabling this only possible on Enterprise?

I would love to use as few cookies as possible for resources like:

.png, .jpg, jpeg, webp, .jp, .woff, .woff2, .ttf, .otf, .eot, .svg

Why do I want this: I get no benefit from setting cookies there. Yes, it seems to be more secure, but for static resources I don't want security over performance.
On .js and .css I do want to set cookies, as they are even smaller with cookies and compression than without.

Also, this thread about EU law ( Disable cfduid cookie for EU law compliance ) states that only cookies which are

“strictly necessary for the delivery of a service requested by the user”

are allowed to be set without asking. Now, since Enterprise plan users can disable it and are still able to run their sites, the cookies can't be “strictly necessary”!! They are necessary because Cloudflare wants them to be and made them necessary for everyone, but in fact they are not.

I would love to spend a PageRule to solve this problem like this:

Match:
domain.tld/*(.jpe?g|.png|.webp|.woff|.woff2)*

Rule:
Cookies: forbidden

But sadly we do not have that option, as Cloudflare does not let us set this.
Admittedly this is another topic, but it makes all static assets bigger than they originally are, so I think it still belongs under performance.

CloudFlare itself says: ( https://support.cloudflare.com/hc/en-us/articles/200170156 )

While some speed recommendations suggest eliminating cookies for static resources, the performance implications are minimal.

I don't think so!
As I do not use Mirage/Polish, the only difference between the original file and the file size transferred over the network is CF cookies! Nothing else. No other cookies are set.

So for me the difference is between 200 bytes and 1.2 KB.
Let's take the middle, 700 bytes, and calculate.
If you have a site, CF makes it bigger by at least

n * 0.7 KB (n stands for the number of requests delivered over CF)

So for 50 requests your site will be 35 KB bigger because of CF.
My site itself is just 125 KB, so those 35 KB are not “just 35 KB”; they are still 28%.
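The arithmetic above can be checked directly. Note the ~700-byte per-request figure is the post's own rough midpoint, not a measured value:

```python
def overhead_share(num_requests: int, per_request_bytes: int = 700,
                   page_bytes: int = 125_000) -> tuple[int, float]:
    """Total extra bytes from per-request cookie/header overhead, and
    the share of the total page weight that overhead represents.
    Defaults mirror the figures assumed in the post above."""
    extra = num_requests * per_request_bytes
    return extra, extra / page_bytes

# 50 requests at ~700 B each against a 125 KB page:
extra_bytes, share = overhead_share(50)
```

With these inputs the overhead comes out to 35,000 bytes, i.e. 28% of the 125 KB page, matching the post's numbers.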

And now think of the example @eva2000 showed. When you use Mirage/Polish the file size will even grow! So in addition to the extra 700 bytes, you now load 70 KB more than necessary.

I know that compression makes the TTFB slower but is faster in the end (because of the smaller file size it has to deliver). But here on CF we have compression which definitely impacts the TTFB yet leads to a bigger file size!? For me this makes no sense.

At the very least, we should all have an option to disable it if we don't want this.

You misread; my last sentence clarifies that the size is just normal network overhead for non-Cloudflare sites too:

Note this is the same for network transfer size and GTmetrix compressed vs uncompressed on non-Cloudflare sites too, so it should just be normal network transfer overhead/cookies/headers which GTmetrix is including in its compressed size numbers.

No, my example is with the CF free plan, so Mirage/Polish is not enabled.

My examples show that no one is requesting compressed image files: the GTmetrix compressed value means compressed headers/cookies (via HTTP/2 HPACK compression) + the uncompressed image file size, AFAIK. To be sure, you can ask the GTmetrix folks.

No, there are also header sizes in that network transfer.

So how can we minimize it?

Sorry, I misunderstood this.

OK, I see, but how do we solve this? I mean, cookies could be disabled and headers should be minimized. But how do you actually do this with CF?
GTmetrix clearly recommends serving static resources without cookies.

Headers are needed in HTTP requests to be able to serve them and to dictate parameters for browsers to render your assets properly. There isn't much to do other than ensuring only the basic headers are added and that you don't add any extra headers yourself.

using my above blog image again

inspecting the response headers for the image using a custom curl binary I built for HTTP/3 support

test HTTP/2 request’s response headers

curl-http3 --http2 -I https://servermanager.guide/wp-content/uploads/2019/07/servermagaer-feature1.png                                      
HTTP/2 200 
date: Thu, 14 Nov 2019 02:38:38 GMT
content-type: image/webp
content-length: 3388
set-cookie: __cfduid=df1aad8e22c50eef41c7ab06be6c43d361573699118; expires=Fri, 13-Nov-20 02:38:38 GMT; path=/; domain=.servermanager.guide; HttpOnly; Secure
cf-ray: 53559c424f40ea96-IAD
cf-cache-status: HIT
cache-control: public, max-age=2592000, no-transform
accept-ranges: bytes
age: 27738
etag: "5d317853-d3c"
expires: Fri, 13 Dec 2019 18:56:20 GMT
last-modified: Fri, 19 Jul 2019 07:59:15 GMT
vary: Accept-Encoding
expect-ct: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
x-powered-by: centminmod
alt-svc: h3-23=":443"; ma=86400
server: cloudflare

HTTP/2 requests broken down by timings and header, request and download size

curl-http3 --http2 -w "@curl-latency.ini" -o /dev/null -s https://servermanager.guide/wp-content/uploads/2019/07/servermagaer-feature1.png   
HTTP Version:                     HTTP/2
Header Size:                      708
Request Size:                     137
Download Size:                    3388
DNS Lookup (time_namelookup):     0.014420
TCP Connect (time_connect):       0.028101
SSL Handshake (time_appconnect):  0.097012
Pre-Transfer (time_pretransfer):  0.097099
Redirect Time (time_redirect):    0.000000
TTFB (time_starttransfer):        0.188326
Time Total:                       0.188476

An HTTP/3 (QUIC over UDP) curl request has better header compression via QPACK, though curl doesn't have measurements for connect/handshake time yet: https://github.com/curl/curl/issues/4516

curl-http3 --http3 -w "@curl-latency.ini" -o /dev/null -s https://servermanager.guide/wp-content/uploads/2019/07/servermagaer-feature1.png 
HTTP Version:                     HTTP/3
Header Size:                      689
Request Size:                     137
Download Size:                    3388
DNS Lookup (time_namelookup):     0.024538
TCP Connect (time_connect):       0.000000
SSL Handshake (time_appconnect):  0.000000
Pre-Transfer (time_pretransfer):  0.051446
Redirect Time (time_redirect):    0.000000
TTFB (time_starttransfer):        0.104541
Time Total:                       0.104598

FYI, Cloudflare uses HTTP/2 with full HPACK header encoding/compression, which basically means the more requests you make, the more savings you get on the headers served, as the full headers don't need to be sent again on subsequent requests.

Example below using nghttp's h2load HTTP/2 tool, which can inspect header space savings and also the traffic sent for headers + data:

url=https://servermanager.guide/wp-content/uploads/2019/07/servermagaer-feature1.png

for i in $(seq 1 10); do echo "h2load run $i"; h2load $url -n $i | tail -6 | head -1; done 
h2load run 1
traffic: 3.80KB (3894) total, 430B (430) headers (space savings 32.28%), 3.31KB (3388) data
h2load run 2
traffic: 8.57KB (8773) total, 464B (464) headers (space savings 63.46%), 8.01KB (8206) data
h2load run 3
traffic: 10.54KB (10794) total, 500B (500) headers (space savings 73.75%), 9.93KB (10164) data
h2load run 4
traffic: 13.91KB (14244) total, 535B (535) headers (space savings 78.94%), 13.23KB (13552) data
h2load run 5
traffic: 20.77KB (21268) total, 569B (569) headers (space savings 82.08%), 20.03KB (20515) data
h2load run 6
traffic: 20.65KB (21144) total, 605B (605) headers (space savings 84.12%), 19.85KB (20328) data
h2load run 7
traffic: 24.02KB (24594) total, 640B (640) headers (space savings 85.60%), 23.16KB (23716) data
h2load run 8
traffic: 32.97KB (33763) total, 674B (674) headers (space savings 86.73%), 32.05KB (32824) data
h2load run 9
traffic: 37.14KB (38034) total, 815B (815) headers (space savings 85.74%), 36.06KB (36927) data
h2load run 10
traffic: 34.13KB (34944) total, 745B (745) headers (space savings 88.27%), 33.09KB (33880) data

You can see that after 10 requests for the image file, the header space savings due to HTTP/2 HPACK encoding compression are at 88.27%.
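The pattern in those h2load runs can be approximated with a toy model of HPACK's dynamic table. This is a deliberate simplification, not the real encoder: it assumes the first request sends each header field in full, and every repeat costs roughly one index byte per field.

```python
def hpack_savings_model(header_bytes: int, num_fields: int,
                        num_requests: int) -> float:
    """Approximate cumulative header space savings over a connection:
    request 1 pays the full header block; later requests reference the
    HPACK dynamic table at ~1 byte per indexed field."""
    sent = header_bytes + (num_requests - 1) * num_fields
    uncompressed = num_requests * header_bytes
    return 1 - sent / uncompressed

# With ~430 bytes of response headers across ~15 fields (roughly the
# h2load figures above), savings climb quickly with request count:
single = hpack_savings_model(430, 15, 1)    # no savings on request 1
tenth = hpack_savings_model(430, 15, 10)    # large cumulative savings
```

The model reproduces the qualitative behavior from the h2load output: savings start at zero for a single request and approach the per-field index cost as the request count grows.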

Thanks for that! I have already disabled all headers I could disable: no webserver headers, no panel header, no X-Powered-By and so on.

HTTP/3 is enabled in Cloudflare (and on my server) as well. But as most test tools do not use Chrome Canary with --enable-quic --quic-version=h3-23, there is no benefit (in test tools) yet.

So HTTP/3 makes the header size smaller. Didn't know that.

May I ask why the DNS lookup with HTTP/3 took longer? About twice as long.

Is there an option to prevent CF from adding so many headers/cookies? Or are only the necessary ones already set?

Does this apply to a normal website call? I have 17 requests, all of them served from Cloudflare, but they have different headers.
Or does it only apply if you request the same file 10 times?

To read up on HTTP/2 HPACK encoding https://blog.cloudflare.com/hpack-the-silent-killer-feature-of-http-2/

The more requests are being processed, the bigger the dynamic table becomes, and more headers can be matched, leading to increased compression ratio.

It can vary between runs of the command.

example

curl-http3 --http3 -w "@curl-latency.ini" -o /dev/null -s https://servermanager.guide/wp-content/uploads/2019/07/servermagaer-feature1.png
HTTP Version:                     HTTP/3
Header Size:                      689
Request Size:                     137
Download Size:                    3388
DNS Lookup (time_namelookup):     0.014098
TCP Connect (time_connect):       0.000000
SSL Handshake (time_appconnect):  0.000000
Pre-Transfer (time_pretransfer):  0.036634
Redirect Time (time_redirect):    0.000000
TTFB (time_starttransfer):        0.085743
Time Total:                       0.085799

Just to clarify though: HTTP/2 HPACK header savings only apply on the Cloudflare network side. Visitors still get the full headers counted in the "over network" transfers reported by the web browser. That is just how the internet and HTTP requests work, with network overhead.