Blocklists are sent uncompressed by Cloudflare reverse proxy #570
Hm, now that I recall... it got tricky to serve [...]. Either way, try [...]. On my PC, a [...]
Btw, there's no server serving these downloads: a Cloudflare Worker streams the blob from AWS CloudFront, backed by Amazon S3 (soon moving to Cloudflare R2).
Works for me; I see brotli compression (at the protocol level). If changing the mimetype doesn't cause any issues, I say go for it.
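For illustration only: if the objects are managed with the aws CLI (an assumption - the actual upload pipeline isn't shown in this thread), switching the stored mimetype to one that intermediaries generally treat as compressible could look roughly like this; the bucket and key names are placeholders:

# Placeholder bucket/key names; text/plain is only an example of a commonly
# compressible mimetype.
aws s3 cp ./trie s3://example-blocklists/trie --content-type text/plain

# Or rewrite the metadata on an already-uploaded object in place:
aws s3api copy-object --bucket example-blocklists --key trie \
  --copy-source example-blocklists/trie \
  --content-type text/plain --metadata-directive REPLACE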
This "on the fly" compression always gets a worse ratio than statically compressing every time, because you have to worry about compression time. Nevertheless, download size is reduced from 63.4 MB to 32.3 MB, which is almost 50%. I'd say that's substantial savings for those on metered connections. (Here in the US, 60 MB would cost me $0.60 over LTE, making it cost prohibitive to download regular blocklist updates - say, every 24 hours.) Statically, you can do much better:
So brotli, for example, goes from 32M to 24M - a pretty huge win for static compression. All of them (except xz, included only for reference) decompress quite fast; zstd is the fastest by a fair margin.
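As a sketch only (blocklist.bin is a placeholder for the downloaded trie file, not a name from this thread), a comparison like the one above can be reproduced with the stock CLI tools:

f=blocklist.bin            # stand-in for the ~60 MB trie file

gzip   -k -9    "$f"       # -> blocklist.bin.gz
brotli -k -q 11 "$f"       # -> blocklist.bin.br
zstd   -k -19   "$f"       # -> blocklist.bin.zst
xz     -k -9    "$f"       # -> blocklist.bin.xz (reference only)

ls -l "$f"*                # compare the resulting sizes
time zstd -d -c "$f.zst" > /dev/null   # decompression speed is what clients pay for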
For some reason, the download size remains the same for me. There's no change... How are you testing this? I tried looking up data transfer in Firefox's Network console and with [...]
We'd have to uncompress these files on a myriad of Androids... which is what worries me (even if the worry is misplaced).
I think you are reading the outputs incorrectly. Testing in Firefox, I get a compressed file. My internet is only 100 Mbps, so that's slow enough to see clearly what's happening. When the file is compressed on the fly, the [...]. You mention [...]. My preferred testing method is [...]
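The exact commands from this exchange weren't preserved. As one possible way to measure what actually crosses the wire (not necessarily the method meant above): when an Accept-Encoding header is set manually, curl leaves the body encoded, so its size_download counter reports the compressed transfer size:

url='https://download.rethinkdns.com/trie?compressed=true'

# No Accept-Encoding: the full ~60 MB body is transferred.
curl -s -o /dev/null -w 'identity: %{size_download} bytes\n' "$url"

# Manually advertised encodings: the body stays encoded, so this prints the
# number of compressed bytes Cloudflare actually sent.
curl -s -H 'Accept-Encoding: br, gzip' -o /dev/null -w 'encoded:  %{size_download} bytes\n' "$url"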
If you enable protocol-level compression ("content encoding"), you pay that price anyway, as the HTTP library on the client is doing the decompression for you. Zstd supports streaming decompression, so you can just decompress the bytes as they're downloaded over the wire. If anything, I'd expect it to be faster than HTTP with gzip streaming compression across basically all devices.
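As a minimal sketch of that streaming path (the .zst URL is hypothetical - no such artifact is published today), the decompressor can sit directly on the download pipe so decoding overlaps with the transfer:

# Hypothetical pre-compressed artifact; zstd decodes the stream as bytes arrive.
curl -s 'https://download.rethinkdns.com/trie.zst' | zstd -d -o trie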
100 Mbps ought to be enough for anybody ;)
Yep. I can't remember why, but I believe that absence of [...]. Thanks. I see the compressed output with:
wget --compression=auto "https://download.rethinkdns.com/trie?compressed=true" -a /tmp/wget.log && less /tmp/wget.log
--2022-09-22 01:42:07--  https://download.rethinkdns.com/trie?compressed
HTTP request sent, awaiting response... 200 OK
Length: 63409270 (60M) [application/wasm]
Saving to: 'trie?compressed'
...
...
...
31750K .......... .......... .......... .... 10.8M=8.0s
2022-09-22 21:45:01 (3.87 MB/s) - 'trie?compressed' saved [63409270]
The problem is the native [...]. I'll switch to downloading from the [...]
Part of #573. Blocklists will be downloaded from [...]
Following up on an issue I originally pointed out here: #564 (comment)
The lack of compression means users on metered connections have to download more than twice as much data as would otherwise be required for blocklists.
Test:
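The exact command used for the test wasn't preserved in this copy of the issue; a check along these lines shows both the content-type and whether any content-encoding is applied when compression is offered:

# -I sends a HEAD request; if the endpoint doesn't answer HEAD, dump headers
# from a GET instead with:  curl -s -D - -o /dev/null ...
curl -sI -H 'Accept-Encoding: br, gzip' \
  'https://download.rethinkdns.com/trie?compressed=true' | grep -i '^content-'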
Copying the blocklist file to a domain I control with gzip enabled in the Nginx config shows that the command works correctly.
The issue seems to be Cloudflare's process for deciding which files to compress, which relies on mimetype. Since the blocklist file (correctly) doesn't have any mimetype indicating compressibility, Cloudflare will decompress the file sent from the origin server and send it to the client uncompressed.
Some possible workarounds:
- If all RethinkDNS versions and platforms support HTTP compression (likely), then you can set cache-control: no-transform on your origin server, and Cloudflare should cache the gzipped file unaltered. You might also look at something like gzip_static for Nginx, which would allow you to get the best compression ratios with e.g. Zopfli (see the sketch below).
- You could rely on decompression in the client application rather than at the protocol level. You might consider using something like zstd, which would give better compression ratios than gzip and might actually be faster than doing it at the protocol level, thanks to zstd's extremely fast decompression (also sketched below).
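A rough sketch of the pre-compression step behind either option; file names are placeholders, and the Nginx side (a build with the gzip_static module and "gzip_static on;" in the config) is assumed rather than shown:

# Option 1: build the best possible gzip stream once with Zopfli; with
# gzip_static on, Nginx serves blocklist.bin.gz as-is with Content-Encoding: gzip.
zopfli blocklist.bin                           # writes blocklist.bin.gz

# Option 2: publish a zstd artifact and let the app decompress it itself.
zstd -19 -k blocklist.bin -o blocklist.bin.zst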