Compress toolchains in parallel fashion via gzp crate #127
Conversation
By default it seems that …
Here's a breakdown:
We measure dist-test runs because for each … It might also be useful to measure size differences to further compare compression levels 3 and 6 at some point, but it's clear that the savings are there for the same compression level of 6. I've read that, due to the nature of the DEFLATE format, parallel decompression doesn't scale with the number of threads as nicely as compression does, so I've only touched the compression code; I'd need to investigate further whether decompressing in parallel really won't give us a meaningful speedup. @sylvestre would you be interested in this patch for mozilla/sccache?
@Xanewok why not? :)
One thing that immediately comes to mind is thread saturation - by default it uses …
Since the entire compilation will block on it, I'd recommend using the full CPU set.
Would it be possible to add a test to verify that it works accordingly?
Maybe it's overkill, but perhaps an argument in sccache could be added to control this?
If the toolchain is not packaged and compressed correctly, the existing dist test suite will fail, because we'll fall back to local compilation (something that's already checked).
Maybe conditional compilation/a feature flag would be better in this case? So if …
This might be the path to go with.