This summary records compression methods that eliminate the non-differentiable compression operation (e.g., quantization or hard pruning) during training, typically by replacing it with a differentiable surrogate; a minimal sketch of one such relaxation follows the list.
- Relaxed Quantization for Discretized Neural Networks
- Learning Sparse Neural Networks through L0 Regularization
- ProxQuant: Quantized Neural Networks via Proximal Operators
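
Below is a minimal sketch, assuming PyTorch, of the hard-concrete gate used in the L0-regularization paper: a stochastic gate sampled via the reparameterization trick so that exact zeros can occur while gradients still flow to the gate parameters. The constants, function names, and the toy usage at the end are illustrative, not taken from any reference implementation.

```python
import math
import torch

# Illustrative stretch/temperature constants (gamma < 0 < 1 < zeta) so the
# clamped sample can land exactly on 0 or 1 with nonzero probability.
GAMMA, ZETA, BETA = -0.1, 1.1, 2.0 / 3.0


def hard_concrete_gate(log_alpha: torch.Tensor) -> torch.Tensor:
    """Sample a differentiable gate in [0, 1] via the reparameterization trick."""
    u = torch.rand_like(log_alpha)
    s = torch.sigmoid((torch.log(u) - torch.log(1 - u) + log_alpha) / BETA)
    s_bar = s * (ZETA - GAMMA) + GAMMA      # stretch to (gamma, zeta)
    return torch.clamp(s_bar, 0.0, 1.0)     # hard-rectify to [0, 1]


def expected_l0(log_alpha: torch.Tensor) -> torch.Tensor:
    """Expected number of active gates; a differentiable stand-in for the L0 count."""
    return torch.sigmoid(log_alpha - BETA * math.log(-GAMMA / ZETA)).sum()


# Toy usage: mask weights with sampled gates and penalize the expected L0 norm.
log_alpha = torch.zeros(256, requires_grad=True)   # one gate parameter per weight
weights = torch.randn(256, requires_grad=True)
masked = weights * hard_concrete_gate(log_alpha)
loss = masked.pow(2).mean() + 1e-3 * expected_l0(log_alpha)  # dummy task loss + L0 penalty
loss.backward()                                    # gradients reach log_alpha directly
```

Because the gate is a smooth function of `log_alpha` and uniform noise, the sparsity penalty is optimized with ordinary backpropagation, with no straight-through estimator needed.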