Clustering Convolutional Kernels to Compress Deep Neural Networks

This repository is the official PyTorch implementation of the paper "Clustering Convolutional Kernels to Compress Deep Neural Networks" from ECCV 2018 (poster).

  • Note: We use the term 'kernels' to refer to the spatial convolutional kernels in CNNs, which usually have 3x3 elements.

Our method can compress the 1.2M 3x3 kernels of ResNet-18 into 64 representative centroids. The compressed model achieves 11.33% (+0.43%) Top-1 error on the ImageNet classification task.

You can see what those centroids look like:

The most common centroid, which replaces over 60k kernels, appears at the top left. The least frequent one, which replaces about 5k kernels, appears at the bottom right.
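
To see roughly where the savings come from, here is a back-of-envelope Python calculation. It is illustrative only: it covers just the 3x3 spatial kernels and ignores the scaling factors, non-3x3 layers, and other details accounted for in the paper.

num_kernels = 1200000      # ~1.2M 3x3 kernels in ResNet-18, as stated above
num_centroids = 64         # representative centroids after clustering

bits_original = num_kernels * 9 * 32    # nine float32 weights per kernel
bits_indices = num_kernels * 6          # one 6-bit index per kernel (2^6 = 64)
bits_codebook = num_centroids * 9 * 32  # the shared centroid codebook

ratio = bits_original / (bits_indices + bits_codebook)
print('spatial-kernel storage reduced by about %.0fx' % ratio)   # roughly 48x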

Reference

If you find our work useful in your research or publication, please cite it:

[1] Sanghyun Son, Seungjun Nah, Kyoung Mu Lee, "Clustering Convolutional Kernels to Compress Deep Neural Networks," In ECCV 2018. [PDF] [Poster]

@inproceedings{son2018clustering,
  author = {Son, Sanghyun and Nah, Seungjun and Lee, Kyoung Mu},
  title = {Clustering Convolutional Kernels to Compress Deep Neural Networks},
  booktitle = {ECCV},
  month = {September},
  year = {2018}
}

We provide scripts to reproduce every experiment from our paper. With some additional coding, you can compress your pre-trained model, too.

Currently, some scripts are being revised. They will be added to demo.sh after revision.

What we provide

  • Code for training baseline/compressed models
  • Centroid visualization
  • GPU-accelerated, transform-invariant k-means clustering algorithm (see the sketch after this list)
  • Unofficial implementations of existing network quantization methods
  • Functions to save/load compressed models (currently disabled due to a compatibility issue; will be re-enabled in the future)
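
Below is a minimal PyTorch sketch of the idea behind transform-invariant k-means over 3x3 kernels. It is not the code in this repository: it only considers horizontal/vertical flips as transforms, uses a naive (memory-hungry) distance computation and centroid update, and the names transform_variants and cluster_kernels are illustrative.

import torch

def transform_variants(kernels):
    # kernels: (N, 3, 3). Returns (N, 4, 9): identity, h-flip, v-flip, and both,
    # each flattened to 9 values. The actual method supports more transforms.
    variants = [
        kernels,
        torch.flip(kernels, dims=[2]),
        torch.flip(kernels, dims=[1]),
        torch.flip(kernels, dims=[1, 2]),
    ]
    return torch.stack(variants, dim=1).reshape(kernels.size(0), 4, 9)

def cluster_kernels(kernels, k=64, iters=20):
    # Assign every kernel to the centroid that is closest under its best transform,
    # then update each centroid as the mean of the kernels assigned to it.
    n = kernels.size(0)
    variants = transform_variants(kernels)                             # (N, 4, 9)
    centroids = kernels.reshape(n, 9)[torch.randperm(n)[:k]].clone()   # (K, 9)
    for _ in range(iters):
        # Squared distance between every (kernel, transform) pair and every centroid.
        # This dense (N, 4, K) tensor is fine for a sketch but too large for all
        # 1.2M kernels at once; the repository batches this on the GPU.
        diff = variants.unsqueeze(2) - centroids.view(1, 1, k, 9)
        dist = (diff ** 2).sum(dim=3)                                  # (N, 4, K)
        best = dist.reshape(n, -1).argmin(dim=1)      # flat index over (transform, centroid)
        t_idx, c_idx = best // k, best % k
        aligned = variants[torch.arange(n), t_idx]    # each kernel in its best orientation
        for j in range(k):
            mask = c_idx == j
            if mask.any():
                centroids[j] = aligned[mask].mean(dim=0)
    return centroids.reshape(k, 3, 3), c_idx, t_idx

With clustering like this applied to all 3x3 kernels of a pre-trained network, each kernel can then be represented by a centroid index (together with the transform that aligned it) instead of its nine individual weights.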

What we do not provide

  • CUDA kernels to accelerate the proposed algorithm (we only report the theoretical speed-up in the paper)

Dependencies

  • Python 3.6
  • PyTorch >= 0.4.1
  • numpy
  • matplotlib
  • tqdm

Quick start

Clone this repository wherever you want.

git clone https://github.com/thstkdgus35/clustering-kernels
cd clustering-kernels/src

If you would like to run experiments on ImageNet classification, follow this link to prepare the dataset. Your dataset directory should look like this:

[some_path]/ILSVRC2012/train/[1000_many_folders]
[some_path]/ILSVRC2012/val/[1000_many_folders]

Then, specify the directory with --dir_data [some_path]. You can also change the default argument in options.py directly.

We recommend downloading our pre-trained models (with --pretrained download), but you can also train them from scratch. In that case, remove --pretrained download --test_only from demo.sh and specify a save directory with --save [directory_you_want].

After preparing all the prerequisites, uncomment the line you want to execute in demo.sh and run sh demo.sh!
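
For reference, a line in demo.sh follows this general shape. This is a hypothetical example only: the entry-point script name (main.py) and any model-specific arguments are guesses, so copy the real line from demo.sh rather than this one; only --dir_data, --pretrained download, --test_only, and --save come from this README.

# Evaluate a downloaded pre-trained model (assumed entry point: main.py)
python main.py --dir_data [some_path] --pretrained download --test_only

# Train the same model from scratch instead
python main.py --dir_data [some_path] --save [directory_you_want]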
