By Zilong Huang, Xinggang Wang, Jiasi Wang, Wenyu Liu and Jingdong Wang.
This code is an implementation of the weakly-supervised semantic segmentation experiments in the DSRG paper. It is developed on top of the Caffe framework.
Overview of the proposed approach. The Deep Seeded Region Growing (DSRG) module takes the seed cues and the segmentation map as input and produces latent pixel-wise supervision that is more accurate and more complete than the seed cues. Our method iterates between refining the pixel-wise supervision and optimizing the parameters of the segmentation network.
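To make the region-growing step concrete, here is a minimal NumPy-only sketch of the idea. The function name grow_regions, the single threshold theta and the 4-connected neighbourhood are illustrative assumptions; this is not the repository's Caffe implementation, which runs as a python layer inside the training loop.

```python
import numpy as np
from collections import deque

def grow_regions(seed_labels, probs, theta=0.85):
    """Grow class regions outwards from seed pixels using network probabilities.

    seed_labels: (H, W) int array; class index at seed pixels, -1 elsewhere.
    probs:       (C, H, W) softmax output of the segmentation network.
    theta:       confidence threshold for absorbing a neighbouring pixel.
    Returns an (H, W) label map; pixels left at -1 stay unlabelled and would be
    ignored by the segmentation loss.
    """
    H, W = seed_labels.shape
    grown = seed_labels.copy()
    frontier = deque(zip(*np.nonzero(grown >= 0)))        # start from every seed pixel
    while frontier:
        y, x = frontier.popleft()
        c = grown[y, x]
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # 4-connected neighbours
            ny, nx = y + dy, x + dx
            if 0 <= ny < H and 0 <= nx < W and grown[ny, nx] < 0 \
                    and probs[c, ny, nx] > theta:
                grown[ny, nx] = c                          # pixel joins the region of class c
                frontier.append((ny, nx))
    return grown
```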
DSRG is released under the MIT License (refer to the LICENSE file for details).
If you find DSRG useful in your research, please consider citing:
@inproceedings{huang2018dsrg,
  title={Weakly-Supervised Semantic Segmentation Network with Deep Seeded Region Growing},
  author={Huang, Zilong and Wang, Xinggang and Wang, Jiasi and Liu, Wenyu and Wang, Jingdong},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={7014--7023},
  year={2018}
}
- Python packages:
$ pip install -r python-dependencies.txt
- caffe (deeplabv2 version): installation instructions for the deeplabv2 caffe are available at https://bitbucket.org/aquariusjay/deeplab-public-ver2. Note that you need to compile caffe with the python wrapper and with support for python layers, and then add the caffe python path to training/tools/findcaffe.py (a minimal sketch of this edit follows this dependency list).
- Fully connected CRF wrapper (requires the Eigen3 package):
$ pip install CRF/
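The edit to training/tools/findcaffe.py usually amounts to putting the python directory of your compiled caffe on sys.path before caffe is imported. The sketch below is illustrative only; caffe_root is an assumed location and must be replaced with the path of your own deeplabv2 caffe build.

```python
# training/tools/findcaffe.py -- illustrative sketch; only the path needs editing.
import sys
import os.path as osp

caffe_root = osp.expanduser('~/deeplab-public-ver2')   # assumed location of your caffe build

# Make the compiled caffe python bindings importable before `import caffe`.
sys.path.insert(0, osp.join(caffe_root, 'python'))

import caffe  # noqa: E402  (fails here if caffe_root is wrong)
```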
- Go into the training directory:
$ cd training
$ mkdir localization_cues
- Download the initial VGG16 model pretrained on ImageNet and put it in the training/ folder.
- Download the CAM seed and put it in the training/localization_cues folder. We use CAM for localizing the foreground seed classes and the saliency detection method DRFI for localizing the background seed. We provide a python interface to DRFI here for convenience if you want to generate the seeds yourself (a rough sketch of the thresholding idea is given after the training steps).
$ cd training/experiment/seed_mc
$ mkdir models
- Set the root_folder parameter in train-s.prototxt and train-f.prototxt, and the PASCAL_DIR variable in run-s.sh, to the directory containing the PASCAL VOC 2012 images.
- Run:
$ bash run.sh
The trained model will be created in the models folder.
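If you prefer to build the localization cues yourself rather than download them, the sketch below shows the kind of thresholding the CAM/DRFI combination boils down to: confident CAM responses become foreground seeds and low-saliency pixels become background seeds. The function name make_seed_cues, the fg_thresh/bg_thresh values and the 255 ignore label are assumptions for illustration, not the exact recipe behind the released cues.

```python
import numpy as np

def make_seed_cues(cams, saliency, fg_thresh=0.3, bg_thresh=0.06):
    """Combine class activation maps and a saliency map into seed cues.

    cams:     (C, H, W) CAMs for the image-level classes, each scaled to [0, 1].
    saliency: (H, W) saliency map in [0, 1], e.g. produced by DRFI.
    Returns an (H, W) uint8 map: 0 = background seed, c = seed of foreground
    class c (1-based), 255 = unlabelled pixels ignored during training.
    """
    _, H, W = cams.shape
    cues = np.full((H, W), 255, dtype=np.uint8)
    cues[saliency < bg_thresh] = 0             # very low saliency -> background seed
    best = cams.argmax(axis=0)                 # strongest class at each pixel
    confident = cams.max(axis=0) > fg_thresh   # pixels with a confident CAM response
    cues[confident] = best[confident] + 1      # foreground seeds, 1-based class ids
    return cues
```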
This code borrows heavily from SEC.