This code implements the DARTS paper in TensorFlow:
DARTS: Differentiable Architecture Search
Hanxiao Liu, Karen Simonyan, Yiming Yang.
arXiv:1806.09055.
To carry out architecture search using the 2nd-order approximation, run:
python cnn/train_search.py
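For reference, the core of the search is DARTS's continuous relaxation: every edge in a cell computes a softmax-weighted mixture of all candidate operations, which makes the architecture differentiable. Below is a minimal, hypothetical sketch of such a mixed operation; the function name and signature are assumptions for illustration, not the repository's actual API:

```python
import tensorflow as tf

def mixed_op(x, candidate_ops, alpha_edge):
    """Softmax-weighted sum over candidate ops (DARTS's continuous relaxation).

    x: input tensor; candidate_ops: list of callables (convs, pooling, ...);
    alpha_edge: trainable logits of shape [len(candidate_ops)] for this edge.
    """
    weights = tf.nn.softmax(alpha_edge)  # continuous architecture weights
    return tf.add_n([w * op(x)
                     for w, op in zip(tf.unstack(weights), candidate_ops)])
```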
NOTE:
- Only the CIFAR-10 experiments are implemented
- The batch size is set to 16 (64 in the PyTorch version) to avoid OOM on a single 1080 Ti GPU
- Only the 2nd-order approximation is implemented, but the code can easily be modified to use the 1st-order one (see the sketch after this list)
- The PATH DROP operation is not implemented in the evaluation code, since its motivation is unclear
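As a rough illustration of what the 2nd-order update computes, here is a TF2-style sketch of the architecture gradient from the paper: take a virtual SGD step on the weights, evaluate the validation gradient at the resulting point, and correct it with a finite-difference Hessian-vector product. It assumes a purely functional `loss_fn(batch, weights, arch_params)`; the actual repository code (written against the TF1-era API) is organized differently, so treat this as a sketch that happens to run, not the implementation:

```python
import tensorflow as tf

def arch_grads_2nd_order(loss_fn, w, alpha, train_batch, valid_batch,
                         xi=0.025, eps_scale=0.01):
    """Gradient of L_val(w', alpha) w.r.t. alpha, with w' = w - xi * dL_train/dw."""
    # 1) Virtual step on the weights (xi = weight learning rate in the paper).
    with tf.GradientTape() as tape:
        l_train = loss_fn(train_batch, w, alpha)
    g_w = tape.gradient(l_train, w)
    w_prime = [wi - xi * gi for wi, gi in zip(w, g_w)]

    # 2) Validation gradients at (w', alpha).
    with tf.GradientTape(persistent=True) as tape:
        tape.watch(w_prime)
        l_val = loss_fn(valid_batch, w_prime, alpha)
    g_alpha = tape.gradient(l_val, alpha)  # explicit term
    v = tape.gradient(l_val, w_prime)      # direction for the Hessian-vector product
    del tape

    # 3) Finite-difference Hessian-vector product, with eps = 0.01 / ||v||.
    eps = eps_scale / tf.linalg.global_norm(v)

    def grad_alpha_at(ws):
        with tf.GradientTape() as t:
            l = loss_fn(train_batch, ws, alpha)
        return t.gradient(l, alpha)

    g_plus = grad_alpha_at([wi + eps * vi for wi, vi in zip(w, v)])
    g_minus = grad_alpha_at([wi - eps * vi for wi, vi in zip(w, v)])
    hvp = [(gp - gm) / (2.0 * eps) for gp, gm in zip(g_plus, g_minus)]

    # grad_alpha L_val(w', alpha) - xi * HVP; dropping the HVP term
    # recovers the 1st-order variant mentioned in the note above.
    return [ga - xi * h for ga, h in zip(g_alpha, hvp)]
```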
The details of the training process are shown below.
Fig. The train loss and valid loss during the search process
The train loss decreases steadily during the search, which is more stable than RL-based methods. Note that the valid loss refers to the 2nd-order loss for the architecture parameters.
Fig. The final normal cell
Fig. The final reduction cell
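The discrete cells shown above are read off the learned architecture weights. The sketch below shows the standard DARTS derivation rule: for each intermediate node, keep the two strongest incoming edges, each labeled with its best non-'none' operation. The helper name and the alpha layout are assumptions for illustration, though the primitive list follows the original repo:

```python
import numpy as np

PRIMITIVES = ['none', 'max_pool_3x3', 'avg_pool_3x3', 'skip_connect',
              'sep_conv_3x3', 'sep_conv_5x5', 'dil_conv_3x3', 'dil_conv_5x5']

def derive_cell(alpha, steps=4):
    """alpha: softmaxed weights of shape [num_edges, num_ops], where node i
    has i + 2 incoming edges (DARTS edge ordering). Returns (op, input) pairs."""
    gene, start = [], 0
    for i in range(steps):
        n_in = i + 2
        w = alpha[start:start + n_in]  # rows for the edges into node i
        # Strength of an edge = weight of its best non-'none' operation.
        best = [max((w[j][k], k) for k in range(len(PRIMITIVES))
                    if PRIMITIVES[k] != 'none') for j in range(n_in)]
        top2 = sorted(range(n_in), key=lambda j: -best[j][0])[:2]  # two strongest edges
        gene += [(PRIMITIVES[best[j][1]], j) for j in top2]
        start += n_in
    return gene
```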
The test error finally decreases to around 5.8% after 415 epochs of training, while the best result of the PyTorch version is 2.76% after 600 epochs. Training for more epochs can narrow this performance gap. Besides that, repeating the architecture search with different random seeds and choosing the best structure can help avoid local minima.
Fig. The test accuracy of the searched architecture
- This implementation is based on the original PyTorch implementation quark0/darts