PyTorch implementation of Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning with DDP (DistributedDataParallel) and Apex Amp (Automatic Mixed Precision).
Requirements:

- python>=3.6.9
- pytorch>=1.4.0
- opencv-python==4.2.0.34
- pyyaml==5.3.1
- apex
This repo assumes training is launched with `torch.distributed.launch`, for example:

    python -m torch.distributed.launch --nproc_per_node=2 --nnodes=2 --node_rank=0 --master_addr="" --master_port=12345 byol_main.py
There is a fair amount of redundant code for loading/saving checkpoints and log files on OSS (object storage); you can simplify it to use local storage instead.
- Use `apex` or `pytorch>=1.4.0` for `SyncBatchNorm`
- Pay attention to the data augmentations, which are slightly different from those in SimCLR, especially the probabilities of applying `GaussianBlur` and `Solarization` to the two views (see Table 6 of the paper, and the augmentation sketch after this list)
- In both training and evaluation, normalize the color channels by subtracting the average color and dividing by the standard deviation (both computed on ImageNet) after applying the augmentations, even with the specially designed augmentations
- Increase the target network momentum factor with a cosine rule (see the schedule sketch below)
- Exclude `biases` and `batch normalization` parameters from both LARS adaptation and weight decay (see the parameter-group sketch below)
- The correct order for model wrapping is `convert_syncbn` -> `cuda` -> `amp.initialize` -> `DDP` (see the wrapping sketch below)
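For reference, here is a minimal sketch of the two view-augmentation pipelines, using torchvision plus simple PIL-based `GaussianBlur` / `Solarization` stand-ins (this repo's own augmentation classes may differ); the crop size, jitter strengths, and per-view blur/solarization probabilities follow Table 6 of the paper, with ImageNet normalization applied last:

```python
import random
from PIL import ImageFilter, ImageOps
from torchvision import transforms

class GaussianBlur:
    """Blur with a sigma sampled uniformly from [0.1, 2.0]."""
    def __call__(self, img):
        return img.filter(ImageFilter.GaussianBlur(radius=random.uniform(0.1, 2.0)))

class Solarization:
    """Invert pixel values above the default threshold (128)."""
    def __call__(self, img):
        return ImageOps.solarize(img)

# ImageNet channel statistics, applied after all other augmentations.
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])

def byol_view(blur_p, solarize_p):
    return transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.RandomHorizontalFlip(p=0.5),
        transforms.RandomApply([transforms.ColorJitter(0.4, 0.4, 0.2, 0.1)], p=0.8),
        transforms.RandomGrayscale(p=0.2),
        transforms.RandomApply([GaussianBlur()], p=blur_p),
        transforms.RandomApply([Solarization()], p=solarize_p),
        transforms.ToTensor(),
        normalize,
    ])

# The two views differ only in the blur / solarization probabilities (Table 6).
view_1 = byol_view(blur_p=1.0, solarize_p=0.0)
view_2 = byol_view(blur_p=0.1, solarize_p=0.2)
```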
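The cosine rule for the target network momentum follows the paper, tau = 1 - (1 - tau_base) * (cos(pi * k / K) + 1) / 2 with tau_base = 0.996; `update_target` below is a hypothetical helper, not this repo's exact function:

```python
import math
import torch

def target_momentum(step, total_steps, base_momentum=0.996):
    """Cosine schedule: starts at base_momentum and increases to 1.0 at the end of training."""
    return 1.0 - (1.0 - base_momentum) * (math.cos(math.pi * step / total_steps) + 1.0) / 2.0

@torch.no_grad()
def update_target(online_net, target_net, tau):
    # Exponential moving average of the online parameters into the target network.
    for p_online, p_target in zip(online_net.parameters(), target_net.parameters()):
        p_target.data.mul_(tau).add_(p_online.data * (1.0 - tau))
```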
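A minimal sketch of excluding biases and BatchNorm parameters from weight decay and LARS adaptation via optimizer parameter groups; the `lars_exclude` flag is hypothetical and depends on the LARS implementation in use, and the default weight decay shown is the paper's value (the actual one lives in train_config.yaml):

```python
def split_param_groups(model, weight_decay=1.5e-6):
    """Put biases and BatchNorm parameters into a group with no weight decay
    and no LARS trust-ratio scaling."""
    regular, excluded = [], []
    for name, param in model.named_parameters():
        if not param.requires_grad:
            continue
        # 1-D parameters are biases and BatchNorm weights/biases.
        if param.ndim == 1 or name.endswith(".bias"):
            excluded.append(param)
        else:
            regular.append(param)
    return [
        {"params": regular, "weight_decay": weight_decay, "lars_exclude": False},
        {"params": excluded, "weight_decay": 0.0, "lars_exclude": True},
    ]
```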
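And a sketch of the wrapping order, assuming apex's SyncBatchNorm converter and DDP wrapper (`torch.nn.SyncBatchNorm.convert_sync_batchnorm` and `torch.nn.parallel.DistributedDataParallel` work analogously); `build_model` and `build_optimizer` are hypothetical placeholders for whatever byol_main.py constructs:

```python
import apex
from apex import amp
from apex.parallel import DistributedDataParallel as DDP

# Assumes torch.distributed has already been initialized by torch.distributed.launch.
model = build_model()                              # hypothetical model constructor
model = apex.parallel.convert_syncbn_model(model)  # 1. convert BatchNorm -> SyncBatchNorm
model = model.cuda()                               # 2. move to the local GPU
optimizer = build_optimizer(model)                 # hypothetical (LARS) optimizer constructor
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")  # 3. enable mixed precision (opt_level is a placeholder)
model = DDP(model, delay_allreduce=True)           # 4. wrap with DDP last
```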
Below are our reproduced results with the hyperparameters in train_config.yaml, using 32x NVIDIA V100 (32 GB) GPUs, i.e. a global batch size of 4096.
Under this setup, reference accuracies for 300 epochs are 72.5% (top-1) and 90.8% (top-5), as reported in Section F of the paper.
| Train Epochs | Classifier Train Epochs | Classifier LR | Top-1 Acc | Top-5 Acc |
|---|---|---|---|---|
| 100 | 120 | [1., 0.05]/Cosine | 63.2% | 85.1% |
| 150 | 120 | [1., 0.05]/Cosine | 66.6% | 87.6% |
| 200 | 120 | [1., 0.05]/Cosine | 68.8% | 89.1% |
| 250 | 120 | [1., 0.05]/Cosine | 70.9% | 90.2% |
| 300 | 120 | [1., 0.05]/Cosine | 71.7% | 90.8% |