This repository contains code to evaluate the adversarial robustness of classification models. We extend the original benchmark, Benchmarking Adversarial Robustness on Image Classification, to 19 attacks and 65 models, and use the well-known timm project as the default classification library.
- Built on PyTorch with support for timm: most classification models from timm can be used for adversarial training, so you can easily obtain robust models with different architectures.
- Supports many attacks under various threat models.
- Provides ready-to-use pre-trained baseline models (55 on ImageNet & 10 on CIFAR-10).
- Provides efficient, easy-to-use tools for evaluating classification models.
Dataset
- Supports the ImageNet and CIFAR-10 datasets for evaluation. For a custom dataset, define your own `torch.utils.data.Dataset` class and the corresponding `transform`.
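As a minimal sketch of what such a dataset class looks like (the tensor shapes and in-memory data here are illustrative assumptions, not the repository's actual loading code):

```python
import torch
from torch.utils.data import Dataset


class CustomImageDataset(Dataset):
    """Minimal custom dataset: pre-loaded image tensors with labels.

    In practice you would load image files from disk; tensors are used
    here only to keep the sketch self-contained.
    """

    def __init__(self, images, labels, transform=None):
        self.images = images        # float tensor of shape (N, C, H, W)
        self.labels = labels        # int tensor of shape (N,)
        self.transform = transform  # optional per-sample transform

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        image = self.images[idx]
        if self.transform is not None:
            image = self.transform(image)
        return image, self.labels[idx]


# Usage: four random "images" with binary labels.
data = CustomImageDataset(torch.rand(4, 3, 32, 32), torch.tensor([0, 1, 0, 1]))
x, y = data[0]
```

A dataset like this can then be wrapped in a `torch.utils.data.DataLoader` for batched evaluation.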
Classification Model
- Train classification models with timm, or define your own model class.
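A custom model class only needs to map an image batch to class logits, as timm models do. Below is a minimal sketch of such a class (the architecture is a throwaway example; with timm you would instead call `timm.create_model(...)`):

```python
import torch
import torch.nn as nn


class SmallCNN(nn.Module):
    """A tiny CNN classifier: image batch in, class logits out."""

    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool to 1x1 so any input size works
        )
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, x):
        h = self.features(x).flatten(1)  # (B, 16)
        return self.classifier(h)        # (B, num_classes)


model = SmallCNN(num_classes=10)
logits = model(torch.rand(2, 3, 32, 32))
```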
Modify attack config files
- We provide common settings for all the adversarial attacks in the config file attack_configs.py. Modify the `attack_configs` dictionary according to your needs.
- Define a custom `torch.utils.data.Dataset` and `transform` and replace the original ones in run_attack.py when evaluating a new dataset.
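To illustrate the kind of edit this involves (the key names below are assumptions for illustration; the actual keys are defined in the repository's attack_configs.py):

```python
# Hypothetical attack_configs-style dictionary; the real one lives in
# attack_configs.py and its keys may differ.
attack_configs = {
    'pgd': {
        'norm': 'linf',       # threat model: L-infinity ball
        'eps': 8 / 255,       # perturbation budget
        'stepsize': 2 / 255,  # per-iteration step size
        'steps': 20,          # number of attack iterations
    },
}

# Tighten the budget and run more iterations for a stricter evaluation.
attack_configs['pgd']['eps'] = 4 / 255
attack_configs['pgd']['steps'] = 50
```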
We provide a command-line interface to run adversarial robustness evaluation. For example, you can evaluate an adversarially trained ResNet50 from our model zoo with the PGD attack:
```
python run_attack.py --gpu 0 --crop_pct 0.875 --input_size 224 --interpolation 'bilinear' --data_dir DATA_PATH --label_file LABEL_PATH --batchsize 20 --num_workers 16 --model_name 'resnet50_at' --attack_name 'pgd'
```
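For reference, the core of an L-infinity PGD attack can be sketched as follows. This is a simplified illustration of the technique, not the repository's implementation (no random start, fixed hyperparameters, tiny throwaway model):

```python
import torch
import torch.nn as nn


def pgd_attack(model, x, y, eps=8 / 255, stepsize=2 / 255, steps=20):
    """L-inf PGD: take gradient-ascent steps on the loss, then project
    back into the eps-ball around x and the valid [0, 1] image range."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + stepsize * grad.sign()       # ascent step
            x_adv = x + (x_adv - x).clamp(-eps, eps)     # project into eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)                # keep a valid image
    return x_adv.detach()


# Usage on a throwaway linear "classifier".
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 10))
x = torch.rand(2, 3, 8, 8)
y = torch.tensor([0, 1])
x_adv = pgd_attack(model, x, y)
```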
Many thanks to these excellent open-source projects: