PyTorch implementation of the paper Towards Robust Neural Networks via Orthogonal Diversity, accepted by Pattern Recognition (journal, arxiv).
If our work is helpful for your research, please consider citing:
@article{fang2024towards,
  title={Towards robust neural networks via orthogonal diversity},
  author={Fang, Kun and Tao, Qinghua and Wu, Yingwen and Li, Tao and Cai, Jia and Cai, Feipeng and Huang, Xiaolin and Yang, Jie},
  journal={Pattern Recognition},
  pages={110281},
  year={2024},
  publisher={Elsevier}
}
Given that current adversarial defenses mostly rely on the data-augmentation effect brought by adversarial examples, we propose a novel defense method, namely DIO, that instead explores model properties to improve the robustness of DNNs. The key points of DIO are:
- Multiple paths augment DNNs for diverse features adaptive to adversarial inputs
- An orthogonality loss and a margin-maximization loss jointly contribute to DNNs’ diversity
- DIO outperforms non-data-augmented adversarial defenses and is compatible with data-augmented techniques, leading to further improved robustness (a minimal sketch of the multi-path design and the two losses follows this list).
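As a rough illustration of these ideas, here is a minimal PyTorch sketch, not the actual implementation (see model/dio_*.py for that): a shared backbone with several hypothetical linear heads, an orthogonality penalty on the heads' weights, and a hinge-style margin-maximization loss. All names, shapes and weightings here are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadNet(nn.Module):
    """Illustrative multi-path classifier: one shared backbone, K linear heads.
    The actual DIO architectures are defined in model/dio_*.py."""
    def __init__(self, backbone, feat_dim, num_classes, num_heads=3):
        super().__init__()
        self.backbone = backbone  # shared feature extractor, outputs (B, feat_dim)
        self.heads = nn.ModuleList(
            [nn.Linear(feat_dim, num_classes) for _ in range(num_heads)]
        )

    def forward(self, x):
        feat = self.backbone(x)
        return [head(feat) for head in self.heads]  # one logit tensor per path

def orthogonality_loss(heads):
    """Push the heads' weights towards pairwise orthogonality, so that
    different paths respond to different feature directions."""
    ws = [F.normalize(h.weight.flatten(), dim=0) for h in heads]
    loss = 0.0
    for i in range(len(ws)):
        for j in range(i + 1, len(ws)):
            loss = loss + torch.dot(ws[i], ws[j]).pow(2)
    return loss

def margin_loss(logits, y, margin=1.0):
    """Hinge-style margin maximization: the true-class logit should exceed
    the largest competing logit by at least `margin` (illustrative value)."""
    true = logits.gather(1, y.unsqueeze(1)).squeeze(1)
    other = logits.scatter(1, y.unsqueeze(1), float('-inf')).max(dim=1).values
    return F.relu(margin - (true - other)).mean()
```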
Dependencies mainly include:
- Python (miniconda)
- PyTorch
- AdverTorch
- AutoAttack
For the complete dependency specification, please refer to environment.yml.
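For example, assuming miniconda (or any conda distribution) is installed, the environment can be created with:

conda env create -f environment.yml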
A brief description of the files is listed below:
- train.sh\py: training scripts
- attack_dio.sh\py: attack scripts
- adapt_attack*: adaptive attack scripts
- model/dio_*.py: DIO model definitions
Run
sh train.sh
to train DIO models. Detailed training settings (model, data set and whether to perform adversarial training) and hyper-parameters can be modified in the train.sh script.
A complete list of the chosen hyper-parameters for different models can be found in Table 6 in the appendix of the paper.
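For orientation, a schematic training step combining the per-path classification losses with the two regularizers might look like the following; it builds on the sketch above, and `lambda_o` and `lambda_m` are hypothetical names for the loss weights (the paper's actual values are in Table 6):

```python
def train_step(model, x, y, optimizer, lambda_o=0.1, lambda_m=1.0):
    """Schematic DIO-style step: cross-entropy on every path plus the
    orthogonality and margin regularizers. Weights are illustrative only."""
    optimizer.zero_grad()
    logits_list = model(x)  # one logit tensor per path
    loss = sum(F.cross_entropy(logits, y) for logits in logits_list)
    loss = loss + lambda_o * orthogonality_loss(model.heads)
    loss = loss + lambda_m * sum(margin_loss(logits, y) for logits in logits_list)
    loss.backward()
    optimizer.step()
    return loss.item()
```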
Run
sh attack_dio.sh
to attack the trained DIO models, and
sh adapt_attack.sh
to perform the adaptive attacks.
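For reference, a standalone robustness evaluation with the AutoAttack package (one of the listed dependencies) looks roughly like the snippet below; this is not the exact configuration of attack_dio.sh. It assumes `model` returns a single logit tensor (for DIO's multiple paths a wrapper aggregating the heads would be needed) and that `x_test`, `y_test` hold clean test data in [0, 1]:

```python
from autoattack import AutoAttack

model.eval()
adversary = AutoAttack(model, norm='Linf', eps=8 / 255, version='standard')
# the standard version runs APGD-CE, APGD-T, FAB-T and Square sequentially
x_adv = adversary.run_standard_evaluation(x_test, y_test, bs=128)
```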
DIO is a model-augmented adversarial defense and can cooperate with other data-augmented defenses to further boost adversarial robustness.
In this work, several representative data-augmented defenses are considered:
- PGD-based adversarial training (AT)
- TRADES: ICML'19 paper, codes (its loss is sketched after this list)
- AWP: NeurIPS'20 paper, codes
- LBGAT: CVPR'21 paper, codes
- GAIRAT: ICLR'21 paper, codes
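As an example of the defenses above, here is a minimal, self-contained sketch of the TRADES objective: cross-entropy on clean inputs plus a beta-weighted KL term between clean and adversarial predictions. It assumes a model that returns a single logit tensor; the defaults for `eps`, `step_size` and `steps` are common CIFAR-10 choices, not necessarily the ones used in this repo.

```python
import torch
import torch.nn.functional as F

def trades_loss(model, x, y, beta=6.0, eps=8 / 255, step_size=2 / 255, steps=10):
    """Minimal TRADES sketch: craft x_adv by maximizing the KL divergence
    between clean and perturbed predictions, then trade off natural
    accuracy against robustness via the beta-weighted KL term."""
    model.eval()
    x_adv = (x + 0.001 * torch.randn_like(x)).detach()  # small random start
    for _ in range(steps):
        x_adv.requires_grad_(True)
        kl = F.kl_div(F.log_softmax(model(x_adv), dim=1),
                      F.softmax(model(x), dim=1), reduction='batchmean')
        grad = torch.autograd.grad(kl, x_adv)[0]
        x_adv = (x_adv + step_size * grad.sign()).detach()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    model.train()
    logits = model(x)
    loss_nat = F.cross_entropy(logits, y)
    loss_rob = F.kl_div(F.log_softmax(model(x_adv), dim=1),
                        F.softmax(logits, dim=1), reduction='batchmean')
    return loss_nat + beta * loss_rob
```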
We reproduce these defenses based on their source code and equip DIO with these carefully designed data-augmentation techniques. The training and attack code is also provided in the corresponding folders of this repo:
Run
sh train_*_dio.sh
and
sh attack_*_dio.sh
in these folders to train and attack the corresponding DIO models equipped with each defense, respectively.
If you have questions about the code or the paper, you can contact me (fanghenshao@sjtu.edu.cn) or raise an issue here.
If you find the code useful, feel free to fork and ⭐ this repo and cite our paper! :)
Lots of thanks from REAL DIO!!!