Semi-Targeted Model Poisoning Attack on Federated Learning via Backward Error Analysis, published at IJCNN 2022
Attacking Distance-aware Attack: Semi-targeted Model Poisoning on Federated Learning, published in IEEE Transactions on Artificial Intelligence, 2023
The Attacking Distance-aware Attack (ADA) enhances model poisoning by finding the optimized target class in the feature space.
This README describes how to mount the semi-targeted ADA attack and several baseline attacks on five benchmark datasets: MNIST, Fashion-MNIST, CIFAR-10, CIFAR-100, and ImageNet. Three model architectures can be used for federated learning: a 2-layer CNN, VGG16, and VGG19.
TensorFlow 2
Python 3.8
The attack is mounted after the federated learning has converged. Please first download the pretrained model weights from the link below.
Google Drive: Pretrained model datasets
To run the attack with an optimized target class that was prepared beforehand:
python main.py --dataset cifar10 --model vgg16 --seed 0 --epsilon 0.1
python main.py --dataset cifar10 --model vgg16 --seed 0 --epsilon 0.1 --ada
python main.py --dataset cifar10 --model vgg16 --seed 0 --epsilon 0.1 --ada --scale
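The three commands above presumably correspond to the baseline attack, the ADA attack, and the ADA attack with a scaled malicious update. As a rough illustration of what update scaling typically looks like (in the spirit of model-replacement attacks), here is a minimal sketch; the function name and factor below are assumptions, not the repository's code:

```python
# Hypothetical sketch of malicious-update scaling (model-replacement style).
# `global_weights` and `local_weights` are lists of per-layer numpy arrays;
# `factor` is often set near the number of participating clients so the
# scaled update survives server-side averaging. All names are illustrative.
def scale_update(global_weights, local_weights, factor):
    return [g + factor * (l - g) for g, l in zip(global_weights, local_weights)]
```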
If you would like to use a different model architecture and train the model from scratch:
python pretraining.py --dataset cifar100 --model vgg16 --epoch 5 --batch 128 --sample 500
where you can choose the dataset, model architecture, number of local training epochs, and local batch size. The learned model weights will be saved for mounting the ADA attack.
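For orientation, pretraining amounts to standard federated averaging with the chosen local epoch and batch size. Below is a minimal sketch of one aggregation round; all helper names are assumed rather than taken from pretraining.py:

```python
import numpy as np

def fedavg_round(global_weights, client_datasets, local_train, epochs=5, batch=128):
    """One round: each client trains locally, then layers are averaged."""
    client_weights = [
        local_train(global_weights, data, epochs=epochs, batch=batch)
        for data in client_datasets
    ]
    # Uniform average per layer (equal-sized client shards assumed).
    return [np.mean(np.stack(layer), axis=0) for layer in zip(*client_weights)]
```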
If you would like to compute the optimized target class from scratch using FLAME:
python main.py --dataset cifar10 --model vgg16 --seed 0 --epsilon 0.1 --flame
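Conceptually, the optimized target class is the class closest to the source class in the model's feature space. The sketch below illustrates one plausible distance criterion based on class centroids; it is not the exact FLAME procedure, and every name in it is an assumption:

```python
import numpy as np

def nearest_target_class(features, labels, source_class, num_classes):
    """Return the class whose feature centroid is nearest to the source class.

    features: (n_samples, feature_dim) penultimate-layer activations.
    labels:   (n_samples,) integer class labels.
    """
    centroids = np.stack(
        [features[labels == c].mean(axis=0) for c in range(num_classes)]
    )
    dists = np.linalg.norm(centroids - centroids[source_class], axis=1)
    dists[source_class] = np.inf  # never pick the source class itself
    return int(np.argmin(dists))
```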
Four types of defense methods are available: "NDC", "Krum", "TrimmedMean", and "DP" (default: None):
python main.py --dataset cifar10 --model vgg16 --seed 0 --epsilon 0.1 --defense NDC
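As background, these defenses filter or clip client updates before aggregation; TrimmedMean, for instance, averages each coordinate after discarding the extremes. A minimal sketch of that idea (illustrative only, not the repository's implementation):

```python
import numpy as np

def trimmed_mean(updates, trim_k):
    """Coordinate-wise trimmed mean over flattened client updates.

    updates: (num_clients, num_params) array; the trim_k largest and
    trim_k smallest values in each coordinate are dropped before averaging.
    """
    sorted_updates = np.sort(updates, axis=0)
    return sorted_updates[trim_k : updates.shape[0] - trim_k].mean(axis=0)
```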
If this repository is helpful for your research or you want to refer to the results reported in this work, please cite it using the following BibTeX entries:
@inproceedings{sun2022semitarget,
  author    = {Yuwei Sun and Hideya Ochiai and Jun Sakuma},
  title     = {Semi-Targeted Model Poisoning Attack on Federated Learning via Backward Error Analysis},
  booktitle = {International Joint Conference on Neural Networks (IJCNN)},
  year      = {2022}
}
@article{sun2023attacking,
  author  = {Yuwei Sun and Hideya Ochiai and Jun Sakuma},
  title   = {Attacking Distance-aware Attack: Semi-targeted Model Poisoning on Federated Learning},
  journal = {IEEE Transactions on Artificial Intelligence},
  year    = {2023}
}
We also have a survey paper on the security and communication efficiency of decentralized deep learning, published in IEEE Transactions on Artificial Intelligence, December 2022.