Implementation of the Deep Gradient Leakage (DGL) privacy attack against federated learning (introduced in *Deep Leakage from Gradients*) in FLEXible and PyTorch.
The attack works by optimizing randomly initialized dummy data (and a dummy label) so that the gradient they produce matches the leaked gradient. When the gradients match, the dummy data is in many cases close to the original training data.
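A minimal sketch of the gradient-matching loop, assuming a PyTorch classifier `model` and the leaked per-parameter gradients `leaked_grads` are given (the names, shapes, and step count here are illustrative, not the repository's API):

```python
import torch
import torch.nn.functional as F

def dlg_attack(model, leaked_grads, input_shape, num_classes, steps=300):
    # Dummy input and soft dummy label, both optimized jointly (as in DGL,
    # the true label is unknown and must be recovered as well).
    dummy_x = torch.randn(1, *input_shape, requires_grad=True)
    dummy_y = torch.randn(1, num_classes, requires_grad=True)
    optimizer = torch.optim.LBFGS([dummy_x, dummy_y])

    def closure():
        optimizer.zero_grad()
        pred = model(dummy_x)
        # Cross entropy against the current soft dummy label.
        loss = torch.sum(-F.softmax(dummy_y, dim=-1) * F.log_softmax(pred, dim=-1))
        # Gradients of the dummy loss, kept differentiable so we can
        # optimize the dummy data through them.
        grads = torch.autograd.grad(loss, tuple(model.parameters()), create_graph=True)
        # Gradient-matching objective: squared distance to the leaked gradient.
        grad_diff = sum(((g - lg) ** 2).sum() for g, lg in zip(grads, leaked_grads))
        grad_diff.backward()
        return grad_diff

    for _ in range(steps):
        optimizer.step(closure)
    return dummy_x.detach(), dummy_y.detach()
```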
Improved Deep Gradient Leakage (iDLG) refines DGL by using analytical knowledge about the derivative of the cross-entropy loss to extract the ground-truth label exactly, which improves the stability of the attack. More information in *iDLG: Improved Deep Leakage from Gradients*.
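A hedged sketch of the iDLG label-extraction trick. It assumes `leaked_grads` is ordered like `model.parameters()` with the last fully-connected layer's weight gradient at index `-2`; that index and the non-negativity of that layer's inputs (e.g. ReLU activations) are assumptions to adjust for your architecture:

```python
import torch

def extract_label(leaked_grads):
    # For cross entropy, d(loss)/d(logit_i) = softmax_i - 1{i == true label},
    # which is negative only at the true label. With non-negative inputs to
    # the last layer, this sign carries over to the weight gradient, so the
    # true label is the row with the (only) negative sum.
    last_weight_grad = leaked_grads[-2]  # shape: (num_classes, features)
    return torch.argmin(last_weight_grad.sum(dim=-1)).item()
```

With the label fixed this way, the gradient-matching loop only needs to optimize the dummy input against a standard cross-entropy loss, which is what makes iDLG more stable than plain DGL.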