CW efficiency improvement and bug fix, add CW binary search version, early stop PGD version, support L0 and Linf for CW and CWBS, rewrite FAB attack, fix MI-FGSM bug, rewrite JSMA. #168

Open · wants to merge 66 commits into master

Conversation

@rikonaka (Contributor) commented Nov 12, 2023

PR Type and Checklist

What kind of change does this PR introduce?

CW attack fix

There is an obscure bug in the F function of the original CW attack code.

In Carlini's original CW code, real is calculated as a sum:

https://github.com/carlini/nn_robust_attacks/blob/c6b8f6a254e82a79a52cfbc673b632cad5ea1ab1/l2_attack.py#L96

But in torchattacks it became a max; I discovered this problem accidentally 😋:

real = torch.max(one_hot_labels * outputs, dim=1)[0]
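To see the difference, here is a minimal, self-contained illustration (the tensor values are made up for the example): with all-negative logits, the masked max wrongly returns 0 from a masked-out position, while the sum recovers the true-class logit.

import torch

outputs = torch.tensor([[-1.0, -3.0, -0.5, -2.0]])     # all-negative logits for one image
one_hot_labels = torch.tensor([[0.0, 0.0, 1.0, 0.0]])  # true label = index 2

real_sum = torch.sum(one_hot_labels * outputs, dim=1)     # tensor([-0.5000]) -- the true-class logit
real_max = torch.max(one_hot_labels * outputs, dim=1)[0]  # tensor([0.]) -- comes from a masked-out slot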

I also removed the large number of tensor detach() and view() operations in the original code and instead used index assignment on tensors; it is simpler and more efficient.

At the same time, I also added a binary search version of CW (CWBS), see issue #167. Binary search can indeed significantly reduce the size of the perturbations. The red line is the value of best_L2.

[figure: best_L2]
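For reference, this is a minimal sketch of the binary-search loop over the CW trade-off constant c that a CWBS-style attack performs; run_cw() is a hypothetical helper standing in for one full CW run at a fixed c, not this PR's actual API.

def binary_search_c(run_cw, x, y, c=1e-3, lo=0.0, hi=1e10, steps=9):
    best_adv, best_l2 = None, float("inf")
    for _ in range(steps):
        adv, success, l2 = run_cw(x, y, c)   # one full CW attack with constant c
        if success:
            if l2 < best_l2:                 # keep the smallest perturbation so far
                best_adv, best_l2 = adv, l2
            hi = c                           # a smaller c may still succeed
            c = (lo + hi) / 2
        else:
            lo = c                           # need a larger c to succeed
            c = (lo + hi) / 2 if hi < 1e10 else c * 10
    return best_adv, best_l2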

I tested the three CW attack variants (L0, L2, and Linf) and found that a 100% attack success rate can be achieved on 50 test images.

[figure: attack success rate]

And the perturbations are still invisible.

[figure]

FAB attack fix

The original FAB code was too complicated and difficult to maintain, so I rewrote the FAB attack and split the L1 and L2 attacks into separate files. I also found that when the user specifies a target label, the previous FAB code does not work correctly as a targeted attack.

The old FAB code is renamed to AFAB so that it can still be used in AutoAttack.

In the FAB code's forward() function

def forward(self, images, labels):

there is no parameter for the target label. In contrast, the FAB targeted attack requires both labels, one for the original label and one for the target label:

def get_diff_logits_grads_batch_targeted(self, imgs, la, la_target):

But only one label is passed in throughout the code. If the user wants to specify a target label for the attack, the computation related to the targeted attack is actually meaningless, since there is only one label input:

diffy = -(y[u, la] - y[u, la_target])

For example, here la = la_target, so diffy = -(y[u, la] - y[u, la]) is always zero and therefore meaningless.

I'll try to fix this, but I don't have a solution at the moment, because we need to pass two labels into the attack, which conflicts with the existing framework. So for now I have submitted the FAB attack without the targeted version; see the sketch below for one possible direction.
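As an illustration only: one way to fit two labels into the existing one-label forward(images, labels) signature is to derive the target label inside the attack via the library's targeted-mode helpers. A minimal sketch, assuming the rewritten attack is exposed as FAB and that set_mode_targeted_by_function behaves as in the released torchattacks API:

atk = FAB(model, norm='L2')
# The target label is computed from the ground-truth label inside the attack,
# so forward() still only receives (images, labels).
atk.set_mode_targeted_by_function(lambda images, labels: (labels + 1) % 10)
adv_images = atk(images, labels)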

Update: the FAB targeted attack has been completed.

@rikonaka rikonaka changed the title Improve the efficiency of the Original CW attack and fix an error in the calculation of the F function of the CW attack algorithm, and add CW binary search version. CW efficiency improvement and bug fix, add CW binary search version. Nov 12, 2023
@codecov-commenter commented Nov 12, 2023

⚠️ Please install the Codecov GitHub app to ensure uploads and comments are reliably processed by Codecov.

Codecov Report

Attention: Patch coverage is 82.13115% with 327 lines in your changes missing coverage. Please review.

Project coverage is 76.89%. Comparing base (936e86d) to head (2a5b04c).
Report is 1 commit behind head on master.

Files with missing lines            Patch %   Lines
torchattacks/attacks/afab.py        57.20%    156 Missing and 31 partials ⚠️
torchattacks/attacks/fabl2.py       80.89%    29 Missing and 5 partials ⚠️
torchattacks/attacks/fab.py         79.87%    28 Missing and 4 partials ⚠️
torchattacks/attacks/fabl1.py       82.08%    28 Missing and 3 partials ⚠️
code_coverage/script/resnet.py      71.42%    24 Missing ⚠️
torchattacks/attack.py              77.14%    8 Missing ⚠️
torchattacks/attacks/jsma.py        93.67%    3 Missing and 2 partials ⚠️
torchattacks/attacks/cwbsl0.py      98.97%    0 Missing and 1 partial ⚠️
torchattacks/attacks/cwbslinf.py    98.95%    0 Missing and 1 partial ⚠️
torchattacks/attacks/cwlinf.py      98.52%    0 Missing and 1 partial ⚠️
... and 3 more

❗ Your organization needs to install the Codecov GitHub app to enable full functionality.

Additional details and impacted files

Impacted file tree graph

@@            Coverage Diff             @@
##           master     #168      +/-   ##
==========================================
+ Coverage   73.37%   76.89%   +3.52%     
==========================================
  Files          44       54      +10     
  Lines        3827     4926    +1099     
  Branches      578      586       +8     
==========================================
+ Hits         2808     3788     +980     
- Misses        862      972     +110     
- Partials      157      166       +9     
Files with missing lines              Coverage Δ
code_coverage/test_atks.py            100.00% <100.00%> (+6.89%) ⬆️
torchattacks/__init__.py              100.00% <100.00%> (ø)
torchattacks/attacks/autoattack.py    80.64% <100.00%> (ø)
torchattacks/attacks/cw.py            100.00% <100.00%> (ø)
torchattacks/attacks/cwbs.py          100.00% <100.00%> (ø)
torchattacks/attacks/cwl0.py          100.00% <100.00%> (ø)
torchattacks/attacks/mifgsm.py        100.00% <100.00%> (ø)
torchattacks/attacks/cwbsl0.py        98.97% <98.97%> (ø)
torchattacks/attacks/cwbslinf.py      98.95% <98.95%> (ø)
torchattacks/attacks/cwlinf.py        98.52% <98.52%> (ø)
... and 10 more

... and 3 files with indirect coverage changes


Continue to review full report in Codecov by Sentry.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 23620a6...2a5b04c. Read the comment docs.

@rikonaka rikonaka changed the title CW efficiency improvement and bug fix, add CW binary search version. CW efficiency improvement and bug fix, add CW binary search version, support L0 and Linf for CW and CWBS. Nov 19, 2023
@ZaberKo commented Nov 23, 2023

I think the calculation of other is still incorrect: it neglects that the output logits could be negative.
cwl2.py#L146

@rikonaka (Contributor, Author)

> I think the calculation of other is still incorrect: it neglects that the output logits could be negative. cwl2.py#L146

Thank you very much for your advice, but this other calculation is actually translated from Carlini's code (TensorFlow to PyTorch). You can check it out 😁.

https://github.com/carlini/nn_robust_attacks/blob/c6b8f6a254e82a79a52cfbc673b632cad5ea1ab1/l2_attack.py#L97

As for your point that the logits may be negative: the original author's code also uses the pre-softmax values directly.

https://github.com/carlini/nn_robust_attacks/blob/c6b8f6a254e82a79a52cfbc673b632cad5ea1ab1/l2_attack.py#L90C33-L90C33

[figure: real]

So this should be correct 😉.

@ZaberKo commented Nov 23, 2023

> Thank you very much for your advice, but this other calculation is actually translated from Carlini's code (TensorFlow to PyTorch). […] So this should be correct 😉.

Thanks for the quick response. I think you misunderstood the issue. A quick fix of cwl2.py#L146 would be:

other = torch.max((1 - one_hot_labels) * outputs - one_hot_labels*10000., dim=1)[0]
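A minimal, self-contained illustration of the failure mode this fix targets (tensor values made up for the example): with all-negative logits, the plain masked max returns 0 from the masked true-label slot, while the subtracted version recovers the largest wrong-class logit.

import torch

outputs = torch.tensor([[-5.0, -1.0, -3.0]])      # all-negative logits
one_hot_labels = torch.tensor([[0.0, 1.0, 0.0]])  # true label = index 1

plain = torch.max((1 - one_hot_labels) * outputs, dim=1)[0]
# tensor([0.]) -- wrong, comes from the masked true-label position
fixed = torch.max((1 - one_hot_labels) * outputs - one_hot_labels * 10000., dim=1)[0]
# tensor([-3.]) -- correct, the largest wrong-class logit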

@rikonaka (Contributor, Author) commented Nov 23, 2023

> Thanks for the quick response. I think you misunderstood the issue. A quick fix of cwl2.py#L146 would be:
>
> other = torch.max((1 - one_hot_labels) * outputs - one_hot_labels*10000., dim=1)[0]

Good question. Here we pick the maximum value of the logits excluding the true label, so if we only have 1 image, the outputs will be

[
[x1, x2, x3, x4]
]

Then we use one_hot_labels to mask one position (suppose x3), and we get

[
[x1, x2, 0, x4]
]

So torch.max will calculate the max value of x1, x2, 0, and x4.

In TensorFlow, the original author subtracts that value (one_hot_labels*10000) to handle the case where all the logits are negative (I haven't used TensorFlow for a long time 🤣); this is a point that can be improved. But in PyTorch, the logits here are greater than 0.

[figure: logits]

So the situation you are worried about, where all the logits are negative, will not happen 😉.

@ZaberKo commented Nov 24, 2023

> Good question. Here we pick the maximum value of the logits excluding the true label […] So the situation you are worried about, where all the logits are negative, will not happen 😉.

However, there is no guarantee that the output logits must be non-negative in PyTorch, for arbitrary models under arbitrary training methods.

@rikonaka (Contributor, Author) commented Nov 24, 2023

> However, there is no guarantee that the output logits must be non-negative in PyTorch, for arbitrary models under arbitrary training methods.

😵‍💫 By the same token, there is also no guarantee that the output logits must be negative in PyTorch, for arbitrary models under arbitrary training methods. If you can provide evidence that some model's logits are all negative, that would further support your argument.

@ZaberKo commented Nov 24, 2023

> 😵‍💫 By the same token, there is also no guarantee that the output logits must be negative in PyTorch […]

That is not the point. The point here is that we need to cover all cases, even the rare ones. Here are some other PyTorch implementations of the CW f function for reference:

* imrahulr/adversarial_robustness_pytorch: https://github.com/imrahulr/adversarial_robustness_pytorch/blob/6df6a8f0cd49cf6d18507a4b574c004ab6eedf49/core/attacks/utils.py#L212
* thu-ml/ares: https://github.com/thu-ml/ares/blob/306e35fe4309d791f9252bb6aab51198d2b9b511/ares/attack/cw.py#L133

@rikonaka (Contributor, Author) commented Nov 24, 2023

> That is not the point. The point here is that we need to cover all cases, even the rare ones. Here are some other PyTorch implementations of the CW f function for reference: […]

Thanks for your suggestion 👍, I will rewrite this f function quickly. Next time, please provide detailed information from the beginning, instead of wasting other people's time by making them guess at and misunderstand a terse comment.

@Adversarian commented Nov 29, 2023

Thanks for the effort you've made to improve the implementation of CW in this library. I have one suggestion, and correct me if it is not feasible to implement: wouldn't it be better to alias one of the variants of CW (e.g. CWL0, CWLinf, etc.) as CW, so that this version doesn't introduce a breaking change for torchattacks.CW and preserves backward compatibility?

You could use the variant that was previously the default (I believe CWL2 in the current implementation) as the alias to remediate this (as easily as something like CW = CWL2, for instance).
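A minimal sketch of the suggested alias, assuming the renamed L2 implementation in this PR is called CWL2 (placement and names are hypothetical):

# e.g. in torchattacks/attacks/cw.py, after the CWL2 class definition
CW = CWL2  # torchattacks.CW keeps resolving to the L2 attack

# Existing user code is then unchanged:
from torchattacks import CW
atk = CW(model, c=1, steps=50)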

@rikonaka (Contributor, Author)

> […] wouldn't it be better to alias one of the variants of CW (e.g. CWL0, CWLinf, etc.) as CW […] You could use the variant that was previously the default (I believe CWL2 in the current implementation) as the alias […]

Thank you very much for your suggestion. I will move CWL2 to CW now. 😉

@rikonaka rikonaka changed the title CW efficiency improvement and bug fix, add CW binary search version, support L0 and Linf for CW and CWBS. CW efficiency improvement and bug fix, add CW binary search version, early stop PGD version, support L0 and Linf for CW and CWBS, rewrite FAB attack. Mar 31, 2024
@rikonaka rikonaka changed the title CW efficiency improvement and bug fix, add CW binary search version, early stop PGD version, support L0 and Linf for CW and CWBS, rewrite FAB attack. CW efficiency improvement and bug fix, add CW binary search version, early stop PGD version, support L0 and Linf for CW and CWBS, rewrite FAB attack, fix MI-FGSM bug, rewrite JSMA. Jun 23, 2024