
Add Backward Smoothing #26

Open
jinghuichen opened this issue Oct 6, 2020 · 3 comments
Paper: Efficient Robust Training via Backward Smoothing https://arxiv.org/abs/2010.01278

Venue: {if applicable, the venue where the paper appeared}

Dataset and threat model: CIFAR-10/CIFAR-100, Linf, eps = 8/255

Code: https://github.com/jinghuichen/AutoAttackEval

Pre-trained model: https://drive.google.com/file/d/1lvMa2rbMrIVkAqsyrs_YXLBhewZBfdkP/view?usp=sharing (CIFAR10)
https://drive.google.com/file/d/1xNhK4w5ZuUSfbD_WR4xFKTprojaVux1A/view?usp=sharing (CIFAR100)

Log file: {link to log file of the evaluation}

Additional data: no

Clean and robust accuracy: CIFAR-10: clean 85.32%, robust 54.94%; CIFAR-100: clean 62.15%, robust 31.92%

Architecture: WideResNet-34-10

Description of the model/defense: Efficient robust training via backward smoothing

Thanks
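
For reference, here is a minimal sketch of how one might fetch and load the CIFAR-10 checkpoint linked above. The `gdown` dependency, the `wideresnet` module path and `WideResNet` class, the output filename, and the checkpoint layout are illustrative assumptions, not part of the submission; in practice, use the model definition from the linked jinghuichen/AutoAttackEval repo.

```python
# Sketch only: download the CIFAR-10 checkpoint from the Google Drive link
# above and load it into a WideResNet-34-10. Module path, class name, output
# filename and checkpoint layout are assumptions, not part of the submission.
import gdown
import torch

from wideresnet import WideResNet  # hypothetical import; use the model file from jinghuichen/AutoAttackEval

# Google Drive link from the submission (CIFAR-10 model).
CKPT_URL = 'https://drive.google.com/uc?id=1lvMa2rbMrIVkAqsyrs_YXLBhewZBfdkP'
CKPT_PATH = 'backward_smoothing_cifar10.pt'  # placeholder filename

gdown.download(CKPT_URL, CKPT_PATH, quiet=False)

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = WideResNet(depth=34, widen_factor=10, num_classes=10).to(device)

# The released file may store the weights directly or under a 'state_dict'
# key; adjust to whatever the checkpoint actually contains.
state = torch.load(CKPT_PATH, map_location=device)
if isinstance(state, dict) and 'state_dict' in state:
    state = state['state_dict']
model.load_state_dict(state)
model.eval()
```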

fra31 commented Oct 7, 2020

Hi,

thanks for the submission! I ran the evaluation with Linf bound eps = 8/255 and got

CIFAR-10
clean accuracy: 85.32%
robust accuracy: 51.12%

CIFAR-100
clean accuracy: 62.15%
robust accuracy: 26.94%

which seem to me in line with what is reported in the paper. If this is the case, I'd be happy to add them!
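
For context, a minimal sketch of how a standard AutoAttack evaluation under Linf at eps = 8/255 is typically run with the `autoattack` package. The data pipeline, batch size, and the `wideresnet` import and checkpoint filename carry over from the sketch above and are assumptions, not the exact script used for the numbers reported here.

```python
# Sketch of a standard AutoAttack evaluation under Linf with eps = 8/255.
# Assumes a GPU and a model that handles any input normalization internally;
# the `wideresnet` import and checkpoint filename are placeholders.
import torch
import torchvision
import torchvision.transforms as transforms
from autoattack import AutoAttack

from wideresnet import WideResNet  # hypothetical import, as in the previous sketch

device = 'cuda'
model = WideResNet(depth=34, widen_factor=10, num_classes=10).to(device)
model.load_state_dict(torch.load('backward_smoothing_cifar10.pt', map_location=device))
model.eval()

# CIFAR-10 test set as tensors in [0, 1].
testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True,
                                       transform=transforms.ToTensor())
loader = torch.utils.data.DataLoader(testset, batch_size=1000, shuffle=False)
x_test = torch.cat([x for x, _ in loader], dim=0)
y_test = torch.cat([y for _, y in loader], dim=0)

# The 'standard' version runs APGD-CE, targeted APGD-DLR, targeted FAB and
# Square, and reports clean accuracy plus robust accuracy after each attack.
adversary = AutoAttack(model, norm='Linf', eps=8/255, version='standard')
x_adv = adversary.run_standard_evaluation(x_test, y_test, bs=128)
```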

jinghuichen (Author) commented
The numbers are correct. Thank you very much!

fra31 commented Oct 7, 2020

Added, thanks again for the submissions!
