Difference in robust accuracies for Autoattack #188

Open · iamsh4shank opened this issue Jun 13, 2024 · 1 comment
Labels: enhancement (New feature or request)

iamsh4shank commented Jun 13, 2024
❔ Any questions

I am trying to run AutoAttack using this repo and the implementation present here, and I am getting different results from the two: the robust accuracy with torchattacks is 1.92%, while with the original AutoAttack it is around 2.52%.

I am currently testing on the ImageNet-100 dataset (100 classes). To keep it consistent with the original AutoAttack, I have set the number of targeted classes to 9.
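(If I am reading both implementations correctly, torchattacks derives the number of target classes for the targeted attacks as n_classes - 1, so n_classes=10 below should correspond to the original AutoAttack's default of 9 targets, which can also be set explicitly on the adversary object created further down; this is my understanding, not something stated in either README:)

# torchattacks: targeted attacks use n_classes - 1 target classes,
# so n_classes=10 -> 9 targets (matching the AutoAttack default).
# Original AutoAttack: 9 targets is the default and can be set explicitly:
adversary.apgd_targeted.n_target_classes = 9
adversary.fab.n_target_classes = 9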

The code I used is roughly as follows.

The normalization is applied by wrapping the model:

model = nn.Sequential(Normalize(mean=dataset_mean, std=dataset_std), model)
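Normalize here is my own small wrapper module, not something from either library; a minimal sketch of what I use:

import torch
import torch.nn as nn

class Normalize(nn.Module):
    # Applies per-channel (x - mean) / std inside the model,
    # so the attacks can operate on raw [0, 1] images.
    def __init__(self, mean, std):
        super().__init__()
        self.register_buffer('mean', torch.tensor(mean).view(1, -1, 1, 1))
        self.register_buffer('std', torch.tensor(std).view(1, -1, 1, 1))

    def forward(self, x):
        return (x - self.mean) / self.std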

torchattacks:

import torch
import torchattacks

atk = torchattacks.AutoAttack(model, norm='Linf', eps=4/255, version='standard',
                              n_classes=10, seed=0, verbose=False)
atk.set_normalization_used(mean=dataset_mean, std=dataset_std)

correct = 0
total = 0
for clean_x_test, clean_y_test in data_loader_val:
    clean_x_test = clean_x_test.to(device)
    clean_y_test = clean_y_test.to(device)

    adv_images = atk(clean_x_test, clean_y_test)
    outputs = model(adv_images)
    _, predicted = torch.max(outputs.data, 1)

    total += clean_y_test.size(0)
    correct += (predicted == clean_y_test).sum().item()

robust_accuracy = 100 * float(correct) / total
print('Robust Accuracy for AutoAttack: %f %%' % robust_accuracy)
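For what it's worth, both runs could also be scored with the same helper to rule out differences in the accuracy computation itself; a sketch (not what I ran above) that scores the torchattacks outputs with robustbench's clean_accuracy:

from robustbench.utils import clean_accuracy

adv_batches, label_batches = [], []
for clean_x, clean_y in data_loader_val:
    adv_batches.append(atk(clean_x.to(device), clean_y.to(device)).cpu())
    label_batches.append(clean_y)
x_adv_ta = torch.cat(adv_batches, 0)
y_all = torch.cat(label_batches, 0)
acc = clean_accuracy(model, x_adv_ta, y_all,
                     batch_size=args.batch_size, device=device)
print(f'torchattacks robust accuracy (robustbench scoring): {acc:.2%}')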

AutoAttack (original implementation):

import torch
from autoattack import AutoAttack
from robustbench.utils import clean_accuracy  # assuming robustbench's helper

# Collect the whole validation set in one pass so images and labels
# stay aligned even if the loader shuffles.
xs, ys = [], []
for x, y in data_loader_val:
    xs.append(x)
    ys.append(y)
x_test = torch.cat(xs, 0)
y_test = torch.cat(ys, 0)
print(x_test.shape)

aa_state_path = None
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

accuracy = clean_accuracy(model, x_test, y_test,
                          batch_size=args.batch_size, device=device)
print(f'Clean accuracy: {accuracy:.2%}')

adversary = AutoAttack(model, norm=args.norm, eps=args.epsilon,
                       version='standard', device=device, seed=0,
                       log_path=None)
x_adv = adversary.run_standard_evaluation(x_test, y_test,
                                          bs=args.batch_size,
                                          state_path=aa_state_path)
adv_accuracy = clean_accuracy(model, x_adv, y_test,
                              batch_size=args.batch_size, device=device)
print(f'Robust accuracy: {adv_accuracy:.2%}')
iamsh4shank added the enhancement label Jun 13, 2024
iamsh4shank (Author) commented:

@Harry24k