Model predictions incorrect -> possible dataloader issue? #3
rtaori opened this issue:

Hi,

I ran code/predict.py with the PGD_1step/eps_512/noise_0.25 model, and the predictions seem to be always wrong (the "correct" column in the output is always 0). Upon further inspection, the predictions appear consistent; it is just the label index that is off (for example, instead of predicting index 0, the model predicts 828).

To confirm this, I ran the baseline noise_0.25 model from https://github.com/locuslab/smoothing, but with the code in this repo. The predictions are correct, i.e. the "correct" column is almost always 1.

I suspect your models were not trained with the standard ImageNet directory layout, so the sort order, and hence the labels, came out different.

If possible, could you investigate this and let me know which standard ImageNet indices correspond to the indices the model outputs?

Thanks,
Rohan
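For reference, the mechanism being described: torchvision's ImageFolder assigns label indices by sorting the class subdirectory names, so a nonstandard directory layout permutes the class-to-index mapping. A minimal sketch of that logic (the path in the comment is hypothetical):

```python
import os

def class_to_idx(root: str) -> dict:
    # ImageFolder sorts the class subdirectory names and numbers them in order,
    # so any change to the directory layout changes the label indices.
    classes = sorted(entry.name for entry in os.scandir(root) if entry.is_dir())
    return {name: idx for idx, name in enumerate(classes)}

# Under the standard 1000-WNID layout, "n01440764" maps to index 0:
# class_to_idx("/path/to/imagenet/val")["n01440764"]  # -> 0
```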
Comments

Hadisalman:
@rtaori I think I know the problem. Can you please go to https://github.com/Hadisalman/smoothing-adversarial/blob/master/code/architectures.py and switch to the input-centering layer instead of the normalization layer (the alternative line is in a comment there)? For some of the early models in the repo, I was using input centering. In short, during certification/prediction, make sure that you use the same normalization layer as the one used during training. Please let me know if this solves your issue.
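For readers hitting the same issue, a minimal sketch of the suggested change, assuming the helper names get_normalize_layer and get_input_center_layer in code/datasets.py (check that file for the exact names and behavior):

```python
import torch
from torchvision.models.resnet import resnet50

# Assumed helpers from code/datasets.py, per the discussion in this thread.
from datasets import get_normalize_layer, get_input_center_layer

def get_architecture(arch: str, dataset: str) -> torch.nn.Module:
    """Wrap the backbone with the preprocessing layer it was trained with."""
    model = resnet50()  # backbone, e.g. the ResNet-50 used for ImageNet here
    # Checkpoints trained with standard per-channel normalization:
    # preprocess = get_normalize_layer(dataset)
    # Checkpoints trained with input centering:
    preprocess = get_input_center_layer(dataset)
    return torch.nn.Sequential(preprocess, model)
```

Evaluating behind the wrong layer shifts every input the model sees, which would make its predictions systematically wrong.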
rtaori:
I see. Could you let me know which models you used which setting for? (I am looking at smoothing-adversarial/code/datasets.py, line 190 at d2a71bf.) Thanks
rtaori:
Specifically, I am looking at these ImageNet models: … If you could tell me which ones need this change and which ones don't, that would be very helpful. Thanks
Hadisalman:
All the models you mentioned use the input-centering layer.
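To make the two settings concrete, an illustrative sketch of the two kinds of preprocessing layers under discussion (the actual implementations live in code/datasets.py):

```python
import torch

class NormalizeLayer(torch.nn.Module):
    """Standardize each channel: (x - mean) / std."""
    def __init__(self, means, sds):
        super().__init__()
        self.register_buffer("means", torch.tensor(means).view(1, -1, 1, 1))
        self.register_buffer("sds", torch.tensor(sds).view(1, -1, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return (x - self.means) / self.sds

class InputCenterLayer(torch.nn.Module):
    """Center each channel: x - mean (no division by std)."""
    def __init__(self, means):
        super().__init__()
        self.register_buffer("means", torch.tensor(means).view(1, -1, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x - self.means
```

A checkpoint trained behind one of these layers will see systematically shifted inputs if evaluated behind the other, which matches the wrong-but-consistent predictions reported above.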
rtaori:
I see, great, thanks!
Hadisalman:
So when you download our trained models, you will find these folders:
imagenet32/ --> …
imagenet/ --> …
Hope this helps. I will include these details in the README as well. Thanks for catching this!
rtaori:
Thanks Hadi!