can you add the inverse/backward for batch norm layer #2

Hi, thank you for your excellent work!
I have an issue: you have implemented inverse layers for the Conv/Linear/Dropout/Pool layers, but the batch norm layer, which is also widely used in NNs, seems to be missing.
So can you add an NN example with batch norm layers?

Comments
Hi, thanks for your interest in our work. We actually use a simple strategy for batch norm layers: freeze all variables in the BN layers during both training and test. The ResNet-18 result reported in our paper is produced with this method. The core code for this part looks something like the snippet below.
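The original snippet did not survive extraction; a minimal sketch of the freezing strategy described above (put every BN module in eval mode so it uses its running statistics, and stop gradients on its affine parameters) might look like this. The helper name `freeze_bn` is illustrative, not from the repository:

```python
import torch.nn as nn

def freeze_bn(model: nn.Module) -> None:
    """Sketch: freeze all batch norm layers in `model`.

    Keeps running_mean/running_var fixed and stops gradient updates
    to the affine parameters (gamma/beta)."""
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            m.eval()  # use running statistics instead of batch statistics
            for p in m.parameters():
                p.requires_grad = False  # freeze weight (gamma) and bias (beta)
```

Note that `model.train()` switches BN modules back to batch statistics, so `freeze_bn` would need to be re-applied after every call to `train()`.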
But in many cases we need to keep batch norm active, since the methods we compare against use it. So I'm interested in an implementation like the one below. For example, I use an MLP with batch norm layers (a sketch follows this comment): how should I write the `InverseMLP` and its forward (based on your MLP model)?
Thank you!
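The asker's code was lost in extraction; a hypothetical MLP of the kind described (names and sizes are illustrative, not from the original post) could be:

```python
import torch.nn as nn

class MLP(nn.Module):
    """Hypothetical MLP with batch norm, standing in for the asker's model."""
    def __init__(self, in_dim=784, hidden_dim=256, out_dim=10):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden_dim)
        self.bn1 = nn.BatchNorm1d(hidden_dim)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(hidden_dim, out_dim)

    def forward(self, x):
        return self.fc2(self.relu(self.bn1(self.fc1(x))))

class InverseMLP(nn.Module):
    # The question: how to write this class and its forward,
    # including the inverse of the batch norm layer?
    ...
```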
Hi, sorry for the late reply. I've added an example with batch norm: deepdefense.pytorch/models/mnist.py, lines 258 to 337 in 48621f7.
You can find the download URL for the reference model of this example in the README. Thanks.
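The referenced lines are not reproduced here; as a rough sketch of what an inverse batch-norm layer can look like in this setting (my reading of the pattern, not a copy of the repository's code): with frozen statistics, BN is the per-channel affine map y = gamma * (x - mean) / sqrt(var + eps) + beta, so its backward map scales an incoming gradient by gamma / sqrt(var + eps), mirroring the InverseLinear/InverseConv pattern:

```python
import torch
import torch.nn as nn

class InverseBatchNorm1d(nn.Module):
    """Sketch of the backward map of a frozen BatchNorm1d.

    With fixed running statistics, BN is a per-channel affine map,
    so the gradient w.r.t. its input is the output gradient scaled by
    gamma / sqrt(running_var + eps). This is an assumption about the
    shape of the repo's example, not a copy of models/mnist.py."""
    def __init__(self, bn: nn.BatchNorm1d):
        super().__init__()
        self.bn = bn

    def forward(self, grad_out: torch.Tensor) -> torch.Tensor:
        scale = self.bn.weight / torch.sqrt(self.bn.running_var + self.bn.eps)
        return grad_out * scale  # broadcasts over the batch dimension
```

A full algebraic inverse (recovering x from y) would additionally undo the shift, i.e. subtract beta and add back the running mean; which variant applies depends on what the repo's inverse layers propagate.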