I will add to this issue since my question is also related to the code; I hope you don't mind, @xqyzjl! :)
I have a question regarding the global average pooling mentioned in Section 3.1, as it is not clear to me which dimensions should be pooled. I would have expected the activations A of shape (n x h x w x c) to be pooled down to (n x c), where n = samples, h = height, w = width, and c = channels.
This is the corresponding source code:
```python
# if the activations have shape (n_samples, height, width, n_channels),
# apply average pooling
if len(activations.shape) == 4:
    activations = torch.mean(activations, dim=(2, 3))
```
If I understand that correctly (and I tried it to make sure), this reduces the activations to (n x h), since width and n_channels sit at dimensions 2 and 3. Could you please help me understand where I am wrong? :)
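For reference, here is a minimal check along the lines of what I ran, assuming the activations really are laid out as (n_samples, height, width, n_channels), i.e. NHWC, as the comment in the code states:

```python
import torch

# dummy NHWC activations: (n_samples, height, width, n_channels)
activations = torch.randn(8, 7, 7, 512)

# what the quoted code does: average over dims 2 and 3 (width and channels)
pooled = torch.mean(activations, dim=(2, 3))
print(pooled.shape)  # torch.Size([8, 7])  ->  (n_samples, height)

# what I would have expected for global average pooling to (n x c):
# average over the spatial dims 1 and 2 (height and width)
gap = torch.mean(activations, dim=(1, 2))
print(gap.shape)  # torch.Size([8, 512])  ->  (n_samples, n_channels)
```

So I would have expected dim=(1, 2) here, unless the activations are actually stored in NCHW order and I am missing something.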
I've carefully read your paper and tried craft on my own data. It's wonderful work!
But I have some questions about the code:
I really want to follow your work and do some further exploration. I would appreciate it if you could provide me with the above code!