
About the code "out_interp = self.interpolation1(out_middle_2r_blocks) + out_down_residual_blocks1" #23

Open
ZachZou-logs opened this issue Aug 27, 2019 · 4 comments


@ZachZou-logs

Hello, thank you for your code!
But I have a question about it. This operation does not seem to appear anywhere in the paper, where only the skip connections in the soft mask branch involve an addition. Could you help me understand this line?
out_interp = self.interpolation1(out_middle_2r_blocks) + out_down_residual_blocks1
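For reference, here is a minimal, runnable sketch (my own simplification, not the repository's actual module) of where this addition sits in the soft mask branch. The real residual blocks are replaced by single convolutions, and the attribute names only mirror the ones in the line above:

```python
import torch
import torch.nn as nn

class SoftMaskBranchSketch(nn.Module):
    """Hypothetical, stripped-down soft mask branch illustrating the addition."""
    def __init__(self, channels=64):
        super().__init__()
        self.mpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        # Stand-ins for the down-sampling and middle residual blocks.
        self.down_residual_blocks1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.middle_2r_blocks = nn.Conv2d(channels, channels, 3, padding=1)
        self.interpolation1 = nn.Upsample(scale_factor=2, mode='bilinear',
                                          align_corners=True)

    def forward(self, x):
        out_down_residual_blocks1 = self.down_residual_blocks1(self.mpool(x))
        out_middle_2r_blocks = self.middle_2r_blocks(
            self.mpool(out_down_residual_blocks1))
        # The line in question: the up-sampled low-resolution path is added to
        # the higher-resolution features, i.e. a skip connection inside the
        # mask branch rather than an operation shown in the paper's figure.
        out_interp = self.interpolation1(out_middle_2r_blocks) \
            + out_down_residual_blocks1
        return out_interp

x = torch.randn(1, 64, 32, 32)
print(SoftMaskBranchSketch()(x).shape)  # torch.Size([1, 64, 16, 16])
```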

@tengshaofeng
Owner

I referred to the Caffe version. You can consider it a trick.

@daijiahai

Hello, I have the same question now too. Have you solved it yet? If so, could you share your experience with me?

@jorie-peng

> Hello, I have the same question now too. Have you solved it yet? If so, could you share your experience with me?

I changed to another model, and it works.

@jorie-peng

jorie-peng commented Jan 28, 2021

> I referred to the Caffe version. You can consider it a trick.

Hi, thanks for your code. When I use 'ResidualAttentionModel_92_32input_update', I hit the question described in this issue because of 'AttentionModule_stage1_cifar'. If I change the model to 'ResidualAttentionModel_92' it works, but then I cannot load the given pretrained model because of lots of mismatches. Do you have a good way to load the given model, or is there any other pretrained model I can use? Thanks!
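For anyone else hitting the mismatch: a common workaround (a sketch under assumptions, not code from this repository) is to copy over only the checkpoint tensors whose names and shapes match the target model, leaving the mismatched layers at their random initialization. The import path and checkpoint filename below are guesses; adjust them to the repository's actual files:

```python
import torch
# Assumed import path; adjust to wherever the repo defines the model.
from model.residual_attention_network import ResidualAttentionModel_92

model = ResidualAttentionModel_92()
# 'model_92.pth' is a placeholder name for the released checkpoint file.
checkpoint = torch.load('model_92.pth', map_location='cpu')
if isinstance(checkpoint, dict) and 'state_dict' in checkpoint:
    checkpoint = checkpoint['state_dict']  # unwrap a wrapped checkpoint

model_state = model.state_dict()
# Keep only tensors whose names and shapes match the target model.
matched = {k: v for k, v in checkpoint.items()
           if k in model_state and v.shape == model_state[k].shape}
model_state.update(matched)
model.load_state_dict(model_state)
print('loaded %d of %d tensors' % (len(matched), len(model_state)))
```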
