Hello. Thanks for your great work.
I have some questions about the code: what is the meaning of `with_embed`, and why is `loss_contrast` not used?
1.
```python
if with_embed is True:
    return loss + self.loss_weight * loss_contrast

return loss + 0 * loss_contrast  # just a trick to avoid errors in distributed training
```
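For context on the `0 * loss_contrast` trick: PyTorch's `DistributedDataParallel` expects every parameter that requires gradients to participate in producing the loss each iteration, otherwise its backward pass can raise an "unused parameters" error (unless `find_unused_parameters=True` is set). Multiplying the contrastive term by zero keeps the embedding head in the autograd graph while contributing nothing to the gradient values. Below is a minimal, self-contained sketch of that behavior; the module and tensor names are illustrative, not code from this repo:

```python
import torch
import torch.nn as nn

# Illustrative two-head model: a main head plus an embedding head that is
# only consumed by the contrastive loss (hypothetical names).
class TwoHeadModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(8, 8)
        self.seg_head = nn.Linear(8, 4)
        self.embed_head = nn.Linear(8, 4)

    def forward(self, x):
        feat = self.backbone(x)
        return self.seg_head(feat), self.embed_head(feat)

model = TwoHeadModel()
seg_out, embed_out = model(torch.randn(2, 8))

loss = seg_out.mean()             # stands in for the main pixel loss
loss_contrast = embed_out.mean()  # stands in for the contrastive loss

with_embed = False
if with_embed:
    total = loss + 0.1 * loss_contrast
else:
    # Multiplying by 0 keeps embed_head in the graph, so under DDP every
    # parameter still receives a (zero) gradient and no error is raised.
    total = loss + 0 * loss_contrast

total.backward()
print(model.embed_head.weight.grad)  # a zero tensor, not None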
```python
if is_distributed():
    import torch.distributed as dist

    def reduce_tensor(inp):
        """
        Reduce the loss from all processes so that
        process with rank 0 has the averaged results.
        """
        world_size = get_world_size()
        if world_size < 2:
            return inp
        with torch.no_grad():
            reduced_inp = inp
            dist.reduce(reduced_inp, dst=0)
        return reduced_inp

    loss = self.pixel_loss(outputs, targets, with_embed=with_embed)
    backward_loss = loss
    display_loss = reduce_tensor(backward_loss) / get_world_size()
else:
    backward_loss = display_loss = self.pixel_loss(outputs, targets)
```
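Regarding the reduction snippet: `dist.reduce(reduced_inp, dst=0)` sums the loss tensor from all processes onto rank 0, and dividing by `get_world_size()` turns that sum into the average used only for logging; the backward pass still runs on each rank's local `backward_loss`, since `DistributedDataParallel` averages the gradients itself. A minimal sketch of the same pattern, runnable with `torchrun`; the process-group setup and backend choice here are assumptions, not code from the repo:

```python
import torch
import torch.distributed as dist

def reduce_tensor(inp, world_size):
    """Sum `inp` across ranks onto rank 0; other ranks get their input back."""
    if world_size < 2:
        return inp
    with torch.no_grad():
        reduced = inp.clone()        # clone so the local loss tensor is untouched
        dist.reduce(reduced, dst=0)  # in-place SUM reduction onto rank 0
    return reduced

def main():
    dist.init_process_group("gloo")  # assumption: CPU backend for the demo
    rank = dist.get_rank()
    world_size = dist.get_world_size()

    # Pretend each rank computed a different local loss.
    backward_loss = torch.tensor(float(rank + 1))
    display_loss = reduce_tensor(backward_loss, world_size) / world_size

    if rank == 0:
        # With 2 ranks: (1 + 2) / 2 = 1.5 averaged loss for display.
        print(f"display_loss on rank 0: {display_loss.item():.3f}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launch with, for example, `torchrun --nproc_per_node=2 reduce_demo.py` (script name is hypothetical).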
Looking forward to your reply!