My initial heatmap loss is about 1e+5 and converges to 300+ within the first epoch. When I switch to the SGD optimizer with a higher learning rate, the loss stays constant at 9.21 from the second epoch onward. I inspected the network output and found that the raw outputs are all around -1e+12, which causes the clipped sigmoid output to be 1e-4. What could be causing this?
With the Adam optimizer and the learning rate from the paper, the loss converges to 2. However, mAP stays around 0.0001 and never improves.
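For what it's worth, the constant 9.21 is exactly -log(1e-4), which points to diverged logits rather than a loss-function bug. A minimal sketch of this (assuming a CornerNet/CenterNet-style sigmoid clipped to [1e-4, 1 - 1e-4]; the function name and eps are illustrative, not from this repo):

```python
import math

def clipped_sigmoid(x, eps=1e-4):
    # Hypothetical version of the clipped sigmoid described above:
    # sigmoid, then clamp into [eps, 1 - eps] for numerical safety.
    # Guard against math.exp overflow for very large negative logits.
    s = 1.0 / (1.0 + math.exp(-x)) if x > -700 else 0.0
    return min(max(s, eps), 1.0 - eps)

# A diverged raw output around -1e12 saturates the sigmoid at 0,
# so the clamp floors the prediction at eps = 1e-4 ...
p = clipped_sigmoid(-1e12)
print(p)                 # 0.0001

# ... and the positive-pixel term of the heatmap loss becomes a
# constant -log(1e-4) ~= 9.21, matching the observed plateau.
print(-math.log(p))      # ~9.2103
```

So under this reading, the SGD run's loss is pinned at the clipping floor: the logits blew up (likely the learning rate is too high for SGD without warmup or gradient clipping), the gradients through the saturated/clamped sigmoid vanish, and training can no longer move.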