Hi, I'm working on a paper. After reading the Python code, it looks like the gradient of the loss is calculated in:

fb = 1. / (1. + exp(-dot(l1, l2b.T)))
gb = (model.neg_labels - fb) * alpha

I want to get the loss of an input rather than its gradient. Is there any way to do that, or a function that does so?

Thanks for sharing.
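For reference, the two lines quoted above are the negative-sampling update: fb is the sigmoid of the scores for the true word plus the negative samples, and gb is (label - prediction) scaled by the learning rate. The loss that this gradient comes from is the binary cross-entropy over those same labels, so it can be recomputed from the same inputs. A minimal sketch, assuming l1 is the projection vector, l2b holds the output vectors for the true word plus the negatives, and neg_labels is [1, 0, 0, ...] as in gensim's negative-sampling code (the helper name is hypothetical; gensim exposes no such function):

```python
import numpy as np

def negative_sampling_loss(l1, l2b, neg_labels):
    """Loss whose gradient (w.r.t. the scores) is (fb - neg_labels),
    i.e. the quantity behind gb = (model.neg_labels - fb) * alpha.

    l1         : (d,) projection/hidden-layer vector
    l2b        : (k+1, d) output vectors: true word followed by k negatives
    neg_labels : (k+1,) labels, 1 for the true word, 0 for each negative

    Hypothetical helper, not part of gensim's API.
    """
    # Sigmoid scores, exactly as in the quoted code:
    fb = 1.0 / (1.0 + np.exp(-np.dot(l1, l2b.T)))
    # Binary cross-entropy over the true word and the negatives:
    return -np.sum(neg_labels * np.log(fb)
                   + (1.0 - neg_labels) * np.log(1.0 - fb))
```

Differentiating this loss with respect to the scores gives fb - neg_labels, which matches the quoted gb up to the sign convention and the alpha factor. Separately, depending on your gensim version, Word2Vec(..., compute_loss=True) together with model.get_latest_training_loss() may report a running training loss without any code changes.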