Custom huber loss in LightGBM #3532
It seems your implementation is different from LightGBM's: LightGBM/src/objective/regression_objective.hpp Lines 310 to 335 in da6c6ea
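For reference, a minimal NumPy sketch of what the referenced C++ (the built-in huber objective) appears to compute per data point; the standalone function name and signature here are illustrative, not LightGBM API:

```python
import numpy as np

def huber_grad_hess(preds, labels, alpha=0.9):
    # Gradient/hessian of the Huber objective with respect to the raw score.
    # Note the sign convention: diff = prediction - label.
    diff = preds - labels
    grad = np.where(np.abs(diff) <= alpha, diff, np.sign(diff) * alpha)
    hess = np.ones_like(diff)  # the built-in objective uses a constant hessian of 1
    return grad, hess
```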
Thanks for your reply.
In lgb
Tried this approach but still can't get the same results.
by default, the huber loss is boosted from the average label, you can set
Unfortunately it didn't help.
I mean set boost_from_average=false for the LightGBM built-in huber loss, not your custom loss.
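The point of this suggestion, sketched under assumptions: a custom objective passed through fobj starts boosting from a score of 0, while the built-in objective starts from a label-derived constant by default. The parameter names follow the LightGBM docs; the numeric illustration of the initial score is mine:

```python
import numpy as np

# Setting boost_from_average=False makes the built-in objective start
# from 0, like a custom fobj run does, so the two become comparable.
params = {
    "objective": "huber",
    "alpha": 0.9,
    "boost_from_average": False,
}

# Rough illustration of the initial-score difference:
y = np.array([1.0, 2.0, 3.0, 10.0])
init_default = y.mean()  # roughly what the default initialization is derived from
init_custom_fobj = 0.0   # custom objectives start from zero
```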
It's getting closer.
What else can I do?
I think your custom eval is different from the LightGBM built-in one. LightGBM/src/metric/regression_metric.hpp Lines 191 to 198 in e9f5169
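The per-point loss in the referenced regression_metric.hpp lines can be sketched in NumPy roughly as follows (the function name is illustrative): quadratic inside the band, linear outside.

```python
import numpy as np

def huber_metric(preds, labels, alpha=0.9):
    # Mean Huber loss, mirroring the referenced per-point computation:
    # 0.5 * diff^2 when |diff| <= alpha, else alpha * (|diff| - 0.5 * alpha).
    diff = np.abs(preds - labels)
    loss = np.where(diff <= alpha,
                    0.5 * diff ** 2,
                    alpha * (diff - 0.5 * alpha))
    return loss.mean()
```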
updated:
So what more should I do to get identical results with the custom huber loss function and eval?
Can you try to use the custom huber objective + built-in huber metric, and the built-in huber objective + custom huber metric?
Good idea!
Okay, your objective function is still different; please check the following code.
Still nothing...
did you check the type of
and you should use
@guolinke thanks for your suggestions, but I had double-checked the code before my previous comments...
Also, I've checked the difference between np.abs and Python's built-in abs function on the same array; it makes no difference for the evaluation function.
Just a few comments above we figured out that the problem is in the loss function.
@dishkakrauch
So it should be like the following (your version is missing the comparison against abs(residual)):
def huber_custom_train(preds, data):
    y_true = data.get_label()
    y_pred = preds
    residual = (y_pred - y_true).astype("float")
    alpha = .9
    # It should compare to abs(residual).
    grad = np.where(np.abs(residual) <= alpha, residual, np.sign(residual) * alpha)
    hess = np.ones_like(residual)  # constant hessian of 1, matching the built-in huber objective
    return grad, hess
@guolinke thanks for your patience. I've read your comments and the cpp code. Now it's okay, and the custom objective (train loss) with the custom eval metric (valid loss) gives the same results as the default huber objective and huber metric. default:
custom:
@guolinke thanks again!
LightGBM/src/metric/regression_metric.hpp Lines 191 to 198 in e9f5169
I want to ask one more question about the evaluation metric. Was:
Should be:
Am I right?
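If helpful, a small numeric check of how the brackets in the linear branch and the square in the quadratic branch change the value; alpha = 0.9 and the sample diffs are my own illustration:

```python
alpha = 0.9

diff = 2.0  # outside the band, |diff| > alpha
correct_linear = alpha * (abs(diff) - 0.5 * alpha)  # brackets matter
wrong_linear = alpha * abs(diff) - 0.5 * alpha      # dropping them shifts the value

diff_in = 0.5  # inside the band, |diff| <= alpha
correct_quadratic = 0.5 * diff_in ** 2  # the power function matters
wrong_quadratic = 0.5 * diff_in
```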
refer to:
I think
Looks like I was overworked and didn't take the brackets and the power function into account.
This issue has been automatically locked since there has not been any recent activity since it was closed. To start a new related discussion, open a new issue at https://github.com/microsoft/LightGBM/issues including a reference to this.
How are you using LightGBM?
LightGBM component:
Environment info
Operating System: Windows 10
CPU/GPU model: NVIDIA 1060
C++ compiler version: Visual Studio 2019
CMake version: 3.17.2
Java version:
Python version: 3.7
R version:
Other:
LightGBM version or commit hash: 3.0.0.99
Error message and / or logs
Reproducible example(s)
Steps to reproduce
I've been writing my own train and valid loss functions for my job task and, unfortunately, couldn't reproduce LightGBM's 'huber' objective and 'huber' metric with my own code.
You can see that fitting a LightGBM model with the built-in 'mse' loss function and 'mse' metric gives exactly the same results as my own code at the end of the script above (the mse_custom_train and mse_custom_eval functions are passed as the fobj and feval arguments).
I've been trying to reproduce the huber objective and the huber metric for evaluation and didn't get correct results.
The main reason I've been developing my own custom train and valid loss is to find an asymmetric loss function that gives more penalty to underpredicted values.
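One possible direction for that goal, sketched under assumptions: an asymmetric Huber-style gradient that upweights underpredictions. The function and its under_weight knob are hypothetical, not LightGBM parameters; to plug it into lgb.train's fobj you would read the labels from data.get_label() instead of passing them directly.

```python
import numpy as np

def asymmetric_huber_grad_hess(preds, labels, alpha=0.9, under_weight=2.0):
    # Huber-style gradient, with underpredicted points (pred < label)
    # scaled by under_weight so they are penalized more. under_weight
    # is an illustrative knob, not a LightGBM parameter.
    diff = preds - labels
    grad = np.where(np.abs(diff) <= alpha, diff, np.sign(diff) * alpha)
    hess = np.ones_like(diff)
    under = diff < 0  # model predicts below the true value
    grad = np.where(under, under_weight * grad, grad)
    hess = np.where(under, under_weight * hess, hess)
    return grad, hess
```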
Could you please help with this?
Thank you!