
Custom huber loss in LightGBM #3532

Closed
dishkakrauch opened this issue Nov 6, 2020 · 23 comments

@dishkakrauch

dishkakrauch commented Nov 6, 2020

How are you using LightGBM?

LightGBM component:

Environment info

Operating System: Windows 10

CPU/GPU model: NVIDIA 1060

C++ compiler version: Visual Studio 2019

CMake version: 3.17.2

Java version:

Python version: 3.7

R version:

Other:

LightGBM version or commit hash: 3.0.0.99

Error message and / or logs

Reproducible example(s)

import numpy as np
import pandas as pd
import sklearn
from sklearn import *
import lightgbm as lgbm

np.random.seed(0)

df = sklearn.datasets.make_regression(10000)

X, y = df[0], df[1]
X_train, X_valid, y_train, y_valid = sklearn.model_selection.train_test_split(X, y, test_size=.25)

X_train_lgbm = lgbm.Dataset(
    data=X_train,
    label=y_train,
    reference=None,
    weight=None,
    group=None,
    init_score=None,
    silent=False,
    feature_name='auto',
    categorical_feature='auto',
    params=None,
    free_raw_data=False,
)

X_valid_lgbm = lgbm.Dataset(
    data=X_valid,
    label=y_valid,
    reference=None,
    weight=None,
    group=None,
    init_score=None,
    silent=False,
    feature_name='auto',
    categorical_feature='auto',
    params=None,
    free_raw_data=False,
)

params = {
    'objective': 'mse',
    'metric': {''},
    'learning_rate': .1,
}

model = lgbm.train(
    params=params,
    train_set=X_train_lgbm,
    num_boost_round=1000,
    valid_sets=[X_train_lgbm, X_valid_lgbm, ],
    valid_names=None,
    fobj=None,
    feval=None,
    init_model=None,
    # feature_name='auto',
    # categorical_feature='auto',
    early_stopping_rounds=100,
    evals_result=None,
    verbose_eval=100,
    learning_rates=None,
    keep_training_booster=False,
    callbacks=None,
)

def mse_custom_train(preds, data):
    
    y_true = data.get_label()
    y_pred = preds
    residual = (y_true - y_pred).astype("float")
    
    grad = np.where(residual < 0, -1. * residual, -1. * residual)
    hess = np.where(residual < 0, 1. * 1., 1. * 1.)
    
    return grad, hess

def mse_custom_eval(preds, data):

    y_true = data.get_label()
    y_pred = preds
    residual = (y_true - y_pred).astype("float")
    loss = np.where(residual < 0, (residual**2) * 1., (residual ** 2) * 1.) 
    
    return "mse_custom", np.mean(loss), False

params = {
    'objective': 'mse',
    'metric': {''},
    'learning_rate': .1,
}

model = lgbm.train(
    params=params,
    train_set=X_train_lgbm,
    num_boost_round=1000,
    valid_sets=[X_train_lgbm, X_valid_lgbm, ],
    valid_names=None,
    fobj=mse_custom_train,
    feval=mse_custom_eval,
    init_model=None,
    # feature_name='auto',
    # categorical_feature='auto',
    early_stopping_rounds=100,
    evals_result=None,
    verbose_eval=100,
    learning_rates=None,
    keep_training_booster=False,
    callbacks=None,
)

LightGBM mse
LightGBM custom mse

Steps to reproduce

  1. Install any LightGBM version in your environment
  2. Run code above
  3. Done

I've been working on my own train and valid loss functions for my job task and unfortunately couldn't reproduce LightGBM's 'huber' objective and 'huber' metric with my own code.

You can see that fitting a LightGBM model with the built-in 'mse' loss function and 'mse' metric gives exactly the same results as my own code at the end of the script above (the mse_custom_train and mse_custom_eval functions are passed as the fobj and feval arguments).

I've been trying to reproduce the huber objective and huber metric for evaluation and didn't get correct results.

params = {
    'objective': 'huber',
    'metric': {''},
    'learning_rate': .1,
    'alpha': .9,
}

model = lgbm.train(
    params=params,
    train_set=X_train_lgbm,
    num_boost_round=1000,
    valid_sets=[X_train_lgbm, X_valid_lgbm, ],
    valid_names=None,
    fobj=None,
    feval=None,
    init_model=None,
    # feature_name='auto',
    # categorical_feature='auto',
    early_stopping_rounds=100,
    evals_result=None,
    verbose_eval=100,
    learning_rates=None,
    keep_training_booster=False,
    callbacks=None,
)

def huber_custom_train(preds, data):
    
    y_true = data.get_label()
    y_pred = preds
    residual = (y_true - y_pred).astype("float")
    
    h = .9
    scale = 1 + (residual / h) ** 2
    scale_sqrt = np.sqrt(scale)
    
    grad = residual / scale_sqrt
    hess = 1 / scale / scale_sqrt
    
    return grad, hess

def huber_custom_eval(preds, data):

    y_true = data.get_label()
    y_pred = preds
    residual = (y_true - y_pred).astype("float")
    h = .9
    loss = np.where(np.abs(residual) < h , .5 * ((residual) ** 2), h * np.abs(residual) - .5 * (h ** 2))
    
    return "huber_custom", np.sum(loss), False

params = {
    'objective': 'mse',
    'metric': {''},
    'learning_rate': .1,
    'alpha': .9,
}

model = lgbm.train(
    params=params,
    train_set=X_train_lgbm,
    num_boost_round=1000,
    valid_sets=[X_train_lgbm, X_valid_lgbm, ],
    valid_names=None,
    fobj=huber_custom_train,
    feval=huber_custom_eval,
    init_model=None,
    # feature_name='auto',
    # categorical_feature='auto',
    early_stopping_rounds=100,
    evals_result=None,
    verbose_eval=100,
    learning_rates=None,
    keep_training_booster=False,
    callbacks=None,
)

LightGBM huber
LightGBM custom huber

The main reason I've been developing my own custom train and valid loss is to build an asymmetric loss function that gives more penalty to underpredicted values.
Could you please help with this?

Thank you!
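For context, the kind of asymmetric variant I'm aiming for would look roughly like the sketch below. The w_under and w_over weights are placeholder values of my own, not LightGBM parameters, so treat this only as an illustration of the idea.

import numpy as np

def asymmetric_huber_train(preds, data):

    # Sketch of an asymmetric huber objective (not a LightGBM built-in).
    # residual = preds - labels, so residual < 0 means the model under-predicted.
    y_true = data.get_label()
    residual = (preds - y_true).astype("float")

    alpha = .9
    w_under, w_over = 2., 1.  # placeholder weights: penalize under-prediction more
    w = np.where(residual < 0, w_under, w_over)

    grad = w * np.where(np.abs(residual) <= alpha, residual, np.sign(residual) * alpha)
    hess = w * np.ones_like(residual)

    return grad, hess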

@guolinke
Collaborator

guolinke commented Nov 7, 2020

It seems your implementation is different from LightGBM's:

void GetGradients(const double* score, score_t* gradients,
                  score_t* hessians) const override {
  if (weights_ == nullptr) {
#pragma omp parallel for schedule(static)
    for (data_size_t i = 0; i < num_data_; ++i) {
      const double diff = score[i] - label_[i];
      if (std::abs(diff) <= alpha_) {
        gradients[i] = static_cast<score_t>(diff);
      } else {
        gradients[i] = static_cast<score_t>(Common::Sign(diff) * alpha_);
      }
      hessians[i] = 1.0f;
    }
  } else {
#pragma omp parallel for schedule(static)
    for (data_size_t i = 0; i < num_data_; ++i) {
      const double diff = score[i] - label_[i];
      if (std::abs(diff) <= alpha_) {
        gradients[i] = static_cast<score_t>(diff * weights_[i]);
      } else {
        gradients[i] = static_cast<score_t>(Common::Sign(diff) * weights_[i] * alpha_);
      }
      hessians[i] = static_cast<score_t>(weights_[i]);
    }
  }
}
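Translated into the Python fobj convention, the unweighted branch of that loop is roughly the sketch below (the name huber_objective_like_cpp is just for illustration): diff = preds - labels, the gradient is clipped at +/- alpha, and the hessian is a constant 1.

import numpy as np

def huber_objective_like_cpp(preds, data):

    # Rough Python equivalent of the unweighted branch of GetGradients above.
    labels = data.get_label()
    diff = (preds - labels).astype("float")
    alpha = .9  # same role as the built-in huber 'alpha' parameter

    grad = np.where(np.abs(diff) <= alpha, diff, np.sign(diff) * alpha)
    hess = np.ones_like(diff)

    return grad, hess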

@dishkakrauch
Author

It seems your implementation is different from LightGBM's: (see the GetGradients code above)

Thanks for your reply.
I've changed my code according to the LightGBM C++ source.
It's getting closer to the default 'huber' objective, but where am I still going wrong?

def huber_custom_train(preds, data):
    
    y_true = data.get_label()
    y_pred = preds
    residual = (y_true - y_pred).astype("float")
    
    alpha = .9
    
    grad = np.where(residual <= alpha, residual, residual * alpha)
    hess = np.where(residual < 0, 1. * 1., 1. * 1.)
    
    return grad, hess

def huber_custom_eval(preds, data):

    y_true = data.get_label()
    y_pred = preds
    residual = (y_true - y_pred).astype("float")
    alpha = .9
    loss = np.where(np.abs(residual) <= alpha , .5 * ((residual) ** 2), alpha * np.abs(residual) - .5 * (alpha ** 2))
    
    return "huber_custom", np.mean(loss), False

params = {
    'objective': 'huber',
    'metric': {''},
    'learning_rate': .1,
    'alpha': .9,
}

model = lgbm.train(
    params=params,
    train_set=X_train_lgbm,
    num_boost_round=1000,
    valid_sets=[X_train_lgbm, X_valid_lgbm, ],
    valid_names=None,
    fobj=huber_custom_train,
    feval=huber_custom_eval,
    init_model=None,
    # feature_name='auto',
    # categorical_feature='auto',
    early_stopping_rounds=100,
    evals_result=None,
    verbose_eval=100,
    learning_rates=None,
    keep_training_booster=False,
    callbacks=None,
)

Lightgbm huber custom +

@guolinke
Collaborator

guolinke commented Nov 8, 2020

In LightGBM, diff = score[i] - label_[i]; but you seem to use y_true - y_pred.

@dishkakrauch
Author

In LightGBM, diff = score[i] - label_[i]; but you seem to use y_true - y_pred.

Tried this approach, but I still can't get the same results.

def huber_custom_train(preds, data):
    
    y_true = data.get_label()
    y_pred = preds
    residual = (y_pred - y_true).astype("float")
    
    alpha = .9
    
    grad = np.where(residual <= alpha, residual, residual * alpha)
    hess = np.where(residual < 0, 1. * 1., 1. * 1.)
    
    return grad, hess

def huber_custom_eval(preds, data):

    y_true = data.get_label()
    y_pred = preds
    residual = (y_pred - y_true).astype("float")
    alpha = .9
    loss = np.where(np.abs(residual) <= alpha , .5 * ((residual) ** 2), alpha * np.abs(residual) - .5 * (alpha ** 2))
    
    return "huber_custom", np.mean(loss), False

params = {
    'objective': 'huber',
    'metric': {''},
    'learning_rate': .1,
    'alpha': .9,
}

model = lgbm.train(
    params=params,
    train_set=X_train_lgbm,
    num_boost_round=1000,
    valid_sets=[X_train_lgbm, X_valid_lgbm, ],
    valid_names=None,
    fobj=huber_custom_train,
    feval=huber_custom_eval,
    init_model=None,
    # feature_name='auto',
    # categorical_feature='auto',
    early_stopping_rounds=100,
    evals_result=None,
    verbose_eval=100,
    learning_rates=None,
    keep_training_booster=False,
    callbacks=None,
)

Lightgbm huber custom +

@guolinke
Collaborator

guolinke commented Nov 9, 2020

By default, the huber loss is boosted from the average label; you can set boost_from_average=false for the LightGBM built-in huber loss.
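If you wanted the custom-objective run to start from a similar point instead, one rough sketch (assuming the label mean is an acceptable stand-in for what boost_from_average computes) would be to pass an explicit init_score when building the Dataset:

import numpy as np

# Sketch only: give the custom-objective run a non-zero starting score,
# similar in spirit to boost_from_average (assumption: label mean used here).
init = np.full(len(y_train), np.mean(y_train))

X_train_lgbm = lgbm.Dataset(
    data=X_train,
    label=y_train,
    init_score=init,      # starting scores instead of the default 0
    free_raw_data=False,
)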

@dishkakrauch
Author

Unfortunately it didn't help.

@guolinke
Collaborator

guolinke commented Nov 9, 2020

I mean, set boost_from_average=false for the LightGBM built-in huber loss, not for your custom loss.

@dishkakrauch
Author

It's getting closer.

params = {
    'objective': 'huber',
    'metric': {''},
    'learning_rate': .1,
    'alpha': .9,
    'boost_from_average': False,
}

model = lgbm.train(
    params=params,
    train_set=X_train_lgbm,
    num_boost_round=100000,
    valid_sets=[X_train_lgbm, X_valid_lgbm, ],
    valid_names=None,
    fobj=None,
    feval=None,
    init_model=None,
    # feature_name='auto',
    # categorical_feature='auto',
    early_stopping_rounds=100,
    evals_result=None,
    verbose_eval=100,
    learning_rates=None,
    keep_training_booster=False,
    callbacks=None,
)

Lightgbm huber custom ++

What else can I do?

@guolinke
Collaborator

guolinke commented Nov 11, 2020

I think your custom eval is different from the LightGBM built-in one.

inline static double LossOnPoint(label_t label, double score, const Config& config) {
  const double diff = score - label;
  if (std::abs(diff) <= config.alpha) {
    return 0.5f * diff * diff;
  } else {
    return config.alpha * (std::abs(diff) - 0.5f * config.alpha);
  }
}

Updated: sorry, I misread; it is identical.

@dishkakrauch
Author

So what more should I do to get identical results with the custom huber loss function and eval?

@guolinke
Collaborator

Can you try using the custom huber objective + built-in huber metric, and the built-in huber objective + custom huber metric?
This can help debug which part has the problem.

@dishkakrauch
Author

dishkakrauch commented Nov 11, 2020

Good idea!
I found out the problem is in the custom loss, because the custom eval function gives the same results as the default huber metric.

def huber_custom_train(preds, data):
    
    y_true = data.get_label()
    y_pred = preds
    residual = (y_pred - y_true).astype("float")
    
    alpha = .9
    
    grad = np.where(residual <= alpha, residual, residual * alpha)
    hess = np.where(residual < 0, 1. * 1., 1. * 1.)
    
    return grad, hess

def huber_custom_eval(preds, data):

    y_true = data.get_label()
    y_pred = preds
    residual = (y_pred - y_true).astype("float")
    alpha = .9
    loss = np.where(np.abs(residual) <= alpha , .5 * ((residual) ** 2), alpha * np.abs(residual) - .5 * (alpha ** 2))
    
    return "huber_custom", np.mean(loss), False

params = {
    'objective': 'huber',
    'metric': {'huber'},
    'learning_rate': .1,
    'alpha': .9,
    'boost_from_average': False,
    'num_threads': 35,
}

model = lgbm.train(
    params=params,
    train_set=X_train_lgbm,
    num_boost_round=100000,
    valid_sets=[X_train_lgbm, X_valid_lgbm, ],
    valid_names=None,
    fobj=huber_custom_train,
    # feval=huber_custom_eval,
    init_model=None,
    # feature_name='auto',
    # categorical_feature='auto',
    early_stopping_rounds=100,
    evals_result=None,
    verbose_eval=100,
    learning_rates=None,
    keep_training_booster=False,
    callbacks=None,
)

LightGBM huber custom objective and default eval

def huber_custom_train(preds, data):
    
    y_true = data.get_label()
    y_pred = preds
    residual = (y_pred - y_true).astype("float")
    
    alpha = .9
    
    grad = np.where(residual <= alpha, residual, residual * alpha)
    hess = np.where(residual < 0, 1. * 1., 1. * 1.)
    
    return grad, hess

def huber_custom_eval(preds, data):

    y_true = data.get_label()
    y_pred = preds
    residual = (y_pred - y_true).astype("float")
    alpha = .9
    loss = np.where(np.abs(residual) <= alpha , .5 * ((residual) ** 2), alpha * np.abs(residual) - .5 * (alpha ** 2))
    
    return "huber_custom", np.mean(loss), False

params = {
    'objective': 'huber',
    'metric': {''},
    'learning_rate': .1,
    'alpha': .9,
    'boost_from_average': False,
    'num_threads': 35,
}

model = lgbm.train(
    params=params,
    train_set=X_train_lgbm,
    num_boost_round=100000,
    valid_sets=[X_train_lgbm, X_valid_lgbm, ],
    valid_names=None,
    # fobj=huber_custom_train,
    feval=huber_custom_eval,
    init_model=None,
    # feature_name='auto',
    # categorical_feature='auto',
    early_stopping_rounds=100,
    evals_result=None,
    verbose_eval=100,
    learning_rates=None,
    keep_training_booster=False,
    callbacks=None,
)

LightGBM huber default objective and custom eval

@guolinke
Collaborator

guolinke commented Nov 11, 2020

Okay, your objective function is still different; please check the following code.

       if (std::abs(diff) <= alpha_) { 
         gradients[i] = static_cast<score_t>(diff); 
       } else { 
         gradients[i] = static_cast<score_t>(Common::Sign(diff) * alpha_); 
       } 

grad = np.where(residual <= alpha, residual, residual * alpha) is wrong; it should be grad = np.where(residual <= alpha, residual, np.sign(residual) * alpha)

@dishkakrauch
Author

Still nothing...

params = {
    'objective': 'huber',
    'metric': {''},
    'learning_rate': .1,
    'alpha': .9,
    'boost_from_average': False,
    'num_threads': num_threads,
}

model = lgbm.train(
    params=params,
    train_set=X_train_lgbm,
    num_boost_round=100000,
    valid_sets=[X_train_lgbm, X_valid_lgbm, ],
    valid_names=None,
    fobj=None,
    feval=None,
    init_model=None,
    # feature_name='auto',
    # categorical_feature='auto',
    early_stopping_rounds=100,
    evals_result=None,
    verbose_eval=100,
    learning_rates=None,
    keep_training_booster=False,
    callbacks=None,
)

LightGBM huber default loss and default eval

def huber_custom_train(preds, data):
    
    y_true = data.get_label()
    y_pred = preds
    residual = (y_pred - y_true).astype("float")
    
    alpha = .9
    
    grad = np.where(residual <= alpha, residual, np.sign(residual) * alpha)
    hess = np.where(residual < 0, 1. * 1., 1. * 1.)
    
    return grad, hess

def huber_custom_eval(preds, data):

    y_true = data.get_label()
    y_pred = preds
    residual = (y_pred - y_true).astype("float")
    alpha = .9
    loss = np.where(np.abs(residual) <= alpha , .5 * ((residual) ** 2), alpha * np.abs(residual) - .5 * (alpha ** 2))
    
    return "huber_custom", np.mean(loss), False

params = {
    'objective': 'huber',
    'metric': {''},
    'learning_rate': .1,
    'alpha': .9,
    'boost_from_average': False,
    'num_threads': num_threads,
}

model = lgbm.train(
    params=params,
    train_set=X_train_lgbm,
    num_boost_round=100000,
    valid_sets=[X_train_lgbm, X_valid_lgbm, ],
    valid_names=None,
    fobj=huber_custom_train,
    feval=huber_custom_eval,
    init_model=None,
    # feature_name='auto',
    # categorical_feature='auto',
    early_stopping_rounds=100,
    evals_result=None,
    verbose_eval=100,
    learning_rates=None,
    keep_training_booster=False,
    callbacks=None,
)

LightGBM huber custom loss and custom eval

@guolinke
Collaborator

Did you check the type of np.sign(residual)? Maybe you need to convert it to float.
Also, you can check the grad/hess manually for debugging, instead of running it end-to-end.
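One quick way to do that kind of manual check is the sketch below; FakeData is just a hypothetical stand-in for a Dataset so the fobj can be called outside of training, and the custom gradient is compared against a finite difference of the huber loss on a few hand-picked residuals:

import numpy as np

class FakeData:
    # Minimal stand-in for lgbm.Dataset, only what the fobj needs.
    def __init__(self, label):
        self._label = np.asarray(label, dtype=float)
    def get_label(self):
        return self._label

alpha = .9
y_true = np.zeros(5)
preds = np.array([-2., -.5, 0., .5, 2.])      # residual = preds - y_true

grad, hess = huber_custom_train(preds, FakeData(y_true))

def huber_loss(p):
    d = p - y_true
    return np.where(np.abs(d) <= alpha, .5 * d ** 2, alpha * (np.abs(d) - .5 * alpha))

eps = 1e-6
num_grad = (huber_loss(preds + eps) - huber_loss(preds - eps)) / (2 * eps)

print(np.round(grad, 4))       # gradient from the custom objective
print(np.round(num_grad, 4))   # numerical gradient; should roughly match if the objective is right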

@guolinke
Collaborator

guolinke commented Nov 12, 2020

And you should use abs(residual) for the comparison; refer to std::abs(diff) <= alpha_.
You should be more careful with your code...

@dishkakrauch
Author

@guolinke thanks for your suggestions, but I double-checked the code before my previous comments...
There is a conversion to float before taking the sign of the residual array - that part is okay.

def huber_custom_train(preds, data):
    
    y_true = data.get_label()
    y_pred = preds
    residual = (y_pred - y_true).astype("float")
    
    alpha = .9

    grad = np.where(residual <= alpha, residual, np.sign(residual) * alpha)
    hess = np.where(residual < 0, 1. * 1., 1. * 1.)
    
    return grad, hess

Also, I've checked the difference between np.abs and the default Python abs function on the same array - it does not make any difference for the evaluation function.

def huber_custom_eval(preds, data):

    y_true = data.get_label()
    y_pred = preds
    residual = (y_pred - y_true).astype("float")
    alpha = .9
    loss = np.where(np.abs(residual) <= alpha , .5 * ((residual) ** 2), alpha * np.abs(residual) - .5 * (alpha ** 2))
    
    return "huber_custom", np.mean(loss), False

Just a few comments above we figured out that the problem is in the loss function.
Sorry, I do not have C++ experience, and that's why I'm here asking for your help...

@guolinke
Collaborator

@dishkakrauch
I was talking about the objective, not the eval.
Refer to the C++ code:

       if (std::abs(diff) <= alpha_) { 
         gradients[i] = static_cast<score_t>(diff); 
       } else { 
         gradients[i] = static_cast<score_t>(Common::Sign(diff) * alpha_); 
       } 
       hessians[i] = 1.0f; 

So it should be like the following; you are missing the abs() comparison:

def huber_custom_train(preds, data):
    
    y_true = data.get_label()
    y_pred = preds
    residual = (y_pred - y_true).astype("float")
    
    alpha = .9
    # It should compare to abs(residual).
    grad = np.where(np.abs(residual) <= alpha, residual, np.sign(residual) * alpha)
    hess = np.where(residual < 0, 1. * 1., 1. * 1.)
    
    return grad, hess

@dishkakrauch
Author

@guolinke thanks for your patience. I've read your comments and the C++ code. Now it's okay, and the custom objective (train loss) with the custom eval metric (valid loss) gives the same results as the default huber objective and huber metric.
I'm going to leave the code here for anybody who faces this problem too.

default:

params = {
    'objective': 'huber',
    'metric': {''},
    'learning_rate': .1,
    'alpha': .9,
    'boost_from_average': False,
    'num_threads': 35,
}

model = lgbm.train(
    params=params,
    train_set=X_train_lgbm,
    num_boost_round=100000,
    valid_sets=[X_train_lgbm, X_valid_lgbm, ],
    valid_names=None,
    fobj=None,
    feval=None,
    init_model=None,
    # feature_name='auto',
    # categorical_feature='auto',
    early_stopping_rounds=100,
    evals_result=None,
    verbose_eval=100,
    learning_rates=None,
    keep_training_booster=False,
    callbacks=None,
)

custom:

def huber_custom_train(preds, data):

    y_true = data.get_label()
    y_pred = preds
    residual = (y_pred - y_true).astype("float")

    alpha = .9

    grad = np.where(np.abs(residual) <= alpha, residual, np.sign(residual) * alpha)
    hess = np.where(residual < 0, 1. * 1., 1. * 1.)

    return grad, hess

def huber_custom_eval(preds, data):

    y_true = data.get_label()
    y_pred = preds
    residual = (y_pred - y_true).astype("float")
    alpha = .9
    loss = np.where(np.abs(residual) <= alpha , .5 * ((residual) ** 2), alpha * np.abs(residual) - .5 * (alpha ** 2))
    
    return "huber_custom", np.mean(loss), False

params = {
    'objective': 'huber',
    'metric': {''},
    'learning_rate': .1,
    'alpha': .9,
    'boost_from_average': False,
    'num_threads': 35,
}

model = lgbm.train(
    params=params,
    train_set=X_train_lgbm,
    num_boost_round=100000,
    valid_sets=[X_train_lgbm, X_valid_lgbm, ],
    valid_names=None,
    fobj=huber_custom_train,
    feval=huber_custom_eval,
    init_model=None,
    # feature_name='auto',
    # categorical_feature='auto',
    early_stopping_rounds=100,
    evals_result=None,
    verbose_eval=100,
    learning_rates=None,
    keep_training_booster=False,
    callbacks=None,
)

@guolinke thanks again!

@dishkakrauch
Author

dishkakrauch commented Nov 12, 2020

inline static double LossOnPoint(label_t label, double score, const Config& config) {
  const double diff = score - label;
  if (std::abs(diff) <= config.alpha) {
    return 0.5f * diff * diff;
  } else {
    return config.alpha * (std::abs(diff) - 0.5f * config.alpha);
  }
}

I want to ask one more question about the evaluation metric.
It looks like it should be different, and I misread the C++ code before.

Was:

def huber_custom_eval(preds, data):

    y_true = data.get_label()
    y_pred = preds
    residual = (y_pred - y_true).astype("float")
    alpha = .9
    loss = np.where(np.abs(residual) <= alpha , .5 * ((residual) ** 2), alpha * np.abs(residual) - .5 * (alpha ** 2))
    
    return "huber_custom", np.mean(loss), False

Should be:

def huber_custom_eval(preds, data):

    y_true = data.get_label()
    y_pred = preds
    residual = (y_pred - y_true).astype("float")
    alpha = .9
    loss = np.where(np.abs(residual) <= alpha , .5 * ((residual) ** 2), alpha * (np.abs(residual) - .5 * alpha))
    
    return "huber_custom", np.mean(loss), False

Am I right?

@guolinke
Collaborator

guolinke commented Nov 12, 2020

refer to:

   if (std::abs(diff) <= config.alpha) { 
     return 0.5f * diff * diff; 
   } else { 
     return config.alpha * (std::abs(diff) - 0.5f * config.alpha); 
   } 

I think alpha * (np.abs(residual) - .5 * alpha) is equal to alpha * np.abs(residual) - .5 * (alpha ** 2).
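A quick numeric check of that identity:

import numpy as np

alpha = .9
residual = np.array([-3., -1., 1., 3.])

lhs = alpha * (np.abs(residual) - .5 * alpha)
rhs = alpha * np.abs(residual) - .5 * (alpha ** 2)
print(np.allclose(lhs, rhs))   # True, the two forms are the same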

@dishkakrauch
Author

Looks like I was overworked and didn't take the brackets and the power into account.
So the Python code exactly follows the C++ code. Thanks again!

@github-actions

This issue has been automatically locked since there has not been any recent activity since it was closed. To start a new related discussion, open a new issue at https://github.com/microsoft/LightGBM/issues including a reference to this.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Aug 23, 2023