
update code to support sklearn v1.2 #50

Merged
merged 9 commits into dev, Feb 27, 2023

Conversation

@OkonSamuel (Member) commented Feb 20, 2023

At the time of opening this PR, the latest version of Python's scikit-learn is v1.2.1. In line with this new version, the following changes have been made (an illustrative usage sketch follows at the end of this description):

  1. The normalize parameter has been removed from the following linear models:
  • ARDRegressor
  • BayesianRidgeRegressor
  • ElasticNetRegressor
  • ElasticNetCVRegressor
  • LassoRegressor
  • LassoCVRegressor
  • LinearRegressor
  • MultiTaskElasticNetRegressor
  • MultiTaskElasticNetCVRegressor
  • MultiTaskLassoRegressor
  • MultiTaskLassoCVRegressor
  • RidgeCVClassifier
  • RidgeClassifier
  2. The normalize parameter of the following linear models has been set to false; it will be removed when Python's scikit-learn v1.4 is released:
  • OrthogonalMatchingPursuitRegressor
  • OrthogonalMatchingPursuitCVRegressor
  • LarsRegressor
  • LarsCVRegressor
  • LassoLarsCVRegressor
  • LassoLarsICRegressor
  3. The base_estimator parameter in the AdaBoostRegressor, AdaBoostClassifier, BaggingRegressor, BaggingClassifier and RANSACRegressor models has been renamed to estimator.

  4. The absolute_loss and squared_loss losses have been replaced by the equivalent absolute_error and squared_error, respectively.

  5. Deprecate the auto option for max_features in the RandomForestRegressor, RandomForestClassifier, ExtraTreesRegressor and ExtraTreesClassifier models. The new default for max_features is sqrt.

  6. Replace the ls loss with the equivalent squared_error. Set the default value of the loss parameter in the GradientBoostingRegressor model to squared_error.

  7. The default value for criterion in the RandomForestRegressor and ExtraTreesRegressor models has been set to squared_error. New criteria such as friedman_mse and poisson have been added to the list of possible options for criterion.

  8. log_loss has been added to the list of possible options for the criterion parameter in the RandomForestClassifier and ExtraTreesClassifier models.

  9. The lad loss in the GradientBoostingRegressor model has been replaced by the equivalent absolute_error loss. Additionally:
  • The mae loss was removed from the possible options for the criterion parameter,
  • mse was replaced with the equivalent squared_error, and
  • friedman_mse was added to the possible options for the criterion parameter.
  10. Add the "log_loss" loss as one of the possible options for the loss parameter in the ProbabilisticSGDClassifier and SGDClassifier models.
  11. Change the default value of the loss parameter in the ProbabilisticSGDClassifier model from "log" to "log_loss".
  12. Change the default value of the algorithm parameter in the KMeans model from "auto" to "lloyd".

see https://scikit-learn.org/stable/whats_new/v1.0.html#changes-1-0 and
https://scikit-learn.org/stable/whats_new/v1.0.html
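
For illustration only (not part of this PR's diff): a minimal MLJ-side sketch of what a few of these renamings look like when constructing the wrapped models. It assumes the standard MLJ @load / keyword-constructor workflow with MLJScikitLearnInterface installed; the hyperparameter names and string values simply mirror the list above, and the binding names (GBR, RFR, PSGD, KM) are arbitrary.

```julia
using MLJ  # MLJScikitLearnInterface must be in the active environment

# GradientBoostingRegressor: "ls" is gone; the default loss is now "squared_error"
GBR = @load GradientBoostingRegressor pkg=MLJScikitLearnInterface verbosity=0
gbr = GBR(loss="squared_error")

# RandomForestRegressor: "auto" is deprecated for max_features (default now "sqrt"
# per the list above), and criterion uses the *_error spellings
RFR = @load RandomForestRegressor pkg=MLJScikitLearnInterface verbosity=0
rfr = RFR(max_features="sqrt", criterion="squared_error")

# ProbabilisticSGDClassifier: the default loss is now "log_loss" (was "log")
PSGD = @load ProbabilisticSGDClassifier pkg=MLJScikitLearnInterface verbosity=0
clf = PSGD(loss="log_loss")

# KMeans: the default algorithm is now "lloyd" (was "auto")
KM = @load KMeans pkg=MLJScikitLearnInterface verbosity=0
km = KM(algorithm="lloyd")
```

Constructing these models with the old spellings ("ls", "auto", "log", base_estimator=..., normalize=...) would, after this PR, generally either be rejected or produce the scikit-learn v1.2 deprecation warnings discussed later in this thread.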

@ablaom (Member) commented Feb 20, 2023

@tylerjthomas9 I am pinging you here to make sure we avoid any duplication of effort on this front, as I know you are also working on a fork of this repo.

@ablaom (Member) left a comment


Looks good to me. Let's be sure to:

  • Check CI log for any persistent warnings

once that is passing.

@OkonSamuel mentioned this pull request Feb 20, 2023
@OkonSamuel closed this Feb 20, 2023
@OkonSamuel reopened this Feb 20, 2023
@OkonSamuel (Member, Author) commented Feb 21, 2023

I have opened a PR at cstjean/ScikitLearn.jl#119. This PR should pass once that PR is merged.

@ablaom closed this Feb 24, 2023
@ablaom reopened this Feb 24, 2023
@codecov-commenter commented Feb 27, 2023

Codecov Report

Merging #50 (08fbab6) into dev (6dd8696) will not change coverage.
The diff coverage is n/a.


@@           Coverage Diff           @@
##              dev      #50   +/-   ##
=======================================
  Coverage   73.30%   73.30%           
=======================================
  Files          12       12           
  Lines         206      206           
=======================================
  Hits          151      151           
  Misses         55       55           
Impacted Files                           Coverage Δ
src/models/clustering.jl                 80.00% <ø> (ø)
src/models/ensemble.jl                   83.33% <ø> (ø)
src/models/linear-classifiers.jl         100.00% <ø> (ø)
src/models/linear-regressors-multi.jl    66.66% <ø> (ø)
src/models/linear-regressors.jl          82.60% <ø> (ø)


@OkonSamuel (Member, Author) replied:

> Looks good to me. Let's be sure to:
>
>   • Check CI log for any persistent warnings
>
> once that is passing.

Some warnings in the CI can't be fixed at the moment unless I remove the normalize argument completely. But Python's scikit-learn won't remove this argument until v1.4 comes out.
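
To make this concrete, here is a hedged sketch (using the same @load workflow as above, with LarsRegressor taken from the list in the PR description) of where the remaining warning comes from; the exact warning text and default values are assumptions based on that description.

```julia
using MLJ

# LarsRegressor is one of the models whose normalize parameter is kept (fixed to
# false) until scikit-learn v1.4 removes it upstream, per the PR description.
Lars = @load LarsRegressor pkg=MLJScikitLearnInterface verbosity=0
model = Lars()                    # normalize defaults to false

# Because normalize is still passed through to scikit-learn, fitting emits the
# upstream FutureWarning seen in the CI log; it disappears only once the
# parameter is dropped entirely (scikit-learn v1.4).
X, y = make_regression(20, 3)     # small synthetic dataset from MLJ
mach = machine(model, X, y)
fit!(mach, verbosity=0)
```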

@ablaom (Member) commented Feb 27, 2023

The CI log is showing some deprecation warnings. @OkonSamuel, can you please comment on these?

@ablaom (Member) commented Feb 27, 2023

Okay, as per the Zoom call, we'll not address those deprecation warnings yet.
