iRprop- local optimiser. #1459
Conversation
Codecov Report
@@            Coverage Diff            @@
##            master     #1459   +/-   ##
=========================================
  Coverage   100.00%   100.00%
=========================================
  Files           95        96     +1
  Lines         9300      9370    +70
=========================================
+ Hits          9300      9370    +70
Looks very good to me, @MichaelClerx
I agree with not exposing the hyperparameters for now, given that, according to the literature, the algorithm appears to be quite robust to changes in them.
I just have minor comments. Happy for this to be merged.
Thanks @DavAug!
See #1105
Local gradient-using optimiser with an adaptive step size.
Seems to work pretty well.
https://doi.org/10.1016/S0925-2312(01)00700-7
The easiest-to-read pseudocode for this algorithm is in a figure in the paper above.
Note that the < 0 / > 0 is deliberate: the step size is not updated for = 0.
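For context, here is a minimal NumPy sketch of the iRprop- update rule described above; the function name, signature and hyperparameter defaults are illustrative only, not the interface or settings added in this PR:

```python
import numpy as np

def irprop_minus(gradient, x0, eta_plus=1.2, eta_minus=0.5,
                 step0=0.1, step_min=1e-6, step_max=1.0, iterations=200):
    # gradient(x) must return df/dx at x. All names and defaults here are
    # illustrative assumptions, not the API introduced by this PR.
    x = np.asarray(x0, dtype=float).copy()
    step = np.full_like(x, step0)
    g_prev = np.zeros_like(x)
    for _ in range(iterations):
        g = np.asarray(gradient(x), dtype=float)
        prod = g_prev * g
        grow = prod > 0
        shrink = prod < 0
        # Same gradient sign as last iteration: grow the step size (capped).
        step[grow] = np.minimum(step[grow] * eta_plus, step_max)
        # Sign flipped: shrink the step size and zero that gradient entry,
        # so the next product is 0 and the step size is then left unchanged
        # (hence the strict < 0 / > 0 comparisons noted above).
        step[shrink] = np.maximum(step[shrink] * eta_minus, step_min)
        g[shrink] = 0
        # Move each parameter by its own step, against the gradient sign.
        x -= np.sign(g) * step
        g_prev = g
    return x

# Example: minimise f(x) = sum(x**2), whose gradient is 2*x.
x_best = irprop_minus(lambda x: 2 * x, [3.0, -4.0])
```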