The custom PolyLRScheduler implementation is missing the get_last_lr() function, which is essential for retrieving the last computed learning rate for each parameter group. This function is present in PyTorch's built-in schedulers and is useful for logging, debugging, and maintaining API consistency. Adding it would improve usability and integration with standard PyTorch workflows.
One practical example is integrating nnUNet training into the MONAI Bundle standard (i.e. using PolyLRScheduler in the LearningRateSchedulerHandler): currently, in order to do so, PolyLRScheduler needs to be modified as follows:
```python
from torch.optim.lr_scheduler import _LRScheduler


class PolyLRScheduler(_LRScheduler):
    def __init__(self, optimizer, initial_lr: float, max_steps: int, exponent: float = 0.9, current_step: int = None):
        self.optimizer = optimizer
        self.initial_lr = initial_lr
        self.max_steps = max_steps
        self.exponent = exponent
        self.ctr = 0
        super().__init__(optimizer, current_step if current_step is not None else -1, False)

    def step(self, current_step=None):
        if current_step is None or current_step == -1:
            current_step = self.ctr
            self.ctr += 1

        new_lr = self.initial_lr * (1 - current_step / self.max_steps) ** self.exponent
        for param_group in self.optimizer.param_groups:
            param_group['lr'] = new_lr
        self._last_lr = [group['lr'] for group in self.optimizer.param_groups]  # 🚀 Missing definition

    def get_last_lr(self):  # 🚀 Missing function that should be included
        return self._last_lr
```
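With that change in place, get_last_lr() behaves the same way as on PyTorch's built-in schedulers. Below is a minimal sketch of how it could then be used for logging; the model, optimizer, and step count are placeholders for illustration, not taken from nnUNet or MONAI:

```python
import torch

# Placeholder model and optimizer purely for illustration.
model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scheduler = PolyLRScheduler(optimizer, initial_lr=0.01, max_steps=1000)

for step in range(1000):
    # ... forward pass, loss.backward(), optimizer.step() would go here ...
    scheduler.step(step)
    # With the added method, the current learning rate can be read back
    # for logging, just as with torch.optim.lr_scheduler schedulers.
    current_lr = scheduler.get_last_lr()[0]
```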