Releases · RoySadaka/lpd
New metric - TopKCategoricalAccuracy
- Added the TopKCategoricalAccuracy metric
- Added a Predictor.from_trainer() method to the Predictor class (usage sketch below)
- Fixed loading a predictor via from_checkpoint when the checkpoint is not a full Trainer
- Added unittest for TopKCategoricalAccuracy
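A minimal sketch of wiring the new metric and Predictor.from_trainer() together. The import paths, the k argument, and the metric_name_to_func dict shown here are assumptions based on these notes, not verified signatures:

```python
# Illustrative sketch only - import paths and argument names are assumed
from lpd.metrics import TopKCategoricalAccuracy
from lpd.predictor import Predictor  # assumed module path

# register the new metric on an lpd Trainer (the 'k' argument is assumed)
metric_name_to_func = {'top3_acc': TopKCategoricalAccuracy(k=3)}

# ... build and train an lpd Trainer as usual, then create a lightweight predictor from it:
predictor = Predictor.from_trainer(trainer)  # new in this release
predictions = predictor.predict_data_loader(test_data_loader, test_steps)
```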
Verbosity
- Added verbosity support in Trainer.train(verbose=1) and Trainer.evaluate(verbose=1) (example below)
- Fixed a StatsPrint validation bug
- Added unittest for StatsPrint validation
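For example, given an existing lpd Trainer (a sketch; the meaning of the verbose levels and the evaluate argument list are assumptions):

```python
# Sketch: verbose=1 is assumed to print progress/stats, verbose=0 to run silently
trainer.train(num_epochs=10, verbose=1)
trainer.evaluate(test_data_loader, test_steps, verbose=1)  # positional arguments assumed
```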
Predictor class
- Added the Predictor class! Predicting has never been easier; see the README for more details
- Added an example for train/save/load/predict using the new Predictor (see the sketch below)
- Added unittests for the Predictor
- Metrics optimizations
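The README has the full example; below is a compressed sketch of the train/save/load/predict flow. The lpd.predictor module path and the from_checkpoint argument list are assumptions:

```python
# Sketch only - module path and checkpoint arguments are assumed
from lpd.predictor import Predictor

# 1) train as usual and persist a checkpoint (e.g. via the ModelCheckpoint callback)
trainer.train(num_epochs=50)

# 2) in another process, restore a lightweight Predictor from that checkpoint
predictor = Predictor.from_checkpoint(checkpoint_dir, checkpoint_file_name, model, device)

# 3) run inference without rebuilding a full Trainer
predictions = predictor.predict_data_loader(test_data_loader, test_steps)
```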
ModelCheckpoint and EarlyStopping using CallbackMonitors
- Added trainer validation for metric_name_to_func
- ModelCheckpoint arguments changed to accept a CallbackMonitor (see the sketch below)
- EarlyStopping arguments changed to accept a CallbackMonitor
- Adjusted examples and tests
- Added more unittests
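A sketch of the new argument style. The enum names, constructor parameters, and the exact class casing below are assumptions taken from these notes rather than verified signatures:

```python
# Sketch: constructor parameters and enum names are assumed
from lpd.callbacks import ModelCheckpoint, EarlyStopping, CallbackMonitor
from lpd.enums import MonitorType, StatsType, MonitorMode

checkpoint_dir, checkpoint_file_name = 'checkpoints/', 'best_model'

val_loss_monitor = CallbackMonitor(patience=10,
                                   monitor_type=MonitorType.LOSS,
                                   stats_type=StatsType.VAL,
                                   monitor_mode=MonitorMode.MIN)

callbacks = [
    # both callbacks now take a CallbackMonitor instead of separate monitor arguments
    ModelCheckpoint(checkpoint_dir, checkpoint_file_name,
                    callback_monitor=val_loss_monitor,
                    save_best_only=True),
    EarlyStopping(callback_monitor=val_loss_monitor),
]
```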
num_epochs moved to Trainer.train(), better StatsPrint, and more unittests
- Added test for trainer save and load
- Moved LossOptimizerHandlerBase validation from Trainer initialization to Trainer.train()
- Added name property to CallbackMonitorResult
- StatsPrint now accepts a list of monitors as its metrics arguments (see the sketch below)
- StatsPrint now validates the provided monitors
- Removed num_epochs from the Trainer arguments; it is now passed to Trainer.train(num_epochs)
- Adjusted all examples
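A sketch of the reshuffled API. The StatsPrint parameter name and the CallbackMonitor arguments shown here are assumptions:

```python
# Sketch: parameter names are assumed
from lpd.callbacks import StatsPrint, CallbackMonitor
from lpd.enums import MonitorType, StatsType, MonitorMode

accuracy_monitor = CallbackMonitor(patience=-1,
                                   monitor_type=MonitorType.METRIC,
                                   stats_type=StatsType.VAL,
                                   monitor_mode=MonitorMode.MAX,
                                   metric_name='acc')

stats_print = StatsPrint(train_metrics_monitors=[accuracy_monitor])  # parameter name assumed

# build the Trainer as usual (without num_epochs), including stats_print in its callbacks,
# then pass the epoch count to train() instead:
trainer.train(num_epochs=50)
```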
Loss and Optimizer handling via callbacks
- Added predict_sample and predict_data_loader methods to Trainer
- Added LossOptimizerHandler and LossOptimizerHandlerBase to callbacks (see the sketch below)
- Trainer must have at least one callback of type LossOptimizerHandlerBase
- Removed optimizer_step_and_zero_grad_criteria argument from Trainer (use LossOptimizerHandler callback instead)
- Added optimizer, scheduler, and train_last_loss to CallbackContext properties for easier access
- CollectOutputs arguments must now be provided explicitly
- CallbackBase will raise an exception if __call__ is not implemented
- Trainer now validates its callbacks upon initialization; more validations will be added
- SchedulerStep's scheduler_parameters_func now receives a CallbackContext instead of a Trainer
- Added copy_model_weights to lpd.utils.torch_utils (as requested, thank you for using lpd 🥳)
- Adjusted all examples
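The sketch below shows the kind of control this gives: a hypothetical gradient-accumulation handler built on LossOptimizerHandlerBase, plus the default handler that satisfies the new Trainer requirement. The base-class constructor, the __call__ signature, and the CallbackContext property names other than optimizer and train_last_loss are assumptions:

```python
# Sketch: base-class signature and some CallbackContext properties are assumed
from lpd.callbacks import LossOptimizerHandler, LossOptimizerHandlerBase
from lpd.utils.torch_utils import copy_model_weights

class GradientAccumulationHandler(LossOptimizerHandlerBase):
    """Hypothetical handler: backward every batch, step/zero_grad every `step_every` batches."""
    def __init__(self, step_every=4):
        super().__init__()              # base-class constructor arguments assumed
        self.step_every = step_every
        self._count = 0

    def __call__(self, callback_context):
        c = callback_context
        c.train_last_loss.backward()    # new CallbackContext property
        self._count += 1
        if self._count % self.step_every == 0:
            c.optimizer.step()          # new CallbackContext property
            c.optimizer.zero_grad()

# the default handler (assumed to do plain backward/step/zero_grad) satisfies the new rule
# that every Trainer has at least one callback of type LossOptimizerHandlerBase
callbacks = [LossOptimizerHandler()]

# the new helper copies weights from one model onto another (argument order assumed):
# copy_model_weights(source_model, target_model)
```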
Predicting
- Added predict_batch (example below)
- Adjusted predict example
- Added threshold argument to BinaryAccuracy and BinaryAccuracyWithLogits
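For example (a sketch; the default threshold value and the shape of the batch input are assumptions):

```python
# Sketch: the 0.5 default threshold and the batch input format are assumed
from lpd.metrics import BinaryAccuracy, BinaryAccuracyWithLogits

metric_name_to_func = {'acc': BinaryAccuracyWithLogits(threshold=0.7)}  # stricter than the assumed 0.5 default

batch_predictions = trainer.predict_batch(batch_inputs)  # new in this release
```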
Metrics
- Custom metrics are now in a separate module
- Custom metrics are now classes instead of functions (see the sketch below)
- Added unittests for metrics
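A sketch of a class-based custom metric. The MetricBase base class and the __call__(y_pred, y_true) signature are assumptions:

```python
# Sketch: the base class and call signature are assumed
import torch
from lpd.metrics import MetricBase

class TruePositives(MetricBase):
    """Hypothetical custom metric written as a class rather than a function."""
    def __init__(self, threshold=0.5):
        self.threshold = threshold

    def __call__(self, y_pred, y_true):
        # count predictions above the threshold that match a positive label
        y_hat = (y_pred > self.threshold).int()
        return torch.sum((y_hat == 1) & (y_true.int() == 1)).float()
```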
Predicting
- Added a predict method to the Trainer (see the sketch below)
- Added State.PREDICT enum
- Added Phase.PREDICT_BEGIN and Phase.PREDICT_END enums
- Added CollectOutputs callback
- Added predict example
- Moved some elements from nn.functional to nn
- Added sample count to model save/load
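A sketch tying these pieces together. The CollectOutputs constructor arguments and the predict argument list are assumptions:

```python
# Sketch: constructor and method arguments are assumed
from lpd.enums import Phase, State
from lpd.callbacks import CollectOutputs

# collect model outputs while the trainer is in the new PREDICT state
collect_outputs = CollectOutputs(apply_on_phase=Phase.BATCH_END,
                                 apply_on_states=State.PREDICT)

# build the Trainer with collect_outputs in its callbacks, then:
predictions = trainer.predict(test_data_loader, test_steps)
```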