four4fish changed the title from "Unroll dict input before call Accelerator X_steps and update function typing" to "Unroll dict input before call Accelerator X_steps and update function type" on Dec 3, 2021
Proposed refactor
From @justusschock's comments in #10890
Refactor the accelerator X_step() signatures to support both positional and keyword arguments.
Instead of unrolling the step_kwargs dictionary inside the accelerator steps, unroll it on the caller side.
This will unblock #10648, which moves functions from the accelerator to the strategies.
Motivation
Keep flexibility and improve function typing.
Pitch
The current Accelerator X_step signature is:
https://github.com/four4fish/pytorch-lightning/blob/master/pytorch_lightning/accelerators/accelerator.py#L124-L148
Update it to:
def X_step(self, *args, **kwargs) -> Optional[STEP_OUTPUT]:
This does not align with the signatures of a torch model's training step and of model() itself.
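For illustration, a minimal sketch of the proposed signature (the plugin attribute and hook body here are simplified assumptions, not the exact Accelerator implementation):

```python
from typing import Any, Optional

STEP_OUTPUT = Any  # stand-in for Lightning's STEP_OUTPUT type alias


class Accelerator:
    """Simplified sketch; the real class delegates to a training type plugin/strategy."""

    def __init__(self, training_type_plugin: Any) -> None:
        self.training_type_plugin = training_type_plugin

    # Proposed signature: accept arbitrary positional and keyword arguments and
    # forward them untouched, instead of receiving a single step_kwargs dict.
    def training_step(self, *args: Any, **kwargs: Any) -> Optional[STEP_OUTPUT]:
        return self.training_type_plugin.training_step(*args, **kwargs)
```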
On the caller side, e.g. in optimizer_loop.py, instead of passing the step_kwargs dictionary, pass *step_kwargs.values(), as in the sketch below.
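A rough sketch of that call-site change, assuming the loop builds step_kwargs as an ordered mapping of hook arguments (the exact keys and the accelerator attribute name are illustrative):

```python
from collections import OrderedDict

# Illustrative hook arguments; the real loop fills these from the fetched batch.
step_kwargs = OrderedDict(batch=..., batch_idx=0, optimizer_idx=0)

# Before: the dict itself is handed to the accelerator, which unrolls it internally.
#     output = trainer.accelerator.training_step(step_kwargs)

# After: the caller unrolls the dict; the accelerator simply forwards *args/**kwargs.
#     output = trainer.accelerator.training_step(*step_kwargs.values())
```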
Additional context
If you enjoy Lightning, check out our other projects! ⚡
Metrics: Machine learning metrics for distributed, scalable PyTorch applications.
Lite: Enables pure PyTorch users to scale their existing code on any kind of device while retaining full control over their own loops and optimization logic.
Flash: The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, fine-tuning, and solving problems with deep learning.
Bolts: Pretrained SOTA Deep Learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.
Lightning Transformers: Flexible interface for high-performance research using SOTA Transformers leveraging PyTorch Lightning, Transformers, and Hydra.
cc @justusschock @awaelchli @akihironitta