At the moment, when a training type plugin is passed together with an Accelerator, attributes such as parallel_devices, cluster_environment, and sync_batchnorm are not set on the training type plugin, which leads to errors.
Expected behavior
```python
trainer = Trainer(
    accelerator=GPUAccelerator(
        precision_plugin=PrecisionPlugin(),
        training_type_plugin=DDPPlugin(),
    ),
    gpus=4,
)

# should be equivalent to:
# training_type_plugin.parallel_devices == [torch.device("cuda", i) for i in self.parallel_device_ids]
# training_type_plugin.cluster_environment == LightningEnvironment()
# training_type_plugin.sync_batchnorm == False
```
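A minimal sketch of the attribute propagation the accelerator connector would need to perform. The class and function names here are hypothetical stand-ins (not the actual Lightning internals), and devices are represented as plain strings instead of torch.device objects so the sketch is self-contained:

```python
class LightningEnvironment:
    """Stand-in for pytorch_lightning.plugins.environments.LightningEnvironment."""


class DDPPlugin:
    """Stand-in training type plugin with the attributes under discussion."""

    def __init__(self):
        # None means "not set by the user"; the connector should fill these in.
        self.parallel_devices = None
        self.cluster_environment = None
        self.sync_batchnorm = None


def resolve_training_type_attributes(plugin, parallel_device_ids, sync_batchnorm=False):
    """Copy connector-derived settings onto the plugin unless the user already set them.

    In real code parallel_devices would be torch.device("cuda", i) objects;
    strings are used here to keep the sketch dependency-free.
    """
    if plugin.parallel_devices is None:
        plugin.parallel_devices = [f"cuda:{i}" for i in parallel_device_ids]
    if plugin.cluster_environment is None:
        plugin.cluster_environment = LightningEnvironment()
    if plugin.sync_batchnorm is None:
        plugin.sync_batchnorm = sync_batchnorm
    return plugin


plugin = resolve_training_type_attributes(DDPPlugin(), parallel_device_ids=[0, 1, 2, 3])
print(plugin.parallel_devices)        # ['cuda:0', 'cuda:1', 'cuda:2', 'cuda:3']
print(plugin.sync_batchnorm)          # False
```

The bug described above is that this propagation step is skipped when the training type plugin arrives wrapped inside a user-constructed Accelerator, so the attributes stay unset.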
Thanks @kaushikb11
This fix would only go into 1.5.x and partially apply to master, because #10416 would change the requirements slightly here as things get inverted.