I encountered an error while running a multi-GPU training program with Accelerate. Running `accelerate launch scripts.py` raises a `NotImplementedError`: `Native multi-GPU training requires pytorch>=1.9.1`.
Have the developers dropped support for PyTorch versions lower than that? If so, is there any workaround that would let me keep using PyTorch 1.8 with Accelerate for multi-GPU training?
### System Info
- torch 1.8.1+cu101
- accelerate 0.16.0
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- [ ] My own task or dataset (give details below)
### Reproduction
Launch any multi-GPU training script with `accelerate launch` in an environment with a PyTorch version lower than 1.9.1; a minimal sketch of the kind of script is shown below.
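For concreteness, here is a minimal sketch of the kind of script involved (not my actual script, which is my own modified code); launching something like this with a multi-GPU config hits the exception:

```python
# Launched with: accelerate launch scripts.py  (multi-GPU config, torch 1.8.1+cu101)
import torch
from accelerate import Accelerator

accelerator = Accelerator()

# Toy model, optimizer, and data; the real script uses my own model and dataset.
model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
dataset = torch.utils.data.TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,)))
dataloader = torch.utils.data.DataLoader(dataset, batch_size=8)

# Let Accelerate wrap everything for distributed training.
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for inputs, targets in dataloader:
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(inputs), targets)
    accelerator.backward(loss)
    optimizer.step()
```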
### Expected behavior
```Shell
Run without any exception.
```