Training on SLURM with multiple GPUs #17963
Unanswered
matval asked this question in DDP / multi-GPU / multi-node
Hi,
I'm trying to train a model using PyTorch Lightning 1.9.5 with DDPStrategy on two V100 GPUs.
If I run sbatch as:
The code runs fine, but it uses only two CPUs for data loading, which makes training too slow. What is the correct way to train on multiple GPUs while requesting a different number of CPUs?
Thank you.
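The poster's job script isn't shown above, so here is a minimal sketch of how such an sbatch file is commonly laid out for Lightning DDP on SLURM; the file name `train.py` and all resource numbers are placeholders, not the poster's actual values. The key points are that `--ntasks-per-node` should match `Trainer(devices=...)` and that `--cpus-per-task` sets how many CPUs each DDP process gets for data loading:

```bash
#!/bin/bash
#SBATCH --job-name=ddp-train
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=2   # one task per GPU; should match Trainer(devices=2)
#SBATCH --gres=gpu:2          # request the two V100s
#SBATCH --cpus-per-task=8     # CPUs each task may use for DataLoader workers

# srun starts one process per task; Lightning's SLURM detection
# assigns ranks and devices from the SLURM environment variables.
srun python train.py
```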
Replies: 2 comments

- Shouldn't you have the option to specify `--cpus-per-task` in your sbatch script?
- Add `--cpus-per-task` to your sbatch script.
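To make the suggestion above concrete, here is a hedged Python-side sketch for Lightning 1.9: the `ToyModel` and random dataset are stand-ins for the poster's code, and the script reads `SLURM_CPUS_PER_TASK` (set by `--cpus-per-task`) to size the DataLoader worker pool while keeping `Trainer(devices=...)` in line with the tasks requested:

```python
import os

import torch
import pytorch_lightning as pl
from pytorch_lightning.strategies import DDPStrategy
from torch import nn
from torch.utils.data import DataLoader, TensorDataset


class ToyModel(pl.LightningModule):
    """Stand-in for the poster's model, just to keep the sketch runnable."""

    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(32, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return nn.functional.mse_loss(self.layer(x), y)

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.01)


if __name__ == "__main__":
    # Stand-in dataset; replace with the real one.
    dataset = TensorDataset(torch.randn(256, 32), torch.randn(256, 1))

    # Size the worker pool from the CPUs SLURM granted this task
    # (SLURM_CPUS_PER_TASK is set by --cpus-per-task); fall back to 4 elsewhere.
    num_workers = int(os.environ.get("SLURM_CPUS_PER_TASK", 4))
    loader = DataLoader(dataset, batch_size=32, num_workers=num_workers)

    trainer = pl.Trainer(
        accelerator="gpu",
        devices=2,  # one DDP process per GPU; must match --ntasks-per-node
        num_nodes=1,
        strategy=DDPStrategy(),
        max_epochs=1,
    )
    trainer.fit(ToyModel(), loader)
```

With `--cpus-per-task=8`, each of the two DDP processes can run eight DataLoader workers, rather than the two CPUs the whole job was limited to before.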