In Slurm, users can request GPUs for their batch jobs:
#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --gres=gpu:1      # Number of GPUs (per node)
#SBATCH --mem=4000M       # memory (per node)
#SBATCH --time=0-03:00    # time (DD-HH:MM)
./program                 # you can use 'nvidia-smi' for a test
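For context, a script like this is typically submitted with sbatch and monitored with squeue (job.sh is a placeholder filename for the script above):
sbatch job.sh      # submit the batch script above
squeue -u $USER    # check the job's status in the queue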
For Sky, the equivalent of this Slurm syntax is V100:8. While this feature is available via the CLI (e.g. sky exec mycluster --gpus V100:1 -d -- python train.py --lr 1e-3), the YAML does not support the Slurm syntax, in particular in the resources/accelerators field. The example below should work once Slurm syntax is implemented:
resources:
  cloud: aws
  accelerators: 'V100:8'
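For comparison, a fuller Sky task YAML mirroring the Slurm script above might look like the following sketch, assuming the resources fields behave as in the example; the run section and the V100:1 count are illustrative, and Slurm's per-node memory and time limits have no direct equivalent shown here:
resources:
  cloud: aws
  accelerators: 'V100:1'

run: |
  nvidia-smi   # quick check that the GPU is visible
  ./program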