I opened an Ask CycleCloud thread on the topic of node allocation logic with Slurm:
Slurm can be configured to allow multiple jobs to share the same node.
For example, two jobs can each run on 60 cores of an HB120 VM.
When the VM is already allocated before the jobs are submitted, Slurm will happily place the two jobs on the same node.
However, when the nodes are not yet allocated from Azure, Slurm (via CycleCloud) ends up pulling two HB120 VMs from Azure, one for each job.
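For context, node sharing of this kind is typically enabled through Slurm's consumable-resource selection plugin. A minimal sketch of the relevant configuration (assumed values for illustration, not taken from the issue or the CycleCloud template):

```
# Hypothetical slurm.conf fragment: allocate by core so that two jobs
# can coexist on a single 120-core HB120 node (illustrative only).
SelectType=select/cons_tres
SelectTypeParameters=CR_Core

# With a node already running, two 60-core submissions such as
#   sbatch --ntasks=60 job.sh
#   sbatch --ntasks=60 job.sh
# should both land on the same HB120 node; the issue above is that
# before autoscale allocation, CycleCloud requests one VM per job.
```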
Kai Neuffer replied:
A customer of mine has a similar issue, but for me it works fine with CC 8.4.0 and the Slurm 3.0.1 template.
So this may be another reason to try this upgrade.
xpillons changed the title from "Update cyclecloud-slurm version to 3.0.1" to "Update cyclecloud-slurm version to 3.0.4" on Oct 10, 2023.
xpillons changed the title from "Update cyclecloud-slurm version to 3.0.4" to "Update cyclecloud-slurm version to 3.x" on Oct 10, 2023.
Describe the feature
This means:
- Document how to migrate from 20 to 22
- Migrate from 2.7.x to 3.0.1, see https://github.com/Azure/cyclecloud-slurm/tree/3.0.1