# job_params_v3
num_cores:
  widget: "number_field"
  label: "Number of cores"
  value: 1
  help: "Maximum number of CPU cores on notchpeak-shared-short is 16; see the [cluster help pages](https://www.chpc.utah.edu/resources/HPC_Clusters.php) for core counts on other clusters."
  min: 1
  step: 1
bc_num_hours:
  value: 1
  min: 1
  step: 1
  help: "Maximum wall time is 8 hours on notchpeak-shared-short, 72 hours on general nodes, and 14 days on owner nodes."
bc_email_on_started:
  widget: "check_box"
  label: "I would like to receive an email when the session starts"
  help: "If you do not receive the email, check your [Profile](https://www.chpc.utah.edu/role/user/edit_profile.php) for the correct address."
bc_vnc_resolution:
  required: true
advanced_options:
  widget: select
  label: "Advanced options (memory, GPU)"
  value: "false"
  options:
    - [
        'Hide', 'false',
        data-hide-memtask: true,
        data-hide-gpu-type: true,
        data-hide-nodelist: true
      ]
    - [ 'Show', 'true' ]
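# The data-hide-* entries above use Open OnDemand's dynamic form widget options:
# when 'Hide' is selected, each `data-hide-<field>: true` pair hides the form
# field of that name (memtask, gpu_type, and nodelist, defined below; note that
# underscores in field names become dashes). A minimal sketch of adding another
# field to the same toggle -- the field name `my_field` is illustrative only:
#
#   options:
#     - [ 'Hide', 'false', data-hide-my-field: true ]
#     - [ 'Show', 'true' ]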
memtask:
  widget: "number_field"
  value: 0
  min: 0
  max: 64
  step: 1
  label: "Memory per job in GB"
  help: |
    - **0** - Use the default, 2 or 4 GB per task, depending on the partition.
    - **128** - Use 128 GB, which is the maximum on notchpeak-shared-short.
gpu_type:
  label: "GPU type"
  widget: select
  value: "none"
  options:
    - [
        'none',
        data-hide-gpu-count: true
      ]
    - [
        'any', 'any',
        data-option-for-cluster-ash: false
      ]
    - [
        'Nvidia GTX 1080 Ti, SP, general, owner', '1080ti',
        data-option-for-cluster-kingspeak: false
      ]
    - [
        'Nvidia GTX Titan X, SP, owner', 'titanx',
        data-option-for-cluster-ash: false,
        data-option-for-cluster-notchpeak: false,
        data-option-for-cluster-lonepeak: false
      ]
    - [
        'Nvidia P100, DP, owner', 'p100',
        data-option-for-cluster-ash: false,
        data-option-for-cluster-notchpeak: false,
        data-option-for-cluster-lonepeak: false
      ]
    - [
        'Nvidia V100, DP, general, owner', 'v100',
        data-option-for-cluster-ash: false,
        data-option-for-cluster-lonepeak: false,
        data-option-for-cluster-kingspeak: false
      ]
    - [
        'Nvidia P40, SP, general', 'p40',
        data-option-for-cluster-ash: false,
        data-option-for-cluster-lonepeak: false,
        data-option-for-cluster-kingspeak: false
      ]
    - [
        'Nvidia A40, SP, owner', 'a40',
        data-option-for-cluster-ash: false,
        data-option-for-cluster-lonepeak: false,
        data-option-for-cluster-kingspeak: false
      ]
    - [
        'Nvidia A100, DP, general, owner', 'a100',
        data-option-for-cluster-ash: false,
        data-option-for-cluster-lonepeak: false,
        data-option-for-cluster-kingspeak: false
      ]
    - [
        'Nvidia T4, SP, general, owner', 't4',
        data-option-for-cluster-ash: false,
        data-option-for-cluster-lonepeak: false,
        data-option-for-cluster-kingspeak: false
      ]
    - [
        'Nvidia Titan V, SP, owner', 'titanv',
        data-option-for-cluster-ash: false,
        data-option-for-cluster-lonepeak: false,
        data-option-for-cluster-kingspeak: false
      ]
    - [
        'Nvidia RTX 2080 Ti, SP, general, owner', '2080ti',
        data-option-for-cluster-ash: false,
        data-option-for-cluster-lonepeak: false,
        data-option-for-cluster-kingspeak: false
      ]
    - [
        'Nvidia RTX 3090, SP, general, owner', '3090',
        data-option-for-cluster-ash: false,
        data-option-for-cluster-lonepeak: false,
        data-option-for-cluster-kingspeak: false
      ]
    - [
        'Nvidia RTX A6000, SP, owner', 'a6000',
        data-option-for-cluster-ash: false,
        data-option-for-cluster-lonepeak: false,
        data-option-for-cluster-kingspeak: false
      ]
    - [
        'AMD MI100, SP, evaluation', 'mi100',
        data-option-for-cluster-ash: false,
        data-option-for-cluster-lonepeak: false,
        data-option-for-cluster-kingspeak: false
      ]
  help: |
    - **none** - do not request a GPU.
    - **any** - request any available GPU on the given cluster and partition.
    - SP - single precision, DP - double precision.
    - Choose the correct account and partition for the selected GPU: general GPU nodes use **_cluster_-gpu:_cluster_-gpu** and owner GPU nodes **owner-gpu-guest:_cluster_-gpu-guest** (account:partition).
    - See the [GPU node list](https://chpc.utah.edu/documentation/guides/gpus-accelerators.php#gnl) for details on GPU features and counts per node.
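# Worked example of the account:partition pattern from the help above, with
# *cluster* = notchpeak (illustrative; actual account names can vary by group):
#   general GPU nodes: account notchpeak-gpu, partition notchpeak-gpu
#   owner GPU nodes:   account owner-gpu-guest, partition notchpeak-gpu-guest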
gpu_count:
  label: "GPU count"
  widget: "number_field"
  value: 1
  min: 1
  max: 8
  step: 1
nodelist:
  widget: text_area
  label: Nodelist
  help: "Enter a list of nodes to run on (passed to the sbatch --nodelist option). Leave empty to include all available nodes."