get all available accounts for a user from the scheduler #783
Conversation
See inline comments for requested changes
Co-authored-by: treydock <tdockendorf@osc.edu>
Looks good.
I don't believe this will work as is. We're likely going to need at least [...]

With some sites, [...]

If you use [...]

As far as whether this works, I think it's a good starting place. Having a list of available accounts regardless of partition is good. However, I don't think you will need to query QOS, since an association in the database is either "cluster + account + user" or "cluster + account + user + partition". The QOS is an attribute on the association. I think it's a bridge too far for OOD to try to work out how a QOS is used, because a QOS can be used to limit resources or just to do things like set priority or preemption. A QOS is not typically how you grant someone access to resources; that's usually done through the association in one of the two defining ways previously mentioned.
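The two association shapes described above can be sketched in Ruby. The sample records, field order, and `Association` struct below are assumptions for illustration (a hypothetical parsable `sacctmgr -nP show associations` output), not the adapter's actual parsing code:

```ruby
# Hypothetical output of something like:
#   sacctmgr -nP show associations where user=me format=cluster,account,partition,qos
# Field order and values here are assumptions for illustration.
RAW = <<~OUT
  owens|pzs0714|batch|normal
  owens|pzs0715||normal,hugemem
  pitzer|pzs0714|gpu|normal
OUT

# An association is either cluster+account+user or
# cluster+account+user+partition; QOS is just an attribute on it.
Association = Struct.new(:cluster, :account, :partition, :qos, keyword_init: true)

def parse_associations(raw)
  raw.each_line.map do |line|
    cluster, account, partition, qos = line.chomp.split('|', -1)
    Association.new(
      cluster:   cluster,
      account:   account,
      partition: partition.empty? ? nil : partition,
      qos:       qos.split(',')
    )
  end
end

assocs = parse_associations(RAW)
# Group account names per cluster so upper layers can filter by cluster.
accounts_by_cluster = assocs.group_by(&:cluster)
                            .transform_values { |as| as.map(&:account).uniq }
```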
Thanks for the review, I've added a bit. Namely the AccountInfo class, so it can hold extra information for all the use cases we need. Check out this Utah sacctmgr output. Some accounts only exist on one cluster (smithp-guest on ahs, or dtn on notchpeak). So for this to be useful, every account has to be sort of cluster aware so that upper layers can hide/show options as needed.

I'm quite sure this is not enough, as the upper layers (OOD) are going to have to stitch a lot of information together. But as an initial launch maybe it can just pull [...] In any case, I know for sure these are going to be more complex than just a string, so we may as well start with objects.
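A minimal sketch of what such a cluster-aware object could look like. The class name AccountInfo comes from the comment above, but its attributes and the sample data are assumptions, not the merged API:

```ruby
# Hypothetical sketch: the same account name may only exist on some
# clusters, so each account object carries its cluster.
class AccountInfo
  attr_reader :name, :cluster, :qos

  def initialize(name:, cluster:, qos: [])
    @name = name
    @cluster = cluster
    @qos = qos
  end

  # Render as a plain string for drop-downs and the like.
  def to_s
    name
  end
end

# Upper layers can then hide/show options per cluster:
accounts = [
  AccountInfo.new(name: 'smithp-guest', cluster: 'ahs'),
  AccountInfo.new(name: 'dtn',          cluster: 'notchpeak'),
  AccountInfo.new(name: 'general',      cluster: 'notchpeak')
]
notchpeak_accounts = accounts.select { |a| a.cluster == 'notchpeak' }.map(&:to_s)
```

Starting with objects rather than bare strings means extra attributes (QOS, cluster, validity) can be added later without breaking callers that only want the name.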
```diff
@@ -325,7 +342,7 @@ def call(cmd, *args, env: {}, stdin: "")
   cmd = OodCore::Job::Adapters::Helper.bin_path(cmd, bin, bin_overrides)

   args = args.map(&:to_s)
-  args.concat ["-M", cluster] if cluster
+  args.concat ["-M", cluster] if cluster && cmd != 'sacctmgr'
```
Unfortunately I can't seem to get sacctmgr to query per cluster, which is going to be troublesome for upper layers, as each adapter object is meant to represent one cluster, but here we are returning information for all clusters.
Fixes the upstream OOD issue below. The idea here is that the scheduler (Slurm in this case) knows all the valid accounts a user is able to use. So instead of pulling accounts from Unix file systems, we can query the scheduler itself. A Unix group may or may not be a valid account, since the account can be suspended or closed due to budgets while the Unix group still exists.
OSC/ondemand#1970