fsl.check_first(): Second attempt at refinement #2609
base: master
Conversation
This didn't work and I don't know enough Python to fix it:
Whoops sorry, that was me ruining correspondence between code tested and code pushed. 18e5910 should at least resolve that. On my own system
I get the same. I wasn't aware of any fsl_sub halting functionality.
You could submit a job with a hold on the last FIRST JobID that, for example, creates a file indicating that everything has completed, and then check for the presence of that file if you don't want to interact with qstat directly.
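For illustration, a minimal sketch of this sentinel-file approach might look like the following. It assumes `fsl_sub` accepts a `-j <jobid>` hold option and that the ID of the final FIRST job is already known; the function name, sentinel path, and poll interval are all illustrative, not part of this PR:

```python
# Hypothetical sketch only: assumes fsl_sub supports a "-j <jobid>" hold
# option and that first_job_id is the ID of the last job submitted by
# run_first_all. The sentinel path and poll interval are illustrative.
import os
import subprocess
import time

def wait_via_sentinel(first_job_id, sentinel_path='first_all.done', poll_interval=5.0):
  # Submit a trivial held job: it only runs (and creates the sentinel file)
  # once the job we are waiting on has completed.
  subprocess.run(['fsl_sub', '-j', str(first_job_id), 'touch', sentinel_path],
                 check=True)
  # Wait for the sentinel file rather than querying qstat directly.
  while not os.path.exists(sentinel_path):
    time.sleep(poll_interval)
```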
The thought had crossed my mind. Just wary of the extent to which this is becoming increasingly clumsy. Decided to avoid
That worked. Thanks for your efforts here.
Conflicts: lib/mrtrix3/fsl.py
Response to suggestion by @glasserm in #2597.
`fsl_sub` seemingly has built-in functionality for halting execution until all asynchronous jobs have been completed. Hopefully grabbing the job ID of the final job executed in `run_first_all` and waiting on that will be sufficient.

What I don't know is the point in time at which this functionality was added to `fsl_sub`, and therefore whether this change will only work on some minimum version of FSL. But failures of this test should be caught, and `check_first()` will revert to its prior behaviour.

Requires testing on a functioning SGE cluster.
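As a rough illustration of the intended control flow (not the actual `check_first()` implementation), a hedged sketch might look like the following: `wait_on_final_job` stands in for whatever job-ID-based wait ends up being used, and the fallback mimics the prior behaviour of polling for the expected FIRST output files. The file-name pattern and all parameter names are assumptions for the purpose of the example:

```python
# Illustrative sketch only, not the real check_first() code.
import glob
import time

def check_first_sketch(prefix, num_structures, wait_on_final_job, poll_interval=5.0):
  try:
    # Preferred path: block until the final run_first_all job has completed
    # (for example via the sentinel-file mechanism sketched earlier).
    wait_on_final_job()
  except Exception:
    # Fallback to prior behaviour: wait until the expected number of
    # FIRST .vtk surface files has appeared on disk (pattern is assumed).
    while len(glob.glob(prefix + '-*_first.vtk')) < num_structures:
      time.sleep(poll_interval)
```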