Sei predictor does not use all of the GPUs available on the machine #13
Yi opened this issue:

Hi,

Thanks for the great tool.

We are trying to use Sei to predict chromatin profiles over the hg19 genome, on a server with four GTX 1080 cards (12 GB of memory each). The problem is that each submitted job uses only one GPU card, and running four jobs in parallel on the server results in CUDA out-of-memory errors.

Is there a way to tune the setup in sei.py?

Thank you very much in advance,
Yi

Comments
Hi Yi, thanks for your interest in using the tool! A general way to use 4 GPUs with 4 jobs in a shell script would be to pin each job to a different card, as in the sketch below.
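A minimal sketch using the CUDA_VISIBLE_DEVICES environment variable (the sei.py arguments here are placeholders for your actual prediction command):

```bash
#!/bin/bash
# Launch one Sei job per GPU. CUDA_VISIBLE_DEVICES restricts each process
# to a single card, so the four jobs never contend for the same device.
# NOTE: the sei.py arguments are placeholders; substitute your real command.
for i in 0 1 2 3; do
    CUDA_VISIBLE_DEVICES=$i python sei.py "input_chunk${i}.fasta" hg19 --cuda &
done
wait  # block until all four background jobs finish
```

Inside each process the visible card is renumbered as device 0, so no changes to sei.py itself are needed.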
Alternatively, I believe you can also ask Selene to use all 4 GPUs for a single job by adding an option to the configuration (see the sketch below).
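Selene's sequence-analysis class accepts a data_parallel flag that wraps the model in torch.nn.DataParallel, so one job spreads its batches across all visible GPUs. A sketch, assuming the prediction step is driven by a Selene YAML config (the other keys stand in for your existing configuration):

```yaml
analyze_sequences: !obj:selene_sdk.predict.AnalyzeSequences {
    batch_size: 64,        # placeholder for the existing setting
    use_cuda: True,        # run on GPU
    data_parallel: True    # split each batch across all visible GPUs
}
```

Note that this speeds up a single job; to run four independent jobs concurrently, the CUDA_VISIBLE_DEVICES approach above is still the way to go.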
Jian
Thanks a lot. The …
Great!