
Sei predictor does not use all GPUs available on the machine #13

Closed

MagpiePKU opened this issue Nov 5, 2022 · 3 comments

@MagpiePKU

Hi,

Thanks for the great tool.

We are trying to use Sei to predict chromatin profiles over the hg19 genome, on a server with 4 GTX 1080 cards with 12 GB memory each. The problem is that when we submit a job, it only uses one GPU card. Running 4 jobs in parallel on the server resulted in a CUDA out-of-memory error.

Is there a way to tune the setup in sei.py?

Thank you very much in advance,
Yi

@jzthree
Collaborator

jzthree commented Nov 5, 2022

Hi Yi,

Thanks for your interest in using the tool! A general way to run 4 jobs on 4 GPUs from a shell script would be:

CUDA_VISIBLE_DEVICES=0 <command 0> &
CUDA_VISIBLE_DEVICES=1 <command 1> &
CUDA_VISIBLE_DEVICES=2 <command 2> &
CUDA_VISIBLE_DEVICES=3 <command 3> &
wait  # block until all four background jobs finish
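
Applied to Sei, that pattern could be written as a loop like the one below. This is just a sketch: the chunk_*.bed / output_* names and the exact sei.py arguments are placeholders, assuming the input regions have already been split into four files.

#!/bin/bash
# Sketch only: one prediction job per GPU over a pre-split input.
# chunk_{0..3}.bed, output_{0..3}, and the sei.py arguments are placeholders;
# substitute the actual sei-framework invocation you use.
for i in 0 1 2 3; do
    CUDA_VISIBLE_DEVICES=$i python sei.py "chunk_${i}.bed" "output_${i}" --cuda &
done
wait  # all four jobs run concurrently, one per GPU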

Alternatively, I believe you can also ask Selene to use all 4 GPUs by adding data_parallel: True to the config file (the yml files under https://github.com/FunctionLab/sei-framework/tree/main/model); @kathyxchen can correct me if I am wrong. Turning on data_parallel will distribute each batch across the 4 GPUs. If you still run into memory issues, you can also reduce the batch size. For example:

analyze_sequences: !obj:selene_sdk.predict.AnalyzeSequences {
    sequence_length: 4096,
    batch_size: 128,
    trained_model_path: <PATH>/model/sei.pth,
    features: !obj:selene_sdk.utils.load_features_list {
        input_path: <PATH>/model/target.names
    },
    data_parallel: True,
    write_mem_limit: 1000
}
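
With data_parallel: True in the yml, a single job can then be launched with all four cards visible (again just a sketch; the sei.py arguments here are placeholders):

# One job, all four GPUs visible; Selene splits each batch across them.
CUDA_VISIBLE_DEVICES=0,1,2,3 python sei.py input.bed output_dir --cuda

If the out-of-memory error persists, halving batch_size in the same block (e.g. 128 -> 64) reduces the per-GPU footprint further.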

Jian

@MagpiePKU
Author

Thanks a lot. The data_parallel: True trick solved the problem.

@jzthree
Collaborator

jzthree commented Nov 7, 2022

Great!
