Hello, thanks a lot for sharing the code for the paper. I was trying to train the ELECTRA-base model from scratch, but CPU RAM usage increases with every iteration, and eventually the process is killed because host memory is exhausted. GPU memory usage stays constant throughout training. I am using a system with 64 GB of CPU RAM. Could any of the authors (or anyone who has trained or fine-tuned the QA model) share the exact version of PyTorch used for the experiments, and say whether they faced a similar issue while training the model?
Thanks in advance.
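For anyone debugging the same symptom, a minimal sketch for confirming that host memory really grows per step, plus one common cause (keeping references to loss tensors, which pins the autograd graph). This is not from the repository's code; `psutil` and the toy loop below are only for illustration:

```python
import os
import psutil  # third-party: pip install psutil
import torch

def log_cpu_rss(step):
    """Print this process's resident set size (host RAM) in MB."""
    rss_mb = psutil.Process(os.getpid()).memory_info().rss / 1024 ** 2
    print(f"step {step}: CPU RSS = {rss_mb:.1f} MB")

# Toy loop illustrating the pattern: appending the loss *tensor* keeps a
# reference to its autograd graph, while appending loss.item() stores a
# plain Python float and releases the graph.
x = torch.randn(32, 10, requires_grad=True)
losses_leaky, losses_ok = [], []
for step in range(1000):
    loss = (x * x).sum()
    losses_leaky.append(loss)         # holds the graph alive across steps
    losses_ok.append(loss.item())     # detached scalar, no graph retained
    if step % 200 == 0:
        log_cpu_rss(step)
```

If the RSS numbers climb steadily across steps in the real training loop, checking for accumulated tensors (logged losses, metrics, cached batches) or dataloader worker behavior would be a reasonable next step; the exact cause in this repo is unknown, which is why knowing the authors' PyTorch version would help.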