
Precompute, then offload: latents and text embeddings, as well as VAE and Text Encoder #62

Open
Thomas-MMJ opened this issue Dec 18, 2022 · 1 comment
Labels: enhancement (New feature or request)

Comments

@Thomas-MMJ

The latents and text embedding vectors can be precomputed and stored in RAM instead of VRAM. If the text encoder and VAE aren't being trained, they can then be offloaded, which would allow significant time and VRAM savings.
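A rough sketch of what this could look like, assuming a typical diffusers-style training script where `vae`, `text_encoder`, `dataloader`, and `device` already exist (the cache variable names here are illustrative, not from the repo):

```python
import torch

# Sketch: encode the dataset once, cache the results in CPU RAM, then
# free the frozen VAE and text encoder so only the UNet (plus LoRA
# weights) stays on the GPU during training.

cached_latents, cached_embeddings = [], []

vae.to(device).eval()
text_encoder.to(device).eval()

with torch.no_grad():
    for batch in dataloader:
        # Encode images to latents (0.18215 is the SD v1 latent scaling factor).
        latents = vae.encode(batch["pixel_values"].to(device)).latent_dist.sample()
        latents = latents * 0.18215

        # Encode captions to text embeddings.
        embeddings = text_encoder(batch["input_ids"].to(device))[0]

        # Keep the cached tensors in CPU RAM, not VRAM.
        cached_latents.append(latents.cpu())
        cached_embeddings.append(embeddings.cpu())

# Offload the frozen models; they are no longer needed for training.
vae.to("cpu")
text_encoder.to("cpu")
torch.cuda.empty_cache()

# During each training step, move the precomputed tensors back to the GPU:
# latents = cached_latents[i].to(device)
# encoder_hidden_states = cached_embeddings[i].to(device)
```

One caveat: caching latents this way assumes per-epoch image augmentations (random crop/flip) are either disabled or applied before encoding, since the cached latents stay fixed for the whole run.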

@cloneofsimo
Owner

This is great. I am currently refactoring the training script and will use this trick in it.

@cloneofsimo added the enhancement label on Jan 8, 2023