Questions about evaluation #6
Comments
The most likely reason is that the
Thanks a lot! Would you please tell me how to fix this error?
1. try
At line 106 in
Thanks a lot! I will check it again.
I'm having a similar issue. I use the CUB dataset and a smaller batch_size for the two-stage training. For train_token I use the default float32. After 250 steps of training, an error is reported during my inference.
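A quick diagnostic for this kind of inference-time failure is to scan the saved stage-1 checkpoint for NaN or Inf parameters before starting stage 2. The sketch below is a hedged, plain-Python stand-in (the dict of lists imitates a tensor state_dict; the parameter names are made up, not the repo's actual ones):

```python
import math

def find_bad_params(state_dict):
    """Return the names of parameters containing NaN or Inf values."""
    bad = []
    for name, values in state_dict.items():
        if any(math.isnan(v) or math.isinf(v) for v in values):
            bad.append(name)
    return bad

# Toy "checkpoint" with one corrupted parameter (names are illustrative only).
ckpt = {
    "token_embedding.weight": [0.1, -0.3, 0.7],
    "proj.weight": [float("nan"), 0.2],
}
print(find_bad_params(ckpt))  # -> ['proj.weight']
```

With a real PyTorch checkpoint the same idea applies per tensor, e.g. checking each entry of the loaded state_dict for NaNs before training continues.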
Since CLIP (the text encoder) is frozen all the time, it seems there is a problem with the representative embeddings trained in stage 1. Does the model you trained in stage 1 produce NaNs? Besides, the model requires a large batch size for stage-2 training. If your machine does not have enough memory, using a large gradient-accumulation step count works as well.
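The gradient-accumulation suggestion above can be sketched as follows. This is a minimal, framework-free illustration (a one-parameter MSE model, not the repo's training loop): averaging the gradients of several micro-batches before a single optimizer step reproduces the full-batch update, so a large effective batch size fits in limited memory.

```python
def grad_mse(w, xs, ys):
    # d/dw mean((w*x - y)^2) = mean(2*(w*x - y)*x)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

def step_full_batch(w, xs, ys, lr):
    # One SGD step using the whole batch at once.
    return w - lr * grad_mse(w, xs, ys)

def step_accumulated(w, xs, ys, lr, accum_steps):
    # Split the batch into accum_steps micro-batches; average their
    # gradients, then take a single optimizer step.
    n = len(xs) // accum_steps
    g = 0.0
    for i in range(accum_steps):
        mb_x, mb_y = xs[i * n:(i + 1) * n], ys[i * n:(i + 1) * n]
        g += grad_mse(w, mb_x, mb_y) / accum_steps
    return w - lr * g

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]
w_full = step_full_batch(0.0, xs, ys, lr=0.01)
w_accum = step_accumulated(0.0, xs, ys, lr=0.01, accum_steps=2)
assert abs(w_full - w_accum) < 1e-9  # identical update, smaller memory peak
```

In PyTorch the same pattern is usually written by dividing each micro-batch loss by the accumulation count, calling backward on each, and calling the optimizer step only every accum_steps iterations.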
Thank you for your reply! There is one more thing I'd like to confirm. Thanks again; I'll read the source code carefully and try training again.
1.
2. VQGAN is an improved version of the VAE, and they are similar in structure.
Thanks for your great work! I ran into an issue during testing.
When using python main.py --function test --config configs/cub_stage2.yml --opt "{'test': {'load_token_path': 'ckpts/cub983/tokens/', 'load_unet_path': 'ckpts/cub983/unet/', 'save_log_path': 'ckpts/cub983/log.txt'}}" for evaluation, I found that self.step_store, self.attention_store, and self.attention_maps are all empty. Could you please tell me where the problem is?
Looking forward to your reply!
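For context, attention stores in prompt-to-prompt-style code typically follow the pattern sketched below. This is a hypothetical illustration (the attribute names mirror the ones mentioned in the question, but this is not the repo's actual implementation): the stores only fill up if the attention layers actually call into the store during a forward pass, so they stay empty when the hooks are never registered or no forward pass runs before they are read.

```python
class AttentionStore:
    """Collects attention maps pushed by attention layers during a forward pass."""

    def __init__(self):
        self.step_store = []       # maps collected during the current step
        self.attention_store = []  # maps aggregated across completed steps

    def __call__(self, attn_map):
        # Attention layers (via registered hooks) call this per layer.
        self.step_store.append(attn_map)

    def between_steps(self):
        # Move the current step's maps into the aggregate and reset.
        self.attention_store.append(list(self.step_store))
        self.step_store = []

store = AttentionStore()
for layer in range(3):              # simulate 3 attention layers in one step
    store(f"attn_layer_{layer}")    # placeholder for a real attention tensor
store.between_steps()
assert store.attention_store and not store.step_store
# If store.__call__ is never invoked, both lists remain empty --
# the symptom described in the question.
```

So one thing worth checking is whether the test code path actually attaches the store to the UNet's cross-attention layers before running the denoising loop.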