
How to reproduce the 384x384 result in 'README.md'? #10

Open
dypromise opened this issue Jul 24, 2019 · 8 comments

@dypromise

Hi, csmliu!
Thanks for your excellent work, which helps me a lot!
But when I try to reproduce the 384x384 results, I can never get effects as good as those in your README.md, for example "to old" and "to remove bangs". Could you tell me some training tricks for 384x384? I followed LynnHo/HD-CelebA-Cropper to crop 384x384 images from the original CelebA images, and my training script is:
```
python ../../train.py \
    --dataroot \
    --list_att_file new_list_attr_celeba_addhair.txt \
    --batch_size 16 \
    --gpu 0 \
    --experiment_name 384 \
    --img_size 384 \
    --enc_dim 48 \
    --dec_dim 48 \
    --dis_dim 48 \
    --dis_fc_dim 512 \
    --n_sample 24 \
    --use_cropped_img
```
Could you help me reproduce the results in README.md? I tested my model at epoch 35, and after epoch 40 the model did not improve any further. Waiting for your reply~

@csmliu
Owner

csmliu commented Jul 24, 2019

Hi dypromise,

I forgot the parameters used to create the 384x384 images, so I didn't release the 384x384 model; the HD model will be provided in the future PyTorch version.

As for the training problem, the HD model is harder to train and less stable, and the hyper-parameters were tuned during the training process. Honestly speaking, the HD model works worse than the 128x128 model, and there are more failure cases. We will try to improve the training stability and the performance in future work.

Here are some ideas (though not used in my model) to improve performance:
1) use part discriminators (e.g., a smaller discriminator for the eye region); a rough sketch follows this list;
2) increase the model width;
3) better not to decrease dis_fc_dim; in my experiments, decreasing dis_fc_dim hurt the performance.
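For item 1), here is a minimal sketch of what a part discriminator could look like (TF 1.x style; the function names, layer sizes, and the eye-region crop box are illustrative assumptions, not code from this repo):

```python
import tensorflow as tf

# Assumed crop box (top, left, height, width) for the eye region of a
# 384x384 aligned face; these numbers are a guess and should be tuned.
EYE_BOX = (120, 72, 72, 240)

def crop_eyes(imgs):
    """Crop the assumed eye region from a batch of NHWC 384x384 images."""
    t, l, h, w = EYE_BOX
    return imgs[:, t:t + h, l:l + w, :]

def part_discriminator(x, dim=24, n_layers=3, scope='D_eyes'):
    """A smaller PatchGAN-style discriminator for the cropped region."""
    with tf.variable_scope(scope, reuse=tf.AUTO_REUSE):
        for i in range(n_layers):
            x = tf.layers.conv2d(x, dim * 2 ** i, 4, strides=2, padding='same')
            x = tf.nn.leaky_relu(x, 0.2)
        # One logit per spatial patch.
        return tf.layers.conv2d(x, 1, 4, strides=1, padding='same')
```

Its adversarial loss on crop_eyes(real) vs. crop_eyes(fake) would then be added to the full-image discriminator's loss, typically with a small weight.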

By the way, epochs 35 and 40 are not enough (in my experiments) for the 384x384 model. Also, I noticed that your attribute file is named new_list_attr_celeba_addhair.txt, but the current model performs worse on adding hair than on other attributes.

@dypromise
Author

Thank you so much for the tips. I'll try them.

@ewrfcas

ewrfcas commented Aug 14, 2019

@csmliu Why does the 384x384 model use fewer channels in the convolutions? Is it limited by GPU memory?

@csmliu
Owner

csmliu commented Aug 14, 2019

Yes, the experiments were conducted on a single GTX 1080Ti GPU with 11GB memory, so we had to use fewer channels than in the 128x128 model.
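A quick back-of-envelope on why (my numbers; the 64-channel baseline for the 128x128 model is an assumption, not a checked default):

```python
# Conv activation memory scales roughly with H * W * C per layer.
pixel_ratio = (384 * 384) / (128 * 128)  # 9.0: 9x more activations per channel
channel_cut = 48 / 64                    # 0.75x, if the 128x128 model used 64 channels
print(pixel_ratio * channel_cut)         # ~6.75x more activation memory overall
```

So even after cutting the channel width, the 384x384 model needs several times the activation memory of the 128x128 one.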

@ewrfcas

ewrfcas commented Aug 14, 2019

@csmliu Thanks for the quick reply. When will the PyTorch version be released? I think this work could be further improved with better machines and tricks.

@csmliu
Owner

csmliu commented Aug 14, 2019

Sorry, I currently have no time to work on the PyTorch version. I'll update this repo as soon as the PyTorch version is available.

@ewrfcas

ewrfcas commented Aug 14, 2019

@csmliu Thanks! I'm looking forward to that.

@zhoushiwei

@csmliu I'm looking forward to that too.
