Large batch size and multiple GPUs #137
We observe that batchSize=1 with a single GPU gives us the best results so far.
Yes, with --norm instance it worked. Is there an option to specify the number of images per GPU, or is it simple to change in your code?
I guess it will be batchSize/#gpus.
If it is batchSize/#gpus, then norm still needs to be "instance" for successful training. I have tested this.
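The behavior described above can be illustrated with a minimal NumPy sketch (the function names and the simplified 2-D layout are my own, not code from this repository): batch normalization computes statistics across all samples a replica sees, so splitting a batch across GPUs changes its output, whereas instance normalization is computed per sample and is unaffected by how the batch is split.

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    # Statistics are shared across the batch dimension (axis 0),
    # so the result depends on which samples end up on the same GPU.
    mean = x.mean(axis=0, keepdims=True)
    var = x.var(axis=0, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def instance_norm(x, eps=1e-5):
    # Statistics are computed per sample (axis 1), independently of
    # the other samples in the batch.
    mean = x.mean(axis=1, keepdims=True)
    var = x.var(axis=1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

x = np.random.RandomState(0).randn(4, 8)

# Instance norm is identical whether the batch is processed whole
# or split into per-GPU halves:
split_inst = np.vstack([instance_norm(x[:2]), instance_norm(x[2:])])
print(np.allclose(instance_norm(x), split_inst))

# Batch norm is not: each half normalizes with its own statistics.
split_bn = np.vstack([batch_norm(x[:2]), batch_norm(x[2:])])
print(np.allclose(batch_norm(x), split_bn))
```

This is why --norm instance trains successfully under the multi-GPU split while batch norm does not.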
When I train the pix2pix model with batchSize > 1, norm = batch, and multiple GPUs, the results seem wrong/strange.
When I train the pix2pix model with batchSize > 1, norm = batch, and a single GPU, the results are correct.
Could this be solved?
Thank you.
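A minimal sketch of why the multi-GPU case differs, assuming DataParallel-style batch scattering (the helper below is hypothetical, not code from this repository): the batch is split across the GPUs, so each replica's BatchNorm layers compute their statistics over only batchSize/#gpus samples, which can be as few as one.

```python
def per_gpu_batch_sizes(batch_size, n_gpus):
    """Sizes of the sub-batches after DataParallel-style scattering.

    Each GPU replica runs its own BatchNorm over only its sub-batch,
    so with batch_size == n_gpus every replica normalizes a single
    sample, degrading the batch statistics.
    """
    base, rem = divmod(batch_size, n_gpus)
    return [base + (1 if i < rem else 0) for i in range(n_gpus)]

print(per_gpu_batch_sizes(4, 4))   # [1, 1, 1, 1]
print(per_gpu_batch_sizes(10, 3))  # [4, 3, 3]
```

This would explain why the same batchSize behaves correctly on a single GPU: there the batch-norm statistics are computed over the full batch.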