Training time of CULane is too long #8
Comments
30-40 epochs are usually enough for convergence on CULane. If you are interested in speeding up training, you could definitely try increasing the batch size if you have enough memory. Unfortunately, we do not have multi-GPU support for now; this may be a feature we work on in the future. Another reason for slow training might be that we generate the affinity fields on the fly. We have to do this to support random transformations during training. A faster CPU and/or larger ...
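If you do enlarge the batch size, a common heuristic (not something this repo states, just a widely used rule of thumb for SGD training) is the linear scaling rule: scale the learning rate in proportion to the batch size. A minimal sketch, with purely hypothetical numbers:

```python
def scaled_lr(base_lr: float, base_batch: int, new_batch: int) -> float:
    """Linear scaling rule: scale the learning rate in proportion
    to the batch-size increase (a heuristic, not a guarantee)."""
    return base_lr * (new_batch / base_batch)

# Hypothetical values -- not taken from this repo's training config.
print(scaled_lr(0.01, 8, 32))  # 4x the batch -> 4x the learning rate
```

Any such change should still be validated against the CULane metrics, since very large batches can affect final accuracy; treat this only as a starting point.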
Hi, @ztjsw
@qinjian623 Thank you
@ztjsw
Hey @qinjian623, thanks for opening up a PR for multi-GPU training! Are there any known issues with your script? If not, I would be very happy to use it.
@andy-96 If you hit any issue, you can send me a message about the error and I will fix it.
Hello, could you please tell me your torch and torchvision versions? There is some error when I run the make.sh file.
@zjsun7 |
Hello there,
The training time of `train_culane.py` is too long. I have trained for five days and it has only reached epoch 30 (NVIDIA 1080 Ti). Can we enlarge the batch size or add multi-GPU training? Will it influence the performance?
Thanks.