Update train.py #13497
base: master
Conversation
updated deprecated call

Signed-off-by: Shubham Phapale <94707673+ShubhamPhapale@users.noreply.github.com>
All Contributors have signed the CLA. ✅
👋 Hello @ShubhamPhapale, thank you for submitting a PR! For more guidance, please refer to our Contributing Guide. Don't hesitate to leave a comment if you have any questions. Thank you for contributing to Ultralytics! 🚀🛠️

Notes

It looks like your PR updates AMP autocast usage for compatibility with PyTorch 2.0+, which is an important improvement. If applicable, please include a minimum reproducible example (MRE) so we can fully understand and test the impact of this change. For example, providing specific training scenarios where the prior implementation failed due to autograd issues with PyTorch 2.0+ would help validate this fix. An Ultralytics engineer will also review this PR shortly. Stay tuned for additional feedback! 🚀

Made with ❤️ by Ultralytics Actions
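For reference, a minimal sketch of the deprecation in question, assuming PyTorch 2.4+ (where constructing torch.cuda.amp.autocast emits a FutureWarning); this is illustrative, not the requested MRE:

```python
import warnings

import torch

# Trigger the deprecated constructor and capture its warning.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    with torch.cuda.amp.autocast(enabled=False):  # deprecated spelling
        pass

# On PyTorch >= 2.4 this prints a FutureWarning recommending
# torch.amp.autocast('cuda', args...) instead.
for w in caught:
    print(w.category.__name__, "-", w.message)

# The replacement spelling raises no such warning.
with torch.amp.autocast("cuda", enabled=False):
    pass
```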
I have read the CLA Document and I sign the CLA
updated deprecated call
🛠️ PR Summary
Made with ❤️ by Ultralytics Actions
🌟 Summary
Updated AMP (Automatic Mixed Precision) autocast usage for compatibility with PyTorch 2.0+.
📊 Key Changes
Replaced the deprecated torch.cuda.amp.autocast(amp) with torch.amp.autocast('cuda', amp) in the training script.

🎯 Purpose & Impact