Hello, I ran FP16 mode on a Tesla P40 with TensorRT, but it does not speed up inference. Maybe the Tesla P40 does not support FP16? Thanks.
Did you get the answer?
Did you get the answer? @DoiiarX @jlygit
It's not supported. There is no error, but there is also no speedup.
https://docs.nvidia.com/deeplearning/tensorrt/support-matrix/index.html
https://developer.nvidia.com/cuda-gpus
The Tesla P40 has compute capability 6.1, which is below the minimum of 7.0 required for accelerated FP16 (Tensor Cores), so FP16 mode runs but gives no speedup.
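As a quick sanity check before building an FP16 engine, one can compare the device's compute capability against the Tensor-Core threshold. This is a minimal sketch, not TensorRT API code; the 7.0 threshold and the example capabilities come from NVIDIA's CUDA GPUs list linked above, and `expects_fp16_speedup` is a hypothetical helper name:

```python
# Minimal sketch: decide whether FP16 mode is likely to speed up inference
# on a given GPU, based on its CUDA compute capability (major, minor).
# Tensor-Core FP16 math requires compute capability 7.0 or higher
# (e.g. V100, T4); the Tesla P40 is 6.1, so FP16 gives no speedup there.

def expects_fp16_speedup(major: int, minor: int) -> bool:
    """Return True if the GPU's compute capability supports fast FP16."""
    return (major, minor) >= (7, 0)

# Tesla P40 (compute capability 6.1): FP16 runs, but is not accelerated.
print(expects_fp16_speedup(6, 1))  # → False
# Tesla V100 (compute capability 7.0): Tensor-Core FP16 acceleration.
print(expects_fp16_speedup(7, 0))  # → True
```

On a real device the capability pair could be queried at runtime (for example via `cudaGetDeviceProperties` in CUDA, or `torch.cuda.get_device_capability()` if PyTorch is available) before enabling the FP16 builder flag.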