
tesla P40 does not support FP16? #56

Closed
jlygit opened this issue Jul 31, 2019 · 4 comments

Comments

jlygit commented Jul 31, 2019

Hello,
I ran FP16 mode on a P40 with TensorRT and it did not speed up.
Maybe the Tesla P40 does not support FP16?
Thanks

@jlygit jlygit closed this as completed Aug 5, 2019

DoiiarX commented May 6, 2024

Did you get the answer?

fbldp commented May 22, 2024

Did you get the answer? @DoiiarX @jlygit

DoiiarX commented May 22, 2024

> Did you get the answer?

No, it is not supported. There is no error, but there is also no speedup.

DoiiarX commented May 22, 2024

> Did you get the answer? @DoiiarX @jlygit

https://docs.nvidia.com/deeplearning/tensorrt/support-matrix/index.html

https://developer.nvidia.com/cuda-gpus

The Tesla P40 has compute capability 6.1, which is below the 7.0 minimum that the support matrix lists for FP16 acceleration.
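The check implied by those two pages can be sketched as a small script. This is illustrative only: the helper name and the GPU table are ours (capability values taken from NVIDIA's CUDA GPUs page), not part of TensorRT's API, and the 7.0 threshold follows the support-matrix reading above.

```python
# Sketch: decide whether a GPU's compute capability meets the 7.0
# minimum that the TensorRT support matrix lists for FP16 speedup.
# Helper name and GPU table are illustrative, not a TensorRT API.

FP16_MIN_CAPABILITY = (7, 0)  # minimum per the support-matrix reading above

KNOWN_GPUS = {
    "Tesla P40": (6, 1),   # Pascal: FP16 runs but gives no speedup
    "Tesla V100": (7, 0),  # Volta: tensor cores, FP16 accelerated
}

def supports_fast_fp16(capability):
    """Return True if a (major, minor) capability meets the FP16 minimum."""
    return capability >= FP16_MIN_CAPABILITY

for name, cap in KNOWN_GPUS.items():
    print(f"{name}: FP16 speedup = {supports_fast_fp16(cap)}")
```

On a machine with a GPU, the capability tuple could instead be queried at runtime (for example `torch.cuda.get_device_capability()` in PyTorch) and passed to the same check.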
