[Windows] Cuda shows disabled using TORCH_CUDA_VERSION=cu101 #252

Closed
asilvas opened this issue Sep 26, 2020 · 5 comments
asilvas commented Sep 26, 2020

The install works, and I verified in the build output of torch-sys that it builds dummy_cuda_dependency.cpp, which indicates that CUDA was detected. But for some reason that I haven't yet determined, both tch::Cuda::is_available() and tch::Cuda::cudnn_is_available() return false. Everything else works (in CPU mode); I just need to get this running on the GPU.

To be clear, this isn't an issue with PyTorch itself. I've verified that CUDA works correctly from Python:

>>> import torch
>>> torch.cuda.current_device()
0
>>> torch.cuda.device(0)
<torch.cuda.device object at 0x0000018746452390>
>>> torch.cuda.device_count()
1
>>> torch.cuda.get_device_name(0)
'GeForce GTX 1080'
>>> torch.cuda.is_available()
True
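
For comparison, a minimal sketch of the corresponding check on the Rust side, using the tch::Cuda calls named above (the device_count call is an assumption about the same module, not something shown in this thread):

// Minimal sketch of the Rust-side check. is_available and cudnn_is_available
// are the tch::Cuda calls mentioned above; device_count is assumed to live in
// the same module.
fn main() {
    println!("cuda available:  {}", tch::Cuda::is_available());
    println!("cudnn available: {}", tch::Cuda::cudnn_is_available());
    println!("device count:    {}", tch::Cuda::device_count());
}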
LaurentMazare (Owner) commented

I don't have a Windows box at hand to try, so I won't be of much help here.
What I would suggest is doing a manual install of libtorch to see if it works better (details here).
On Linux, I would use ldd to check which shared libraries the executable uses and whether it picks up the CUDA version; it seems the Windows equivalent is dumpbin /dependents.
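
A rough sketch of that dependency check (the binary paths are illustrative, not taken from this thread):

# Linux: list the shared libraries the binary links against and look for the CUDA-enabled libtorch
ldd target/release/my_app | grep -i torch

# Windows: the rough equivalent, run from a Visual Studio developer prompt
dumpbin /dependents target\release\my_app.exe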


asilvas commented Sep 28, 2020

I've tried pip, the prebuilt libtorch, and the torch-sys install methods. The install itself works fine, but as noted above the bindings report CUDA as unavailable, even though I've manually verified from Python that CUDA is available. I'm hoping someone else has hit a similar issue and found a workaround. I don't understand how the bindings could report different information when I've verified that Python is loading the same version of PyTorch.

rookboom (Contributor) commented

I see the same on Windows using TORCH_CUDA_VERSION=cu110, but only for --release builds; CUDA works fine for debug builds.

LaurentMazare (Owner) commented

If it only happens for the release version, this may be closer to #291. That one is likely to go away once a certain cargo feature has landed (I think it will be part of the 1.50 release, although I'm not sure whether it will be in stable at that point).
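
For context, the class of workaround discussed for this kind of release-mode problem is to force a reference to the CUDA-linked object that torch-sys builds, so the linker cannot strip it. A hedged sketch, assuming the exported symbol matches the dummy_cuda_dependency.cpp mentioned at the top of this issue; verify against your torch-sys version before relying on it:

// Hedged sketch: declare and call the C symbol from dummy_cuda_dependency.cpp
// so the release-mode linker keeps the CUDA-linked libtorch dependency.
// The symbol name is an assumption based on the build output quoted above.
extern "C" {
    fn dummy_cuda_dependency();
}

fn main() {
    unsafe { dummy_cuda_dependency() };
    println!("cuda available: {}", tch::Cuda::is_available());
}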

LaurentMazare (Owner) commented

Closing this old issue; feel free to re-open if it's still a problem for you. (I just tested it on Linux and it worked fine; also, sadly, the new cargo feature from 1.50 is not general enough to handle this particular use case.)
