Structure S3-Hosted Wheels as PyPI Repository #7494
Comments
Since you use the CPU as the device, you can pass …
I think the main issue is that you probably installed the CPU version of DGL instead of the CUDA version. Can you tell us which DGL version you have installed? You can report the version via pip.
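One quick way to check which build is installed, as a sketch: the `+cuXXX` local version tag is an assumption based on how CUDA wheel filenames are typically tagged, while the CPU-only PyPI build reports a plain version string.

```python
from importlib.metadata import version, PackageNotFoundError

def is_cuda_build(ver: str) -> bool:
    # Assumption: CUDA wheels carry a local version tag such as
    # "2.1.0+cu121"; the CPU-only PyPI wheel reports plain "2.1.0".
    return "+cu" in ver

try:
    installed = version("dgl")
    print(installed, "CUDA build" if is_cuda_build(installed) else "CPU build")
except PackageNotFoundError:
    print("dgl is not installed")
```

Running `pip show dgl` from the shell reports the same version string.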
@reesehyde please refer to this page for DGL installation; it is the only official page you should follow. As for pip packages, we host them on AWS S3 ourselves. We uploaded only CPU versions to PyPI, and we stopped uploading there as of DGL 2.2.0, so please always fetch pip packages from AWS S3.
Ah, apologies, the problem was indeed using the CPU version! I just had plain old …. I managed to install this by downloading the correct wheel manually, but I have to fetch packages through a PyPI proxy. Would the team consider setting up the S3 bucket to be indexable by pip? I don't know exactly what that entails, but looking through torch's bucket setup and testing some index URLs, just hosting the …
Maybe you can update the issue title now that we know what is going wrong. |
Thanks @mfbalin, updated the title to reflect the new request. I read up a bit more on hosting a simple PyPI repository, and it does look like simply hosting an index file at the …. I'd be happy to create a PR for the update if someone could point me towards the S3-publish logic; I searched the repo for "repo.html" and "s3" but only found the CI/CD report and log uploads.
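For reference, a PEP 503 "simple" index page is just an HTML file of anchor tags, one per wheel. A minimal sketch of generating such a page from a list of wheel filenames follows; the function name and the idea of deriving the list from an S3 bucket listing are illustrative assumptions, not DGL's actual publish logic.

```python
from html import escape
from urllib.parse import quote

def build_simple_index(project: str, wheel_names: list[str]) -> str:
    """Render a minimal PEP 503 "simple" index page for one project.

    In practice the filenames would come from listing the S3 bucket,
    and the page would be uploaded as .../simple/<project>/index.html.
    """
    links = "\n".join(
        # PEP 503 hrefs must be URL-encoded (e.g. "+" becomes "%2B").
        f'<a href="{quote(name)}">{escape(name)}</a><br/>'
        for name in sorted(wheel_names)
    )
    return (
        "<!DOCTYPE html>\n<html><body>\n"
        f"<h1>Links for {escape(project)}</h1>\n"
        f"{links}\n</body></html>\n"
    )
```

pip could then consume such an index via `--extra-index-url` pointed at the bucket's `simple/` prefix.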
@Rhett-Ying What do you think? I don't know much about PyPI or pip.
@reesehyde, could you show me the use case you want and the blocker? Why does the current install command …
Thanks @Rhett-Ying, I hadn't tried that command, but you're right that it does the trick in …. But I'm using Poetry rather than …
Poetry's Single Page Link Source forces …
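For context, if the bucket exposed a PEP 503 index, Poetry could consume it as a supplemental source. A hypothetical pyproject.toml fragment, assuming an illustrative index URL that does not currently exist:

```toml
[[tool.poetry.source]]
name = "dgl-cuda"
# Hypothetical index URL: Poetry expects a PEP 503 "simple" index here,
# which is exactly what a find-links page like repo.html is not.
url = "https://data.dgl.ai/wheels/simple/"
priority = "supplemental"
```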
🐛 Bug
When trying to construct a `dgl.graphbolt.DataLoader` in an environment supporting CUDA, the call to `torch.ops.graphbolt.set_max_uva_threads()` fails with an AttributeError.

To Reproduce
From the environment described below, attempt to create a Graphbolt datapipe per the Node Classification with Minibatch Sampling tutorial. Note that while the environment supports CUDA, the error is produced even when the CPU is used:
This results in:
Expected behavior
The DataLoader should be created successfully.
Environment
Additional context
I can confirm the graphbolt shared library is present for my PyTorch version:
I'm not sure how to check whether PyTorch is loading it correctly or at all.
Other Versions
Relatedly, my first reaction was to try a different version of DGL and/or PyTorch. But I found that, installing from PyPI on an x86-64 Linux machine, I'm restricted to version 2.1.0 for v2: on PyPI the 2.0.0 wheel is only available for Linux aarch64, and no Linux wheels are available for 2.2.0 or 2.2.1. Could the CI/CD be updated to build more Linux wheels? I'd love to contribute there if someone could point me in the right direction!