Offloading to GPU is not working for DBSCAN #1368
Maybe a relevant side-question: how can I validate that I am actually offloading to the GPU?
You can use verbose mode to see which device was used: https://intel.github.io/scikit-learn-intelex/verbose.html
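The linked verbose page describes device reporting; a minimal sketch of enabling it, assuming the extension logs its dispatch decisions through the standard `logging` module under a logger named `sklearnex` (both the logger name and the message wording are assumptions here, not confirmed by this thread):

```python
import logging

# Assumption: scikit-learn-intelex reports which implementation and device
# handled each call via the standard logging module, logger name "sklearnex".
logging.basicConfig(format="%(name)s: %(levelname)s: %(message)s")
logging.getLogger("sklearnex").setLevel(logging.INFO)

# With this in place, subsequent estimator calls (e.g. DBSCAN(...).fit(X))
# would emit log lines indicating whether the accelerated GPU path was taken.
```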
@Alexsandruss, awesome! So I can confirm that it is running on CPU, regardless of whether I
I am thinking aloud here: could this be a precision issue with the data itself? That I am using a precision that is not compatible with the GPU, so it falls back to the CPU?
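If precision were the culprit, downcasting the input to single precision before fitting would be a quick way to rule it out (GPU kernels commonly support float32, while float64 support varies by device). A stdlib-only sketch of the idea; with real data this would be `X.astype(np.float32)` on a NumPy array:

```python
from array import array

# Hypothetical check for a precision-driven CPU fallback: convert the data
# to single precision ("f") up front and rerun the offloaded benchmark.
data_f64 = array("d", [0.25, 0.5, 0.75])   # double-precision input
data_f32 = array("f", data_f64)            # single-precision copy

print(data_f32.typecode)
```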
@psmgeelen you can try other algorithms meanwhile. DBSCAN has some specifics that set it apart.
@psmgeelen could you please list your conda env as well? It would be very useful for reproducing.
@napetrov, thanks for responding. I am trying to do some benchmarking on an Intel GPU and followed the compatibility list in the documentation here: https://intel.github.io/scikit-learn-intelex/algorithms.html. The algorithms that I have tried running are:
@samir-nasibli , I listed my environment using
Thank you @psmgeelen! Could you please also share what system platforms dpctl returns. Also, please update your conda env via:
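For reference, enumerating the SYCL platforms and devices that dpctl can see looks roughly like the guarded sketch below (guarded so it only does real work where dpctl is installed; a GPU entry in this list is a prerequisite for any GPU offload):

```python
def list_sycl_devices():
    """Return the SYCL devices dpctl can see, or None if dpctl is absent."""
    try:
        import dpctl
    except ImportError:
        return None  # dpctl not installed on this machine
    # dpctl.get_devices() enumerates every device across all platforms;
    # without a GPU entry here, target_offload="gpu:0" cannot succeed.
    return dpctl.get_devices()

devices = list_sycl_devices()
print(devices)
```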
Hi @samir-nasibli,
I think I might be doing something wrong when running
I think the updating broke the environment, as I now get this error:
I resolved this by running:
When I run my script I still get:
Is there anything else I can do to support the process?
Hi @psmgeelen! Unfortunately I didn't reproduce your issue. I am getting GPU offloading. Let me investigate it more. I will let you know. |
@samir-nasibli , I might be doing something stupid. I ran this:
In my new environment and got this print out:
So it seems to be working. For some reason, this doesn't work on my larger benchmark test yet. Please give me 24 hours to debug myself before closing this issue. I'll get back to you soon!
Maybe a small in-between question: what does this log mean:
After reinstalling the environment again, the issue is no longer reproducible; I guess the error was transient. I did notice that Intel GPU support is not accurately described in the documentation. I found that:
are supported on an ARC 770, while the supposedly GPU-supported algorithms listed in the documentation:
Furthermore, I had some issues with the methods associated with the models. For example, the
While using the
Regardless, closing the issue. Thanks for the support!
Describe the bug
Following the example in the documentation about GPU offloading, I noticed that it did run, but there was CPU load and it didn't seem to be using the GPU (I didn't hear any fans ramp up or anything). The example is:
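The code block from the documentation example is not preserved in this export. A minimal guarded sketch of the pattern it describes (patch scikit-learn, then route the fit through a GPU offload context); the toy data is my own, and the sketch degrades to `None` on machines without the Intel extension or a supported GPU:

```python
def dbscan_gpu_sketch():
    """Fit DBSCAN under GPU offload; return labels, or None if unsupported here."""
    try:
        import numpy as np
        from sklearnex import config_context, patch_sklearn
    except ImportError:
        return None  # scikit-learn-intelex not installed
    patch_sklearn()
    from sklearn.cluster import DBSCAN

    # Toy data (my own example, not from the thread); float32 to match
    # typical GPU kernel support.
    X = np.array([[1, 2], [2, 2], [2, 3], [8, 7], [8, 8], [25, 80]],
                 dtype=np.float32)
    try:
        # Route the whole fit to the GPU via the context manager.
        with config_context(target_offload="gpu:0"):
            return DBSCAN(eps=3, min_samples=2).fit_predict(X)
    except Exception:
        return None  # no supported GPU runtime available

labels = dbscan_gpu_sketch()
print(labels)
```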
I have also tried to offload more explicitly by using the general context
But that didn't change much. I also considered offloading the object itself:
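The object-offload pattern places the input data on the device up front via dpctl's USM arrays, so that a patched estimator computes where the data lives. A guarded sketch; the `device="gpu"` filter string is my assumption about dpctl.tensor's device-selector syntax, and the sketch returns `None` wherever dpctl or a GPU is missing:

```python
def move_to_gpu(values):
    """Copy host data into a USM array on the GPU, or None if that's impossible."""
    try:
        import dpctl.tensor as dpt
    except ImportError:
        return None  # dpctl not installed
    try:
        # Allocate directly on the GPU; passing such an array to a patched
        # estimator is the "offload the object" pattern described above.
        return dpt.asarray(values, device="gpu")
    except Exception:
        return None  # no GPU / no GPU runtime available

arr = move_to_gpu([[1.0, 2.0], [3.0, 4.0]])
print(arr)
```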
Which prints out nicely:
The best evidence I could find that GPU offloading is not working as it should: I used `timeit` to compare 10 runs each time, and the timings are the same regardless of whether I offload to CPU or GPU.

To Reproduce
Already provided above
Expected behavior
Execution time should change when you run it on different hardware
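A rough way to check this expectation is a best-of-N timing harness. The sketch below uses a synthetic pure-Python workload as a stand-in for the estimator's `fit`; in the real test the same harness would wrap the fit once under CPU and once under GPU offload and compare the two numbers:

```python
import timeit

# Synthetic stand-in for fit(): any workload whose runtime we want to
# compare across device configurations.
def workload(n=100_000):
    return sum(i * i for i in range(n))

# Best-of-10 is less noisy than a single measurement; identical best times
# for "cpu" and "gpu:0" configurations suggest the offload is not happening.
best = min(timeit.repeat(workload, number=1, repeat=10))
print(f"best of 10 runs: {best:.6f} s")
```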
Environment:
Ubuntu 23.04
CPU: 16-core AMD Ryzen 9 5950X (-MT MCP-) speed/min/max: 2258/2200/5083 MHz
Kernel: 6.2.0-24-generic x86_64 Up: 1h 27m Mem: 7957.1/128724.3 MiB (6.2%)
Storage: 931.51 GiB (37.3% used) Procs: 576 Shell: Bash inxi: 3.3.25