Cannot use lambertw on the GPU #30
See CliMA/Oceananigans.jl#3438 (comment). I am pretty busy with my day job, etc. But the best solution I can think of off the top of my head is to start by returning the result of the root finding and the number of iterations, put a nice interface on it, and avoid the warning altogether. If you can run a forked or local version (if possible), try commenting out the …
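To illustrate the idea, here is a minimal sketch of an interface that returns the root-finding result together with the iteration count instead of emitting a runtime warning (which GPU kernels cannot do). This is a hypothetical illustration, not the package's actual internals: the function name `lambertw_with_info`, the initial guess, and the convergence tolerance are all assumptions; only the Halley iteration for `w*exp(w) - x = 0` is standard.

```julia
# Hypothetical sketch: return (value, iterations) so the caller decides
# how to handle non-convergence, rather than warning inside the solver.
# Names and the initial guess are illustrative assumptions.
function lambertw_with_info(x::Real; maxiter::Int = 1000)
    w = x <= 1 ? zero(float(x)) : log(float(x))  # crude initial guess
    iters = 0
    for i in 1:maxiter
        iters = i
        ew = exp(w)
        f = w * ew - x
        # Halley's method step for w*exp(w) - x = 0
        Δ = f / (ew * (w + 1) - (w + 2) * f / (2w + 2))
        w -= Δ
        abs(Δ) <= 2eps(abs(w)) && break
    end
    return w, iters
end
```

A caller can then inspect `iters` (e.g. `iters == maxiter` suggests non-convergence) on the host, after the kernel has finished, keeping all printing off the GPU.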
Thanks for checking into this. I haven't tested it myself, but indeed I can of course fork the repo, comment that line out, and use it for my purposes. I'd like to ask, though, whether you'd want the code here changed instead? In general I think it's advantageous for packages to be GPU-compatible, but I'm curious about your thoughts. If you agree, I could submit a PR changing the function to return the value, plus …
Also, I see there's been some discussion about possibly moving this package to SpecialFunctions.jl in JuliaMath/SpecialFunctions.jl#371. In my opinion, moving …
Perhaps a good starting point is a subtype …
This issue is fixed in …. It is released as version 1.0.0 here.
Awesome! Thanks |
I've been trying to use this function on the GPU but I always get an error (the original issue I posted is CliMA/Oceananigans.jl#3438). Mostly I work indirectly with KernelAbstractions.jl, and the following MWE illustrates the error I'm getting:
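The original MWE did not survive in this thread; what follows is a hypothetical reconstruction of what such a KernelAbstractions.jl MWE might look like, assuming LambertW.jl and CUDA.jl are installed and a CUDA-capable GPU is available. The kernel name, array sizes, and workgroup size are illustrative assumptions.

```julia
# Hypothetical reconstruction of the MWE (the original code is not
# preserved in this thread). Calls lambertw element-wise inside a
# KernelAbstractions kernel on a CUDA device array.
using KernelAbstractions
using LambertW
using CUDA

@kernel function lambertw_kernel!(out, @Const(x))
    i = @index(Global)
    @inbounds out[i] = lambertw(x[i])
end

x = CUDA.fill(1.0, 16)   # device array of inputs
out = similar(x)

kernel! = lambertw_kernel!(CUDABackend(), 16)
kernel!(out, x; ndrange = length(x))
KernelAbstractions.synchronize(CUDABackend())
```

Compiling `lambertw` into the kernel is the step that fails: the function's runtime warning path is not GPU-compatible.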
Instead of this working, I get a huge error that starts with
You can see the whole error here.
I was also able to come up with a simpler example using CUDA.jl that produces a similar error, but for some reason it occasionally fails with a segfault, so I'm not sure whether the same thing is going on (or maybe I'm doing something wrong, since I've never really worked with CUDA directly). Here's the CUDA MWE:
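As with the first MWE, the original code is not preserved here; a plausible minimal sketch, assuming the error is triggered by broadcasting `lambertw` over a CUDA array (which forces GPU compilation of the function), would be:

```julia
# Hypothetical reconstruction of the CUDA-only MWE (original not
# preserved). Broadcasting lambertw over a CuArray compiles it for
# the GPU, which is where the error appears.
using CUDA
using LambertW

x = CUDA.fill(1.0, 16)
y = lambertw.(x)   # GPU compilation of lambertw fails here
```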
Is there any way to make lambertw work on the GPU, and more specifically with KernelAbstractions? Thanks!