Run all stdgpu operations on a specified cuda stream #423
Most of the functionality should support custom CUDA streams by taking a respective
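A minimal sketch of the pattern this points toward, assuming that a Thrust execution policy bound to a custom stream via `thrust::cuda::par.on(stream)` is the kind of argument such stream-aware overloads would accept (the stdgpu overloads themselves are not shown here; only `createDeviceArray`/`destroyDeviceArray` and the Thrust calls below are existing API):

```cpp
#include <cuda_runtime.h>

#include <thrust/device_ptr.h>
#include <thrust/execution_policy.h>
#include <thrust/sequence.h>

#include <stdgpu/cstddef.h>
#include <stdgpu/memory.h>

int main()
{
    const stdgpu::index_t n = 1 << 20;

    // stdgpu-managed device memory
    float* d_values = createDeviceArray<float>(n);

    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // Bind the Thrust device policy to the custom stream. A stream-aware
    // stdgpu overload would (hypothetically) receive the same kind of policy.
    const auto policy = thrust::cuda::par.on(stream);

    // Run an algorithm on the custom stream instead of the default stream.
    thrust::sequence(policy,
                     thrust::device_pointer_cast(d_values),
                     thrust::device_pointer_cast(d_values + n));

    cudaStreamSynchronize(stream);

    destroyDeviceArray<float>(d_values);
    cudaStreamDestroy(stream);
    return 0;
}
```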
@stotko Such as
The whole pipeline looks like:
But I have to admit that it is difficult to write this in a form that operates on a stream, and I am not sure whether it can be achieved at all.
Thanks. Even though the
Sorry for the long delay. It took a larger refactoring to fill the gaps in the stream support, but with #450 this issue should be resolved.
I noticed that some functions, such as
stdgpu::detail::memcpy
are non-asynchronous and run on the DEFAULT CUDA stream. More details: stdgpu::detail::memcpy depends on dispatch_memcpy
and it looks like:

For example, if we use a CUDA graph and try to capture all operations on a stream, an error is raised because different streams (the default stream and the user's streams) are mixed.
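To make the failure mode concrete, here is a minimal, stdgpu-independent sketch of what happens when a synchronous default-stream copy (like the one a non-stream-aware memcpy issues) is encountered while a CUDA graph is being captured on a user stream. The buffer names and sizes are placeholders, and the exact error codes depend on the CUDA version:

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    const size_t bytes = 1024;
    void* d_src = nullptr;
    void* d_dst = nullptr;
    cudaMalloc(&d_src, bytes);
    cudaMalloc(&d_dst, bytes);

    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // Start capturing all work submitted to the user stream into a graph.
    cudaStreamBeginCapture(stream, cudaStreamCaptureModeGlobal);

    // OK: asynchronous copy enqueued on the captured stream.
    cudaMemcpyAsync(d_dst, d_src, bytes, cudaMemcpyDeviceToDevice, stream);

    // NOT OK: a synchronous copy on the default (legacy) stream is rejected
    // while another stream is being captured and typically invalidates the
    // capture.
    cudaError_t err = cudaMemcpy(d_dst, d_src, bytes, cudaMemcpyDeviceToDevice);
    std::printf("default-stream memcpy during capture: %s\n", cudaGetErrorString(err));

    cudaGraph_t graph;
    err = cudaStreamEndCapture(stream, &graph);
    std::printf("end capture: %s\n", cudaGetErrorString(err));

    cudaStreamDestroy(stream);
    cudaFree(d_src);
    cudaFree(d_dst);
    return 0;
}
```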
So my request: run all stdgpu operations on a specified CUDA stream.