
Caffe and cuda streams #5855

Open
msarett opened this issue Aug 18, 2017 · 0 comments
msarett commented Aug 18, 2017

It seems that, for the most part, Caffe does not make use of CUDA streams in its GPU implementation. This means that all operations are synchronized on the default CUDA stream.

This is a reasonable implementation if we're not worried about concurrency, but it's not optimal if we want to share the GPU among host threads. For example, if we try to run inference on two different Caffe models from two different host threads, they will constantly block each other on the default CUDA stream, even though they are completely independent.
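To illustrate the distinction, here is a minimal CUDA sketch (not Caffe code; the `scale` kernel is a hypothetical stand-in for one model's GPU work, and it assumes a CUDA-capable device). Launching each workload on its own non-default stream lets the GPU overlap them, whereas launching both on stream 0 serializes them with all other default-stream work:

```cuda
#include <cuda_runtime.h>

// Hypothetical kernel standing in for one model's GPU work.
__global__ void scale(float* data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    float *a, *b;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));

    // One stream per independent workload; kernels queued on
    // different non-default streams may execute concurrently.
    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);

    scale<<<(n + 255) / 256, 256, 0, s1>>>(a, 2.0f, n);
    scale<<<(n + 255) / 256, 256, 0, s2>>>(b, 3.0f, n);

    // By contrast, omitting the stream argument (i.e. using the
    // default stream 0) would serialize these launches with each
    // other and with every other default-stream operation.
    cudaStreamSynchronize(s1);
    cudaStreamSynchronize(s2);

    cudaStreamDestroy(s1);
    cudaStreamDestroy(s2);
    cudaFree(a);
    cudaFree(b);
    return 0;
}
```

Since Caffe issues its GPU work without an explicit stream argument, it effectively behaves like the default-stream case above for every host thread sharing the device.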

Is this a known and accepted limitation of Caffe? Or is there any plan to move computation onto CUDA streams so that independent work does not block the entire GPU?

Thanks for your thoughts!
