pytorch support inference on separate cuda stream #2706
Conversation
@frankfliu @zachgk any chance you can take a look and give some early feedback? I checked the failed build; it looks like a coding convention check failure (not sure if I missed any other error message), which I'll fix once there's consensus on the PR itself.
@jiyuanq And then check in updated files.
thank you! just updated
Codecov Report

@@             Coverage Diff             @@
##             master    #2706     +/-  ##
===========================================
  Coverage     72.08%   72.09%   +0.01%
- Complexity     5126     7026    +1900
===========================================
  Files           473      698     +225
  Lines         21970    31264    +9294
  Branches       2351     3225     +874
===========================================
+ Hits          15838    22541    +6703
- Misses         4925     7190    +2265
- Partials       1207     1533     +326

☔ View full report in Codecov by Sentry.
Description
This PR is an attempt to support running inference on separate CUDA streams for the PyTorch engine. By doing this, we can maximize GPU utilization when running concurrent inference requests on the GPU.
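To illustrate the intended usage pattern, here is a minimal sketch of issuing concurrent inference requests with the flag enabled. Only the system property name comes from this PR; the class name `CudaStreamFlagDemo` and the `fakePredict` helper are hypothetical placeholders standing in for a real per-thread `Predictor.predict(...)` call, so this compiles and runs without a GPU or model.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CudaStreamFlagDemo {

    // Placeholder for a real Predictor.predict(...) call on a thread-local
    // predictor; with the PR's flag enabled, each worker thread would run
    // its inference on its own CUDA stream instead of the default stream.
    static String fakePredict(int requestId) {
        return "result-" + requestId;
    }

    static List<String> runConcurrent(int numRequests) throws Exception {
        // Enable the flag proposed in this PR before the PyTorch engine loads.
        System.setProperty("ai.djl.pytorch.inference_separate_cuda_stream", "true");

        // Submit concurrent requests; with separate streams, their GPU work
        // can overlap rather than serialize on one stream.
        ExecutorService pool = Executors.newFixedThreadPool(numRequests);
        List<Future<String>> futures = new ArrayList<>();
        for (int i = 0; i < numRequests; i++) {
            final int id = i;
            futures.add(pool.submit(() -> fakePredict(id)));
        }
        List<String> results = new ArrayList<>();
        for (Future<String> f : futures) {
            results.add(f.get());
        }
        pool.shutdown();
        return results;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runConcurrent(4)); // prints [result-0, result-1, result-2, result-3]
    }
}
```

The key point of the sketch is that the flag is a process-wide property set once before engine initialization, while the concurrency itself comes from ordinary application-level threading.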
Also added a boolean flag inferenceSeparateCudaStream, controlled through the system property "ai.djl.pytorch.inference_separate_cuda_stream", to determine whether this new feature is enabled.

I considered exposing the full CUDA stream related PyTorch API through JNI, but in the end decided to only expose a high-level boolean flag, mainly because: