[Bug] baichuan-13b-chat Service exception after long run #677
Labels: bug (Something isn't working)
Comments
Can you describe in more detail what exactly happened? For example, do all future requests fail, or does just one specific request fail? From the screenshot it looks like the client disconnects, and the server therefore stops the running request.
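If the maintainer's disconnect theory is right, a dropped connection aborts only the in-flight request, not the server. A client-side mitigation is to retry with backoff instead of treating the service as dead. A minimal sketch (the callable `fn` is a hypothetical stand-in for the actual HTTP request to the serving endpoint, not a vLLM API):

```python
import time


def call_with_retries(fn, attempts=3, base_delay=0.01):
    """Retry a flaky zero-argument callable with exponential backoff.

    `fn` stands in for a request to the inference endpoint; a
    ConnectionError models the dropped request described above.
    """
    last_exc = None
    for i in range(attempts):
        try:
            return fn()
        except ConnectionError as exc:
            last_exc = exc
            # Back off before the next attempt: delay doubles each round.
            time.sleep(base_delay * (2 ** i))
    raise last_exc
```

With this wrapper, a single aborted request is retried transparently, which helps distinguish "one request was dropped" from "the service is hung".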
After running for a while, no more inference happens; vLLM's service process is still alive.
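The symptom here is a live process that no longer answers, which a liveness check on the process alone will miss. A watchdog that runs a health probe under a deadline can tell the two states apart; a minimal sketch (the `probe` callable is hypothetical, standing in for e.g. a small completion request against the serving endpoint):

```python
import threading


def probe_with_deadline(probe, timeout_s):
    """Run `probe` in a worker thread; report 'hung' if it misses the deadline.

    Returns 'ok' on success, 'error' if the probe raised, and 'hung'
    if the probe did not finish in time -- the "process up but not
    inferring" state reported in this issue.
    """
    result = {}

    def worker():
        try:
            result["value"] = probe()
        except Exception as exc:
            result["error"] = exc

    t = threading.Thread(target=worker, daemon=True)
    t.start()
    t.join(timeout_s)
    if t.is_alive():
        return "hung"
    if "error" in result:
        return "error"
    return "ok"
```

Running such a probe periodically (and restarting the service on "hung") is a common operational workaround while the underlying cause is investigated.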
Status: Closed
+1
Start command
After about 12 hours of operation, the inference service stopped working
GPU: V100
CUDA: 11.4
Screenshot of the problem: