This repository has been archived by the owner on Oct 11, 2024. It is now read-only.

Conversation

joerunde
Collaborator

@robertgshaw2-neuralmagic

This adds the --disable-frontend-multiprocessing flag and should also correctly pick up embeddings models to disable the multiprocessing here. (Also some unrelated formatting changes)

The backend startup is wrapped in a context manager that handles process startup and shutdown at exit, so that we don't have to muck around much in the existing server lifecycle code.
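As a minimal sketch of the pattern described above (the function names and the placeholder child-process command are illustrative, not vLLM's actual implementation): a context manager launches the backend as a separate process and guarantees it is torn down when the context exits, even on error.

```python
import subprocess
import sys
from contextlib import contextmanager

@contextmanager
def build_backend():
    """Launch the backend in its own process; always clean up on exit.

    The child command here is just a stand-in loop; the real backend
    would run the engine/RPC server instead.
    """
    proc = subprocess.Popen(
        [sys.executable, "-c", "import time\nwhile True: time.sleep(0.1)"]
    )
    try:
        yield proc
    finally:
        # Runs on normal exit *and* on exceptions, so the server
        # lifecycle code never has to track the child itself.
        proc.terminate()
        proc.wait()

# Usage: the backend process is alive only inside the `with` block.
with build_backend() as backend:
    pass  # serve requests here
# backend has been terminated and reaped at this point
```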

Signed-off-by: Joe Runde <Joseph.Runde@ibm.com>

👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which consists of a small, essential subset of CI tests to quickly catch errors. You can run additional CI tests on top of the default ones by unblocking the steps in your fastcheck build in the Buildkite UI.

Once the PR is approved and ready to go, please make sure to run full CI, as it is required for merging (or just use auto-merge).

To run full CI, you can do one of these:

  • Comment /ready on the PR
  • Add ready label to the PR
  • Enable auto-merge.

🚀

@robertgshaw2-neuralmagic robertgshaw2-neuralmagic merged commit 453939b into neuralmagic:isolate-oai-server-process Jul 30, 2024
2 checks passed
@robertgshaw2-neuralmagic
Collaborator

Thanks!
