Add startup / shutdown functions #1065
Great suggestion @FlorianBorn! This is also what I was planning to do for #1037, where a user wants to have access to the artifacts when initializing the service. In your case, you can probably work around it by overriding `__init__`:

```python
import atexit


@artifacts([SklearnModelArtifact('clf')])
@env(pip_dependencies=["scikit-learn"])
class MyMLService(BentoService):

    def __init__(self):
        super().__init__()
        env_vars = read_environment_variables()
        client = get_client(env_vars)
        self.app.state["client"] = client

        def on_exit_callback():
            self.app.state["client"].disconnect()

        atexit.register(on_exit_callback)

    @api(input=DataframeInput())
    def predict(self, df):
        prediction = self.artifacts.clf.predict(df)
        filter = self.app.state["client"].read(...)
        return filter_prediction(prediction, filter)
```
An alternative implementation is to have child classes override predefined base methods for startup and shutdown.
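The base-method alternative could be sketched like this. Note this is a framework-agnostic illustration, not BentoML API: `BaseService`, `on_startup`, and `on_shutdown` are hypothetical names.

```python
import atexit


class BaseService:
    """Hypothetical base class exposing overridable lifecycle hooks."""

    def __init__(self):
        self.state = {}
        self.on_startup()                  # run subclass setup once
        atexit.register(self.on_shutdown)  # run subclass teardown at exit

    def on_startup(self):
        """Override in a subclass to set up clients, caches, etc."""

    def on_shutdown(self):
        """Override in a subclass to release resources."""


class MyService(BaseService):
    def on_startup(self):
        self.state["client"] = object()  # e.g. connect to an external API

    def on_shutdown(self):
        self.state.pop("client", None)   # e.g. client.disconnect()
```

Child classes only override the hooks they need; the base class decides when they run.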
On a related note: we also need hooks for …
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Update: we are working on this feature in the upcoming 1.0 release.
We are proposing the following solution in BentoML 1.0, although it still needs further discussion:

```python
from bentoml.service import service_context, request_context

svc = bentoml.Service()


@svc.on_startup
def startup():
    env_vars = read_environment_variables()
    service_context["client"] = get_client(env_vars)


@svc.api
def predict(input):
    service_context["client"]...
    ...
```
Hi! Do you have any updates on this topic? This could be useful, for example, if the ML service requires large file artifacts to perform the prediction, and these are defined not at build time of the Docker image but at the deployment stage or at runtime. If there is an option to assign the event to a single service worker, would the other service workers wait for the callback to complete before executing?
This would also be very helpful for initializing a feature store object. Our team is using an open source feature store called Feast and would like to initialize the Feast object on startup. We would then use that object for each subsequent request to fetch features, either cached locally or through network calls. Having to initialize this object during the build of the Docker image is not ideal, and initializing it for every request would negate the benefits of local caching of features and increase latency.
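Until a proper startup hook exists, one common workaround is a lazily built per-worker singleton: the expensive object is constructed on the first request in each worker process and reused afterwards. A minimal sketch, with `FeatureStoreStub` standing in for an expensive client such as a Feast `FeatureStore` (the names here are illustrative, not Feast or BentoML API):

```python
from functools import lru_cache


class FeatureStoreStub:
    """Stand-in for an expensive-to-build client, e.g. a Feast FeatureStore."""
    init_count = 0  # tracks constructions, to show the cache works

    def __init__(self):
        FeatureStoreStub.init_count += 1

    def get_features(self, entity_id):
        return {"entity": entity_id}


@lru_cache(maxsize=1)
def get_store():
    # Built once per worker process, then reused by every request.
    return FeatureStoreStub()


def predict(entity_id):
    # Each request reuses the cached store instead of rebuilding it.
    return get_store().get_features(entity_id)
```

This keeps initialization out of the Docker build and off the per-request path, though unlike a real startup hook it pays the construction cost on the first request.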
I am also interested in this feature! |
me as well |
I am highly interested in this feature. I have been trying to ship a bento as an environment without an actual model in it, in an attempt to get a more dynamic service running which can load different models on startup (models that were, e.g., trained in a pipeline after the deployment of the bento service). This feature would be a huge help for that. Thanks.
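The "ship without a model, pick it at deploy time" idea above can be sketched as a startup hook that reads a model location from the environment. This is a generic illustration, not BentoML API; `MODEL_PATH`, `load_model_on_startup`, and the JSON "model" are all hypothetical stand-ins:

```python
import json
import os

_model = None  # populated once, at worker startup


def load_model_on_startup():
    """Hypothetical startup hook: load whichever model the deployment points at."""
    global _model
    # MODEL_PATH is set at deploy time or runtime, not baked into the image.
    with open(os.environ["MODEL_PATH"]) as f:
        _model = json.load(f)


def predict(x):
    # Requests use the model loaded at startup; no per-request loading.
    return _model["scale"] * x
```

Swapping models then only requires pointing `MODEL_PATH` at a different artifact and restarting the workers, with no image rebuild.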
This feature has been released in 1.0.22, enjoy! |
Is your feature request related to a problem? Please describe.
My ML service consumes an external service to read some data and filters its predictions accordingly. I need to be able to create an API client at startup of the service, initialize it with environment variables, and use it when handling prediction requests.
Describe the solution you'd like
Add startup / shutdown decorators, like this:
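A sketch of what such decorators might look like. This is a hypothetical API, not actual BentoML syntax: a `Service` object collects registered callbacks and runs them around its worker lifecycle.

```python
class Service:
    """Hypothetical service exposing startup/shutdown decorator hooks."""

    def __init__(self):
        self._startup, self._shutdown = [], []
        self.context = {}  # shared state the hooks populate for request handlers

    def on_startup(self, fn):
        self._startup.append(fn)
        return fn

    def on_shutdown(self, fn):
        self._shutdown.append(fn)
        return fn

    def start(self):
        for fn in self._startup:
            fn(self.context)

    def stop(self):
        for fn in self._shutdown:
            fn(self.context)


svc = Service()


@svc.on_startup
def connect(ctx):
    ctx["client"] = "connected"  # e.g. build the API client from env vars


@svc.on_shutdown
def disconnect(ctx):
    ctx.pop("client", None)      # e.g. client.disconnect()
```

Request handlers would then read the client from `svc.context` instead of creating one per request.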
Describe alternatives you've considered
Initiating the connection for each request.
Additional context
n/a