
Add startup / shutdown functions #1065

Closed
FlorianBorn opened this issue Sep 4, 2020 · 12 comments

Comments

FlorianBorn commented Sep 4, 2020

Is your feature request related to a problem? Please describe.
My ML service consumes an external service to read some data and filter its predictions accordingly.
I need a way to create an API client at service startup, initialize it with environment variables, and use it when handling prediction requests.

Describe the solution you'd like
Add startup / shutdown decorators, like this:

 from bentoml import BentoService, env, api, artifacts, startup, shutdown  # startup/shutdown are the proposed decorators
 from bentoml.adapters import DataframeInput
 from bentoml.artifact import SklearnModelArtifact

 @artifacts([SklearnModelArtifact('clf')])
 @env(pip_dependencies=["scikit-learn"])
 class MyMLService(BentoService):

     @startup()
     def startup(self, app):
         # Create the client once, when the service starts.
         env_vars = read_environment_variables()
         client = get_client(env_vars)
         app.state["client"] = client

     @api(input=DataframeInput())
     def predict(self, df):
         prediction = self.artifacts.clf.predict(df)
         filters = self.app.state["client"].read(...)
         return filter_prediction(prediction, filters)

     @shutdown()
     def shutdown(self, app):
         # Clean up the client when the service stops.
         app.state["client"].disconnect()

 if __name__ == "__main__":
     bento_service = MyMLService()
     bento_service.pack('clf', trained_classifier_model)
     bento_service.save_to_dir('/bentoml_bundles')

Describe alternatives you've considered
Initiating the connection on each request.

Additional context
n/a

parano (Member) commented Sep 4, 2020

Great suggestion @FlorianBorn!

This is also what I was planning to do for #1037, where a user wants to have access to the artifacts when initializing the service.

In your case, you can probably work around this by overriding the __init__ method, e.g.

 @artifacts([SklearnModelArtifact('clf')])
 @env(pip_dependencies=["scikit-learn"])
 class MyMLService(BentoService):

     def __init__(self):
         super().__init__()  # keep BentoService's own initialization intact
         env_vars = read_environment_variables()
         client = get_client(env_vars)
         self.app.state["client"] = client

         # Register a cleanup callback for process exit.
         import atexit

         def on_exit_callback():
             self.app.state["client"].disconnect()

         atexit.register(on_exit_callback)

     @api(input=DataframeInput())
     def predict(self, df):
         prediction = self.artifacts.clf.predict(df)
         filters = self.app.state["client"].read(...)
         return filter_prediction(prediction, filters)

liusy182 (Contributor) commented

An alternative implementation is to have child classes override predefined base methods for startup and shutdown. Do we need the flexibility provided by the decorator, or is overriding base methods preferred?
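
For illustration, a minimal sketch of the base-method alternative; the on_startup / on_shutdown method names and their no-op defaults here are hypothetical, not an existing BentoML API:

 # Hypothetical sketch: neither hook exists in BentoML at this point.
 class BentoService:
     def on_startup(self, app):
         """Called once when the API server starts. No-op by default."""

     def on_shutdown(self, app):
         """Called once before the API server stops. No-op by default."""

 class MyMLService(BentoService):
     def on_startup(self, app):
         # The subclass overrides the hook instead of using a decorator.
         app.state["client"] = get_client(read_environment_variables())

     def on_shutdown(self, app):
         app.state["client"].disconnect()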

parano (Member) commented Mar 11, 2021

On a related note: we also need hooks like on_docker_build. Some frameworks require downloading large files (e.g. HuggingFace, PaddleHub), and we want to run those initial downloads during docker build or save, instead of after an API server is up and running.
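
For illustration, a hypothetical sketch of such a hook, borrowing the svc-style API proposed later in this thread; on_docker_build is not an existing BentoML API, and the hook name and helper are made up here:

 # Hypothetical: on_docker_build is a proposed hook, not an existing API.
 @svc.on_docker_build
 def prefetch_weights():
     # Runs at docker build / save time, so large model files are baked
     # into the image instead of downloaded after the API server starts.
     from transformers import AutoModel
     AutoModel.from_pretrained("bert-base-uncased")  # example HuggingFace download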

stale bot commented Jun 11, 2021

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale bot added the stale label Jun 11, 2021
bojiang removed the stale label Jun 21, 2021
parano (Member) commented Jul 22, 2021

Update: we are working on this feature in the upcoming 1.0 release

parano (Member) commented Jan 19, 2022

We are proposing the following solution in BentoML 1.0, although it still needs further discussion:

 import bentoml
 from bentoml.service import service_context, request_context

 svc = bentoml.Service()

 @svc.on_startup
 def startup():
     env_vars = read_environment_variables()
     service_context["client"] = get_client(env_vars)

 @svc.api
 def predict(input):
     service_context["client"]...
     ...

• svc.on_startup should also provide an option to run the callback function only once, instead of once per worker

To-jak commented Feb 16, 2023

Hi! Do you have any updates on this topic?

This could be useful, for example, if the ML service requires large file artifacts to perform the prediction and these are not defined when the Docker image is built, but at the deployment stage or at runtime.

If there is an option to assign the event to a single service worker, would the other service workers wait for the callback to complete before executing?

tbusath8 commented Mar 9, 2023

This would also be very helpful for initializing a feature store object. Our team uses an open-source feature store called Feast and would like to initialize the Feast object on startup. We would then use that object on each subsequent request to fetch features, either from the local cache or through network calls.

Having to initialize this object during the Docker image build is not ideal, and initializing it for every request would negate the benefits of local feature caching and increase latency.
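
For illustration, a sketch of that pattern using the hook API proposed earlier in this thread; the Feast calls follow its public FeatureStore interface, while the repo path, feature name, and entity key are placeholders:

 from feast import FeatureStore

 @svc.on_startup
 def init_store():
     # Build the Feast client once per worker, not once per request,
     # so local feature caching keeps paying off.
     service_context["store"] = FeatureStore(repo_path="feature_repo")

 @svc.api
 def predict(input):
     store = service_context["store"]
     features = store.get_online_features(
         features=["driver_stats:avg_daily_trips"],  # placeholder feature
         entity_rows=[{"driver_id": 1001}],          # placeholder entity
     ).to_dict()
     ...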

cpatrickalves commented

I am also interested in this feature!

prodigy-sub commented

me as well

jgeysen commented Jun 15, 2023

I am highly interested in this feature.

I have been trying to ship a Bento as an environment without an actual model in it, to get a more dynamic service that can load different models on startup (e.g. models trained in a pipeline after the Bento service was deployed). This feature would be a huge help. Thanks.

frostming (Contributor) commented Jun 16, 2023

This feature has been released in 1.0.22, enjoy!
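
For reference, a minimal sketch of the lifecycle hooks as released in the 1.0.x line; read_environment_variables and get_client stand in for the user code from the original request, and context.state is the per-worker state dict from the BentoML docs:

 import bentoml

 svc = bentoml.Service("my_ml_service")

 @svc.on_startup
 def connect(context: bentoml.Context):
     # Runs in each API worker before it starts serving requests.
     env_vars = read_environment_variables()         # user-defined helper
     context.state["client"] = get_client(env_vars)  # user-defined helper

 @svc.on_shutdown
 def disconnect(context: bentoml.Context):
     # Runs in each worker during server shutdown.
     context.state["client"].disconnect()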
