
Allow access to artifacts in the service __init__(self) during pack process #1037

Closed
shihgianlee opened this issue Aug 27, 2020 · 3 comments

shihgianlee commented Aug 27, 2020

Is your feature request related to a problem? Please describe.
In a BentoML service class, we don't have access to the artifacts (e.g. self.artifacts.my_artifact) inside the __init__(self) method, because artifacts are only populated by the pack process after the service is constructed. The reason I want access to the artifacts in __init__ is so that I can pass an artifact into my own class's initialization, e.g. self._provider = Provider(self.artifacts.my_artifact). Later, in the service's predict method, I can then call self._provider.fetch_predicted_value, where the predicted value is computed from the prediction produced by my_artifact.
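Concretely, this is the pattern I would like to be able to write (a sketch against the 0.x API; Provider, my_artifact, and fetch_predicted_value are placeholder names from my use case):

import bentoml
from bentoml.adapters import DataframeInput
from bentoml.artifact import PickleArtifact

from my_package.provider import Provider  # my own wrapper class (hypothetical)

@bentoml.artifacts([PickleArtifact('my_artifact')])
class MyService(bentoml.BentoService):

    def __init__(self):
        super().__init__()
        # what I want to write -- fails today, because self.artifacts
        # is not populated until pack() runs after construction
        self._provider = Provider(self.artifacts.my_artifact)

    @bentoml.api(input=DataframeInput())
    def predict(self, df):
        return self._provider.fetch_predicted_value(df)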

Describe the solution you'd like
In a Slack discussion thread, a service_init callback was proposed to allow the service initialization described above.
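A rough shape of that proposal might be (purely hypothetical, not an existing BentoML API):

class MyService(bentoml.BentoService):

    def service_init(self):
        # proposed callback: invoked after artifacts are packed/loaded,
        # so artifact-dependent state can be built here instead of __init__
        self._provider = Provider(self.artifacts.my_artifact)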

Describe alternatives you've considered
In the same Slack thread, a couple of solutions were proposed. I ended up using the @property decorator for my use case. It is a bit more code, but it achieves the end result described above and kept my code simple.

@property
def provider(self):
    # lazily build Foo on first access, after pack() has filled self.artifacts
    if not self._provider:
        self._provider = Foo(self.artifacts.my_csv, self.artifacts.my_json)
    return self._provider
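This assumes self._provider is set to None in __init__; the API method then goes through the property, which only runs at request time, after the artifacts are available (DataframeInput is assumed here for illustration):

def __init__(self):
    super().__init__()
    self._provider = None  # built lazily by the property above

@bentoml.api(input=DataframeInput())
def predict(self, df):
    # safe: pack() has already run by the time requests arrive
    return self.provider.fetch_predicted_value(df)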
parano added the help-wanted and new feature labels Aug 27, 2020
shihgianlee (Author) commented

We encountered the same problem again: we needed to load a JSON config file that lives within the same package, but couldn't do it in __init__. We had to use the same workaround for our config, which is awkward. I see the help-wanted label on this issue. Is there anything I can do to help?
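For reference, our config workaround looks roughly like this (the file name and location are illustrative):

import json
import os

@property
def config(self):
    # resolve the config relative to this module and load it lazily,
    # mirroring the provider property above
    if self._config is None:
        config_path = os.path.join(os.path.dirname(__file__), 'config.json')
        with open(config_path) as f:
            self._config = json.load(f)
    return self._config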

stale bot commented Jun 4, 2021

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale bot added the stale label Jun 4, 2021
stale bot closed this as completed Jun 18, 2021
parano reopened this Jun 19, 2021
stale bot removed the stale label Jun 19, 2021
parano (Member) commented Jan 19, 2022

This is now possible in BentoML 1.0, where models are saved to the local model store before the Bento build process. Users can also create a custom Runner class and define the model initialization process:

import bentoml
import pandas as pd

class CustomRunner(bentoml.Runner):

    def _setup(self):
        # load the model from the local model store
        model_info = bentoml.models.get('...')
        self.model = load_model(model_info.path)  # framework-specific loader

    def _run_batch(self, df: pd.DataFrame):
        # run inference on a batch input
        ...

runner = CustomRunner('my_runner')
svc = bentoml.Service('my_service', runners=[runner])

...
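For the pattern in the original post, the Provider wrapper would move into _setup, where the loaded model is available (a sketch reusing the hypothetical names from the first comment):

class ProviderRunner(bentoml.Runner):

    def _setup(self):
        model_info = bentoml.models.get('my_artifact:latest')
        # build the wrapper once, with the model in hand -- exactly what
        # the original issue wanted to do in __init__
        self._provider = Provider(load_model(model_info.path))

    def _run_batch(self, df: pd.DataFrame):
        return self._provider.fetch_predicted_value(df)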

parano closed this as completed Jan 19, 2022