Hi everybody,

This issue is just some feedback on the documentation, which I'm studying to hopefully make it work in my lab.

This page says that serving a model requires a service and a deployment, which can be generated respectively with the commands:

```
ks generate tf-serving-service servicename
```

and

```
ks generate tf-serving-deployment-gcp ${MODEL_COMPONENT}
```

Then it says that if I want to use GPUs, I should set the parameter (under the deployment parameters, I guess) like this:

```
ks param set ${MODEL_COMPONENT} numGpus X
```

where X is the number of GPUs I want to use, in case I want more than one, right?
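To summarize my reading of the documentation, the full sequence would be something like the sketch below. This is just my understanding, not a tested recipe: I'm assuming a ksonnet app has already been initialized, and `mymodel`, `mymodel-service`, and `default` are placeholder names I made up.

```shell
# Inside an existing ksonnet app directory (ks init / ks env add already done).

# Generate the service and the (GCP) deployment for a model component:
ks generate tf-serving-service mymodel-service
ks generate tf-serving-deployment-gcp mymodel

# Request one GPU for the serving deployment:
ks param set mymodel numGpus 1

# Deploy both components to an environment (here called "default"):
ks apply default -c mymodel-service -c mymodel
```

If that is roughly what the docs intend, it would help to show the steps together like this in one place.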
This paragraph links to the README of a GitHub example where a model is served with a GPU. In that example, the command to generate the model is:

```
ks generate tf-serving model1 --name=coco
```

This is neither a service nor a deployment, and I don't recognize the other parameters, since their names differ from the ones in the documentation, except for `numGpus`.

I find this confusing because I am new to the field. Maybe the documentation describes GCP only?

Thank you for your help.
Issue-Label Bot is automatically applying the label improvement/enhancement to this issue, with a confidence of 0.56. Please mark this comment with 👍 or 👎 to give our bot feedback!