From 39bf96ebba1587e7f752cb5d645cbb348d014c87 Mon Sep 17 00:00:00 2001
From: Scott Lee
Date: Mon, 5 Aug 2019 19:39:53 -0700
Subject: [PATCH] Add references to kustomize issues (#1015)

---
 content/docs/components/serving/istio.md              | 2 ++
 content/docs/components/serving/pytorchserving.md     | 2 ++
 content/docs/components/serving/seldon.md             | 3 +++
 content/docs/components/serving/tfserving_new.md      | 2 ++
 content/docs/components/serving/trtinferenceserver.md | 2 ++
 content/docs/components/training/chainer.md           | 2 ++
 content/docs/components/training/mpi.md               | 2 ++
 content/docs/components/training/mxnet.md             | 2 ++
 8 files changed, 17 insertions(+)

diff --git a/content/docs/components/serving/istio.md b/content/docs/components/serving/istio.md
index 8557aefa87..f0606896c1 100644
--- a/content/docs/components/serving/istio.md
+++ b/content/docs/components/serving/istio.md
@@ -35,6 +35,8 @@ the pod has annotation.
 
 ## Kubeflow TF Serving with Istio
 
+_This section has not yet been converted to kustomize, please refer to [kubeflow/manifests/issues/18](https://github.com/kubeflow/manifests/issues/18)._
+
 After installing Istio, we can deploy the TF Serving component as in
 [TensorFlow Serving](/docs/components/tfserving_new/)
 with additional params:
diff --git a/content/docs/components/serving/pytorchserving.md b/content/docs/components/serving/pytorchserving.md
index b968eb64e8..32f8f8ea90 100644
--- a/content/docs/components/serving/pytorchserving.md
+++ b/content/docs/components/serving/pytorchserving.md
@@ -35,6 +35,8 @@ You can find more details about wrapping a model with seldon-core [here](https:/
 
 ## Deploying the model to your Kubeflow cluster
 
+_This section has not yet been converted to kustomize, please refer to [kubeflow/manifests/issues/10](https://github.com/kubeflow/manifests/issues/10)._
+
 We need to have seldon component deployed, you can deploy the model once trained using a pre-defined ksonnet component, similar to
 [this](https://github.com/kubeflow/examples/blob/master/pytorch_mnist/ks_app/components/serving_model.jsonnet) example.
 
 Create an environment variable, `${KF_ENV}`, to represent a conceptual
diff --git a/content/docs/components/serving/seldon.md b/content/docs/components/serving/seldon.md
index 9f63743cc5..ab3f944720 100644
--- a/content/docs/components/serving/seldon.md
+++ b/content/docs/components/serving/seldon.md
@@ -5,6 +5,9 @@ weight = 40
 +++
 
 ## Serve a model using Seldon
+
+_This section has not yet been converted to kustomize, please refer to [kubeflow/manifests/issues/10](https://github.com/kubeflow/manifests/issues/10)._
+
 [Seldon-core](https://github.com/SeldonIO/seldon-core) provides deployment for any machine learning runtime that can be [packaged in a Docker container](https://docs.seldon.io/projects/seldon-core/en/latest/wrappers/README.html).
 
 Install the seldon package:
diff --git a/content/docs/components/serving/tfserving_new.md b/content/docs/components/serving/tfserving_new.md
index 1b2278f8d5..e4d19fc118 100644
--- a/content/docs/components/serving/tfserving_new.md
+++ b/content/docs/components/serving/tfserving_new.md
@@ -6,6 +6,8 @@ weight = 51
 
 ## Serving a model
 
+_This section has not yet been converted to kustomize, please refer to [kubeflow/website/issues/958](https://github.com/kubeflow/website/issues/958)._
+
 We treat each deployed model as two [components](https://ksonnet.io/docs/tutorial#2-generate-and-deploy-an-app-component) in your APP: one tf-serving-deployment, and one tf-serving-service.
 We can think of the service as a model, and the deployment as the version of the model.
diff --git a/content/docs/components/serving/trtinferenceserver.md b/content/docs/components/serving/trtinferenceserver.md
index bdaf371a15..a4fdc1a0df 100644
--- a/content/docs/components/serving/trtinferenceserver.md
+++ b/content/docs/components/serving/trtinferenceserver.md
@@ -81,6 +81,8 @@ $ gsutil cp -r model_store gs://inference-server-model-store
 
 ## Kubernetes Generation and Deploy
 
+_This section has not yet been converted to kustomize, please refer to [kubeflow/website/issues/959](https://github.com/kubeflow/website/issues/959)._
+
 Next use ksonnet to generate Kubernetes configuration for the NVIDIA TensorRT
 Inference Server deployment and service. The --image option points to the
 NVIDIA Inference Server container in the [NVIDIA GPU Cloud
diff --git a/content/docs/components/training/chainer.md b/content/docs/components/training/chainer.md
index 96438dd99c..433b729624 100644
--- a/content/docs/components/training/chainer.md
+++ b/content/docs/components/training/chainer.md
@@ -31,6 +31,8 @@ An **alpha** version of [Chainer](https://chainer.org/) support was introduced w
 
 ## Verify that Chainer support is included in your Kubeflow deployment
 
+_This section has not yet been converted to kustomize, please refer to [kubeflow/manifests/issues/232](https://github.com/kubeflow/manifests/issues/232)._
+
 Check that the Chainer Job custom resource is installed
 
 ```shell
diff --git a/content/docs/components/training/mpi.md b/content/docs/components/training/mpi.md
index 2c92dd9f51..02416d8f80 100644
--- a/content/docs/components/training/mpi.md
+++ b/content/docs/components/training/mpi.md
@@ -8,6 +8,8 @@ This guide walks you through using MPI for training.
 
 ## Installation
 
+_This section has not yet been converted to kustomize, please refer to [kubeflow/manifests/issues/227](https://github.com/kubeflow/manifests/issues/227)._
+
 If you haven’t already done so please follow the [Getting Started Guide](https://www.kubeflow.org/docs/started/getting-started/) to deploy Kubeflow.
 
 An alpha version of MPI support was introduced with Kubeflow 0.2.0. You must be using a version of Kubeflow newer than 0.2.0.
diff --git a/content/docs/components/training/mxnet.md b/content/docs/components/training/mxnet.md
index d1230c6b2a..ab650465d1 100644
--- a/content/docs/components/training/mxnet.md
+++ b/content/docs/components/training/mxnet.md
@@ -8,6 +8,8 @@ This guide walks you through using MXNet with Kubeflow.
 
 ## Installing MXNet Operator
 
+_This section has not yet been converted to kustomize, please refer to [kubeflow/manifests/issues/228](https://github.com/kubeflow/manifests/issues/228)._
+
 If you haven't already done so please follow the [Getting Started Guide](https://www.kubeflow.org/docs/started/getting-started/) to deploy Kubeflow.
 
 A version of MXNet support was introduced with Kubeflow 0.2.0. You must be using a version of Kubeflow newer than 0.2.0.