Add references to kustomize issues (#1015)
scottilee authored and k8s-ci-robot committed Aug 6, 2019
1 parent f1f3a59 commit 39bf96e
Showing 8 changed files with 17 additions and 0 deletions.
2 changes: 2 additions & 0 deletions content/docs/components/serving/istio.md
@@ -35,6 +35,8 @@ the pod has annotation.

## Kubeflow TF Serving with Istio

_This section has not yet been converted to kustomize; please refer to [kubeflow/manifests/issues/18](https://github.com/kubeflow/manifests/issues/18)._

After installing Istio, we can deploy the TF Serving component as in
[TensorFlow Serving](/docs/components/tfserving_new/) with
additional params:
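The parameter list is cut off by the diff view above. As a hedged sketch (the component name and the `injectIstio` parameter are assumptions drawn from the ksonnet-era serving docs, not from this commit), enabling Istio for a TF Serving component might look like:

```shell
# Hypothetical sketch; MODEL_COMPONENT and the injectIstio parameter are assumptions.
MODEL_COMPONENT=mnist-serving
ks param set ${MODEL_COMPONENT} injectIstio true   # route traffic through the Istio sidecar
ks apply ${KF_ENV} -c ${MODEL_COMPONENT}
```

These commands require an existing ksonnet app and a cluster, so they are illustrative only.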
2 changes: 2 additions & 0 deletions content/docs/components/serving/pytorchserving.md
@@ -35,6 +35,8 @@ You can find more details about wrapping a model with seldon-core [here](https:/

## Deploying the model to your Kubeflow cluster

_This section has not yet been converted to kustomize; please refer to [kubeflow/manifests/issues/10](https://github.com/kubeflow/manifests/issues/10)._

You need the seldon component deployed. Once the model is trained, you can deploy it using a pre-defined ksonnet component, similar to [this](https://github.com/kubeflow/examples/blob/master/pytorch_mnist/ks_app/components/serving_model.jsonnet) example.

Create an environment variable, `${KF_ENV}`, to represent a conceptual
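The hunk above is truncated mid-sentence; the surrounding docs set up a ksonnet environment variable. A minimal runnable sketch (the environment name `default` is an assumption):

```shell
# KF_ENV names a conceptual ksonnet environment (e.g. default, cloud); "default" is assumed.
export KF_ENV=default
echo "Using environment: ${KF_ENV}"
```

With a real ksonnet app you would then pass it to commands such as `ks apply ${KF_ENV} -c <component>`.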
3 changes: 3 additions & 0 deletions content/docs/components/serving/seldon.md
@@ -5,6 +5,9 @@ weight = 40
+++

## Serve a model using Seldon

_This section has not yet been converted to kustomize; please refer to [kubeflow/manifests/issues/10](https://github.com/kubeflow/manifests/issues/10)._

[Seldon-core](https://github.com/SeldonIO/seldon-core) provides deployment for any machine learning runtime that can be [packaged in a Docker container](https://docs.seldon.io/projects/seldon-core/en/latest/wrappers/README.html).

Install the seldon package:
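The install command itself is cut off by the diff view. A hedged ksonnet-era sketch (the package, prototype, and component names are assumptions, not taken from this commit):

```shell
# Assumed ksonnet commands for the pre-kustomize seldon install.
ks pkg install kubeflow/seldon       # fetch the package into the ksonnet app
ks generate seldon seldon --name=seldon
ks apply ${KF_ENV} -c seldon         # deploy into the environment named by KF_ENV
```

These commands assume an existing ksonnet app registry entry for kubeflow and a running cluster.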
2 changes: 2 additions & 0 deletions content/docs/components/serving/tfserving_new.md
@@ -6,6 +6,8 @@ weight = 51

## Serving a model

_This section has not yet been converted to kustomize; please refer to [kubeflow/website/issues/958](https://github.com/kubeflow/website/issues/958)._

We treat each deployed model as two [components](https://ksonnet.io/docs/tutorial#2-generate-and-deploy-an-app-component)
in your app: a tf-serving-deployment and a tf-serving-service.
Think of the service as the model, and the deployment as a version of that model.
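The two-component split described above can be sketched as follows (the prototype and model names are assumptions from the ksonnet-era docs, not from this commit):

```shell
# One service per model, one deployment per model version (all names assumed).
ks generate tf-serving-service mnist-service --name=mnist
ks generate tf-serving-deployment-gcp mnist-v1 --name=mnist   # deployment for version "v1"
```

Rolling out a new model version would then mean generating another deployment component against the same service.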
2 changes: 2 additions & 0 deletions content/docs/components/serving/trtinferenceserver.md
@@ -81,6 +81,8 @@ $ gsutil cp -r model_store gs://inference-server-model-store

## Kubernetes Generation and Deploy

_This section has not yet been converted to kustomize; please refer to [kubeflow/website/issues/959](https://github.com/kubeflow/website/issues/959)._

Next, use ksonnet to generate Kubernetes configuration for the NVIDIA TensorRT
Inference Server deployment and service. The `--image` option points to
the NVIDIA Inference Server container in the [NVIDIA GPU Cloud
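As a hedged sketch of the `--image` usage described above (the prototype name, component name, and image tag are all assumptions; the registry path follows `nvcr.io` conventions, not this commit):

```shell
# Assumed prototype and image names; --image points at the NGC container registry.
ks generate nvidia-inference-server trt-server \
  --image=nvcr.io/nvidia/tensorrtserver:19.04-py3
ks apply ${KF_ENV} -c trt-server
```

This requires NGC registry credentials and a cluster with GPU nodes, so it is illustrative only.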
2 changes: 2 additions & 0 deletions content/docs/components/training/chainer.md
@@ -31,6 +31,8 @@ An **alpha** version of [Chainer](https://chainer.org/) support was introduced w

## Verify that Chainer support is included in your Kubeflow deployment

_This section has not yet been converted to kustomize; please refer to [kubeflow/manifests/issues/232](https://github.com/kubeflow/manifests/issues/232)._

Check that the ChainerJob custom resource is installed:

```shell
# Assumed command; the original diff is truncated here. The output should
# include chainerjobs.kubeflow.org if Chainer support is installed.
kubectl get crd
```
2 changes: 2 additions & 0 deletions content/docs/components/training/mpi.md
@@ -8,6 +8,8 @@ This guide walks you through using MPI for training.

## Installation

_This section has not yet been converted to kustomize; please refer to [kubeflow/manifests/issues/227](https://github.com/kubeflow/manifests/issues/227)._

If you haven’t already done so, please follow the [Getting Started Guide](https://www.kubeflow.org/docs/started/getting-started/) to deploy Kubeflow.

An alpha version of MPI support was introduced with Kubeflow 0.2.0. You must be using a version of Kubeflow newer than 0.2.0.
2 changes: 2 additions & 0 deletions content/docs/components/training/mxnet.md
@@ -8,6 +8,8 @@ This guide walks you through using MXNet with Kubeflow.

## Installing MXNet Operator

_This section has not yet been converted to kustomize; please refer to [kubeflow/manifests/issues/228](https://github.com/kubeflow/manifests/issues/228)._

If you haven't already done so, please follow the [Getting Started Guide](https://www.kubeflow.org/docs/started/getting-started/) to deploy Kubeflow.

A version of MXNet support was introduced with Kubeflow 0.2.0. You must be using a version of Kubeflow newer than 0.2.0.
