MNIST on Kubeflow

This example guides you through the process of taking an example model, modifying it to run better within Kubeflow, and serving the resulting trained model.

Follow the version of the guide that is specific to how you have deployed Kubeflow

  1. MNIST on Kubeflow on GCP
  2. MNIST on Kubeflow on AWS
  3. MNIST on Kubeflow on IBM Cloud
  4. MNIST on Kubeflow on vanilla k8s
  5. MNIST on other platforms

MNIST on Kubeflow on GCP

Follow these instructions to run the MNIST tutorial on GCP

  1. Follow the GCP instructions to deploy Kubeflow with IAP

  2. Launch a Jupyter notebook

    • The tutorial has been tested using the Jupyter TensorFlow 1.15 image
  3. Launch a terminal in Jupyter and clone the kubeflow examples repo

    git clone https://github.com/kubeflow/examples.git git_kubeflow-examples
    
    • Tip: When you start a terminal in Jupyter, run the command bash to start a bash shell, which is friendlier than the default shell

    • Tip: You can change the URL from '/tree' to '/lab' to switch to JupyterLab

  4. Open the notebook mnist/mnist_gcp.ipynb

  5. Follow the notebook to train and deploy MNIST on Kubeflow

MNIST on Kubeflow on AWS

Follow these instructions to run the MNIST tutorial on AWS

  1. Follow the AWS instructions to deploy Kubeflow on AWS

  2. Launch a Jupyter notebook

    • The tutorial has been tested using the Jupyter TensorFlow 1.15 image
  3. Launch a terminal in Jupyter and clone the kubeflow examples repo

    git clone https://github.com/kubeflow/examples.git git_kubeflow-examples
    
    • Tip: When you start a terminal in Jupyter, run the command bash to start a bash shell, which is friendlier than the default shell

    • Tip: You can change the URL from '/tree' to '/lab' to switch to JupyterLab

  4. Open the notebook mnist/mnist_aws.ipynb

  5. Follow the notebook to train and deploy MNIST on Kubeflow

MNIST on Kubeflow on IBM Cloud

Follow these instructions to run the MNIST tutorial on IBM Cloud

  1. Follow the IBM Cloud instructions to deploy Kubeflow on IBM Cloud

  2. Launch a Jupyter notebook

    • For IBM Cloud, the default NFS storage does not support some of the Python package installations. Therefore, we need to create the notebook with Don't use Persistent Storage for User's home selected.
    • Due to a notebook user permission issue, we need to use a custom image that worked in the previous version.
      • The tutorial has been tested on the image: gcr.io/kubeflow-images-public/tensorflow-1.13.1-notebook-cpu:v0.5.0
  3. Launch a terminal in Jupyter and clone the kubeflow examples repo

    git clone https://github.com/kubeflow/examples.git git_kubeflow-examples
    
    • Tip: When you start a terminal in Jupyter, run the command bash to start a bash shell, which is friendlier than the default shell

    • Tip: You can change the URL from '/tree' to '/lab' to switch to JupyterLab

  4. Open the notebook mnist/mnist_ibm.ipynb

  5. Follow the notebook to train and deploy MNIST on Kubeflow

MNIST on Kubeflow on Vanilla k8s

  1. Follow these instructions to deploy Kubeflow.

  2. Set up Docker credentials (see the Prerequisites section below).

  3. Launch a Jupyter notebook

    • The tutorial is run on the Jupyter TensorFlow 1.15 image.
  4. Launch a terminal in Jupyter and clone the kubeflow/examples repo

    git clone https://github.com/kubeflow/examples.git git_kubeflow-examples

  5. Open the notebook mnist/mnist_vanilla_k8s.ipynb

  6. Follow the notebook to train and deploy MNIST on Kubeflow

Prerequisites

Configure docker credentials

Why do we need this?

Kaniko is used by Kubeflow Fairing to build the model image every time the notebook is run and to deploy a fresh model. The newly built image is pushed to the DOCKER_REGISTRY and pulled from there by the subsequent resources.
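
The notebooks contain the authoritative setup; purely as an illustrative sketch of how Fairing is pointed at Kaniko's in-cluster builder (the registry and base image below are placeholder assumptions, not values from this repo):

from kubeflow import fairing

DOCKER_REGISTRY = "index.docker.io/<your-user>"  # placeholder: your registry

# The 'cluster' builder runs the image build inside the cluster with Kaniko and
# pushes the result to DOCKER_REGISTRY. Kaniko also needs a context source
# (GCS, S3, MinIO, ...) to fetch the build context; the notebooks configure
# that per platform, so it is omitted from this sketch.
fairing.config.set_builder(
    name="cluster",
    registry=DOCKER_REGISTRY,
    base_image="tensorflow/tensorflow:1.15.2-py3",  # placeholder base image
)
# fairing.config.set_deployer(name="job") followed by fairing.config.run()
# then builds, pushes, and launches the training job with the new image.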

Encode your Docker registry username and password in base64:

echo -n USER:PASSWORD | base64

Create a config.json file with your Docker registry URL and the previously generated base64 string:

{
	"auths": {
		"https://index.docker.io/v1/": {
			"auth": "xxxxxxxxxxxxxxx"
		}
	}
}
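
If you prefer to script this step, the following small Python helper (illustrative only; USER and PASSWORD are placeholders for your own credentials) produces the same config.json:

import base64
import json

USER, PASSWORD = "your-user", "your-password"  # placeholder credentials
auth = base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode()

# Same structure as the config.json shown above
config = {"auths": {"https://index.docker.io/v1/": {"auth": auth}}}
with open("config.json", "w") as f:
    json.dump(config, f, indent=2)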

Create a ConfigMap from the Docker config in the namespace you're using:

kubectl create --namespace ${NAMESPACE} configmap docker-config --from-file=<path to config.json>
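
Equivalently, for example from inside the notebook, the ConfigMap can be created with the official Kubernetes Python client. This sketch assumes config.json is in the current directory, and NAMESPACE is a placeholder for your own namespace:

from kubernetes import client, config

NAMESPACE = "kubeflow-user"   # placeholder: use your own namespace
config.load_kube_config()     # use config.load_incluster_config() inside a pod

with open("config.json") as f:
    docker_config = f.read()

# ConfigMap named docker-config with config.json as its single key
configmap = client.V1ConfigMap(
    metadata=client.V1ObjectMeta(name="docker-config"),
    data={"config.json": docker_config},
)
client.CoreV1Api().create_namespaced_config_map(namespace=NAMESPACE, body=configmap)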

Source documentation: Kaniko docs