Skaffold is configured to build the artifacts on your cluster using kaniko.

`skaffold.yaml` is currently configured to run on

- project: issue-label-bot-dev
- cluster: issue-label-bot-dev-kf
- zone: us-east1-d
Set up a namespace for your development:

- Create the namespace

  ```
  kubectl create namespace ${NAMESPACE}
  ```

- Modify `skaffold.yaml`; change `cluster.namespace` to `${NAMESPACE}`
- Use a namespace without Istio sidecar injection turned on
  - Due to GoogleContainerTools/skaffold#3442, Skaffold won't work with Kaniko in namespaces with Istio sidecar injection turned on
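For orientation, the part of `skaffold.yaml` this step touches looks roughly like the sketch below (field names follow the Skaffold cluster-build schema; the namespace value is a placeholder and `pullSecretName` is an assumption based on the secret copied in the next step):

```yaml
build:
  cluster:
    # Replace with the namespace created above
    namespace: my-dev-namespace
    # Assumption: the GCP secret used to push images
    pullSecretName: user-gcp-sa
```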
- Copy the GCP secret to use to push images

  ```
  NAMESPACE=<new kubeflow namespace>
  SOURCE=kubeflow
  NAME=user-gcp-sa
  SECRET=$(kubectl -n ${SOURCE} get secrets ${NAME} -o jsonpath="{.data.${NAME}\.json}" | base64 -d)
  kubectl create -n ${NAMESPACE} secret generic ${NAME} --from-literal="${NAME}.json=${SECRET}"
  ```
- Set the namespace in `deployment/overlays/dev/kustomization.yaml`
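The namespace is set via the kustomization's `namespace` field; a sketch (the namespace value is a placeholder, and the rest of the overlay is assumed unchanged):

```yaml
# deployment/overlays/dev/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# Replace with the namespace created above
namespace: my-dev-namespace
```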
- Start skaffold

  ```
  skaffold dev -v info --cleanup=false
  ```
- Port-forward the local port to the remote service

  ```
  kubectl -n ${NAMESPACE} port-forward service/issue-embedding-server 8080:80
  ```

  - TODO(jlewi): Skaffold can supposedly create the port-forward automatically; need to investigate. It looks like this might require an additional flag to skaffold and require the ports to be declared.
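On the port-forwarding TODO: Skaffold accepts a `--port-forward` flag together with a `portForward` section in `skaffold.yaml` that declares the ports; a sketch using the service and ports from the command above (the namespace value is a placeholder, and whether the pinned Skaffold version supports this is unverified):

```yaml
portForward:
- resourceType: service
  resourceName: issue-embedding-server
  # Assumption: same namespace as the rest of this setup
  namespace: my-dev-namespace
  port: 80
  localPort: 8080
```

If this works, `skaffold dev --port-forward` should replace the manual `kubectl port-forward`.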
- Send a prediction request

  ```
  curl -d '{"title":"some title", "body":"sometext"}' -H "Content-Type: application/json" -X POST http://localhost:8080/text
  ```

  - TODO(jlewi): Output is binary, so how should we decode it?
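The same request can be sent from Python, which may make it easier to experiment with decoding the binary response; a minimal sketch using only the standard library (endpoint and payload taken from the curl command above; the decoding itself remains the open TODO):

```python
import json
import urllib.request

def build_request(title: str, body: str,
                  url: str = "http://localhost:8080/text") -> urllib.request.Request:
    # Encode the issue title/body as the JSON payload the server expects.
    payload = json.dumps({"title": title, "body": body}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request("some title", "sometext")
# To actually send it (requires the port-forward above):
#   raw = urllib.request.urlopen(req).read()  # raw bytes; how to decode them is the TODO
```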
- Kaniko will cache the output of RUN commands using remote layers (info)

  This means the command

  ```
  RUN pip install -r requirements.worker.txt
  ```

  will result in a cached layer.

  - TODO(jlewi): When using skaffold and kaniko, it's not clear whether the cache is invalidated when requirements.worker.txt changes.
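Kaniko's layer caching is toggled from the kaniko artifact config in `skaffold.yaml`; a sketch (the image name is a placeholder, and whether this cache is correctly invalidated when the requirements file changes is exactly the open question above):

```yaml
build:
  artifacts:
  - image: my-image  # placeholder image name
    kaniko:
      # Enable caching of RUN layers in a remote repository;
      # omit this block to disable the cache entirely.
      cache: {}
```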
- kubeflow/code-intelligence#78: Can we use Skaffold sync to sync the Python code into the container so we can skip rebuilds?
- Look into using skaffold profiles