SDK/Components - Renamed DockerContainer spec to Container #323
Conversation
Force-pushed from 0233065 to 486af9a
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: Ark-kun

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files. Approvers can indicate their approval by writing `/approve` in a comment.
…beflow#323)

* Improve auto_deploy to support changing zone and testing changes.
* Explicitly delete the storage deployment; the delete script won't delete it by design, because we don't want to destroy data.
* Instead of depending on a dummy kubeconfig file, call generate/apply for platform and then for k8s.
* For repos, take them in the form ${ORG}/${NAME}@Branch. This matches what the other test script does. It also allows us to check out the repos from people's forks, which makes testing changes easier.
* Move logic in checkout-snapshot.sh into repo_clone_snapshot.py. This is cleaner than having the Python script shell out to a shell script.
* Move the logic in deployment-workflow.sh into create_kf_instance.py.
* Add an option in create_kf_instance.py to read and parse the snapshot.json file rather than doing it in bash.
* Change the arg name to datadir instead of nfs_mount, because NFS is an assumption of how it is run on K8s.
* Check out the source into NFS to make it easier to debug.
* Add a bash script to set the images using yq.
* Add a wait operation for deletes and use it to wait for deletion of storage.
* Rename init.sh to auto_deploy.sh to make it more descriptive. Also modify auto_deploy.sh so we can run it locally.
* Use the existing worker CI image as a base image to deduplicate the Dockerfile.
* Attach labels to the deployment, not the cluster:
  * We want to use deployments, not clusters, to decide what to recycle.
  * Deployments are global but clusters are zonal, and we want to be able to move to different zones to deal with issues like stockouts.
  * The GCP API does return labels on deployments.
* We can figure out which deployment to recycle just by looking at the insertTime; we don't need to depend on deployment labels.
* Add retries to deal with kubeflow/kubeflow#1933.
* Fix lint.
* With K8s 1.11 we need to set volumeName, otherwise we get "storage class not found". Related to kubeflow/kubeflow#2475.
* Fix lint.
* Change cron job to run every 12 hours.
  * This should be the current schedule, but it looks like it was never checked in.
  * We want to leave clusters up long enough to facilitate debugging.

* add s3 endpoint instructions for admin use case
* Update kfp-admin-guide.md
* Apply suggestions from code review (Co-authored-by: Animesh Singh <singhan@us.ibm.com>)
* Align with new suggestions (Co-authored-by: Animesh Singh <singhan@us.ibm.com>)
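The `${ORG}/${NAME}@Branch` repo specifier mentioned in the commits above could be parsed with a small helper like this. This is a sketch only; the function name, the default branch, and the error handling are assumptions for illustration, not the actual repo_clone_snapshot.py implementation:

```python
def parse_repo_spec(spec, default_branch="master"):
    """Parse a repo spec of the form ORG/NAME or ORG/NAME@BRANCH.

    Returns an (org, name, branch) tuple; the branch falls back to
    default_branch when no "@BRANCH" suffix is given.
    """
    repo, _, branch = spec.partition("@")
    org, _, name = repo.partition("/")
    if not org or not name:
        raise ValueError("Invalid repo spec: %r" % spec)
    return org, name, branch or default_branch


# Branch given explicitly:
print(parse_repo_spec("kubeflow/testing@my-feature"))
# No branch, so the default is used:
print(parse_repo_spec("kubeflow/kubeflow"))
```

A specifier in this form is enough to check out a fork's branch, which is what makes testing changes from contributors' forks easier.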
No need for the longer name.