Fresh install renders 'no matches for kind "Application"' #1933
Comments
Thanks Pete. It sounds like the CRD isn't getting registered, or possibly the CRD is getting registered with the wrong scope. Is this on master?
Yes.
I'm able to reproduce it in branch
Hi @cheyang, I encountered the same issue, but it disappears when I re-apply the ksonnet app. Any idea why?
You should try
The application component has a dependency on metacontroller. This is what kfctl.sh does on master.
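If you are applying components with ks directly rather than through kfctl.sh, a minimal sketch of that ordering might look like the following; the environment name default is an assumption and may differ in your app:

```sh
# Sketch only: run from inside the ksonnet app directory. The environment name
# "default" and the component names are assumptions and may differ in your setup.
# Apply metacontroller first, since the application component depends on it.
ks apply default -c metacontroller
ks apply default -c application
```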
You can run this on Alibaba Cloud by following the document below, which is in Chinese: https://yq.aliyun.com/articles/686672
Has anyone found a way to reproduce this?
When I run ks show default -c metacontroller -c application > default.yaml
I get: ERROR finding app root from starting path: : unable to find ksonnet project
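That "unable to find ksonnet project" error usually just means ks was invoked outside of the ksonnet application directory (the one containing app.yaml). A sketch, assuming a hypothetical app directory named ks_app:

```sh
# "ks_app" is a hypothetical name; use whatever directory your ksonnet app
# was generated into, then rerun the show command from inside it.
cd ks_app
ks show default -c metacontroller -c application > default.yaml
```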
I observed this error. The logs clearly indicate the apps resource CR was created.
I wonder if it takes some time for the CRD to be loaded. Does simply retrying help?
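One way to test the CRD-loading-delay theory is to wait for the CRD to report the Established condition before creating any Application resources; a sketch, assuming the CRD is named applications.app.k8s.io:

```sh
# Wait up to 60s for the Application CRD to be registered and established.
# The CRD name is an assumption; confirm it with "kubectl get crd | grep application".
kubectl wait --for=condition=Established --timeout=60s crd/applications.app.k8s.io
```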
It looks like retrying fixes things. Below are the logs from the first attempt.
* Improve auto_deploy to support changing zones and testing changes.
* Explicitly delete the storage deployment; the delete script won't delete it by design, because we don't want to destroy data.
* Instead of depending on a dummy kubeconfig file, call generate/apply for the platform and then for k8s.
* For repos, take them in the form ${ORG}/${NAME}@Branch. This matches what the other test script does. It also allows us to check out the repos from people's forks, which makes testing changes easier.
* Move the logic in checkout-snapshot.sh into repo_clone_snapshot.py. This is cleaner than having the Python script shell out to a shell script.
* Move the logic in deployment-workflow.sh into create_kf_instance.py.
* Add an option in create_kf_instance.py to read and parse the snapshot.json file rather than doing it in bash.
* Change the arg name to datadir instead of nfs_mount, because NFS is an assumption of how it is run on K8s.
* Check out the source into NFS to make it easier to debug.
* Add a bash script to set the images using YQ.
* Add a wait operation for deletes and use it to wait for deletion of storage.
* Rename init.sh to auto_deploy.sh to make it more descriptive.
* Also modify auto_deploy.sh so we can run it locally.
* Use the existing worker CI image as a base image to deduplicate the Dockerfile.
* Attach labels to the deployment, not the cluster:
  * We want to use deployments, not clusters, to decide what to recycle.
  * Deployments are global but clusters are zonal, and we want to be able to move to different zones to deal with issues like stockouts.
  * The GCP API does return labels on deployments.
  * We can figure out which deployment to recycle just by looking at the insertTime; we don't need to depend on deployment labels.
* Add retries to deal with kubeflow/kubeflow#1933.
* Fix lint.
* With K8s 1.11 we need to set volumeName, otherwise we get "storage class not found". Related to kubeflow/kubeflow#2475.
* Fix lint.
* Change the cron job to run every 12 hours:
  * This should be the current schedule, but it looks like it was never checked in.
  * We want to leave clusters up long enough to facilitate debugging.
I have exactly the same error on the first run (fresh install). I ran the script a second time and the problem was fixed.
I wonder if there is a race condition and we are trying to create an App resource before the App CRD is fully ready?
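If it is such a race, a crude client-side workaround is to retry the apply until the API server recognizes the kind; a sketch only, with a hypothetical manifest name:

```sh
# "application.yaml" is a hypothetical manifest containing the Application resource.
# Retry a few times with a short delay to ride out CRD registration latency.
for i in 1 2 3 4 5; do
  kubectl apply -f application.yaml && break
  echo "apply attempt $i failed; retrying in 10s..."
  sleep 10
done
```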
Since we removed the application component (in favor of the kubernetes-sigs/application controller), I don't think this can occur anymore. Closing.
/close
@kkasravi: Closing this issue.
Hey, reopening to say this error still popped up for me on a 1.1 install, but at a different point:
Rerunning, I'm also getting it for kind
and
and here too, rerunning the command works.
I have the same error when deploying on AKS. To deploy I'm using this command:

export CONFIG_URI="https://raw.githubusercontent.com/kubeflow/manifests/v1.2-branch/kfdef/kfctl_k8s_istio.v1.2.0.yaml"
kfctl apply -V -f ${CONFIG_URI}

I am getting this error:

WARN[0077] Encountered error applying application application: (kubeflow.error): Code 500 with message: Apply.Run : [unable to recognize "/tmp/kout234490513": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1", unable to recognize "/tmp/kout234490513": no matches for kind "Application" in version "app.k8s.io/v1beta1"] filename="kustomize/kustomize.go:284"

I've tried rerunning, but the same error keeps popping up.
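The first part of that error ("no matches for kind CustomResourceDefinition in version apiextensions.k8s.io/v1beta1") may indicate the cluster no longer serves the v1beta1 CRD API at all (it was removed in Kubernetes 1.22), in which case retrying will not help and the manifests would need the v1 API. A quick check, as a sketch:

```sh
# See which apiextensions API versions the cluster serves; on Kubernetes 1.22+
# only apiextensions.k8s.io/v1 is available, so v1beta1 manifests cannot apply.
kubectl api-versions | grep apiextensions
kubectl version   # confirm the server version
```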
Same here.
Having the same issue on AKS with kfctl_k8s_istio.v1.2.0.yaml.
I've seen this twice, but it seems to remedy itself by regenerating and reapplying the ks app (see the sketch below).
/cc @kkasravi
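For reference, the regenerate-and-reapply workaround might look roughly like the following with ksonnet; the environment and component names are assumptions and will differ by setup:

```sh
# Sketch of the workaround only: remove and re-apply the application component
# from inside the ksonnet app directory; "default" and "application" are assumed names.
ks delete default -c application
ks apply default -c application
```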