
dataImportCronTemplates: Remove instancetype.kubevirt.io labels #2964

Merged
1 commit merged into kubevirt:main on Jun 10, 2024

Conversation

@lyarwood (Member) commented May 15, 2024

What this PR does / why we need it:

This metadata is now provided by the containerdisks project for these imports and is used by CDI to populate the required labels on the resulting PVCs:

$ podman image inspect quay.io/containerdisks/centos-stream:8 | jq '.[] | .Config.Env'
[
  "INSTANCETYPE_KUBEVIRT_IO_DEFAULT_INSTANCETYPE=u1.medium",
  "INSTANCETYPE_KUBEVIRT_IO_DEFAULT_PREFERENCE=centos.stream8"
]
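
The mapping from these environment variables to the label keys shown below is mechanical: lower-case the name, restore the instancetype.kubevirt.io/ prefix, and hyphenate the rest. As a minimal illustrative sketch in shell (not CDI's actual implementation), the same transformation looks like:

$ podman image inspect quay.io/containerdisks/centos-stream:8 \
    | jq -r '.[].Config.Env[] | select(startswith("INSTANCETYPE_"))' \
    | while IFS='=' read -r key value; do
        # e.g. INSTANCETYPE_KUBEVIRT_IO_DEFAULT_INSTANCETYPE
        #      -> instancetype.kubevirt.io/default-instancetype
        label=$(echo "$key" | tr '[:upper:]' '[:lower:]' \
          | sed -e 's|^instancetype_kubevirt_io_|instancetype.kubevirt.io/|' \
                -e 's|_|-|g')
        echo "${label}=${value}"
      done
instancetype.kubevirt.io/default-instancetype=u1.medium
instancetype.kubevirt.io/default-preference=centos.stream8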
$ ./cluster-up/kubectl.sh apply -f -<<EOF
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataImportCron
metadata:
  annotations:
    cdi.kubevirt.io/storage.bind.immediate.requested: "true"
  name: centos-stream8-image-cron
spec:
  schedule: "0 */12 * * *"
  template:
    spec:
      source:
        registry:
          url: docker://quay.io/containerdisks/centos-stream:8
      storage:
        resources:
          requests:
            storage: 10Gi
  garbageCollect: Outdated
  managedDataSource: centos-stream8
EOF
[..]
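
Before querying the DataVolume below, the cron's progress can also be checked via its status conditions (an illustrative step; DataImportCron exposes conditions such as UpToDate once the import completes):

$ ./cluster-up/kubectl.sh get dataimportcron/centos-stream8-image-cron -o json | jq '.status.conditions'
[..]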
./cluster-up/kubectl.sh get dv
selecting podman as container runtime
NAME                          PHASE       PROGRESS   RESTARTS   AGE
centos-stream8-d06927f5ae68   Succeeded   100.0%                61s
[..]
./cluster-up/kubectl.sh get pvc/centos-stream8-d06927f5ae68 -o json | jq .metadata.labels 
selecting podman as container runtime
{
  "alerts.k8s.io/KubePersistentVolumeFillingUp": "disabled",
  "app": "containerized-data-importer",
  "app.kubernetes.io/component": "storage",
  "app.kubernetes.io/managed-by": "cdi-controller",
  "cdi.kubevirt.io/dataImportCron": "centos-stream8-image-cron",
  "instancetype.kubevirt.io/default-instancetype": "u1.medium",
  "instancetype.kubevirt.io/default-preference": "centos.stream8"
}
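
The commit message notes the labels are applied to the resulting DV as well as the PVC; the equivalent query against the DataVolume (output elided here) should list the same two instancetype.kubevirt.io labels:

./cluster-up/kubectl.sh get dv/centos-stream8-d06927f5ae68 -o json | jq .metadata.labels
[..]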

$ ./cluster-up/kubectl.sh apply -k https://github.com/kubevirt/common-instancetypes.git
[..]
./cluster-up/virtctl.sh create vm --volume-import type:pvc,size:10Gi,src:default/centos-stream8-d06927f5ae68  --infer-instancetype --infer-preference --cloud-init-user-data $USER_DATA --name centos | ./cluster-up/kubectl.sh apply -f -
[..]
./cluster-up/kubectl.sh get vm/centos -o json | jq '.spec |.instancetype,.preference'
selecting podman as container runtime
{
  "kind": "virtualmachineclusterinstancetype",
  "name": "u1.medium",
  "revisionName": "centos-u1.medium-9af4814d-f78a-458c-b3d1-2b60d64bfcf6-1"
}
{
  "kind": "virtualmachineclusterpreference",
  "name": "centos.stream8",
  "revisionName": "centos-centos.stream8-7a5e492e-bac0-4e73-878a-aa88564d0803-1"
}
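
--infer-instancetype and --infer-preference resolve these defaults from the labels on the boot volume. As an illustrative by-hand equivalent of the lookup:

./cluster-up/kubectl.sh get pvc/centos-stream8-d06927f5ae68 -o json \
    | jq -r '.metadata.labels["instancetype.kubevirt.io/default-instancetype",
                              "instancetype.kubevirt.io/default-preference"]'
u1.medium
centos.stream8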

./cluster-up/kubectl.sh delete vms/centos

./cluster-up/virtctl.sh create vm --volume-datasource src:centos-stream8,name:centos,size:10Gi  --infer-instancetype --infer-preference --cloud-init-user-data $USER_DATA --name centos | ./cluster-up/kubectl.sh apply -f -
[..]
./cluster-up/kubectl.sh get vm/centos -o json | jq '.spec |.instancetype,.preference'
selecting podman as container runtime
{
  "kind": "virtualmachineclusterinstancetype",
  "name": "u1.medium",
  "revisionName": "centos-u1.medium-9af4814d-f78a-458c-b3d1-2b60d64bfcf6-1"
}
{
  "kind": "virtualmachineclusterpreference",
  "name": "centos.stream8",
  "revisionName": "centos-centos.stream8-7a5e492e-bac0-4e73-878a-aa88564d0803-1"
}
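
Inference through the DataSource works because the cron-managed DataSource points at the same labelled PVC; a quick illustrative check (resource names follow the example above):

./cluster-up/kubectl.sh get datasource/centos-stream8 -o json | jq '.spec.source.pvc.name'
"centos-stream8-d06927f5ae68"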

Reviewer Checklist

Reviewers should review the PR against each item below, one by one. Checking an item means the PR is either "OK" or "Not Applicable" for that item. All items must be checked before a PR is merged.

  • PR Message
  • Commit Messages
  • How to test
  • Unit Tests
  • Functional Tests
  • User Documentation
  • Developer Documentation
  • Upgrade Scenario
  • Uninstallation Scenario
  • Backward Compatibility
  • Troubleshooting Friendly

Jira Ticket:

https://issues.redhat.com/browse/CNV-41767

Release note:

NONE

Commit message:

This metadata is now provided by the containerdisks project for these
imports and used by CDI to later populate the required labels on the
resulting DV and PVCs:

$ podman image inspect quay.io/containerdisks/centos-stream:8 | jq '.[] | .Config.Env'
[
  "INSTANCETYPE_KUBEVIRT_IO_DEFAULT_INSTANCETYPE=u1.medium",
  "INSTANCETYPE_KUBEVIRT_IO_DEFAULT_PREFERENCE=centos.stream8"
]

Signed-off-by: Lee Yarwood <lyarwood@redhat.com>
@kubevirt-bot (Contributor) commented:

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

1 similar comment from the openshift-ci bot (May 15, 2024).

@kubevirt-bot added labels May 15, 2024: do-not-merge/work-in-progress (indicates that a PR should not merge because it is a work in progress), dco-signoff: yes (indicates the PR's author has DCO signed all their commits), do-not-merge/release-note-label-needed (indicates that a PR should not merge because it's missing one of the release note labels), size/S
Quality Gate passed

Issues
0 New issues
0 Accepted issues

Measures
0 Security Hotspots
No data about Coverage
0.7% Duplication on New Code

See analysis details on SonarCloud

@kubevirt-bot requested review from assafad and nunnatsa May 15, 2024 14:22
@kubevirt-bot added the release-note-none label (denotes a PR that doesn't merit a release note) and removed the do-not-merge/release-note-label-needed label May 15, 2024
@lyarwood (Member, Author) commented:

/cc @0xFelix

@kubevirt-bot requested a review from 0xFelix May 17, 2024 11:22
@0xFelix (Member) left a comment:

Looks good to me, thanks!

/lgtm

@kubevirt-bot added the lgtm label (indicates that a PR is ready to be merged) May 28, 2024
@lyarwood marked this pull request as ready for review June 3, 2024 08:58
@kubevirt-bot removed the do-not-merge/work-in-progress label Jun 3, 2024
@lyarwood (Member, Author) commented Jun 3, 2024

Thanks @0xFelix, @nunnatsa this should be ready to review now if you have time.

@lyarwood (Member, Author) commented Jun 5, 2024

/retest-required

openshift-ci bot commented Jun 5, 2024

@lyarwood: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name                                               Commit   Required  Rerun command
ci/prow/hco-e2e-operator-sdk-sno-aws                    f3434d6  false     /test hco-e2e-operator-sdk-sno-aws
ci/prow/hco-e2e-upgrade-prev-operator-sdk-sno-aws       f3434d6  false     /test hco-e2e-upgrade-prev-operator-sdk-sno-aws
ci/prow/hco-e2e-upgrade-operator-sdk-sno-aws            f3434d6  false     /test hco-e2e-upgrade-operator-sdk-sno-aws
ci/prow/hco-e2e-kv-smoke-azure                          f3434d6  true      /test hco-e2e-kv-smoke-azure
ci/prow/hco-e2e-consecutive-operator-sdk-upgrades-aws   f3434d6  true      /test hco-e2e-consecutive-operator-sdk-upgrades-aws
ci/prow/hco-e2e-upgrade-operator-sdk-aws                f3434d6  true      /test hco-e2e-upgrade-operator-sdk-aws
ci/prow/hco-e2e-upgrade-prev-operator-sdk-aws           f3434d6  true      /test hco-e2e-upgrade-prev-operator-sdk-aws
ci/prow/hco-e2e-operator-sdk-aws                        f3434d6  true      /test hco-e2e-operator-sdk-aws

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@hco-bot (Collaborator) commented Jun 6, 2024

hco-e2e-operator-sdk-azure lane succeeded.
/override ci/prow/hco-e2e-operator-sdk-aws
hco-e2e-upgrade-prev-operator-sdk-azure lane succeeded.
/override ci/prow/hco-e2e-upgrade-prev-operator-sdk-aws
hco-e2e-upgrade-operator-sdk-azure lane succeeded.
/override ci/prow/hco-e2e-upgrade-operator-sdk-aws
hco-e2e-consecutive-operator-sdk-upgrades-azure lane succeeded.
/override ci/prow/hco-e2e-consecutive-operator-sdk-upgrades-aws
hco-e2e-kv-smoke-gcp lane succeeded.
/override ci/prow/hco-e2e-kv-smoke-azure
hco-e2e-upgrade-operator-sdk-sno-azure lane succeeded.
/override ci/prow/hco-e2e-upgrade-operator-sdk-sno-aws
hco-e2e-upgrade-prev-operator-sdk-sno-azure lane succeeded.
/override ci/prow/hco-e2e-upgrade-prev-operator-sdk-sno-aws

1 similar comment from @hco-bot (Jun 6, 2024).

@kubevirt-bot (Contributor) commented:
@hco-bot: Overrode contexts on behalf of hco-bot: ci/prow/hco-e2e-consecutive-operator-sdk-upgrades-aws, ci/prow/hco-e2e-kv-smoke-azure, ci/prow/hco-e2e-operator-sdk-aws, ci/prow/hco-e2e-upgrade-operator-sdk-aws, ci/prow/hco-e2e-upgrade-operator-sdk-sno-aws, ci/prow/hco-e2e-upgrade-prev-operator-sdk-aws, ci/prow/hco-e2e-upgrade-prev-operator-sdk-sno-aws



@nunnatsa (Collaborator) commented:

/approve
/override-bot

@kubevirt-bot (Contributor) commented:

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: nunnatsa

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@kubevirt-bot added the approved label (indicates a PR has been approved by an approver from all required OWNERS files) Jun 10, 2024
@nunnatsa (Collaborator) commented:

hco-e2e-operator-sdk-sno-azure lane passed
/override ci/prow/hco-e2e-operator-sdk-sno-aws

@kubevirt-bot (Contributor) commented:

@nunnatsa: Overrode contexts on behalf of nunnatsa: ci/prow/hco-e2e-operator-sdk-sno-aws


@kubevirt-bot merged commit c70bb07 into kubevirt:main Jun 10, 2024
32 checks passed