Support e2e test EFS create on EKS clusters by finding EKS node subnets #707
Conversation
@@ -26,9 +26,9 @@ ADD . .
ARG client_source=k8s
ENV EFS_CLIENT_SOURCE=$client_source

-RUN GOOS=${TARGETOS} GOARCH=${TARGETARCH} make aws-efs-csi-driver
+RUN OS=${TARGETOS} ARCH=${TARGETARCH} make $TARGETOS/$TARGETARCH
Curious, what is the purpose of the make change here?
I wanted to make it more consistent with the fsx and ebs Makefiles, so I copied this pattern from there. Specifically, I need the make image rule for the e2e test to work, because the e2e test needs to build/push an image to a per-test tag, and the current rule was pushing it to a hardcoded master tag, which is not usable since multiple tests could be running in parallel.
As for why ebs and fsx are like this in the first place: ebs must build for Windows, so I made a bunch of changes to its Makefile to use these generic OS/ARCH env variables. That comes at a cost of readability/simplicity, since for efs we only care about arm/x64... so I am open to simplifying the Makefile, but probably not in the near future 😅
.PHONY: all-push
all-push:
	docker buildx build \
+		--no-cache-filter=linux-amazon \
Nice to build with no cache! Would this help avoid CVEs caused by stale image layer caches?
Yes, this is to avoid the case where you somehow have a cached layer with older yum packages.
Nice, I will give it a build and push to my dev account to see if this can help sort out most of our CVE mitigation.
@@ -68,6 +71,7 @@ function eksctl_create_cluster() {

  if [[ "$WINDOWS" == true ]]; then
    ${BIN} create nodegroup \
+      --managed=false \
Reading this flag's description, do we set it to false because these are test nodegroups and we don't want them to show up in the console?
setting --managed=false or using the nodeGroups field creates an unmanaged nodegroup.
Bear in mind that unmanaged nodegroups do not appear in the EKS console, which as a general rule only knows about EKS-managed nodegroups.
It's because "Windows is only supported for self-managed (--managed=false flag) nodegroups" (note this is in the WINDOWS clause): https://eksctl.io/usage/windows-worker-nodes/. I will add a comment in a follow-up PR for hack/e2e changes.
LGTM
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: wangnyue, wongma7

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing

/lgtm

@wangnyue: changing LGTM is restricted to collaborators. In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
test/e2e/cloud.go
Outdated
{
	Name: aws.String("tag:Name"),
	Values: []*string{
		aws.String(fmt.Sprintf("*%s*", clusterName)),
Are all the EKS clusters created with eksctl?
Another problem is that you may get more subnets than expected. In the event that you have cluster1 and cluster10, you will get additional subnets when looking for those that belong to cluster1.
In GitHub CI, yes. However, I made this more generic because I also have clusters created with other tools/scripts.
Updated:
First, call getEksctlSubnetIds. eksctl gives us a reliable way to find the subnets via "tag:alpha.eksctl.io/cluster-name".
Second, call getEksCloudFormationSubnetIds. This is necessarily very generic, since the name could be anything, but for now I narrowed it to "%s-*", clusterName, because that's how I name mine.
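The cluster1/cluster10 pitfall raised earlier comes down to the difference between the two filter semantics: the EC2 "tag:Name" wildcard "*<cluster>*" is effectively a substring match, while the eksctl tag is an exact value match. A minimal pure-Go sketch (the matcher functions below are hypothetical illustrations, not the actual EC2 filter code from this PR):

```go
package main

import (
	"fmt"
	"strings"
)

// matchesNameWildcard mimics the "tag:Name" filter value "*<cluster>*",
// which behaves like a substring match on the subnet's Name tag.
func matchesNameWildcard(tagName, cluster string) bool {
	return strings.Contains(tagName, cluster)
}

// matchesEksctlTag mimics filtering on "tag:alpha.eksctl.io/cluster-name",
// which is an exact match on the tag value.
func matchesEksctlTag(tagValue, cluster string) bool {
	return tagValue == cluster
}

func main() {
	// A cluster10 subnet leaks into a search for cluster1 under the wildcard...
	fmt.Println(matchesNameWildcard("eksctl-cluster10-cluster/SubnetPublic", "cluster1")) // true (false positive)
	// ...but the exact eksctl tag match avoids it.
	fmt.Println(matchesEksctlTag("cluster10", "cluster1")) // false
}
```

This is why getEksctlSubnetIds is tried first: the exact tag match cannot pick up a neighboring cluster with a shared name prefix.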
/lgtm

@Ashley-wenyizha: changing LGTM is restricted to collaborators. In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

/lgtm
Is this a bug fix or adding a new feature? /bug
What is this PR about? / Why do we need it? The e2e test can run against an EKS cluster if an EFS file system is provided (i.e., --create-file-system=false). However, if an EFS file system is not provided, then the e2e test's EFS creation fails, because it cannot find the EKS node subnets needed for creating EFS mount targets.
I don't know of a tag that is guaranteed to exist on ALL EKS clusters, so I am trying multiple approaches: eksctl does tag clusters with alpha.eksctl.io/cluster-name, but not all EKS clusters are created by eksctl.
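Since EFS allows at most one mount target per Availability Zone, whatever subnets the lookup discovers must be reduced to one per AZ before mount targets are created. A minimal pure-Go sketch of that de-duplication step (the subnet struct and pickOnePerAZ are hypothetical names for illustration, not code from this PR):

```go
package main

import "fmt"

// subnet is a hypothetical stand-in for the EC2 subnet fields that
// matter here: its ID and its Availability Zone.
type subnet struct {
	ID string
	AZ string
}

// pickOnePerAZ keeps the first subnet seen in each AZ, since EFS
// allows at most one mount target per Availability Zone.
func pickOnePerAZ(subnets []subnet) []subnet {
	seen := map[string]bool{}
	var out []subnet
	for _, s := range subnets {
		if !seen[s.AZ] {
			seen[s.AZ] = true
			out = append(out, s)
		}
	}
	return out
}

func main() {
	discovered := []subnet{
		{ID: "subnet-a", AZ: "us-west-2a"},
		{ID: "subnet-b", AZ: "us-west-2a"}, // duplicate AZ, skipped
		{ID: "subnet-c", AZ: "us-west-2b"},
	}
	fmt.Println(len(pickOnePerAZ(discovered))) // 2
}
```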
What testing is done?
TEST_ID=1 CLEAN=false make test-e2e CLUSTER_TYPE=eksctl \
  K8S_VERSION="1.20" \
  DRIVER_NAME=aws-efs-csi-driver \
  HELM_VALUES_FILE="./hack/values_eksctl.yaml" \
  CONTAINER_NAME=efs-plugin \
  TEST_EXTRA_FLAGS='--cluster-name=$CLUSTER_NAME' \
  AWS_REGION=us-west-2 \
  AWS_AVAILABILITY_ZONES=us-west-2a,us-west-2b,us-west-2c \
  TEST_PATH=./test/e2e/... \
  GINKGO_FOCUS="\[efs-csi\]" \
  ./hack/e2e/run.sh