eksctl utils update-aws-node wiping service account annotation #1645

Closed
kaleb-wade-nc opened this issue Dec 6, 2019 · 4 comments · Fixed by #2459
Labels
kind/bug, needs-investigation, priority/important-soon (Ideally to be resolved in time for the next release)

Comments

@kaleb-wade-nc

kaleb-wade-nc commented Dec 6, 2019

What happened?
We have IRSA configured with the OIDC provider, and part of our update/create script runs eksctl utils update-aws-node -f $config_file -p $aws_profile --approve to make sure the addon is up to date. However, it recreates the aws-node ServiceAccount, which is where we keep the annotation for the ARN of the role it needs to run as. I also noticed that if you leave off --approve it says it is essentially doing a "plan" (no changes), yet I observed the same behaviour of it wiping the annotation.
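
For reference, the annotation that gets wiped is the IRSA role binding on the aws-node ServiceAccount; stripped down, the object looks roughly like this (the account ID and role name are placeholders):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: aws-node
  namespace: kube-system
  annotations:
    # IRSA: IAM role the CNI pods should assume (placeholder ARN)
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/eks-cni-role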

What you expected to happen?
I'm not sure why the ServiceAccount needs to be completely recreated on update; that could be due to my lack of understanding of the underlying components. I would expect it to check whether the ServiceAccount already exists, create it if it doesn't, and otherwise leave it alone (a sketch of that check is below).
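
A minimal sketch of the check I'd expect, written as shell rather than eksctl internals (names are the defaults, the role ARN is a placeholder):

# Create the ServiceAccount only if it doesn't exist; otherwise leave it untouched
if ! kubectl get serviceaccount aws-node -n kube-system >/dev/null 2>&1; then
  kubectl create serviceaccount aws-node -n kube-system
  kubectl annotate serviceaccount aws-node -n kube-system \
    eks.amazonaws.com/role-arn=arn:aws:iam::111122223333:role/eks-cni-role
fi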

How to reproduce it?
Add any annotation to the aws-node ServiceAccount, then run update-aws-node; the annotation will be removed (commands below).
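
Roughly, as commands (the annotation key here is arbitrary, the config file is whatever you normally pass):

# 1. Add any annotation to the existing aws-node ServiceAccount
kubectl annotate serviceaccount aws-node -n kube-system example.com/marker=keep-me
# 2. Run the update (the same thing happens with or without --approve)
eksctl utils update-aws-node -f cluster.yaml --approve
# 3. The annotation is gone
kubectl get serviceaccount aws-node -n kube-system -o jsonpath='{.metadata.annotations}'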

Anything else we need to know?
I'm using eksctl 0.11.0 on macOS Catalina.

Versions

$ eksctl version
[ℹ]  version.Info{BuiltAt:"", GitCommit:"", GitTag:"0.11.0"}
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-14T04:24:29Z", GoVersion:"go1.12.13", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.8-eks-b7174d", GitCommit:"b7174db5ee0e30c94a0b9899c20ac980c0850fc8", GitTreeState:"clean", BuildDate:"2019-10-18T17:56:01Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}

Logs

@maxstepanov

Even if we have the proper service account defined in cluster.yaml

iam:
  withOIDC: true
  serviceAccounts:
  - metadata:
      name: aws-node
      namespace: kube-system
      labels:
        aws-usage: cluster-ops
    attachPolicyARNs:
    - "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
eksctl utils update-aws-node -f cluster.yaml

eksctl just wipes the annotation, and aws-node then fails to allocate and attach any new IPs to the node.
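
A possible stopgap, assuming the annotation just needs to be put back after the update, is to re-run the IAM service account creation; a hedged sketch of the equivalent CLI call (the cluster name is a placeholder, the policy ARN matches the config above):

eksctl create iamserviceaccount \
  --cluster my-cluster \
  --namespace kube-system \
  --name aws-node \
  --attach-policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy \
  --override-existing-serviceaccounts \
  --approve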

@TBBle
Contributor

TBBle commented Apr 27, 2020

Is this fixed by #1990 in the next release, 0.18.0? Or possibly just this part:

I also noticed that if you leave off --approve it says it is essentially doing a "plan" (no changes), yet I observed the same behaviour of it wiping the annotation.

@kalbir added the priority/important-soon (Ideally to be resolved in time for the next release) label on May 12, 2020
@TBBle
Contributor

TBBle commented Jun 15, 2020

I feel like the behaviour of recreating the ServiceAccount might be "working as intended". Since the addons in question are shipped with EKS, including the service accounts, you'd get the same issue if you follow the EKS user guide to update aws-node, and the current code is just automating that procedure.
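
For context, the manual procedure in the EKS user guide is essentially an apply of the shipped CNI manifest, which defines the aws-node ServiceAccount and therefore replaces any annotations added out-of-band (the manifest filename is shown generically here; the real URL is version-specific):

# Apply the CNI manifest shipped with EKS; it contains aws-node's ServiceAccount,
# so any custom annotations on the existing ServiceAccount are overwritten
kubectl apply -f aws-k8s-cni.yaml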

I'm actually surprised this ever worked, since the ServiceAccount creation from the iam configuration key should have been rejected because the service account already existed. And if you have to use --override-existing-serviceaccounts the first time to get it working, you'll have to rerun the same command with --override-existing-serviceaccounts every time one of them is modified, which in practice is almost every time the addon is updated from raw manifests.

Using Helm to manage this instead would avoid such issues, but I suspect we don't want to pull Helm into eksctl just to manage addon upgrades when they have a Helm chart available. Personally I plan to move control of aws-node to the Helm chart once it's a bit more stable and supported, and probably core-dns too, at some point.

Edit: #2245 suggests we might pull in Helm after all.
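
For reference, a minimal sketch of what managing aws-node via the eks-charts Helm chart could look like; the serviceAccount.annotations value path is my assumption about the chart's values, and the role ARN is a placeholder:

helm repo add eks https://aws.github.io/eks-charts
helm upgrade --install aws-vpc-cni eks/aws-vpc-cni \
  --namespace kube-system \
  --set 'serviceAccount.annotations.eks\.amazonaws\.com/role-arn=arn:aws:iam::111122223333:role/eks-cni-role'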

@michaelbeaumont
Contributor

While I agree that problems like this are better solved using another component like Helm to manage configuration, there's also an easy fix for this issue.
