If a ManagedNodeGroup update fails, the subsequent pulumi up does not recognize that it needs to update the statefile. #875
Hey @tusharshahrs, thank you for this report. Is there a program I could run to reproduce the issue?
@tusharshahrs Bump: do you have some code we could use to reproduce this issue? It'd be great to have a definitive program that causes the ManagedNodeGroup update to fail. Thanks!
Here is a reproduction of the issue.
Steps
Results:
It shows that the launch template for the instances is:
and that the new launch template doesn't show up in the state file. The AWS console for EKS managed node groups now shows that the launch template is stuck on version 11. Now, no matter what change I make to my launch template (for example, swapping out the instance size and saving the file), when I run
Running into this same issue. Are there any plans for a fix?
Based on the testing I've done, I was able to reproduce this issue with the upstream AWS v5 provider. Re-running the test on v6 causes it to pass, which indicates that upgrading to v6 should resolve this issue.
I believe #910 will fix this when it merges. |
@rquitales @thomas11 Can this be closed out now that EKS 2.0.0 is available?
I was able to successfully avoid this issue with v2 of the EKS provider after running the repro steps twice.
Repro Pulumi program: https://github.com/rquitales/repro-eks/tree/v2-test
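The linked repro program isn't inlined in this thread. As a rough illustration only, the kind of setup involved — an EKS cluster with a ManagedNodeGroup pinned to a launch template version, where bumping the template (for example by changing the instance type) triggers the node rotation that can fail mid-update — might be sketched as below. All resource names, the instance type, and the role wiring are assumptions for illustration, not taken from the linked repro:

```typescript
import * as aws from "@pulumi/aws";
import * as eks from "@pulumi/eks";

// An EKS cluster without the default node group; the managed node
// group below is attached to it explicitly.
const cluster = new eks.Cluster("repro-cluster", {
    skipDefaultNodeGroup: true,
});

// A launch template. Editing e.g. instanceType and running
// `pulumi up` creates a new template version, which should roll
// the managed node group's instances.
const lt = new aws.ec2.LaunchTemplate("repro-lt", {
    instanceType: "t3.medium",
    updateDefaultVersion: true,
});

// The managed node group, pinned to the latest template version.
// The reported bug: if this rotation fails, a subsequent
// `pulumi up` no longer shows the pending change until a refresh.
const mng = new eks.ManagedNodeGroup("repro-mng", {
    cluster: cluster,
    // Assumption: reuse the cluster's first instance role; a real
    // program may create and pass a dedicated node role instead.
    nodeRole: cluster.instanceRoles.apply(roles => roles[0]),
    launchTemplate: {
        id: lt.id,
        version: lt.latestVersion.apply(v => `${v}`),
    },
    scalingConfig: { desiredSize: 2, minSize: 1, maxSize: 3 },
});
```

This is an infrastructure configuration sketch, not a runnable test: deploying it requires AWS credentials and an active Pulumi stack.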
What happened?
An issue with EKS ManagedNodeGroups: changing some PodDisruptionBudgets led to an update of a ManagedNodeGroup failing. However, after solving the underlying issue, Pulumi didn't think it needed to do another update of the ManagedNodeGroup. I had to run a refresh; only then did it attempt the update again. In short: the MNG update failed, and a subsequent "pulumi up" no longer tried to update the MNG. Essentially, there was an MNG update because of a LaunchTemplate change from version 1 to 2; the MNG rotation failed, and when I ran another update afterwards, Pulumi didn't pick up that the node group rotation had actually failed.
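The workaround the report describes — refreshing so Pulumi re-reads the actual node group state, then re-running the update — corresponds to this CLI sequence (run from the stack's project directory; "--yes" merely skips the interactive confirmation):

```shell
# Re-read actual cloud state into the statefile, so the failed
# rotation (and the stale launch template version) is detected.
pulumi refresh --yes

# The diff now shows the pending ManagedNodeGroup change again.
pulumi up --yes
```

These commands require an authenticated Pulumi CLI and a selected stack, so they are shown for reference rather than as a runnable test.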
Expected Behavior
If an MNG rotation fails and I run another update ("pulumi up") afterwards, then Pulumi should pick it up and update the state file.
Steps to reproduce
pending.
Output of "pulumi about"
Using pulumi/sdk/v3 v3.61.0 and CLI v3.60.1.
Additional context
No response
Contributing
Vote on this issue by adding a 👍 reaction.
To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).