
remove check for returning in-memory size when VMSS is in updating state #4787

Merged: 1 commit merged into kubernetes:master on Apr 6, 2022

Conversation

@marwanad (Member) commented Apr 5, 2022

We've had this logic for a while now: if the VMSS is in the "Updating" state, return the in-memory size for a node group/scale set.

This was trying to guard against a scenario like the following (a sketch of the removed check appears after this list):

  1. Autoscaler scales up to 4 instances
  2. We get hit by a cache refresh
  3. ARM hasn't registered our new capacity goal yet, so we get stale data and end up updating the in-memory cache to a lower value.
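
To make the change concrete, here is a minimal, illustrative Go sketch of the kind of check being removed. The names (scaleSet, curSize, getCurSize, fetchFromARM, provisioningStateUpdating) are assumptions made for illustration and this is not the actual cluster-autoscaler code:

```go
// Illustrative sketch only: not the actual cluster-autoscaler code.
// All identifiers here are assumed names used to show the removed check.
package main

import (
	"fmt"
	"sync"
)

const provisioningStateUpdating = "Updating"

type scaleSet struct {
	mutex   sync.Mutex
	curSize int64 // in-memory capacity goal
}

// fetchFromARM stands in for the call that reads the VMSS capacity from ARM.
// Here it always reports an "Updating" VMSS with a stale capacity of 3.
func (s *scaleSet) fetchFromARM() (state string, capacity int64) {
	return provisioningStateUpdating, 3
}

// getCurSize shows the guard this PR removes: while the VMSS reports an
// "Updating" provisioning state, the cached in-memory size was returned and
// the (possibly stale) ARM value was ignored.
func (s *scaleSet) getCurSize() int64 {
	s.mutex.Lock()
	defer s.mutex.Unlock()

	state, capacity := s.fetchFromARM()
	if state == provisioningStateUpdating {
		// Removed by this PR: trust the in-memory size while the VMSS is updating.
		return s.curSize
	}
	s.curSize = capacity
	return s.curSize
}

func main() {
	ss := &scaleSet{curSize: 4}
	fmt.Println(ss.getCurSize()) // prints 4 while the VMSS is "Updating"
}
```

With the check removed, the ARM value is used directly, and the stale-read scenario is instead handled by the in-memory TTL described next.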

We've recently guarded against this case with another PR that extends our in-memory TTL, so we should never hit that scenario.
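
For context, a minimal sketch of the TTL-style guard described above might look like the following. Again, the names (sizeCache, getSize, fetchFromARM) are assumptions for illustration, not the actual implementation:

```go
// Illustrative sketch of a TTL-based cache guard: assumed names, not the
// actual cluster-autoscaler implementation.
package main

import (
	"fmt"
	"time"
)

type sizeCache struct {
	size        int64
	lastRefresh time.Time
	ttl         time.Duration
}

// getSize goes back to ARM only once the TTL has expired, so a scale-up we
// just issued cannot be overwritten by a stale ARM read inside the TTL window.
func (c *sizeCache) getSize(fetchFromARM func() int64) int64 {
	if time.Since(c.lastRefresh) < c.ttl {
		return c.size
	}
	c.size = fetchFromARM()
	c.lastRefresh = time.Now()
	return c.size
}

func main() {
	c := &sizeCache{size: 4, lastRefresh: time.Now(), ttl: 5 * time.Minute}
	fmt.Println(c.getSize(func() int64 { return 3 })) // prints 4: TTL not yet expired
}
```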

For cases where the TTL is long, I believe this "Updating" logic can lead to a bad scenario where it takes very long to reconcile with the real IaaS capacity goal, for example if you get hit by multiple scale-ups and the VMSS takes too long to scale, or if the cloud provider is updating the network profile. In those cases, it's best not to have it.

In the case where you're also experiencing rapid Spot evictions, the VMSS state will be "Updating", so you'll miss the chance to notice a Spot eviction and react to it before the node object is removed and the pods are evicted.

/area provider/azure

@k8s-ci-robot added the area/provider/azure, cncf-cla: yes, and size/XS labels on Apr 5, 2022
@k8s-ci-robot (Contributor) commented:

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: marwanad

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot requested review from feiskyer and nilo19 on April 5, 2022 22:34
@k8s-ci-robot added the approved label on Apr 5, 2022
@marwanad force-pushed the updating-state-and-cache branch from 00c7dee to 542e919 on April 5, 2022 22:35
@nilo19 (Member) commented Apr 6, 2022

/lgtm

@k8s-ci-robot added the lgtm label on Apr 6, 2022
@k8s-ci-robot merged commit 1fa0716 into kubernetes:master on Apr 6, 2022