
fix: eip qos #2632

Merged
1 commit merged into kubeovn:master from the fix-eip-qos branch on Apr 11, 2023
Conversation

shane965 (Contributor) commented on Apr 11, 2023

When an EIP is created, "cachedEip.Status.IP" is empty. This will cause the QoS settings to fail.

What type of PR is this?

  • Bug fixes

WHAT

🤖 Generated by Copilot at 644b2ca

Fix a bug in the addEipQoS function that used the wrong IP address for EIP pods, so that the QoS policy for EIPs in kube-ovn is applied correctly.


addEipQoS fixed
wrong IP caused bad policy
autumn leaves fall fast

HOW

🤖 Generated by Copilot at 644b2ca

  • Fix a bug where the wrong IP address was passed to the addEipQoS function. Use v4ip, the IPv4 address allocated for the EIP, instead of cachedEip.Status.IP, which is still empty when the EIP is first created (see the sketch below).
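
For readers outside the kube-ovn codebase, here is a minimal, self-contained Go sketch of the failure mode and the fix described above. Only the identifiers addEipQoS, Status.IP, and v4ip come from the PR; the struct, rate parameter, and output are illustrative assumptions, not the controller's actual code.

```go
package main

import (
	"errors"
	"fmt"
)

// eipStatus mimics the relevant part of the EIP object: Status.IP is only
// populated after the controller updates the status, so it is empty right
// after the EIP is created.
type eipStatus struct {
	IP string
}

// addEipQoS stands in for the controller helper named in the PR; the real
// function programs a QoS rule, here we only validate the address.
func addEipQoS(ip string, rateMbps int) error {
	if ip == "" {
		return errors.New("refusing to add qos: eip address is empty")
	}
	fmt.Printf("adding qos %d Mbps for %s\n", rateMbps, ip)
	return nil
}

func main() {
	status := eipStatus{} // just-created EIP: Status.IP not yet populated
	v4ip := "10.0.0.101"  // address allocated for the EIP during creation

	// Buggy call: uses the still-empty Status.IP and fails.
	if err := addEipQoS(status.IP, 100); err != nil {
		fmt.Println("before fix:", err)
	}

	// Fixed call: uses v4ip, which is already known at creation time.
	if err := addEipQoS(v4ip, 100); err != nil {
		fmt.Println("after fix:", err)
	}
}
```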

When an EIP is created, "cachedEip.Status.IP" is empty.
This will cause the QoS settings to fail.
@github-actions (Contributor) commented:

  • The commit message is missing; it should provide a brief description of the changes made in this patch.
  • It is not clear what v4ip is or where it comes from. A comment explaining its origin and purpose might be necessary.
  • A more descriptive error message would be better than just logging the error with "failed to add qos" (a hedged illustration follows this list).
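
As an illustration of that last suggestion (not code from this PR), a more descriptive message would name the EIP and the address being configured so the failing object can be identified from the log alone. The names and values below are hypothetical:

```go
package main

import (
	"errors"
	"fmt"
)

func main() {
	err := errors.New("qos rule rejected") // placeholder underlying error

	// Terse message flagged by the review bot: hard to tell which EIP failed.
	fmt.Println(fmt.Errorf("failed to add qos: %w", err))

	// More descriptive message: includes the EIP name and the address being
	// configured, so the failing object is obvious without reading the code.
	eipName, v4ip := "eip-static-01", "10.0.0.101" // hypothetical values
	fmt.Println(fmt.Errorf("failed to add qos for eip %s (ip %s): %w", eipName, v4ip, err))
}
```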

@zbb88888 zbb88888 merged commit 7e872fb into kubeovn:master Apr 11, 2023
@shane965 shane965 deleted the fix-eip-qos branch April 28, 2023 01:57