Be able to increase MTU if it affects performance

Expected Behavior

Be able to increase the MTU if it affects performance.

Current Behavior

I am not sure if we have a problem here. nmcli on the node shows the bond and the Mellanox NICs at MTU 9000, while the Calico-created interfaces come up smaller (8950 on the cali* veths, 1500 on bpfin.cali/bpfout.cali):
```
bond0: connected to bond0
        "bond0"
        bond, B8:CE:F6:CC:A3:F4, sw, mtu 9000
        ip4 default
        inet4 172.24.1.24/24
        route4 default via 172.24.1.1 metric 300
        route4 172.24.1.0/24 metric 300

ens1f0np0: connected to ens1f0np0
        "Mellanox MT2892"
        ethernet (mlx5_core), B8:CE:F6:CC:A3:F4, hw, mtu 9000
        master bond0

ens1f1np1: connected to ens1f1np1
        "Mellanox MT2892"
        ethernet (mlx5_core), B8:CE:F6:CC:A3:F5, hw, mtu 9000
        master bond0

lo: connected (externally) to lo
        "lo"
        loopback (unknown), 00:00:00:00:00:00, sw, mtu 65536
        inet4 127.0.0.1/8

eno8303: disconnected
        "Broadcom and subsidiaries NetXtreme BCM5720"
        ethernet (tg3), 70:B5:E8:D0:A0:F0, hw, mtu 1500

eno8403: disconnected
        "Broadcom and subsidiaries NetXtreme BCM5720"
        ethernet (tg3), 70:B5:E8:D0:A0:F1, hw, mtu 1500

bpfin.cali: unmanaged
        "bpfin.cali"
        ethernet (veth), 7E:D6:B4:AE:4A:2E, sw, mtu 1500

bpfout.cali: unmanaged
        "bpfout.cali"
        ethernet (veth), 46:D9:B4:BA:E4:2F, sw, mtu 1500

cali00ba82360c7: unmanaged
        "cali00ba82360c7"
        ethernet (veth), EE:EE:EE:EE:EE:EE, sw, mtu 8950

cali010ee92ee2b: unmanaged
        "cali010ee92ee2b"
        ethernet (veth), EE:EE:EE:EE:EE:EE, sw, mtu 8950
```
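For reference, a quick way to cross-check which MTU Calico has actually programmed; this is only a sketch and assumes kubectl access to the cluster and that the VXLAN device has already been created on the node:

```sh
# MTU that Felix uses for VXLAN (8950 in the FelixConfiguration shown below)
kubectl get felixconfiguration default -o jsonpath='{.spec.vxlanMTU}'

# MTU of the VXLAN device and of the underlying bond on the node
ip -d link show vxlan.calico | grep -o 'mtu [0-9]*'
cat /sys/class/net/bond0/mtu
```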
Calico is deployed with the tigera-operator Helm chart, using the eBPF dataplane, VXLAN encapsulation and an explicit MTU of 8950:

```yaml
- chart: projectcalico/tigera-operator
  version: v3.28.0
  name: calico
  namespace: tigera-operator
  values:
    - installation:
        calicoNetwork:
          linuxDataplane: BPF
          mtu: 8950
          bgp: Disabled
          ipPools:
            - cidr: 10.244.0.0/14
              blockSize: 20
              encapsulation: VXLAN
```
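For an operator-managed install, the MTU knob is `spec.calicoNetwork.mtu` on the Installation resource, which the chart value above maps to; as far as I know, leaving it unset (or setting it to 0) lets the operator auto-detect a value. The following is purely illustrative; with a Helm-managed install the value would normally be changed in the chart values above instead, since a later `helm upgrade` would overwrite a manual patch:

```sh
# Illustrative only: point the Installation at a different pod-network MTU.
# 8950 = 9000 (bond0) minus the ~50-byte overhead of VXLAN over IPv4.
kubectl patch installation default --type merge \
  -p '{"spec":{"calicoNetwork":{"mtu":8950}}}'
```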
kube-proxy is not deployed, so the API server endpoint is provided to Calico via the kubernetes-services-endpoint ConfigMap:

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: kubernetes-services-endpoint
  namespace: tigera-operator
data:
  KUBERNETES_SERVICE_HOST: "172.24.1.15"
  KUBERNETES_SERVICE_PORT: "6443"
```
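A small sanity check (assuming kubectl access): since kube-proxy is skipped, calico-node relies on this ConfigMap to reach the API server, so it should exist before the operator rolls out the node pods:

```sh
kubectl get configmap kubernetes-services-endpoint \
  -n tigera-operator -o yaml
```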
The FelixConfiguration created by the operator:

```yaml
apiVersion: projectcalico.org/v3
kind: FelixConfiguration
metadata:
  annotations:
    operator.tigera.io/bpfEnabled: "true"
  creationTimestamp: "2023-11-09T09:16:01Z"
  generation: 1
  name: default
  namespace: calico-system
  resourceVersion: "65934135"
  uid: 5d0de475-8603-4e7b-9282-baecc231e48e
spec:
  bpfEnabled: true
  bpfExternalServiceMode: DSR
  bpfLogLevel: ""
  floatingIPs: Disabled
  healthPort: 9099
  logSeverityScreen: Info
  reportingInterval: 0s
  vxlanMTU: 8950
  vxlanVNI: 4096
```
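To see what a workload actually ends up with, the MTU of a pod's eth0 can be read directly (pod and namespace names are placeholders, and this assumes the container image ships `cat`); with VXLAN encapsulation it should match vxlanMTU (8950) rather than the 9000 of bond0:

```sh
kubectl exec -n <namespace> <pod-name> -- cat /sys/class/net/eth0/mtu
```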
```ini
# /etc/NetworkManager/conf.d/calico.conf
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:bpf*.cali*;interface-name:tunl*;interface-name:vxlan.calico;interface-name:vxlan-v6.calico;interface-name:wireguard.cali;interface-name:wg-v6.cali
```
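After dropping that file in place, NetworkManager can be reloaded and the Calico-created devices checked to confirm they stay unmanaged (which the nmcli output above already shows):

```sh
systemctl reload NetworkManager
nmcli device status | grep -E 'cali|bpf|vxlan|tunl'
```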
The cluster was bootstrapped with kubeadm, skipping the kube-proxy addon:

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
certificateKey: ""
skipPhases:
  - addon/kube-proxy
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  serviceSubnet: "10.243.0.0/16"
  podSubnet: "10.244.0.0/14"
  dnsDomain: "l8s.local"
controllerManager:
  extraArgs:
    "node-cidr-mask-size": "20"
    "allocate-node-cidrs": "false"
apiServer:
  certSANs:
    - "172.24.1.15"
    - "172.24.1.16"
    - "k8clust-lon01.l8s.space"
clusterName: "k8clust-lon01"
controlPlaneEndpoint: "k8clust-lon01.l8s.space:6443"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 4000
cgroupDriver: systemd
serverTLSBootstrap: true
```
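For completeness, the three documents above would normally live in a single file passed to kubeadm when bootstrapping the first control-plane node (the file name here is just a placeholder):

```sh
kubeadm init --config kubeadm-config.yaml
```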
Your Environment

- Kubernetes 1.29.5 (kubeadm, kube-proxy skipped)
- Calico v3.28.0 via the projectcalico/tigera-operator Helm chart, eBPF dataplane
- CentOS Stream 9:

```
# uname -r
5.14.0-452.el9.x86_64

# cat /etc/os-release
NAME="CentOS Stream"
VERSION="9"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="9"
PLATFORM_ID="platform:el9"
PRETTY_NAME="CentOS Stream 9"
ANSI_COLOR="0;31"
LOGO="fedora-logo-icon"
CPE_NAME="cpe:/o:centos:centos:9"
HOME_URL="https://centos.org/"
BUG_REPORT_URL="https://issues.redhat.com/"
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
```
tomastigera commented:

Yeah, that sounds like the device should have a large MTU, thanks for reporting it!