Skip v4 rules on v6 cluster and vice versa #321

Merged
merged 3 commits into aws:main on Oct 23, 2024

Conversation

Pavani-Panakanti
Contributor

@Pavani-Panakanti Pavani-Panakanti commented Oct 21, 2024

Issue #, if available:

Description of changes:
EKS clusters are single stack, and the NP agent is initialized with v4 or v6 structures based on whether it is a v4 or v6 cluster. Applying v4 rules to a v6 cluster (and vice versa) results in failures while updating the eBPF maps, which in turn causes map update failures for any other rules present in the same policy. To avoid that, skip any v4 rules in a v6 cluster and vice versa.
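
For illustration only, here is a minimal Go sketch of the kind of IP-family check this change adds on the agent side; the helper name and call site below are hypothetical, not the actual bpf_client.go code.

package main

import (
    "fmt"
    "net"
)

// shouldSkipCIDR reports whether a rule CIDR should be skipped because its
// IP family does not match the cluster's IP family (hypothetical helper).
func shouldSkipCIDR(cidr string, clusterIsIPv6 bool) (bool, error) {
    ip, _, err := net.ParseCIDR(cidr)
    if err != nil {
        return false, err
    }
    cidrIsIPv6 := ip.To4() == nil
    return cidrIsIPv6 != clusterIsIPv6, nil
}

func main() {
    // Mixed rules from one policy, evaluated on an IPv4 cluster.
    for _, cidr := range []string{"0.0.0.0/0", "::/0"} {
        skip, err := shouldSkipCIDR(cidr, false)
        if err != nil {
            fmt.Println("invalid CIDR:", err)
            continue
        }
        if skip {
            fmt.Println("Skipping ipv6 rule in ipv4 cluster:", cidr)
        } else {
            fmt.Println("Keeping rule:", cidr)
        }
    }
}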

Testing
Created a v4 cluster and applied the following mixed network policy:

spec:
  egress:
  - ports:
    - port: 443
      protocol: TCP
    to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 169.254.169.254/32
  - ports:
    - port: 443
      protocol: TCP
    to:
    - ipBlock:
        cidr: ::/0
        except:
        - fd00:ec2::254/128

Logs before the change:

{"level":"info","ts":"2024-10-23T07:40:34.082Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:738","msg":"Pod has an Egress hook attached. Update the corresponding map","progFD: ":21,"mapName: ":"egress_map"}
{"level":"info","ts":"2024-10-23T07:40:34.082Z","logger":"ebpf-client","caller":"utils/utils.go:163","msg":"L4 values: ","protocol: ":254,"startPort: ":0,"endPort: ":0}
{"level":"info","ts":"2024-10-23T07:40:34.082Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:980","msg":"Current L4 entry count for catch all entry: ","count: ":1}
{"level":"info","ts":"2024-10-23T07:40:34.082Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:969","msg":"IPv6 catch all entry in IPv4 mode - skip "}
{"level":"info","ts":"2024-10-23T07:40:34.082Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:982","msg":"Total L4 entry count for catch all entry: ","count: ":1}
{"level":"info","ts":"2024-10-23T07:40:34.082Z","logger":"ebpf-client","caller":"utils/utils.go:182","msg":"L4 values: ","protocol: ":6,"startPort: ":443,"endPort: ":0}
{"level":"info","ts":"2024-10-23T07:40:34.082Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:946","msg":"Parsed Except CIDR","IP Key: ":"169.254.169.254/32"}
{"level":"info","ts":"2024-10-23T07:40:34.082Z","logger":"ebpf-client","caller":"utils/utils.go:163","msg":"L4 values: ","protocol: ":255,"startPort: ":0,"endPort: ":0}
{"level":"info","ts":"2024-10-23T07:40:34.082Z","logger":"ebpf-client","caller":"utils/utils.go:182","msg":"L4 values: ","protocol: ":255,"startPort: ":443,"endPort: ":0}
{"level":"info","ts":"2024-10-23T07:40:34.082Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:946","msg":"Parsed Except CIDR","IP Key: ":"fd00:ec2::254/128"}
{"level":"info","ts":"2024-10-23T07:40:34.082Z","logger":"ebpf-client","caller":"utils/utils.go:163","msg":"L4 values: ","protocol: ":255,"startPort: ":0,"endPort: ":0}
{"level":"info","ts":"2024-10-23T07:40:34.082Z","logger":"ebpf-client","caller":"utils/utils.go:182","msg":"L4 values: ","protocol: ":255,"startPort: ":443,"endPort: ":0}
{"level":"info","ts":"2024-10-23T07:40:34.082Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:783","msg":"ID of map to update: ","ID: ":292}
{"level":"info","ts":"2024-10-23T07:40:34.082Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:787","msg":"BPF map update failed","error: ":"unable to update map: invalid argument"}
{"level":"info","ts":"2024-10-23T07:40:34.082Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:744","msg":"Egress Map update failed: ","error: ":"unable to update map: invalid argument"}

You can see the map update failure, which also affects the v4 rule.

Logs after the change:

{"level":"info","ts":"2024-10-23T07:35:41.572Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:722","msg":"Pod has an Egress hook attached. Update the corresponding map","progFD: ":21,"mapName: ":"egress_map"}
{"level":"info","ts":"2024-10-23T07:35:41.572Z","logger":"ebpf-client","caller":"utils/utils.go:163","msg":"L4 values: ","protocol: ":254,"startPort: ":0,"endPort: ":0}
{"level":"info","ts":"2024-10-23T07:35:41.572Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:965","msg":"IPv6 catch all entry in IPv4 mode - skip "}
{"level":"info","ts":"2024-10-23T07:35:41.572Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:976","msg":"Current L4 entry count for catch all entry: ","count: ":1}
{"level":"info","ts":"2024-10-23T07:35:41.572Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:978","msg":"Total L4 entry count for catch all entry: ","count: ":1}
{"level":"info","ts":"2024-10-23T07:35:41.572Z","logger":"ebpf-client","caller":"utils/utils.go:182","msg":"L4 values: ","protocol: ":6,"startPort: ":443,"endPort: ":0}
{"level":"info","ts":"2024-10-23T07:35:41.572Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:885","msg":"Skipping ipv6 rule in ipv4 cluster: ","CIDR: ":"::/0"}
{"level":"info","ts":"2024-10-23T07:35:41.572Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:942","msg":"Parsed Except CIDR","IP Key: ":"169.254.169.254/32"}
{"level":"info","ts":"2024-10-23T07:35:41.572Z","logger":"ebpf-client","caller":"utils/utils.go:163","msg":"L4 values: ","protocol: ":255,"startPort: ":0,"endPort: ":0}
{"level":"info","ts":"2024-10-23T07:35:41.573Z","logger":"ebpf-client","caller":"utils/utils.go:182","msg":"L4 values: ","protocol: ":255,"startPort: ":443,"endPort: ":0}
{"level":"info","ts":"2024-10-23T07:35:41.573Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:767","msg":"ID of map to update: ","ID: ":274}

The v6 rule was skipped and the v4 rule map update succeeded.

Verified the changes on an IPv6 cluster:

{"level":"info","ts":"2024-10-23T08:08:48.710Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:722","msg":"Pod has an Egress hook attached. Update the corresponding map","progFD: ":20,"mapName: ":"egress_map"}
{"level":"info","ts":"2024-10-23T08:08:48.710Z","logger":"ebpf-client","caller":"utils/utils.go:163","msg":"L4 values: ","protocol: ":254,"startPort: ":0,"endPort: ":0}
{"level":"info","ts":"2024-10-23T08:08:48.710Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:976","msg":"Current L4 entry count for catch all entry: ","count: ":0}
{"level":"info","ts":"2024-10-23T08:08:48.710Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:978","msg":"Total L4 entry count for catch all entry: ","count: ":0}
{"level":"info","ts":"2024-10-23T08:08:48.710Z","logger":"ebpf-client","caller":"utils/utils.go:163","msg":"L4 values: ","protocol: ":254,"startPort: ":0,"endPort: ":0}
{"level":"info","ts":"2024-10-23T08:08:48.710Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:879","msg":"Skipping ipv4 rule in ipv6 cluster: ","CIDR: ":"0.0.0.0/0"}
{"level":"info","ts":"2024-10-23T08:08:48.710Z","logger":"ebpf-client","caller":"ebpf/bpf_client.go:767","msg":"ID of map to update: ","ID: ":31}
dev-dsk-pavanipt-2a-0981017d % kubectl describe networkpolicies https-v6-explicit -n test
Name:         https-v6-explicit
Namespace:    test
Created on:   2024-10-23 07:40:07 +0000 UTC
Labels:       <none>
Annotations:  <none>
Spec:
  PodSelector:     <none> (Allowing the specific traffic to all pods in this namespace)
  Not affecting ingress traffic
  Allowing egress traffic:
    To Port: 443/TCP
    To:
      IPBlock:
        CIDR: 0.0.0.0/0
        Except: 169.254.169.254/32
    ----------
    To Port: 443/TCP
    To:
      IPBlock:
        CIDR: ::/0
        Except: fd00:ec2::254/128
  Policy Types: Egress
dev-dsk-pavanipt-2a-0981017d % kubectl describe policyendpoint https-v6-explicit-xfhhc -n test
Name:         https-v6-explicit-xfhhc
Namespace:    test
Labels:       <none>
Annotations:  <none>
API Version:  networking.k8s.aws/v1alpha1
Kind:         PolicyEndpoint
Metadata:
  Creation Timestamp:  2024-10-23T07:40:07Z
  Generate Name:       https-v6-explicit-
  Generation:          6
  Owner References:
    API Version:           networking.k8s.io/v1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  NetworkPolicy
    Name:                  https-v6-explicit
    UID:                   954ef673-eb21-4b65-8a94-b12c63210f07
  Resource Version:        3960676
  UID:                     25808e33-c94a-4bbc-89a0-16c58ec9740f
Spec:
  Egress:
    Cidr:  0.0.0.0/0
    Except:
      169.254.169.254/32
    Ports:
      Port:      443
      Protocol:  TCP
    Cidr:        ::/0
    Except:
      fd00:ec2::254/128
    Ports:
      Port:      443
      Protocol:  TCP
  Pod Isolation:
    Egress
  Pod Selector:
  Pod Selector Endpoints:
    Host IP:    192.168.48.217
    Name:       tester1-6b498cbd59-hgdbc
    Namespace:  test
    Pod IP:     192.168.51.201
  Policy Ref:
    Name:       https-v6-explicit
    Namespace:  test
Events:         <none>

The IPv6 entry will still be present in the PolicyEndpoint, but the NP agent will skip it while updating the maps. The long-term fix is to skip these rules in the controller so they are never added to the PolicyEndpoint.
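
As a rough sketch of that longer-term direction (not the actual controller code; the EgressRule type and filter helper are hypothetical), the controller could drop mismatched CIDRs before they reach the PolicyEndpoint spec:

package main

import (
    "fmt"
    "net"
)

// EgressRule is a simplified stand-in for a PolicyEndpoint egress entry.
type EgressRule struct {
    CIDR string
}

// filterByClusterFamily keeps only rules whose CIDR family matches the
// cluster's IP family (hypothetical controller-side filter).
func filterByClusterFamily(rules []EgressRule, clusterIsIPv6 bool) []EgressRule {
    var kept []EgressRule
    for _, r := range rules {
        ip, _, err := net.ParseCIDR(r.CIDR)
        if err != nil {
            continue // skip unparsable CIDRs
        }
        if (ip.To4() == nil) == clusterIsIPv6 {
            kept = append(kept, r)
        }
    }
    return kept
}

func main() {
    rules := []EgressRule{{CIDR: "0.0.0.0/0"}, {CIDR: "::/0"}}
    // On an IPv4 cluster only 0.0.0.0/0 survives.
    fmt.Println(filterByClusterFamily(rules, false))
}
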
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

@Pavani-Panakanti Pavani-Panakanti requested a review from a team as a code owner October 21, 2024 18:14
@orsenthil
Member

Could you paste the output of the kubectl describe networkpolicy and kubectl get policyendpoint with this change?

Do we have any unit test for this change?

Member

@orsenthil orsenthil left a comment


LGTM.

@orsenthil orsenthil merged commit 34104a3 into aws:main Oct 23, 2024
4 checks passed
@Pavani-Panakanti
Contributor Author

Added logs, networkpolicies, and policyendpoints from the tests with this change to the PR. I will create a follow-up PR for the unit tests.
