Ingress rule with named port not resolving to numerical port on target Pod in PolicyEndpoint #81

Open
yndai opened this issue Jan 31, 2024 · 9 comments

yndai commented Jan 31, 2024

What happened:

Summary: When creating a NetworkPolicy with an ingress rule that allows traffic to a named port on a target pod, the ingress rule is not created in the resulting PolicyEndpoint (and traffic is not allowed either).

Additional details:

  • Explicitly specifying the numerical target port in the NetworkPolicy works fine
  • If the ingress source is a Pod and that source Pod declares the same named ports, the rule also works fine, regardless of the numerical values declared on the source pod (see the "Anything else we need to know?" section)

What you expected to happen:

When specifying a named port in a NetworkPolicy rule, the port should be mapped to the corresponding numerical port on the target Pod, if available.

How to reproduce it (as minimally and precisely as possible):

Example:

kubectl apply -f the following:

apiVersion: v1
kind: Pod
metadata:
  name: target
  labels:
    app: target
spec:
  containers:
  - name: nginx-container
    image: nginx:latest
    ports:
    - containerPort: 80
      name: web-service # Named port
    - containerPort: 443
      name: s-web-service # Named port
---
apiVersion: v1
kind: Pod
metadata:
  name: source
  labels:
    app: source
spec:
  containers:
  - name: nginx-container
    image: nginx:latest
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-traffic
spec:
  podSelector:
    matchLabels:
      app: target
  policyTypes:
  - Ingress
  ingress:
    - from:
      - ipBlock:
          cidr: 172.17.0.0/16
      ports:
      - protocol: TCP
        port: web-service # Named port
    - from:
      - podSelector:
          matchLabels:
            app: source
      ports:
      - protocol: TCP
        port: s-web-service # Named port

We expect this in the resulting PolicyEndpoint:

apiVersion: networking.k8s.aws/v1alpha1
kind: PolicyEndpoint
metadata:
  name: allow-web-traffic-fjz4q
 [...]
spec:
  ingress:
  - cidr: 172.17.0.0/16
    ports:
    - port: 80
      protocol: TCP
  - cidr: <source Pod IP>
    ports:
    - port: 443
      protocol: TCP
[...]

But instead got this:

apiVersion: networking.k8s.aws/v1alpha1
kind: PolicyEndpoint
metadata:
  name: allow-web-traffic-fjz4q
 [...]
spec:
  ingress: []
[...]

Anything else we need to know?:

Another oddity: if the source Pod itself declares the same named ports used in the ingress rule, the rule that selects the source Pod actually works as expected!

e.g. if you delete the source pod in the above example and apply this instead:

apiVersion: v1
kind: Pod
metadata:
  name: source
  labels:
    app: source
spec:
  containers:
  - name: nginx-container
    image: nginx:latest
    ports:
    - containerPort: 8000 # Irrelevant
      name: web-service   # Named port
    - containerPort: 8443 # Irrelevant
      name: s-web-service # Named port

We now get:

apiVersion: networking.k8s.aws/v1alpha1
kind: PolicyEndpoint
metadata:
  name: allow-web-traffic-fjz4q
 [...]
spec:
  ingress:
  - cidr: <source Pod IP>
    ports:
    - port: 443
      protocol: TCP
[...]

The CIDR source rule is still missing, however. This makes me think the mapping of named ports to their numerical values is incorrectly checked against the source Pod instead of the target Pod.
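For reference, the NetworkPolicy spec resolves a named port against the container ports of the pod the traffic is destined for, i.e. the target pod selected by the policy. A minimal sketch of that resolution (a hypothetical helper for illustration, not the controller's actual code):

```python
def resolve_port(rule_port, target_pod_containers):
    """Resolve a NetworkPolicy port (named or numeric) against the
    TARGET pod's declared container ports. Numeric ports pass through;
    named ports are looked up by name; unknown names resolve to None."""
    if isinstance(rule_port, int):
        return rule_port
    for container in target_pod_containers:
        for p in container.get("ports", []):
            if p.get("name") == rule_port:
                return p["containerPort"]
    return None  # named port not declared on the target pod

# Target pod from the reproduction above
target_containers = [{
    "name": "nginx-container",
    "ports": [
        {"containerPort": 80, "name": "web-service"},
        {"containerPort": 443, "name": "s-web-service"},
    ],
}]

print(resolve_port("web-service", target_containers))    # 80
print(resolve_port("s-web-service", target_containers))  # 443
```

The key point of the bug report: the lookup should only ever consult the target pod's containers, never the source pod's.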

Environment:

  • Kubernetes version (use kubectl version): Server Version: version.Info{Major:"1", Minor:"27+", GitVersion:"v1.27.8-eks-8cb36c9", GitCommit:"fca3a8722c88c4dba573a903712a6feaf3c40a51", GitTreeState:"clean", BuildDate:"2023-11-22T21:52:13Z", GoVersion:"go1.20.11", Compiler:"gc", Platform:"linux/amd64"}
  • CNI Version: amazon-k8s-cni:v1.16.0-eksbuild.1
  • Network Policy Agent Version: aws-network-policy-agent:v1.0.7-eksbuild.1
  • OS (e.g: cat /etc/os-release): Amazon Linux 2
  • Kernel (e.g. uname -a): Linux ip-10-1-61-179.ec2.internal 5.10.192-183.736.amzn2.x86_64 aws/aws-network-policy-agent#1 SMP Wed Sep 6 21:15:41 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
@yndai yndai added the bug Something isn't working label Jan 31, 2024
aballman commented Feb 1, 2024

#81
#71

@achevuru
Contributor

Moving this to the Network Policy controller repo. The fix for this issue in the NP controller is currently being rolled out to all clusters. Will update here once the rollout is complete.

@achevuru achevuru transferred this issue from aws/aws-network-policy-agent Feb 20, 2024

yndai commented Mar 20, 2024

Update:
I see our managed cluster has been updated to v1.27.10-eks-508b6b3 and the managed platform version to eks.13.
In the reproduction example from my post, the from.podSelector rule now works, but the from.ipBlock rule still does not work with a named port. Replacing port: web-service with port: 80 produces a correct PolicyEndpoint.

Specifically from above example:

Expected:

apiVersion: networking.k8s.aws/v1alpha1
kind: PolicyEndpoint
metadata:
  name: allow-web-traffic-fjz4q
 [...]
spec:
  ingress:
  - cidr: 172.17.0.0/16
    ports:
    - port: 80
      protocol: TCP
  - cidr: <source Pod IP>
    ports:
    - port: 443
      protocol: TCP
[...]

But got:

apiVersion: networking.k8s.aws/v1alpha1
kind: PolicyEndpoint
metadata:
  name: allow-web-traffic-fjz4q
 [...]
spec:
  ingress:
  - cidr: <source Pod IP>
    ports:
    - port: 443
      protocol: TCP
[...]

@achevuru
Contributor

@yndai Any particular reason for specifying a static Pod IP rather than using Pod/Namespace selectors? Pod IPs are ephemeral, so I'm trying to understand the use case here. Right now, we don't support named ports when pod IPs are specified as static IPs in the NP resource (and we also don't support Service VIPs for static pod IPs).


yndai commented Mar 21, 2024

@achevuru I think you might be misunderstanding my example (correct me if I am wrong):

In my net policy I have an ingress rule for a CIDR range (not pod ip):

  ingress:
    - from:
      - ipBlock:
          cidr: 172.17.0.0/16
      ports:
      - protocol: TCP
        port: web-service # Named port on target pod = 80

In the PolicyEndpoint I expect a rule like this:

spec:
  ingress:
  - cidr: 172.17.0.0/16
    ports:
    - port: 80
      protocol: TCP

But I get:

spec:
  ingress: []

So the CIDR ingress rule is not created at all. If I change the network policy rule to use the numeric port instead, it works:

  ingress:
    - from:
      - ipBlock:
          cidr: 172.17.0.0/16
      ports:
      - protocol: TCP
        port: 80

Basically, the issue persists for ipBlock rules, but was fixed for podSelector rules.

@achevuru
Contributor

@yndai Understood. We should be able to extend named port support to ingress rules for ipBlock at a minimum, but I would like to call out that if the CIDR range you provide is meant to capture the cluster's pod IP range, it will run into issues if the pod to which the NP applies tries to use the Service VIPs of those pods (egress rules).


yndai commented Mar 21, 2024

@achevuru Thank you, and yes, I understand the caveats


haouc commented Mar 21, 2024

@yndai thanks for reporting this. After discussing internally, and given the caveats @achevuru mentioned around using IPs, we will add named port support for ipBlocks for ingress only. The change will be made by one of our team members soon. Thanks again for testing so quickly!


yndai commented Mar 26, 2024

Another note for anyone else possibly facing this:
For rules that allow all traffic to specific ports e.g.

ingress:
  - ports:
    - port: https
      protocol: TCP

Under the hood this maps to allow-all CIDR rules (0.0.0.0/0 and ::/0), so named ports on such rules also do not work.
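For context, an ingress rule with ports but no from clause selects all sources; expanding it into CIDR form yields allow-all entries for IPv4 and IPv6, each of which still needs the same named-port resolution against the target pod. A rough sketch of that expansion (illustrative only, not the controller's actual code; the named-port map is an assumption for the example):

```python
ALL_V4, ALL_V6 = "0.0.0.0/0", "::/0"

def expand_allow_all(rule_ports, resolve):
    """Expand an ingress rule that has ports but no 'from' clause into
    allow-all CIDR entries for IPv4 and IPv6. Named ports still need
    resolving against the target pod via the supplied resolve callback."""
    ports = [{"port": resolve(p["port"]), "protocol": p["protocol"]}
             for p in rule_ports]
    return [{"cidr": cidr, "ports": ports} for cidr in (ALL_V4, ALL_V6)]

# Assumed named-port mapping on the target pod: "https" -> 443
named = {"https": 443}
entries = expand_allow_all(
    [{"port": "https", "protocol": "TCP"}],
    lambda p: named.get(p) if isinstance(p, str) else p,
)
print(entries)
# [{'cidr': '0.0.0.0/0', 'ports': [{'port': 443, 'protocol': 'TCP'}]},
#  {'cidr': '::/0',      'ports': [{'port': 443, 'protocol': 'TCP'}]}]
```

This is why the fix has to cover the allow-all case as well as explicit ipBlock rules: both paths end up as CIDR entries in the PolicyEndpoint.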

haouc added a commit that referenced this issue Mar 30, 2024
**What type of PR is this?**

bug

**Which issue does this PR fix**:
#81 

**What does this PR do / Why do we need it**:
Allows named ports when using IPBlocks (ingress rules only)

**If an issue # is not available please add steps to reproduce and the
controller logs**:


**Testing done on this change**:
<!--
output of manual testing/integration tests results and also attach logs
showing the fix being resolved
-->
Manually applied the customer config; the resulting PolicyEndpoint looks correct
```
Spec:
  Ingress:
    Cidr:  172.17.0.0/16
    Ports:
      Port:      80
      Protocol:  TCP
    Cidr:        192.168.8.106
    Ports:
      Port:      443
      Protocol:  TCP
  Pod Isolation:
    Ingress
  Pod Selector:
    Match Labels:
      App:  target
  Pod Selector Endpoints:
    Host IP:    192.168.98.89
    Name:       target
    Namespace:  default
    Pod IP:     192.168.125.203
  Policy Ref:
    Name:       allow-web-traffic
    Namespace:  default
```

Using allow all in ingress rule
```
Spec:
  Ingress:
    Cidr:  0.0.0.0/0
    Ports:
      Port:      80
      Protocol:  TCP
      Port:      443
      Protocol:  TCP
    Cidr:        ::/0
    Ports:
      Port:      80
      Protocol:  TCP
      Port:      443
      Protocol:  TCP
  Pod Isolation:
    Ingress
  Pod Selector:
    Match Labels:
      App:  target
  Pod Selector Endpoints:
    Host IP:    192.168.98.89
    Name:       target
    Namespace:  default
    Pod IP:     192.168.116.203
  Policy Ref:
    Name:       allow-web-traffic
    Namespace:  default
```

**Automation added to e2e**:

1. NP allowing ingress from IPBlock + 2 named ports
2. NP allowing ingress from all CIDRs + 2 named ports

**Will this PR introduce any new dependencies?**:
No

**Will this break upgrades or downgrades? Has updating a running cluster been tested?**:
No

**Does this PR introduce any user-facing change?**:
Yes

```release-note
Allow for using named ports when using IPBlocks for the ingress source.
```

By submitting this pull request, I confirm that my contribution is made
under the terms of the Apache 2.0 license.