
Add support for interpreting NetworkPolicies #36

Merged

Conversation

Lykos153
Contributor

@Lykos153 Lykos153 commented Sep 22, 2022

This PR provides a working POC for interpreting NetworkPolicies when the Pod backend is used.

What needs to be fixed before I would consider this done:

  • Agent panics when it receives invalid IP addresses (e.g. ""). I would fix this in conjunction with "Services without endpoints break nftables" (#21) by checking all input for validity
  • Rules are currently duplicated for each endpoint address. While this probably doesn't hurt, it's not pretty either. This might be fixed automatically by the following point
  • Re-evaluate the mechanism for matching policies to port forwards. Right now this is done at the IP address level
  • Add tests
  • Cleanup
  • Rebase

Reminder: After lbaas has forwarded the traffic into the cluster, the NetworkPolicy is applied again by the CNI, only this time the packets appear to originate from the gateway node. So make sure to always add your cluster's internal CIDR to the ingress ipBlocks as well.

Closes #34

@Lykos153 Lykos153 marked this pull request as draft September 22, 2022 12:08
@Lykos153
Contributor Author

Open questions

Policy merging

From the documentation:

Network policies do not conflict; they are additive. If any policy or policies apply to a given pod for a given direction, the connections allowed in that direction from that pod is the union of what the applicable policies allow. Thus, order of evaluation does not affect the policy result.

So I came up with the following design:

  1. Sort all blocks by CIDR suffix in descending order
  2. For each block (starting with the smallest net):
    1. Drop all packets from the 'Except' field
    2. Accept all packets from the CIDR

Under the assumption that no two nets of the same size can overlap (without being identical), this should result in the expected behavior of the union of all policies. Did I miss anything?
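
As a rough illustration, with two hypothetical ipBlocks 10.1.2.0/24 and 10.0.0.0/8 except 10.1.0.0/16, this scheme would emit something like (smallest net first):

ip saddr 10.1.2.0/24 accept;
ip saddr 10.1.0.0/16 drop; # 'except' entry of the /8 block
ip saddr 10.0.0.0/8 accept;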

ct mark & mask

Previously we had a mark and a mask in nftables. Both were 0x01 by default and they were applied using bitwise AND. What was the purpose of the mask if we only ever had one value for mark? I did away with the mask and am now using distinct marks for each port forward (starting at 1 and counting upwards). If we want to limit the marks being used I propose setting a range in the config.

Matching policies to port forwards

The current design is:

  1. List all NetworkPolicies, check to which pods they apply and put them in a map "IP address" => "rules applying to this address"
  2. For each IP address, add all applying ipBlocks to the corresponding port forward.
  • Is there a simpler way to do a backwards search? "Which NetworkPolicy applies to the given Pod?"
  • Looking up NetworkPolicies by Services would reduce the duplication in the nftables config. However, NetworkPolicies work on Pods. Both NetworkPolicies and Services use selectors. Are there situations in which they select a different set of Pods? Are those situations relevant? I'm hesitant to make simplifications based on assumptions in a context that could impact security.

@horazont
Collaborator

So I came up with the following design:

  1. Sort all blocks by CIDR suffix in descending order

  2. For each block (starting with the smallest net):

    1. Drop all packets from the 'Except' field
    2. Accept all packets from the CIDR

Under the assumption that no two nets of the same size can overlap (without being identical), this should result in the expected behavior of the union of all policies. Did I miss anything?

I'm not sure that correctly implements the except clause, but honestly, that's underspecified in the documentation. As far as I can see, there's nothing which explicitly specifies what happens if there is one rule allowing 0.0.0.0/0 and another allowing 1.0.0.0/8 except 1.1.0.0/16.

By extension of the logic that if you have a policy which allows all ranges, you cannot lock it down further, the correct implementation would allow everything in that scenario, which I don't think your proposal does.

I think to correctly implement this in nftables, you'd need a chain for each ipBlock, and then evaluate them from most-specific to least-specific. Given network policies matching the following:

  • 172.1.0.0/16 except 172.1.1.0/24
  • 192.168.0.0/16 except 192.168.176.0/24
  • 0.0.0.0/1
  • 192.0.0.0/2

You would generate the following ruleset:

chain forward {
    ip saddr 172.1.0.0/16 jump ruleset1;
    ip saddr 192.168.0.0/16 jump ruleset2;
    ip saddr 192.0.0.0/2 jump ruleset3;
    ip saddr 0.0.0.0/1 jump ruleset4;
}

chain ruleset1 {
    ip saddr 172.1.1.0/24 return;
    accept;
}

chain ruleset2 {
    ip saddr 192.168.176.0/24 return;
    accept;
}

chain ruleset3 {
    accept;
}

chain ruleset4 {
    accept;
}

This effectively allows 192.0.0.0/2 (including all of 192.168.0.0/16!), 0.0.0.0/1, and 172.1.0.0/16 except 172.1.1.0/24 (which is not included by 192.0.0.0/2 or 0.0.0.0/1).

This is all inferred from the implied statement that "if you allow 0.0.0.0/0, it cannot be locked down further".

Previously we had a mark and a mask in nftables. Both were 0x01 by default and they were applied using bitwise AND. What was the purpose of the mask if we only ever had one value for mark?

The purpose of the mark is to have a simpler and more accurate rule in the forward chain, similar to your new use. As we only ever had one rule which was needed in the forward chain, we only needed one value.

Without the mark, we would have to allow arbitrary traffic from the internet into the cluster network, because we have no way of detecting valid DNAT traffic, and the forward chain happens after the DNAT translation.

In short: because the forward chain sees the translated addresses, we would have required a blanket accept rule (to allow e.g. 9.9.9.9 communicating with some pod using the exposed port). As that is a dangerous thing to write, we used the mark to label all packets belonging to a DNAT'd connection and only blanket-accept those. That is safer, because the DNAT rule in the prerouting chain already validates and chooses the destination.

Your new use seems adequate and sensible, and good enough for this iteration, even though it unnecessarily expands the ruleset a little (if one used the least significant bit to denote "this is a valid DNAT'd packet" and shifted the service index to the upper bits, the postrouting chain could be simplified to a single rule again).
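
A hypothetical sketch of that layout, with bit 0 meaning "valid DNAT'd packet" and the port-forward index shifted into the upper bits (so index 3 becomes mark 0x7; addresses are placeholders):

# prerouting: tag the flow with the index in the upper bits plus the DNAT-valid bit
ip daddr 185.187.1.1 tcp dport 80 meta mark set 0x7 ct mark set meta mark dnat to 10.9.8.7:8080;

# postrouting: a single masked match covers every port forward
meta mark & 0x1 == 0x1 masquerade;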

This obviously reserves all bits of the mark for ch-k8s-lbaas, which may or may not be a good thing (it probably isn't). AFAICT, we're not yet using any marks in yaook/k8s, but this definitely needs to be documented here and over there.

Is there a simpler way to do a backwards search? "Which NetworkPolicy applies to the given Pod?"

I don't think so.

Looking up NetworkPolicies by Services would reduce the duplication in the nftables config. However, NetworkPolicies work on Pods. Both NetworkPolicies and Services use selectors. Are there situations in which they select a different set of Pods? Are those situations relevant?

Yes, definitely.

It is entirely legitimate to have a network policy which locks down an entire namespace, without further refining by pod selectors, and then have multiple Services which expose things there to the subset of IPs which can still access the workload.

The other way around (having a Service and only letting a subset of pods access it) is probably less sensible, but it's still possible.

By the way, did you consider doing the source address filtering in the prerouting chain already? Doing it that way would allow you to map the scenarios above and resimplify the forward chain and the mark use.

It would extend the prerouting chain significantly, but in the NAT table, that's AFAIK only run for untracked packets anyway, so it would likely be a better place to stuff complexity.

@Lykos153
Contributor Author

Lykos153 commented Sep 23, 2022

Thanks for your feedback! So the new design accounts for the fact that "Pod is matched by the Service" and "Pod is matched by the NetworkPolicy" are orthogonal. It doesn't touch the current aggregation of the Forwards in the controller, nor the prerouting and postrouting chains in the agent.

In short, the controller:

  • Collects all network policies that apply to ingress and puts them into the JSON object network-policies
  • Finds matching pods and stores the relation between pods and policies in the JSON object policy-assignments

And the agent:

  • creates prerouting and postrouting rules as usual from the ingress object
  • creates a chain for each network policy found in network-policies that looks like this:
chain nginx {
    mark set 0x2 or 0x1 ct mark set meta mark
    ip saddr 172.17.0.0/16 jump nginx-cidr0;
    ip saddr 46.189.32.0/32 accept; # <- this is still missing
    return;
}
  • creates a chain for each ipBlock that has entries in 'except' that looks like this:
chain nginx-cidr0 {
    ip saddr 172.17.1.0/24 return;
    accept;
}
  • adds rules to the forward chain to send packets to their policy chains according to policy-assignments
  • adds a default drop rule for all packets that have ever seen a policy.
chain forward {
    ct mark 0x1 and 0x1 ip daddr 10.9.8.7 jump nginx;
    ct mark 0x2 or 0x1 drop;
    ct mark 0x1 and 0x1 accept;
}

At the end of each policy chain, we must return because there could be a different policy that accepts the packet. That's why each packet that has ever been through a policy chain is marked with an additional bit and then dropped at the end.

What is still missing now:

  • Ports aren't yet handled by the agent
  • The agent still generates empty chains for ipBlocks that don't have except entries
  • A proper way to add a bit to an existing mark without touching the others. I'd be thankful for a hint because I'm having a hard time finding anything in the nftables docs. (A rough guess is sketched below this list.)
  • Proper formatting of the template
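
Maybe something along these lines works for the mark question (a bitwise or on the existing packet mark, then copying it to the conntrack mark as before), but I haven't verified the syntax:

meta mark set meta mark or 0x2 ct mark set meta mark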

@Lykos153
Contributor Author

The purpose of the mark is to have a simpler and more accurate rule in the forward chain, similar to your new use.

I was actually asking about the mask. We do mark set {{ $cfg.FWMarkBits | printf "0x%x" }} and {{ $cfg.FWMarkMask | printf "0x%x" }} where both mark and mask are 0x1 by default. What exactly is the benefit over just doing mark set {{ $cfg.FWMarkBits | printf "0x%x" }}?

@Lykos153 Lykos153 force-pushed the feature/interpret-networkpolicies branch 2 times, most recently from 0bb49c8 to acf6059 Compare September 29, 2022 17:29
@horazont
Collaborator

The purpose of the mark is to have a simpler and more accurate rule in the forward chain, similar to your new use.

I was actually asking about the mask. We do mark set {{ $cfg.FWMarkBits | printf "0x%x" }} and {{ $cfg.FWMarkMask | printf "0x%x" }} where both mark and mask are 0x1 by default. What exactly is the benefit over just doing mark set {{ $cfg.FWMarkBits | printf "0x%x" }}?

The purpose is to allow it to co-exist with potential other use of marks, as we're injecting these rules into an existing framework of nftables rules.
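
For example, a hypothetical other component might own the upper bits of the mark. The mask describes the bit(s) that belong to us, so matching and setting can be restricted to those, sketched here in plain nft syntax (not the template's literal output):

# match only the bit covered by our mask, ignoring bits owned by others
meta mark & 0x1 == 0x1 accept;
# set our bit without clearing foreign bits
meta mark set meta mark or 0x1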

@horazont
Collaborator

Could you share a generated nft file? That would help with understanding the current end result. I think that the marks you added there are not necessary, but I'd have to see it in full.

@Lykos153
Contributor Author

Lykos153 commented Sep 30, 2022

load-balancer-config:
  ingress:
    - address: 185.187.1.1
      ports:
        - protocol: TCP
          inbound-port: 80
          destination-addresses:
            - 10.9.8.7
            - 10.9.8.6
          destination-port: 8080
  policy-assignments:
    - address: 10.9.8.7
      network-policies:
      - pol1
    - address: 10.10.2.1
      network-policies:
      - pol1
      - pol2
  network-policies:
    - name: pol1
      ports:
        - protocol: TCP
          port: 80
        - protocol: TCP
          port: 8080
          end-port: 8090
        - protocol: UDP
      allowed-ip-blocks:
        - cidr: 185.187.0.0/16
          except:
          - 185.187.13.0/24
        - cidr: 185.187.19.0/24
    - name: pol2
      allowed-ip-blocks:
        - cidr: 185.187.19.0/24
          except:
          - 185.187.19.0/25
        - cidr: 185.187.19.0/23
          except:
          - 185.187.19.100/32
          - 185.187.19.102/32
          - 185.187.19.109/32
          - 185.187.19.110/32

will result in

table inet filter {
	chain forward {
		ct mark 0x1 and 0x1 ip daddr 10.9.8.7 tcp dport 80 jump pol1;
		ct mark 0x1 and 0x1 ip daddr 10.9.8.7 tcp dport 8080-8090 jump pol1;
		ct mark 0x1 and 0x1 ip daddr 10.9.8.7 udp jump pol1;
		ct mark 0x1 and 0x1 ip daddr 10.10.2.1 tcp dport 80 jump pol1;
		ct mark 0x1 and 0x1 ip daddr 10.10.2.1 tcp dport 8080-8090 jump pol1;
		ct mark 0x1 and 0x1 ip daddr 10.10.2.1 udp jump pol1;
		ct mark 0x1 and 0x1 ip daddr 10.10.2.1 jump pol2;
		ct mark 0x2 or 0x1 drop;
		ct mark 0x1 and 0x1 accept;
	}
	chain pol1 {
		mark set 0x2 or 0x1 ct mark set meta mark
		ip saddr 185.187.0.0/16 jump pol1-cidr0;
		ip saddr 185.187.19.0/24 accept;
		return;
	}
	chain pol1-cidr0 {
		ip saddr 185.187.13.0/24 return;
		accept;
	}
	chain pol2 {
		mark set 0x2 or 0x1 ct mark set meta mark
		ip saddr 185.187.19.0/24 jump pol2-cidr0;
		ip saddr 185.187.19.0/23 jump pol2-cidr1;
		return;
	}
	chain pol2-cidr0 {
		ip saddr 185.187.19.0/25 return;
		accept;
	}
	chain pol2-cidr1 {
		ip saddr 185.187.19.100/32 return;
		ip saddr 185.187.19.102/32 return;
		ip saddr 185.187.19.109/32 return;
		ip saddr 185.187.19.110/32 return;
		accept;
	}
}

table ip nat {
	chain prerouting {
		ip daddr 185.187.1.1 tcp dport 80 mark set 0x1 and 0x1 ct mark set meta mark dnat to numgen inc mod 2 map {0 : 10.9.8.6, 1 : 10.9.8.7, } : 8080;
	}

	chain postrouting {
		mark 0x1 and 0x1 masquerade;
	}
}

If I didn't use the mark, then packets that were not accepted inside pol1 or pol2 would fall through to ct mark 0x1 and 0x1 accept;. However, if there exists at least one policy, we want to drop everything that isn't matched by any policy. But policies can only accept and never drop, because there could be a different policy that would accept the packet. If I changed the policy chains such that they don't return but drop at the end, then pol2 would never be evaluated for 10.10.2.1.

@horazont
Collaborator

horazont commented Oct 5, 2022

How about a scheme like this:

table ip nat {
    chain prerouting {
        ip daddr 185.187.1.1 tcp dport 80 jump lbaas-svc1;
    }

    chain lbaas-svc1 {
        ip saddr 185.187.0.0/16 jump lbaas-svc1-cidr1;
        ip saddr 185.187.19.0/24 goto lbaas-svc1-dnat;
        drop;  # maybe? should be valid if I'm not mistaken. note that goto does not return.
    }

    chain lbaas-svc1-cidr1 {
	ip saddr 185.187.13.0/24 return;
        goto lbaas-svc1-dnat;
    }

    chain lbaas-svc1-dnat {
        mark set 0x1 and 0x1 ct mark set meta mark dnat to numgen inc mod 2 map {0 : 10.9.8.6, 1 : 10.9.8.7, } : 8080;
    }

    chain postrouting {
        mark 0x1 and 0x1 masquerade;
    }
}

table inet filter {
    chain forward {
        ct mark 0x1 and 0x1 accept;
    }
}

This has the following advantages:

  • Avoids creation of conntrack entries for traffic which is not allowed.
  • Runs the network policy logic only once per flow, not on each packet

The downside is that this won't interrupt already ongoing connections when a different network policy is applied, though I'm not sure that can be expected anyway.

@Lykos153
Contributor Author

Lykos153 commented Oct 5, 2022

This approach does not account for the orthogonality of "Pod is matched by the Service" and "Pod is matched by the NetworkPolicy": it makes the (reasonable?) assumption that policies apply to all pods of a service, which contradicts your statement from above:

The other way around (having a Service and only letting a subset of pods access it) is probably less sensical, but it's still possible.

Because k8s does allow a policy to apply to only a single pod of a service, implementing it like this would mean that the controller would have to iterate over all pods of a service and then merge (and deduplicate) the policies before serving them to the agent. This would result in more complexity and in behavior that is not consistent with how policies work in k8s (adding a policy to one pod of the service would lock down the whole service).

@Lykos153
Contributor Author

Lykos153 commented Oct 5, 2022

I cannot assess the performance impact of these points:

  • Avoids creation of conntrack entries for traffic which is not allowed.
  • Runs the network policy logic only once per flow, not on each packet

Do you think the performance benefit of your proposal outweighs the added complexity and the nonconformity?

@horazont
Collaborator

horazont commented Oct 7, 2022

Because k8s does allow a policy to apply to only a single pod of a service, implementing it like this would mean that the controller would have to iterate over all pods of a service and then merge (and deduplicate) the policies before serving them to the agent. This would result in more complexity and in behavior that is not consistent with how policies work in k8s (adding a policy to one pod of the service would lock down the whole service).

That's a very good point, I had missed that. So we should stay with your rough layout.

In any case, in your current approach you don't need to actually transfer the mark you set in the forward chain to the conntrack mark, because AIUI the mark will stick on the packet until it leaves the system.

Hence, there's no need for moving it to conntrack, as the rules will be re-evaluated for each packet anyway.
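
Sketching that against the pol1 example above (assuming masked mark matches are available), the policy chain would only touch the packet mark and the forward chain would match it directly:

chain pol1 {
    meta mark set meta mark or 0x2
    ip saddr 185.187.0.0/16 jump pol1-cidr0;
    ip saddr 185.187.19.0/24 accept;
    return;
}

chain forward {
    ct mark 0x1 and 0x1 ip daddr 10.9.8.7 tcp dport 80 jump pol1;
    meta mark & 0x2 == 0x2 drop;
    ct mark 0x1 and 0x1 accept;
}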

And I still wonder if you can get around the marking using goto.

internal/model/loadbalancer.go
internal/controller/model_pod.go
internal/controller/model_pod.go
internal/controller/model_pod_test.go
internal/controller/model_pod_test.go
internal/agent/nftables_generator.go
internal/agent/nftables_generator.go
@Lykos153 Lykos153 marked this pull request as ready for review October 17, 2022 09:38
@Lykos153 Lykos153 force-pushed the feature/interpret-networkpolicies branch 3 times, most recently from ec71fa2 to 94b8846 Compare October 17, 2022 14:31
@Lykos153
Contributor Author

Example NetworkPolicy -> nftables.conf

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: nginx
  namespace: default
spec:
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          project: myproject
    ports:
    - port: 80
      protocol: TCP
  podSelector:
    matchLabels:
      app: nginx
  policyTypes:
  - Ingress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  egress:
  - ports:
    - port: 5978
      protocol: TCP
    to:
    - ipBlock:
        cidr: 10.0.0.0/24
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - port: 6379
      protocol: TCP
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  - Egress

table inet filter {
        chain forward {
                ct mark 0x1 and 0x1 ip daddr 10.244.175.10 goto pod-10.244.175.10;
                ct mark 0x1 and 0x1 ip daddr 10.244.175.11 goto pod-10.244.175.11;
                ct mark 0x1 and 0x1 accept;
        }
        chain pod-10.244.175.10 {
                jump nginx;
                drop;
        }
        chain pod-10.244.175.11 {
                jump nginx;
                drop;
        }
        chain nginx {
        }
        chain test-network-policy {
                jump test-network-policy-rule0;
        }
        chain test-network-policy-rule0 {
                ip saddr 172.17.0.0/16 tcp dport {6379} jump test-network-policy-rule0-cidr0;
        }
        chain test-network-policy-rule0-cidr0 {
                ip saddr 172.17.1.0/24 return;
                accept;
        }
}

table ip nat {
        chain prerouting {
                ip daddr 172.30.154.141 tcp dport 80 mark set 0x1 and 0x1 ct mark set meta mark dnat to numgen inc mod 2 map {0 : 10.244.175.12, 1 : 10.244.175.13, } : 80;
                ip daddr 172.30.154.174 tcp dport 80 mark set 0x1 and 0x1 ct mark set meta mark dnat to numgen inc mod 2 map {0 : 10.244.175.10, 1 : 10.244.175.11, } : 80;
        }

        chain postrouting {
                mark 0x1 and 0x1 masquerade;
        }
}

Collaborator

@horazont horazont left a comment


If I'm not mistaken, at least one unit test needs to be added (against 158c8cd) and there either needs to be some commenting or some refactoring (or both!)

internal/agent/nftables_generator.go
internal/agent/nftables_generator.go
internal/controller/model_pod.go
internal/controller/model_pod.go
internal/model/loadbalancer.go
@Lykos153 Lykos153 force-pushed the feature/interpret-networkpolicies branch from 5d732b1 to 9570ec8 Compare October 20, 2022 08:40
This commit adds two new elements to the second hierarchy level of the model:
NetworkPolicies and PolicyAssignments.

Each NetworkPolicy entry can contain zero or more AllowedIngresses which in
turn can contain zero or more IPBlockFilters and zero or more PortFilters.
namespaceSelectors and podSelectors are not part of the model.

PolicyAssignments contains a mapping from one podIP to zero or more
NetworkPolicy names.

The new entries in the model do not interfere with the existing Ingress
element. This accounts for the fact that "Pod is matched by a Service" and
"Pod is matched by a NetworkPolicy" are orthogonal.
@Lykos153 Lykos153 force-pushed the feature/interpret-networkpolicies branch 2 times, most recently from 4c32a41 to be28618 Compare October 20, 2022 15:28
Collaborator

@horazont horazont left a comment


I tested this in my cluster and it mostly works as expected. I have however found a way to make it fail spectacularly, which you should probably fix :-).

internal/controller/model_pod_test.go
internal/controller/model_pod.go
internal/agent/nftables_generator.go
@Lykos153 Lykos153 force-pushed the feature/interpret-networkpolicies branch from be28618 to 578998b Compare November 3, 2022 16:29
This commit adds support for converting NetworkPolicies into nftable
rules:
* For each NetworkPolicy entry in the model, a new chain is created
  in the inet filter table which contains jump rules to the corresponding
  IngressRule chains.
* For each AllowedIngress inside each NetworkPolicy, a new
  ingressRuleChain is created.
* Inside the IngressRuleChain, for each combination of PortFilter and
  IPBlockFilter, an 'accept' rule is created, except for those
  IPBlockFilters that contain a 'Block' list. Those jump to the
  corresponding chain (see below) instead. If there are no PortFilters,
  the rules only contain source addresses. If there are no IPBlockFilters,
  the rules only contain ports. If there are neither IPBlockFilters nor
  PortFilters, the rule breaks down to 'accept' everything.
* For each IPBlockFilter that contains an 'Except' list, a new chain is
  created. Inside this chain, for each entry in the 'Block' list, a
  'drop' rule is created. The chain's default verdict is 'accept'.

The NetworkPolicy chains are not yet reachable at this point. This will
be added in a separate commit.

For each podIP for which a PolicyAssignment exists in the model,
* a new chain in the filter table with default 'drop' is created.
  For each NetworkPolicy assigned to this podIP, a jump rule is added
  that points to the chain of the respective NetworkPolicy.
* a 'goto' rule is added to the forward chain that points to the podIP
  specific chain.

The existing code to generate the nat table remains untouched.
NetworkPolicies are handled after the DNAT because only then is the
podIP known. This accounts for the fact that "Pod is matched by the Service"
and "Pod is matched by the NetworkPolicy" are orthogonal.

This commit adds a Python script to hack/debug-agent which can be used
to debug a (locally running) agent by manually sending requests to it.
The requests are read from request.yaml.

This commit adds support to the controller for creating NetworkPolicy and
PolicyAssignment entries in the model.

For each NetworkPolicy in the cluster which applies to Ingress
(spec.PolicyTypes contains "Ingress"), a NetworkPolicy entry in the
model is created. All IngressRules that contain an ipBlock are added
to the NetworkPolicy entry as AllowedIngress. Policies that only contain
namespaceSelectors or podSelectors result in a NetworkPolicy entry
without AllowedIngress.

Also, for each NetworkPolicy in the cluster which applies to Ingress,
and for each pod to which it applies, an entry in PolicyAssignment is
created.
@Lykos153 Lykos153 force-pushed the feature/interpret-networkpolicies branch from 578998b to 3035522 Compare November 3, 2022 16:37
@horazont horazont merged commit 16fbf35 into cloudandheat:master Nov 4, 2022