PolicyEndpoint Missing Rules with Named Ports #71
@aballman can you share the pod/deployment definition your netpol applies to? If not the full definition, the ports section alone would also help, since integer ports work but named (string) ports do not. Thanks.
Here's the ports definition:
Thanks for sharing the definition with us. It looks fine to me. Are you using a Service? Can you share the Service YAML? Or, if it's OK, sharing the YAML for the Service and its pods/deployment may make it easier for us to reproduce and identify the issue with the netpol you have already shared. Thanks.
Sure, I only posted the snippet for brevity. It's all from the public ArgoCD chart/manifests. Thanks for investigating!

Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "12"
labels:
app.kubernetes.io/component: dex-server
app.kubernetes.io/instance: argocd
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: argocd-dex-server
app.kubernetes.io/part-of: argocd
app.kubernetes.io/version: v2.9.3
helm.sh/chart: argo-cd-5.51.6
name: argocd-dex-server
namespace: argocd
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 3
selector:
matchLabels:
app.kubernetes.io/instance: argocd
app.kubernetes.io/name: argocd-dex-server
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
annotations:
checksum/cmd-params: 9a94a44fd242e275bbd83fceab03f7a691505234d20aa87dc67d4f27c2f2dea6
kubectl.kubernetes.io/restartedAt: "2024-01-04T15:01:08-08:00"
creationTimestamp: null
labels:
app.kubernetes.io/component: dex-server
app.kubernetes.io/instance: argocd
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: argocd-dex-server
app.kubernetes.io/part-of: argocd
app.kubernetes.io/version: v2.9.3
helm.sh/chart: argo-cd-5.51.6
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchLabels:
app.kubernetes.io/name: argocd-dex-server
topologyKey: kubernetes.io/hostname
weight: 100
containers:
- args:
- rundex
command:
- /shared/argocd-dex
- --logformat=json
- --loglevel=warn
env:
- name: ARGOCD_DEX_SERVER_DISABLE_TLS
valueFrom:
configMapKeyRef:
key: dexserver.disable.tls
name: argocd-cmd-params-cm
optional: true
image: ghcr.io/dexidp/dex:v2.37.0
imagePullPolicy: IfNotPresent
name: dex-server
ports:
- containerPort: 5556
name: http
protocol: TCP
- containerPort: 5557
name: grpc
protocol: TCP
- containerPort: 5558
name: metrics
protocol: TCP
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
runAsNonRoot: true
seccompProfile:
type: RuntimeDefault
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /tmp/oidc
name: google-json
readOnly: true
- mountPath: /shared
name: static-files
- mountPath: /tmp
name: dexconfig
- mountPath: /tls
name: argocd-dex-server-tls
dnsPolicy: ClusterFirst
initContainers:
- command:
- /bin/cp
- -n
- /usr/local/bin/argocd
- /shared/argocd-dex
image: quay.io/argoproj/argocd:v2.9.3
imagePullPolicy: IfNotPresent
name: copyutil
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
runAsNonRoot: true
seccompProfile:
type: RuntimeDefault
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /shared
name: static-files
- mountPath: /tmp
name: dexconfig
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
runAsUser: 999
serviceAccount: argocd-dex-server
serviceAccountName: argocd-dex-server
terminationGracePeriodSeconds: 30
tolerations:
- key: CriticalAddonsOnly
operator: Exists
volumes:
- name: static-files
- name: dexconfig
- name: argocd-dex-server-tls
secret:
defaultMode: 420
items:
- key: tls.crt
path: tls.crt
- key: tls.key
path: tls.key
- key: ca.crt
path: ca.crt
optional: true
secretName: argocd-dex-server-tls
- name: google-json
secret:
defaultMode: 420
items:
- key: oidc.google.serviceAccountJSON
path: googleAuth.json
          secretName: argocd-secret

Service:

apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/component: dex-server
app.kubernetes.io/instance: argocd
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: argocd-dex-server
app.kubernetes.io/part-of: argocd
app.kubernetes.io/version: v2.9.3
helm.sh/chart: argo-cd-5.51.6
name: argocd-dex-server
namespace: argocd
spec:
clusterIP: 172.20.220.123
clusterIPs:
- 172.20.220.123
internalTrafficPolicy: Cluster
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- name: http
port: 5556
targetPort: http
- name: grpc
port: 5557
targetPort: grpc
selector:
app.kubernetes.io/instance: argocd
    app.kubernetes.io/name: argocd-dex-server

Default NetworkPolicy + PolicyEndpoint (Broken):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: argocd-dex-server
namespace: argocd
spec:
ingress:
- from:
- podSelector:
matchLabels:
app.kubernetes.io/instance: argocd
app.kubernetes.io/name: argocd-server
ports:
- port: http
protocol: TCP
- port: grpc
protocol: TCP
podSelector:
matchLabels:
app.kubernetes.io/instance: argocd
app.kubernetes.io/name: argocd-dex-server
policyTypes:
- Ingress
---
apiVersion: networking.k8s.aws/v1alpha1
kind: PolicyEndpoint
metadata:
name: argocd-dex-server-d2kxp
namespace: argocd
spec:
podIsolation:
- Ingress
podSelector:
matchLabels:
app.kubernetes.io/instance: argocd
app.kubernetes.io/name: argocd-dex-server
podSelectorEndpoints:
- hostIP: 10.146.52.181
name: argocd-dex-server-5c6dfff575-mhvzq
namespace: argocd
podIP: 10.146.52.212
policyRef:
name: argocd-dex-server
    namespace: argocd

Updated NetworkPolicy + PolicyEndpoint (Working):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: argocd-dex-server-supplemental
namespace: argocd
spec:
ingress:
- from:
- podSelector:
matchLabels:
app.kubernetes.io/instance: argocd
app.kubernetes.io/name: argocd-server
ports:
- port: 5556
protocol: TCP
- port: 5557
protocol: TCP
podSelector:
matchLabels:
app.kubernetes.io/instance: argocd
app.kubernetes.io/name: argocd-dex-server
policyTypes:
- Ingress
---
apiVersion: networking.k8s.aws/v1alpha1
kind: PolicyEndpoint
metadata:
name: argocd-dex-server-supplemental-h4xdv
namespace: argocd
spec:
ingress:
- cidr: 10.146.54.136
ports:
- port: 5556
protocol: TCP
- port: 5557
protocol: TCP
- cidr: 10.146.63.240
ports:
- port: 5556
protocol: TCP
- port: 5557
protocol: TCP
podIsolation:
- Ingress
podSelector:
matchLabels:
app.kubernetes.io/instance: argocd
app.kubernetes.io/name: argocd-dex-server
podSelectorEndpoints:
- hostIP: 10.146.52.181
name: argocd-dex-server-5c6dfff575-mhvzq
namespace: argocd
podIP: 10.146.52.212
policyRef:
name: argocd-dex-server-supplemental
    namespace: argocd
We're experiencing the same issue. You can reproduce it with these steps:

helm repo add argo https://argoproj.github.io/argo-helm && helm repo update
helm upgrade argocd -n argocd argo/argo-cd --version 5.53.11 \
--set global.networkPolicy.create=true \
--create-namespace --install
❯ kubectl get networkpolicy,policyendpoints -n argocd
NAME POD-SELECTOR AGE
networkpolicy.networking.k8s.io/argocd-application-controller app.kubernetes.io/instance=argocd,app.kubernetes.io/name=argocd-application-controller 88s
networkpolicy.networking.k8s.io/argocd-dex-server app.kubernetes.io/instance=argocd,app.kubernetes.io/name=argocd-dex-server 88s
networkpolicy.networking.k8s.io/argocd-redis app.kubernetes.io/instance=argocd,app.kubernetes.io/name=argocd-redis 88s
networkpolicy.networking.k8s.io/argocd-repo-server app.kubernetes.io/instance=argocd,app.kubernetes.io/name=argocd-repo-server 88s
networkpolicy.networking.k8s.io/argocd-server app.kubernetes.io/instance=argocd,app.kubernetes.io/name=argocd-server 88s
NAME AGE
policyendpoint.networking.k8s.aws/argocd-application-controller-tsfnk 88s
policyendpoint.networking.k8s.aws/argocd-dex-server-z4cvc 88s
policyendpoint.networking.k8s.aws/argocd-redis-hcnvq 88s
policyendpoint.networking.k8s.aws/argocd-repo-server-fdpd6 88s
policyendpoint.networking.k8s.aws/argocd-server-jp6dp 88s

The problem was fixed by changing the named ports to port numbers:

❯ kubectl get networkpolicy -n argocd argocd-repo-server -ojsonpath='{.spec.ingress[0].ports}'
[{"port":"repo-server","protocol":"TCP"}]
❯ kubectl get policyendpoints -n argocd argocd-repo-server-fdpd6 -ojsonpath='{.spec.ingress}'
# No values found in ingress fields
❯ kubectl patch networkpolicy -n argocd argocd-repo-server --type='json' \
-p='[{"op": "replace", "path": "/spec/ingress/0/ports/0/port", "value": 8081}]'
networkpolicy.networking.k8s.io/argocd-repo-server patched
❯ kubectl get policyendpoints -n argocd argocd-repo-server-fdpd6 -ojsonpath='{.spec.ingress}'
[{"cidr":"10.0.2.194","ports":[{"port":8081,"protocol":"TCP"}]},{"cidr":"10.0.1.66","ports":[{"port":8081,"protocol":"TCP"}]},{"cidr":"10.0.1.126","ports":[{"port":8081,"protocol":"TCP"}]},{"cidr":"10.0.1.85","ports":[{"port":8081,"protocol":"TCP"}]}]
Code reference: amazon-network-policy-controller-k8s/pkg/k8s/pod_utils.go, lines 22 to 45 at commit 0499f4c
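The referenced helper presumably resolves a named port by walking a pod's container ports. A stand-alone sketch of that lookup follows; the types and function names here are illustrative stand-ins, not the controller's actual code (which uses the k8s.io/api types):

```go
package main

import "fmt"

// Minimal stand-ins for the Kubernetes API types involved.
type ContainerPort struct {
	Name          string
	ContainerPort int32
	Protocol      string
}

type Pod struct {
	Ports []ContainerPort
}

// lookupNamedPort resolves a named port against a pod's container ports,
// returning the numeric port and whether a match was found.
func lookupNamedPort(pod Pod, name string) (int32, bool) {
	for _, p := range pod.Ports {
		if p.Name == name {
			return p.ContainerPort, true
		}
	}
	return 0, false
}

func main() {
	// Destination pod (dex-server) defines the named ports.
	dex := Pod{Ports: []ContainerPort{
		{Name: "http", ContainerPort: 5556, Protocol: "TCP"},
		{Name: "grpc", ContainerPort: 5557, Protocol: "TCP"},
	}}
	port, _ := lookupNamedPort(dex, "http")
	fmt.Println(port) // 5556

	// A source pod that doesn't define the name yields no match, which is
	// why checking source pods drops the rule entirely.
	src := Pod{Ports: []ContainerPort{{Name: "server", ContainerPort: 8080, Protocol: "TCP"}}}
	_, ok := lookupNamedPort(src, "http")
	fmt.Println(ok) // false
}
```

This illustrates the symptom in the thread: if the same lookup is run against source pods (which have their own, different port names), it finds nothing and the PolicyEndpoint ingress list comes out empty.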
Hmm, it doesn't work if we have the same named port in both the pod and service spec.
❯ kubectl get svc -n argocd argocd-redis -ojsonpath='{.spec.ports}'
[{"name":"redis","port":6379,"protocol":"TCP","targetPort":"redis"}]
❯ kubectl get deploy -n argocd argocd-redis -ojsonpath='{.spec.template.spec.containers[0].ports}'
[{"containerPort":6379,"name":"redis","protocol":"TCP"}]
❯ kubectl get networkpolicy -n argocd argocd-redis -ojsonpath='{.spec.ingress[0].ports}'
[{"port":"redis","protocol":"TCP"}]
❯ kubectl get policyendpoints -n argocd argocd-redis-b96c7 -ojsonpath='{.spec.ingress}'
# No values found in ingress fields

The other example:

helm upgrade argocd -n argocd argo/argo-cd --version 5.53.11 \
--set global.networkPolicy.create=true \
--set repoServer.service.portName=repo-server \
--create-namespace --install
❯ kubectl get svc -n argocd argocd-repo-server -ojsonpath='{.spec.ports}'
[{"name":"repo-server","port":8081,"protocol":"TCP","targetPort":"repo-server"}]
❯ kubectl get deploy -n argocd argocd-repo-server -ojsonpath='{.spec.template.spec.containers[0].ports}'
[{"containerPort":8081,"name":"repo-server","protocol":"TCP"},{"containerPort":8084,"name":"metrics","protocol":"TCP"}]
❯ kubectl get networkpolicy -n argocd argocd-repo-server -ojsonpath='{.spec.ingress[0].ports}'
[{"port":"repo-server","protocol":"TCP"}]
❯ kubectl get policyendpoints -n argocd argocd-repo-server-c8gf4 -ojsonpath='{.spec.ingress}'
# No values found in ingress fields
Do you have ports defined in your source pods?
Yes, ports are defined on the source pod:

- containerPort: 8080
name: server
protocol: TCP
- containerPort: 8083
name: metrics
  protocol: TCP

Full Manifest - ArgoCD Server:

apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "30"
labels:
app.kubernetes.io/component: server
app.kubernetes.io/instance: argocd
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: argocd-server
app.kubernetes.io/part-of: argocd
app.kubernetes.io/version: v2.9.3
argoproj.io/application-name: sh-argocd
helm.sh/chart: argo-cd-5.51.6
name: argocd-server
namespace: argocd
spec:
progressDeadlineSeconds: 600
replicas: 2
revisionHistoryLimit: 3
selector:
matchLabels:
app.kubernetes.io/instance: argocd
app.kubernetes.io/name: argocd-server
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
annotations:
checksum/cmd-params: 9a94a44fd242e275bbd83fceab03f7a691505234d20aa87dc67d4f27c2f2dea6
kubectl.kubernetes.io/restartedAt: "2023-12-04T09:57:37-08:00"
creationTimestamp: null
labels:
app.kubernetes.io/component: server
app.kubernetes.io/instance: argocd
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: argocd-server
app.kubernetes.io/part-of: argocd
app.kubernetes.io/version: v2.9.3
helm.sh/chart: argo-cd-5.51.6
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchLabels:
app.kubernetes.io/name: argocd-server
topologyKey: kubernetes.io/hostname
weight: 100
containers:
- args:
- /usr/local/bin/argocd-server
- --port=8080
- --metrics-port=8083
- --loglevel=warn
env:
- name: ARGOCD_SERVER_INSECURE
valueFrom:
configMapKeyRef:
key: server.insecure
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_SERVER_BASEHREF
valueFrom:
configMapKeyRef:
key: server.basehref
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_SERVER_ROOTPATH
valueFrom:
configMapKeyRef:
key: server.rootpath
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_SERVER_LOGFORMAT
valueFrom:
configMapKeyRef:
key: server.log.format
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_SERVER_LOG_LEVEL
valueFrom:
configMapKeyRef:
key: server.log.level
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_SERVER_REPO_SERVER
valueFrom:
configMapKeyRef:
key: repo.server
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_SERVER_DEX_SERVER
valueFrom:
configMapKeyRef:
key: server.dex.server
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_SERVER_DISABLE_AUTH
valueFrom:
configMapKeyRef:
key: server.disable.auth
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_SERVER_ENABLE_GZIP
valueFrom:
configMapKeyRef:
key: server.enable.gzip
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_SERVER_REPO_SERVER_TIMEOUT_SECONDS
valueFrom:
configMapKeyRef:
key: server.repo.server.timeout.seconds
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_SERVER_X_FRAME_OPTIONS
valueFrom:
configMapKeyRef:
key: server.x.frame.options
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_SERVER_CONTENT_SECURITY_POLICY
valueFrom:
configMapKeyRef:
key: server.content.security.policy
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_SERVER_REPO_SERVER_PLAINTEXT
valueFrom:
configMapKeyRef:
key: server.repo.server.plaintext
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_SERVER_REPO_SERVER_STRICT_TLS
valueFrom:
configMapKeyRef:
key: server.repo.server.strict.tls
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_SERVER_DEX_SERVER_PLAINTEXT
valueFrom:
configMapKeyRef:
key: server.dex.server.plaintext
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_SERVER_DEX_SERVER_STRICT_TLS
valueFrom:
configMapKeyRef:
key: server.dex.server.strict.tls
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_TLS_MIN_VERSION
valueFrom:
configMapKeyRef:
key: server.tls.minversion
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_TLS_MAX_VERSION
valueFrom:
configMapKeyRef:
key: server.tls.maxversion
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_TLS_CIPHERS
valueFrom:
configMapKeyRef:
key: server.tls.ciphers
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_SERVER_CONNECTION_STATUS_CACHE_EXPIRATION
valueFrom:
configMapKeyRef:
key: server.connection.status.cache.expiration
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_SERVER_OIDC_CACHE_EXPIRATION
valueFrom:
configMapKeyRef:
key: server.oidc.cache.expiration
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_SERVER_LOGIN_ATTEMPTS_EXPIRATION
valueFrom:
configMapKeyRef:
key: server.login.attempts.expiration
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_SERVER_STATIC_ASSETS
valueFrom:
configMapKeyRef:
key: server.staticassets
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_APP_STATE_CACHE_EXPIRATION
valueFrom:
configMapKeyRef:
key: server.app.state.cache.expiration
name: argocd-cmd-params-cm
optional: true
- name: REDIS_SERVER
valueFrom:
configMapKeyRef:
key: redis.server
name: argocd-cmd-params-cm
optional: true
- name: REDIS_COMPRESSION
valueFrom:
configMapKeyRef:
key: redis.compression
name: argocd-cmd-params-cm
optional: true
- name: REDISDB
valueFrom:
configMapKeyRef:
key: redis.db
name: argocd-cmd-params-cm
optional: true
- name: REDIS_USERNAME
valueFrom:
secretKeyRef:
key: redis-username
name: argocd-redis-ha-haproxy
optional: true
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
key: redis-password
name: argocd-redis-ha-haproxy
optional: true
- name: ARGOCD_DEFAULT_CACHE_EXPIRATION
valueFrom:
configMapKeyRef:
key: server.default.cache.expiration
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_MAX_COOKIE_NUMBER
valueFrom:
configMapKeyRef:
key: server.http.cookie.maxnumber
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_SERVER_LISTEN_ADDRESS
valueFrom:
configMapKeyRef:
key: server.listen.address
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_SERVER_METRICS_LISTEN_ADDRESS
valueFrom:
configMapKeyRef:
key: server.metrics.listen.address
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_SERVER_OTLP_ADDRESS
valueFrom:
configMapKeyRef:
key: otlp.address
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_APPLICATION_NAMESPACES
valueFrom:
configMapKeyRef:
key: application.namespaces
name: argocd-cmd-params-cm
optional: true
- name: ARGOCD_SERVER_ENABLE_PROXY_EXTENSION
valueFrom:
configMapKeyRef:
key: server.enable.proxy.extension
name: argocd-cmd-params-cm
optional: true
image: <redacted>
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz?full=true
port: server
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
name: server
ports:
- containerPort: 8080
name: server
protocol: TCP
- containerPort: 8083
name: metrics
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: server
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources:
limits:
memory: 2G
requests:
cpu: "1"
memory: 2G
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
runAsNonRoot: true
seccompProfile:
type: RuntimeDefault
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /app/config/ssh
name: ssh-known-hosts
- mountPath: /app/config/tls
name: tls-certs
- mountPath: /app/config/server/tls
name: argocd-repo-server-tls
- mountPath: /app/config/dex/tls
name: argocd-dex-server-tls
- mountPath: /home/argocd
name: plugins-home
- mountPath: /shared/app/custom
name: styles
- mountPath: /tmp
name: tmp
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
runAsUser: 999
serviceAccount: argocd-server
serviceAccountName: argocd-server
terminationGracePeriodSeconds: 30
tolerations:
- key: CriticalAddonsOnly
operator: Exists
volumes:
- name: plugins-home
- name: tmp
- configMap:
defaultMode: 420
name: argocd-ssh-known-hosts-cm
name: ssh-known-hosts
- configMap:
defaultMode: 420
name: argocd-tls-certs-cm
name: tls-certs
- configMap:
defaultMode: 420
name: argocd-styles-cm
optional: true
name: styles
- name: argocd-repo-server-tls
secret:
defaultMode: 420
items:
- key: tls.crt
path: tls.crt
- key: tls.key
path: tls.key
- key: ca.crt
path: ca.crt
optional: true
secretName: argocd-repo-server-tls
- name: argocd-dex-server-tls
secret:
defaultMode: 420
items:
- key: tls.crt
path: tls.crt
- key: ca.crt
path: ca.crt
optional: true
          secretName: argocd-dex-server-tls
I think the issue is we shouldn't check src ports for |
**What type of PR is this?**
bug

**Which issue does this PR fix**: #71

**What does this PR do / Why do we need it**:
The controller shouldn't loop through src pods to check whether their ports match the policy's ports.

**Testing done on this change**:
Unit tests are updated and functional tests were done.

policy applied pod (dst)
```
ports:
- containerPort: 80
  name: http
  protocol: TCP
- containerPort: 443
  name: https
  protocol: TCP
```
src pod
```
ports:
- name: src
  containerPort: 9999
  protocol: TCP
```
svc
```
spec:
  ports:
  - port: 80
    name: http
    targetPort: http
  - port: 443
    name: https
    targetPort: https
```
policyendpoint
```
spec:
  ingress:
  - cidr: 192.168.xx.xxx
    ports:
    - port: 80
      protocol: TCP
    - port: 443
      protocol: TCP
  podIsolation:
  - Ingress
```

**Automation added to e2e**: no

**Will this PR introduce any new dependencies?**: no

**Will this break upgrades or downgrades. Has updating a running cluster been tested?**: no

**Does this PR introduce any user-facing change?**: no

```release-note
```

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
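The fix described in this PR can be sketched as: resolve named policy ports only against the policy-selected destination pods; whether a source pod defines a port with the same name is irrelevant, because a named port in an ingress rule refers to the destination's container ports. This is an illustrative sketch under that assumption, with hypothetical types and names, not the controller's actual code:

```go
package main

import "fmt"

// Illustrative stand-ins for the Kubernetes API types.
type ContainerPort struct {
	Name string
	Port int32
}

type Rule struct {
	Port     string // named port from the NetworkPolicy ingress rule
	Protocol string
}

// resolveIngressPorts resolves named policy ports against the destination
// pod's container ports only. Source pods are never consulted.
func resolveIngressPorts(dstPorts []ContainerPort, rules []Rule) []int32 {
	var out []int32
	for _, r := range rules {
		for _, p := range dstPorts {
			if p.Name == r.Port {
				out = append(out, p.Port)
			}
		}
	}
	return out
}

func main() {
	// Destination (dex-server) container ports from the manifests above.
	dst := []ContainerPort{{Name: "http", Port: 5556}, {Name: "grpc", Port: 5557}}
	rules := []Rule{{Port: "http", Protocol: "TCP"}, {Port: "grpc", Protocol: "TCP"}}
	fmt.Println(resolveIngressPorts(dst, rules)) // [5556 5557]
}
```

With this approach a source pod exposing only, say, port 9999 under a different name no longer causes the resolved port list, and hence the PolicyEndpoint ingress entries, to come out empty.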
I noticed there's a 1.0.6 image published that includes the changes in #72. I gave this a shot in my cluster and it is creating the PE as expected. Thank you for the fix!

For anyone on EKS who is self-managing the addon: make sure the controller is disabled in the CNI config map as described here. Mine was enabled, and the self-managed controller appeared to be doing nothing; no logs after the lease was acquired.

enable-network-policy-controller: "false"
Thanks for testing the fix. We added another improvement and published another version. We will soon release the version to our fully managed service.
@haouc thanks for implementing the fix. Our managed cluster got an update to the … I've written out this example in my original issue here: #81 (comment). Happy to open a separate issue if needed.
We have released the fix to all clusters now. Please refer to the release notes for the updated EKS platform versions. Thanks @yndai, the other issues are being taken care of right now. We can provide an update as soon as we have the release plan available. I am closing this issue and tracking the other issues in #81.
I've recently switched from using Calico to the AWS CNI for enforcing network policies. I'm using ArgoCD, which comes with some default network policies that reference other components by named port. I'd noticed that these policies didn't seem to be taking effect after the switch, so I dug into it, and it seems that the PolicyEndpoint isn't being built correctly.

What I'm providing
What it built
What I expected
The PE is built like the above if the port is specified by number instead of by name.
The deployment and the service both have the ports named and match
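As context for the expected behavior: in the NetworkPolicy API, a named port in an ingress rule refers to a port name on the destination pods selected by the policy's podSelector, so the controller should be able to resolve it to a number there. A minimal sketch of that wiring, with hypothetical values rather than the exact manifests above:

```
# NetworkPolicy ingress rule: port given by name
ports:
- port: http          # refers to a containerPort named "http"
  protocol: TCP
---
# Destination pod (selected by the policy's podSelector)
containers:
- name: app
  ports:
  - containerPort: 5556
    name: http        # the name the policy resolves against
    protocol: TCP
---
# Resulting PolicyEndpoint ingress entry (number, not name)
ingress:
- cidr: 10.0.0.10     # a source pod IP
  ports:
  - port: 5556
    protocol: TCP
```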