Added MLMD GRPC Server Network Policy #686
Conversation
Force-pushed from ab80834 to 77337bf
Change to PR detected. A new PR build was completed.
app: ds-pipeline-metadata-grpc-{{ .Name }}
component: data-science-pipelines
ingress:
- ports:
Should we maybe add these ports to the ingress too? https://github.com/opendatahub-io/data-science-pipelines-operator/blob/main/config/internal/ml-metadata/metadata-envoy.service.yaml.tmpl#L12
It looks like envoy and the other MLMD services are utilizing them.
Nope. We have these two flows:
stuff --1--> envoy --2--> mlmd
stuff --3--> mlmd
Things that connect to envoy (arrow 1) need to connect on those two ports you linked to.
Envoy itself only needs to connect to 8080 on mlmd (arrow 2).
Things that connect to mlmd directly only need to connect to 8080 on mlmd (arrow 3).
Arrows 2 and 3 are the ones we care about in this NetworkPolicy, so 8080 is all we need.
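For reference, a minimal sketch of how the resulting policy could look with only port 8080 exposed. The metadata fields and the policyTypes block are assumptions for illustration; the pod labels match the diff excerpt above:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ds-pipeline-metadata-grpc-{{ .Name }}   # assumed name, for illustration only
  namespace: {{ .Namespace }}
spec:
  podSelector:
    matchLabels:
      app: ds-pipeline-metadata-grpc-{{ .Name }}
      component: data-science-pipelines
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - protocol: TCP
          port: 8080   # MLMD gRPC port; both envoy (arrow 2) and direct clients (arrow 3) use it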
namespaceSelector:
  matchLabels:
    kubernetes.io/metadata.name: {{ .Namespace }}
This is not needed; by default, if no namespaceSelector is provided, pods in the policy's own namespace are selected:
If NamespaceSelector is also set, then the NetworkPolicyPeer as a whole selects the Pods matching PodSelector in the Namespaces selected by NamespaceSelector. Otherwise it selects the Pods matching PodSelector in the policy's own Namespace. [1]
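So, assuming the intent is simply to allow any pod in the policy's own namespace to reach the gRPC port, the peer could be reduced to just a podSelector. A sketch, not the actual template:

ingress:
  - from:
      - podSelector: {}   # all pods in the policy's own namespace; no namespaceSelector needed
    ports:
      - protocol: TCP
        port: 8080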
I don't think the testing instructions are sufficient. We also need to verify that pods not selected by the policy are denied access. This verification may require you to provide a pod from which you can connect to the MLMD gRPC service via the CLI in some way.
Updated the testing instructions, please check.
Force-pushed from db1ce46 to 0e72ead
Change to PR detected. A new PR build was completed.
Signed-off-by: vmudadla <vmudadla@redhat.com>
Force-pushed from d84d6db to d1cef16
Change to PR detected. A new PR build was completed.
/lgtm
Perfect, thanks @VaniHaripriya! /lgtm
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: HumairAK
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
The issue resolved by this Pull Request:
Resolves RHOAIENG-10994
Description of your changes:
Added a network policy for the MLMD gRPC server.
Testing instructions
curl -v ds-pipeline-metadata-grpc-myproject.<namespace>.svc.cluster.local:8080
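For completeness, one way to exercise both the allowed and the denied path from the CLI. This is only a sketch: the np-test pod names, the UBI image, and the second namespace are illustrative assumptions, and the service hostname follows the one used above. A successful TCP connect indicates access is allowed; a timeout indicates the policy is blocking traffic.

# Allowed path: a pod inside the DSPA namespace should reach the gRPC port
# (curl will not complete a gRPC exchange, but a successful TCP connect is enough).
oc run np-test-allowed -n <namespace> --image=registry.access.redhat.com/ubi9/ubi \
  --restart=Never -- sleep 3600
oc exec -n <namespace> np-test-allowed -- \
  curl -v --max-time 5 ds-pipeline-metadata-grpc-myproject.<namespace>.svc.cluster.local:8080

# Denied path: the same probe from a pod in a different namespace should time out.
oc run np-test-denied -n <other-namespace> --image=registry.access.redhat.com/ubi9/ubi \
  --restart=Never -- sleep 3600
oc exec -n <other-namespace> np-test-denied -- \
  curl -v --max-time 5 ds-pipeline-metadata-grpc-myproject.<namespace>.svc.cluster.local:8080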
Checklist