Restrict Cluster Role access authorizations #3156
Comments
I'm interested in working on this. Can I get some details on which part I have to refactor?
Since I'm still familiarizing myself with the Operator codebase and associated Helm chart, I'd appreciate clarification on the following:
After that I would also be prepared to help fix the issue. :)
We prefer to handle this as part of the operator. Not everybody using the operator uses Helm to deploy. Also, it would involve more work on the Helm chart side.
I think you should ask this in the Helm chart repository.
Discussed at the SIG meeting:
A ticket should be opened in the Helm chart repo to change the packaging: when the operator is installed in namespace mode, it should use a Role instead of a ClusterRole.
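For illustration only: to the best of my understanding, controller-gen's rbac marker also accepts a namespace parameter, which emits a namespaced Role rule instead of adding the rule to the ClusterRole. A minimal sketch of the difference; the group, resource, verbs, and the observability namespace are placeholders, not the operator's actual requirements:

```go
// Illustrative package-level markers only; not the operator's actual rules.
package controllers

// Without a namespace parameter the rule ends up in the generated ClusterRole
// and therefore applies to every namespace. With namespace=..., controller-gen
// emits a Role scoped to that namespace instead. All values are placeholders.
// +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch
// +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch,namespace=observability
```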
Component(s)
No response
Describe the issue you're reporting
Context
Our current setup uses the OpenTelemetry Operator to make our application traceable. The operator is deployed through a Helm chart. However, our Trivy scanner flags the broad permissions the operator is granted via its Kubernetes ClusterRole.
Revise
Based on my understanding, the OpenTelemetry Operator's current permissions allow it to delete various Kubernetes resources like pods, services, and service accounts. This level of access seems unnecessary for the operator's intended functionality.
The RBAC rules are generated via Go marker comments. The most relevant ones are on the `OpenTelemetryCollectorReconciler` struct, in the Reconcile function.
Suggestion
Restrict the `+kubebuilder:rbac` markers to the permissions that are actually needed, and do not group them in a single Go comment marker.
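As a sketch of that suggestion (placeholder names and verb lists, not the operator's actual requirements), keeping one `+kubebuilder:rbac` marker per resource makes each rule easy to review and lets the verbs be narrowed independently, e.g. dropping delete where it is not needed:

```go
package controllers

import (
	"context"

	ctrl "sigs.k8s.io/controller-runtime"
)

// ExampleReconciler is a placeholder to show marker placement; in the operator
// the markers sit on the OpenTelemetryCollectorReconciler's Reconcile function.
type ExampleReconciler struct{}

// One marker per resource instead of a single grouped marker, so each verb
// list can be reviewed and tightened on its own. Verbs are illustrative.
// +kubebuilder:rbac:groups="",resources=pods,verbs=get;list;watch
// +kubebuilder:rbac:groups="",resources=services,verbs=get;list;watch;create;update;patch
// +kubebuilder:rbac:groups="",resources=serviceaccounts,verbs=get;list;watch;create;update;patch
func (r *ExampleReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// No reconciliation logic here; this file only exists to show the markers.
	return ctrl.Result{}, nil
}
```

After editing the markers, regenerating the manifests (typically via the project's `make manifests` target, which runs controller-gen) updates the ClusterRole, and the resulting diff shows exactly which verbs were removed.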
Hint
As these are cluster roles, this applies to all namespaces.
Version
Helm Chart v0.58.2