Add Constraint-Level Audit Override Settings #2055
Comments
What's your use case here? Are you experiencing performance issues with certain Constraints that trigger on many objects in your cluster? (e.g., does your cluster slow down or experience excessive CPU usage whenever audit is triggered?)
@willbeason The initial use case is that audit activity was triggering the policy to call the external data provider every 60s, and the external data provider was then making calls outside of the cluster. While we can certainly add caching to the service called by the external data provider, the situation made me think that maybe not all policies are of the same criticality or priority for auditing purposes. It just made sense to me that constraints could be used to provide that per-policy audit granularity. Without the properties provided in the constraint, the policy would use the default audit settings. With constraint-level settings, policy audit tiers with different audit intervals would become possible.
Thanks for responding! That makes sense - I'll bring that up in our meetings.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 14 days if no further activity occurs. Thank you for your contributions.
Can we get this considered still? It is similar to issue #2266.
Do I need to open a new issue?
Un-marking as stale. #2266 is a bit different in that it is asking for a flag controlling whether a constraint should be audited at all. Having different audit cycles and configurations for different constraints is a heavier lift, to the point where, as requested, I'm not sure it's possible.

One thing that could be interesting would be a way to partition constraints so that different constraints are evaluated by different pods, as a form of load balancing. Then it would be possible to run multiple audit pods with different high-level configurations and get the same top-level behavior.

I'd want a bit more signal on the number of use cases and the frequency of the need in order to know how to prioritize this.
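To make the partitioning idea concrete, here is a rough sketch that assumes a hypothetical `--constraint-selector` flag (no such flag exists in Gatekeeper today; `--operation` and `--audit-interval` are real flags):

```yaml
# Sketch: two audit Deployments, each owning a subset of constraints on
# its own cadence. --constraint-selector is HYPOTHETICAL and is only
# here to illustrate how partitioning could yield per-tier intervals.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gatekeeper-audit-critical
  namespace: gatekeeper-system
spec:
  replicas: 1
  selector:
    matchLabels: {app: gatekeeper-audit-critical}
  template:
    metadata:
      labels: {app: gatekeeper-audit-critical}
    spec:
      containers:
        - name: manager
          image: openpolicyagent/gatekeeper:v3.8.1
          args:
            - --operation=audit
            - --audit-interval=60
            - --constraint-selector=audit-tier=critical  # hypothetical
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gatekeeper-audit-low
  namespace: gatekeeper-system
spec:
  replicas: 1
  selector:
    matchLabels: {app: gatekeeper-audit-low}
  template:
    metadata:
      labels: {app: gatekeeper-audit-low}
    spec:
      containers:
        - name: manager
          image: openpolicyagent/gatekeeper:v3.8.1
          args:
            - --operation=audit
            - --audit-interval=3600
            - --constraint-selector=audit-tier=low  # hypothetical
```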
WRT external data providers, I would cache at the provider layer, as providers are best positioned to know what sort of cache-invalidation model makes sense and how it should be configured.
Yep, we are looking into that. Currently, is disabling the constraint the only short-term option?
For a specific constraint? I believe so, though I'd imagine webhook evaluation to be higher-traffic than audit. Would the ability to disable a specific constraint for audit (if not the finer-grained tuning you're asking for) be a useful short-term mitigation for you?
So, I was thinking about a CRD change like so:
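Something along these lines, where the `audit` block and its fields are hypothetical and only illustrate the shape such a change might take:

```yaml
# Illustrative only: a per-constraint audit override block. Neither
# "audit" nor its fields exist in Gatekeeper's Constraint schema today.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: require-team-label
spec:
  enforcementAction: deny
  audit:
    enabled: true        # opt the constraint in or out of audit entirely
    auditInterval: 3600  # override the global --audit-interval (seconds)
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels: ["team"]
```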
Then change the audit code to respect those fields. I guess you could also hack at it and use a …
I think this might help with reducing calls to external data providers in audit: #2386
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 14 days if no further activity occurs. Thank you for your contributions.
Describe the solution you'd like
I would like to be able to control the audit functionality at the constraint level. Currently, the settings described here are system-wide. In testing the new External Data Provider feature, I realized that it would be very useful to be able to override the Gatekeeper system-level audit settings by surfacing settings such as the audit interval and the per-constraint violation limit on individual constraints (see the sketch below).
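For context, these settings exist today only as global flags on the audit deployment. A minimal sketch of the relevant container args (the flag names are real audit flags; the values shown are just examples):

```yaml
# Global audit knobs on the gatekeeper-audit Deployment today;
# there is no per-constraint equivalent.
containers:
  - name: manager
    args:
      - --operation=audit
      - --audit-interval=60               # seconds between audit runs
      - --constraint-violations-limit=20  # max violations reported per constraint
      - --audit-from-cache=false          # audit from the OPA cache instead of listing live objects
```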
Anything else you would like to add:
N/A
Environment:
- Gatekeeper version: v3.8.1
- Kubernetes version: (use `kubectl version`)