diff --git a/manifests/modules/security/kyverno/.workshop/terraform/main.tf b/manifests/modules/security/kyverno/.workshop/terraform/main.tf
index 4df10ffc8..93a1c97e9 100644
--- a/manifests/modules/security/kyverno/.workshop/terraform/main.tf
+++ b/manifests/modules/security/kyverno/.workshop/terraform/main.tf
@@ -26,17 +26,4 @@ module "kyverno_policies" {
   ]

   depends_on = [module.kyverno]
-}
-
-module "policy_reporter" {
-  source  = "aws-ia/eks-blueprints-addon/aws"
-  version = "1.1.1"
-
-  description   = "Kyverno Policy Reporter which shows policy reports in a graphical web-based front end."
-  chart         = "policy-reporter"
-  chart_version = "1.3.0"
-  namespace     = "kyverno"
-  repository    = "https://kyverno.github.io/policy-reporter/"
-
-  depends_on = [module.kyverno]
-}
+}
\ No newline at end of file
diff --git a/website/docs/security/kyverno/baseline-pss.md b/website/docs/security/kyverno/baseline-pss.md
index 1a33b5481..dca4c1596 100644
--- a/website/docs/security/kyverno/baseline-pss.md
+++ b/website/docs/security/kyverno/baseline-pss.md
@@ -3,14 +3,13 @@ title: "Enforcing Pod Security Standards"
 sidebar_position: 72
 ---

-As discussed in the introduction for [Pod Security Standards (PSS)](../pod-security-standards/) section, there are 3 pre-defined Policy levels, **Privileged**, **Baseline**, and **Restricted**. While it is recommended to setup a Restricted PSS, it can cause unintended behavior on the application level unless properly set. To get started it is recommended to setup a Baseline Policy that will prevent known Privileged escalations such as Containers accessing HostProcess, HostPath, HostPorts or allow traffic snooping for example, being possible to setup individual policies to restrict or disallow those privileged access to containers.
+As discussed in the introduction for the [Pod Security Standards (PSS)](../pod-security-standards/) section, there are three pre-defined policy levels: **Privileged**, **Baseline**, and **Restricted**. While implementing a Restricted PSS is recommended, it can cause unintended behavior at the application level unless properly configured. To get started, it's recommended to set up a Baseline Policy that will prevent known privileged escalations such as containers accessing HostProcess, HostPath, HostPorts, or allowing traffic snooping. Individual policies can then be set up to restrict or disallow these privileged accesses to containers.

-A Kyverno Baseline Policy will help to restrict all the known privileged escalation under a single policy, and also maintain and update the Policy regularly adding the latest found vulnerabilities to the Policy.
+A Kyverno Baseline Policy helps restrict all known privileged escalations under a single policy. It also allows for regular maintenance and updates to incorporate the latest discovered vulnerabilities into the policy.

-Privileged containers can do almost everything that the host can do and are often used in CI/CD pipelines to allow building and publishing Container images.
-With the now fixed [CVE-2022-23648](https://github.com/containerd/containerd/security/advisories/GHSA-crp2-qrr5-8pq7) any bad actor, could escape the privileged container by abusing the Control Groups `release_agent` functionality to execute arbitrary commands on the container host.
+Privileged containers can perform almost all actions that the host can do and are often used in CI/CD pipelines to allow building and publishing container images. With the now fixed [CVE-2022-23648](https://github.com/containerd/containerd/security/advisories/GHSA-crp2-qrr5-8pq7), a malicious actor could escape the privileged container by exploiting the Control Groups `release_agent` functionality to execute arbitrary commands on the container host.
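+
+In Pod spec terms, a privileged container is simply one that sets `securityContext.privileged: true`. As a rough sketch (names and values illustrative, not an exact workshop manifest), the `kubectl run --privileged` command used below is equivalent to applying something like:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: privileged-pod
+spec:
+  containers:
+    - name: privileged-pod
+      image: nginx
+      securityContext:
+        # Grants the container nearly unrestricted access to the host
+        privileged: true
+```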

-In this lab we will run a privileged Pod on our EKS cluster. To do that, execute the following command:
+In this lab, we will run a privileged Pod on our EKS cluster. Execute the following command:

 ```bash
 $ kubectl run privileged-pod --image=nginx --restart=Never --privileged
@@ -19,24 +18,24 @@
 $ kubectl delete pod privileged-pod
 pod "privileged-pod" deleted
 ```

-In order to avoid such escalated privileged capabilities and avoid unauthorized use of above permissions, it's recommended to setup a Baseline Policy using Kyverno.
+To prevent such escalated privileged capabilities and avoid unauthorized use of these permissions, it's recommended to set up a Baseline Policy using Kyverno.

-The baseline profile of the Pod Security Standards is a collection of the most basic and important steps that can be taken to secure Pods. Beginning with Kyverno 1.8, an entire profile may be assigned to the cluster through a single rule. To check more on the privileges blocked by Baseline Profile, please refer [here](https://kyverno.io/policies/#:~:text=Baseline%20Pod%20Security%20Standards,cluster%20through%20a%20single%20rule)
+The baseline profile of the Pod Security Standards is a collection of the most fundamental and crucial steps that can be taken to secure Pods. Starting from Kyverno 1.8, an entire profile can be assigned to the cluster through a single rule. To learn more about the privileges blocked by the Baseline Profile, please refer to the [Kyverno documentation](https://kyverno.io/policies/#:~:text=Baseline%20Pod%20Security%20Standards,cluster%20through%20a%20single%20rule).

 ```file
 manifests/modules/security/kyverno/baseline-policy/baseline-policy.yaml
 ```

-Notice that he above policy is in `Enforce` mode, and will block any requests to create privileged Pod.
+Note that the above policy is in `Enforce` mode and will block any requests to create privileged Pods.

-Go ahead and apply the Baseline Policy.
+Go ahead and apply the Baseline Policy:

 ```bash
 $ kubectl apply -f ~/environment/eks-workshop/modules/security/kyverno/baseline-policy/baseline-policy.yaml
 clusterpolicy.kyverno.io/baseline-policy created
 ```

-Now, try to run the privileged Pod again.
+Now, try to run the privileged Pod again:

 ```bash expectError=true
 $ kubectl run privileged-pod --image=nginx --restart=Never --privileged
@@ -49,10 +48,10 @@ baseline-policy:
   Validation rule 'baseline' failed. It violates PodSecurity "baseline:latest": ({Allowed:false ForbiddenReason:privileged ForbiddenDetail:container "privileged-pod" must not set securityContext.privileged=true})
 ```

-As seen the creation failed, because it isn't in compliance with our Baseline Policy set on the Cluster.
+As you can see, the creation failed because it doesn't comply with our Baseline Policy set on the cluster.

 ### Note on Auto-Generated Policies

-PSA operates at the Pod level, but in practice Pods are usually managed by Pod controllers, like Deployments. Having no indication of Pod security errors at the Pod controller level can make issues complex to troubleshoot. The PSA enforce mode is the only PSA mode that stops Pods from being created, however PSA enforcement doesn't act at the Pod controller level. To improve this experience, it's recommended that PSA `warn` and `audit` modes are also used with `enforce`. With that PSA will indicate that the controller resources are trying to create Pods that would fail with the applied PSS level.
+Pod Security Admission (PSA) operates at the Pod level, but in practice, Pods are usually managed by Pod controllers like Deployments. Having no indication of Pod security errors at the Pod controller level can make issues complex to troubleshoot. The PSA enforce mode is the only PSA mode that prevents Pods from being created; however, PSA enforcement doesn't act at the Pod controller level. To improve this experience, it's recommended that PSA `warn` and `audit` modes are also used with `enforce`. This way, PSA will indicate that the controller resources are trying to create Pods that would fail with the applied PSS level.
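+
+For reference, PSA modes are selected per Namespace through labels. A minimal sketch combining `enforce` with `warn` and `audit` (Namespace name and PSS level assumed for illustration):
+
+```yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: example
+  labels:
+    # Block Pods that violate the baseline profile
+    pod-security.kubernetes.io/enforce: baseline
+    # Also surface warnings and audit annotations, so violations
+    # are visible at the Pod controller level too
+    pod-security.kubernetes.io/warn: baseline
+    pod-security.kubernetes.io/audit: baseline
+```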

-Using PaC solutions with Kubernetes presents another challenge of writing and maintaining policies to cover all the different resources used within clusters. With the [Kyverno Auto-Gen Rules for Pod Controllers](https://kyverno.io/docs/writing-policies/autogen/) feature, the Pod policies auto-generate associated Pod controller (Deployment, DaemonSet, etc.) policies. This Kyverno feature increases the expressive nature of policies and reduces the effort to maintain policies for associated resources. improving PSA user experience where controllers resources are not prevented from progressing, while the underlying Pods are.
+Using Policy-as-Code (PaC) solutions with Kubernetes presents another challenge of writing and maintaining policies to cover all the different resources used within clusters. With the [Kyverno Auto-Gen Rules for Pod Controllers](https://kyverno.io/docs/writing-policies/autogen/) feature, the Pod policies auto-generate associated Pod controller (Deployment, DaemonSet, etc.) policies. This Kyverno feature enhances the expressive nature of policies and reduces the effort to maintain policies for associated resources, improving the PSA user experience where controller resources are not prevented from progressing while the underlying Pods are.
diff --git a/website/docs/security/kyverno/creating-policy.md b/website/docs/security/kyverno/creating-policy.md
index 28f1f15ad..7c7f93524 100644
--- a/website/docs/security/kyverno/creating-policy.md
+++ b/website/docs/security/kyverno/creating-policy.md
@@ -3,26 +3,26 @@ title: "Creating a Simple Policy"
 sidebar_position: 71
 ---

-To get an understanding of Kyverno Policies, we will start our lab with a simple Pod Label requirement. As you may know, Labels in Kubernetes can be used to tag objects and resources in the Cluster.
+To gain an understanding of Kyverno Policies, we'll start our lab with a simple Pod Label requirement. As you may know, Labels in Kubernetes are used to tag objects and resources in the cluster.

-Below we have a sample policy requiring a Label `CostCenter`.
+Below is a sample policy requiring a Label `CostCenter`:

 ```file
 manifests/modules/security/kyverno/simple-policy/require-labels-policy.yaml
 ```

-Kyverno has 2 kinds of Policy resources, **ClusterPolicy** used for Cluster-Wide Resources and **Policy** used for Namespaced Resources. The example above shows a ClusterPolicy. Take sometime to dive deep and check the below details in the configuration.
+Kyverno has two kinds of Policy resources: **ClusterPolicy** used for Cluster-Wide Resources and **Policy** used for Namespaced Resources. The example above shows a ClusterPolicy. Take some time to examine the following details in the configuration:

-- Under the spec section of the Policy, there is a an attribute `validationFailureAction` it tells Kyverno if the resource being validated should be allowed but reported `Audit` or blocked `Enforce`. Defaults to Audit, the example is set to Enforce.
-- The `rules` is one or more rules to be validated.
-- The `match` statement sets the scope of what will be checked. In this case, it is any `Pod` resource.
-- The `validate` statement tries to positively check what is defined. If the statement, when compared with the requested resource, is true, it is allowed. If false, it is blocked.
+- Under the `spec` section of the Policy, there's an attribute `validationFailureAction`. It tells Kyverno if the resource being validated should be allowed but reported (`Audit`) or blocked (`Enforce`). The default is `Audit`, but our example is set to `Enforce`.
+- The `rules` section contains one or more rules to be validated.
+- The `match` statement sets the scope of what will be checked. In this case, it's any `Pod` resource.
+- The `validate` statement attempts to positively check what is defined. If the statement, when compared with the requested resource, is true, it's allowed. If false, it's blocked.
 - The `message` is what gets displayed to a user if this rule fails validation.
-- The `pattern` object defines what pattern will be checked in the resource. In this case, it is looking for `metadata.labels` with `CostCenter`.
+- The `pattern` object defines what pattern will be checked in the resource. In this case, it's looking for `metadata.labels` with `CostCenter`.

-The Above Example Policy, will block any Pod Creation which doesn't have the label `CostCenter`.
+This example Policy will block any Pod creation that doesn't have the label `CostCenter`.

-Create the policy using the following command.
+Create the policy using the following command:

 ```bash
 $ kubectl apply -f ~/environment/eks-workshop/modules/security/kyverno/simple-policy/require-labels-policy.yaml
@@ -30,7 +30,7 @@ $ kubectl apply -f ~/environment/eks-workshop/modules/security/kyverno/simple-po

 clusterpolicy.kyverno.io/require-labels created
 ```

-Next, take a look on the Pods running in the `ui` Namespace, notice the applied labels.
+Next, take a look at the Pods running in the `ui` Namespace and notice the applied labels:

 ```bash
 $ kubectl -n ui get pods --show-labels
@@ -38,9 +38,9 @@ NAME READY STATUS RESTARTS AGE LABELS
 ui-67d8cf77cf-d4j47 1/1 Running 0 9m app.kubernetes.io/component=service,app.kubernetes.io/created-by=eks-workshop,app.kubernetes.io/instance=ui,app.kubernetes.io/name=ui,pod-template-hash=67d8cf77cf
 ```

-Check the running Pod doesn't have the required Label and Kyverno didn't terminate it, this happened because as seen earlier, Kyverno operates as an `AdmissionController` and will not interfere in resources that already exist in the cluster.
+Notice that the running Pod doesn't have the required Label, and Kyverno didn't terminate it. This is because Kyverno operates as an `AdmissionController` and won't interfere with resources that already exist in the cluster.

-However if you delete the running Pod, it won't be able to be recreated since it doesn't have the required Label. Go ahead and delete de Pod running in the `ui` Namespace.
+However, if you delete the running Pod, it won't be able to be recreated since it doesn't have the required Label. Go ahead and delete the Pod running in the `ui` Namespace:

 ```bash
 $ kubectl -n ui delete pod --all
@@ -49,7 +49,7 @@ $ kubectl -n ui get pods
 No resources found in ui namespace.
 ```

-As mentioned, the Pod was not recreated, try to force a rollout of the `ui` deployment.
+As mentioned, the Pod was not recreated. Try to force a rollout of the `ui` deployment:

 ```bash expectError=true
 $ kubectl -n ui rollout restart deployment/ui
@@ -64,7 +64,7 @@ require-labels:

 The rollout failed with the admission webhook denying the request due to the `require-labels` Kyverno Policy.

-You can also check this `error` message describing the `ui` deployment, or visualizing the `events` in the `ui` Namespace.
+You can also check this `error` message by describing the `ui` deployment or viewing the `events` in the `ui` Namespace:

 ```bash
 $ kubectl -n ui describe deployment ui
@@ -80,7 +80,7 @@ $ kubectl -n ui get events | grep PolicyViolation
 9m Warning PolicyViolation deployment/ui policy require-labels/autogen-check-team fail: validation error: Label 'CostCenter' is required to deploy the Pod. rule autogen-check-team failed at path /spec/template/metadata/labels/CostCenter/
 ```
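+
+The `autogen-check-team` rule referenced in these events was generated automatically by Kyverno from our Pod-level rule. Conceptually, the generated rule looks something like this sketch (not Kyverno's literal output; the list of controller kinds is an assumption). Note how the pattern shifts under `spec.template` to match the Deployment's Pod template:
+
+```yaml
+- name: autogen-check-team
+  match:
+    any:
+      - resources:
+          kinds:
+            - Deployment
+            - DaemonSet
+            - StatefulSet
+  validate:
+    message: "Label 'CostCenter' is required to deploy the Pod"
+    pattern:
+      spec:
+        template:
+          metadata:
+            labels:
+              # "?*" requires the label to be present with any non-empty value
+              CostCenter: "?*"
+```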

-Now add the required label `CostCenter` to the `ui` Deployment, using the Kustomization patch below.
+Now add the required label `CostCenter` to the `ui` Deployment, using the Kustomization patch below:

 ```kustomization
 modules/security/kyverno/simple-policy/ui-labeled/deployment.yaml
@@ -100,21 +100,21 @@ NAME READY STATUS RESTARTS AGE LABELS
 ui-5498685db8-k57nk 1/1 Running 0 60s CostCenter=IT,app.kubernetes.io/component=service,app.kubernetes.io/created-by=eks-workshop,app.kubernetes.io/instance=ui,app.kubernetes.io/name=ui,pod-template-hash=5498685db8
 ```

-As you can see the admission webhook successfully validated the Policy and the Pod was created with the correct Label `CostCenter=IT`!
+As you can see, the admission webhook successfully validated the Policy and the Pod was created with the correct Label `CostCenter=IT`!

 ### Mutating Rules

-In the above examples, you checked how Validation Policies work in their default behavior defined in `validationFailureAction`. However Kyverno can also be used to manage Mutating rules within the Policy, in order to modify any API Requests to satisfy or enforce the specified requirements on the Kubernetes resources. The resource mutation occurs before validation, so the validation rules will not contradict the changes performed by the mutation section.
+In the above examples, you checked how Validation Policies work in their default behavior defined in `validationFailureAction`. However, Kyverno can also be used to manage Mutating rules within the Policy, to modify any API Requests to satisfy or enforce the specified requirements on the Kubernetes resources. The resource mutation occurs before validation, so the validation rules will not contradict the changes performed by the mutation section.

-Below is a sample Policy with a mutation rule defined, which will be used to automatically add our label `CostCenter=IT` as default to any `Pod`.
+Below is a sample Policy with a mutation rule defined, which will be used to automatically add our label `CostCenter=IT` as default to any `Pod`:

 ```file
 manifests/modules/security/kyverno/simple-policy/add-labels-mutation-policy.yaml
 ```

-Notice the `mutate` section, under the ClusterPolicy `spec`.
+Notice the `mutate` section under the ClusterPolicy `spec`.

-Go ahead, and create the above Policy using the following command.
+Go ahead and create the above Policy using the following command:

 ```bash
 $ kubectl apply -f ~/environment/eks-workshop/modules/security/kyverno/simple-policy/add-labels-mutation-policy.yaml
@@ -122,7 +122,7 @@
 clusterpolicy.kyverno.io/add-labels created
 ```

-In order to validate the Mutation Webhook, lets this time rollout the `assets` Deployment without explicitly adding a label:
+To validate the Mutation Webhook, let's roll out the `assets` Deployment without explicitly adding a label:

 ```bash
 $ kubectl -n assets rollout restart deployment/assets
@@ -131,7 +131,7 @@ $ kubectl -n assets rollout status deployment/assets
 deployment "assets" successfully rolled out
 ```

-Validate the automatically added label `CostCenter=IT` to the Pod to meet the policy requirements, resulting a successful Pod creation even with the Deployment not having the label specified:
+Validate that the label `CostCenter=IT` was automatically added to the Pod to meet the policy requirements, resulting in a successful Pod creation even though the Deployment didn't have the label specified:

 ```bash
 $ kubectl -n assets get pods --show-labels
@@ -141,4 +141,4 @@ assets-bb88b4789-kmk62 1/1 Running 0 25s CostCenter=IT,app.ku

 It's also possible to mutate existing resources in your Amazon EKS Clusters with Kyverno Policies using `patchStrategicMerge` and `patchesJson6902` parameters in your Kyverno Policy.
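+
+As a sketch of the `patchesJson6902` style (policy and rule names here are hypothetical, and the label values are taken from our earlier example), the same `CostCenter=IT` label could be added with a JSON patch instead of a strategic merge patch:
+
+```yaml
+apiVersion: kyverno.io/v1
+kind: ClusterPolicy
+metadata:
+  # Hypothetical policy name, for illustration only
+  name: add-labels-json-patch
+spec:
+  rules:
+    - name: add-costcenter
+      match:
+        any:
+          - resources:
+              kinds:
+                - Pod
+      mutate:
+        # JSON patch equivalent of the strategic merge patch used above
+        patchesJson6902: |-
+          - op: add
+            path: "/metadata/labels/CostCenter"
+            value: "IT"
+```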

-This was just a simple example of labels for our Pods with Validating and Mutating rules. This can be applied to various scenarios such as restricting Images from unknown registries, adding Data to Config Maps, Tolerations and much more. In the next upcoming labs, you will go through some more advanced use-cases.
+This was just a simple example of labels for our Pods with Validating and Mutating rules. This can be applied to various scenarios such as restricting images from unknown registries, adding data to ConfigMaps, setting tolerations, and much more. In the upcoming labs, you will explore some more advanced use-cases.
diff --git a/website/docs/security/kyverno/index.md b/website/docs/security/kyverno/index.md
index f8627cf9a..0bac450ec 100644
--- a/website/docs/security/kyverno/index.md
+++ b/website/docs/security/kyverno/index.md
@@ -25,33 +25,33 @@ Install the following Kubernetes addons in the EKS cluster:

 You can view the Terraform that applies these changes [here](https://github.com/aws-samples/eks-workshop-v2/tree/main/manifests/modules/security/kyverno/.workshop/terraform).
 :::

-As containers are largely adopted in production environments, DevOps, Security, and Platform teams need a solution to effectively collaborate and manage Governance and [Policy-as-Code (PaC)](https://aws.github.io/aws-eks-best-practices/security/docs/pods/#policy-as-code-pac). This ensures that all different teams are able to have the same source of truth in what regards to security, as well as use the same baseline "language" when describing their individual needs.
+As containers are increasingly adopted in production environments, DevOps, Security, and Platform teams require an effective solution to collaborate and manage Governance and [Policy-as-Code (PaC)](https://aws.github.io/aws-eks-best-practices/security/docs/pods/#policy-as-code-pac). This ensures that all teams share the same source of truth regarding security and use a consistent baseline "language" when describing their individual needs.

-Kubernetes by its nature is meant to be a tool to build on and orchestrate, this means that out of the box it lacks pre-defined guardrails. In order to give builders a way to control security Kubernetes provides (starting on version 1.23) [Pod Security Admission (PSA)](https://kubernetes.io/docs/concepts/security/pod-security-admission/), a built-in admission controller that implements the security controls outlined in the [Pod Security Standards (PSS)](https://kubernetes.io/docs/concepts/security/pod-security-standards/), enabled by default in Amazon Elastic Kubernetes Service (EKS).
+Kubernetes, by its nature, is designed as a tool to build upon and orchestrate, which means it lacks pre-defined guardrails out of the box. To provide builders with a way to control security, Kubernetes offers [Pod Security Admission (PSA)](https://kubernetes.io/docs/concepts/security/pod-security-admission/) starting from version 1.23. PSA is a built-in admission controller that implements the security controls outlined in the [Pod Security Standards (PSS)](https://kubernetes.io/docs/concepts/security/pod-security-standards/), and is enabled by default in Amazon Elastic Kubernetes Service (EKS).

-### What is Kyverno
+### What is Kyverno?

-[Kyverno](https://kyverno.io/) (Greek for “govern”) is a policy engine designed specifically for Kubernetes. It is a Cloud Native Computing Foundation (CNCF) project allowing teams to collaborate and enforce Policy-as-Code.
+[Kyverno](https://kyverno.io/) (Greek for "govern") is a policy engine specifically designed for Kubernetes. It is a Cloud Native Computing Foundation (CNCF) project that enables teams to collaborate and enforce Policy-as-Code.

-The Kyverno policy engine integrates with the Kubernetes API server as [Dynamic Admission Controller](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/), allowing policies to **mutate** and **validate** inbound Kubernetes API requests, thus ensuring compliance with the defined rules prior to the data being persisted and ultimately applied into the cluster.
+The Kyverno policy engine integrates with the Kubernetes API server as a [Dynamic Admission Controller](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/), allowing policies to **mutate** and **validate** inbound Kubernetes API requests. This ensures compliance with defined rules prior to the data being persisted and applied to the cluster.

-Kyverno allows for declarative Kubernetes resources written in YAML, with no new policy language to learn, and results are available as Kubernetes resources and as events.
+Kyverno uses declarative Kubernetes resources written in YAML, eliminating the need to learn a new policy language. Results are available as Kubernetes resources and events.

-Kyverno policies can be used to **validate**, **mutate**, and **generate** resource configurations, and also **validate** image signatures and attestations, providing all the necessary building blocks for a complete software supply chain security standards enforcement.
+Kyverno policies can be used to **validate**, **mutate**, and **generate** resource configurations, as well as **validate** image signatures and attestations, providing all the necessary building blocks for comprehensive software supply chain security standards enforcement.

 ### How Kyverno Works

-As mentioned above, Kyverno runs as a Dynamic Admission Controller in an Kubernetes Cluster. Kyverno receives validating and mutating admission webhook HTTP callbacks from the Kubernetes API server and applies matching policies to return results that enforce admission policies or reject requests. It can also be used to Audit the requests and to monitor the Security posture of the environment before enforcing.
+Kyverno operates as a Dynamic Admission Controller in a Kubernetes Cluster. It receives validating and mutating admission webhook HTTP callbacks from the Kubernetes API server and applies matching policies to return results that enforce admission policies or reject requests. It can also be used to audit requests and monitor the security posture of the environment before enforcement.

-The diagram below shows the high-level logical architecture of Kyverno.
+The diagram below illustrates the high-level logical architecture of Kyverno:

 ![KyvernoArchitecture](assets/ky-arch.webp)

-The two major components are the Webhook Server & the Webhook Controller. The **Webhook Server** handles incoming AdmissionReview requests from the Kubernetes API server and sends them to the Engine for processing. It is dynamically configured by the **Webhook Controller** which watches the installed policies and modifies the webhooks to request only the resources matched by those policies.
+The two major components are the Webhook Server and the Webhook Controller. The **Webhook Server** handles incoming AdmissionReview requests from the Kubernetes API server and sends them to the Engine for processing. It is dynamically configured by the **Webhook Controller**, which monitors installed policies and modifies the webhooks to request only the resources matched by those policies.
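+
+To make this concrete, the Webhook Controller manages standard Kubernetes admission webhook configuration objects. A heavily trimmed sketch of the kind of resource it maintains (the names, path, and rules below are illustrative assumptions, not Kyverno's exact output):
+
+```yaml
+apiVersion: admissionregistration.k8s.io/v1
+kind: ValidatingWebhookConfiguration
+metadata:
+  name: kyverno-resource-validating-webhook-cfg
+webhooks:
+  - name: validate.kyverno.svc
+    clientConfig:
+      service:
+        # Admission requests are sent to the Webhook Server in the kyverno Namespace
+        name: kyverno-svc
+        namespace: kyverno
+        path: /validate
+    rules:
+      # Narrowed dynamically to only the resources matched by installed policies
+      - apiGroups: [""]
+        apiVersions: ["v1"]
+        operations: ["CREATE", "UPDATE"]
+        resources: ["pods"]
+    failurePolicy: Fail
+    sideEffects: None
+    admissionReviewVersions: ["v1"]
+```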

 ---

-Before proceeding with the labs, validate the Kyverno resources provisioned by the `prepare-environment` script.
+Before proceeding with the labs, validate the Kyverno resources provisioned by the `prepare-environment` script:

 ```bash
 $ kubectl -n kyverno get all
diff --git a/website/docs/security/kyverno/reports.md b/website/docs/security/kyverno/reports.md
index 101d4d706..b3cf3d976 100644
--- a/website/docs/security/kyverno/reports.md
+++ b/website/docs/security/kyverno/reports.md
@@ -3,13 +3,18 @@ title: "Reports & Auditing"
 sidebar_position: 74
 ---

-Kyverno also includes a [Policy Reporting](https://kyverno.io/docs/policy-reports/) tool, using the open format defined by the Kubernetes Policy Working Group and deployed as custom resources in the cluster. Kyverno emits these reports when admission actions like _CREATE_, _UPDATE_, and _DELETE_ are performed in the cluster, they are also generated as a result of background scans that validate policies on already existing resources.
+Kyverno includes a [Policy Reporting](https://kyverno.io/docs/policy-reports/) tool that uses an open format defined by the Kubernetes Policy Working Group. These reports are deployed as custom resources in the cluster. Kyverno generates these reports when admission actions like _CREATE_, _UPDATE_, and _DELETE_ are performed in the cluster. Reports are also generated as a result of background scans that validate policies on existing resources.

-So far in the workshop we have created a few Policies for specific rules. When a resource is matched by one or more rules according to the policy definition and violate any of them, an entry will be created in the report for each violation, resulting in multiple entries if the same resources matches and violate multiple rules. When resources are deleted their entry will be removed from the reports, meaning that Kyverno Reports will always represent the current state of the cluster and do not record historical information.
+Throughout this workshop, we have created several policies with specific rules. When a resource matches one or more rules according to the policy definition and violates any of them, an entry is created in the report for each violation. This can result in multiple entries if the same resource matches and violates multiple rules. When resources are deleted, their entries are removed from the reports. This means that Kyverno Reports always represent the current state of the cluster and do not record historical information.
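+
+Each violation appears as a result entry in these custom resources. A trimmed sketch of the general shape of a report (fields abbreviated and values assumed for illustration):
+
+```yaml
+apiVersion: wgpolicyk8s.io/v1alpha2
+kind: PolicyReport
+metadata:
+  name: cpol-require-labels
+  namespace: default
+results:
+  - policy: require-labels
+    rule: check-team
+    # One of pass, fail, warn, error, skip
+    result: fail
+    message: "validation error: Label 'CostCenter' is required to deploy the Pod."
+    resources:
+      - apiVersion: v1
+        kind: Pod
+        name: nginx
+        namespace: default
+summary:
+  pass: 0
+  fail: 1
+```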

-As seen earlier, Kyverno has two types of `validationFailureAction`, `Audit` mode that will allow resouces to be created, and report the action in the Policy Reports, or `Enforce` which will deny the resource creation, but also not add an entry in the Policy Reports. For example, if a Policy in `Audit` mode contain a single rule which requires all resources to set the label `CostCenter` and a Pod is created without the that, Kyverno will allow the Pod's creation but record it as a `FAIL` result in a Policy Report due the rule violation. If this same Policy is configured with `Enforce` mode, Kyverno will immediately block the resource creation and this will not generate an entry in the Policy Reports, however if the Pod is created in compliance with the rule, it will be reported as `PASS` in the report. It is possible to check blocked actions in the Kubernetes events for the Namespace where the action was requested.
+As discussed earlier, Kyverno has two types of `validationFailureAction`:

-Now, we will check on our cluster's status on compliance with the policies we have created so far in this workshop with an overview of the Policy Reports generated.
+1. `Audit` mode: Allows resources to be created and reports the action in the Policy Reports.
+2. `Enforce` mode: Denies resource creation but does not add an entry in the Policy Reports.
+
+For example, if a Policy in `Audit` mode contains a single rule requiring all resources to set the label `CostCenter`, and a Pod is created without that label, Kyverno will allow the Pod's creation but record it as a `FAIL` result in a Policy Report due to the rule violation. If this same Policy is configured with `Enforce` mode, Kyverno will immediately block the resource creation, and this will not generate an entry in the Policy Reports. However, if the Pod is created in compliance with the rule, it will be reported as `PASS` in the report. You can check blocked actions in the Kubernetes events for the Namespace where the action was requested.
+
+Let's examine our cluster's compliance status with the policies we've created so far in this workshop by reviewing the Policy Reports generated.

 ```bash hook=reports
 $ kubectl get policyreports -A
@@ -47,11 +52,11 @@ ui cpol-require-labels 0 3 0 0 0
 ui cpol-restrict-image-registries 3 0 0 0 0 25m
 ```

-> The output may vary.
+> Note: The output may vary.

-Because we worked with just ClusterPolicies, you can see in the above output a number of Reports that were generated across all Namespaces, such as `cpol-verify-image`, `cpol-baseline-policy`, and `cpol-restrict-image-registries` and not just in the `default` Namespace, where we created the resources to be validated. You can also see the status of objects such as `PASS`, `FAIL`, `WARN`, `ERROR`, and `SKIP`.
+As we worked with ClusterPolicies, you can see in the above output that Reports were generated across all Namespaces, not just in the `default` Namespace where we created the resources to be validated. The reports show the status of objects using `PASS`, `FAIL`, `WARN`, `ERROR`, and `SKIP`.

-As mentioned earlier, the blocked actions will reside in the Namespace events, take a look on those using the command below.
+As mentioned earlier, blocked actions are recorded in the Namespace events. Let's examine those using the following command:

 ```bash
 $ kubectl get events | grep block
@@ -59,9 +64,9 @@
 3m Warning PolicyViolation clusterpolicy/restrict-image-registries Pod default/nginx-public: [validate-registries] fail (blocked); validation error: Unknown Image registry. rule validate-registries failed at path /spec/containers/0/image/
 ```

-> The output may vary.
+> Note: The output may vary.

-Now, take a closer look in the Policy Reports for the `default` Namespace used in the labs.
+Now, let's take a closer look at the Policy Reports for the `default` Namespace used in the labs:

 ```bash
 $ kubectl get policyreports
@@ -71,11 +76,11 @@ default cpol-require-labels 2 0 0 0 0
 default cpol-restrict-image-registries 1 1 0 0 0 13m
 ```

-Check that for the `restrict-image-registries` ClusterPolicy we have just one `FAIL` and one `PASS` reports. This happened because all the ClusterPolicies were created with `Enforce` mode, and as mentioned the blocked resources are not reported, also the previously running resources that could violate policy rules, were already removed.
+Notice that for the `restrict-image-registries` ClusterPolicy, we have one `FAIL` and one `PASS` report. This is because all the ClusterPolicies were created with `Enforce` mode, and as mentioned, blocked resources are not reported. Additionally, previously running resources that could violate policy rules were already removed.

-The `nginx` Pod, that we left running with a publicly available image, is the only remaining resource that violates the `restrict-image-registries` policy, and it's shown in the report.
+The `nginx` Pod, which we left running with a publicly available image, is the only remaining resource that violates the `restrict-image-registries` policy, and it's shown in the report.

-Check that in more detail the violations for this Policy, by describing a specific report. As shown in the example below, use the `kubectl describe` command for the `cpol-restrict-image-registries` Report to see the validation results for the `restrict-image-registries` ClusterPolicy.
+To examine the violations for this Policy in more detail, describe the specific report. Use the `kubectl describe` command for the `cpol-restrict-image-registries` Report to see the validation results for the `restrict-image-registries` ClusterPolicy:

 ```bash
 $ kubectl describe policyreport cpol-restrict-image-registries
@@ -131,8 +136,8 @@ Summary:
 Events:
 ```

-The above output display the `nginx` Pod policy validation receiving a `fail` Result and validation error Message. In the other hand the `nginx-ecr` policy validation received a `pass` Result. Monitoring reports in this way could be an overhead for administrators. Kyverno also supports a GUI based tool for [Policy reporter](https://kyverno.github.io/policy-reporter/core/targets/#policy-reporter-ui). This is outside of this workshop's scope.
+The above output displays the `nginx` Pod policy validation receiving a `fail` Result and validation error Message. On the other hand, the `nginx-ecr` policy validation received a `pass` Result. Monitoring reports in this way could be an overhead for administrators. Kyverno also supports a GUI-based tool for [Policy reporter](https://kyverno.github.io/policy-reporter/core/targets/#policy-reporter-ui), which is outside the scope of this workshop.

-This Lab, you learned how to augment the Kubernetes PSA/PSS configurations with Kyverno. Pod Security Standards (PSS) and the in-tree Kubernetes implementation of these standards, Pod Security Admission (PSA), provide good building blocks for managing pod security. The majority of users switching from Kubernetes Pod Security Policies (PSP) should be successful using the PSA/PSS features.
+In this lab, you learned how to augment the Kubernetes PSA/PSS configurations with Kyverno. Pod Security Standards (PSS) and the in-tree Kubernetes implementation of these standards, Pod Security Admission (PSA), provide good building blocks for managing pod security. The majority of users switching from Kubernetes Pod Security Policies (PSP) should be successful using the PSA/PSS features.

-Kyverno augments the user experience created by PSA/PSS by leveraging the in-tree Kubernetes pod security implementation and providing several helpful enhancements. You can use Kyverno to govern the proper use of pod security labels. In addition, you can use the new Kyverno `validate.podSecurity` rule to easily manage pod security standards with additional flexibility and an enhanced user experience. And, with the Kyverno CLI, you can automate policy evaluation, upstream of your clusters.
+Kyverno enhances the user experience created by PSA/PSS by leveraging the in-tree Kubernetes pod security implementation and providing several helpful enhancements. You can use Kyverno to govern the proper use of pod security labels. Additionally, you can use the new Kyverno `validate.podSecurity` rule to easily manage pod security standards with additional flexibility and an enhanced user experience. And, with the Kyverno CLI, you can automate policy evaluation upstream of your clusters.
diff --git a/website/docs/security/kyverno/restricting-images.md b/website/docs/security/kyverno/restricting-images.md
index af4172ce9..e6e000194 100644
--- a/website/docs/security/kyverno/restricting-images.md
+++ b/website/docs/security/kyverno/restricting-images.md
@@ -3,11 +3,11 @@ title: "Restricting Image Registries"
 sidebar_position: 73
 ---

-Using container images form unknown sources on your EKS Clusters, that may not be a scanned for Common Vulnerabilities and Exposure (CVE), represent a risk factor for the overall security of your environment. When choosing container images sources, you need to ensure that they are originated from Trusted Registries, in order to reduce the threat exposure and exploits of vulnerabilities. Some larger organizations also have Security Guidelines that limit containers to use images from their own hosted private image registry.
+Using container images from unknown sources in your EKS clusters can pose significant security risks, especially if these images haven't been scanned for Common Vulnerabilities and Exposures (CVEs). To mitigate these risks and reduce the threat of vulnerability exploitation, it's crucial to ensure that container images originate from trusted registries. Many organizations also have security guidelines that mandate the use of images exclusively from their own hosted private image registries.

-In this section, you will see how Kyverno can help you run secure container workloads by restricting the Image Registries that can be used in your cluster.
+In this section, we'll explore how Kyverno can help you run secure container workloads by restricting the image registries that can be used in your cluster.

-As seen in previous labs, you can run Pods with images from any available registry, so run a sample Pod using the default registry that points to `docker.io`.
+As demonstrated in previous labs, you can run Pods with images from any available registry. Let's start by running a sample Pod using the default registry, which points to `docker.io`:

 ```bash
 $ kubectl run nginx --image=nginx
@@ -20,19 +20,19 @@ $ kubectl describe pod nginx | grep Image
 Image ID: docker.io/library/nginx@sha256:4c0fdaa8b6341bfdeca5f18f7837462c80cff90527ee35ef185571e1c327beac
 ```

-In this case, it was just an `nginx` base image being pulled from the Public Registry. A bad actor could pull any vulnerable image and run on the EKS Cluster, exploiting resources allocated in the cluster.
+In this case, we've pulled a basic `nginx` image from the public registry. However, a malicious actor could potentially pull a vulnerable image and run it on the EKS cluster, exploiting the resources allocated in the cluster.

-Next, as a best practice you'll define a policy that will restrict the use of any unauthorized Image Registry, and rely only on specified Trusted Registries.
+To implement best practices, we'll define a policy that restricts the use of unauthorized image registries and relies only on specified trusted registries.

-In this lab, you will be using [Amazon ECR Public Gallery](https://public.ecr.aws/) as the Trusted Registry, blocking any containers that use Images hosted in other registries to run. Below is a sample Kyverno Policy to restrict the image pull for this use-case.
+For this lab, we'll use the [Amazon ECR Public Gallery](https://public.ecr.aws/) as our trusted registry, blocking any containers that use images hosted in other registries. Here's a sample Kyverno policy to restrict image pulling for this use case:

 ```file
 manifests/modules/security/kyverno/images/restrict-registries.yaml
 ```

-> The above doesn't restrict usage of InitContainers or Ephemeral Containers to the referred repository.
+> Note: This policy doesn't restrict the usage of InitContainers or Ephemeral Containers to the referred repository.
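+
+If you need to close that gap, the same check can be extended to those container types. A sketch of what the `validate` block of such a rule could look like, using Kyverno's `=()` conditional anchor so the check only applies when the field is present (registry and message taken from the policy above):
+
+```yaml
+validate:
+  message: "Unknown Image registry."
+  pattern:
+    spec:
+      # =(...) makes the check conditional on the field existing
+      =(initContainers):
+        - image: "public.ecr.aws/*"
+      =(ephemeralContainers):
+        - image: "public.ecr.aws/*"
+      containers:
+        - image: "public.ecr.aws/*"
+```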

-Apply the above policy with the command below.
+Let's apply this policy using the following command:

 ```bash
 $ kubectl apply -f ~/environment/eks-workshop/modules/security/kyverno/images/restrict-registries.yaml
@@ -40,7 +40,7 @@
 clusterpolicy.kyverno.io/restrict-image-registries created
 ```

-Try to run another sample Pod using the default image from the public Registry.
+Now, let's attempt to run another sample Pod using the default image from the public registry:

 ```bash expectError=true
 $ kubectl run nginx-public --image=nginx
@@ -54,17 +54,17 @@ restrict-image-registries:
   failed at path /spec/containers/0/image/'
 ```

-The Pod failed to run and presented an output stating Pod Creation was blocked due to our previously created Kyverno Policy.
+As we can see, the Pod failed to run, and we received an output stating that Pod creation was blocked due to our previously created Kyverno policy.

-Now try to run a sample Pod using the `nginx` Image hosted in the Trusted Registry, previously defined in the Policy (public.ecr.aws).
+Let's now try to run a sample Pod using the `nginx` image hosted in our trusted registry (public.ecr.aws), which we defined in the policy:

 ```bash
 $ kubectl run nginx-ecr --image=public.ecr.aws/nginx/nginx
-pod/nginx-public created
+pod/nginx-ecr created
 ```

-The Pod was successfully created!
+Success! The Pod was created successfully.

-You have seen how you can block Images from public registries to run on your EKS Clusters, and restrict only allowed Image Repositories. One can further go ahead, and allow only private repositories as a Security Best Practice.
+We've now seen how we can block images from public registries from running on our EKS clusters and restrict usage to only allowed image repositories. As a further security best practice, you might consider allowing only private repositories.

-> Don't remove the running Pods created in this task as we will use them for the next lab.
+> Note: Don't remove the running Pods created in this task, as we'll use them in the next lab.