OCP4: Add additional control response for SA-10(1) integrity check #7973
Conversation
This datastream diff is auto-generated by the check.
OCIL for rule 'xccdf_org.ssgproject.content_rule_gcp_disk_encryption_enabled' differs:
--- old datastream
+++ new datastream
@@ -1,5 +1,5 @@
Run the following command to retrieve if the GCP disk encryption is enabled:
-$ oc get machineset --all-namespaces -o json | jq '[.items[] | select(.spec.template.spec.providerSpec.value.disks[0].encryptionKey.kmsKey.name != null) | .metadata.name]'
+$ oc get machineset --all-namespaces -o json | jq [.items[] | select(.spec.template.spec.providerSpec.value.disks[0].encryptionKey.kmsKey.name != null) | .metadata.name]
Make sure that the result is an array MachineSet names. These MachineSets
have references to the GCP's KMS key names, which can be inspected by going through them
with $ oc get machineset --all-namespaces -o yaml
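The quoting change in the diff above is worth flagging: without the single quotes, the shell itself interprets the `[`, `]`, and `|` characters in the jq program instead of passing them to jq. A minimal sketch of the quoted form, using a hypothetical JSON payload in place of real `oc get machineset` output (the field paths follow the rule's own filter):

```shell
# Hypothetical JSON standing in for `oc get machineset --all-namespaces -o json`
json='{"items":[
  {"metadata":{"name":"ms-encrypted"},
   "spec":{"template":{"spec":{"providerSpec":{"value":{"disks":[
     {"encryptionKey":{"kmsKey":{"name":"projects/demo/cryptoKeys/k1"}}}]}}}}}},
  {"metadata":{"name":"ms-plain"},
   "spec":{"template":{"spec":{"providerSpec":{"value":{"disks":[{}]}}}}}}]}'

# Single quotes keep the jq program intact; unquoted, the brackets would be
# glob-expanded and each `|` would start a new shell pipeline stage.
result=$(printf '%s' "$json" | jq -c \
  '[.items[] | select(.spec.template.spec.providerSpec.value.disks[0].encryptionKey.kmsKey.name != null) | .metadata.name]')
echo "$result"   # -> ["ms-encrypted"]
```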
OCIL for rule 'xccdf_org.ssgproject.content_rule_machine_volume_encrypted' differs:
--- old datastream
+++ new datastream
@@ -3,7 +3,7 @@
Make sure that the result is an array of 'true' values.
Run the following command to retrieve if the GCP disk encryption is enabled:
-$ oc get machineset --all-namespaces -o json | jq '[.items[] | select(.spec.template.spec.providerSpec.value.disks[0].encryptionKey.kmsKey.name != null) | .metadata.name]'
+$ oc get machineset --all-namespaces -o json | jq [.items[] | select(.spec.template.spec.providerSpec.value.disks[0].encryptionKey.kmsKey.name != null) | .metadata.name]
Make sure that the result is an array MachineSet names. These MachineSets
have references to the GCP's KMS key names, which can be inspected by going through them
with $ oc get machineset --all-namespaces -o yaml
OCIL for rule 'xccdf_org.ssgproject.content_rule_file_integrity_notification_enabled' differs:
--- old datastream
+++ new datastream
@@ -1,5 +1,5 @@
Run the following command to see if alert monitor is enabled by File Integrity Operator:
-$ oc get prometheusrules --all-namespaces -o json | jq '[.items[] | select(.metadata.name =="file-integrity") | .metadata.name]'
+$ oc get prometheusrules --all-namespaces -o json | jq [.items[] | select(.metadata.name =="file-integrity") | .metadata.name]
Make sure that there is one output named: file-integrity
Is it the case that A prometheus rule object is not generated by File Integrity Operator?
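The same quoting caveat applies to the PrometheusRule check. A sketch with a hypothetical payload in place of real `oc get prometheusrules` output:

```shell
# Hypothetical payload standing in for
# `oc get prometheusrules --all-namespaces -o json`
rules='{"items":[{"metadata":{"name":"file-integrity"}},
                 {"metadata":{"name":"etcd-rules"}}]}'

# Quote the jq program so `==`, the brackets, and the pipes reach jq unmangled.
match=$(printf '%s' "$rules" | jq -c \
  '[.items[] | select(.metadata.name == "file-integrity") | .metadata.name]')
echo "$match"   # -> ["file-integrity"]
```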
xccdf_org.ssgproject.content_rule_kubelet_disable_hostname_override is missing in new datastream.
OCIL for rule 'xccdf_org.ssgproject.content_rule_kubelet_enable_cert_rotation' differs:
--- old datastream
+++ new datastream
@@ -1,5 +1,5 @@
Run the following command on the kubelet node(s):
$ sudo grep rotateCertificates /etc/kubernetes/kubelet.conf
-The output should return nothing or true.
+The output should return true.
Is it the case that the kubelet cannot rotate client certificate?
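Under the stricter wording, only an explicit `true` passes. A sketch against a sample config file (the temporary path is a stand-in for the real `/etc/kubernetes/kubelet.conf`):

```shell
# Sample kubelet config; the real check targets /etc/kubernetes/kubelet.conf
cat > /tmp/kubelet-demo.conf <<'EOF'
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
rotateCertificates: true
EOF

# The check now expects this to print `rotateCertificates: true`;
# an absent key no longer passes.
grep rotateCertificates /tmp/kubelet-demo.conf
```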
xccdf_org.ssgproject.content_rule_kubelet_read_only_port_secured is missing in new datastream.
OCIL for rule 'xccdf_org.ssgproject.content_rule_route_ip_whitelist' differs:
--- old datastream
+++ new datastream
@@ -1,5 +1,5 @@
Run the following command to retrieve a list routes that does not have IP whitelist set::
-$ oc get routes --all-namespaces -o json | jq '[.items[] | select(.metadata.namespace | startswith("kube-") or startswith("openshift-") | not) | select(.metadata.annotations["haproxy.router.openshift.io/ip_whitelist"] | not) | .metadata.name]'
+$ oc get routes --all-namespaces -o json | jq [.items[] | select(.metadata.namespace | startswith("kube-") or startswith("openshift-") | not) | select(.metadata.annotations["haproxy.router.openshift.io/ip_whitelist"] | not) | .metadata.name]
Make sure that there is output nothing in the result.
Is it the case that IP whitelist is not enabled for all routes outside the openshift namespaces?
OCIL for rule 'xccdf_org.ssgproject.content_rule_routes_rate_limit' differs:
--- old datastream
+++ new datastream
@@ -1,5 +1,5 @@
Run the following command to retrieve a list routes that does not have rate limit enabled:
-$ oc get routes --all-namespaces -o json | jq '[.items[] | select(.metadata.namespace | startswith("kube-") or startswith("openshift-") | not) | select(.metadata.annotations["haproxy.router.openshift.io/rate-limit-connections"] == "true" | not) | .metadata.name]'
+$ oc get routes --all-namespaces -o json | jq [.items[] | select(.metadata.namespace | startswith("kube-") or startswith("openshift-") | not) | select(.metadata.annotations["haproxy.router.openshift.io/rate-limit-connections"] == "true" | not) | .metadata.name]
Make sure that there is output nothing in the result.
Is it the case that Rate limit is not enabled for all routes outside the openshift namespaces?
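Both route checks above share the same filter shape: exclude `kube-*`/`openshift-*` namespaces, then keep routes missing the annotation. A sketch of the whitelist variant with a hypothetical route list (the rate-limit check only swaps the annotation key and adds the `== "true"` comparison):

```shell
# Hypothetical route list standing in for `oc get routes --all-namespaces -o json`
routes='{"items":[
  {"metadata":{"namespace":"openshift-console","name":"console","annotations":{}}},
  {"metadata":{"namespace":"myapp","name":"frontend",
   "annotations":{"haproxy.router.openshift.io/ip_whitelist":"10.0.0.0/8"}}},
  {"metadata":{"namespace":"myapp","name":"backend","annotations":{}}}]}'

# Routes outside kube-*/openshift-* namespaces that lack the whitelist
# annotation; the quotes protect the parentheses, pipes, and brackets.
offenders=$(printf '%s' "$routes" | jq -c \
  '[.items[] | select(.metadata.namespace | startswith("kube-") or startswith("openshift-") | not)
             | select(.metadata.annotations["haproxy.router.openshift.io/ip_whitelist"] | not)
             | .metadata.name]')
echo "$offenders"   # -> ["backend"]
```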
OCIL for rule 'xccdf_org.ssgproject.content_rule_file_permissions_worker_kubeconfig' differs:
--- old datastream
+++ new datastream
@@ -1,3 +1,4 @@
+
To check the permissions of /var/lib/kubelet/kubeconfig,
you'll need to log into a node in the cluster.
As a user with administrator privileges, log into a node in the relevant pool:
@@ -13,5 +14,6 @@
$ ls -l /var/lib/kubelet/kubeconfig
If properly configured, the output should indicate the following permissions:
-rw-------
- Is it the case that /var/lib/kubelet/kubeconfig has unix mode -rw-------?
+ Is it the case that
+/var/lib/kubelet/kubeconfig has unix mode -rw-------?
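The expected `-rw-------` string maps to octal mode 600. A sketch of the verification against a stand-in file (the real target is `/var/lib/kubelet/kubeconfig` on the node):

```shell
# Stand-in file for /var/lib/kubelet/kubeconfig
f=/tmp/kubeconfig-demo
touch "$f"
chmod 600 "$f"              # 600 is the octal form of -rw-------

# The first ten characters of the long listing are the file type
# plus the permission bits the rule checks for.
perms=$(ls -l "$f" | cut -c1-10)
echo "$perms"   # -> -rw-------
```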
Force-pushed from 8702f65 to deb38f1
/retest
Resolved review threads:
- applications/openshift/authentication/ocp_idp_no_htpasswd/rule.yml (outdated)
- applications/openshift/integrity/cluster_version_operator_exists/rule.yml (3 outdated, 1 current)
- applications/openshift/integrity/cluster_version_operator_verify_integrity/rule.yml (3 outdated)
- ...ions/openshift/integrity/cluster_version_operator_verify_integrity/tests/allverified.pass.sh
- ...ons/openshift/integrity/cluster_version_operator_verify_integrity/tests/someverified.fail.sh
Force-pushed from 6861cc6 to d6ea049
/retest
Added two rules: cluster_version_operator_exists, to check that the Cluster Version Operator is available, and cluster_version_operator_verify_integrity, to check that the cluster image is verified. Related link on how the RHCOS integrity check works: https://github.com/openshift/machine-config-operator/blob/master/docs/OSUpgrades.md#questions-and-answers
/retest
@Vincent056: The following test failed.
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
To address the SA-10(1) control, the integrity of the OpenShift platform is handled from the start by the Cluster Version Operator.
Link: https://github.com/openshift/machine-config-operator/blob/master/docs/OSUpgrades.md#questions-and-answers
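The verify-integrity rule described above can be sketched against the ClusterVersion object, whose `.status.history[].verified` field records whether each applied release image passed signature verification. The JSON payload below is hypothetical; only the field shape comes from the ClusterVersion API:

```shell
# Hypothetical payload mimicking `oc get clusterversion version -o json`;
# `verified` is set by the Cluster Version Operator after checking the
# release image signature.
cv='{"status":{"history":[
  {"state":"Completed","version":"4.9.0","verified":true},
  {"state":"Completed","version":"4.8.12","verified":true}]}}'

# List any history entries whose image was not signature-verified;
# an empty array means every applied update passed the integrity check.
unverified=$(printf '%s' "$cv" | jq -c \
  '[.status.history[] | select(.verified != true) | .version]')
echo "$unverified"   # -> []
```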