Guide to the Secure Configuration of Red Hat OpenShift Container Platform 4

with profile NIST 800-53 High-Impact Baseline for Red Hat OpenShift - Platform level
This compliance profile reflects the core set of High-Impact Baseline configuration settings for deployment of Red Hat OpenShift Container Platform into U.S. Defense, Intelligence, and Civilian agencies. Development partners and sponsors include the U.S. National Institute of Standards and Technology (NIST), U.S. Department of Defense, the National Security Agency, and Red Hat.

This baseline implements configuration requirements from the following sources:

  • NIST 800-53 control selections for High-Impact systems (NIST 800-53)

For any differing configuration requirements, e.g. password lengths, the stricter security setting was chosen. Security Requirement Traceability Guides (RTMs) and sample System Security Configuration Guides are provided via the scap-security-guide-docs package.

This profile reflects U.S. Government consensus content and is developed through the ComplianceAsCode initiative, championed by the National Security Agency. Except for differences in formatting to accommodate publishing processes, this profile mirrors ComplianceAsCode content as minor divergences, such as bugfixes, work through the consensus and release processes.
This guide presents a catalog of security-relevant configuration settings for Red Hat OpenShift Container Platform 4. It is a rendering of content structured in the eXtensible Configuration Checklist Description Format (XCCDF) in order to support security automation. The SCAP content is available in the scap-security-guide package, which is developed at https://www.open-scap.org/security-policies/scap-security-guide.

Providing system administrators with such guidance informs them how to securely configure systems under their control in a variety of network roles. Policy makers and baseline creators can use this catalog of settings, with its associated references to higher-level security control catalogs, in order to assist them in security baseline creation. This guide is a catalog, not a checklist, and satisfaction of every item is not likely to be possible or sensible in many operational scenarios. However, the XCCDF format enables granular selection and adjustment of settings, and their association with OVAL and OCIL content provides an automated checking capability. Transformations of this document, and its associated automated checking content, are capable of providing baselines that meet a diverse set of policy objectives. Some example XCCDF Profiles, which are selections of items that form checklists and can be used as baselines, are available with this guide. They can be processed, in an automated fashion, with tools that support the Security Content Automation Protocol (SCAP). The NIST National Checklist Program (NCP), which provides required settings for the United States Government, is one example of a baseline created from this guidance.
Do not attempt to implement any of the settings in this guide without first testing them in a non-operational environment. The creators of this guidance assume no responsibility whatsoever for its use by other parties, and make no guarantees, expressed or implied, about its quality, reliability, or any other characteristic.

Profile Information

Profile Title: NIST 800-53 High-Impact Baseline for Red Hat OpenShift - Platform level
Profile ID: xccdf_org.ssgproject.content_profile_high-rev-4

CPE Platforms

  • cpe:/a:redhat:openshift_container_platform_node_on_ovn:4
  • cpe:/a:redhat:openshift_container_platform_node_on_sdn:4
  • cpe:/o:redhat:openshift_container_platform_node:4
  • cpe:/a:redhat:openshift_container_platform_on_aws:4
  • cpe:/a:redhat:openshift_container_platform_on_azure:4
  • cpe:/a:redhat:openshift_container_platform_on_gcp:4
  • cpe:/a:redhat:openshift_container_platform_on_ovn:4
  • cpe:/a:redhat:openshift_container_platform_on_sdn:4
  • cpe:/a:redhat:openshift_container_platform:4.10
  • cpe:/a:redhat:openshift_container_platform:4.11
  • cpe:/a:redhat:openshift_container_platform:4.12
  • cpe:/a:redhat:openshift_container_platform:4.13
  • cpe:/a:redhat:openshift_container_platform:4.14
  • cpe:/a:redhat:openshift_container_platform:4.15
  • cpe:/a:redhat:openshift_container_platform:4.16
  • cpe:/a:redhat:openshift_container_platform:4.17
  • cpe:/a:redhat:openshift_container_platform:4.18
  • cpe:/a:redhat:openshift_container_platform:4.6
  • cpe:/a:redhat:openshift_container_platform:4.7
  • cpe:/a:redhat:openshift_container_platform:4.8
  • cpe:/a:redhat:openshift_container_platform:4.9
  • cpe:/a:redhat:openshift_container_platform:4.1

Revision History

Current version: 0.1.75

  • draft (as of 2024-11-04)

Table of Contents

  1. Kubernetes Settings
    1. System and Software Integrity
    2. Kubernetes - Account and Access Control
    3. OpenShift Kube API Server
    4. Authentication
    5. OpenShift Controller Settings
    6. OpenShift etcd Settings
    7. Kubernetes - General Security Practices
    8. Kubernetes Kubelet Settings
    9. OpenShift - Logging Settings
    10. Kubernetes - Network Configuration and Firewalls
    11. OpenShift API Server
    12. Role-based Access Control
    13. Kubernetes - Registry Security Practices
    14. OpenShift - Risk Assessment Settings
    15. Security Context Constraints (SCC)
    16. OpenShift - Kubernetes - Scheduler Settings
    17. Kubernetes Secrets Management
    18. Kubernetes - Worker Node Settings

Checklist

Group   Guide to the Secure Configuration of Red Hat OpenShift Container Platform 4   Group contains 20 groups and 136 rules
Group   Kubernetes Settings   Group contains 19 groups and 136 rules
[ref]   Each section of this configuration guide includes information about the configuration of a Kubernetes cluster and a set of recommendations for hardening the configuration. For each hardening recommendation, information on how to implement the control and/or how to verify or audit the control is provided. In some cases, remediation information is also provided. Some of the settings in the hardening guide are in place by default. The audit information for these settings is provided in order to verify that the cluster administrator has not made changes that would be less secure. A small number of items require configuration. Finally, there are some recommendations that require decisions by the system operator, such as audit log size, retention, and related settings.
Group   System and Software Integrity   Group contains 1 group and 4 rules
[ref]   System and software integrity can be gained by installing antivirus, increasing system encryption strength with FIPS, verifying installed software, enabling SELinux, installing an Intrusion Prevention System, etc. Note that installing or enabling integrity checking tools cannot prevent intrusions, but they can detect that an intrusion may have occurred. Requirements for integrity checking may be highly dependent on the environment in which the system will be used.
Group   System Cryptographic Policies   Group contains 1 rule
[ref]   OpenShift has the capability to centrally configure cryptographic policies.

Rule   Ensure that FIPS mode is enabled on all cluster nodes   [ref]

OpenShift has an installation-time flag that can enable FIPS mode for the cluster. The flag
fips: true
must be enabled at install time in the
install-config.yaml
file.
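For illustration, the relevant install-config.yaml fragment is simply the top-level key:
fips: true
After installation, the rendered machine configs can be spot-checked with a query along these lines, mirroring the automated check described in the warning below; every value in the resulting list should be true:
$ oc get machineconfigs -o json | jq '[.items[] | select(.metadata.name | test("^rendered-worker-[0-9a-z]+$|^rendered-master-[0-9a-z]+$"))] | map(.spec.fips == true)'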
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • /apis/machineconfiguration.openshift.io/v1/machineconfigs API endpoint, filter it with the jq utility using the following filter [.items[] | select(.metadata.name | test("^rendered-worker-[0-9a-z]+$|^rendered-master-[0-9a-z]+$"))] | map(.spec.fips == true) and persist it to the local /kubernetes-api-resources/apis/machineconfiguration.openshift.io/v1/machineconfigs#ab7e02a1c3f44ae48f843ce3dee7b948d624d2f702b9428760efbfd4653847ba file.
Rationale:
Use of weak or untested encryption algorithms undermines the purposes of utilizing encryption to protect data. The system must implement cryptographic modules adhering to the higher standards approved by the federal government since this provides assurance they have been tested and validated.
Severity: 
high
Rule ID:xccdf_org.ssgproject.content_rule_fips_mode_enabled_on_all_nodes
Identifiers:

CCE-85860-5

References:
nerc-cipCIP-003-8 R4.2, CIP-007-3 R5.1, CIP-007-3 R7.1
nistAC-17(2), CM-3(6), SC-12(2), SC-12(3), SC-13, IA-7
pcidssReq-3.4.1
app-srg-ctrSRG-APP-000126-CTR-000275, SRG-APP-000219-CTR-000550, SRG-APP-000411-CTR-000995, SRG-APP-000412-CTR-001000, SRG-APP-000416-CTR-001015, SRG-APP-000514-CTR-001315, SRG-APP-000610-CTR-001385, SRG-APP-000635-CTR-001405, CNTR-OS-000340, CNTR-OS-000510, CNTR-OS-001080
stigrefSV-257536r921551_rule, SV-257546r921581_rule, SV-257587r921704_rule

Rule   Ensure that Cluster Version Operator is deployed   [ref]

Integrity of the OpenShift platform is handled first and foremost by the Cluster Version Operator, which by default uses GPG to verify the integrity of the release image before applying it. [1] This rule checks that the Cluster Version Operator is deployed and available in the system. [1] https://github.com/openshift/machine-config-operator/blob/master/docs/OSUpgrades.md#questions-and-answers
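As an informal spot check, the same condition that the automated check inspects can be viewed directly, for example:
$ oc get clusterversion version -o json | jq '[.status.conditions[] | select(.type=="Available") | .status]'
The Available condition should report "True" on a healthy cluster.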
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • /apis/config.openshift.io/v1/clusterversions/version API endpoint, filter it with the jq utility using the following filter [.status.conditions[] | select(.type=="Available") | .status] and persist it to the local /kubernetes-api-resources/apis/config.openshift.io/v1/clusterversions/version#588c29ac9d4c67b1444308c5ba310271832895fee54701f7d0cb6cbced390443 file.
Rationale:
The integrity check prevents a malicious actor from using an unauthorized system image; it ensures the image has not been tampered with or corrupted.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_cluster_version_operator_exists
Identifiers:

CCE-90670-1

References:
nistSA-10(1)
app-srg-ctrSRG-APP-000384-CTR-000915, CNTR-OS-000740
bsiAPP.4.4.A17
stigrefSV-257561r921626_rule

Rule   Ensure that Cluster Version Operator verifies integrity   [ref]

Integrity of the OpenShift platform is handled first and foremost by the Cluster Version Operator, which by default uses GPG to verify the integrity of the release image before applying it. This rule checks whether any cluster image in the update history is unverified.
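A quick manual approximation of this check is to list the verified flag for the entries in the update history, for example:
$ oc get clusterversion version -o json | jq '[.status.history[0:-1]|.[]|.verified]'
Any false value would indicate an image that was not verified before being applied.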
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • /apis/config.openshift.io/v1/clusterversions/version API endpoint, filter it with the jq utility using the following filter [.status.history[0:-1]|.[]|.verified] and persist it to the local /kubernetes-api-resources/apis/config.openshift.io/v1/clusterversions/version#69adcfd65c8b8d723e4a7118c170f634cebbb349e9b554dd15001e6551a586f8 file.
Rationale:
An unverified cluster image would compromise system integrity. The integrity check prevents a malicious actor from using an unauthorized system image; it ensures the image has not been tampered with or corrupted.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_cluster_version_operator_verify_integrity
Identifiers:

CCE-90671-9

References:
nistSA-10(1)
app-srg-ctrSRG-APP-000384-CTR-000915, CNTR-OS-000740
bsiAPP.4.4.A17
stigrefSV-257561r921626_rule

Rule   Ensure that File Integrity Operator is scanning the cluster   [ref]

The File Integrity Operator continually runs file integrity checks on the cluster nodes. It deploys a daemon set that initializes and runs privileged AIDE containers on each node, providing a status object with a log of files that are modified during the initial run of the daemon set pods.
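If the File Integrity Operator is installed, a quick way to confirm that at least one scan object exists is to list the FileIntegrity custom resources (the resource name below follows the API group shown in the warning):
$ oc get fileintegrities --all-namespaces
An empty result, or an error stating that the resource type is unknown, indicates that the operator is not deployed or is not scanning the cluster.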
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/fileintegrity.openshift.io/v1alpha1/fileintegrities?limit=5 API endpoint to the local /kubernetes-api-resources/apis/fileintegrity.openshift.io/v1alpha1/fileintegrities?limit=5 file.
Rationale:
File integrity scanning is able to detect potential unauthorized access.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_file_integrity_exists
Identifiers:

CCE-83657-7

References:
nerc-cipCIP-003-8 R4.2, CIP-003-8 R6, CIP-007-3 R4, CIP-007-3 R4.1, CIP-007-3 R4.2
nistSC-4(23), SI-6, SI-7, SI-7(1), CM-6(a), SI-7(2), SI-4(24)
pcidssReq-10.5.5, Req-11.5
app-srg-ctrSRG-APP-000516-CTR-001325
bsiAPP.4.4.A17
pcidss410.3.4, 10.3, 11.5.2
Group   Kubernetes - Account and Access Control   Group contains 2 rules
[ref]   In traditional Unix security, if an attacker gains shell access to a certain login account, they can perform any action or access any file to which that account has access. The same idea applies to cloud technology such as Kubernetes. Therefore, making it more difficult for unauthorized people to gain shell access to accounts, particularly to privileged accounts, is a necessary part of securing a system. This section introduces mechanisms for restricting access to accounts under Kubernetes.
Group   OpenShift Kube API Server   Group contains 43 rules
[ref]   This section contains recommendations for kube-apiserver configuration.

Rule   Disable the AlwaysAdmit Admission Control Plugin   [ref]

To ensure OpenShift only responds to requests explicitly allowed by the admission control plugins, check that the config ConfigMap object does not contain the AlwaysAdmit plugin.
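On a standalone cluster, the enabled admission plugins can be reviewed with the same query used by other admission control rules in this guide; AlwaysAdmit should not appear in the output:
$ oc -n openshift-kube-apiserver get configmap config -o json | jq -r '.data."config.yaml"' | jq '.apiServerArguments."enable-admission-plugins"'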
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/configmaps/kas-config{{else}}/api/v1/namespaces/openshift-kube-apiserver/configmaps/config{{end}} API endpoint, filter it with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}[.data."config.json" | fromjson]{{else}}[.data."config.yaml" | fromjson]{{end}} and persist it to the local /kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config#ffe65d9fac11909686e59349c6a0111aaf57caa26bd2db3e7dcb1a0a22899145 file.
Rationale:
Enabling the admission control plugin AlwaysAdmit allows all requests and does not provide any filtering.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_api_server_admission_control_plugin_alwaysadmit
Identifiers:

CCE-84148-6

References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325
cis1.2.10
pcidss42.2.1, 2.2

Rule   Ensure that the Admission Control Plugin AlwaysPullImages is not set   [ref]

The AlwaysPullImages admission control plugin should be disabled, since it can introduce new failure modes for control plane components if an image registry is unreachable.
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/configmaps/kas-config{{else}}/api/v1/namespaces/openshift-kube-apiserver/configmaps/config{{end}} API endpoint, filter it with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}[.data."config.json" | fromjson]{{else}}[.data."config.yaml" | fromjson]{{end}} and persist it to the local /kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config#ffe65d9fac11909686e59349c6a0111aaf57caa26bd2db3e7dcb1a0a22899145 file.
Rationale:
Setting admission control policy to AlwaysPullImages forces every new pod to pull the required images every time. In a multi-tenant cluster users can be assured that their private images can only be used by those who have the credentials to pull them. Without this admission control policy, once an image has been pulled to a node, any pod from any user can use it simply by knowing the image’s name, without any authorization check against the image ownership. When this plug-in is enabled, images are always pulled prior to starting containers, which means valid credentials are required. However, turning on this admission plugin can introduce new kinds of cluster failure modes. OpenShift 4 master and infrastructure components are deployed as pods. Enabling this feature can result in cases where loss of contact to an image registry can cause a redeployed infrastructure pod (oauth-server for example) to fail on an image pull for an image that is currently present on the node. We use PullIfNotPresent so that a loss of image registry access does not prevent the pod from starting. If it becomes PullAlways, then an image registry access outage can cause key infrastructure components to fail. The pull policy can be managed per container, using imagePullPolicy.
Severity: 
high
Rule ID:xccdf_org.ssgproject.content_rule_api_server_admission_control_plugin_alwayspullimages
References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325
cis1.2.11
pcidss42.2.1, 2.2

Rule   Enable the NamespaceLifecycle Admission Control Plugin   [ref]

OpenShift enables the NamespaceLifecycle plugin by default.
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/configmaps/kas-config{{else}}/api/v1/namespaces/openshift-kube-apiserver/configmaps/config{{end}} API endpoint, filter it with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}.data."config.json" | fromjson{{else}}.data."config.yaml" | fromjson{{end}} and persist it to the local /kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config#54842ba5cf821644f2727625c1518eba2de6e6b7ae318043d0bf7ccc9570e430 file.
Rationale:
Setting admission control policy to NamespaceLifecycle ensures that objects cannot be created in non-existent namespaces, and that namespaces undergoing termination are not used for creating new objects. This is recommended to enforce the integrity of the namespace termination process and also for the availability of new objects.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_api_server_admission_control_plugin_namespacelifecycle
Identifiers:

CCE-83854-0

References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325
cis1.2.13
pcidss42.2.1, 2.2

Rule   Enable the NodeRestriction Admission Control Plugin   [ref]

To limit the Node and Pod objects that a kubelet could modify, ensure that the NodeRestriction plugin on kubelets is enabled in the api-server configuration by running the following command:
$ oc -n openshift-kube-apiserver get configmap config -o json | jq -r '.data."config.yaml"' | jq '.apiServerArguments."enable-admission-plugins"'
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/configmaps/kas-config{{else}}/api/v1/namespaces/openshift-kube-apiserver/configmaps/config{{end}} API endpoint, filter it with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}.data."config.json" | fromjson{{else}}.data."config.yaml" | fromjson{{end}} and persist it to the local /kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config#54842ba5cf821644f2727625c1518eba2de6e6b7ae318043d0bf7ccc9570e430 file.
Rationale:
Using the NodeRestriction plugin ensures that the kubelet is restricted to the Node and Pod objects that it could modify as defined. Such kubelets will only be allowed to modify their own Node API object, and only modify Pod API objects that are bound to their node.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_api_server_admission_control_plugin_noderestriction
Identifiers:

CCE-83753-4

References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325
cis1.2.15
pcidss42.2.1, 2.2

Rule   Enable the SecurityContextConstraint Admission Control Plugin   [ref]

To ensure pod permissions are managed, make sure that the SecurityContextConstraint admission control plugin is used.
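As with the other admission control rules, the configured plugin list can be inspected from the kube-apiserver ConfigMap; verify that an entry for the SecurityContextConstraint plugin is present (the exact plugin name may vary between releases):
$ oc -n openshift-kube-apiserver get configmap config -o json | jq -r '.data."config.yaml"' | jq '.apiServerArguments."enable-admission-plugins"'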
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/configmaps/kas-config{{else}}/api/v1/namespaces/openshift-kube-apiserver/configmaps/config{{end}} API endpoint, filter it with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}.data."config.json" | fromjson{{else}}.data."config.yaml" | fromjson{{end}} and persist it to the local /kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config#54842ba5cf821644f2727625c1518eba2de6e6b7ae318043d0bf7ccc9570e430 file.
Rationale:
A Security Context Constraint is a cluster-level resource that controls the actions which a pod can perform and what the pod may access. The SecurityContextConstraint objects define a set of conditions that a pod must run with in order to be accepted into the system. Security Context Constraints are comprised of settings and strategies that control the security features a pod has access to and hence this must be used to control pod access permissions.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_api_server_admission_control_plugin_scc
Identifiers:

CCE-83602-3

References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325
cis1.2.14
pcidss42.2.1, 2.2

Rule   Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used   [ref]

Instead of using a customized SecurityContext for pods, a Pod Security Policy (PSP) or a SecurityContextConstraint should be used. These are cluster-level resources that control the actions that a pod can perform and what resources the pod may access. The SecurityContextDeny plugin disallows users from setting a pod's securityContext fields. Ensure that the list of admission controllers does not include SecurityContextDeny:
$ oc -n openshift-kube-apiserver get configmap config -o json | jq -r '.data."config.yaml"' | jq '.apiServerArguments."enable-admission-plugins"' 
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/configmaps/kas-config{{else}}/api/v1/namespaces/openshift-kube-apiserver/configmaps/config{{end}} API endpoint, filter it with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}[.data."config.json" | fromjson]{{else}}[.data."config.yaml" | fromjson]{{end}} and persist it to the local /kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config#ffe65d9fac11909686e59349c6a0111aaf57caa26bd2db3e7dcb1a0a22899145 file.
Rationale:
The SecurityContextDeny admission control plugin disallows setting any security options for your pods. SecurityContextConstraints allow you to enforce RBAC rules on who can set these options on the pods, and what they are allowed to set. Thus, using SecurityContextDeny would prevent you from enforcing granular permissions on your pods.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_api_server_admission_control_plugin_securitycontextdeny
Identifiers:

CCE-83586-8

References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325
cis1.2.13

Rule   Ensure that anonymous requests to the API Server are authorized   [ref]

By default, anonymous access to the OpenShift API is enabled, but at the same time, all requests must be authorized. If no authentication mechanism is used, the request is assigned the system:anonymous virtual user and the system:unauthenticated virtual group. This allows the authorization layer to determine which requests, if any, an anonymous user is authorized to make. To verify the authorization rules for anonymous requests run the following:
$ oc describe clusterrolebindings
and inspect the bindings of the system:anonymous virtual user and the system:unauthenticated virtual group. To test that an anonymous request is authorized to access the readyz endpoint, run:
$ oc get --as="system:anonymous" --raw='/readyz?verbose'
In contrast, a request to list all projects should not be authorized:
$ oc get --as="system:anonymous" projects
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/rbac.authorization.k8s.io/v1/clusterrolebindings API endpoint to the local /kubernetes-api-resources/apis/rbac.authorization.k8s.io/v1/clusterrolebindings file.
Rationale:
When enabled, requests that are not rejected by other configured authentication methods are treated as anonymous requests. These requests are then served by the API server. If you are using RBAC authorization, it is generally considered reasonable to allow anonymous access to the API Server for health checks and discovery purposes, and hence this recommendation is not scored. However, you should consider whether anonymous discovery is an acceptable risk for your purposes.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_api_server_anonymous_auth
References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325
cis1.2.1
bsiAPP.4.4.A3
pcidss42.2.1, 2.2

Rule   Ensure catch-all FlowSchema object for API Priority and Fairness Exists   [ref]

Using the APIPriorityAndFairness feature provides a fine-grained way to control the behaviour of the Kubernetes API server in an overload situation. The well-known FlowSchema catch-all should be available to make sure that every request gets some kind of classification. By default, the catch-all priority level only allows one concurrency share and does not queue requests. To inspect all the FlowSchema objects, run:
oc get flowschema
To inspect the well-known catch-all object, run the following:
oc describe flowschema catch-all
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • /apis/flowcontrol.apiserver.k8s.io/v1alpha1/flowschemas/catch-all API endpoint and persist it to the local /kubernetes-api-resources/apis/flowcontrol.apiserver.k8s.io/v1alpha1/flowschemas/catch-all file.
  • /apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas/catch-all API endpoint and persist it to the local /kubernetes-api-resources/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas/catch-all file.
  • /apis/flowcontrol.apiserver.k8s.io/v1beta2/flowschemas/catch-all API endpoint and persist it to the local /kubernetes-api-resources/apis/flowcontrol.apiserver.k8s.io/v1beta2/flowschemas/catch-all file.
  • /apis/flowcontrol.apiserver.k8s.io/v1/flowschemas/catch-all API endpoint and persist it to the local /kubernetes-api-resources/apis/flowcontrol.apiserver.k8s.io/v1/flowschemas/catch-all file.
Rationale:
The FlowSchema API objects enforce a limit on the number of events that the API Server will accept in a given time slice. In a large multi-tenant cluster, there might be a small percentage of misbehaving tenants which could have a significant impact on the performance of the cluster overall. It is recommended to limit the rate of events that the API Server will accept.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_api_server_api_priority_flowschema_catch_all
Identifiers:

CCE-85891-0

References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325
cis1.2.10

Rule   Configure the Kubernetes API Server Maximum Retained Audit Logs   [ref]

To configure how many rotations of audit logs are retained, edit the openshift-kube-apiserver configmap and set the audit-log-maxbackup parameter to 10 or to an organizationally appropriate value:
"apiServerArguments":{
  ...
  "audit-log-maxbackup": [10],
  ...
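Assuming the parameter is exposed under apiServerArguments in the rendered config.yaml, the effective value can be inspected with, for example:
$ oc -n openshift-kube-apiserver get configmap config -o json | jq -r '.data."config.yaml"' | jq '.apiServerArguments."audit-log-maxbackup"'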
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/configmaps/kas-config{{else}}/api/v1/namespaces/openshift-kube-apiserver/configmaps/config{{end}} API endpoint, filter it with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}.data."config.json" | fromjson{{else}}.data."config.yaml" | fromjson{{end}} and persist it to the local /kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config#54842ba5cf821644f2727625c1518eba2de6e6b7ae318043d0bf7ccc9570e430 file.
Rationale:
OpenShift automatically rotates the log files. Retaining old log files ensures OpenShift Operators will have sufficient log data available for carrying out any investigation or correlation. For example, if the audit log size is set to 100 MB and the number of retained log files is set to 10, OpenShift Operators would have approximately 1 GB of log data to use during analysis.
Severity: 
low
Rule ID:xccdf_org.ssgproject.content_rule_api_server_audit_log_maxbackup
Identifiers:

CCE-83739-3

References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325
cis1.2.22
pcidss42.2.1, 2.2

Rule   Configure Kubernetes API Server Maximum Audit Log Size   [ref]

To rotate audit logs upon reaching a maximum size, edit the openshift-kube-apiserver configmap and set the audit-log-maxsize parameter to an appropriate size in MB. For example, to set it to 100 MB:
"apiServerArguments":{
  ...
  "audit-log-maxsize": ["100"],
  ...
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/configmaps/kas-config{{else}}/api/v1/namespaces/openshift-kube-apiserver/configmaps/config{{end}} API endpoint, filter it with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}.data."config.json" | fromjson{{else}}.data."config.yaml" | fromjson{{end}} and persist it to the local /kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config#54842ba5cf821644f2727625c1518eba2de6e6b7ae318043d0bf7ccc9570e430 file.
Rationale:
OpenShift automatically rotates log files. Retaining old log files ensures that OpenShift Operators have sufficient log data available for carrying out any investigation or correlation. If you have set file size of 100 MB and the number of old log files to keep as 10, there would be approximately 1 GB of log data available for use in analysis.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_api_server_audit_log_maxsize
Identifiers:

CCE-83607-2

References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325
cis1.2.23
pcidss42.2.1, 2.2

Rule   Configure the Audit Log Path   [ref]

To enable auditing on the Kubernetes API Server, the audit log path must be set. Edit the openshift-kube-apiserver configmap and set the audit-log-path to a suitable path and file where audit logs should be written. For example:
"apiServerArguments":{
  ...
  "audit-log-path":"/var/log/kube-apiserver/audit.log",
  ...
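The configured path can be checked in the same way as the other apiServerArguments, for example:
$ oc -n openshift-kube-apiserver get configmap config -o json | jq -r '.data."config.yaml"' | jq '.apiServerArguments."audit-log-path"'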
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/configmaps/kas-config{{else}}/api/v1/namespaces/openshift-kube-apiserver/configmaps/config{{end}} API endpoint, filter it with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}.data."config.json" | fromjson{{else}}.data."config.yaml" | fromjson{{end}} and persist it to the local /kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config#54842ba5cf821644f2727625c1518eba2de6e6b7ae318043d0bf7ccc9570e430 file.
Rationale:
Auditing of the Kubernetes API Server is not enabled by default. Auditing the API Server provides a security-relevant chronological set of records documenting the sequence of activities that have affected the system by users, administrators, or other system components.
Severity: 
high
Rule ID:xccdf_org.ssgproject.content_rule_api_server_audit_log_path
Identifiers:

CCE-84020-7

References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325
cis1.2.20
pcidss42.2.1, 2.2

Rule   The authorization-mode cannot be AlwaysAllow   [ref]

Do not always authorize all requests.
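Assuming the authorization modes are listed under apiServerArguments in the rendered config.yaml, they can be reviewed with a query such as the following; AlwaysAllow must not appear in the output:
$ oc -n openshift-kube-apiserver get configmap config -o json | jq -r '.data."config.yaml"' | jq '.apiServerArguments."authorization-mode"'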
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/configmaps/kas-config{{else}}/api/v1/namespaces/openshift-kube-apiserver/configmaps/config{{end}} API endpoint, filter it with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}[.data."config.json" | fromjson]{{else}}[.data."config.yaml" | fromjson]{{end}} and persist it to the local /kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config#ffe65d9fac11909686e59349c6a0111aaf57caa26bd2db3e7dcb1a0a22899145 file.
Rationale:
The API Server can be configured to allow all requests. This mode should not be used on any production cluster.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_api_server_auth_mode_no_aa
Identifiers:

CCE-84207-0

References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325
cis1.2.7
pcidss42.2.1, 2.2

Rule   Ensure authorization-mode Node is configured   [ref]

Restrict kubelet nodes to reading only objects associated with them.
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/configmaps/kas-config{{else}}/api/v1/namespaces/openshift-kube-apiserver/configmaps/config{{end}} API endpoint, filter it with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}.data."config.json" | fromjson{{else}}.data."config.yaml" | fromjson{{end}} and persist it to the local /kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config#54842ba5cf821644f2727625c1518eba2de6e6b7ae318043d0bf7ccc9570e430 file.
Rationale:
The Node authorization mode only allows kubelets to read Secret, ConfigMap, PersistentVolume, and PersistentVolumeClaim objects associated with their nodes.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_api_server_auth_mode_node
Identifiers:

CCE-83889-6

References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325
cis1.2.8

Rule   Ensure authorization-mode RBAC is configured   [ref]

To ensure OpenShift restricts different identities to a defined set of operations they are allowed to perform, check that the API server's authorization-mode configuration option list contains RBAC.
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/configmaps/kas-config{{else}}/api/v1/namespaces/openshift-kube-apiserver/configmaps/config{{end}} API endpoint, filter it with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}.data."config.json" | fromjson{{else}}.data."config.yaml" | fromjson{{end}} and persist it to the local /kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config#54842ba5cf821644f2727625c1518eba2de6e6b7ae318043d0bf7ccc9570e430 file.
Rationale:
Role Based Access Control (RBAC) allows fine-grained control over the operations that different entities can perform on different objects in the cluster. Enabling RBAC is critical in regulating access to an OpenShift cluster as the RBAC rules specify, given a user, which operations can be executed over a set of namespaced or cluster-wide resources.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_api_server_auth_mode_rbac
Identifiers:

CCE-84102-3

References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325
cis1.2.8
pcidss42.2.1, 2.2

Rule   Disable basic-auth-file for the API Server   [ref]

Basic Authentication should not be used for any reason. If needed, edit the openshift-kube-apiserver configmap and remove the basic-auth-file parameter:
"apiServerArguments":{
  ...
  "basic-auth-file":[
    "/path/to/any/file"
  ],
  ...
Alternate authentication mechanisms such as tokens and certificates will need to be used. Username and password for basic authentication will be disabled.
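To confirm the parameter is not set, the same ConfigMap query pattern can be used; the filter below should return null:
$ oc -n openshift-kube-apiserver get configmap config -o json | jq -r '.data."config.yaml"' | jq '.apiServerArguments."basic-auth-file"'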
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/configmaps/kas-config{{else}}/api/v1/namespaces/openshift-kube-apiserver/configmaps/config{{end}} API endpoint, filter it with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}.data."config.json" | fromjson{{else}}.data."config.yaml" | fromjson{{end}} and persist it to the local /kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config#54842ba5cf821644f2727625c1518eba2de6e6b7ae318043d0bf7ccc9570e430 file.
Rationale:
Basic authentication uses plaintext credentials for authentication. Currently, the basic authentication credentials last indefinitely, and the password cannot be changed without restarting the API Server. Basic authentication is currently supported for convenience and is not intended for production workloads.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_api_server_basic_auth
Identifiers:

CCE-83936-5

References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325
cis1.2.2
pcidss42.2.1, 2.2

Rule   Ensure that the bindAddress is set to a relevant secure port   [ref]

The bindAddress is set by default to 0.0.0.0:6443 and listens with TLS enabled.
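Assuming the value is exposed under servingInfo in the rendered config.yaml (as noted in the rationale below), it can be inspected with, for example:
$ oc -n openshift-kube-apiserver get configmap config -o json | jq -r '.data."config.yaml"' | jq '.servingInfo.bindAddress'
The expected default is "0.0.0.0:6443".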
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/configmaps/kas-config{{else}}/api/v1/namespaces/openshift-kube-apiserver/configmaps/config{{end}} API endpoint, filter it with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}.data."config.json" | fromjson{{else}}.data."config.yaml" | fromjson{{end}} and persist it to the local /kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config#54842ba5cf821644f2727625c1518eba2de6e6b7ae318043d0bf7ccc9570e430 file.
Rationale:
The OpenShift API server is served over HTTPS with authentication and authorization; the secure API endpoint is bound to 0.0.0.0:6443 by default. In OpenShift, the only supported way to access the API server pod is through the load balancer and then through the internal service. The value is set by the bindAddress argument under the servingInfo parameter.
Severity: 
low
Rule ID:xccdf_org.ssgproject.content_rule_api_server_bind_address
Identifiers:

CCE-83646-0

References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2, Req-2.2.3, Req-2.3
app-srg-ctrSRG-APP-000516-CTR-001325
cis1.2.18
pcidss42.2.1, 2.2.5, 2.2.7, 2.2

Rule   Configure the Client Certificate Authority for the API Server   [ref]

Certificates must be provided to fully set up TLS client certificate authentication. To ensure the API Server utilizes its own TLS certificates, the clientCA must be configured. Verify that servingInfo has the clientCA configured in the openshift-kube-apiserver config configmap to something similar to:
"apiServerArguments": {
  ...
    "client-ca-file": [
      "/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt"
    ],
  ...
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/configmaps/kas-config{{else}}/api/v1/namespaces/openshift-kube-apiserver/configmaps/config{{end}} API endpoint, filter it with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}[.data."config.json" | fromjson | select(.apiServerArguments["client-ca-file"]) | .apiServerArguments["client-ca-file"][] | select(test("/etc/kubernetes/certs/client-ca/ca.crt"))]{{else}}[.data."config.yaml" | fromjson | select(.apiServerArguments["client-ca-file"]) | .apiServerArguments["client-ca-file"][] | select(test("{{.var_apiserver_client_ca}}"))]{{end}} and persist it to the local /kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config#d56e72c377d8f85e0601a704d4218064a0ea4a2235ceee82d20db6cdafc74608 file.
Rationale:
API Server communication contains sensitive parameters that should remain encrypted in transit. Configure the API Server to serve only HTTPS traffic. If -clientCA is set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_api_server_client_ca
Identifiers:

CCE-84284-9

References:
nerc-cipCIP-003-8 R4.2, CIP-007-3 R5.1
nistSC-8, SC-8(1), SC-8(2)
pcidssReq-2.2, Req-2.2.3, Req-2.3
app-srg-ctrSRG-APP-000441-CTR-001090, SRG-APP-000442-CTR-001095
cis1.2.29
bsiAPP.4.4.A17
pcidss42.2.1, 2.2.5, 2.2.7, 2.2

Rule   Configure the Encryption Provider Cipher   [ref]

When you enable etcd encryption, the following OpenShift API server and Kubernetes API server resources are encrypted:

  • Secrets
  • ConfigMaps
  • Routes
  • OAuth access tokens
  • OAuth authorize tokens

When you enable etcd encryption, encryption keys are created. These keys are rotated on a weekly basis. You must have these keys in order to restore from an etcd backup.

To ensure the correct cipher, set the encryption type to aescbc or aesgcm in the apiserver object which configures the API server itself.

spec:
  encryption:
    type: aescbc

For more information, follow the relevant documentation.
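On a standalone (non-hosted) cluster, the currently configured encryption type can be read directly from the apiserver object referenced above, for example:
$ oc get apiserver cluster -o jsonpath='{.spec.encryption.type}'
An empty result means etcd encryption has not been enabled; otherwise the value should be aescbc or aesgcm.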

Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/apis/hypershift.openshift.io/v1beta1/namespaces/{{.hypershift_namespace_prefix}}/hostedclusters/{{.hypershift_cluster}}{{else}}/apis/config.openshift.io/v1/apiservers/cluster{{end}} API endpoint, filter it with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}[.spec.secretEncryption.type]{{else}}[.spec.encryption.type]{{end}} and persist it to the local /kubernetes-api-resources/apis/config.openshift.io/v1/apiservers/cluster#a1d4b20a86b76e7e2d634dbeff420b1a80df6800836dad1b552314d1b24a18cb file.
Rationale:
etcd is a highly available key-value store used by OpenShift deployments for persistent storage of all REST API objects. These objects are sensitive in nature and should be encrypted at rest to avoid any disclosures. Where etcd encryption is used, it is important to ensure that the appropriate set of encryption providers is used. Currently, aescbc and aesgcm are the only types supported by OCP.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_api_server_encryption_provider_cipher
Identifiers:

CCE-83585-0

References:
nerc-cipCIP-003-8 R4.2
nistSC-28, SC-28(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000429-CTR-001060, CNTR-OS-000780
cis1.2.31, 2.8
bsiSYS.1.6.A8
pcidss42.2.1, 2.2, 3.5.1.3, 3.5.1, 3.5
stigrefSV-257564r921635_rule

---
apiVersion: config.openshift.io/v1
kind: APIServer
metadata:
  name: cluster
spec:
  encryption:
    type: "{{.var_apiserver_encryption_type}}"

Rule   Configure the etcd Certificate Authority for the API Server   [ref]

To ensure etcd is configured to make use of TLS encryption for client connections, follow the OpenShift documentation and set up the TLS connection between the API Server and etcd. Then, verify that apiServerArguments has the etcd-cafile configured in the openshift-kube-apiserver config configmap to something similar to:
"apiServerArguments": {
  ...
    "etcd-cafile": [
        "/etc/kubernetes/static-pod-resources/configmaps/etcd-serving-ca/ca-bundle.crt"
    ],
  ...
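A manual spot check for the parameter, using the same ConfigMap query pattern as the rest of this section, could look like:
$ oc -n openshift-kube-apiserver get configmap config -o json | jq -r '.data."config.yaml"' | jq '.apiServerArguments."etcd-cafile"'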
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/configmaps/kas-config{{else}}/api/v1/namespaces/openshift-kube-apiserver/configmaps/config{{end}} API endpoint, filter it with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}[.data."config.json" | fromjson | select(.apiServerArguments["etcd-cafile"]) | .apiServerArguments["etcd-cafile"][] | select(test("/etc/kubernetes/certs/etcd-ca/ca.crt"))]{{else}}[.data."config.yaml" | fromjson | select(.apiServerArguments["etcd-cafile"]) | .apiServerArguments["etcd-cafile"][] | select(test("{{.var_apiserver_etcd_ca}}"))]{{end}} and persist it to the local /kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config#33769e7a3c14dd6dc237eb2b13a72140eeadf2ce49578f57bc9e0fd096cf4e9a file.
Rationale:
etcd is a highly-available key-value store used by OpenShift deployments for persistent storage of all REST API objects. These objects are sensitive in nature and should be protected by client authentication. This requires the API Server to identify itself to the etcd server using an SSL Certificate Authority file.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_api_server_etcd_ca
Identifiers:

CCE-84216-1

References:
nerc-cipCIP-003-8 R4.2, CIP-007-3 R5.1
nistSC-8, SC-8(1), SC-8(2)
pcidssReq-2.2, Req-2.2.3, Req-2.3
app-srg-ctrSRG-APP-000441-CTR-001090, SRG-APP-000442-CTR-001095
cis1.2.30
pcidss42.2.1, 2.2.5, 2.2.7, 2.2

Rule   Configure the etcd Certificate for the API Server   [ref]

To ensure etcd is configured to make use of TLS encryption for client communications, follow the OpenShift documentation and set up the TLS connection between the API Server and etcd. Then, verify that apiServerArguments has the etcd-certfile configured in the openshift-kube-apiserver configmap to something similar to:
...
"etcd-certfile": [
    "/etc/kubernetes/static-pod-resources/secrets/etcd-client/tls.crt"
],
...
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/configmaps/kas-config{{else}}/api/v1/namespaces/openshift-kube-apiserver/configmaps/config{{end}} API endpoint, filter it with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}.data."config.json" | fromjson{{else}}.data."config.yaml" | fromjson{{end}} and persist it to the local /kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config#54842ba5cf821644f2727625c1518eba2de6e6b7ae318043d0bf7ccc9570e430 file.
Rationale:
etcd is a highly-available key-value store used by OpenShift deployments for persistent storage of all REST API objects. These objects are sensitive in nature and should be protected by client authentication. This requires the API Server to identify itself to the etcd server using a client certificate and key.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_api_server_etcd_cert
Identifiers:

CCE-83876-3

References:
nerc-cipCIP-003-8 R4.2, CIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R5.1, CIP-007-3 R6.1
nistCM-6, CM-6(1), SC-8, SC-8(1)
pcidssReq-2.2, Req-2.2.3, Req-2.3
app-srg-ctrSRG-APP-000516-CTR-001325
cis1.2.27
pcidss42.2.1, 2.2.5, 2.2.7, 2.2

Rule   Configure the etcd Certificate Key for the API Server   [ref]

To ensure etcd is configured to make use of TLS encryption for client communications, follow the OpenShift documentation and set up the TLS connection between the API Server and etcd. Then, verify that apiServerArguments has the etcd-keyfile configured in the openshift-kube-apiserver configmap to something similar to:
...
"etcd-keyfile": [
    "/etc/kubernetes/static-pod-resources/secrets/etcd-client/tls.key"
],
...
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/configmaps/kas-config{{else}}/api/v1/namespaces/openshift-kube-apiserver/configmaps/config{{end}} API endpoint, filter it with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}.data."config.json" | fromjson{{else}}.data."config.yaml" | fromjson{{end}} and persist it to the local /kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config#54842ba5cf821644f2727625c1518eba2de6e6b7ae318043d0bf7ccc9570e430 file.
Rationale:
etcd is a highly-available key-value store used by OpenShift deployments for persistent storage of all REST API objects. These objects are sensitive in nature and should be protected by client authentication. This requires the API Server to identify itself to the etcd server using a client certificate and key.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_api_server_etcd_key
Identifiers:

CCE-83546-2

References:
nerc-cipCIP-003-8 R4.2, CIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R5.1, CIP-007-3 R6.1
nistCM-6, CM-6(1), SC-8, SC-8(1)
pcidssReq-2.2, Req-2.2.3, Req-2.3
app-srg-ctrSRG-APP-000516-CTR-001325
cis1.2.27
pcidss42.2.1, 2.2.5, 2.2.7, 2.2

Rule   Ensure that the --kubelet-https argument is set to true   [ref]

The kube-apiserver uses HTTPS for connections to the kubelet by default. The apiserver flag "--kubelet-https" is deprecated and should be either set to "true" or omitted from the argument list.
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/configmaps/kas-config{{else}}/api/v1/namespaces/openshift-kube-apiserver/configmaps/config{{end}} API endpoint, filter it with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}.data."config.json" | fromjson{{else}}.data."config.yaml" | fromjson{{end}} and persist it to the local /kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config#54842ba5cf821644f2727625c1518eba2de6e6b7ae318043d0bf7ccc9570e430 file.
Rationale:
Connections from the kube-apiserver to kubelets could potentially carry sensitive data such as secrets and keys. It is thus important to use in-transit encryption for any communication between the apiserver and kubelets.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_api_server_https_for_kubelet_conn
References:
nerc-cipCIP-003-8 R4.2, CIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R5.1, CIP-007-3 R6.1
nistCM-6, CM-6(1), SC-8, SC-8(1)
pcidssReq-2.2, Req-2.3
app-srg-ctrSRG-APP-000516-CTR-001325
cis1.2.4
bsiAPP.4.4.A17
pcidss42.2.1, 2.2.7, 2.2

Rule   Disable Use of the Insecure Bind Address   [ref]

OpenShift should not bind to non-loopback insecure addresses. Edit the openshift-kube-apiserver configmap and remove the insecure-bind-address if it exists:
"apiServerArguments":{
  ...
  "insecure-bind-address":[
    "127.0.0.1"
  ],
  ...
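To confirm the parameter is absent, the apiServerArguments block can be inspected; the following should return null:
$ oc -n openshift-kube-apiserver get configmap config -o json | jq -r '.data."config.yaml"' | jq '.apiServerArguments."insecure-bind-address"'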
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/configmaps/kas-config{{else}}/api/v1/namespaces/openshift-kube-apiserver/configmaps/config{{end}} API endpoint, filter it with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}.data."config.json" | fromjson | .apiServerArguments{{else}}.data."config.yaml" | fromjson | .apiServerArguments{{end}} and persist it to the local /kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config#95b5b27bb6ea2b122e810c99c17c2430c4845596942804847dd677557cfed88e file.
Rationale:
If the API Server is bound to an insecure address the installation would be susceptible to unauthenticated and unencrypted access to the master node(s). The API Server does not perform authentication checking for insecure binds and the traffic is generally not encrypted.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_api_server_insecure_bind_address
Identifiers:

CCE-83955-5

References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325
cis1.2.16
pcidss42.2.1, 2.2

Rule   Configure the kubelet Certificate Authority for the API Server   [ref]

To ensure OpenShift verifies kubelet certificates before establishing connections, follow the OpenShift documentation and set up the TLS connection between the API Server and kubelets. Edit the openshift-kube-apiserver configmap and set the below parameter if it is not already configured:
"apiServerArguments":{
  ...
  "kubelet-certificate-authority":"/etc/kubernetes/static-pod-resources/configmaps/kubelet-serving-ca/ca-bundle.crt",
  ...
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/configmaps/kas-config{{else}}/api/v1/namespaces/openshift-kube-apiserver/configmaps/config{{end}} API endpoint, filter it with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}[.data."config.json" | fromjson | select(.apiServerArguments["kubelet-certificate-authority"]) | .apiServerArguments["kubelet-certificate-authority"][] | select(test("/etc/kubernetes/certs/kubelet-ca/ca.crt"))]{{else}}[.data."config.yaml" | fromjson | select(.apiServerArguments["kubelet-certificate-authority"]) | .apiServerArguments["kubelet-certificate-authority"][] | select(test("{{.var_apiserver_kubelet_certificate_authority}}"))]{{end}} and persist it to the local /kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config#1118b118fc93b557cda9eb3f29584d2f92f5c3976f77dec35848eb54e0d819cc file.
Rationale:
Connections from the API Server to the kubelet are used for fetching logs for pods, attaching (through kubectl) to running pods, and using the kubelet port-forwarding functionality. These connections terminate at the kubelet HTTPS endpoint. By default, the API Server does not verify the kubelet serving certificate, which makes the connection subject to man-in-the-middle attacks, and unsafe to run over untrusted and/or public networks.
Severity: 
high
Rule ID:xccdf_org.ssgproject.content_rule_api_server_kubelet_certificate_authority
Identifiers:

CCE-84196-5

References:
nerc-cipCIP-003-8 R4.2, CIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R5.1, CIP-007-3 R6.1
nistCM-6, CM-6(1), SC-8, SC-8(1)
pcidssReq-2.2, Req-2.2.3, Req-2.3
app-srg-ctrSRG-APP-000516-CTR-001325
cis1.2.6
pcidss42.2.1, 2.2.5, 2.2.7, 2.2

Rule   Configure the kubelet Certificate File for the API Server   [ref]

To enable certificate based kubelet authentication, edit the config configmap in the openshift-kube-apiserver namespace and set the below parameter in the config.yaml key if it is not already configured:
"apiServerArguments":{
...
  "kubelet-client-certificate":"/etc/kubernetes/static-pod-resources/secrets/kubelet-client/tls.crt",
...
}
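To review the current value manually (a sketch assuming the default config configmap layout shown above), print the argument and confirm it points at the kubelet client certificate:
$ oc -n openshift-kube-apiserver get configmap config -o json | jq -r '.data."config.yaml" | fromjson | .apiServerArguments["kubelet-client-certificate"]'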
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/configmaps/kas-config{{else}}/api/v1/namespaces/openshift-kube-apiserver/configmaps/config{{end}} API endpoint, filter with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}[.data."config.json" | fromjson | select(.apiServerArguments["kubelet-client-certificate"]) | .apiServerArguments["kubelet-client-certificate"][] | select(test("/etc/kubernetes/certs/kubelet/tls.crt"))]{{else}}[.data."config.yaml" | fromjson | select(.apiServerArguments["kubelet-client-certificate"]) | .apiServerArguments["kubelet-client-certificate"][] | select(test("{{.var_apiserver_kubelet_client_cert}}"))]{{end}} and persist it to the local /kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config#e5500055b4aa2fcf00dc09ad0e66e44b6b42d67f8d53d1e72ff81b32f0e09865 file.
Rationale:
By default the API Server does not authenticate itself to the kubelet's HTTPS endpoints. Requests from the API Server are treated anonymously. Configuring certificate-based kubelet authentication ensures that the API Server authenticates itself to kubelets when submitting requests.
Severity: 
high
Rule ID:xccdf_org.ssgproject.content_rule_api_server_kubelet_client_cert
Identifiers:

CCE-84080-1

References:
nerc-cipCIP-003-8 R4.2, CIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R5.1, CIP-007-3 R6.1
nistCM-6, CM-6(1), SC-8, SC-8(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325
cis1.2.5
bsiAPP.4.4.A17
pcidss42.2.1, 2.2

Rule   Configure the kubelet Certificate File for the API Server   [ref]

To enable certificate based kubelet authentication, edit the config configmap in the openshift-kube-apiserver namespace and set the below parameter in the config.yaml key if it is not already configured:
"apiServerArguments":{
...
  "kubelet-client-certificate":"/etc/kubernetes/static-pod-resources/secrets/kubelet-client/tls.crt",
...
}
Note that this particular rule is only valid for OCP releases up to and
including 4.8
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the /api/v1/namespaces/openshift-kube-apiserver/configmaps/config API endpoint to the local /kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config file.
Rationale:
By default the API Server does not authenticate itself to the kubelet's HTTPS endpoints. Requests from the API Server are treated anonymously. Configuring certificate-based kubelet authentication ensures that the API Server authenticates itself to kubelets when submitting requests.
Severity: 
high
Rule ID:xccdf_org.ssgproject.content_rule_api_server_kubelet_client_cert_pre_4_9
Identifiers:

CCE-85890-2

References:
nistCM-6, CM-6(1), SC-8, SC-8(1)
cis1.2.5
pcidss42.2.1, 2.2

Rule   Configure the kubelet Certificate Key for the API Server   [ref]

To enable certificate based kubelet authentication, edit the config configmap in the openshift-kube-apiserver namespace and set the below parameter in the config.yaml key if it is not already configured:
"apiServerArguments":{
...
  "kubelet-client-key":"/etc/kubernetes/static-pod-resources/secrets/kubelet-client/tls.key",
...
}
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/configmaps/kas-config{{else}}/api/v1/namespaces/openshift-kube-apiserver/configmaps/config{{end}} API endpoint, filter with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}[.data."config.json" | fromjson | select(.apiServerArguments["kubelet-client-key"]) | .apiServerArguments["kubelet-client-key"][] | select(test("/etc/kubernetes/certs/kubelet/tls.key"))]{{else}}[.data."config.yaml" | fromjson | select(.apiServerArguments["kubelet-client-key"]) | .apiServerArguments["kubelet-client-key"][] | select(test("{{.var_apiserver_kubelet_client_key}}"))]{{end}} and persist it to the local /kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config#1e2b7c1158e0b9a602cb20d62c82b4660907bb57b63dac11c6c7c64211c49c69 file.
Rationale:
By default the API Server does not authenticate itself to the kubelet's HTTPS endpoints. Requests from the API Server are treated anonymously. Configuring certificate-based kubelet authentication ensures that the API Server authenticates itself to kubelets when submitting requests.
Severity: 
high
Rule ID:xccdf_org.ssgproject.content_rule_api_server_kubelet_client_key
Identifiers:

CCE-83591-8

References:
nerc-cipCIP-003-8 R4.2, CIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R5.1, CIP-007-3 R6.1
nistCM-6, CM-6(1), SC-8, SC-8(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325
cis1.2.5
bsiAPP.4.4.A17
pcidss42.2.1, 2.2

Rule   Configure the kubelet Certificate Key for the API Server   [ref]

To enable certificate based kubelet authentication, edit the config configmap in the openshift-kube-apiserver namespace and set the below parameter in the config.yaml key if it is not already configured:
"apiServerArguments":{
...
  "kubelet-client-key":"/etc/kubernetes/static-pod-resources/secrets/kubelet-client/tls.key",
...
}
Note that this particular rule is only valid for OCP releases up to and
including 4.8
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the /api/v1/namespaces/openshift-kube-apiserver/configmaps/config API endpoint to the local /kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config file.
Rationale:
By default the API Server does not authenticate itself to the kubelet's HTTPS endpoints. Requests from the API Server are treated anonymously. Configuring certificate-based kubelet authentication ensures that the API Server authenticates itself to kubelets when submitting requests.
Severity: 
high
Rule ID:xccdf_org.ssgproject.content_rule_api_server_kubelet_client_key_pre_4_9
Identifiers:

CCE-90794-9

References:
nistCM-6, CM-6(1), SC-8, SC-8(1)
cis1.2.5
pcidss42.2.1, 2.2

Rule   Ensure all admission control plugins are enabled   [ref]

To make sure that no admission control plugin other than PodSecurity is explicitly disabled, run the following command:
$ oc -n openshift-kube-apiserver get configmap config -o json | jq -r '[.data."config.yaml" | fromjson | .apiServerArguments | select(has("disable-admission-plugins")) | if ."disable-admission-plugins" != ["PodSecurity"] then ."disable-admission-plugins" else empty end]'
and make sure the output is empty.
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/configmaps/kas-config{{else}}/api/v1/namespaces/openshift-kube-apiserver/configmaps/config{{end}} API endpoint, filter with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}[.data."config.json" | fromjson | .apiServerArguments | select(has("disable-admission-plugins")) | if ."disable-admission-plugins" != ["PodSecurity"] then ."disable-admission-plugins" else empty end]{{else}}[.data."config.yaml" | fromjson | .apiServerArguments | select(has("disable-admission-plugins")) | if ."disable-admission-plugins" != ["PodSecurity"] then ."disable-admission-plugins" else empty end]{{end}} and persist it to the local /kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config#8c02c853df9307960712da853d79f916a091fe8bce6312720d7c17de03c2017b file.
Rationale:
Several hardening controls depend on certain API server admission plugins being enabled. Checking that no admission control plugins are disabled helps assert that all the critical admission control plugins are indeed enabled and providing the security benefits required.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_api_server_no_adm_ctrl_plugins_disabled
Identifiers:

CCE-83799-7

References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325
cis1.2.13, 1.2.14, 1.2.15, 1.2.16, 1.2.17

Rule   Ensure the openshift-oauth-apiserver service uses TLS   [ref]

By default, the OpenShift OAuth API Server uses TLS. HTTPS should be used for connections between openshift-oauth-apiserver and kube-apiserver. By default, the OpenShift OAuth API Server uses the Intermediate profile, which requires a minimum TLS version of 1.2.
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/config.openshift.io/v1/apiservers/cluster API endpoint to the local /kubernetes-api-resources/apis/config.openshift.io/v1/apiservers/cluster file.
Rationale:
Connections between the kube-apiserver and the extension openshift-oauth-apiserver could potentially carry sensitive data such as secrets and keys. It is important to use in-transit encryption for any communication between the kube-apiserver and the extension openshift-oauth-apiserver.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_api_server_oauth_https_serving_cert
References:
nerc-cipCIP-003-8 R4.2, CIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R5.1, CIP-007-3 R6.1
nistCM-6, CM-6(1), SC-8, SC-8(1)
pcidssReq-2.2, Req-2.2.3, Req-2.3
app-srg-ctrSRG-APP-000516-CTR-001325
cis1.2.4
pcidss42.2.1, 2.2.5, 2.2.7, 2.2

Rule   Ensure the openshift-apiserver service uses TLS   [ref]

By default, the OpenShift API Server uses TLS. HTTPS should be used for connections between openshift-apiserver and kube-apiserver. By default, the OpenShift API Server uses the Intermediate profile, which requires a minimum TLS version of 1.2.
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/config.openshift.io/v1/apiservers/cluster API endpoint to the local /kubernetes-api-resources/apis/config.openshift.io/v1/apiservers/cluster file.
Rationale:
Connections between the kube-apiserver and the extension openshift-apiserver could potentially carry sensitive data such as secrets and keys. It is important to use in-transit encryption for any communication between the kube-apiserver and the extension openshift-apiserver.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_api_server_openshift_https_serving_cert
References:
nerc-cipCIP-003-8 R4.2, CIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R5.1, CIP-007-3 R6.1
nistCM-6, CM-6(1), SC-8, SC-8(1)
pcidssReq-2.2, Req-2.2.3, Req-2.3
app-srg-ctrSRG-APP-000516-CTR-001325
cis1.2.4
pcidss42.2.1, 2.2.5, 2.2.7, 2.2

Rule   Profiling is protected by RBAC   [ref]

Ensure that the cluster-debugger cluster role includes the /metrics resource URL. This demonstrates that profiling is protected by RBAC, with a specific cluster role to allow access.
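A quick manual review (a sketch, not the automated check used by the compliance scanner) is to dump the cluster role and confirm the profiling-related URLs appear under nonResourceURLs:
$ oc get clusterrole cluster-debugger -o yaml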
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-debugger API endpoint to the local /kubernetes-api-resources/apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-debugger file.
Rationale:
Profiling allows for the identification of specific performance bottlenecks. It generates a significant amount of program data that could potentially be exploited to uncover system and program details. To ensure the collected data is not exploited, profiling endpoints are secured via RBAC (see the cluster-debugger role). By default, the profiling endpoints are accessible only by users bound to the cluster-admin or cluster-debugger role. Profiling cannot be disabled.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_api_server_profiling_protected_by_rbac
Identifiers:

CCE-84212-0

References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325
cis1.2.19
pcidss42.2.1, 2.2

Rule   Configure the API Server Minimum Request Timeout   [ref]

The API server minimum request timeout defines the minimum number of seconds a handler must keep a request open before timing it out. To set this, edit the openshift-kube-apiserver configmap and set min-request-timeout under the apiServerArguments field:
"apiServerArguments":{
  ...
  "min-request-timeout":[
    3600
  ],
  ...
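To inspect the current value manually (a sketch assuming the default config configmap layout shown above):
$ oc -n openshift-kube-apiserver get configmap config -o json | jq -r '.data."config.yaml" | fromjson | .apiServerArguments["min-request-timeout"]'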
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/configmaps/kas-config{{else}}/api/v1/namespaces/openshift-kube-apiserver/configmaps/config{{end}} API endpoint, filter with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}.data."config.json" | fromjson{{else}}.data."config.yaml" | fromjson{{end}} and persist it to the local /kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config#54842ba5cf821644f2727625c1518eba2de6e6b7ae318043d0bf7ccc9570e430 file.
Rationale:
Setting a global request timeout allows extending the API Server request timeout limit to a duration appropriate to the user's connection speed. By default, it is set to 1800 seconds, which might not be suitable for some environments. Setting the limit too low may result in excessive timeouts, and a limit that is too large may exhaust the API Server resources, making it prone to Denial-of-Service attacks. It is recommended to set this limit as appropriate and change the default limit of 1800 seconds only if needed.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_api_server_request_timeout
References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325
cis1.2.24
pcidss42.2.1, 2.2

Rule   Configure the Certificate for the API Server   [ref]

To ensure the API Server utilizes its own TLS certificates, the tls-cert-file must be configured. Verify that the apiServerArguments section has the tls-cert-file configured in the config configmap in the openshift-kube-apiserver namespace similar to:
"apiServerArguments":{
...
"tls-cert-file": [
  "/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.crt"
],
...
}
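A manual spot check (a sketch assuming the default config configmap layout shown above) that prints the configured certificate path:
$ oc -n openshift-kube-apiserver get configmap config -o json | jq -r '.data."config.yaml" | fromjson | .apiServerArguments["tls-cert-file"]'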
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/configmaps/kas-config{{else}}/api/v1/namespaces/openshift-kube-apiserver/configmaps/config{{end}} API endpoint, filter with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}[.data."config.json" | fromjson | select(.apiServerArguments["tls-cert-file"]) | .apiServerArguments["tls-cert-file"][] | select(test("/etc/kubernetes/certs/server/tls.crt"))]{{else}}[.data."config.yaml" | fromjson | select(.apiServerArguments["tls-cert-file"]) | .apiServerArguments["tls-cert-file"][] | select(test("{{.var_apiserver_tls_cert}}"))]{{end}} and persist it to the local /kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config#bca394347bab5b9902f1d1568d4f5d6e5498b01ec27ddf8231443e376b18757d file.
Rationale:
API Server communication contains sensitive parameters that should remain encrypted in transit. Configure the API Server to serve only HTTPS traffic.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_api_server_tls_cert
Identifiers:

CCE-83779-9

References:
nerc-cipCIP-003-8 R4.2, CIP-007-3 R5.1
nistSC-8, SC-8(1), SC-8(2)
pcidssReq-2.2, Req-2.2.3, Req-2.3
app-srg-ctrSRG-APP-000441-CTR-001090, SRG-APP-000442-CTR-001095
cis1.2.28
bsiAPP.4.4.A17
pcidss42.2.1, 2.2.5, 2.2.7, 2.2, 4.2.1, 4.2

Rule   Use Strong Cryptographic Ciphers on the API Server   [ref]

To ensure that the API Server is configured to only use strong cryptographic ciphers, verify the openshift-kube-apiserver configmap contains the following set of ciphers, with no additions:
"servingInfo":{
  ...
  "cipherSuites": [
    "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
    "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
    "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384",
    "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
    "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256",
    "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256",
  ],
  ...
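The configured cipher suites can be reviewed manually (a sketch assuming the default config configmap layout shown above); compare the output against the list above:
$ oc -n openshift-kube-apiserver get configmap config -o json | jq -r '.data."config.yaml" | fromjson | .servingInfo.cipherSuites'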
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/configmaps/kas-config{{else}}/api/v1/namespaces/openshift-kube-apiserver/configmaps/config{{end}} API endpoint, filter with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}.data."config.json" | fromjson{{else}}.data."config.yaml" | fromjson{{end}} and persist it to the local /kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config#54842ba5cf821644f2727625c1518eba2de6e6b7ae318043d0bf7ccc9570e430 file.
Warning:  Once configured, API Server clients that cannot support modern cryptographic ciphers will not be able to make connections to the API server.
Rationale:
TLS ciphers have had a number of known vulnerabilities and weaknesses, which can reduce the protection provided. By default, OpenShift supports a number of TLS ciphersuites including some that have security concerns, weakening the protection provided.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_api_server_tls_cipher_suites
References:
nistCM-6
pcidssReq-2.2, Req-2.2.3, Req-2.3
app-srg-ctrSRG-APP-000516-CTR-001325
cis1.2.32
bsiAPP.4.4.A17
pcidss42.2.1, 2.2.5, 2.2.7, 2.2, 4.2.1, 4.2

Rule   Configure the Certificate Key for the API Server   [ref]

To ensure the API Server utilizes its own TLS certificates, the tls-private-key-file must be configured. Verify that the apiServerArguments section has the tls-private-key-file configured in the config configmap in the openshift-kube-apiserver namespace similar to:
"apiServerArguments":{
...
"tls-private-key-file": [
  "/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.key"
],
...
}
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/configmaps/kas-config{{else}}/api/v1/namespaces/openshift-kube-apiserver/configmaps/config{{end}} API endpoint, filter with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}[.data."config.json" | fromjson | select(.apiServerArguments["tls-private-key-file"]) | .apiServerArguments["tls-private-key-file"][] | select(test("/etc/kubernetes/certs/server/tls.key"))]{{else}}[.data."config.yaml" | fromjson | select(.apiServerArguments["tls-private-key-file"]) | .apiServerArguments["tls-private-key-file"][] | select(test("{{.var_apiserver_tls_private_key}}"))]{{end}} and persist it to the local /kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config#8c69c1fe6742f70a3a16c09461f57a19ef2a695143301cede2f2f5d307aa3508 file.
Rationale:
API Server communication contains sensitive parameters that should remain encrypted in transit. Configure the API Server to serve only HTTPS traffic.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_api_server_tls_private_key
Identifiers:

CCE-84282-3

References:
nerc-cipCIP-003-8 R4.2, CIP-007-3 R5.1
nistSC-8, SC-8(1), SC-8(2)
pcidssReq-2.2, Req-2.2.3, Req-2.3
app-srg-ctrSRG-APP-000441-CTR-001090, SRG-APP-000442-CTR-001095
cis1.2.28
bsiAPP.4.4.A17
pcidss42.2.1, 2.2.5, 2.2.7, 2.2

Rule   Ensure APIServer is configured with secure tlsSecurityProfile   [ref]

The configuration tlsSecurityProfile specifies TLS configurations to be used while establishing connections with the externally exposed servers. Though secure transport mode is used for establishing connections, the protocols used may not always be strong enough to avoid interception and manipulation of the data in transport. TLS Security profile configured should not make use of any protocols, ciphers, and algorithms with known security vulnerabilities.

tlsSecurityProfile can be configured to use one of the custom, intermediate, modern, or old profiles. The old profile should be avoided at all times, and the custom profile should be used with extreme care, as invalid configurations can be catastrophic. It is always advised to use the highly secure intermediate or modern profiles; if unset, a secure default is chosen.

Update tlsSecurityProfile to Intermediate using the following command:

oc patch apiservers.config.openshift.io cluster --type 'json' --patch '[{"op": "add", "path": "/spec/tlsSecurityProfile/intermediate", "value": {}}, {"op": "replace", "path": "/spec/tlsSecurityProfile/type", "value": "Intermediate"}]'

For more information, consult the relevant OpenShift documentation.
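After patching, the active profile can be reviewed with a command such as the following (a sketch; the output should report the Intermediate or Modern type):
$ oc get apiservers.config.openshift.io cluster -o jsonpath='{.spec.tlsSecurityProfile}'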

Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/config.openshift.io/v1/apiservers/cluster API endpoint to the local /kubernetes-api-resources/apis/config.openshift.io/v1/apiservers/cluster file.
Rationale:
The authenticity and integrity of the container platform and communication between nodes and components must be secure. If an insecure protocol, cipher, or algorithm is used during transmission of data, the data can be intercepted and manipulated. To thwart manipulation of data in transit, secure protocols, ciphers, and algorithms must be used.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_api_server_tls_security_profile
Identifiers:

CCE-86232-6

References:
nistSC-8, SC-8(1)
app-srg-ctrSRG-APP-000014-CTR-000040, SRG-APP-000560-CTR-001340, CNTR-OS-000020
pcidss44.2.1, 4.2
stigrefSV-257506r921461_rule

Rule   Disable Token-based Authentication   [ref]

To ensure OpenShift does not accept token-based authentication, follow the OpenShift documentation and configure alternate mechanisms for authentication. Then, edit the openshift-kube-apiserver configmap and remove the token-auth-file parameter:
"apiServerArguments":{
  ...
  "token-auth-file":[
    "/path/to/any/file"
  ],
  ...
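A quick manual check (a sketch assuming the default config configmap layout shown above); the command should print null (or nothing) when token-based authentication is disabled:
$ oc -n openshift-kube-apiserver get configmap config -o json | jq -r '.data."config.yaml" | fromjson | .apiServerArguments["token-auth-file"]'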
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/configmaps/kas-config{{else}}/api/v1/namespaces/openshift-kube-apiserver/configmaps/config{{end}} API endpoint, filter with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}[.data."config.json" | fromjson]{{else}}[.data."config.yaml" | fromjson]{{end}} and persist it to the local /kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config#ffe65d9fac11909686e59349c6a0111aaf57caa26bd2db3e7dcb1a0a22899145 file.
Rationale:
The token-based authentication utilizes static tokens to authenticate requests to the API Server. The tokens are stored in clear-text in a file on the API Server, and cannot be revoked or rotated without restarting the API Server.
Severity: 
high
Rule ID:xccdf_org.ssgproject.content_rule_api_server_token_auth
Identifiers:

CCE-83481-2

References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325
cis1.2.3
pcidss42.2.1, 2.2

Rule   Ensure that Audit Log Forwarding Is Enabled   [ref]

OpenShift audit works at the API server level, logging all requests coming to the server. Audit is on by default and the best practice is to ship audit logs off the cluster for retention. The cluster-logging-operator is able to do this with the
ClusterLogForwarders
resource. The aforementioned resource can be configured to forward logs to different third party systems. For more information on this, please reference the official documentation: https://docs.openshift.com/container-platform/latest/observability/logging/logging-6.0/log6x-clf.html
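As an illustration only, a minimal forwarder for audit logs might look like the following sketch, written against the older logging.openshift.io/v1 API that the check below also inspects; the output name, type, and URL are placeholders and must be adapted to the receiving log system:
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: remote-audit
    type: syslog
    url: 'tls://rsyslogserver.example.com:514'
  pipelines:
  - name: audit-logs
    inputRefs:
    - audit
    outputRefs:
    - remote-audit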
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/observability.openshift.io/v1/namespaces/openshift-logging/clusterlogforwarders API endpoint to the local /kubernetes-api-resources/apis/observability.openshift.io/v1/namespaces/openshift-logging/clusterlogforwarders file and the /apis/logging.openshift.io/v1/namespaces/openshift-logging/clusterlogforwarders/instance API endpoint to the local /kubernetes-api-resources/apis/logging.openshift.io/v1/namespaces/openshift-logging/clusterlogforwarders/instance file.
Rationale:
Retaining logs ensures the ability to go back in time to investigate or correlate any events. Offloading audit logs from the cluster ensures that an attacker that has access to the cluster will not be able to tamper with the logs because of the logs being stored off-site.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_audit_log_forwarding_enabled
Identifiers:

CCE-84076-9

References:
nerc-cipCIP-003-8 R5.2, CIP-004-6 R2.2.2, CIP-004-6 R2.2.3, CIP-004-6 R3.3, CIP-007-3 R.1.3, CIP-007-3 R5, CIP-007-3 R5.1.1, CIP-007-3 R5.2, CIP-007-3 R5.3.1, CIP-007-3 R5.3.2, CIP-007-3 R5.3.3, CIP-007-3 R6.5
nistAC-2(12), AU-3(2), AU-5(1), AU-6, AU-6(1), AU-6(3), AU-9(2), SI-4(16), AU-4(1), AU-11, AU-7, AU-7(1), SI-4(20)
pcidssReq-2.2, Req-10.5.3, Req-10.5.4
app-srg-ctrSRG-APP-000092-CTR-000165, SRG-APP-000111-CTR-000220, SRG-APP-000358-CTR-000805, CNTR-OS-000170, CNTR-OS-000220
cis1.2.21
pcidss42.2.1, 2.2
stigrefSV-257519r921500_rule, SV-257524r921515_rule

Rule   Ensure that Audit Log Webhook Is Configured   [ref]

Audit is on by default and the best practice is to ship audit logs off the cluster for retention. HyperShift is able to do this with an audit webhook, which is configured in the HostedCluster custom resource. The aforementioned resource can be configured to log to different third party systems. For more information on this, please reference the official documentation: https://hypershift-docs.netlify.app/reference/api/
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • /apis/hypershift.openshift.io/v1beta1/namespaces/{{.hypershift_namespace_prefix}}/hostedclusters/{{.hypershift_cluster}} API endpoint, filter with the jq utility using the following filter .items[].spec and persist it to the local /kubernetes-api-resources/apis/observability.openshift.io/v1/namespaces/openshift-logging/clusterlogforwarders#f2f30ae784cdbafba1677a19b3ff10c0996ce5e98fcc7e07a58d37b03a771e89 file.
Rationale:
Retaining logs ensures the ability to go back in time to investigate or correlate any events. Offloading audit logs from the cluster ensures that an attacker that has access to the cluster will not be able to tamper with the logs because of the logs being stored off-site.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_audit_log_forwarding_webhook
Identifiers:

CCE-86103-9

References:
pcidssReq-2.2, Req-10.5.3, Req-10.5.4
cis1.2.21
pcidss42.2.1, 2.2
Group   Authentication   Group contains 5 rules
[ref]   In cloud workloads, there are many ways to create and configure multiple authentication services. Some of these authentication methods may not be secure or follow common methodologies, or they may not be secure by default. This section introduces mechanisms for configuring authentication systems in Kubernetes.

Rule   Configure An Identity Provider   [ref]

For users to interact with OpenShift Container Platform, they must first authenticate to the cluster. The authentication layer identifies the user associated with requests to the OpenShift Container Platform API. The authorization layer then uses information about the requesting user to determine if the request is allowed. Understanding authentication | Authentication | OpenShift Container Platform

The OpenShift Container Platform includes a built-in OAuth server for token-based authentication. Developers and administrators obtain OAuth access tokens to authenticate themselves to the API. It is recommended for an administrator to configure OAuth to specify an identity provider after the cluster is installed. User access to the cluster is managed through the identity provider. Understanding identity provider configuration | Authentication | OpenShift Container Platform

OpenShift includes built-in role based access control (RBAC) to determine whether a user is allowed to perform a given action within the cluster. Roles can have cluster scope or local (i.e. project) scope. Using RBAC to define and apply permissions | Authentication | OpenShift Container Platform

Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/apis/hypershift.openshift.io/v1beta1/namespaces/{{.hypershift_namespace_prefix}}/hostedclusters/{{.hypershift_cluster}}{{else}}/apis/config.openshift.io/v1/oauths/cluster{{end}} API endpoint, filter with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}.spec.configuration.oauth{{else}}.spec{{end}} and persist it to the local /kubernetes-api-resources/apis/config.openshift.io/v1/oauths/cluster#489c53adb0325a207f2120d4dee0ef775dad56dceaa74bafc10bf32c1da46e9e file.
Rationale:

With any authentication mechanism, the ability to revoke credentials if they are compromised or no longer required is a key control. Kubernetes client certificate authentication does not allow for this due to a lack of support for certificate revocation.

OpenShift's built-in OAuth server allows credential revocation by relying on the Identity provider, as well as giving the administrators the ability to revoke any tokens given to a specific user.

Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_idp_is_configured
Identifiers:

CCE-84088-4

References:
nerc-cipCIP-004-6 R2.2.2, CIP-004-6 R2.2.3, CIP-007-3 R.1.3, CIP-007-3 R5, CIP-007-3 R5.1, CIP-007-3 R5.1.1, CIP-007-3 R5.1.2, CIP-007-3 R5.1.3, CIP-007-3 R5.2, CIP-007-3 R5.2.1, CIP-007-3 R5.2.3, CIP-007-3 R5.3.1, CIP-007-3 R5.3.2, CIP-007-3 R5.3.3
nistAC-2, AC-2(1), AC-2(2), AC-2(3), AC-2(4), AC-2(5), AC-2(6), AC-2(7), AC-2(8), AC-7, AC-12(1), IA-2(8), IA-2(9), SC-12(1)
pcidssReq-2.2, Req-8.1.1
app-srg-ctrSRG-APP-000023-CTR-000055, CNTR-OS-000030, CNTR-OS-000040, CNTR-OS-000440
cis3.1.1
pcidss42.2.1, 2.2, 8.2.1, 8.2, 8.3
stigrefSV-257507r921464_rule, SV-257508r921467_rule, SV-257542r921569_rule

Rule   Configure OAuth tokens to expire after a set period of inactivity   [ref]

You can configure OAuth tokens to expire after a set period of inactivity. By default, no token inactivity timeout is set.

The inactivity timeout can be either set in the OAuth server configuration or in any of the OAuth clients. The client settings override the OAuth server setting.

To set the OAuth server inactivity timeout, edit the OAuth server object: oc edit oauth cluster and set the .spec.tokenConfig.accessTokenInactivityTimeout parameter to the desired value:

apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
...
spec:
   tokenConfig:
     accessTokenInactivityTimeout: 10m0s 
Please note that the OAuth server converts the value internally to a human-readable format,
so that e.g. setting accessTokenInactivityTimeout=600s would be converted by the OAuth
server to accessTokenInactivityTimeout=10m0s.
For more information on configuring the OAuth server, consult the OpenShift documentation: https://docs.openshift.com/container-platform/4.7/authentication/configuring-oauth-clients.html

To edit the OAuth client inactivity timeout, edit the OAuth client object: oc edit oauthclient $clientname and set the top-level accessTokenInactivityTimeoutSeconds attribute.

apiVersion: oauth.openshift.io/v1
grantMethod: auto
kind: OAuthClient
metadata:
...
accessTokenInactivityTimeoutSeconds: 600 
For more information on configuring the OAuth clients, consult the OpenShift documentation: https://access.redhat.com/documentation/en-us/openshift_container_platform/4.7/html-single/authentication_and_authorization/index#oauth-token-inactivity-timeout_configuring-internal-oauth
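Alternatively, the server-side timeout can be applied non-interactively; a sketch using oc patch (the 10m0s value is only an example):
$ oc patch oauth cluster --type merge -p '{"spec":{"tokenConfig":{"accessTokenInactivityTimeout":"10m0s"}}}'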

Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/oauth.openshift.io/v1/oauthclients API endpoint to the local /kubernetes-api-resources/apis/oauth.openshift.io/v1/oauthclients file and the /apis/config.openshift.io/v1/oauths/cluster API endpoint to the local /kubernetes-api-resources/apis/config.openshift.io/v1/oauths/cluster file.
Rationale:
Terminating an idle session within a short time period reduces the window of opportunity for unauthorized personnel to take control of a session that has been left unattended.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_oauth_or_oauthclient_inactivity_timeout
Identifiers:

CCE-83702-1

References:
nerc-cipCIP-004-6 R2.2.3, CIP-007-3 R5.1, CIP-007-3 R5.2, CIP-007-3 R5.3.1, CIP-007-3 R5.3.2, CIP-007-3 R5.3.3
nistAC-2(5), IA-5(13), SC-10
app-srg-ctrSRG-APP-000190-CTR-000500, CNTR-OS-000400, CNTR-OS-000490
pcidss48.2.8, 8.2
stigrefSV-257540r921563_rule, SV-257544r921575_rule

apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  tokenConfig:
    accessTokenInactivityTimeout: {{.var_oauth_inactivity_timeout}}

Rule   Configure OAuth tokens to expire after a set period of time   [ref]

You can configure OAuth tokens to have a custom duration. By default, the tokens are valid for 24 hours (86400 seconds).

The maximum age can be either set in the OAuth server configuration or in any of the OAuth clients. The client settings override the OAuth server setting.

To set the OAuth server token max age, edit the OAuth server object: oc edit oauth cluster and set the .spec.tokenConfig.accessTokenMaxAgeSeconds parameter to the desired value:

apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
...
spec:
   tokenConfig:
     accessTokenMaxAgeSeconds: 28800

To set the OAuth client token max age, edit the OAuth client object: oc edit oauthclient $clientname and set the top-level accessTokenMaxAgeSeconds attribute.

apiVersion: oauth.openshift.io/v1
grantMethod: auto
kind: OAuthClient
metadata:
...
accessTokenMaxAgeSeconds: 28800
For more information on configuring the OAuth server, consult the OpenShift documentation: https://docs.openshift.com/container-platform/4.7/authentication/configuring-internal-oauth.html
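Alternatively, the server-side maximum age can be applied non-interactively; a sketch using oc patch (the 28800 value is only an example):
$ oc patch oauth cluster --type merge -p '{"spec":{"tokenConfig":{"accessTokenMaxAgeSeconds":28800}}}'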

Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/oauth.openshift.io/v1/oauthclients API endpoint to the local /kubernetes-api-resources/apis/oauth.openshift.io/v1/oauthclients file and the /apis/config.openshift.io/v1/oauths/cluster API endpoint to the local /kubernetes-api-resources/apis/config.openshift.io/v1/oauths/cluster file.
Rationale:
Setting a token maximum age to a shorter time period reduces the window of opportunity for unauthorized personnel to take control of the session.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_oauth_or_oauthclient_token_maxage
Identifiers:

CCE-84162-7

References:
nistAC-12
app-srg-ctrSRG-APP-000400-CTR-000960, CNTR-OS-000760
stigrefSV-257562r921629_rule

apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  tokenConfig:
    accessTokenMaxAgeSeconds: {{.var_oauth_token_maxage}}

Rule   Do Not Use htpasswd-based IdP   [ref]

For users to interact with OpenShift Container Platform, they must first authenticate to the cluster. The authentication layer identifies the user associated with requests to the OpenShift Container Platform API. The authorization layer then uses information about the requesting user to determine if the request is allowed. Understanding authentication | Authentication | OpenShift Container Platform

The OpenShift Container Platform includes a built-in OAuth server for token-based authentication. Developers and administrators obtain OAuth access tokens to authenticate themselves to the API. It is recommended for an administrator to configure OAuth to specify an identity provider after the cluster is installed. User access to the cluster is managed through the identity provider. Understanding identity provider configuration | Authentication | OpenShift Container Platform

However, not all Identity Providers supported by OpenShift provide the same level of capabilities. As an example, the htpasswd Identity Provider only checks the username and password match and provides no means of 2FA, account lockout or notification mechanism. This rule therefore only allows a subset of identity providers.
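To review which identity provider types are currently configured (a sketch; the output should not include HTPasswd):
$ oc get oauth cluster -o jsonpath='{.spec.identityProviders[*].type}'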

Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/apis/hypershift.openshift.io/v1beta1/namespaces/{{.hypershift_namespace_prefix}}/hostedclusters/{{.hypershift_cluster}}{{else}}/apis/config.openshift.io/v1/oauths/cluster{{end}} API endpoint, filter with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}.spec.configuration.oauth{{else}}.spec{{end}} and persist it to the local /kubernetes-api-resources/apis/config.openshift.io/v1/oauths/cluster#489c53adb0325a207f2120d4dee0ef775dad56dceaa74bafc10bf32c1da46e9e file.
Rationale:

With any authentication mechanism, the ability to revoke credentials if they are compromised or no longer required is a key control. Kubernetes client certificate authentication does not allow for this due to a lack of support for certificate revocation.

OpenShift's built-in OAuth server allows credential revocation by relying on the Identity provider, as well as giving the administrators the ability to revoke any tokens given to a specific user.

In addition, using an external Identity provider allows for setting up notifications on account creation or deletion, multi-factor authentication, disabling inactive accounts or other features required by different compliance standards.

Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_ocp_idp_no_htpasswd
Identifiers:

CCE-84209-6

References:
nerc-cipCIP-003-8 R5.1.1, CIP-003-8 R5.3, CIP-004-6 R2.2.2, CIP-004-6 R2.2.3, CIP-004-6 R2.3, CIP-007-3 R.1.3, CIP-007-3 R5, CIP-007-3 R5.1, CIP-007-3 R5.1.1, CIP-007-3 R5.1.2, CIP-007-3 R5.1.3, CIP-007-3 R5.2, CIP-007-3 R5.2.1, CIP-007-3 R5.2.3, CIP-007-3 R5.3.1, CIP-007-3 R5.3.2, CIP-007-3 R5.3.3
nistAC-2(1), AC-2(2), AC-2(3), AC-2(4), AC-2(7), AC-2(8), AC-7, IA-2, IA-2(1), IA-2(2), IA-2(3), IA-2(5), IA-2(8), IA-2(9), IA-2(12), IA-5(4), SC-12(1)
app-srg-ctrSRG-APP-000023-CTR-000055, CNTR-OS-000030, CNTR-OS-000040, CNTR-OS-000440
pcidss48.3.4, 8.3
stigrefSV-257507r921464_rule, SV-257508r921467_rule, SV-257542r921569_rule

Rule   Only Use LDAP-based IdPs with TLS   [ref]

For users to interact with OpenShift Container Platform, they must first authenticate to the cluster. The authentication layer identifies the user associated with requests to the OpenShift Container Platform API. The authorization layer then uses information about the requesting user to determine if the request is allowed. Understanding authentication | Authentication | OpenShift Container Platform

The OpenShift Container Platform includes a built-in OAuth server for token-based authentication. Developers and administrators obtain OAuth access tokens to authenticate themselves to the API. It is recommended for an administrator to configure OAuth to specify an identity provider after the cluster is installed. User access to the cluster is managed through the identity provider. Understanding identity provider configuration | Authentication | OpenShift Container Platform

If the identity provider is LDAP, setting the insecure flag to true would mean that passwords, such as the one used to authenticate the OAuth proxy to the LDAP server, would be transmitted in the clear, potentially allowing an attacker to read the password if they captured the network traffic.
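A manual way to review this (a sketch using jq against the OAuth configuration described above); any true value indicates an insecure LDAP connection:
$ oc get oauth cluster -o json | jq '.spec.identityProviders[]? | select(.type=="LDAP") | .ldap.insecure'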

Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/apis/hypershift.openshift.io/v1beta1/namespaces/{{.hypershift_namespace_prefix}}/hostedclusters/{{.hypershift_cluster}}{{else}}/apis/config.openshift.io/v1/oauths/cluster{{end}} API endpoint, filter with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}.spec.configuration.oauth{{else}}.spec{{end}} and persist it to the local /kubernetes-api-resources/apis/config.openshift.io/v1/oauths/cluster#489c53adb0325a207f2120d4dee0ef775dad56dceaa74bafc10bf32c1da46e9e file.
Rationale:
Transmitting authentication tokens as clear-text may leak them to an attacker.
Severity: 
high
Rule ID:xccdf_org.ssgproject.content_rule_ocp_no_ldap_insecure
Identifiers:

CCE-83699-9

References:
nerc-cipCIP-003-8 R4.2, CIP-003-8 R5.1.1, CIP-003-8 R5.3, CIP-004-6 R2.2.3, CIP-004-6 R2.3, CIP-007-3 R5.1, CIP-007-3 R5.1.2, CIP-007-3 R5.2, CIP-007-3 R5.3.1, CIP-007-3 R5.3.2, CIP-007-3 R5.3.3
nistIA-2(8), IA-2(9), SC-8
pcidssReq-2.3
app-srg-ctrSRG-APP-000023-CTR-000055, CNTR-OS-000030, CNTR-OS-000040
pcidss42.2.7, 2.2, 8.3.2, 8.3
stigrefSV-257507r921464_rule, SV-257508r921467_rule
Group   OpenShift Controller Settings   Group contains 6 rules
[ref]   This section contains recommendations for the kube-controller-manager configuration

Rule   Ensure Controller insecure port argument is unset   [ref]

To ensure the Controller Manager service does not serve on an insecure port, set the port option to 0 in the openshift-kube-controller-manager configmap on the master node(s):
"extendedArguments": {
...
  "port": ["0"],
...
It is also acceptable for the insecure port argument to be omitted entirely:
"extendedArguments": {
...
...
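To review the current value manually (a sketch assuming the default config configmap layout); the output should list only "0", or be null when the argument is omitted:
$ oc -n openshift-kube-controller-manager get configmap config -o json | jq -r '.data."config.yaml" | fromjson | .extendedArguments["port"]'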
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/pods?labelSelector=app%3Dkube-controller-manager{{else}}/api/v1/namespaces/openshift-kube-controller-manager/configmaps/config{{end}} API endpoint, filter with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}[[.items[0].spec.containers[0].args[] | select(. | match("--port=[1-9]*[1-9]+") )] | length | if . == 0 then true else false end]{{else}}[.data."config.yaml" | fromjson | if .extendedArguments["port"]!=null then .extendedArguments["port"]==["0"] else true end]{{end}} and persist it to the local /kubernetes-api-resources/api/v1/namespaces/openshift-kube-controller-manager/configmaps/config#9f09cca56dc1e9f9605eb5a94aed74de554fd209513a9222e4fe9c0ed669aeee file.
Rationale:
The Controller Manager API service is used for health and metrics information and is available without authentication or encryption. As such, it should only be bound to a localhost interface to minimize the cluster's attack surface.
Severity: 
low
Rule ID:xccdf_org.ssgproject.content_rule_controller_insecure_port_disabled
Identifiers:

CCE-83578-5

References:
nerc-cipCIP-003-8 R4.2, CIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R5.1, CIP-007-3 R6.1
nistCM-6, CM-6(1), SC-8, SC-8(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325
cis1.3.5
pcidss42.2.1, 2.2

Rule   Ensure that the RotateKubeletServerCertificate argument is set   [ref]

To enforce kubelet server certificate rotation on the Controller Manager, set the RotateKubeletServerCertificate option to true in the openshift-kube-controller-manager configmap on the master node(s):
"extendedArguments": {
...
  "feature-gates": [
  ...
    "RotateKubeletServerCertificate=true",
  ...
...
This feature gate is enabled by default as of Kubernetes 1.12.
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/pods?labelSelector=app%3Dkube-controller-manager{{else}}/api/v1/namespaces/openshift-kube-controller-manager/configmaps/config{{end}} API endpoint, filter with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}.items[0].spec.containers[0].args{{else}}.data."config.yaml" | fromjson | .extendedArguments["feature-gates"]{{end}} and persist it to the local /kubernetes-api-resources/api/v1/namespaces/openshift-kube-controller-manager/configmaps/config#4cbbbf49b93400715e43dc698f6484799805c502ad3aeb8285de579753b54d31 file.
Warning:  In OpenShift 4, kubelet certificate rotation is enabled by default. OpenShift v4 automatically generates a new kube-apiserver-to-kubelet-signer CA certificate at 292 days, removes the old CA certificate after 365 days, and the kubelet-client and kubelet-server certificates are auto-rotated once every month. Hence, this rule is deprecated and not applicable. ref: https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/security_and_compliance/certificate-types-and-descriptions#purpose-5
Rationale:
Enabling kubelet certificate rotation causes the kubelet to both request a serving certificate after bootstrapping its client credentials and rotate the certificate as its existing credentials expire. This automated periodic rotation ensures that there are no downtimes due to expired certificates and thus addressing the availability in the C/I/A security triad.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_controller_rotate_kubelet_server_certs
Identifiers:

CCE-83730-2

References:
nerc-cipCIP-003-8 R4.2, CIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R5.1, CIP-007-3 R6.1
nistCM-6, CM-6(1), SC-8, SC-8(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325

Rule   Ensure Controller secure-port argument is set   [ref]

To ensure the Controller Manager service is bound to a secure loopback address using a secure port, set the secure-port option to 10257 in the openshift-kube-controller-manager configmap on the master node(s):
"extendedArguments": {
...
  "secure-port": ["10257"],
...
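To review the current value manually (a sketch assuming the default config configmap layout); the output should list 10257:
$ oc -n openshift-kube-controller-manager get configmap config -o json | jq -r '.data."config.yaml" | fromjson | .extendedArguments["secure-port"]'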
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/pods?labelSelector=app%3Dkube-controller-manager{{else}}/api/v1/namespaces/openshift-kube-controller-manager/configmaps/config{{end}} API endpoint, filter with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}[[.items[0].spec.containers[0].args[] | select(. | match("--secure-port=10257") )] | length | if . ==1 then true else false end]{{else}}[.data."config.yaml" | fromjson | if .extendedArguments["secure-port"][]=="10257" then true else false end]{{end}} and persist it to the local /kubernetes-api-resources/api/v1/namespaces/openshift-kube-controller-manager/configmaps/config#8241ce1009dc5dd166436d0311b60b96aa3a2f591ba43a26e2b9d0bfc9071414 file.
Rationale:
The Controller Manager API service is used for health and metrics information and is available without authentication or encryption. As such, it should only be bound to a localhost interface to minimize the cluster's attack surface.
Severity: 
low
Rule ID:xccdf_org.ssgproject.content_rule_controller_secure_port
Identifiers:

CCE-83861-5

References:
nerc-cipCIP-003-8 R4.2, CIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R5.1, CIP-007-3 R6.1
nistCM-6, CM-6(1), SC-8, SC-8(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325
cis1.3.5
pcidss42.2.1, 2.2
Group   OpenShift etcd Settings   Group contains 8 rules
[ref]   Contains rules that check correct OpenShift etcd settings.

Rule   Disable etcd Self-Signed Certificates   [ref]

To ensure the etcd service is not using self-signed certificates, run the following command:
$ oc get cm/etcd-pod -n openshift-etcd -o yaml
The etcd pod configuration contained in the configmap should not contain the --auto-tls=true flag.
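A manual spot check (a sketch) that greps the pod definition stored in the configmap; no output, or an explicit --auto-tls=false, indicates self-signed certificates are not in use:
$ oc get cm/etcd-pod -n openshift-etcd -o yaml | grep -- "--auto-tls"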
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/pods?labelSelector=app%3Detcd{{else}}/api/v1/namespaces/openshift-etcd/configmaps/etcd-pod{{end}} API endpoint, filter with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}[.items[0].spec.containers[0].command | join(" ")]{{else}}[.data."pod.yaml"]{{end}} and persist it to the local /kubernetes-api-resources/api/v1/namespaces/openshift-etcd/configmaps/etcd-pod#72b7530e9fb0f39686f598b00d791485841e98be902ba16431a5629726dd7027 file.
Rationale:
Without cryptographic integrity protections, information can be altered by unauthorized users without detection. Using self-signed certificates means that the certificates are never validated against a certificate authority, which could lead to compromised and invalidated data.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_etcd_auto_tls
Identifiers:

CCE-84199-9

References:
nerc-cipCIP-003-8 R4.2, CIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R5.1, CIP-007-3 R6.1
nistCM-6, CM-6(1), SC-8, SC-8(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325
cis2.3
pcidss42.2.1, 2.2

Rule   Ensure That The etcd Client Certificate Is Correctly Set   [ref]

To ensure the etcd service is serving TLS to clients, make sure the etcd-pod* ConfigMaps in the openshift-etcd namespace contain the following argument for the etcd binary in the etcd pod:
--cert-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-[a-z]+/etcd-serving-NODE_NAME.crt
. Note that the
[a-z]+
is being used since the directory might change between OpenShift versions.
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/pods?labelSelector=app%3Detcd{{else}}/api/v1/namespaces/openshift-etcd/configmaps/etcd-pod{{end}} API endpoint, filter with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}[.items[0].spec.containers[0].command | join(" ")]{{else}}[.data."pod.yaml"]{{end}} and persist it to the local /kubernetes-api-resources/api/v1/namespaces/openshift-etcd/configmaps/etcd-pod#72b7530e9fb0f39686f598b00d791485841e98be902ba16431a5629726dd7027 file.
Rationale:
Without cryptographic integrity protections, information can be altered by unauthorized users without detection.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_etcd_cert_file
Identifiers:

CCE-83553-8

References:
nerc-cipCIP-003-8 R4.2, CIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R5.1, CIP-007-3 R6.1
nistCM-6, CM-6(1), SC-8, SC-8(1)
pcidssReq-2.2, Req-2.2.3, Req-2.3
app-srg-ctrSRG-APP-000516-CTR-001325
cis2.1
pcidss42.2.1, 2.2.5, 2.2.7, 2.2

Rule   Enable The Client Certificate Authentication   [ref]

To ensure the etcd service requires client certificate authentication, check the etcd-pod ConfigMap in the openshift-etcd namespace with the following command:
oc get -nopenshift-etcd cm etcd-pod -oyaml | grep "\-\-client-cert-auth="
The parameter should be set to true.
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/pods?labelSelector=app%3Detcd{{else}}/api/v1/namespaces/openshift-etcd/configmaps/etcd-pod{{end}} API endpoint, filter with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}[.items[0].spec.containers[0].command | join(" ")]{{else}}[.data."pod.yaml"]{{end}} and persist it to the local /kubernetes-api-resources/api/v1/namespaces/openshift-etcd/configmaps/etcd-pod#72b7530e9fb0f39686f598b00d791485841e98be902ba16431a5629726dd7027 file.
Rationale:
Without cryptographic integrity protections, information can be altered by unauthorized users without detection.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_etcd_client_cert_auth
Identifiers:

CCE-84077-7

References:
nerc-cipCIP-003-8 R4.2, CIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R5.1, CIP-007-3 R6.1
nistCM-6, CM-6(1), SC-8, SC-8(1)
pcidssReq-2.2, Req-2.2.3, Req-2.3
app-srg-ctrSRG-APP-000516-CTR-001325
cis2.2
pcidss42.2.1, 2.2.5, 2.2.7, 2.2

Rule   Ensure That The etcd Key File Is Correctly Set   [ref]

To ensure the etcd service is serving TLS to clients, check the etcd-pod ConfigMap in the openshift-etcd namespace with the following command:
oc get -nopenshift-etcd cm etcd-pod -oyaml | grep "\-\-key-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-[a-z]+/etcd-serving-NODE_NAME.key"
Note that [a-z]+ is used since the directory might change between OpenShift versions.
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/pods?labelSelector=app%3Detcd{{else}}/api/v1/namespaces/openshift-etcd/configmaps/etcd-pod{{end}} API endpoint, filter with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}[.items[0].spec.containers[0].command | join(" ")]{{else}}[.data."pod.yaml"]{{end}} and persist it to the local /kubernetes-api-resources/api/v1/namespaces/openshift-etcd/configmaps/etcd-pod#72b7530e9fb0f39686f598b00d791485841e98be902ba16431a5629726dd7027 file.
Rationale:
Without cryptographic integrity protections, information can be altered by unauthorized users without detection.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_etcd_key_file
Identifiers:

CCE-83745-0

References:
nerc-cipCIP-003-8 R4.2, CIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R5.1, CIP-007-3 R6.1
nistCM-6, CM-6(1), SC-8, SC-8(1)
pcidssReq-2.2, Req-2.2.3, Req-2.3
app-srg-ctrSRG-APP-000516-CTR-001325
cis2.1
pcidss42.2.1, 2.2.5, 2.2.7, 2.2

Rule   Disable etcd Peer Self-Signed Certificates   [ref]

To ensure the etcd service is not using self-signed certificates, run the following command:
$ oc get cm/etcd-pod -n openshift-etcd -o yaml
The etcd pod configuration contained in the configmap should not contain the --peer-auto-tls=true flag.
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/pods?labelSelector=app%3Detcd{{else}}/api/v1/namespaces/openshift-etcd/configmaps/etcd-pod{{end}} API endpoint, filter with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}[.items[0].spec.containers[0].command | join(" ")]{{else}}[.data."pod.yaml"]{{end}} and persist it to the local /kubernetes-api-resources/api/v1/namespaces/openshift-etcd/configmaps/etcd-pod#72b7530e9fb0f39686f598b00d791485841e98be902ba16431a5629726dd7027 file.
Rationale:
Without cryptographic integrity protections, information can be altered by unauthorized users without detection. Using self-signed certificates ensures that the certificates are never validated against a certificate authority and could lead to compromised and invalidated data.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_etcd_peer_auto_tls
Identifiers:

CCE-84184-1

References:
nerc-cipCIP-003-8 R4.2, CIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R5.1, CIP-007-3 R6.1
nistCM-6, CM-6(1), SC-8, SC-8(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325
cis2.6
pcidss42.2.1, 2.2

Rule   Ensure That The etcd Peer Client Certificate Is Correctly Set   [ref]

To ensure the etcd service is serving TLS to peers, make sure the etcd-pod* ConfigMaps in the openshift-etcd namespace contain the following argument for the etcd binary in the etcd pod:
--peer-cert-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-[a-z]+/etcd-peer-NODE_NAME.crt
Note that the [a-z]+ is being used since the directory might change between OpenShift versions.
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/pods?labelSelector=app%3Detcd{{else}}/api/v1/namespaces/openshift-etcd/configmaps/etcd-pod{{end}} API endpoint, filter with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}[.items[0].spec.containers[0].command | join(" ")]{{else}}[.data."pod.yaml"]{{end}} and persist it to the local /kubernetes-api-resources/api/v1/namespaces/openshift-etcd/configmaps/etcd-pod#72b7530e9fb0f39686f598b00d791485841e98be902ba16431a5629726dd7027 file.
Rationale:
Without cryptographic integrity protections, information can be altered by unauthorized users without detection.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_etcd_peer_cert_file
Identifiers:

CCE-83847-4

References:
nerc-cipCIP-003-8 R4.2, CIP-007-3 R5.1
nistSC-8, SC-8(1), SC-8(2)
pcidssReq-2.2, Req-2.2.3, Req-2.3
app-srg-ctrSRG-APP-000441-CTR-001090, SRG-APP-000442-CTR-001095
cis2.4
pcidss42.2.1, 2.2.5, 2.2.7, 2.2

Rule   Enable The Peer Client Certificate Authentication   [ref]

To ensure the etcd service is serving TLS to clients, make sure the etcd-pod* ConfigMaps in the openshift-etcd namespace contain the following argument for the etcd binary in the etcd pod:
oc get -nopenshift-etcd cm etcd-pod -oyaml | grep "\-\-peer-client-cert-auth="
The parameter should be set to true.
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/pods?labelSelector=app%3Detcd{{else}}/api/v1/namespaces/openshift-etcd/configmaps/etcd-pod{{end}} API endpoint, filter with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}[.items[0].spec.containers[0].command | join(" ")]{{else}}[.data."pod.yaml"]{{end}} and persist it to the local /kubernetes-api-resources/api/v1/namespaces/openshift-etcd/configmaps/etcd-pod#72b7530e9fb0f39686f598b00d791485841e98be902ba16431a5629726dd7027 file.
Rationale:
Without cryptographic integrity protections, information can be altered by unauthorized users without detection.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_etcd_peer_client_cert_auth
Identifiers:

CCE-83465-5

References:
nerc-cipCIP-003-8 R4.2, CIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R5.1, CIP-007-3 R6.1
nistCM-6, CM-6(1), SC-8, SC-8(1)
pcidssReq-2.2, Req-2.2.3, Req-2.3
app-srg-ctrSRG-APP-000516-CTR-001325
cis2.5
pcidss42.2.1, 2.2.5, 2.2.7, 2.2

Rule   Ensure That The etcd Peer Key File Is Correctly Set   [ref]

To ensure the etcd service is serving TLS to peers, make sure the etcd-pod* ConfigMaps in the openshift-etcd namespace contain the following argument for the etcd binary in the etcd pod:
oc get -nopenshift-etcd cm etcd-pod -oyaml | grep "\-\-peer-key-file=/etc/kubernetes/static-pod-certs/secrets/etcd-all-[a-z]+/etcd-peer-NODE_NAME.key"
Note that the [a-z]+ is being used since the directory might change between OpenShift versions.
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/pods?labelSelector=app%3Detcd{{else}}/api/v1/namespaces/openshift-etcd/configmaps/etcd-pod{{end}} API endpoint, filter with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}[.items[0].spec.containers[0].command | join(" ")]{{else}}[.data."pod.yaml"]{{end}} and persist it to the local /kubernetes-api-resources/api/v1/namespaces/openshift-etcd/configmaps/etcd-pod#72b7530e9fb0f39686f598b00d791485841e98be902ba16431a5629726dd7027 file.
Rationale:
Without cryptographic integrity protections, information can be altered by unauthorized users without detection.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_etcd_peer_key_file
Identifiers:

CCE-83711-2

References:
nerc-cipCIP-003-8 R4.2, CIP-007-3 R5.1
nistSC-8, SC-8(1), SC-8(2)
pcidssReq-2.2, Req-2.2.3, Req-2.3
app-srg-ctrSRG-APP-000441-CTR-001090, SRG-APP-000442-CTR-001095
cis2.4
pcidss42.2.1, 2.2.5, 2.2.7, 2.2
Group   Kubernetes - General Security Practices   Group contains 18 rules
[ref]   Contains evaluations for general security practices for operating a Kubernetes environment.

Rule   Ensure the alert receiver is configured   [ref]

In OpenShift Container Platform, an alert is fired when the conditions defined in an alerting rule are true. An alert provides a notification that a set of circumstances are apparent within a cluster. Firing alerts can be viewed in the Alerting UI in the OpenShift Container Platform web console by default. After an installation, you can configure OpenShift Container Platform to send alert notifications to external systems so that designated personnel can be alerted in real time. OpenShift provides multiple alert receiver integrations to send real-time alerts to different services such as email, Slack, PagerDuty, webhooks, etc. [1][2] [1]https://docs.openshift.com/container-platform/latest/post_installation_configuration/configuring-alert-notifications.html#configuring-alert-receivers_configuring-alert-notifications [2]https://docs.openshift.com/container-platform/latest/monitoring/managing-alerts.html#applying-custom-alertmanager-configuration_managing-alerts
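As an illustrative sketch, receivers are defined in the alertmanager.yaml payload stored in the alertmanager-main secret of the openshift-monitoring project; the webhook URL below is a hypothetical placeholder for an external notification endpoint:
global:
  resolve_timeout: 5m
route:
  group_by: [namespace]
  receiver: default-receiver        # every alert is routed to this receiver
receivers:
- name: default-receiver
  webhook_configs:
  - url: "https://alerts.example.com/hook"   # hypothetical external receiver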
Rationale:
Alerts are a critical part of the OpenShift Container Platform. Proper configuration of receiver is required so that alerts can be sent to external systems in real-time to notify users of potential issues.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_alert_receiver_configured
Identifiers:

CCE-85911-6

References:
nistAU-5(2)
pcidss410.7.2, 10.7

Rule   Ensure the notification is enabled for Compliance Operator   [ref]

The OpenShift platform provides the Compliance Operator, which lets administrators monitor the compliance state of a cluster and gives them an overview of gaps and ways to remediate them. This control ensures that a notification alert is enabled for the Compliance Operator so that system administrators and security personnel are notified about alerts on compliance status.
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • /apis/monitoring.coreos.com/v1/prometheusrules API endpoint, filter with the jq utility using the following filter [.items[] | select(.metadata.name =="compliance") | .metadata.name] and persist it to the local /kubernetes-api-resources/apis/monitoring.coreos.com/v1/prometheusrules#072ed9f332a070eff46523f9b3fed7228157202473b723cca2cc376c9def8a2b file.
Rationale:
Compliance alerts enable OpenShift administrators to stay informed about the system's compliance status.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_compliance_notification_enabled
Identifiers:

CCE-86032-0

References:
nistSI-6, SI-4(24)

Rule   Ensure the notification is enabled for file integrity operator   [ref]

The OpenShift platform provides the File Integrity Operator to monitor for unwanted file changes. This control ensures that a notification alert is enabled so that system administrators and security personnel are notified about those alerts.
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • /apis/monitoring.coreos.com/v1/prometheusrules API endpoint, filter with the jq utility using the following filter [.items[] | select(.metadata.name =="file-integrity") | .metadata.name] and persist it to the local /kubernetes-api-resources/apis/monitoring.coreos.com/v1/prometheusrules#dda8d6e19f5a89264301ce56ece4df115a14d8a85e3ae6bd3cd8eccd234252c5 file.
Rationale:
The File Integrity Operator is able to send out alerts.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_file_integrity_notification_enabled
Identifiers:

CCE-90572-9

References:
nistSI-6, SI-7(2), SI-4(24)
pcidssReq-11.5.1, Req-12.10.5
bsiAPP.4.4.A17
pcidss411.5.2

Rule   Apply Security Context to Your Pods and Containers   [ref]

Apply Security Context to your Pods and Containers
Rationale:
A security context defines the operating system security settings (uid, gid, capabilities, SELinux role, etc.) applied to a container. When designing your containers and pods, make sure that you configure the security context for your pods, containers, and volumes. A security context is a property defined in the deployment YAML. It controls the security parameters that will be assigned to the pod/container/volume. There are two levels of security context: pod level security context, and container level security context.
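For illustration only, the following is a minimal sketch of a Pod manifest that sets both a pod-level and a container-level security context; the name and image are hypothetical placeholders:
---
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:                  # pod-level security context
    runAsNonRoot: true
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder image
    securityContext:                # container-level security context
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
      readOnlyRootFilesystem: true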
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_general_apply_scc
References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325
cis5.7.3
bsiSYS.1.6.A18, SYS.1.6.A21
pcidss42.2.1, 2.2

Rule   Manage Image Provenance Using ImagePolicyWebhook   [ref]

OpenShift administrators can control which images can be imported, tagged, and run in a cluster. There are two facilities for this purpose: (1) Allowed Registries, allowing administrators to restrict image origins to known external registries; and (2) ImagePolicy Admission plug-in which lets administrators specify specific images which are allowed to run on the OpenShift cluster. Configure an Image policy per the Image Policy chapter in the OpenShift documentation: https://docs.openshift.com/container-platform/4.4/openshift_images/image-configuration.html
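As an illustrative sketch (the registry list is an example only and must reflect your organization's approved registries), image origins can be restricted through the cluster-wide Image resource:
---
apiVersion: config.openshift.io/v1
kind: Image
metadata:
  name: cluster
spec:
  registrySources:
    allowedRegistries:              # only these registries may be used as image sources
    - quay.io
    - registry.redhat.io
    - image-registry.openshift-image-registry.svc:5000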
Rationale:
Image Policy ensures that only approved container images are allowed to run on the OpenShift platform.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_general_configure_imagepolicywebhook
References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325

Rule   The default namespace should not be used   [ref]

Kubernetes provides a default namespace, where objects are placed if no namespace is specified for them. Placing objects in this namespace makes application of RBAC and other controls more difficult.
Rationale:
Resources in a Kubernetes cluster should be segregated by namespace, to allow for security controls to be applied at that level and to make it easier to manage resources.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_general_default_namespace_use
References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325
cis5.7.4
pcidss42.2.1, 2.2

Rule   Ensure Seccomp Profile Pod Definitions   [ref]

Enable default seccomp profiles in your pod definitions.
Rationale:
Seccomp (secure computing mode) is used to restrict the set of system calls applications can make, allowing cluster administrators greater control over the security of workloads running in the cluster. Kubernetes disables seccomp profiles by default for historical reasons. You should enable it to ensure that the workloads have restricted actions available within the container.
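A minimal sketch of a pod definition that opts in to the runtime's default seccomp profile; the pod name and image are illustrative placeholders:
---
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-demo
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault          # apply the container runtime's default seccomp profile
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder image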
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_general_default_seccomp_profile
References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325
cis5.7.2
pcidss42.2.1, 2.2

Rule   Create administrative boundaries between resources using namespaces   [ref]

Use namespaces to isolate your Kubernetes objects.
Rationale:
Limiting the scope of user permissions can reduce the impact of mistakes or malicious activities. A Kubernetes namespace allows you to partition created resources into logically named groups. Resources created in one namespace can be hidden from other namespaces. By default, each resource created by a user in a Kubernetes cluster runs in a default namespace, called default. You can create additional namespaces and attach resources and users to them. You can use Kubernetes Authorization plugins to create policies that segregate access to namespace resources between different users.
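For example, a dedicated namespace can be created with a manifest as simple as the following (the name and label are hypothetical); workloads and RBAC bindings are then created within it:
---
apiVersion: v1
kind: Namespace
metadata:
  name: team-a                      # hypothetical application namespace
  labels:
    team: a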
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_general_namespaces_in_use
References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325
cis5.7.1
pcidss42.2.1, 2.2

Rule   Ensure that GitOps Operator is deployed   [ref]

Red Hat OpenShift GitOps is a declarative continuous delivery platform based on Argo CD. It enables teams to adopt GitOps principles for managing cluster configurations and automating secure and repeatable application delivery across hybrid multi-cluster Kubernetes environments. Following GitOps and infrastructure as code principles, you can store the configuration of clusters and applications in Git repositories and use Git workflows to roll them out to the target clusters.
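As a sketch, the operator is typically installed through an OLM Subscription similar to the following; the channel name is an assumption and may differ between OpenShift releases:
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-gitops-operator
  namespace: openshift-operators
spec:
  channel: latest                   # assumed channel; verify the channels available for your release
  name: openshift-gitops-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace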
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/pipelines.openshift.io/v1alpha1/gitopsservices?limit=5 API endpoint to the local /kubernetes-api-resources/apis/pipelines.openshift.io/v1alpha1/gitopsservices?limit=5 file.
Rationale:
GitOps provides a means to track system configuration changes.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_gitops_operator_exists
Identifiers:

CCE-86134-4

References:
nistCM-3(6), MA-2(2)
bsiSYS.1.6.A20

Rule   Ensure that the kubeadmin secret has been removed   [ref]

The kubeadmin user is meant to be a temporary user used for bootstrapping purposes. It is preferable to assign system administrators whose users are backed by an Identity Provider.
Make sure to remove the user as described in the documentation.
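A sketch of the removal step, assuming an identity provider is already configured and another user has been granted the cluster-admin role:
# only run this after an alternative cluster-admin user exists
$ oc delete secrets kubeadmin -n kube-system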
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the /api/v1/namespaces/kube-system/secrets/kubeadmin API endpoint to the local /kubernetes-api-resources/api/v1/namespaces/kube-system/secrets/kubeadmin file.
Rationale:
The kubeadmin user has an auto-generated password and a self-signed certificate, and has effectively cluster-admin permissions; therefore, it's considered a security liability.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_kubeadmin_removed
Identifiers:

CCE-90387-2

References:
nerc-cipCIP-004-6 R2.2.2, CIP-004-6 R2.2.3, CIP-007-3 R.1.3, CIP-007-3 R2, CIP-007-3 R5, CIP-007-3 R5.1.1, CIP-007-3 R5.1.3, CIP-007-3 R5.2.1, CIP-007-3 R5.2.3, CIP-007-3 R6.1, CIP-007-3 R6.2, CIP-007-3 R6.3, CIP-007-3 R6.4
nistAC-2(2), AC-2(7), AC-2(9), AC-2(10), AC-12(1), IA-2(5), MA-4, SC-12(1)
pcidssReq-2.1
app-srg-ctrSRG-APP-000023-CTR-000055, CNTR-OS-000030, CNTR-OS-000040, CNTR-OS-000440
cis3.1.1, 5.1.1
bsiAPP.4.4.A3
pcidss42.2.1, 2.2.2, 2.2, 8.2.2, 8.2, 8.3
stigrefSV-257507r921464_rule, SV-257508r921467_rule, SV-257542r921569_rule

Rule   Ensure that the OpenShift MOTD is set   [ref]

To configure OpenShift's MOTD, create a ConfigMap called motd in the openshift namespace. The object should look as follows:
---
apiVersion: v1
kind: ConfigMap
metadata:
 name: motd
 namespace: openshift
data:
  message: "A relevant MOTD"
Where message is a mandatory key. The DoD required text is either:

You are accessing a U.S. Government (USG) Information System (IS) that is provided for USG-authorized use only. By using this IS (which includes any device attached to this IS), you consent to the following conditions:
-The USG routinely intercepts and monitors communications on this IS for purposes including, but not limited to, penetration testing, COMSEC monitoring, network operations and defense, personnel misconduct (PM), law enforcement (LE), and counterintelligence (CI) investigations.
-At any time, the USG may inspect and seize data stored on this IS.
-Communications using, or data stored on, this IS are not private, are subject to routine monitoring, interception, and search, and may be disclosed or used for any USG-authorized purpose.
-This IS includes security measures (e.g., authentication and access controls) to protect USG interests -- not for your personal benefit or privacy.
-Notwithstanding the above, using this IS does not constitute consent to PM, LE or CI investigative searching or monitoring of the content of privileged communications, or work product, related to personal representation or services by attorneys, psychotherapists, or clergy, and their assistants. Such communications and work product are private and confidential. See User Agreement for details.


OR:

I've read & consent to terms in IS user agreement.
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the /api/v1/namespaces/openshift/configmaps/motd API endpoint to the local /kubernetes-api-resources/api/v1/namespaces/openshift/configmaps/motd file.
Rationale:
Display of a standardized and approved use notification before granting access to the system ensures privacy and security notification verbiage used is consistent with applicable federal laws, Executive Orders, directives, policies, regulations, standards, and guidance.

System use notifications are required only for access via login interfaces with human users and are not required when such human interfaces do not exist.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_openshift_motd_exists
Identifiers:

CCE-84200-5

References:
nistAC-8
app-srg-ctrSRG-APP-000068-CTR-000120, CNTR-OS-000130
stigrefSV-257516r921491_rule

Rule   Ensure that all daemonsets have resource limits   [ref]

When deploying an application, it is important to tune based on memory and CPU consumption, allocating enough resources for the application to function properly. Images provided by OpenShift Dedicated behave properly within the confines of the memory they are allocated. However, any application images must pay attention to the specific resources required to ensure they are available. If the node where a Pod is running has enough of a resource available, it's possible (and allowed) for a container to use more resource than its request for that resource specifies. However, a container is not allowed to use more than its resource limit.
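For illustration, a DaemonSet carrying requests and limits on each container could look like the sketch below (the same pattern applies to the Deployment and StatefulSet rules that follow); all names and values are hypothetical placeholders to be tuned per workload:
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-agent               # hypothetical name
  namespace: team-a                 # hypothetical namespace
spec:
  selector:
    matchLabels:
      app: example-agent
  template:
    metadata:
      labels:
        app: example-agent
    spec:
      containers:
      - name: agent
        image: registry.example.com/agent:latest   # placeholder image
        resources:
          requests:                 # resources reserved for scheduling
            cpu: 100m
            memory: 128Mi
          limits:                   # hard caps the container cannot exceed
            cpu: 200m
            memory: 256Mi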
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • /apis/apps/v1/daemonsets?limit=500 API endpoint, filter with the jq utility using the following filter [ .items[] | select(.metadata.namespace | startswith("kube-") or startswith("openshift-") | not) | select(.metadata.namespace != "rhacs-operator" and ({{if ne .var_daemonset_limit_namespaces_exempt_regex "None"}}.metadata.namespace | test("{{.var_daemonset_limit_namespaces_exempt_regex}}") | not{{else}}true{{end}})) | select( .spec.template.spec.containers[].resources.requests.cpu == null or .spec.template.spec.containers[].resources.requests.memory == null or .spec.template.spec.containers[].resources.limits.cpu == null or .spec.template.spec.containers[].resources.limits.memory == null ) | .metadata.name ] and persist it to the local /kubernetes-api-resources/apis/apps/v1/daemonsets?limit=500#a1c11605c6b91d6d3f26cad5f67056df41a5d43e06bf3588455f537ad5e90a55 file.
Rationale:
Resource requests/limits provide constraints that limit aggregate resource consumption per container. This helps prevent resource starvation. When deploying your application, it is important to tune based on memory and CPU consumption, allocating enough resources for the application to function properly.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_resource_requests_limits_in_daemonset
References:
nistSC-6

Rule   Ensure that all deployments have resource limits   [ref]

When deploying an application, it is important to tune based on memory and CPU consumption, allocating enough resources for the application to function properly. Images provided by OpenShift Dedicated behave properly within the confines of the memory they are allocated. However, any application images must pay attention to the specific resources required to ensure they are available. If the node where a Pod is running has enough of a resource available, it's possible (and allowed) for a container to use more resource than its request for that resource specifies. However, a container is not allowed to use more than its resource limit.
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • /apis/apps/v1/deployments?limit=500 API endpoint, filter with the jq utility using the following filter [ .items[] | select(.metadata.namespace | startswith("kube-") or startswith("openshift-") | not) | select(.metadata.namespace != "rhacs-operator" and ({{if ne .var_deployment_limit_namespaces_exempt_regex "None"}}.metadata.namespace | test("{{.var_deployment_limit_namespaces_exempt_regex}}") | not{{else}}true{{end}})) | select( .spec.template.spec.containers[].resources.requests.cpu == null or .spec.template.spec.containers[].resources.requests.memory == null or .spec.template.spec.containers[].resources.limits.cpu == null or .spec.template.spec.containers[].resources.limits.memory == null ) | .metadata.name ] and persist it to the local /kubernetes-api-resources/apis/apps/v1/deployments?limit=500#bc8e70ebc8291e061a2917bbee5d3d3a2e56d8e6c8c7cf1bcc891e0d2606a26f file.
Rationale:
Resource requests/limits provide constraints that limit aggregate resource consumption per container. This helps prevent resource starvation. When deploying your application, it is important to tune based on memory and CPU consumption, allocating enough resources for the application to function properly.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_resource_requests_limits_in_deployment
References:
nistSC-6

Rule   Ensure that all statefulsets have resource limits   [ref]

When deploying an application, it is important to tune based on memory and CPU consumption, allocating enough resources for the application to function properly. Images provided by OpenShift Dedicated behave properly within the confines of the memory they are allocated. However, any application images must pay attention to the specific resources required to ensure they are available. If the node where a Pod is running has enough of a resource available, it's possible (and allowed) for a container to use more resource than its request for that resource specifies. However, a container is not allowed to use more than its resource limit.
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • /apis/apps/v1/statefulsets?limit=500 API endpoint, filter with the jq utility using the following filter [ .items[] | select(.metadata.namespace | startswith("kube-") or startswith("openshift-") | not) | select(.metadata.namespace != "rhacs-operator" and ({{if ne .var_statefulset_limit_namespaces_exempt_regex "None"}}.metadata.namespace | test("{{.var_statefulset_limit_namespaces_exempt_regex}}") | not{{else}}true{{end}})) | select( .spec.template.spec.containers[].resources.requests.cpu == null or .spec.template.spec.containers[].resources.requests.memory == null or .spec.template.spec.containers[].resources.limits.cpu == null or .spec.template.spec.containers[].resources.limits.memory == null ) | .metadata.name ] and persist it to the local /kubernetes-api-resources/apis/apps/v1/statefulsets?limit=500#f44f38d4fe5c9464618dbed2411d4af523db0c3dc3823f05c263774ec150adf6 file.
Rationale:
Resource requests/limits provide constraints that limit aggregate resource consumption per container. This helps prevent resource starvation. When deploying your application, it is important to tune based on memory and CPU consumption, allocating enough resources for the application to function properly.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_resource_requests_limits_in_statefulset
References:
nistSC-6

Rule   Ensure workloads use resource requests and limits   [ref]

There are two ways to enable resource requests and limits. A multi-project quota, defined by a ClusterResourceQuota object, allows quotas to be shared across multiple projects. Resources used in each selected project are aggregated and that aggregate is used to limit resources across all the selected projects. A resource quota, defined by a ResourceQuota object, provides constraints that limit aggregate resource consumption per project. It can limit the quantity of objects that can be created in a project by type, as well as the total amount of compute resources and storage that might be consumed by resources in that project. Make sure that either a ClusterResourceQuota is used in a cluster or a ResourceQuota is used per namespace.

To configure ClusterResourceQuota, follow the directions in the documentation

To configure ResourceQuota Per Project, follow the directions in the documentation
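A minimal sketch of a per-project ResourceQuota; the namespace and the values are hypothetical and should be sized to the project's needs:
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: team-a                 # hypothetical project namespace
spec:
  hard:
    requests.cpu: "4"               # total CPU that may be requested in the project
    requests.memory: 8Gi
    limits.cpu: "8"                 # total CPU limit across the project
    limits.memory: 16Gi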

Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • /api/v1/resourcequotas API endpoint, filter with the jq utility using the following filter [.items[] | select((.metadata.namespace | startswith("openshift") | not) and (.metadata.namespace | startswith("kube-") | not) and .metadata.namespace != "default" and .metadata.namespace != "rhacs-operator" and ({{if ne .var_resource_requests_quota_per_project_exempt_regex "None"}}.metadata.namespace | test("{{.var_resource_requests_quota_per_project_exempt_regex}}") | not{{else}}true{{end}})) | .metadata.namespace] | unique and persist it to the local /kubernetes-api-resources/api/v1/resourcequotas#4326a181a1e3e8a8e02ffb58e7d3ca9e62ed0e144a5277b1f7551fdbcfeca0a8 file.
  • /api/v1/namespaces API endpoint, filter with the jq utility using the following filter [.items[] | select((.metadata.name | startswith("openshift") | not) and (.metadata.name | startswith("kube-") | not) and .metadata.name != "default" and .metadata.name != "rhacs-operator" and ({{if ne .var_resource_requests_quota_per_project_exempt_regex "None"}}.metadata.name | test("{{.var_resource_requests_quota_per_project_exempt_regex}}") | not{{else}}true{{end}}))] and persist it to the local /kubernetes-api-resources/api/v1/namespaces#3ae63defe5cbb61225edb84d8e19f601be933d063305c1ea1e0381297c6258d6 file.
  • /apis/quota.openshift.io/v1/clusterresourcequotas API endpoint, filter with the jq utility using the following filter [.items[] | .metadata.name] and persist it to the local /kubernetes-api-resources/apis/quota.openshift.io/v1/clusterresourcequotas#8de615d314dbafe1ae4ce3d7c1a604bd1bafcac867393e7256ecb869e6d752a8 file.
Rationale:
Resource quotas provide constraints that limit aggregate resource consumption per project. This helps prevent resource starvation. When deploying your application, it is important to tune based on memory and CPU consumption, allocating enough resources for the application to function properly.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_resource_requests_quota
Identifiers:

CCE-90588-5

References:
nistSC-5, SC-5(1), SC-5(2)

Rule   This is a helper rule to fetch the required api resource for detecting HyperShift OCP version   [ref]

no description
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • /apis/hypershift.openshift.io/v1beta1/namespaces/{{.hypershift_namespace_prefix}}/hostedclusters/{{.hypershift_cluster}} API endpoint, filter with the jq utility using the following filter [.status.version.history[].version] and persist it to the local /kubernetes-api-resources/hypershift/version file. This is a hidden rule.
Rationale:
no rationale
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_version_detect_in_hypershift

Rule   This is a helper rule to fetch the required api resource for detecting OCP version   [ref]

no description
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{.ocp_version_api_path}} API endpoint, filter with the jq utility using the following filter {{.ocp_version_yaml_path}} and persist it to the local /kubernetes-api-resources/ocp/version file. This is a hidden rule.
Rationale:
no rationale
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_version_detect_in_ocp
Group   Kubernetes Kubelet Settings   Group contains 5 rules
[ref]   The Kubernetes Kubelet is an agent that runs on each node in the cluster. It makes sure that containers are running in a pod. The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy. The kubelet doesn’t manage containers which were not created by Kubernetes.

Rule   Ensure That The kubelet Client Certificate Is Correctly Set   [ref]

To ensure the kubelet TLS client certificate is configured, edit the kubelet configuration file /etc/kubernetes/kubelet.conf and configure the kubelet certificate file.
tlsCertFile: /path/to/TLS/cert.key
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/configmaps/kas-config{{else}}/api/v1/namespaces/openshift-kube-apiserver/configmaps/config{{end}} API endpoint, filter with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}[.data."config.json" | fromjson | select(.apiServerArguments["kubelet-client-certificate"]) | .apiServerArguments["kubelet-client-certificate"][] | select(test("/etc/kubernetes/certs/kubelet/tls.crt"))]{{else}}[.data."config.yaml" | fromjson | select(.apiServerArguments["kubelet-client-certificate"]) | .apiServerArguments["kubelet-client-certificate"][] | select(test("{{.var_apiserver_kubelet_client_cert}}"))]{{end}} and persist it to the local /kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config#e5500055b4aa2fcf00dc09ad0e66e44b6b42d67f8d53d1e72ff81b32f0e09865 file.
Rationale:
Without cryptographic integrity protections, information can be altered by unauthorized users without detection.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_kubelet_configure_tls_cert
Identifiers:

CCE-83396-2

References:
nerc-cipCIP-003-8 R4.2, CIP-007-3 R5.1
nistSC-8, SC-8(1), SC-8(2)
pcidssReq-2.2, Req-2.2.3, Req-2.3
app-srg-ctrSRG-APP-000441-CTR-001090, SRG-APP-000442-CTR-001095
cis4.2.9
bsiAPP.4.4.A17
pcidss42.2.1, 2.2.5, 2.2.7, 2.2

Rule   Ensure That The kubelet Client Certificate Is Correctly Set   [ref]

To ensure the kubelet TLS client certificate is configured, edit the kubelet configuration file /etc/kubernetes/kubelet.conf and configure the kubelet certificate file.
tlsCertFile: /path/to/TLS/cert.key
Note that this particular rule is only valid for OCP releases up to and including 4.8.
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the /api/v1/namespaces/openshift-kube-apiserver/configmaps/config API endpoint to the local /kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config file.
Rationale:
Without cryptographic integrity protections, information can be altered by unauthorized users without detection.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_kubelet_configure_tls_cert_pre_4_9
Identifiers:

CCE-90615-6

References:
nistSC-8, SC-8(1), SC-8(2)
cis4.2.10

Rule   Ensure that the Ingress Controller only makes use of Strong Cryptographic Ciphers   [ref]

Ensure that the Ingress Controller is configured to only use strong cryptographic ciphers.
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default API endpoint to the local /kubernetes-api-resources/apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default file.
Rationale:
TLS ciphers have had a number of known vulnerabilities and weaknesses, which can reduce the protection provided by them. By default Kubernetes supports a number of TLS ciphersuites including some that have security concerns, weakening the protection provided.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_kubelet_configure_tls_cipher_suites_ingresscontroller
References:
cis4.2.12
pcidss42.2.1, 2.2

---
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  tlsSecurityProfile:
    custom:
      ciphers:
      - ECDHE-ECDSA-AES128-GCM-SHA256
      - ECDHE-RSA-AES128-GCM-SHA256
      - ECDHE-ECDSA-CHACHA20-POLY1305
      - ECDHE-RSA-AES256-GCM-SHA384
      - ECDHE-RSA-CHACHA20-POLY1305
      - ECDHE-ECDSA-AES256-GCM-SHA384
      - TLS_AES_128_GCM_SHA256
      - TLS_AES_256_GCM_SHA384
      - TLS_CHACHA20_POLY1305_SHA256
      minTLSVersion: VersionTLS12
    type: Custom

Rule   Ensure That The kubelet Server Key Is Correctly Set   [ref]

To ensure the kubelet TLS private server key certificate is configured, edit the kubelet configuration file /etc/kubernetes/kubelet.conf and configure the kubelet private key file.
tlsPrivateKeyFile: /path/to/TLS/private.key
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/configmaps/kas-config{{else}}/api/v1/namespaces/openshift-kube-apiserver/configmaps/config{{end}} API endpoint, filter with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}[.data."config.json" | fromjson | select(.apiServerArguments["kubelet-client-key"]) | .apiServerArguments["kubelet-client-key"][] | select(test("/etc/kubernetes/certs/kubelet/tls.key"))]{{else}}[.data."config.yaml" | fromjson | select(.apiServerArguments["kubelet-client-key"]) | .apiServerArguments["kubelet-client-key"][] | select(test("{{.var_apiserver_kubelet_client_key}}"))]{{end}} and persist it to the local /kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config#1e2b7c1158e0b9a602cb20d62c82b4660907bb57b63dac11c6c7c64211c49c69 file.
Rationale:
Without cryptographic integrity protections, information can be altered by unauthorized users without detection.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_kubelet_configure_tls_key
Identifiers:

CCE-90614-9

References:
nerc-cipCIP-003-8 R4.2, CIP-007-3 R5.1
nistSC-8, SC-8(1), SC-8(2)
pcidssReq-2.2, Req-2.2.3, Req-2.3
app-srg-ctrSRG-APP-000441-CTR-001090, SRG-APP-000442-CTR-001095
cis4.2.9
bsiAPP.4.4.A17
pcidss42.2.1, 2.2.5, 2.2.7, 2.2

Rule   kubelet - Disable the Read-Only Port   [ref]

To disable the read-only port, edit the openshift-kube-apiserver configmap and set the kubelet-read-only-port parameter to 0:
"apiServerArguments":{
  ...
  "kubelet-read-only-port":[
    "0"
  ],
  ...
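As a quick check (a sketch; the ConfigMap layout may differ between releases), the argument can be inspected directly:
$ oc get configmap config -n openshift-kube-apiserver -o yaml | grep kubelet-read-only-port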
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/configmaps/kas-config{{else}}/api/v1/namespaces/openshift-kube-apiserver/configmaps/config{{end}} API endpoint, filter with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}.data."config.json" | fromjson{{else}}.data."config.yaml" | fromjson{{end}} and persist it to the local /kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config#54842ba5cf821644f2727625c1518eba2de6e6b7ae318043d0bf7ccc9570e430 file.
Rationale:
OpenShift disables the read-only port (10255) on all nodes by setting the read-only port kubelet flag to 0. This ensures only authenticated connections are able to receive information about the OpenShift system.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_kubelet_disable_readonly_port
Identifiers:

CCE-83427-5

References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325
cis4.2.5
pcidss42.2.1, 2.2
Group   OpenShift - Logging Settings   Group contains 5 rules
[ref]   Contains evaluations for the cluster's logging configuration settings.

Rule   Ensure that Audit Log Errors Emit Alerts   [ref]

OpenShift audit works at the API server level, logging all requests coming to the server. However, if an API server instance is unable to write audit logs, an alert must be issued so that the organization can take relevant action, e.g. shutting down that instance.

Kubernetes by default has metrics that enable one to write such alerts:

  • apiserver_audit_event_total
  • apiserver_audit_error_total
An example of such an alert is shipped in OCP 4.9+:
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: audit-errors
  namespace: openshift-kube-apiserver
spec:
  groups:
  - name: apiserver-audit
    rules:
    - alert: AuditLogError
      annotations:
        summary: |-
          An API Server instance was unable to write audit logs. This could be
          triggered by the node running out of space, or a malicious actor
          tampering with the audit logs.
        description: An API Server had an error writing to an audit log.
      expr: |
        sum by (apiserver,instance)(rate(apiserver_audit_error_total{apiserver=~".+-apiserver"}[5m])) / sum by (apiserver,instance) (rate(apiserver_audit_event_total{apiserver=~".+-apiserver"}[5m])) > 0
      for: 1m
      labels:
        severity: warning

For more information, consult the official Kubernetes documentation.

Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • /apis/monitoring.coreos.com/v1/prometheusrules API endpoint, filter with the jq utility using the following filter [.items[].spec.groups[].rules[].expr] and persist it to the local /kubernetes-api-resources/apis/monitoring.coreos.com/v1/prometheusrules#5fd5244e3dcae63319f7e86b918cb8ea6ce1b4124670ccb43750d7a75ca03cb7 file.
Rationale:
When there are errors writing audit logs, security events will not be logged by that specific API Server instance. Security Incident Response teams use these audit logs, amongst other artifacts, to determine the impact of security breaches or events. Without these logs, it becomes very difficult to assess a situation and do appropriate root cause analysis in such incidents.
Severity: 
high
Rule ID:xccdf_org.ssgproject.content_rule_audit_error_alert_exists
Identifiers:

CCE-90744-4

References:
nerc-cipCIP-003-8 R5.1.1, CIP-003-8 R5.3, CIP-004-6 R2.3, CIP-007-3 R5.1, CIP-007-3 R5.1.1, CIP-007-3 R5.1.2
nistAU-5
app-srg-ctrSRG-APP-000109-CTR-000215, CNTR-OS-000200, CNTR-OS-000210
pcidss410.7.2, 10.7
stigrefSV-257522r921509_rule, SV-257523r921512_rule

---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: audit-errors
  namespace: openshift-kube-apiserver
spec:
  groups:
  - name: apiserver-audit
    rules:
    - alert: AuditLogError
      annotations:
        summary: |-
          An API Server instance was unable to write audit logs. This could be
          triggered by the node running out of space, or a malicious actor
          tampering with the audit logs.
        description: An API Server had an error writing to an audit log.
      expr: |
        sum by (apiserver,instance)(rate(apiserver_audit_error_total{apiserver=~".+-apiserver"}[5m])) / sum by (apiserver,instance) (rate(apiserver_audit_event_total{apiserver=~".+-apiserver"}[5m])) > 0
      for: 1m
      labels:
        severity: warning

Rule   Ensure that Audit Log Forwarding Uses TLS   [ref]

OpenShift audit works at the API server level, logging all requests coming to the server. Audit is on by default and the best practice is to ship audit logs off the cluster for retention using a secure protocol.

The cluster-logging-operator is able to do this with the ClusterLogForwarders resource. The aforementioned resource can be configured to forward logs to different third-party systems. For more information on this, please reference the official documentation: https://docs.openshift.com/container-platform/latest/observability/logging/logging-6.0/log6x-clf.html
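A minimal sketch of a forwarder that ships audit logs over TLS; the API version shown is the older logging.openshift.io/v1 form and the syslog endpoint is a hypothetical example, so adjust both to your release and log receiver:
---
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: remote-syslog
    type: syslog
    url: tls://syslog.example.com:6514    # TLS-protected endpoint (hypothetical)
  pipelines:
  - name: forward-audit
    inputRefs:
    - audit
    outputRefs:
    - remote-syslog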

Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • /apis/logging.openshift.io/v1/namespaces/openshift-logging/clusterlogforwarders/instance API endpoint, filter with the jq utility using the following filter try [.spec.outputs[].url] catch [] and persist it to the local /kubernetes-api-resources/apis/logging.openshift.io/v1/namespaces/openshift-logging/clusterlogforwarders/instance#71786452ba18c51ba8ad51472a078619e2e8b52a86cd75087af5aab42400f6c0 file.
  • /apis/observability.openshift.io/v1/namespaces/openshift-logging/clusterlogforwarders API endpoint, filter with the jq utility using the following filter try [.items[].spec.outputs[][]|objects|.url] catch [] and persist it to the local /kubernetes-api-resources/apis/observability.openshift.io/v1/namespaces/openshift-logging/clusterlogforwarders#276ca288b4b843cf4e126b67f8b26ab42dabe7c97d51fdb82eb7085a9e5522b1 file.
Rationale:
It is necessary to ensure that any configured output uses the TLS protocol to receive logs in order to ensure the confidentiality, integrity and authenticity of the logs.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_audit_log_forwarding_uses_tls
Identifiers:

CCE-90688-3

References:
nerc-cipCIP-003-8 R5.2, CIP-004-6 R3.3, CIP-007-3 R6.5
nistAU-9, AU-9(2), AU-9(3), AU-10
app-srg-ctrSRG-APP-000126-CTR-000275, CNTR-OS-000340, CNTR-OS-000510
stigrefSV-257536r921551_rule, SV-257546r921581_rule

Rule   Ensure that API server audit logging is enabled   [ref]

OpenShift has the ability to audit API server requests. Audit provides a security-relevant chronological set of records documenting the sequence of activities that have affected the system by individual users, administrators, or other components of the system. Audit works at the API server level, logging all requests coming to the server. Verify that audit logging is enabled by checking that the API server audit log configuration is not set to `None`, which explicitly disables the functionality. For more information on how to configure the audit profile, please visit the documentation
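One way to verify the current setting (a sketch using the APIServer cluster resource) is:
$ oc get apiserver cluster -o jsonpath='{.spec.audit.profile}'
Any value other than None (for example Default or WriteRequestBodies) indicates that audit logging is enabled.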
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/config.openshift.io/v1/apiservers/cluster API endpoint to the local /kubernetes-api-resources/apis/config.openshift.io/v1/apiservers/cluster file.
Rationale:
Logging is an important detective control for all systems, to detect potential unauthorised access.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_audit_logging_enabled
Identifiers:

CCE-90619-8

References:
nerc-cipCIP-003-8 R4, CIP-003-8 R4.1, CIP-003-8 R4.2, CIP-003-8 R5.2, CIP-003-8 R6, CIP-004-6 R2.2.2, CIP-004-6 R2.2.3, CIP-004-6 R3.3, CIP-007-3 R.1.3, CIP-007-3 R5, CIP-007-3 R5.1.1, CIP-007-3 R5.2, CIP-007-3 R5.3.1, CIP-007-3 R5.3.2, CIP-007-3 R5.3.3, CIP-007-3 R6.5
nistAU-2, AU-3, AU-3(1), AU-6, AU-6(1), AU-7, AU-7(1), AU-8, AU-8(1), AU-9, AU-12, AU-12(1), AU-12(3), CM-5(1), SI-11, SI-12, SI-4(20), SI-4(23)
pcidssReq-2.2, Req-12.5.5
app-srg-ctrSRG-APP-000089-CTR-000150, SRG-APP-000090-CTR-000155, SRG-APP-000101-CTR-000205
cis3.2.1
pcidss42.2.1, 2.2, 10.2.1.3, 10.2.1, 10.2

Rule   Ensure that the cluster's audit profile is properly set   [ref]

OpenShift can audit the details of requests made to the API server through the standard Kubernetes audit capabilities.

In OpenShift, auditing of the API Server is on by default. Audit provides a security-relevant chronological set of records documenting the sequence of activities that have affected the system by individual users, administrators, or other components of the system. Audit works at the API server level, logging all requests coming to the server. Each audit log contains two entries:

The request line containing:

  • A Unique ID allowing to match the response line (see #2)
  • The source IP of the request
  • The HTTP method being invoked
  • The original user invoking the operation
  • The impersonated user for the operation (self meaning himself)
  • The impersonated group for the operation (lookup meaning user's group)
  • The namespace of the request or none
  • The URI as requested

The response line containing:

  • The aforementioned unique ID
  • The response code

For more information on how to configure the audit profile, please visit the documentation

Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/config.openshift.io/v1/apiservers/cluster API endpoint to the local /kubernetes-api-resources/apis/config.openshift.io/v1/apiservers/cluster file.
Rationale:
Logging is an important detective control for all systems, to detect potential unauthorised access.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_audit_profile_set
Identifiers:

CCE-83577-7

References:
nerc-cipCIP-003-8 R4, CIP-003-8 R4.1, CIP-003-8 R4.2, CIP-003-8 R5.2, CIP-003-8 R6, CIP-004-6 R2.2.2, CIP-004-6 R2.2.3, CIP-004-6 R3.3, CIP-007-3 R.1.3, CIP-007-3 R5, CIP-007-3 R5.1.1, CIP-007-3 R5.2, CIP-007-3 R5.3.1, CIP-007-3 R5.3.2, CIP-007-3 R5.3.3, CIP-007-3 R6.5
nistAU-2, AU-3, AU-3(1), AU-6, AU-6(1), AU-7, AU-7(1), AU-8, AU-8(1), AU-9, AU-12, AU-12(1), AU-12(3), CM-5(1), SI-11, SI-12, SI-4(20), SI-4(23)
pcidssReq-2.2, Req-12.5.5
app-srg-ctrSRG-APP-000089-CTR-000150, SRG-APP-000090-CTR-000155, SRG-APP-000101-CTR-000205, CNTR-OS-000150
cis3.2.2
pcidss42.2.1, 2.2, 10.2.2, 10.2
stigrefSV-257517r921494_rule

---
apiVersion: config.openshift.io/v1
kind: APIServer
metadata:
  name: cluster
spec:
  audit:
    profile: {{.var_openshift_audit_profile}}

Rule   Ensure that OpenShift Logging Operator is scanning the cluster   [ref]

OpenShift Logging Operator provides the ability to aggregate all the logs from the OpenShift Container Platform cluster, such as node system audit logs, application container logs, and infrastructure logs. OpenShift Logging aggregates these logs from throughout the OpenShift cluster and stores them in a default log store. [1] [1]https://docs.openshift.com/container-platform/latest/observability/logging/logging-6.0/log6x-about.html
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/logging.openshift.io/v1/namespaces/openshift-logging/clusterloggings/instance API endpoint to the local /kubernetes-api-resources/apis/logging.openshift.io/v1/namespaces/openshift-logging/clusterloggings/instance file and the /apis/observability.openshift.io/v1/namespaces/openshift-logging/clusterlogforwarders API endpoint to the local /kubernetes-api-resources/apis/observability.openshift.io/v1/namespaces/openshift-logging/clusterlogforwarders file.
Rationale:
OpenShift Logging Operator is able to collect, aggregate, and manage logs.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_cluster_logging_operator_exist
Identifiers:

CCE-85918-1

References:
nistAU-3(2)
app-srg-ctrSRG-APP-000092-CTR-000165, SRG-APP-000111-CTR-000220, SRG-APP-000358-CTR-000805, CNTR-OS-000170, CNTR-OS-000220
stigrefSV-257519r921500_rule, SV-257524r921515_rule
Group   Kubernetes - Network Configuration and Firewalls   Group contains 10 rules
[ref]   Most systems must be connected to a network of some sort, and this brings with it the substantial risk of network attack. This section discusses the security impact of decisions about networking which must be made when configuring a system.

This section also discusses firewalls, network access controls, and other network security frameworks, which allow system-level rules to be written that can limit an attacker's ability to connect to your system. These rules can specify that network traffic should be allowed or denied from certain IP addresses, hosts, and networks. The rules can also specify which of the system's network services are available to particular hosts or networks.

Rule   Ensure that cluster-wide proxy is set   [ref]

Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available.

The Proxy object is used to manage the cluster-wide egress proxy. Setting this ensures that containers get the appropriate environment variables set so that traffic goes to the proxy per organizational requirements.

For more information, see the relevant documentation.
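A minimal sketch of a cluster-wide Proxy object; the proxy hostnames and the noProxy list are hypothetical and must match your environment:
---
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
spec:
  httpProxy: http://proxy.example.com:3128     # hypothetical egress proxy
  httpsProxy: http://proxy.example.com:3128
  noProxy: .cluster.local,.svc,.example.com    # destinations that bypass the proxy
  trustedCA:
    name: user-ca-bundle                       # ConfigMap holding the proxy's CA, if needed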

Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/config.openshift.io/v1/proxies/cluster API endpoint to the local /kubernetes-api-resources/apis/config.openshift.io/v1/proxies/cluster file.
Rationale:
External networks tend to be outside of organizational control. By ensuring that egress traffic goes through an authorized proxy, one is able to ensure that expected and safe traffic is coming out, and malicious actors aren't leaking sensitive information, or calling back from a central command center to get further instructions upon intrusion.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_cluster_wide_proxy_set
Identifiers:

CCE-90765-9

References:
nerc-cipCIP-004-6 R2.2.4, CIP-004-6 R3, CIP-007-3 R5.1, CIP-007-3 R6.1
nistSC-7(8)

Rule   Ensure that the CNI in use supports Network Policies   [ref]

There are a variety of CNI plugins available for Kubernetes. If the CNI in use does not support Network Policies it may not be possible to effectively restrict traffic in the cluster. OpenShift supports Kubernetes NetworkPolicy using a Kubernetes Container Network Interface (CNI) plug-in.
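For example, the default network type can be checked as follows (a sketch; on current OpenShift releases the expected value is OVNKubernetes, which supports NetworkPolicy):
$ oc get networks.operator.openshift.io cluster -o jsonpath='{.spec.defaultNetwork.type}'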
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • /apis/operator.openshift.io/v1/networks/cluster API endpoint, filter with the jq utility using the following filter [.spec.defaultNetwork.type] and persist it to the local /kubernetes-api-resources/apis/operator.openshift.io/v1/networks/cluster#35e33d6dc1252a03495b35bd1751cac70041a511fa4d282c300a8b83b83e3498 file.
Rationale:
Kubernetes network policies are enforced by the CNI plugin in use. As such it is important to ensure that the CNI plugin supports both Ingress and Egress network policies.
Severity: 
high
Rule ID:xccdf_org.ssgproject.content_rule_configure_network_policies
References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-1.1.4, Req-1.2, Req-2.2
app-srg-ctrSRG-APP-000038-CTR-000105, CNTR-OS-000100
cis5.3.1
bsiAPP.4.4.A7, SYS.1.6.A5, SYS.1.6.A21
pcidss41.4.1, 1.4, 2.2.1, 2.2
stigrefSV-257514r921485_rule

Rule   Ensure that HyperShift Hosted Namespaces have Network Policies defined.   [ref]

Use network policies to isolate traffic in your cluster network.
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • /apis/networking.k8s.io/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/networkpolicies API endpoint, filter with the jq utility using the following filter [.items[] | .metadata.name] and persist it to the local /kubernetes-api-resources/apis/networking.k8s.io/v1/namespaces/networkpolicies#08ccf6ea6e29d378349cc36918df58d5e6172cd458ede2bf03fe4266ee1b6d6a file.
Rationale:
Running different applications on the same Kubernetes cluster creates a risk of one compromised application attacking a neighboring application. Network segmentation is important to ensure that containers can communicate only with those they are supposed to. When a network policy is introduced to a given namespace, all traffic not allowed by the policy is denied. However, if there are no network policies in a namespace all traffic will be allowed into and out of the pods in that namespace.
Severity: 
high
Rule ID:xccdf_org.ssgproject.content_rule_configure_network_policies_hypershift_hosted
Identifiers:

CCE-86104-7

References:
pcidssReq-1.1.4, Req-1.2, Req-1.2.1, Req-1.3.1, Req-1.3.2, Req-2.2
cis5.3.2
pcidss41.2.6, 1.2, 1.3.1, 1.3, 1.4.1, 1.4, 2.2.1, 2.2

Rule   Ensure that application Namespaces have Network Policies defined.   [ref]

Use network policies to isolate traffic in your cluster network.
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • /apis/networking.k8s.io/v1/networkpolicies API endpoint, filter with the jq utility using the following filter [.items[] | select((.metadata.namespace | startswith("openshift") | not) and (.metadata.namespace | startswith("kube-") | not) and .metadata.namespace != "default" and ({{if ne .var_network_policies_namespaces_exempt_regex "None"}}.metadata.namespace | test("{{.var_network_policies_namespaces_exempt_regex}}") | not{{else}}true{{end}})) | .metadata.namespace] | unique and persist it to the local /kubernetes-api-resources/apis/networking.k8s.io/v1/networkpolicies#7400bb301fff2f7fc7b1b0fb7448b8e3f15222a8d23f992204315b19eeefa72f file.
  • /api/v1/namespaces API endpoint, filter with the jq utility using the following filter [.items[] | select((.metadata.name | startswith("openshift") | not) and (.metadata.name | startswith("kube-") | not) and .metadata.name != "default" and ({{if ne .var_network_policies_namespaces_exempt_regex "None"}}.metadata.name | test("{{.var_network_policies_namespaces_exempt_regex}}") | not{{else}}true{{end}}))] and persist it to the local /kubernetes-api-resources/api/v1/namespaces#f673748db2dd4e4f0ad55d10ce5e86714c06da02b67ddb392582f71ef81efab2 file.
Rationale:
Running different applications on the same Kubernetes cluster creates a risk of one compromised application attacking a neighboring application. Network segmentation is important to ensure that containers can communicate only with those they are supposed to. When a network policy is introduced to a given namespace, all traffic not allowed by the policy is denied. However, if there are no network policies in a namespace all traffic will be allowed into and out of the pods in that namespace.
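As a starting point, a deny-all policy can be created in each application namespace and then relaxed with more specific allow rules; the namespace name here is a hypothetical placeholder:
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: team-a                 # hypothetical application namespace
spec:
  podSelector: {}                   # an empty selector matches every pod in the namespace
  policyTypes:
  - Ingress
  - Egress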
Severity: 
high
Rule ID:xccdf_org.ssgproject.content_rule_configure_network_policies_namespaces
References:
nerc-cipCIP-003-8 R4, CIP-003-8 R4.2, CIP-003-8 R5, CIP-003-8 R6, CIP-004-6 R2.2.4, CIP-004-6 R3, CIP-007-3 R2, CIP-007-3 R2.1, CIP-007-3 R2.2, CIP-007-3 R2.3, CIP-007-3 R5.1, CIP-007-3 R6.1
nistAC-4, AC-4(21), CA-3(5), CM-6, CM-6(1), CM-7, CM-7(1), SC-7, SC-7(3), SC-7(5), SC-7(8), SC-7(12), SC-7(13), SC-7(18), SC-7(10), SI-4(22)
pcidssReq-1.1.4, Req-1.2, Req-1.2.1, Req-1.3.1, Req-1.3.2, Req-2.2
app-srg-ctrSRG-APP-000038-CTR-000105, CNTR-OS-000100
cis5.3.2
bsiAPP.4.4.A7, SYS.1.6.A5, SYS.1.6.A21
pcidss41.2.6, 1.2, 1.3.1, 1.3, 1.4.1, 1.4, 2.2.1, 2.2
stigrefSV-257514r921485_rule

Rule   Ensure that the default Ingress CA (wildcard issuer) has been replaced   [ref]

Check that the default Ingress CA has been replaced.
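A sketch of the documented remediation, with placeholder file path and configmap name, adds the replacement CA bundle to a configmap in openshift-config and references it from the cluster proxy configuration:
oc create configmap custom-ca --from-file=ca-bundle.crt=</path/to/example-ca.crt> -n openshift-config
oc patch proxy/cluster --type=merge --patch='{"spec":{"trustedCA":{"name":"custom-ca"}}}'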
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/config.openshift.io/v1/proxies/cluster API endpoint to the local /kubernetes-api-resources/apis/config.openshift.io/v1/proxies/cluster file.
Rationale:
OpenShift auto-generates several PKIs to serve TLS on different endpoints of the system. It is possible and necessary to configure a custom PKI which allows external clients to trust the endpoints. The Ingress Operator is the component responsible for enabling external access to OpenShift Container Platform cluster services. The aforementioned operator creates an internal CA and issues a wildcard certificate that is valid for applications under the .apps sub-domain. Both the web console and CLI use this certificate as well. The certificate and key would need to be replaced since a certificate coming from a trusted provider is needed. https://docs.openshift.com/container-platform/latest/security/certificates/replacing-default-ingress-certificate.html
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_default_ingress_ca_replaced
References:
nerc-cipCIP-007-3 R5.1
nistSC-17

Rule   Ensure that the default Ingress certificate has been replaced   [ref]

Check that the default Ingress certificate has been replaced.
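A sketch of the replacement, with placeholder file paths and secret name, creates a TLS secret in openshift-ingress and points the default IngressController at it:
oc create secret tls custom-ingress-cert --cert=</path/to/fullchain.crt> --key=</path/to/key.pem> -n openshift-ingress
oc patch -n openshift-ingress-operator ingresscontrollers/default --type=merge --patch='{"spec":{"defaultCertificate":{"name":"custom-ingress-cert"}}}'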
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default API endpoint to the local /kubernetes-api-resources/apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default file.
Rationale:
OpenShift auto-generates several PKIs to serve TLS on different endpoints of the system. It is possible and necessary to configure a custom PKI which allows external clients to trust the endpoints. The Ingress Operator is the component responsible for enabling external access to OpenShift Container Platform cluster services. The aforementioned operator creates an internal CA and issues a wildcard certificate that is valid for applications under the .apps sub-domain. Both the web console and CLI use this certificate as well. The certificate and key would need to be replaced since a certificate coming from a trusted provider is needed. https://docs.openshift.com/container-platform/latest/security/certificates/replacing-default-ingress-certificate.html
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_ingress_controller_certificate
References:
nistSC-12
pcidss44.2.1, 4.2

Rule   Ensure IngressController is configured to use secure tlsSecurityProfile   [ref]

The configuration tlsSecurityProfile specifies the TLS configuration to be used while establishing connections with the externally exposed servers. Although a secure transport is used to establish connections, the protocols used may not always be strong enough to avoid interception and manipulation of data in transit. The configured TLS security profile should not make use of any protocols, ciphers, or algorithms with known security vulnerabilities.

tlsSecurityProfile can be configured to use one of the custom, intermediate, modern, or old profiles. The old profile should be avoided at all times, and extreme care should be taken with the custom profile, as invalid configurations can be catastrophic. It is always advised to use the highly secure intermediate or modern profiles; if unset, the profile configured in the apiservers.config.openshift.io/cluster resource is used as the default.

To update tlsSecurityProfile to Intermediate use the following command:

oc patch -n openshift-ingress-operator ingresscontrollers.operator.openshift.io default --type 'json' --patch '[{"op": "add", "path": "/spec/tlsSecurityProfile/intermediate", "value": {}}, {"op": "replace", "path": "/spec/tlsSecurityProfile/type", "value": "Intermediate"}]'

For more information, follow the relevant OpenShift documentation.
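To verify the resulting configuration, the profile can be read back from the default IngressController, for example:
oc get ingresscontroller default -n openshift-ingress-operator -o jsonpath='{.spec.tlsSecurityProfile}'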

Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default API endpoint to the local /kubernetes-api-resources/apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default file.
Rationale:
The authenticity and integrity of the container platform and the communication between nodes and components must be secured. If an insecure protocol, cipher, or algorithm is used during data transmission, the data can be intercepted and manipulated. To thwart manipulation of data in transit, secure protocols, ciphers, and algorithms must be used.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_ingress_controller_tls_security_profile
Identifiers:

CCE-86234-2

References:
nistSC-8, SC-8(1)
app-srg-ctrSRG-APP-000014-CTR-000040, SRG-APP-000560-CTR-001340, CNTR-OS-000020
pcidss44.2.1, 4.2
stigrefSV-257506r921461_rule

Rule   Ensure that all Routes have the IP whitelist annotation   [ref]

OpenShift has an option to set the IP whitelist for Routes [1] when creating new Routes. All routes outside the openshift namespaces and the kube namespaces should use the IP whitelist annotations. Requests from IP addresses that are not in the whitelist are dropped. [1] https://docs.openshift.com/container-platform/latest/networking/routes/route-configuration.html#nw-route-specific-annotations_route-configuration
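For example, the annotation can be added to an existing Route as follows; the route name, namespace, and address list are placeholders to be adapted:
oc annotate route <route-name> -n <namespace> haproxy.router.openshift.io/ip_whitelist="192.168.1.0/24 10.0.0.5"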
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • /apis/route.openshift.io/v1/routes?limit=500 API endpoint, filter with the jq utility using the following filter [.items[] | select(.metadata.namespace | startswith("kube-") or startswith("openshift-") | not) | select(.metadata.annotations["haproxy.router.openshift.io/ip_whitelist"] | not) | .metadata.name] and persist it to the local /kubernetes-api-resources/apis/route.openshift.io/v1/routes?limit=500#aec152a4446d7917fcbebee892a2ec3fbdef3b71cc0784c9457b2e54fd64dd3b file.
Rationale:
The usage of IP whitelist for Routes provides basic protection against unwanted access.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_route_ip_whitelist
Identifiers:

CCE-90596-8

References:
nistSC-7(5)

Rule   Ensure that all OpenShift Routes prefer TLS   [ref]

OpenShift Container Platform provides methods for communicating from outside the cluster with services running in the cluster. TLS must be used to protect these communications. OpenShift Routes provide the ability to configure the needed TLS settings. With these, one is able to require that any request coming from the outside uses TLS. To verify this, ensure that every Route in the system has a policy of None or Redirect to ensure a secure endpoint is used. The aforementioned policy is set in a Route's .spec.tls.insecureEdgeTerminationPolicy setting.
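For an existing edge-terminated Route, the policy can be set with a patch along these lines (route name and namespace are placeholders):
oc patch route <route-name> -n <namespace> --type=merge -p '{"spec":{"tls":{"insecureEdgeTerminationPolicy":"Redirect"}}}'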
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • /apis/route.openshift.io/v1/routes API endpoint, filter with the jq utility using the following filter [.items[] | select(.spec.tls.insecureEdgeTerminationPolicy != null) | select(.spec.tls.insecureEdgeTerminationPolicy | test("^(^$|None|Redirect)$"; "") | not) | .metadata.name] and persist it to the local /kubernetes-api-resources/apis/route.openshift.io/v1/routes#7e8388627b1179db3e5e6aa75ac4f55c09c2a68f1f3e8888e0e96bb139a21b61 file.
Rationale:
Using clear-text in communications coming to or from outside the cluster's network may leak sensitive information.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_routes_protected_by_tls
Identifiers:

CCE-84225-2

References:
nerc-cipCIP-003-8 R4, CIP-003-8 R4.2, CIP-003-8 R5, CIP-004-6 R3, CIP-007-3 R5.1, CIP-007-3 R7.1
nistAC-4, AC-4(21), AC-17(3), SC-8, SC-8(1), SC-8(2), SI-4, SI-4(22)
pcidssReq-6.5.4
app-srg-ctrSRG-APP-000441-CTR-001090, SRG-APP-000442-CTR-001095
pcidss44.2.1, 4.2

Rule   Ensure that all Routes have rate limiting enabled   [ref]

OpenShift has an option to set the rate limit for Routes [1] when creating new Routes. All routes outside the openshift namespaces and the kube namespaces should use the rate-limiting annotations. [1] https://docs.openshift.com/container-platform/4.9/networking/routes/route-configuration.html#nw-route-specific-annotations_route-configuration
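For example, rate limiting can be enabled on an existing Route with annotations such as the following; the route name, namespace, and the HTTP rate of 100 are placeholders to be tuned for the application:
oc annotate route <route-name> -n <namespace> haproxy.router.openshift.io/rate-limit-connections=true haproxy.router.openshift.io/rate-limit-connections.rate-http=100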
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • /apis/route.openshift.io/v1/routes?limit=500 API endpoint, filter with the jq utility using the following filter [.items[] | select(.metadata.namespace | startswith("kube-") or startswith("openshift-") | not) | select(.metadata.annotations["haproxy.router.openshift.io/rate-limit-connections"] == "true" | not) | .metadata.name] and persist it to the local /kubernetes-api-resources/apis/route.openshift.io/v1/routes?limit=500#842fa6716f17342d62e70f2755db709b9d7a161cf0338ea8bfae9b06dab5e6cc file.
Rationale:
The usage of rate limit for Routes provides basic protection against distributed denial-of-service (DDoS) attacks.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_routes_rate_limit
Identifiers:

CCE-90779-0

References:
nistSC-5, SC-5(1), SC-5(2)
app-srg-ctrSRG-APP-000246-CTR-000605, SRG-APP-000435-CTR-001070, CNTR-OS-000620, CNTR-OS-000630, CNTR-OS-000800, CNTR-OS-000810
stigrefSV-257554r921605_rule, SV-257555r921608_rule, SV-257565r921638_rule, SV-257566r921641_rule
Group   OpenShift API Server   Group contains 3 rules
[ref]   This section contains recommendations for openshift-apiserver configuration.

Rule   Configure the OpenShift API Server Maximum Retained Audit Logs   [ref]

To configure how many rotations of audit logs are retained, edit the openshift-apiserver configmap and set the audit-log-maxbackup parameter to 10 or to an organizationally appropriate value:
"apiServerArguments":{
  ...
  "audit-log-maxbackup": [10],
  ...
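On a standalone (non-HyperShift) cluster, the current value can be spot-checked read-only with a command along these lines:
oc get configmap config -n openshift-apiserver -o jsonpath='{.data.config\.yaml}' | jq '.apiServerArguments["audit-log-maxbackup"]'
The same pattern applies to the related audit-log-maxsize and audit-log-path arguments covered by the following rules.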
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/configmaps/openshift-apiserver{{else}}/api/v1/namespaces/openshift-apiserver/configmaps/config{{end}} API endpoint, filter with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}.data."config.yaml"{{else}}.data."config.yaml" | fromjson{{end}} and persist it to the local /kubernetes-api-resources/api/v1/namespaces/openshift-apiserver/configmaps/config#45ae2c88fe28d39a42f19e165a1612353224e9663eb369000e03c7efcd10ef59 file.
Rationale:
OpenShift automatically rotates the log files. Retaining old log files ensures OpenShift Operators will have sufficient log data available for carrying out any investigation or correlation. For example, if the audit log size is set to 100 MB and the number of retained log files is set to 10, OpenShift Operators would have approximately 1 GB of log data to use during analysis.
Severity: 
low
Rule ID:xccdf_org.ssgproject.content_rule_ocp_api_server_audit_log_maxbackup
Identifiers:

CCE-83977-9

References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325
cis1.2.22
pcidss42.2.1, 2.2

Rule   Configure OpenShift API Server Maximum Audit Log Size   [ref]

To rotate audit logs upon reaching a maximum size, edit the openshift-apiserver configmap and set the audit-log-maxsize parameter to an appropriate size in MB. For example, to set it to 100 MB:
"apiServerArguments":{
  ...
  "audit-log-maxsize": ["100"],
  ...
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • {{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/configmaps/openshift-apiserver{{else}}/api/v1/namespaces/openshift-apiserver/configmaps/config{{end}} API endpoint, filter with the jq utility using the following filter {{if ne .hypershift_cluster "None"}}.data."config.yaml"{{else}}.data."config.yaml" | fromjson{{end}} and persist it to the local /kubernetes-api-resources/api/v1/namespaces/openshift-apiserver/configmaps/config#45ae2c88fe28d39a42f19e165a1612353224e9663eb369000e03c7efcd10ef59 file.
Rationale:
OpenShift automatically rotates log files. Retaining old log files ensures that OpenShift Operators have sufficient log data available for carrying out any investigation or correlation. If you have set a file size of 100 MB and the number of old log files to keep to 10, there would be approximately 1 GB of log data available for use in analysis.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_ocp_api_server_audit_log_maxsize
Identifiers:

CCE-83687-4

References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325
cis1.2.23
pcidss42.2.1, 2.2

Rule   Configure the Audit Log Path   [ref]

To enable auditing on the OpenShift API Server, the audit log path must be set. Edit the openshift-apiserver configmap and set the audit-log-path to a suitable path and file where audit logs should be written. For example:
"apiServerArguments":{
  ...
  "audit-log-path":"/var/log/openshift-apiserver/audit.log",
  ...
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the /api/v1/namespaces/openshift-apiserver/configmaps/config API endpoint to the local /kubernetes-api-resources/api/v1/namespaces/openshift-apiserver/configmaps/config file.
Rationale:
Auditing of the API Server is not enabled by default. Auditing the API Server provides a security-relevant chronological set of records documenting the sequence of activities that have affected the system by users, administrators, or other system components.
Severity: 
high
Rule ID:xccdf_org.ssgproject.content_rule_openshift_api_server_audit_log_path
Identifiers:

CCE-83547-0

References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325
cis1.2.20
pcidss42.2.1, 2.2
Group   Role-based Access Control   Group contains 6 rules
[ref]   Role-based access control (RBAC) objects determine whether a user is allowed to perform a given action within a project. Cluster administrators can use the cluster roles and bindings to control who has various access levels to OpenShift Container Platform itself and all projects. Developers can use local roles and bindings to control who has access to their projects. Note that authorization is a separate step from authentication, which is more about determining the identity of who is taking the action.

Rule   Profiling is protected by RBAC   [ref]

Ensure that the cluster-debugger cluster role includes the /debug/pprof resource URL. This demonstrates that profiling is protected by RBAC, with a specific cluster role to allow access.
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-debugger API endpoint to the local /kubernetes-api-resources/apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-debugger file.
Rationale:
Profiling allows for the identification of specific performance bottlenecks. It generates a significant amount of program data that could potentially be exploited to uncover system and program details. If you are not experiencing any bottlenecks and do not need the profiler for troubleshooting purposes, it is recommended to turn it off to reduce the potential attack surface. To ensure the collected data is not exploited, profiling endpoints are secured via RBAC (see the cluster-debugger role). By default, the profiling endpoints are accessible only by users bound to the cluster-admin or cluster-debugger role. Profiling cannot be disabled.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_rbac_debug_role_protects_pprof
Identifiers:

CCE-84182-5

References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325
cis1.3.1
pcidss42.2.1, 2.2

Rule   Ensure that the RBAC setup follows the principle of least privilege   [ref]

Role-based access control (RBAC) objects determine whether a user is allowed to perform a given action within a project. If users or groups are bound to roles they must not have, modify the user or group permissions using the following cluster and local role binding commands:
Remove a user from a cluster RBAC role: oc adm policy remove-cluster-role-from-user role username
Remove a group from a cluster RBAC role: oc adm policy remove-cluster-role-from-group role groupname
Remove a user from a local RBAC role: oc adm policy remove-role-from-user role username
Remove a group from a local RBAC role: oc adm policy remove-role-from-group role groupname
NOTE: For additional information, see https://docs.openshift.com/container-platform/latest/authentication/using-rbac.html
Rationale:
Controlling and limiting user access to system services and resources is key to securing the platform and limiting the intentional or unintentional compromise of the system and its services. OpenShift provides a robust RBAC policy system that allows authorization policies to be as detailed as needed. Additionally, there are two layers of RBAC policies: cluster RBAC policies, with which administrators control who has what access to cluster-level services, and local RBAC policies, which allow project developers and administrators to control the level of access users have to a given project or namespace.
Severity: 
high
Rule ID:xccdf_org.ssgproject.content_rule_rbac_least_privilege
Identifiers:

CCE-90678-4

References:
nistAC-3, CM-5(6), IA-2, IA-2(5), AC-6(10), CM-11(2), CM-5(1), CM-7(5)(b)
app-srg-ctrSRG-APP-000033-CTR-000090, SRG-APP-000033-CTR-000095, SRG-APP-000033-CTR-000100, SRG-APP-000133-CTR-000290, SRG-APP-000133-CTR-000295, SRG-APP-000133-CTR-000300, SRG-APP-000133-CTR-000305, SRG-APP-000133-CTR-000310, SRG-APP-000148-CTR-000350, SRG-APP-000153-CTR-000375, SRG-APP-000340-CTR-000770, SRG-APP-000378-CTR-000880, SRG-APP-000378-CTR-000885, SRG-APP-000378-CTR-000890, SRG-APP-000380-CTR-000900, SRG-APP-000386-CTR-000920, CNTR-OS-000090
cis5.2.10
bsiAPP.4.4.A3, APP.4.4.A7, APP.4.4.A9, SYS.1.6.A8, SYS.1.6.A19
pcidss42.2.1, 2.2
stigrefSV-257513r921482_rule

Rule   Ensure that the cluster-admin role is only used where required   [ref]

The RBAC role cluster-admin provides wide-ranging powers over the environment and should be used only where and when needed.
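To review current usage, the ClusterRoleBindings that grant cluster-admin can be listed, for example:
oc get clusterrolebindings -o json | jq -r '.items[] | select(.roleRef.name == "cluster-admin") | .metadata.name'
The bound subjects can then be inspected with oc describe clusterrolebinding on each reported binding.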
Rationale:
Kubernetes provides a set of default roles where RBAC is used. Some of these roles such as cluster-admin provide wide-ranging privileges which should only be applied where absolutely necessary. Roles such as cluster-admin allow super-user access to perform any action on any resource. When used in a ClusterRoleBinding, it gives full control over every resource in the cluster and in all namespaces. When used in a RoleBinding, it gives full control over every resource in the rolebinding's namespace, including the namespace itself.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_rbac_limit_cluster_admin
References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1), CM-8(3)
pcidssReq-2.2, Req-7.1.2, Req-10.5.1
app-srg-ctrSRG-APP-000516-CTR-001325
cis5.1.1
bsiSYS.1.6.A8
pcidss42.2.1, 2.2, 10.3.1, 10.3

Rule   Limit Access to Kubernetes Secrets   [ref]

The Kubernetes API stores secrets, which may be service account tokens for the Kubernetes API or credentials used by workloads in the cluster. Access to these secrets should be restricted to the smallest possible group of users to reduce the risk of privilege escalation. To restrict access, remove get, list, and watch permissions on secret objects in the cluster from unauthorized users.
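To see which users and service accounts can currently read secrets in a given namespace, a review could start with a command such as:
oc adm policy who-can get secrets -n <namespace>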
Rationale:
Inappropriate access to secrets stored within the Kubernetes cluster can allow for an attacker to gain additional access to the Kubernetes cluster or external resources whose credentials are stored as secrets.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_rbac_limit_secrets_access
References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325
cis5.1.2
bsiSYS.1.6.A8
pcidss42.2.1, 2.2

Rule   Minimize Access to Pod Creation   [ref]

The ability to create pods in a namespace can provide a number of opportunities for privilege escalation. Where applicable, remove create access to pod objects in the cluster.
Rationale:
The ability to create pods in a cluster opens up the cluster for privilege escalation.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_rbac_pod_creation_access
References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325
cis5.1.4
pcidss42.2.1, 2.2

Rule   Minimize Wildcard Usage in Cluster and Local Roles   [ref]

Kubernetes Cluster and Local Roles provide access to resources based on sets of objects and actions that can be taken on those objects. It is possible to set either of these using a wildcard * which matches all items. This violates the principle of least privilege and leaves a cluster in a more vulnerable state to privilege abuse.
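One possible way to enumerate cluster roles that use a wildcard in their verbs, resources, or API groups is a read-only jq query such as the following; note that several built-in roles legitimately use wildcards, so any findings must be reviewed in context:
oc get clusterroles -o json | jq -r '.items[] | select([.rules[]? | (.verbs[]?, .resources[]?, .apiGroups[]?)] | index("*")) | .metadata.name'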
Rationale:
The principle of least privilege recommends that users are provided only the access required for their role and nothing more. The use of wildcard rights grants is likely to provide excessive rights to the Kubernetes API.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_rbac_wildcard_use
References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325
cis5.1.3
bsiAPP.4.4.A9, SYS.1.6.A8
pcidss42.2.1, 2.2
Group   Kubernetes - Registry Security Practices   Group contains 4 rules
[ref]   Contains evaluations for Kubernetes registry security practices, and cluster-wide registry configuration.

Rule   Allowed registries are configured   [ref]

The configuration registrySources.allowedRegistries determines the permitted registries that the OpenShift container runtime can access for builds and pods. This configuration setting ensures that all registries other than those specified are blocked. You can set the allowed repositories by applying the following manifest using
oc patch
, e.g. if you save the following snippet to
/tmp/allowed-registries-patch.yaml
spec:
  registrySources:
    allowedRegistries:
    - my-trusted-registry.internal.example.com
you would call
oc patch image.config.openshift.io cluster --patch="$(cat /tmp/allowed-registries-patch.yaml)" --type=merge
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/config.openshift.io/v1/images/cluster API endpoint to the local /kubernetes-api-resources/apis/config.openshift.io/v1/images/cluster file.
Rationale:
Allowed registries should be configured to restrict the registries that the OpenShift container runtime can access, and all other registries should be blocked.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_ocp_allowed_registries
References:
nistCM-5(3), CM-7(2), CM-7(5), CM-11
app-srg-ctrSRG-APP-000456-CTR-001125, CNTR-OS-000890, CNTR-OS-000900
cis5.5.1
bsiSYS.1.6.A6
pcidss42.2.1, 2.2
stigrefSV-257571r921656_rule, SV-257572r921659_rule

Rule   Allowed registries for import are configured   [ref]

The configuration allowedRegistriesForImport limits the container image registries from which normal users may import images. This is important to control, as a user who can stand up a malicious registry can then import content which claims to include the SHAs of legitimate content layers. You can set the allowed repositories for import by applying the following manifest using
oc patch
, e.g. if you save the following snippet to
/tmp/allowed-import-registries-patch.yaml
spec:
  allowedRegistriesForImport:
  - domainName: my-trusted-registry.internal.example.com
    insecure: false
you would call
oc patch image.config.openshift.io cluster --patch="$(cat /tmp/allowed-import-registries-patch.yaml)" --type=merge
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/config.openshift.io/v1/images/cluster API endpoint to the local /kubernetes-api-resources/apis/config.openshift.io/v1/images/cluster file.
Rationale:
Allowed registries for import should be specified to limit the registries from which users may import images.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_ocp_allowed_registries_for_import
References:
nistCM-5(3), CM-7(2), CM-7(5), CM-11
app-srg-ctrSRG-APP-000456-CTR-001125, CNTR-OS-000890, CNTR-OS-000900
cis5.5.1
bsiSYS.1.6.A6
pcidss42.2.1, 2.2
stigrefSV-257571r921656_rule, SV-257572r921659_rule

Rule   Check configured allowed registries for import uses secure protocol   [ref]

The configuration allowedRegistriesForImport limits the container image registries from which normal users may import images. This is a list of registries that can be trusted to contain valid images, and each configured image location is assumed to be secure unless configured otherwise. It is important to allow only secure registries to avoid man-in-the-middle attacks, as an insecure image import request can be impersonated and could lead to fetching malicious content. List all the allowed registries for import configured with insecure set to true using the following command:
oc get image.config.openshift.io/cluster -o json | jq '.spec | (.allowedRegistriesForImport[])? | select(.insecure==true)'
Remove or edit the listed registries having insecure set by using the command:
oc edit image.config.openshift.io/cluster
For more information, follow the relevant documentation.
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/config.openshift.io/v1/images/cluster API endpoint to the local /kubernetes-api-resources/apis/config.openshift.io/v1/images/cluster file.
Rationale:
The configured list of allowed registries for import should contain only secure sources.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_ocp_insecure_allowed_registries_for_import
Identifiers:

CCE-86235-9

References:
nistCM-5(3)
app-srg-ctrSRG-APP-000014-CTR-000035, CNTR-OS-000010
cis5.5.1
bsiAPP.4.4.A12, SYS.1.6.A6
pcidss42.2.1, 2.2
stigrefSV-257505r921458_rule

Rule   Check if any insecure registry sources are configured   [ref]

The configuration registrySources.insecureRegistries determines the insecure registries that the OpenShift container runtime can access for builds and pods. This configuration setting is for accessing the configured registries without TLS validation which could lead to security breaches and should be avoided. Remove any insecureRegistries configured using the following command:
oc patch image.config.openshift.io cluster --type=json -p "[{'op': 'remove', 'path': '/spec/registrySources/insecureRegistries'}]"
For more information, follow the relevant documentation.
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/config.openshift.io/v1/images/cluster API endpoint to the local /kubernetes-api-resources/apis/config.openshift.io/v1/images/cluster file.
Rationale:
Insecure registries should not be configured; this prevents the OpenShift container runtime from accessing registries that cannot be validated.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_ocp_insecure_registries
Identifiers:

CCE-86123-7

References:
nistCM-5(3)
app-srg-ctrSRG-APP-000014-CTR-000035, CNTR-OS-000010
cis5.5.1
bsiAPP.4.4.A12, SYS.1.6.A6
pcidss42.2.1, 2.2
stigrefSV-257505r921458_rule
Group   OpenShift - Risk Assessment Settings   Group contains 1 rule
[ref]   Contains evaluations for the cluster's risk assessment configuration settings.

Rule   Ensure that Compliance Operator is scanning the cluster   [ref]

The Compliance Operator scans the hosts and the platform (OCP) configurations for software flaws and improper configurations according to different compliance benchmarks. It uses OpenSCAP as a backend, which is a known and certified tool to do such scans.
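A minimal ScanSettingBinding that binds a profile shipped with the Compliance Operator to the default ScanSetting might look as follows; the binding name and the chosen profile (here ocp4-high) are examples and should be adapted to the organization's baseline:
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: high-compliance
  namespace: openshift-compliance
profiles:
# the profile(s) to scan against
- apiGroup: compliance.openshift.io/v1alpha1
  kind: Profile
  name: ocp4-high
settingsRef:
# the scan schedule and storage settings to use
  apiGroup: compliance.openshift.io/v1alpha1
  kind: ScanSetting
  name: default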
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/compliance.openshift.io/v1alpha1/scansettingbindings?limit=5 API endpoint to the local /kubernetes-api-resources/apis/compliance.openshift.io/v1alpha1/scansettingbindings?limit=5 file.
Rationale:
Vulnerability scanning and risk management are important detective controls for all systems, to detect potential flaws and unauthorized access.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_scansettingbinding_exists
Identifiers:

CCE-83697-3

References:
nerc-cipCIP-003-8 R1.3, CIP-003-8 R4.3, CIP-003-8 R6, CIP-004-6 4.1, CIP-004-6 4.2, CIP-004-6 R3, CIP-004-6 R4, CIP-004-6 R4.2, CIP-005-6 R1, CIP-005-6 R1.1, CIP-005-6 R1.2, CIP-007-3 R3, CIP-007-3 R3.1, CIP-007-3 R6.1, CIP-007-3 R8.4
nistCM-6, CM-6(1), RA-5, RA-5(5), SA-4(8)
pcidssReq-2.2.4
app-srg-ctrSRG-APP-000472-CTR-001170, CNTR-OS-000910
bsiAPP.4.4.A13
pcidss42.2.1, 2.2.6, 2.2
stigrefSV-257573r921662_rule
Group   Security Context Constraints (SCC)   Group contains 9 rules
[ref]   Similar to the way that RBAC resources control user access, administrators can use Security Context Constraints (SCCs) to control permissions for pods. These permissions include actions that a pod, a collection of containers, can perform and what resources it can access. You can use SCCs to define a set of conditions that a pod must run with in order to be accepted into the system.

Rule   Drop Container Capabilities   [ref]

Containers should not enable more capabilities than needed as this opens the door for malicious use. To disable the capabilities, the appropriate Security Context Constraints (SCCs) should set all capabilities as * or a list of capabilities in requiredDropCapabilities.
Rationale:
By default, containers run with a set of capabilities assigned by the container runtime, which can include dangerous or highly privileged capabilities. Capabilities should be dropped unless absolutely critical for the container to run its software, since added capabilities that are not required create opportunities for malicious containers or attackers.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_scc_drop_container_capabilities
References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325
cis5.2.9
bsiAPP.4.4.A9
pcidss42.2.1, 2.2

Rule   Limit Container Capabilities   [ref]

Containers should not enable more capabilities than needed, as this opens the door for malicious use. To enable only the required capabilities, the appropriate Security Context Constraints (SCCs) should set capabilities as a list in allowedCapabilities.

In case an SCC outside the default allow list in the variable var-sccs-with-allowed-capabilities-regex is being flagged, create a TailoredProfile and add the additional SCC to the regular expression in the variable var-sccs-with-allowed-capabilities-regex. An example allowing an SCC named additional follows:

apiVersion: compliance.openshift.io/v1alpha1
kind: TailoredProfile
metadata:
  name: cis-additional-scc
spec:
  description: Allows an additional scc
  setValues:
  - name: ocp4-var-sccs-with-allowed-capabilities-regex
    rationale: Allow our own custom SCC
    value: ^privileged$|^hostnetwork-v2$|^restricted-v2$|^nonroot-v2$|^additional$
  extends: ocp4-cis
  title: Modified CIS allowing one more SCC

Finally, reference this TailoredProfile in a ScanSettingBinding. For more information on tailoring the Compliance Operator, please consult the OpenShift documentation: https://docs.openshift.com/container-platform/latest/security/compliance_operator/co-scans/compliance-operator-tailor.html

Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the following:
  • /apis/security.openshift.io/v1/securitycontextconstraints API endpoint, filter with the jq utility using the following filter [.items[] | select(.metadata.name | test("{{.var_sccs_with_allowed_capabilities_regex}}"; "") | not) | select(.allowedCapabilities != null) | .metadata.name] and persist it to the local /kubernetes-api-resources/apis/security.openshift.io/v1/securitycontextconstraints#821f2c3e5c4100d5609ac6070d8a51075295651614a05c65a5f744ba751c15b0 file.
Rationale:
By default, containers run with a set of capabilities assigned by the container runtime, which can include dangerous or highly privileged capabilities. Capabilities should be dropped unless absolutely critical for the container to run its software, since added capabilities that are not required create opportunities for malicious containers or attackers.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_scc_limit_container_allowed_capabilities
References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325
cis5.2.8
bsiAPP.4.4.A9
pcidss42.2.1, 2.2

Rule   Limit Access to the Host IPC Namespace   [ref]

Containers should not be allowed access to the host's Interprocess Communication (IPC) namespace. To prevent containers from getting access to a host's IPC namespace, the appropriate Security Context Constraints (SCCs) should set allowHostIPC to false.
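As a quick read-only review, the SCCs currently allowing host IPC access can be listed; findings such as the built-in privileged SCC are expected and must be assessed in context:
oc get scc -o json | jq -r '.items[] | select(.allowHostIPC == true) | .metadata.name'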
Rationale:
A container running in the host's IPC namespace can use IPC to interact with processes outside the container potentially allowing an attacker to exploit a host process thereby enabling an attacker to exploit other services.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_scc_limit_ipc_namespace
Identifiers:

CCE-84042-1

References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325, CNTR-OS-000660
cis5.2.3
bsiAPP.4.4.A4, APP.4.4.A9, SYS.1.6.A18, SYS.1.6.A21
pcidss42.2.1, 2.2
stigrefSV-257557r921614_rule

Rule   Limit Use of the CAP_NET_RAW   [ref]

Containers should not enable more capabilities than needed as this opens the door for malicious use. CAP_NET_RAW enables a container to launch a network attack on another container or cluster. To disable the CAP_NET_RAW capability, the appropriate Security Context Constraints (SCCs) should set NET_RAW in requiredDropCapabilities.
Rationale:
By default, containers run with a default set of capabilities as assigned by the Container Runtime which can include dangerous or highly privileged capabilities. If the CAP_NET_RAW is enabled, it may be misused by malicious containers or attackers.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_scc_limit_net_raw_capability
References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325
cis5.2.7
bsiAPP.4.4.A4, APP.4.4.A9, SYS.1.6.A18, SYS.1.6.A21
pcidss42.2.1, 2.2

Rule   Limit Access to the Host Network Namespace   [ref]

Containers should not be allowed access to the host's network namespace. To prevent containers from getting access to a host's network namespace, the appropriate Security Context Constraints (SCCs) should set allowHostNetwork to false.
Rationale:
A container running in the host's network namespace could access the host network traffic to and from other pods potentially allowing an attacker to exploit pods and network traffic.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_scc_limit_network_namespace
Identifiers:

CCE-83492-9

References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000142-CTR-000330, CNTR-OS-000660
cis5.2.4
bsiAPP.4.4.A4, APP.4.4.A9, SYS.1.6.A18, SYS.1.6.A21
pcidss42.2.1, 2.2
stigrefSV-257557r921614_rule

Rule   Limit Containers Ability to Escalate Privileges   [ref]

Containers should be limited to only the privileges required to run and should not be allowed to escalate their privileges. To prevent containers from escalating privileges, the appropriate Security Context Constraints (SCCs) should set allowPrivilegeEscalation to false.
Rationale:
Privileged containers have access to more of the Linux Kernel capabilities and devices. If a privileged container were compromised, an attacker would have full access to the container and host.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_scc_limit_privilege_escalation
Identifiers:

CCE-83447-3

References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325
cis5.2.5
bsiAPP.4.4.A9
pcidss42.2.1, 2.2

Rule   Limit Privileged Container Use   [ref]

Containers should be limited to only the privileges required to run. To prevent containers from running as privileged containers, the appropriate Security Context Constraints (SCCs) should set allowPrivilegedContainer to false.
Rationale:
Privileged containers have access to all Linux Kernel capabilities and devices. If a privileged container were compromised, an attacker would have full access to the container and host.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_scc_limit_privileged_containers
References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000342-CTR-000775, SRG-APP-000142-CTR-000330, CNTR-OS-000660
cis5.2.1
bsiAPP.4.4.A4, APP.4.4.A9, SYS.1.6.A18, SYS.1.6.A21
pcidss42.2.1, 2.2
stigrefSV-257557r921614_rule

Rule   Limit Access to the Host Process ID Namespace   [ref]

Containers should not be allowed access to the host's process ID namespace. To prevent containers from getting access to a host's process ID namespace, the appropriate Security Context Constraints (SCCs) should set allowHostPID to false.
Rationale:
A container running in the host's PID namespace can inspect processes running outside the container which can be used to escalate privileges outside of the container.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_scc_limit_process_id_namespace
References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325, CNTR-OS-000660
cis5.2.2
bsiAPP.4.4.A4, APP.4.4.A9, SYS.1.6.A18, SYS.1.6.A21
pcidss42.2.1, 2.2
stigrefSV-257557r921614_rule

Rule   Limit Container Running As Root User   [ref]

Containers should run as a random non-privileged user. To prevent containers from running as root user, the appropriate Security Context Constraints (SCCs) should set .runAsUser.type to MustRunAsRange.
Rationale:
It is strongly recommended that containers running on OpenShift support running as any arbitrary UID. OpenShift will then assign a random, non-privileged UID to the running container instance. This avoids the risk of containers running with specific UIDs that could map to host service accounts, or the even greater risk of running as a root-level service. OpenShift uses the default security context constraint (SCC), restricted, to prevent containers from running as root or other privileged user IDs. Pods may be configured to use an SCC policy that allows the container to run as a specific UID, including root (0), when approved. Only a cluster administrator may grant a change of SCC policy.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_scc_limit_root_containers
References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000342-CTR-000775, CNTR-OS-000660
cis5.2.6
bsiAPP.4.4.A4, APP.4.4.A9, SYS.1.6.A18, SYS.1.6.A21
pcidss42.2.1, 2.2
stigrefSV-257557r921614_rule
Group   OpenShift - Kubernetes - Scheduler Settings   Group contains 2 rules
[ref]   Contains evaluations for kube-scheduler configuration settings.

Rule   Verify that the scheduler API service is protected by RBAC   [ref]

Do not bind the scheduler service to non-loopback insecure addresses.
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-debugger API endpoint to the local /kubernetes-api-resources/apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-debugger file.
Rationale:
The Scheduler API service, which runs on port 10251/TCP by default, is used for health and metrics information and is available without authentication or encryption. As such, it should only be bound to a localhost interface to minimize the cluster's attack surface.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_scheduler_profiling_protected_by_rbac
References:
cis1.4.1
pcidss42.2.1, 2.2

Rule   Verify that the scheduler API service is protected by RBAC   [ref]

Do not bind the scheduler service to non-loopback insecure addresses.
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-debugger API endpoint to the local /kubernetes-api-resources/apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-debugger file.
Rationale:
The Scheduler API service, which runs on port 10251/TCP by default, is used for health and metrics information and is available without authentication or encryption. As such, it should only be bound to a localhost interface to minimize the cluster's attack surface.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_scheduler_service_protected_by_rbac
References:
cis1.4.2
pcidss42.2.1, 2.2
Group   Kubernetes Secrets Management   Group contains 2 rules
[ref]   Secrets let you store and manage sensitive information, such as passwords, OAuth tokens, and ssh keys. Such information might otherwise be put in a Pod specification or in an image.

Rule   Consider external secret storage   [ref]

Consider the use of an external secrets storage and management system, instead of using Kubernetes Secrets directly, if you have more complex secret management needs. Ensure the solution requires authentication to access secrets, has auditing of access to and use of secrets, and encrypts secrets. Some solutions also make it easier to rotate secrets.
Rationale:
Kubernetes supports secrets as first-class objects, but care needs to be taken to ensure that access to secrets is carefully limited. Using an external secrets provider can ease the management of access to secrets, especially where secrets are used across both Kubernetes and non-Kubernetes environments.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_secrets_consider_external_storage
References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325
cis5.4.2
bsiSYS.1.6.A8
pcidss42.2.1, 2.2

Rule   Do Not Use Environment Variables with Secrets   [ref]

Secrets should be mounted as data volumes instead of environment variables.
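One possible way to find pods that inject secrets through environment variables is a read-only jq query like the following (it only inspects env entries, not envFrom):
oc get pods -A -o json | jq -r '.items[] | select(.spec.containers[].env[]?.valueFrom.secretKeyRef) | .metadata.namespace + "/" + .metadata.name' | sort -u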
Rationale:
Environment variables are highly susceptible to malicious hijacking by an adversary; as such, environment variables should never be used for secrets.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_secrets_no_environment_variables
References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325
cis5.4.1
bsiSYS.1.6.A8
pcidss42.2.1, 2.2
Group   Kubernetes - Worker Node Settings   Group contains 3 rules
[ref]   Contains evaluations for the worker node configuration settings.

Rule   Verify Group Who Owns The Worker Proxy Kubeconfig File   [ref]

To ensure the Kubernetes ConfigMap is mounted into the sdn daemonset pods with the correct ownership, make sure that the sdn-config ConfigMap is mounted using a ConfigMap at the /config mount point and that the sdn container points to that configuration using the --proxy-config command line option. Run:
 oc get -nopenshift-sdn ds sdn -ojson | jq -r '.spec.template.spec.containers[] | select(.name == "sdn")'
and ensure the --proxy-config parameter points to /config/kube-proxy-config.yaml and that the config mount point is mounted from the sdn-config ConfigMap.
Rationale:
The kubeconfig file for kube-proxy provides permissions to the kube-proxy service. The proxy kubeconfig file contains information about the administrative configuration of the OpenShift cluster that is configured on the system. Protection of this file is critical for OpenShift security. The file is provided via a ConfigMap mount, so the kubelet itself makes sure that the file permissions are appropriate for the container taking it into use.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_file_groupowner_proxy_kubeconfig
References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325
cis4.1.4
pcidss42.2.1, 2.2

Rule   Verify User Who Owns The Worker Proxy Kubeconfig File   [ref]

To ensure the Kubernetes ConfigMap is mounted into the sdn daemonset pods with the correct ownership, make sure that the sdn-config ConfigMap is mounted using a ConfigMap at the /config mount point and that the sdn container points to that configuration using the --proxy-config command line option. Run:
 oc get -nopenshift-sdn ds sdn -ojson | jq -r '.spec.template.spec.containers[] | select(.name == "sdn")'
and ensure the --proxy-config parameter points to /config/kube-proxy-config.yaml and that the config mount point is mounted from the sdn-config ConfigMap.
Rationale:
The kubeconfig file for kube-proxy provides permissions to the kube-proxy service. The proxy kubeconfig file contains information about the administrative configuration of the OpenShift cluster that is configured on the system. Protection of this file is critical for OpenShift security. The file is provided via a ConfigMap mount, so the kubelet itself makes sure that the file permissions are appropriate for the container taking it into use.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_file_owner_proxy_kubeconfig
References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325
cis4.1.4
pcidss42.2.1, 2.2

Rule   Verify Permissions on the Worker Proxy Kubeconfig File   [ref]

To ensure the Kubernetes ConfigMap is mounted into the sdn daemonset pods with the correct permissions, make sure that the sdn-config ConfigMap is mounted using restrictive permissions. Check that the config VolumeMount mounts the sdn-config configMap with permissions set to 420:
{
"configMap": {
  "defaultMode": 420,
  "name": "sdn-config"
  },
"name": "config"
}
Warning:  This rule's check operates on the cluster configuration dump. Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/apps/v1/namespaces/openshift-sdn/daemonsets/sdn API endpoint to the local /kubernetes-api-resources/apis/apps/v1/namespaces/openshift-sdn/daemonsets/sdn file.
Rationale:
The kube-proxy kubeconfig file controls various parameters of the kube-proxy service in the worker node. If used, you should restrict its file permissions to maintain the integrity of the file. The file should be writable by only the administrators on the system. The kube-proxy runs with the kubeconfig parameters configured as a Kubernetes ConfigMap instead of a file. In this case, there is no proxy kubeconfig file. But appropriate permissions still need to be set in the ConfigMap mount.
Severity: 
medium
Rule ID:xccdf_org.ssgproject.content_rule_file_permissions_proxy_kubeconfig
Identifiers:

CCE-84047-0

References:
nerc-cipCIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1
nistCM-6, CM-6(1)
pcidssReq-2.2
app-srg-ctrSRG-APP-000516-CTR-001325
cis4.1.3
pcidss42.2.1, 2.2
Red Hat and Red Hat Enterprise Linux are either registered trademarks or trademarks of Red Hat, Inc. in the United States and other countries. All other names are registered trademarks or trademarks of their respective companies.