Group
Guide to the Secure Configuration of Red Hat OpenShift Container Platform 4
Group contains 15 groups and 77 rules |
Group
Kubernetes Settings
Group contains 14 groups and 77 rules |
[ref]
Each section of this configuration guide includes information about the
configuration of a Kubernetes cluster and a set of recommendations for
hardening the configuration. For each hardening recommendation, information
on how to implement the control and/or how to verify or audit the control
is provided. In some cases, remediation information is also provided.
Some of the settings in the hardening guide are in place by default. The
audit information for these settings is provided in order to verify that
the cluster administrator has not made changes that would be less secure.
A small number of items require configuration.
Finally, there are some recommendations that require decisions by the
system operator, such as audit log size, retention, and related settings. |
Group
System and Software Integrity
Group contains 3 rules |
[ref]
System and software integrity can be gained by installing antivirus, increasing
system encryption strength with FIPS, verifying installed software, enabling SELinux,
installing an Intrusion Prevention System, etc. Note, however, that integrity
checking tools cannot prevent intrusions; they can only detect that an intrusion
may have occurred. Requirements for integrity checking may be highly dependent on
the environment in which the system will be used. |
Rule
Ensure that Cluster Version Operator is deployed
[ref] | Integrity of the OpenShift platform is handled first and foremost by the Cluster Version Operator.
The Cluster Version Operator will, by default, GPG-verify the integrity of the release
image before applying it. [1]
This rule checks whether the Cluster Version Operator is deployed and available in the system.
[1] https://github.com/openshift/machine-config-operator/blob/master/docs/OSUpgrades.md#questions-and-answers Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the following:
/apis/config.openshift.io/v1/clusterversions/version
API endpoint, filter it with the jq utility using the following filter
[.status.conditions[] | select(.type=="Available") | .status]
and persist it to the local
/kubernetes-api-resources/apis/config.openshift.io/v1/clusterversions/version#588c29ac9d4c67b1444308c5ba310271832895fee54701f7d0cb6cbced390443
file.
| Rationale: | Integrity checks prevent a malicious actor from using an unauthorized system image and ensure the
image has not been tampered with or corrupted. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_cluster_version_operator_exists | Identifiers: | CCE-90670-1 | References: | | |
|
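For a quick manual spot-check, the same query and filter the compliance check uses can be run directly (a sketch assuming cluster-admin access and the jq utility):
$ oc get --raw /apis/config.openshift.io/v1/clusterversions/version \
    | jq '[.status.conditions[] | select(.type=="Available") | .status]'
On a healthy cluster this should return ["True"].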
Rule
Ensure that Cluster Version Operator verifies integrity
[ref] | Integrity of the OpenShift platform is handled first and foremost by the Cluster Version Operator.
The Cluster Version Operator will, by default, GPG-verify the integrity of the release
image before applying it. This rule checks whether there is an unverified cluster image. Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the following:
/apis/config.openshift.io/v1/clusterversions/version
API endpoint, filter it with the jq utility using the following filter
[.status.history[0:-1]|.[]|.verified]
and persist it to the local
/kubernetes-api-resources/apis/config.openshift.io/v1/clusterversions/version#69adcfd65c8b8d723e4a7118c170f634cebbb349e9b554dd15001e6551a586f8
file.
| Rationale: | An unverified cluster image will compromise system integrity. Integrity checks prevent
a malicious actor from using an unauthorized system image and ensure the
image has not been tampered with or corrupted. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_cluster_version_operator_verify_integrity | Identifiers: | CCE-90671-9 | References: | | |
|
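A manual equivalent of this check (again assuming cluster-admin access and jq) inspects the verified flag of past release images:
$ oc get --raw /apis/config.openshift.io/v1/clusterversions/version \
    | jq '[.status.history[0:-1]|.[]|.verified]'
Every entry should be true; a false entry indicates an unverified release image in the cluster's update history.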
Rule
Ensure that File Integrity Operator is scanning the cluster
[ref] | The File Integrity Operator
continually runs file integrity checks on the cluster nodes.
It deploys a daemon set that initializes and runs privileged AIDE
containers on each node, providing a status object with a log of files
that are modified during the initial run of the daemon set pods. Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/fileintegrity.openshift.io/v1alpha1/fileintegrities?limit=5 API endpoint to the local /kubernetes-api-resources/apis/fileintegrity.openshift.io/v1alpha1/fileintegrities?limit=5 file. | Rationale: | File integrity scanning is able to detect potential unauthorized access. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_file_integrity_exists | Identifiers: | CCE-83657-7 | References: | nerc-cip | CIP-003-8 R4.2, CIP-003-8 R6, CIP-007-3 R4, CIP-007-3 R4.1, CIP-007-3 R4.2 | nist | SC-4(23), SI-6, SI-7, SI-7(1), CM-6(a), SI-7(2), SI-4(24) | pcidss | Req-10.5.5, Req-11.5 | app-srg-ctr | SRG-APP-000516-CTR-001325 | bsi | APP.4.4.A17 | pcidss4 | 10.3.4, 10.3, 11.5.2 |
| |
|
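To confirm the operator is active by hand, a minimal sketch (the namespace below is the operator's usual default and is an assumption for your environment):
$ oc get fileintegrities -n openshift-file-integrity
$ oc get daemonsets -n openshift-file-integrity
A FileIntegrity object and its AIDE daemon set should both be present.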
Group
Kubernetes - Account and Access Control
Group contains 4 rules |
[ref]
In traditional Unix security, if an attacker gains
shell access to a certain login account, they can perform any action
or access any file to which that account has access. The same
idea applies to cloud technology such as Kubernetes. Therefore,
making it more difficult for unauthorized people to gain shell
access to accounts, particularly to privileged accounts, is a
necessary part of securing a system. This section introduces
mechanisms for restricting access to accounts under
Kubernetes. |
Rule
Ensure no ClusterRoleBindings set for default Service Account
[ref] | Using the default service account prevents accurate application
rights review and audit tracing. Instead of default, create
a new and unique service account and associate the required ClusterRoleBindings. Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the following:
/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?limit=10000
API endpoint, filter it with the jq utility using the following filter
[.items[] | select ( .subjects[]?.name == "default" ) | select(.subjects[].namespace | startswith("kube-") or startswith("openshift-") | not) | .metadata.name ] | unique
and persist it to the local
/kubernetes-api-resources/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?limit=10000#79f26213dd0c77369a3262d04a79b049cbe657d6816bca60a341434f2ee8d280
file.
| Rationale: | Kubernetes provides a default service account which is used by
cluster workloads where no specific service account is assigned to the pod.
Where access to the Kubernetes API from a pod is required, a specific service account
should be created for that pod, and rights granted to that service account.
This increases the auditability of service account rights and access, making it
easier and more accurate to trace potential malicious behaviors to a specific
service account and project. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_accounts_no_clusterrolebindings_default_service_account | References: | | |
|
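A manual spot-check mirroring the filter above (assuming cluster-admin access and jq):
$ oc get clusterrolebindings -o json \
    | jq '[.items[] | select(.subjects[]?.name == "default") | select(.subjects[].namespace | startswith("kube-") or startswith("openshift-") | not) | .metadata.name] | unique'
Any names returned should be reviewed and migrated to dedicated service accounts.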
Rule
Ensure no RoleBindings set for default Service Account
[ref] | Using the default service account prevents accurate application
rights review and audit tracing. Instead of default, create
a new and unique service account and associate the required RoleBindings. Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the following:
/apis/rbac.authorization.k8s.io/v1/rolebindings?limit=10000
API endpoint, filter it with the jq utility using the following filter
[.items[] | select(.metadata.namespace | startswith("kube-") or startswith("openshift-") | not) | select ( .subjects[]?.name == "default" ) | .metadata.namespace + "/" + .metadata.name ] | unique
and persist it to the local
/kubernetes-api-resources/apis/rbac.authorization.k8s.io/v1/rolebindings?limit=10000#00c457af66396f1f78aa74c5eb2177980c74419afc15a46a16b38a8712ca0b70
file.
| Rationale: | Kubernetes provides a default service account which is used by
cluster workloads where no specific service account is assigned to the pod.
Where access to the Kubernetes API from a pod is required, a specific service account
should be created for that pod, and rights granted to that service account.
This increases the auditability of service account rights and access, making it
easier and more accurate to trace potential malicious behaviors to a specific
service account and project. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_accounts_no_rolebindings_default_service_account | References: | | |
|
Rule
Restrict Automounting of Service Account Tokens
[ref] | Service account tokens should not be mounted in pods except where the workload
running in the pod explicitly needs to communicate with the API server.
To ensure pods do not automatically mount tokens, set
automountServiceAccountToken to false. | Rationale: | Mounting service account tokens inside pods can provide an avenue
for privilege escalation attacks where an attacker is able to
compromise a single pod in the cluster. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_accounts_restrict_service_account_tokens | References: | | |
|
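As a sketch of the remediation, token automounting can be disabled on a service account (the account and namespace names below are illustrative):
$ oc patch serviceaccount example-sa -n example-namespace \
    -p '{"automountServiceAccountToken": false}'
The same field can also be set in a pod spec, where it takes precedence over the service account setting.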
Rule
Ensure Usage of Unique Service Accounts
[ref] | Using the default service account prevents accurate application
rights review and audit tracing. Instead of default, create
a new and unique service account with the following command:
$ oc create sa service_account_name
where service_account_name is the name of a service account
that is needed in the project namespace. | Rationale: | Kubernetes provides a default service account which is used by
cluster workloads where no specific service account is assigned to the pod.
Where access to the Kubernetes API from a pod is required, a specific service account
should be created for that pod, and rights granted to that service account.
This increases the auditability of service account rights and access, making it
easier and more accurate to trace potential malicious behaviors to a specific
service account and project. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_accounts_unique_service_account | References: | | |
|
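After creating the account, grant it only the rights it needs; a minimal sketch with an illustrative role and namespace:
$ oc create sa example-sa -n example-namespace
$ oc adm policy add-role-to-user view -z example-sa -n example-namespace
The -z flag binds the role to a service account in the given project rather than to a user.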
Group
OpenShift Kube API Server
Group contains 10 rules |
[ref]
This section contains recommendations for kube-apiserver configuration. |
Rule
Ensure that anonymous requests to the API Server are authorized
[ref] | By default, anonymous access to the OpenShift API is enabled, but at
the same time, all requests must be authorized. If no authentication
mechanism is used, the request is assigned the system:anonymous
virtual user and the system:unauthenticated virtual group.
This allows the authorization layer to determine which requests, if any,
an anonymous user is authorized to make.
To verify the authorization rules for anonymous requests run the following:
$ oc describe clusterrolebindings
and inspect the bindings of the system:anonymous
virtual user and the system:unauthenticated virtual group.
To test that an anonymous request is authorized to access the readyz
endpoint, run:
$ oc get --as="system:anonymous" --raw='/readyz?verbose'
In contrast, a request to list all projects should not be authorized:
$ oc get --as="system:anonymous" projects
Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/rbac.authorization.k8s.io/v1/clusterrolebindings API endpoint to the local /kubernetes-api-resources/apis/rbac.authorization.k8s.io/v1/clusterrolebindings file. | Rationale: | When enabled, requests that are not rejected by other configured
authentication methods are treated as anonymous requests. These requests
are then served by the API server. If you are using RBAC authorization,
it is generally considered reasonable to allow anonymous access to the
API Server for health checks and discovery purposes, and hence this
recommendation is not scored. However, you should consider whether
anonymous discovery is an acceptable risk for your purposes. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_api_server_anonymous_auth | References: | | |
|
Rule
Configure the Client Certificate Authority for the API Server
[ref] | Certificates must be provided to fully set up TLS client certificate
authentication. To ensure the API Server utilizes its own TLS certificates, the
clientCA must be configured. Verify
that servingInfo has the clientCA configured in
the openshift-kube-apiserver
config configmap
to something similar to:
"apiServerArguments": {
...
"client-ca-file": [
"/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt"
],
...
Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the following:
{{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/configmaps/kas-config{{else}}/api/v1/namespaces/openshift-kube-apiserver/configmaps/config{{end}}
API endpoint, filter it with the jq utility using the following filter
{{if ne .hypershift_cluster "None"}}[.data."config.json" | fromjson | select(.apiServerArguments["client-ca-file"]) | .apiServerArguments["client-ca-file"][] | select(test("/etc/kubernetes/certs/client-ca/ca.crt"))]{{else}}[.data."config.yaml" | fromjson | select(.apiServerArguments["client-ca-file"]) | .apiServerArguments["client-ca-file"][] | select(test("{{.var_apiserver_client_ca}}"))]{{end}}
and persist it to the local
/kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config#d56e72c377d8f85e0601a704d4218064a0ea4a2235ceee82d20db6cdafc74608
file.
| Rationale: | API Server communication contains sensitive parameters that should remain
encrypted in transit. Configure the API Server to serve only HTTPS traffic.
If clientCA is set, any request presenting a client
certificate signed by one of the authorities in the client-ca-file
is authenticated with an identity corresponding to the CommonName of
the client certificate. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_api_server_client_ca | Identifiers: | CCE-84284-9 | References: | nerc-cip | CIP-003-8 R4.2, CIP-007-3 R5.1 | nist | SC-8, SC-8(1), SC-8(2) | pcidss | Req-2.2, Req-2.2.3, Req-2.3 | app-srg-ctr | SRG-APP-000441-CTR-001090, SRG-APP-000442-CTR-001095 | cis | 1.2.29 | bsi | APP.4.4.A17 | pcidss4 | 2.2.1, 2.2.5, 2.2.7, 2.2 |
| |
|
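A manual spot-check of this setting (a sketch assuming a standalone, non-HyperShift cluster and jq):
$ oc get configmap config -n openshift-kube-apiserver -o json \
    | jq -r '.data."config.yaml" | fromjson | .apiServerArguments["client-ca-file"]'
The output should list the expected CA bundle path, such as the ca-bundle.crt path shown above.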
Rule
Configure the Encryption Provider Cipher
[ref] |
When you enable etcd encryption, the following OpenShift API server
and Kubernetes API server resources are encrypted:
- Secrets
- ConfigMaps
- Routes
- OAuth access tokens
- OAuth authorize tokens
When you enable etcd encryption, encryption keys are created. These
keys are rotated on a weekly basis. You must have these keys in order
to restore from an etcd backup.
To ensure the correct cipher, set the encryption type to aescbc or
aesgcm in the apiserver object which configures the API
server itself.
spec:
encryption:
type: aescbc
For more information, follow
the relevant documentation.
Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the following:
{{if ne .hypershift_cluster "None"}}/apis/hypershift.openshift.io/v1beta1/namespaces/{{.hypershift_namespace_prefix}}/hostedclusters/{{.hypershift_cluster}}{{else}}/apis/config.openshift.io/v1/apiservers/cluster{{end}}
API endpoint, filter it with the jq utility using the following filter
{{if ne .hypershift_cluster "None"}}[.spec.secretEncryption.type]{{else}}[.spec.encryption.type]{{end}}
and persist it to the local
/kubernetes-api-resources/apis/config.openshift.io/v1/apiservers/cluster#a1d4b20a86b76e7e2d634dbeff420b1a80df6800836dad1b552314d1b24a18cb
file.
| Rationale: | etcd is a highly available key-value store used by OpenShift deployments
for persistent storage of all REST API objects. These objects are
sensitive in nature and should be encrypted at rest to avoid any
disclosures. Where etcd encryption is used, it is important to ensure that the
appropriate set of encryption providers is used. Currently, aescbc
and aesgcm are the only types supported by OCP. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_api_server_encryption_provider_cipher | Identifiers: | CCE-83585-0 | References: | | |
|
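A sketch of enabling etcd encryption with a patch, equivalent to editing the apiserver object as shown above:
$ oc patch apiserver cluster --type=merge \
    -p '{"spec":{"encryption":{"type":"aescbc"}}}'
Encryption of existing resources then proceeds in the background; see the linked documentation for how to monitor its progress.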
Rule
Ensure that the --kubelet-https argument is set to true
[ref] | The kube-apiserver uses HTTPS for connections to the kubelet by default. The apiserver
flag "--kubelet-https" is deprecated and should be either set to "true" or
omitted from the argument list. Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the following:
{{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/configmaps/kas-config{{else}}/api/v1/namespaces/openshift-kube-apiserver/configmaps/config{{end}}
API endpoint, filter it with the jq utility using the following filter
{{if ne .hypershift_cluster "None"}}.data."config.json" | fromjson{{else}}.data."config.yaml" | fromjson{{end}}
and persist it to the local
/kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config#54842ba5cf821644f2727625c1518eba2de6e6b7ae318043d0bf7ccc9570e430
file.
| Rationale: | Connections from the kube-apiserver to kubelets could potentially carry
sensitive data such as secrets and keys. It is thus important to use
in-transit encryption for any communication between the apiserver and
kubelets. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_api_server_https_for_kubelet_conn | References: | nerc-cip | CIP-003-8 R4.2, CIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R5.1, CIP-007-3 R6.1 | nist | CM-6, CM-6(1), SC-8, SC-8(1) | pcidss | Req-2.2, Req-2.3 | app-srg-ctr | SRG-APP-000516-CTR-001325 | cis | 1.2.4 | bsi | APP.4.4.A17 | pcidss4 | 2.2.1, 2.2.7, 2.2 |
| |
|
Rule
Configure the kubelet Certificate File for the API Server
[ref] | To enable certificate based kubelet authentication,
edit the config configmap in the openshift-kube-apiserver
namespace and set the below parameter in the config.yaml key if
it is not already configured:
"apiServerArguments":{
...
"kubelet-client-certificate":"/etc/kubernetes/static-pod-resources/secrets/kubelet-client/tls.crt",
...
}
Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the following:
{{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/configmaps/kas-config{{else}}/api/v1/namespaces/openshift-kube-apiserver/configmaps/config{{end}}
API endpoint, filter it with the jq utility using the following filter
{{if ne .hypershift_cluster "None"}}[.data."config.json" | fromjson | select(.apiServerArguments["kubelet-client-certificate"]) | .apiServerArguments["kubelet-client-certificate"][] | select(test("/etc/kubernetes/certs/kubelet/tls.crt"))]{{else}}[.data."config.yaml" | fromjson | select(.apiServerArguments["kubelet-client-certificate"]) | .apiServerArguments["kubelet-client-certificate"][] | select(test("{{.var_apiserver_kubelet_client_cert}}"))]{{end}}
and persist it to the local
/kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config#e5500055b4aa2fcf00dc09ad0e66e44b6b42d67f8d53d1e72ff81b32f0e09865
file.
| Rationale: | By default the API Server does not authenticate itself to the kubelet's
HTTPS endpoints. Requests from the API Server are treated anonymously.
Configuring certificate-based kubelet authentication ensures that the
API Server authenticates itself to kubelets when submitting requests. | Severity: | high | Rule ID: | xccdf_org.ssgproject.content_rule_api_server_kubelet_client_cert | Identifiers: | CCE-84080-1 | References: | | |
|
Rule
Configure the kubelet Certificate Key for the API Server
[ref] | To enable certificate based kubelet authentication,
edit the config configmap in the openshift-kube-apiserver
namespace and set the below parameter in the config.yaml key if
it is not already configured:
"apiServerArguments":{
...
"kubelet-client-key":"/etc/kubernetes/static-pod-resources/secrets/kubelet-client/tls.key",
...
}
Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the following:
{{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/configmaps/kas-config{{else}}/api/v1/namespaces/openshift-kube-apiserver/configmaps/config{{end}}
API endpoint, filter it with the jq utility using the following filter
{{if ne .hypershift_cluster "None"}}[.data."config.json" | fromjson | select(.apiServerArguments["kubelet-client-key"]) | .apiServerArguments["kubelet-client-key"][] | select(test("/etc/kubernetes/certs/kubelet/tls.key"))]{{else}}[.data."config.yaml" | fromjson | select(.apiServerArguments["kubelet-client-key"]) | .apiServerArguments["kubelet-client-key"][] | select(test("{{.var_apiserver_kubelet_client_key}}"))]{{end}}
and persist it to the local
/kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config#1e2b7c1158e0b9a602cb20d62c82b4660907bb57b63dac11c6c7c64211c49c69
file.
| Rationale: | By default the API Server does not authenticate itself to the kubelet's
HTTPS endpoints. Requests from the API Server are treated anonymously.
Configuring certificate-based kubelet authentication ensures that the
API Server authenticates itself to kubelets when submitting requests. | Severity: | high | Rule ID: | xccdf_org.ssgproject.content_rule_api_server_kubelet_client_key | Identifiers: | CCE-83591-8 | References: | | |
|
Rule
Configure the Certificate for the API Server
[ref] | To ensure the API Server utilizes its own TLS certificates, the
tls-cert-file must be configured. Verify
that the apiServerArguments section has the tls-cert-file configured in
the config configmap in the openshift-kube-apiserver namespace
similar to:
"apiServerArguments":{
...
"tls-cert-file": [
"/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.crt"
],
...
}
Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the following:
{{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/configmaps/kas-config{{else}}/api/v1/namespaces/openshift-kube-apiserver/configmaps/config{{end}}
API endpoint, filter it with the jq utility using the following filter
{{if ne .hypershift_cluster "None"}}[.data."config.json" | fromjson | select(.apiServerArguments["tls-cert-file"]) | .apiServerArguments["tls-cert-file"][] | select(test("/etc/kubernetes/certs/server/tls.crt"))]{{else}}[.data."config.yaml" | fromjson | select(.apiServerArguments["tls-cert-file"]) | .apiServerArguments["tls-cert-file"][] | select(test("{{.var_apiserver_tls_cert}}"))]{{end}}
and persist it to the local
/kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config#bca394347bab5b9902f1d1568d4f5d6e5498b01ec27ddf8231443e376b18757d
file.
| Rationale: | API Server communication contains sensitive parameters that should remain
encrypted in transit. Configure the API Server to serve only HTTPS
traffic. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_api_server_tls_cert | Identifiers: | CCE-83779-9 | References: | nerc-cip | CIP-003-8 R4.2, CIP-007-3 R5.1 | nist | SC-8, SC-8(1), SC-8(2) | pcidss | Req-2.2, Req-2.2.3, Req-2.3 | app-srg-ctr | SRG-APP-000441-CTR-001090, SRG-APP-000442-CTR-001095 | cis | 1.2.28 | bsi | APP.4.4.A17 | pcidss4 | 2.2.1, 2.2.5, 2.2.7, 2.2, 4.2.1, 4.2 |
| |
|
Rule
Use Strong Cryptographic Ciphers on the API Server
[ref] | To ensure that the API Server is configured to only use strong
cryptographic ciphers, verify the openshift-kube-apiserver
configmap contains the following set of ciphers, with no additions:
"servingInfo":{
...
"cipherSuites": [
"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384",
"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
"TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256",
"TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256",
],
...
Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the following:
{{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/configmaps/kas-config{{else}}/api/v1/namespaces/openshift-kube-apiserver/configmaps/config{{end}}
API endpoint, filter it with the jq utility using the following filter
{{if ne .hypershift_cluster "None"}}.data."config.json" | fromjson{{else}}.data."config.yaml" | fromjson{{end}}
and persist it to the local
/kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config#54842ba5cf821644f2727625c1518eba2de6e6b7ae318043d0bf7ccc9570e430
file.
Warning:
Once configured, API Server clients that cannot support modern
cryptographic ciphers will not be able to make connections to the API
server. | Rationale: | TLS ciphers have had a number of known vulnerabilities and weaknesses,
which can reduce the protection provided. By default, OpenShift supports
a number of TLS ciphersuites including some that have security concerns,
weakening the protection provided. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_api_server_tls_cipher_suites | References: | | |
|
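To review the currently configured cipher suites (a sketch assuming a standalone cluster and jq):
$ oc get configmap config -n openshift-kube-apiserver -o json \
    | jq -r '.data."config.yaml" | fromjson | .servingInfo.cipherSuites[]'
Compare the output against the list above; any additional, weaker suites should be removed via the cluster tlsSecurityProfile.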
Rule
Configure the Certificate Key for the API Server
[ref] | To ensure the API Server utilizes its own TLS certificates, the
tls-private-key-file must be configured. Verify
that the apiServerArguments section has the tls-private-key-file configured in
the config configmap in the openshift-kube-apiserver namespace
similar to:
"apiServerArguments":{
...
"tls-private-key-file": [
"/etc/kubernetes/static-pod-certs/secrets/service-network-serving-certkey/tls.key"
],
...
}
Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the following:
{{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/configmaps/kas-config{{else}}/api/v1/namespaces/openshift-kube-apiserver/configmaps/config{{end}}
API endpoint, filter it with the jq utility using the following filter
{{if ne .hypershift_cluster "None"}}[.data."config.json" | fromjson | select(.apiServerArguments["tls-private-key-file"]) | .apiServerArguments["tls-private-key-file"][] | select(test("/etc/kubernetes/certs/server/tls.key"))]{{else}}[.data."config.yaml" | fromjson | select(.apiServerArguments["tls-private-key-file"]) | .apiServerArguments["tls-private-key-file"][] | select(test("{{.var_apiserver_tls_private_key}}"))]{{end}}
and persist it to the local
/kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config#8c69c1fe6742f70a3a16c09461f57a19ef2a695143301cede2f2f5d307aa3508
file.
| Rationale: | API Server communication contains sensitive parameters that should remain
encrypted in transit. Configure the API Server to serve only HTTPS
traffic. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_api_server_tls_private_key | Identifiers: | CCE-84282-3 | References: | nerc-cip | CIP-003-8 R4.2, CIP-007-3 R5.1 | nist | SC-8, SC-8(1), SC-8(2) | pcidss | Req-2.2, Req-2.2.3, Req-2.3 | app-srg-ctr | SRG-APP-000441-CTR-001090, SRG-APP-000442-CTR-001095 | cis | 1.2.28 | bsi | APP.4.4.A17 | pcidss4 | 2.2.1, 2.2.5, 2.2.7, 2.2 |
| |
|
Rule
Ensure APIServer is not configured with Old tlsSecurityProfile
[ref] | The configuration tlsSecurityProfile specifies TLS configurations
to be used while establishing connections with the externally exposed
servers. Though secure transport mode is used for establishing connections,
the protocols used may not always be strong enough to avoid interception and
manipulation of the data in transport. TLS Security profile Old should be
avoided, as it supports vulnerable protocols, ciphers, and algorithms which
could lead to security breaches.
Update tlsSecurityProfile from Old to Intermediate using the following command:
oc patch apiservers.config.openshift.io cluster --type 'json' --patch '[{"op": "add", "path": "/spec/tlsSecurityProfile/intermediate", "value": {}}, {"op": "replace", "path": "/spec/tlsSecurityProfile/type", "value": "Intermediate"}, {"op": "remove", "path": "/spec/tlsSecurityProfile/old"}]'
For more information, follow
the relevant OpenShift documentation. Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/config.openshift.io/v1/apiservers/cluster API endpoint to the local /kubernetes-api-resources/apis/config.openshift.io/v1/apiservers/cluster file. | Rationale: | The authenticity and integrity of the container platform and communication
between nodes and components must be secure. If an insecure protocol,
cipher, or algorithm is used during transmission of data, the data can be
intercepted and manipulated. To thwart manipulation of data during
transmission, secure protocols, ciphers, and algorithms must be used. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_api_server_tls_security_profile_not_old | Identifiers: | CCE-86223-5 | References: | | |
|
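Before patching, the currently configured profile can be inspected:
$ oc get apiserver cluster -o jsonpath='{.spec.tlsSecurityProfile.type}{"\n"}'
An empty result means no explicit profile is set, in which case the platform default (Intermediate) applies.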
Group
OpenShift - Confinement
Group contains 1 rule |
[ref]
Contains evaluations to configure and assess the confinement of the cluster's
applications and workloads. |
Rule
Make sure the Security Profiles Operator is installed
[ref] | Security Profiles Operator provides a way to define secure computing (seccomp) profiles and
SELinux profiles as custom resources that are synchronized to every node in a given namespace.
Using security profiles can increase security at the container level in your cluster.
Seccomp security profiles list the syscalls a process can make, and SELinux security profiles
provide a label-based system that restricts access and usage of processes, applications, and
files. Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/operators.coreos.com/v1alpha1/namespaces/openshift-security-profiles/subscriptions/security-profiles-operator-sub API endpoint to the local /kubernetes-api-resources/apis/operators.coreos.com/v1alpha1/namespaces/openshift-security-profiles/subscriptions/security-profiles-operator-sub file. | Rationale: | An application that runs with privileges can be attacked and have its privileges exploited.
Confining applications limits the actions an attacker can perform when they are compromised. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_security_profiles_operator_exists | Identifiers: | CCE-86168-2 | References: | | |
|
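A manual check that the operator subscription exists and its install succeeded (names follow the API path above):
$ oc get subscription security-profiles-operator-sub -n openshift-security-profiles
$ oc get csv -n openshift-security-profiles
The ClusterServiceVersion should report a Succeeded phase.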
Group
OpenShift etcd Settings
Group contains 1 rule |
[ref]
Contains rules that check correct OpenShift etcd settings. |
Rule
Configure Recurring Backups For etcd
[ref] |
Back up your cluster's etcd data regularly and store it in a secure location, ideally outside the OpenShift Container Platform environment. Do not take an etcd backup before the first certificate rotation completes, which occurs 24 hours after installation; otherwise the backup will contain expired certificates. It is also recommended to take etcd backups during non-peak usage hours, because the etcd snapshot has a high I/O cost.
For more information, follow
the relevant documentation.
| Rationale: | While etcd automatically recovers from temporary failures, issues may arise if an etcd cluster loses more than (N-1)/2 of its members or when an update goes wrong.
Recurring backups of etcd enable you to recover from a disastrous failure. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_etcd_backup | Identifiers: | CCE-88188-8 | References: | | |
|
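As a concrete starting point, OpenShift ships a backup script that can be run on a control plane node; a sketch (the node name is illustrative):
$ oc debug node/master-0 -- chroot /host /usr/local/bin/cluster-backup.sh /home/core/assets/backup
This produces an etcd snapshot plus static pod resources; scheduling recurring runs and moving the output to secure off-cluster storage are left to the cluster operator.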
Group
Kubernetes - General Security Practices
Group contains 23 rules |
[ref]
Contains evaluations for general security practices for operating a Kubernetes environment. |
Rule
Ensure the notification is enabled for file integrity operator
[ref] | The OpenShift platform provides the File Integrity Operator to monitor for unwanted
file changes, and this control ensures that proper notification alerting is enabled so that system
administrators and security personnel are notified about the alerts. Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the following:
/apis/monitoring.coreos.com/v1/prometheusrules
API endpoint, filter it with the jq utility using the following filter
[.items[] | select(.metadata.name =="file-integrity") | .metadata.name]
and persist it to the local
/kubernetes-api-resources/apis/monitoring.coreos.com/v1/prometheusrules#dda8d6e19f5a89264301ce56ece4df115a14d8a85e3ae6bd3cd8eccd234252c5
file.
| Rationale: | The File Integrity Operator is able to send out alerts. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_file_integrity_notification_enabled | Identifiers: | CCE-90572-9 | References: | | |
|
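A manual equivalent of the check above (assuming jq):
$ oc get prometheusrules --all-namespaces -o json \
    | jq '[.items[] | select(.metadata.name == "file-integrity") | .metadata.name]'
A non-empty result indicates the alerting rule for the File Integrity Operator is present.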
Rule
Apply Security Context to Your Pods and Containers
[ref] | Apply Security Context to your Pods and Containers | Rationale: | A security context defines the operating system security settings (uid, gid,
capabilities, SELinux role, etc.) applied to a container. When designing your
containers and pods, make sure that you configure the security context for your
pods, containers, and volumes. A security context is a property defined in the
deployment yaml. It controls the security parameters that will be assigned to
the pod/container/volume. There are two levels of security context: pod level
security context, and container level security context. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_general_apply_scc | References: | | |
|
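A minimal pod-level and container-level security context sketch (all names and values are illustrative, not mandated by this rule):
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  securityContext:           # pod level, applies to all containers
    runAsNonRoot: true
  containers:
  - name: app
    image: example.com/app:latest
    securityContext:         # container level, overrides/extends the pod level
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
Note that on OpenShift, Security Context Constraints (SCCs) additionally validate and may mutate these fields.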
Rule
A Backup Solution Has To Be Installed
[ref] | Backup and Restore are fundamental practices when it comes to disaster recovery. By utilizing backup software you are able to back up (and restore) data which would otherwise be lost if your cluster crashed beyond recoverability.
There are multiple backup solutions on the market, which diverge in features: some might only back up your cluster, while others might also be able to back up VMs or PVCs running in your cluster. Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the following:
/apis/apiextensions.k8s.io/v1/customresourcedefinitions?limit=500
API endpoint, filter it with the jq utility using the following filter
[.items[] | if select(.metadata.name | test("{{.var_backup_solution_crds_regex}}"))!=null then true else false end]
and persist it to the local
/kubernetes-api-resources/apis/apiextensions.k8s.io/v1/customresourcedefinitions?limit=500#5ecc4195e33073b3ebac60dfafee48c8b29e3e0f551fd8fc26ba9271eb059d88
file.
| Rationale: | Backup and Recovery abilities are a necessity to recover from a disaster. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_general_backup_solution_installed | Identifiers: | CCE-90185-0 | References: | | |
|
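A hedged example of verifying that a backup solution's CRDs are present, using OADP/Velero purely as one possible product:
$ oc get crds | grep -i velero
Adjust the pattern to match the CRDs of whichever backup solution is deployed in your environment.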
Rule
Each Namespace should only host one application
[ref] | Use namespaces to isolate your Kubernetes objects. | Rationale: | Assigning a dedicated namespace to an application (or parts of an application)
allows you to granularly control the access to this application on a Kubernetes
level. It also allows you to control the network flow from and to other namespaces
more easily. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_general_namespace_separation | Identifiers: | CCE-90279-1 | References: | bsi | APP.4.4.A1, SYS.1.6.A3 |
| |
|
Rule
Create Network Boundaries between Functional Different Nodes
[ref] | Use different Networks for Control Plane, Worker and Individual Application Services. | Rationale: | Separation on a Network level might help to hinder lateral movement of an attacker and subsequently reduce the impact of an attack. It might also enable you to provide additional external network control (like firewalls). | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_general_network_separation | Identifiers: | CCE-86851-3 | References: | bsi | APP.4.4.A7, APP.4.4.A14, SYS.1.6.A3, SYS.1.6.A5 |
| |
|
Rule
Create Boundaries between Resources using Nodes or Clusters
[ref] | Use Nodes or Clusters to isolate Workloads with high protection requirements.
Run the following command and review the pods and how they are deployed on Nodes.
$ oc get pod -o=custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,APP:.metadata.labels.app\.kubernetes\.io/name,NODE:.spec.nodeName --all-namespaces | grep -v "openshift-"
You can use labels or other data as custom fields to help you identify the parts of an application.
Ensure that Applications with high protection requirements are not colocated on Nodes or in Clusters
with workloads of lower protection requirements. | Rationale: | Assigning workloads with high protection requirements to specific nodes creates an additional
boundary (the node) between workloads of high protection requirements and workloads which might
follow less strict requirements. An adversary who has attacked a less protected workload now has
additional obstacles in their movement towards the more highly protected workloads. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_general_node_separation | Identifiers: | CCE-88903-0 | References: | bsi | APP.4.4.A14, APP.4.4.A15, SYS.1.6.A3 |
| |
|
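A sketch of steering a sensitive workload onto dedicated nodes (the label key and value are illustrative):
$ oc label node node1.example.com workload-class=high-protection
The workload's pod spec then selects those nodes with a matching nodeSelector (spec.nodeSelector: workload-class: high-protection); taints and tolerations can additionally keep lower-protection workloads off those nodes.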
Rule
Ensure that GitOps Operator is deployed
[ref] | Red Hat OpenShift GitOps is a declarative continuous delivery platform based on
Argo CD. It enables teams to adopt GitOps principles for managing cluster configurations
and automating secure and repeatable application delivery across hybrid multi-cluster
Kubernetes environments.
Following GitOps and infrastructure as code principles, you can store the configuration of
clusters and applications in Git repositories and use Git workflows to roll them out to
the target clusters. Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/pipelines.openshift.io/v1alpha1/gitopsservices?limit=5 API endpoint to the local /kubernetes-api-resources/apis/pipelines.openshift.io/v1alpha1/gitopsservices?limit=5 file. | Rationale: | GitOps provides a means to track system configuration changes. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_gitops_operator_exists | Identifiers: | CCE-86134-4 | References: | | |
|
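A manual check that the GitOps service is deployed; the instance name and namespace below are assumptions based on a default installation:
$ oc get gitopsservice cluster -n openshift-gitops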
Rule
Ensure that the LifecycleAndUtilization Profile for the Kube Descheduler Operator is Enabled
[ref] | If there is an increased risk of external influences and a very high need for protection, pods should be stopped and restarted regularly.
No pod should run for more than 24 hours. The availability of the applications in the pod should be ensured. Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the following:
/apis/operator.openshift.io/v1/kubedeschedulers
API endpoint, filter it with the jq utility using the following filter
[ .items[].spec | if any(.profiles[]; . =="LifecycleAndUtilization") and .deschedulingIntervalSeconds <= {{.kube_descheduler_interval}} and .mode == "Automatic" then true else false end]
and persist it to the local
/kubernetes-api-resources/apis/operator.openshift.io/v1/kubedeschedulers#6292f8a18dd8e868e60870514f32e1f873d9929c729e745b909e4bd834c20922
file.
| Rationale: | If there is an increased risk of external influences and a very high need for protection, pods should be stopped and restarted regularly. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_kube_descheduler_lifecycle_policy | References: | | |
|
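A sketch of a KubeDescheduler object that would satisfy the filter above (the interval is illustrative and must not exceed your policy's maximum):
apiVersion: operator.openshift.io/v1
kind: KubeDescheduler
metadata:
  name: cluster
  namespace: openshift-kube-descheduler-operator
spec:
  mode: Automatic
  deschedulingIntervalSeconds: 3600
  profiles:
  - LifecycleAndUtilization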
Rule
Ensure that the Kube Descheduler operator is deployed
[ref] | If there is an increased risk of external influences and a very high need for protection, pods should be stopped and restarted regularly.
No pod should run for more than 24 hours. The availability of the applications in the pod should be ensured. Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/operators.coreos.com/v1alpha1/subscriptions API endpoint to the local /kubernetes-api-resources/apis/operators.coreos.com/v1alpha1/subscriptions file. | Rationale: | If there is an increased risk of external influences and a very high need for protection, pods should be stopped and restarted regularly. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_kube_descheduler_operator_exists | References: | | |
|
Rule
Set Pod Lifetime for the Deschedulers
[ref] | If there is an increased risk of external influences and a very high need for protection, pods should be stopped and restarted regularly.
No pod should run for more than 24 hours. The availability of the applications in the pod should be ensured. | Rationale: | If there is an increased risk of external influences and a very high need for protection, pods should be stopped and restarted regularly. With this an attacker who gained control over a pod loses it and the pod gets restarted from a known good state (the image). | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_kube_descheduler_podlifetime | References: | | |
|
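A hedged sketch of limiting pod lifetime through the descheduler's profile customizations, with the 24h value following the guidance above:
spec:
  profileCustomizations:
    podLifetime: 24h
  profiles:
  - LifecycleAndUtilization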
Rule
Ensure that the kubeadmin secret has been removed
[ref] | The kubeadmin user is meant to be a temporary user used for
bootstrapping purposes. It is preferable to assign system
administrators whose users are backed by an Identity Provider.
Make sure to remove the user as
described in the documentation
Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the /api/v1/namespaces/kube-system/secrets/kubeadmin API endpoint to the local /kubernetes-api-resources/api/v1/namespaces/kube-system/secrets/kubeadmin file. | Rationale: | The kubeadmin user has an auto-generated password
and a self-signed certificate, and has effectively cluster-admin
permissions; therefore, it's considered a security liability. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_kubeadmin_removed | Identifiers: | CCE-90387-2 | References: | nerc-cip | CIP-004-6 R2.2.2, CIP-004-6 R2.2.3, CIP-007-3 R.1.3, CIP-007-3 R2, CIP-007-3 R5, CIP-007-3 R5.1.1, CIP-007-3 R5.1.3, CIP-007-3 R5.2.1, CIP-007-3 R5.2.3, CIP-007-3 R6.1, CIP-007-3 R6.2, CIP-007-3 R6.3, CIP-007-3 R6.4 | nist | AC-2(2), AC-2(7), AC-2(9), AC-2(10), AC-12(1), IA-2(5), MA-4, SC-12(1) | pcidss | Req-2.1 | app-srg-ctr | SRG-APP-000023-CTR-000055, CNTR-OS-000030, CNTR-OS-000040, CNTR-OS-000440 | cis | 3.1.1, 5.1.1 | bsi | APP.4.4.A3 | pcidss4 | 2.2.1, 2.2.2, 2.2, 8.2.2, 8.2, 8.3 | stigref | SV-257507r921464_rule, SV-257508r921467_rule, SV-257542r921569_rule |
| |
|
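Once an identity provider is configured and another user holds cluster-admin, the removal boils down to deleting the secret:
$ oc delete secret kubeadmin -n kube-system
Perform this only after verifying the replacement administrator access, or administrative access to the cluster can be permanently lost.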
Rule
Ensure that all workloads have liveness and readiness probes
[ref] | Configuring Kubernetes liveness and readiness probes is essential for ensuring the security and
reliability of a system. These probes actively monitor container health and readiness, facilitating
automatic actions like restarting or rescheduling unresponsive instances for improved reliability.
They play a proactive role in issue detection, allowing timely problem resolution, and contribute
to efficient scaling and traffic distribution. | Rationale: | Many applications running for long periods of time eventually transition to broken states, and
cannot recover except by being restarted. Kubernetes provides liveness probes to detect and remedy
such situations.
Sometimes, applications are temporarily unable to serve traffic. For example, an application might
need to load large data or configuration files during startup, or depend on external services after
startup. In such cases, you don't want to kill the application, but you don't want to send it
requests either. Kubernetes provides readiness probes to detect and mitigate these situations.
A pod with containers reporting that they are not ready does not receive traffic through Kubernetes
Services. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_liveness_readiness_probe_in_workload | References: | bsi | APP.4.4.A11, SYS.1.6.A3 |
| |
|
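A minimal container probe sketch (paths, port, and timings are illustrative and should match the application's actual health endpoints):
spec:
  containers:
  - name: app
    image: example.com/app:latest
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20
    readinessProbe:
      httpGet:
        path: /ready
        port: 8080
      periodSeconds: 10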
Rule
Ensure that project templates autocreate Resource Quotas
[ref] |
Configure a template for newly created projects to use default resource
quotas and make sure this template is referenced from the default
project template.
The OpenShift Container Platform API server automatically provisions
new projects based on the project template that is identified by
the projectRequestTemplate parameter in the cluster’s project
configuration resource.
As a cluster administrator, you can modify the default project template
so that new projects created would satisfy the chosen compliance
standards.
For more information, follow
the relevant documentation.
Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/config.openshift.io/v1/projects/cluster API endpoint to the local /kubernetes-api-resources/apis/config.openshift.io/v1/projects/cluster file.
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the following:
/apis/template.openshift.io/v1/namespaces/openshift-config/templates
API endpoint, filter it with the jq utility using the following filter
[.items[] | any(.objects[]?; .kind == "ResourceQuota") ]
and persist it to the local
/kubernetes-api-resources/apis/template.openshift.io/v1/namespaces/openshift-config/templates#e60f58ef612a182073e9f6fe0ebe9ea96a706422dc65572af8d6aa9839d94f61
file.
| Rationale: |
Running different applications on the same Kubernetes cluster creates a risk of
a "noisy neighbor" when one application monopolizes cluster resources.
A resource quota, defined by a ResourceQuota object, provides constraints
that limit aggregate resource consumption per project. It can limit
the quantity of objects that can be created in a project by type, as
well as the total amount of compute resources and storage that might
be consumed by resources in that project.
Editing the default project template to include ResourceQuotas in
all new namespaces ensures that all namespaces include at least some
ResourceQuotas objects.
Ensuring that the project configuration references
a project template that sets up the required objects for new projects ensures
that all new projects will be set in accordance with centralized settings.
| Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_project_config_and_template_resource_quota | Identifiers: | CCE-90734-5 | References: | nist | SC-5, SC-5(1) | app-srg-ctr | SRG-APP-000246-CTR-000605, SRG-APP-000435-CTR-001070, CNTR-OS-000620, CNTR-OS-000630, CNTR-OS-000800, CNTR-OS-000810 | bsi | SYS.1.6.A15 | stigref | SV-257554r921605_rule, SV-257555r921608_rule, SV-257565r921638_rule, SV-257566r921641_rule |
| |
|
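A sketch of the documented workflow; the template name below is the default emitted by the bootstrap command:
$ oc adm create-bootstrap-project-template -o yaml > template.yaml
# edit template.yaml and add a ResourceQuota object to its objects list
$ oc create -f template.yaml -n openshift-config
$ oc patch project.config.openshift.io/cluster --type=merge \
    -p '{"spec":{"projectRequestTemplate":{"name":"project-request"}}}'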
Rule
Ensure that project templates autocreate Resource Quotas
[ref] | Configure a template for newly created projects to use default resource
quotas and make sure this template is referenced from the default
project template.
For more information, follow
the relevant documentation. Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the following:
/apis/template.openshift.io/v1/namespaces/openshift-config/templates
API endpoint, filter it with the jq utility using the following filter
[.items[] | any(.objects[]?; .kind == "ResourceQuota") ]
and persist it to the local
/kubernetes-api-resources/apis/template.openshift.io/v1/namespaces/openshift-config/templates#e60f58ef612a182073e9f6fe0ebe9ea96a706422dc65572af8d6aa9839d94f61
file.
| Rationale: | Running different applications on the same Kubernetes cluster creates a risk of
a "noisy neighbor" when one application monopolizes cluster resources.
A resource quota, defined by a ResourceQuota object, provides constraints
that limit aggregate resource consumption per project. It can limit
the quantity of objects that can be created in a project by type, as
well as the total amount of compute resources and storage that might
be consumed by resources in that project.
Ensuring that the project configuration references
a project template that sets up the required objects for new projects ensures
that all new projects will be set in accordance with centralized settings. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_project_template_resource_quota | Identifiers: | CCE-86372-0 | References: | nist | SC-5, SC-5(1) | app-srg-ctr | SRG-APP-000246-CTR-000605, SRG-APP-000246-CTR-000605 | bsi | SYS.1.6.A15 |
| |
|
Rule
Ensure that all daemonsets have resource limits
[ref] | When deploying an application, it is important to tune based on memory and CPU consumption,
allocating enough resources for the application to function properly. Images provided by
OpenShift Dedicated behave properly within the confines of the memory they are allocated.
However, any application images must pay attention to the specific resources required to
ensure they are available.
If the node where a Pod is running has enough of a resource available, it's possible (and allowed)
for a container to use more resource than its request for that resource specifies.
However, a container is not allowed to use more than its resource limit. Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the following:
/apis/apps/v1/daemonsets?limit=500
API endpoint, filter it with the jq utility using the following filter
[ .items[] | select(.metadata.namespace | startswith("kube-") or startswith("openshift-") | not) | select(.metadata.namespace != "rhacs-operator" and ({{if ne .var_daemonset_limit_namespaces_exempt_regex "None"}}.metadata.namespace | test("{{.var_daemonset_limit_namespaces_exempt_regex}}") | not{{else}}true{{end}})) | select( .spec.template.spec.containers[].resources.requests.cpu == null or .spec.template.spec.containers[].resources.requests.memory == null or .spec.template.spec.containers[].resources.limits.cpu == null or .spec.template.spec.containers[].resources.limits.memory == null ) | .metadata.name ]
and persist it to the local
/kubernetes-api-resources/apis/apps/v1/daemonsets?limit=500#a1c11605c6b91d6d3f26cad5f67056df41a5d43e06bf3588455f537ad5e90a55
file.
| Rationale: | Resource requests/limits provide constraints that limit aggregate resource consumption
per container. This helps prevent resource starvation. When deploying your
application, it is important to tune based on memory and CPU consumption,
allocating enough resources for the application to function properly. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_resource_requests_limits_in_daemonset | References: | | |
|
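A minimal sketch of the per-container requests and limits this rule expects (values are illustrative); the same stanza applies to the Deployment and StatefulSet rules that follow:
spec:
  template:
    spec:
      containers:
      - name: app
        image: example.com/app:latest
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 500m
            memory: 256Mi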
Rule
Ensure that all deployments have resource limits
[ref] | When deploying an application, it is important to tune based on memory and CPU consumption,
allocating enough resources for the application to function properly. Images provided by
OpenShift Dedicated behave properly within the confines of the memory they are allocated.
However, any application images must pay attention to the specific resources required to
ensure they are available.
If the node where a Pod is running has enough of a resource available, it's possible (and allowed)
for a container to use more resource than its request for that resource specifies.
However, a container is not allowed to use more than its resource limit. Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the following:
/apis/apps/v1/deployments?limit=500
API endpoint, filter it with the jq utility using the following filter
[ .items[] | select(.metadata.namespace | startswith("kube-") or startswith("openshift-") | not) | select(.metadata.namespace != "rhacs-operator" and ({{if ne .var_deployment_limit_namespaces_exempt_regex "None"}}.metadata.namespace | test("{{.var_deployment_limit_namespaces_exempt_regex}}") | not{{else}}true{{end}})) | select( .spec.template.spec.containers[].resources.requests.cpu == null or .spec.template.spec.containers[].resources.requests.memory == null or .spec.template.spec.containers[].resources.limits.cpu == null or .spec.template.spec.containers[].resources.limits.memory == null ) | .metadata.name ]
and persist it to the local
/kubernetes-api-resources/apis/apps/v1/deployments?limit=500#bc8e70ebc8291e061a2917bbee5d3d3a2e56d8e6c8c7cf1bcc891e0d2606a26f
file.
| Rationale: | Resource requests/limits provide constraints that limit aggregate resource consumption
per container. This helps prevent resource starvation. When deploying your
application, it is important to tune based on memory and CPU consumption,
allocating enough resources for the application to function properly. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_resource_requests_limits_in_deployment | References: | | |
|
Rule
Ensure that all statefulsets have resource limits
[ref] | When deploying an application, it is important to tune based on memory and CPU consumption,
allocating enough resources for the application to function properly. Images provided by
OpenShift Dedicated behave properly within the confines of the memory they are allocated.
However, any application images must pay attention to the specific resources required to
ensure they are available.
If the node where a Pod is running has enough of a resource available, it's possible (and allowed)
for a container to use more resource than its request for that resource specifies.
However, a container is not allowed to use more than its resource limit. Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the following:
/apis/apps/v1/statefulsets?limit=500
API endpoint, filter it with the jq utility using the following filter
[ .items[] | select(.metadata.namespace | startswith("kube-") or startswith("openshift-") | not) | select(.metadata.namespace != "rhacs-operator" and ({{if ne .var_statefulset_limit_namespaces_exempt_regex "None"}}.metadata.namespace | test("{{.var_statefulset_limit_namespaces_exempt_regex}}") | not{{else}}true{{end}})) | select( .spec.template.spec.containers[].resources.requests.cpu == null or .spec.template.spec.containers[].resources.requests.memory == null or .spec.template.spec.containers[].resources.limits.cpu == null or .spec.template.spec.containers[].resources.limits.memory == null ) | .metadata.name ]
and persist it to the local
/kubernetes-api-resources/apis/apps/v1/statefulsets?limit=500#f44f38d4fe5c9464618dbed2411d4af523db0c3dc3823f05c263774ec150adf6
file.
| Rationale: | Resource requests/limits provide constraints that limit aggregate resource consumption
per container. This helps prevent resource starvation. When deploying your
application, it is important to tune based on memory and CPU consumption,
allocating enough resources for the application to function properly. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_resource_requests_limits_in_statefulset | References: | | |
|
Rule
Ensure workloads use resource requests and limits
[ref] |
There are two ways to enable resource requests and limits; you can create either:
A multi-project quota, defined by a ClusterResourceQuota object, allows quotas
to be shared across multiple projects. Resources used in each selected project
are aggregated and that aggregate is used to limit resources across all the
selected projects.
A resource quota, defined by a ResourceQuota object, provides constraints that
limit aggregate resource consumption per project. It can limit the quantity of
objects that can be created in a project by type, as well as the total amount
of compute resources and storage that might be consumed by resources in that project.
We want to make sure either a ClusterResourceQuota is used in a cluster or a ResourceQuota
is used per namespace.
To configure ClusterResourceQuota, follow the directions in
the documentation
To configure ResourceQuota Per Project, follow the directions in
the documentation
Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the following:
/api/v1/resourcequotas
API endpoint, filter with the jq utility using the following filter
[.items[] | select((.metadata.namespace | startswith("openshift") | not) and (.metadata.namespace | startswith("kube-") | not) and .metadata.namespace != "default" and .metadata.namespace != "rhacs-operator" and ({{if ne .var_resource_requests_quota_per_project_exempt_regex "None"}}.metadata.namespace | test("{{.var_resource_requests_quota_per_project_exempt_regex}}") | not{{else}}true{{end}})) | .metadata.namespace] | unique
and persist it to the local
/kubernetes-api-resources/api/v1/resourcequotas#4326a181a1e3e8a8e02ffb58e7d3ca9e62ed0e144a5277b1f7551fdbcfeca0a8
file.
/api/v1/namespaces
API endpoint, filter with the jq utility using the following filter
[.items[] | select((.metadata.name | startswith("openshift") | not) and (.metadata.name | startswith("kube-") | not) and .metadata.name != "default" and .metadata.name != "rhacs-operator" and ({{if ne .var_resource_requests_quota_per_project_exempt_regex "None"}}.metadata.name | test("{{.var_resource_requests_quota_per_project_exempt_regex}}") | not{{else}}true{{end}}))]
and persist it to the local
/kubernetes-api-resources/api/v1/namespaces#3ae63defe5cbb61225edb84d8e19f601be933d063305c1ea1e0381297c6258d6
file.
/apis/quota.openshift.io/v1/clusterresourcequotas
API endpoint, filter with the jq utility using the following filter
[.items[] | .metadata.name]
and persist it to the local
/kubernetes-api-resources/apis/quota.openshift.io/v1/clusterresourcequotas#8de615d314dbafe1ae4ce3d7c1a604bd1bafcac867393e7256ecb869e6d752a8
file.
| Rationale: | Resource quotas provide constraints that limit aggregate resource consumption
per project. This helps prevent resource starvation. When deploying your
application, it is important to tune based on memory and CPU consumption,
allocating enough resources for the application to function properly. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_resource_requests_quota | Identifiers: | CCE-90588-5 | References: | nist | SC-5, SC-5(1), SC-5(2) | bsi | SYS.1.6.A15 |
| |
|
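As a sketch of the per-project variant, a ResourceQuota constrains aggregate requests and limits within one namespace (names and values are illustrative):
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: example-namespace
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 4Gi
    limits.cpu: "4"
    limits.memory: 8Gi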
Rule
Ensure workloads use cluster resource requests and limits
[ref] |
There are two ways to enforce resource requests and limits through quotas:
A multi-project quota, defined by a ClusterResourceQuota object, allows quotas
to be shared across multiple projects. Resources used in each selected project
are aggregated and that aggregate is used to limit resources across all the
selected projects.
A resource quota, defined by a ResourceQuota object, provides constraints that
limit aggregate resource consumption per project. It can limit the quantity of
objects that can be created in a project by type, as well as the total amount
of compute resources and storage that might be consumed by resources in that project.
We want to make sure either a ClusterResourceQuota is used in a cluster or a ResourceQuota
is used per namespace.
To configure ClusterResourceQuota, follow the directions in
the documentation
To configure ResourceQuota Per Project, follow the directions in
the documentation
Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the following:
/apis/quota.openshift.io/v1/clusterresourcequotas
API endpoint, filter with the jq utility using the following filter
[.items[] | .metadata.name]
and persist it to the local
/kubernetes-api-resources/apis/quota.openshift.io/v1/clusterresourcequotas#8de615d314dbafe1ae4ce3d7c1a604bd1bafcac867393e7256ecb869e6d752a8
file.
| Rationale: | Resource quotas provide constraints that limit aggregate resource consumption
per project. This helps prevent resource starvation. When deploying your
application, it is important to tune based on memory and CPU consumption,
allocating enough resources for the application to function properly. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_resource_requests_quota_cluster | Identifiers: | CCE-90581-0 | References: | nist | SC-5, SC-5(1), SC-5(2) | bsi | SYS.1.6.A15 |
| |
|
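A minimal ClusterResourceQuota sketch that aggregates the consumption of all projects selected by a label (selector and values are illustrative):
apiVersion: quota.openshift.io/v1
kind: ClusterResourceQuota
metadata:
  name: example-quota
spec:
  selector:
    labels:
      matchLabels:
        team: example
  quota:
    hard:
      requests.cpu: "4"
      requests.memory: 8Gi
      limits.cpu: "8"
      limits.memory: 16Gi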
Rule
Ensure workloads use resource requests and limits per namespace
[ref] |
There are two ways to enforce resource requests and limits through quotas:
A multi-project quota, defined by a ClusterResourceQuota object, allows quotas
to be shared across multiple projects. Resources used in each selected project
are aggregated and that aggregate is used to limit resources across all the
selected projects.
A resource quota, defined by a ResourceQuota object, provides constraints that
limit aggregate resource consumption per project. It can limit the quantity of
objects that can be created in a project by type, as well as the total amount
of compute resources and storage that might be consumed by resources in that project.
We want to make sure either a ClusterResourceQuota is used in a cluster or a ResourceQuota
is used per namespace.
To configure ClusterResourceQuota, follow the directions in
the documentation
To configure ResourceQuota Per Project, follow the directions in
the documentation
Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the following:
/api/v1/resourcequotas
API endpoint, filter with the jq utility using the following filter
[.items[] | select((.metadata.namespace | startswith("openshift") | not) and (.metadata.namespace | startswith("kube-") | not) and .metadata.namespace != "default" and .metadata.namespace != "rhacs-operator" and ({{if ne .var_resource_requests_quota_per_project_exempt_regex "None"}}.metadata.namespace | test("{{.var_resource_requests_quota_per_project_exempt_regex}}") | not{{else}}true{{end}})) | .metadata.namespace] | unique
and persist it to the local
/kubernetes-api-resources/api/v1/resourcequotas#4326a181a1e3e8a8e02ffb58e7d3ca9e62ed0e144a5277b1f7551fdbcfeca0a8
file.
/api/v1/namespaces
API endpoint, filter with the jq utility using the following filter
[.items[] | select((.metadata.name | startswith("openshift") | not) and (.metadata.name | startswith("kube-") | not) and .metadata.name != "default" and .metadata.name != "rhacs-operator" and ({{if ne .var_resource_requests_quota_per_project_exempt_regex "None"}}.metadata.name | test("{{.var_resource_requests_quota_per_project_exempt_regex}}") | not{{else}}true{{end}}))]
and persist it to the local
/kubernetes-api-resources/api/v1/namespaces#3ae63defe5cbb61225edb84d8e19f601be933d063305c1ea1e0381297c6258d6
file.
| Rationale: | Resource quotas provide constraints that limit aggregate resource consumption
per project. This helps prevent resource starvation. When deploying your
application, it is important to tune based on memory and CPU consumption,
allocating enough resources for the application to function properly. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_resource_requests_quota_per_project | Identifiers: | CCE-90582-8 | References: | nist | SC-5, SC-5(1), SC-5(2), SI-16 | app-srg-ctr | SRG-APP-000246-CTR-000605, SRG-APP-000435-CTR-001070, SRG-APP-000450-CTR-001105, CNTR-OS-000620, CNTR-OS-000630, CNTR-OS-000800, CNTR-OS-000810, CNTR-OS-000860, CNTR-OS-000870 | bsi | SYS.1.6.A15 | stigref | SV-257554r921605_rule, SV-257555r921608_rule, SV-257565r921638_rule, SV-257566r921641_rule, SV-257568r921647_rule, SV-257569r921650_rule |
| |
|
Rule
Ensure TLS v1.2 is the minimum for the OpenShift APIServer
[ref] | Verify the TLS version for the OpenShift APIServer. Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the following:
{{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/configmaps/openshift-apiserver{{else}}/api/v1/namespaces/openshift-apiserver/configmaps/config{{end}}
API endpoint, filter with the jq utility using the following filter
{{if ne .hypershift_cluster "None"}}.data."config.yaml"{{else}}.data."config.yaml" | fromjson{{end}}
and persist it to the local
/kubernetes-api-resources/api/v1/namespaces/openshift-apiserver/configmaps/config#45ae2c88fe28d39a42f19e165a1612353224e9663eb369000e03c7efcd10ef59
file.
| Rationale: | Use of weak or untested encryption algorithms undermines the purpose of utilizing encryption to
protect data. The system must implement cryptographic modules adhering to the higher
standards approved by the federal government since this provides assurance they have been tested
and validated. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_tls_version_check_apiserver | Identifiers: | CCE-85863-9 | References: | | |
|
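The minimum TLS version of the API servers is driven by the cluster-wide APIServer resource; for example, selecting the predefined Intermediate profile enforces a minimum of TLS 1.2 (a sketch; verify the available profile names against your release's documentation):
apiVersion: config.openshift.io/v1
kind: APIServer
metadata:
  name: cluster
spec:
  tlsSecurityProfile:
    type: Intermediate
    intermediate: {}
This resource can be edited with oc edit apiserver cluster.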
Rule
This is a helper rule to fetch the required API resource for detecting the HyperShift OCP version
[ref] | no description Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the following:
/apis/hypershift.openshift.io/v1beta1/namespaces/{{.hypershift_namespace_prefix}}/hostedclusters/{{.hypershift_cluster}}
API endpoint, filter with the jq utility using the following filter
[.status.version.history[].version]
and persist it to the local
/kubernetes-api-resources/hypershift/version
file.
This rule is hidden.
| Rationale: | no rationale | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_version_detect_in_hypershift | |
|
Rule
This is a helper rule to fetch the required API resource for detecting the OCP version
[ref] | no description Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the following:
{{.ocp_version_api_path}}
API endpoint, filter with the jq utility using the following filter
{{.ocp_version_yaml_path}}
and persist it to the local
/kubernetes-api-resources/ocp/version
file.
This rule is hidden.
| Rationale: | no rationale | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_version_detect_in_ocp | |
|
Group
Kubernetes Kubelet Settings
Group contains 2 rules |
[ref]
The Kubernetes Kubelet is an agent that runs on each node in the cluster. It
makes sure that containers are running in a pod.
The kubelet takes a set of PodSpecs that are provided through various
mechanisms and ensures that the containers described in those PodSpecs are
running and healthy. The kubelet doesn’t manage containers which were not
created by Kubernetes. |
Rule
Ensure That The kubelet Client Certificate Is Correctly Set
[ref] | To ensure the kubelet TLS client certificate is configured, edit the
kubelet configuration file /etc/kubernetes/kubelet.conf
and configure the kubelet certificate file.
tlsCertFile: /path/to/TLS/cert.crt
Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the following:
{{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/configmaps/kas-config{{else}}/api/v1/namespaces/openshift-kube-apiserver/configmaps/config{{end}}
API endpoint, filter with the jq utility using the following filter
{{if ne .hypershift_cluster "None"}}[.data."config.json" | fromjson | select(.apiServerArguments["kubelet-client-certificate"]) | .apiServerArguments["kubelet-client-certificate"][] | select(test("/etc/kubernetes/certs/kubelet/tls.crt"))]{{else}}[.data."config.yaml" | fromjson | select(.apiServerArguments["kubelet-client-certificate"]) | .apiServerArguments["kubelet-client-certificate"][] | select(test("{{.var_apiserver_kubelet_client_cert}}"))]{{end}}
and persist it to the local
/kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config#e5500055b4aa2fcf00dc09ad0e66e44b6b42d67f8d53d1e72ff81b32f0e09865
file.
| Rationale: | Without cryptographic integrity protections, information can be
altered by unauthorized users without detection. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_kubelet_configure_tls_cert | Identifiers: | CCE-83396-2 | References: | nerc-cip | CIP-003-8 R4.2, CIP-007-3 R5.1 | nist | SC-8, SC-8(1), SC-8(2) | pcidss | Req-2.2, Req-2.2.3, Req-2.3 | app-srg-ctr | SRG-APP-000441-CTR-001090, SRG-APP-000442-CTR-001095 | cis | 4.2.9 | bsi | APP.4.4.A17 | pcidss4 | 2.2.1, 2.2.5, 2.2.7, 2.2 |
| |
|
Rule
Ensure That The kubelet Server Key Is Correctly Set
[ref] | To ensure the kubelet TLS private key is configured, edit the
kubelet configuration file /etc/kubernetes/kubelet.conf
and configure the kubelet private key file.
tlsPrivateKeyFile: /path/to/TLS/private.key
Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the following:
{{if ne .hypershift_cluster "None"}}/api/v1/namespaces/{{.hypershift_namespace_prefix}}-{{.hypershift_cluster}}/configmaps/kas-config{{else}}/api/v1/namespaces/openshift-kube-apiserver/configmaps/config{{end}}
API endpoint, filter with the jq utility using the following filter
{{if ne .hypershift_cluster "None"}}[.data."config.json" | fromjson | select(.apiServerArguments["kubelet-client-key"]) | .apiServerArguments["kubelet-client-key"][] | select(test("/etc/kubernetes/certs/kubelet/tls.key"))]{{else}}[.data."config.yaml" | fromjson | select(.apiServerArguments["kubelet-client-key"]) | .apiServerArguments["kubelet-client-key"][] | select(test("{{.var_apiserver_kubelet_client_key}}"))]{{end}}
and persist it to the local
/kubernetes-api-resources/api/v1/namespaces/openshift-kube-apiserver/configmaps/config#1e2b7c1158e0b9a602cb20d62c82b4660907bb57b63dac11c6c7c64211c49c69
file.
| Rationale: | Without cryptographic integrity protections, information can be
altered by unauthorized users without detection. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_kubelet_configure_tls_key | Identifiers: | CCE-90614-9 | References: | nerc-cip | CIP-003-8 R4.2, CIP-007-3 R5.1 | nist | SC-8, SC-8(1), SC-8(2) | pcidss | Req-2.2, Req-2.2.3, Req-2.3 | app-srg-ctr | SRG-APP-000441-CTR-001090, SRG-APP-000442-CTR-001095 | cis | 4.2.9 | bsi | APP.4.4.A17 | pcidss4 | 2.2.1, 2.2.5, 2.2.7, 2.2 |
| |
|
Group
OpenShift - Master Node Settings
Group contains 1 rule |
[ref]
Contains evaluations for the master node configuration settings. |
Rule
Verify that Control Plane Nodes are not schedulable for workloads
[ref] | User workloads should not be colocated with control plane workloads. To ensure that the scheduler won't schedule workloads on the master nodes, the taint "node-role.kubernetes.io/master" with the "NoSchedule" effect is set by default in most cluster configurations (excluding SNO and Compact Clusters).
The scheduling of the master nodes is centrally configurable without reboot via oc edit schedulers.config.openshift.io cluster; for details, see the Red Hat Solution
https://access.redhat.com/solutions/4564851
If you run a setup that requires colocation of control plane and user workloads, you need to exclude this rule.
Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the following:
/api/v1/nodes
API endpoint, filter with the jq utility using the following filter
.items[] | select(.metadata.labels."node-role.kubernetes.io/master" == "" or .metadata.labels."node-role.kubernetes.io/control-plane" == "" ) | .spec.taints[] | select(.key == "node-role.kubernetes.io/master" and .effect == "NoSchedule")
and persist it to the local
/kubernetes-api-resources/api/v1/nodes#17a90876774be6b77ba144848cd6abef7db26fc4cfb1b71314faa3dcda45911c
file.
| Rationale: | By separating user workloads from the control plane workloads, we can better ensure that there are no ill effects from workload bursts on each other. Furthermore, we ensure that an adversary who gains control over a badly secured workload container is not colocated with critical components of the control plane. In some setups it might be necessary to make the control plane schedulable for workloads, e.g. Single Node OpenShift (SNO) or Compact Cluster (Three Node Cluster) setups. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_master_taint_noschedule | Identifiers: | CCE-88731-5 | References: | | |
|
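The scheduler configuration referenced above exposes this as a single field; setting mastersSchedulable to false restores the NoSchedule taint on the control plane nodes (sketch):
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
spec:
  mastersSchedulable: false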
Group
Kubernetes - Network Configuration and Firewalls
Group contains 6 rules |
[ref]
Most systems must be connected to a network of some
sort, and this brings with it the substantial risk of network
attack. This section discusses the security impact of decisions
about networking which must be made when configuring a system.
This section also discusses firewalls, network access
controls, and other network security frameworks, which allow
system-level rules to be written that can limit an attackers' ability
to connect to your system. These rules can specify that network
traffic should be allowed or denied from certain IP addresses,
hosts, and networks. The rules can also specify which of the
system's network services are available to particular hosts or
networks. |
Rule
Ensure Appropriate Network Policies are Configured
[ref] | Configure Network Policies in any application namespace in an appropriate way, so that
only the required communications are allowed. The Network Policies should precisely define
source and target using label selectors and ports. | Rationale: | By default, all pod-to-pod traffic within a cluster is allowed. Network
Policy creates a pod-level firewall that can be used to restrict traffic
between sources. Pod traffic is restricted by having a Network Policy that
selects it (through the use of labels). Once there is any Network Policy in a
namespace selecting a particular pod, that pod will reject any connections
that are not allowed by any Network Policy. Other pods in the namespace that
are not selected by any Network Policy will continue to accept all traffic.
Implementing Kubernetes Network Policies with minimal allowed communication enhances security
by reducing entry points and limiting attacker movement within the cluster. It ensures pods and
services communicate only with necessary entities, reducing unauthorized access risks. In case
of a breach, these policies contain compromised pods, preventing widespread malicious activity.
Additionally, they enhance monitoring and detection of anomalous network activities. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_configure_appropriate_network_policies | Identifiers: | CCE-89537-5 | References: | | |
|
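For example, a namespace could start from a default-deny policy and then allow only the required traffic, such as ingress to an API tier from a frontend on one port (all names, labels and ports are illustrative):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: example-namespace
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: example-namespace
spec:
  podSelector:
    matchLabels:
      app: api
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080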
Rule
Check Egress IPs Assignable to Nodes
[ref] | Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the following:
/api/v1/nodes
API endpoint, filter with the jq utility using the following filter
[ .items[] | .metadata.labels["k8s.ovn.org/egress-assignable"] != null ]
and persist it to the local
/kubernetes-api-resources/api/v1/nodes#d5749842142262e738c55528e212900732fadfa5c5d7668c27d7547258a7645d
file.
| Rationale: | By using egress IPs you can provide a consistent IP to external services and configure special firewall rules which precisely select this IP. This allows for more control over external systems. Furthermore, you can bind the IPs to specific nodes, which then handle all the network connections, to achieve a better separation of duties between the different nodes. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_configure_egress_ip_node_assignable | Identifiers: | CCE-86787-9 | References: | | |
|
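With the OVN-Kubernetes network plugin, a node is marked as a host for egress IPs through the label checked above; for example (the node name is a placeholder):
oc label node <node-name> k8s.ovn.org/egress-assignable=""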
Rule
Limiting Network Bandwidth in Pods
[ref] | Network bandwidth SHOULD be appropriately reserved and limited. | Rationale: | Extend pod configuration with network bandwidth annotations to prevent
a bad actor or a malfunctioning pod from consuming all the bandwidth in the cluster.
A network bandwidth limitation at the pod level can mitigate the impact on the cluster. | Severity: | unknown | Rule ID: | xccdf_org.ssgproject.content_rule_configure_network_bandwidth | Identifiers: | CCE-87610-2 | References: | | |
|
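Bandwidth reservations are expressed as pod annotations; a sketch, assuming the cluster's network plugin honors the kubernetes.io/ingress-bandwidth and kubernetes.io/egress-bandwidth annotations (names and values are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: bandwidth-limited
  annotations:
    kubernetes.io/ingress-bandwidth: 10M
    kubernetes.io/egress-bandwidth: 10M
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest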
Rule
Ensure that the CNI in use supports Network Policies
[ref] | There are a variety of CNI plugins available for Kubernetes. If the CNI in
use does not support Network Policies it may not be possible to effectively
restrict traffic in the cluster. OpenShift supports Kubernetes NetworkPolicy
using a Kubernetes Container Network Interface (CNI) plug-in. Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the following:
/apis/operator.openshift.io/v1/networks/cluster
API endpoint, filter with the jq utility using the following filter
[.spec.defaultNetwork.type]
and persist it to the local
/kubernetes-api-resources/apis/operator.openshift.io/v1/networks/cluster#35e33d6dc1252a03495b35bd1751cac70041a511fa4d282c300a8b83b83e3498
file.
| Rationale: | Kubernetes network policies are enforced by the CNI plugin in use. As such
it is important to ensure that the CNI plugin supports both Ingress and
Egress network policies. | Severity: | high | Rule ID: | xccdf_org.ssgproject.content_rule_configure_network_policies | References: | nerc-cip | CIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1 | nist | CM-6, CM-6(1) | pcidss | Req-1.1.4, Req-1.2, Req-2.2 | app-srg-ctr | SRG-APP-000038-CTR-000105, CNTR-OS-000100 | cis | 5.3.1 | bsi | APP.4.4.A7, APP.4.4.A18, SYS.1.6.A5, SYS.1.6.A21 | pcidss4 | 1.4.1, 1.4, 2.2.1, 2.2 | stigref | SV-257514r921485_rule |
| |
|
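The configured default network type can be inspected with, for example:
oc get network.operator cluster -o jsonpath='{.spec.defaultNetwork.type}'
The network plug-ins shipped with OpenShift, such as OVNKubernetes, support NetworkPolicy.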
Rule
Ensure that application Namespaces have Network Policies defined.
[ref] | Use network policies to isolate traffic in your cluster network. Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the following:
/apis/networking.k8s.io/v1/networkpolicies
API endpoint, filter with the jq utility using the following filter
[.items[] | select((.metadata.namespace | startswith("openshift") | not) and (.metadata.namespace | startswith("kube-") | not) and .metadata.namespace != "default" and ({{if ne .var_network_policies_namespaces_exempt_regex "None"}}.metadata.namespace | test("{{.var_network_policies_namespaces_exempt_regex}}") | not{{else}}true{{end}})) | .metadata.namespace] | unique
and persist it to the local
/kubernetes-api-resources/apis/networking.k8s.io/v1/networkpolicies#7400bb301fff2f7fc7b1b0fb7448b8e3f15222a8d23f992204315b19eeefa72f
file.
/api/v1/namespaces
API endpoint, filter with the jq utility using the following filter
[.items[] | select((.metadata.name | startswith("openshift") | not) and (.metadata.name | startswith("kube-") | not) and .metadata.name != "default" and ({{if ne .var_network_policies_namespaces_exempt_regex "None"}}.metadata.name | test("{{.var_network_policies_namespaces_exempt_regex}}") | not{{else}}true{{end}}))]
and persist it to the local
/kubernetes-api-resources/api/v1/namespaces#f673748db2dd4e4f0ad55d10ce5e86714c06da02b67ddb392582f71ef81efab2
file.
| Rationale: | Running different applications on the same Kubernetes cluster creates a risk of one
compromised application attacking a neighboring application. Network segmentation is
important to ensure that containers can communicate only with those they are supposed
to. When a network policy is introduced to a given namespace, all traffic not allowed
by the policy is denied. However, if there are no network policies in a namespace all
traffic will be allowed into and out of the pods in that namespace. | Severity: | high | Rule ID: | xccdf_org.ssgproject.content_rule_configure_network_policies_namespaces | References: | nerc-cip | CIP-003-8 R4, CIP-003-8 R4.2, CIP-003-8 R5, CIP-003-8 R6, CIP-004-6 R2.2.4, CIP-004-6 R3, CIP-007-3 R2, CIP-007-3 R2.1, CIP-007-3 R2.2, CIP-007-3 R2.3, CIP-007-3 R5.1, CIP-007-3 R6.1 | nist | AC-4, AC-4(21), CA-3(5), CM-6, CM-6(1), CM-7, CM-7(1), SC-7, SC-7(3), SC-7(5), SC-7(8), SC-7(12), SC-7(13), SC-7(18), SC-7(10), SI-4(22) | pcidss | Req-1.1.4, Req-1.2, Req-1.2.1, Req-1.3.1, Req-1.3.2, Req-2.2 | app-srg-ctr | SRG-APP-000038-CTR-000105, CNTR-OS-000100 | cis | 5.3.2 | bsi | APP.4.4.A7, APP.4.4.A18, SYS.1.6.A5, SYS.1.6.A21 | pcidss4 | 1.2.6, 1.2, 1.3.1, 1.3, 1.4.1, 1.4, 2.2.1, 2.2 | stigref | SV-257514r921485_rule |
| |
|
Rule
Ensure that project templates autocreate Network Policies
[ref] | Configure a template for newly created projects to use default network
policies and make sure this template is referenced from the default
project template.
The OpenShift Container Platform API server automatically provisions
new projects based on the project template that is identified by
the projectRequestTemplate parameter in the cluster’s project
configuration resource.
As a cluster administrator, you can modify the default project template
so that newly created projects satisfy the chosen compliance
standards.
For more information on configuring default network policies, follow
the relevant documentation. Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/config.openshift.io/v1/projects/cluster API endpoint to the local /kubernetes-api-resources/apis/config.openshift.io/v1/projects/cluster file.
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the following:
/apis/template.openshift.io/v1/namespaces/openshift-config/templates
API endpoint, filter with the jq utility using the following filter
[.items[] | any(.objects[]?; .kind == "NetworkPolicy") ]
and persist it to the local
/kubernetes-api-resources/apis/template.openshift.io/v1/namespaces/openshift-config/templates#8044d7f899788c96acdbb06244837a64dfa1e0973c59b2ad26596e080e12482d
file.
| Rationale: | Running different applications on the same Kubernetes cluster creates a risk of one
compromised application attacking a neighboring application. Network segmentation is
important to ensure that containers can communicate only with those they are supposed
to. When a network policy is introduced to a given namespace, all traffic not allowed
by the policy is denied.
Editing the default project template to include NetworkPolicies in
all new namespaces ensures that all namespaces include at least some
NetworkPolicy objects.
Ensuring that the project configuration references
a project template that sets up the required objects for new projects ensures
that all new projects will be set in accordance with centralized settings. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_project_config_and_template_network_policy | Identifiers: | CCE-86070-0 | References: | app-srg-ctr | SRG-APP-000039-CTR-000110, CNTR-OS-000110 | bsi | APP.4.4.A7, APP.4.4.A18 | stigref | SV-257515r921488_rule |
| |
|
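Once a project template containing NetworkPolicy objects exists in the openshift-config namespace, the cluster project configuration must reference it; a sketch (the template name is illustrative):
apiVersion: config.openshift.io/v1
kind: Project
metadata:
  name: cluster
spec:
  projectRequestTemplate:
    name: project-request
A starting template can be generated with oc adm create-bootstrap-project-template -o yaml, extended with the desired NetworkPolicy objects, and created in the openshift-config namespace.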
Group
Role-based Access Control
Group contains 6 rules |
[ref]
Role-based access control (RBAC) objects determine
whether a user is allowed to perform a given action
within a project.
Cluster administrators can use the cluster roles and
bindings to control who has various access levels to
the OpenShift Container Platform itself
and all projects.
Developers can use local roles and bindings to control
who has access to their projects. Note that authorization
is a separate step from authentication, which is more
about determining the identity of who is taking the action. |
Rule
Ensure cluster roles are defined in the cluster
[ref] | Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/rbac.authorization.k8s.io/v1/clusterroles?limit=1000 API endpoint to the local /kubernetes-api-resources/apis/rbac.authorization.k8s.io/v1/clusterroles?limit=1000 file. | Rationale: | By defining RBAC cluster roles, one is able to limit the permissions
given to a Service Account, and thus limit the blast radius
that an account compromise would have. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_rbac_cluster_roles_defined | Identifiers: | CCE-86595-6 | References: | | |
|
Rule
Ensure that the RBAC setup follows the principle of least privilege
[ref] | Role-based access control (RBAC) objects determine whether a user is
allowed to perform a given action within a project.
If users or groups exist that are bound to roles they must not have, modify the user or group permissions using the following cluster and local role binding commands:
Remove a User from a Cluster RBAC role by executing the following:
oc adm policy remove-cluster-role-from-user role username
Remove a Group from a Cluster RBAC role by executing the following:
oc adm policy remove-cluster-role-from-group role groupname
Remove a User from a Local RBAC role by executing the following:
oc adm policy remove-role-from-user role username
Remove a Group from a Local RBAC role by executing the following:
oc adm policy remove-role-from-group role groupname
NOTE: For additional information, see https://docs.openshift.com/container-platform/latest/authentication/using-rbac.html | Rationale: | Controlling and limiting users' access to system services and resources
is key to securing the platform and limiting the intentional or
unintentional compromising of the system and its services. OpenShift
provides a robust RBAC policy system that allows for authorization
policies to be as detailed as needed. Additionally, there are two layers
of RBAC policies: the first is cluster RBAC policies, with which administrators
control who has what access to cluster-level services. The other is
Local RBAC policies, which allow project developers/administrators to
control what level of access users have to a given project or namespace. | Severity: | high | Rule ID: | xccdf_org.ssgproject.content_rule_rbac_least_privilege | Identifiers: | CCE-90678-4 | References: | nist | AC-3, CM-5(6), IA-2, IA-2(5), AC-6(10), CM-11(2), CM-5(1), CM-7(5)(b) | app-srg-ctr | SRG-APP-000033-CTR-000090, SRG-APP-000033-CTR-000095, SRG-APP-000033-CTR-000100, SRG-APP-000133-CTR-000290, SRG-APP-000133-CTR-000295, SRG-APP-000133-CTR-000300, SRG-APP-000133-CTR-000305, SRG-APP-000133-CTR-000310, SRG-APP-000148-CTR-000350, SRG-APP-000153-CTR-000375, SRG-APP-000340-CTR-000770, SRG-APP-000378-CTR-000880, SRG-APP-000378-CTR-000885, SRG-APP-000378-CTR-000890, SRG-APP-000380-CTR-000900, SRG-APP-000386-CTR-000920, CNTR-OS-000090 | cis | 5.2.10 | bsi | APP.4.4.A3, APP.4.4.A7, APP.4.4.A9, SYS.1.6.A8, SYS.1.6.A19 | pcidss4 | 2.2.1, 2.2 | stigref | SV-257513r921482_rule |
| |
|
Rule
Ensure that the cluster-admin role is only used where required
[ref] | The RBAC role cluster-admin provides wide-ranging powers over the
environment and should be used only where and when needed. | Rationale: | Kubernetes provides a set of default roles where RBAC is used. Some of these
roles such as cluster-admin provide wide-ranging privileges which should
only be applied where absolutely necessary. Roles such as cluster-admin
allow super-user access to perform any action on any resource. When used in
a ClusterRoleBinding, it gives full control over every resource in the
cluster and in all namespaces. When used in a RoleBinding, it gives full
control over every resource in the rolebinding's namespace, including the
namespace itself. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_rbac_limit_cluster_admin | References: | | |
|
Rule
Limit Access to Kubernetes Secrets
[ref] | The Kubernetes API stores secrets, which may be service
account tokens for the Kubernetes API or credentials used
by workloads in the cluster. Access to these secrets should
be restricted to the smallest possible group of users to
reduce the risk of privilege escalation. To restrict users from
secrets, remove get, list, and watch access to secret
objects in the cluster from unauthorized users. | Rationale: | Inappropriate access to secrets stored within the Kubernetes
cluster can allow for an attacker to gain additional access to
the Kubernetes cluster or external resources whose credentials
are stored as secrets. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_rbac_limit_secrets_access | References: | | |
|
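To review which subjects currently hold read access to secrets in a given namespace, you can use, for example:
oc adm policy who-can get secrets -n <namespace>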
Rule
Ensure roles are defined in the cluster
[ref] | Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/rbac.authorization.k8s.io/v1/roles?limit=1000 API endpoint to the local /kubernetes-api-resources/apis/rbac.authorization.k8s.io/v1/roles?limit=1000 file. | Rationale: | By defining RBAC roles, one is able to limit the permissions
given to a Service Account, and thus limit the blast radius
that an account compromise would have. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_rbac_roles_defined | Identifiers: | CCE-86588-1 | References: | | |
|
Rule
Minimize Wildcard Usage in Cluster and Local Roles
[ref] | Kubernetes Cluster and Local Roles provide access to resources
based on sets of objects and actions that can be taken on
those objects. It is possible to set either of these using a
wildcard * which matches all items. This violates the
principle of least privilege and leaves a cluster in a more
vulnerable state to privilege abuse. | Rationale: | The principle of least privilege recommends that users are
provided only the access required for their role and nothing
more. The use of wildcard rights grants is likely to provide
excessive rights to the Kubernetes API. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_rbac_wildcard_use | References: | | |
|
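Instead of wildcards, roles should enumerate API groups, resources, and verbs explicitly; a minimal sketch (names are illustrative):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: example-namespace
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]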
Group
Kubernetes - Registry Security Practices
Group contains 4 rules |
[ref]
Contains evaluations for Kubernetes registry security practices, and cluster-wide registry configuration. |
Rule
Allowed registries are configured
[ref] | The configuration registrySources.allowedRegistries determines the
permitted registries that the OpenShift container runtime can access for builds
and pods. This configuration setting ensures that all registries other than
those specified are blocked.
You can set the allowed repositories by applying the following manifest using
oc patch, e.g., if you save the following snippet to
/tmp/allowed-registries-patch.yaml
spec:
registrySources:
allowedRegistries:
- my-trusted-registry.internal.example.com
you would call
oc patch image.config.openshift.io cluster --patch="$(cat /tmp/allowed-registries-patch.yaml)" --type=merge
Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/config.openshift.io/v1/images/cluster API endpoint to the local /kubernetes-api-resources/apis/config.openshift.io/v1/images/cluster file. | Rationale: | Allowed registries should be configured to restrict the registries that the
OpenShift container runtime can access, and all other registries should be
blocked. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_ocp_allowed_registries | References: | nist | CM-5(3), CM-7(2), CM-7(5), CM-11 | app-srg-ctr | SRG-APP-000456-CTR-001125, CNTR-OS-000890, CNTR-OS-000900 | cis | 5.5.1 | bsi | SYS.1.6.A6, SYS.1.6.A12 | pcidss4 | 2.2.1, 2.2 | stigref | SV-257571r921656_rule, SV-257572r921659_rule |
| |
|
Rule
Allowed registries for import are configured
[ref] | The configuration allowedRegistriesForImport limits the container
image registries from which normal users may import images. This is important
to control, as a user who can stand up a malicious registry can then import
content which claims to include the SHAs of legitimate content layers.
You can set the allowed repositories for import by applying the following
manifest using oc patch, e.g., if you save the following snippet to
/tmp/allowed-import-registries-patch.yaml
spec:
allowedRegistriesForImport:
- domainName: my-trusted-registry.internal.example.com
insecure: false
you would call
oc patch image.config.openshift.io cluster --patch="$(cat /tmp/allowed-import-registries-patch.yaml)" --type=merge
Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/config.openshift.io/v1/images/cluster API endpoint to the local /kubernetes-api-resources/apis/config.openshift.io/v1/images/cluster file. | Rationale: | Allowed registries for import should be specified to limit the registries
from which users may import images. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_ocp_allowed_registries_for_import | References: | nist | CM-5(3), CM-7(2), CM-7(5), CM-11 | app-srg-ctr | SRG-APP-000456-CTR-001125, CNTR-OS-000890, CNTR-OS-000900 | cis | 5.5.1 | bsi | SYS.1.6.A6, SYS.1.6.A12 | pcidss4 | 2.2.1, 2.2 | stigref | SV-257571r921656_rule, SV-257572r921659_rule |
| |
|
Rule
Check configured allowed registries for import uses secure protocol
[ref] | The configuration allowedRegistriesForImport limits the container
image registries from which normal users may import images. This is a list
of the registries that can be trusted to contain valid images and the image
location configured is assumed to be secured unless configured otherwise. It
is important to allow only secure registries to avoid man-in-the-middle attacks,
as the insecure image import request can be impersonated and could lead to
fetching malicious content.
List all the allowed repositories for import configured with insecure set to true
using the following command:
oc get image.config.openshift.io/cluster -o json | jq '.spec | (.allowedRegistriesForImport[])? | select(.insecure==true)'
Remove or edit the listed registries having insecure set by using the command:
oc edit image.config.openshift.io/cluster
For more information, follow
the relevant documentation. Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/config.openshift.io/v1/images/cluster API endpoint to the local /kubernetes-api-resources/apis/config.openshift.io/v1/images/cluster file. | Rationale: | Configured list of allowed registries for import should be from the secure source. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_ocp_insecure_allowed_registries_for_import | Identifiers: | CCE-86235-9 | References: | | |
|
Rule
Check if any insecure registry sources is configured
[ref] | The configuration registrySources.insecureRegistries determines the
insecure registries that the OpenShift container runtime can access for builds
and pods. This configuration setting allows access to the configured registries
without TLS validation, which could lead to security breaches, and should be
avoided.
Remove any insecureRegistries configured using the following command:
oc patch image.config.openshift.io cluster --type=json -p "[{'op': 'remove', 'path': '/spec/registrySources/insecureRegistries'}]"
For more information, follow
the relevant documentation. Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/config.openshift.io/v1/images/cluster API endpoint to the local /kubernetes-api-resources/apis/config.openshift.io/v1/images/cluster file. | Rationale: | Insecure registries should not be configured; this prevents the
OpenShift container runtime from accessing registries whose content cannot be validated. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_ocp_insecure_registries | Identifiers: | CCE-86123-7 | References: | | |
|
Group
OpenShift - Risk Assessment Settings
Group contains 3 rules |
[ref]
Contains evaluations for the cluster's risk assessment configuration settings. |
Rule
Enable AutoApplyRemediation for at least one ScanSetting
[ref] | The Compliance Operator
scans the hosts and the platform (OCP)
configurations for software flaws and improper configurations according
to different compliance benchmarks. Compliance Operator allows its
scans to automatically apply remediations for failed rules, if such remediations exist.
Applying remediations automatically should only be done with careful consideration.
The Compliance Operator does not automatically resolve dependency issues that can occur between remediations. Users should perform a rescan after remediations are applied to ensure accurate results. Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/compliance.openshift.io/v1alpha1/scansettings API endpoint to the local /kubernetes-api-resources/apis/compliance.openshift.io/v1alpha1/scansettings file. | Rationale: | With AutoApplyRemediation enabled, compliance failures are corrected automatically. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_scansetting_has_autoapplyremediations | References: | | |
|
Rule
Ensure that Compliance Operator is scanning the cluster
[ref] | The Compliance Operator
scans the hosts and the platform (OCP)
configurations for software flaws and improper configurations according
to different compliance benchmarks. It uses OpenSCAP as a backend,
which is a known and certified tool to do such scans. Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the /apis/compliance.openshift.io/v1alpha1/scansettingbindings?limit=5 API endpoint to the local /kubernetes-api-resources/apis/compliance.openshift.io/v1alpha1/scansettingbindings?limit=5 file. | Rationale: | Vulnerability scanning and risk management are important detective controls
for all systems, to detect potential flaws and unauthorized access. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_scansettingbinding_exists | Identifiers: | CCE-83697-3 | References: | nerc-cip | CIP-003-8 R1.3, CIP-003-8 R4.3, CIP-003-8 R6, CIP-004-6 4.1, CIP-004-6 4.2, CIP-004-6 R3, CIP-004-6 R4, CIP-004-6 R4.2, CIP-005-6 R1, CIP-005-6 R1.1, CIP-005-6 R1.2, CIP-007-3 R3, CIP-007-3 R3.1, CIP-007-3 R6.1, CIP-007-3 R8.4 | nist | CM-6, CM-6(1), RA-5, RA-5(5), SA-4(8) | pcidss | Req-2.2.4 | app-srg-ctr | SRG-APP-000472-CTR-001170, CNTR-OS-000910 | bsi | APP.4.4.A13 | pcidss4 | 2.2.1, 2.2.6, 2.2 | stigref | SV-257573r921662_rule |
| |
|
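A ScanSettingBinding ties one or more profiles to a ScanSetting; a sketch binding the ocp4-cis profile to the default setting (the binding name is illustrative):
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: cis-compliance
  namespace: openshift-compliance
profiles:
- apiGroup: compliance.openshift.io/v1alpha1
  kind: Profile
  name: ocp4-cis
settingsRef:
  apiGroup: compliance.openshift.io/v1alpha1
  kind: ScanSetting
  name: default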
Rule
Ensure that Compliance Operator scans are running periodically
[ref] | The Compliance Operator
scans the hosts and the platform (OCP)
configurations for software flaws and improper configurations according
to different compliance benchmarks. Compliance Operator allows its
scans to be scheduled periodically using the ScanSetting
Custom Resource. Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the following:
/apis/compliance.openshift.io/v1alpha1/scansettings
API endpoint, filter with the jq utility using the following filter
[.items[]] | map(.schedule != "" and .schedule != null)
and persist it to the local
/kubernetes-api-resources/apis/compliance.openshift.io/v1alpha1/scansettings#c9e8242304a62f077a87b2b045f62b01340e80a8798e58477faa58c06e918211
file.
| Rationale: | Without periodic scanning and verification, security functions may
not operate correctly and this failure may go unnoticed within the container platform. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_scansettings_have_schedule | Identifiers: | CCE-90762-6 | References: | | |
|
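The schedule is a cron expression on the ScanSetting itself; a sketch that scans daily at 01:00 (name and schedule are illustrative):
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSetting
metadata:
  name: periodic-setting
  namespace: openshift-compliance
schedule: "0 1 * * *"
roles:
- worker
- master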
Group
Security Context Constraints (SCC)
Group contains 11 rules |
[ref]
Similar to the way that RBAC resources control user access,
administrators can use Security Context Constraints (SCCs)
to control permissions for pods. These permissions include
actions that a pod, a collection of containers, can perform
and what resources it can access. You can use SCCs to define
a set of conditions that a pod must run with in order to be
accepted into the system. |
Rule
Drop Container Capabilities
[ref] | Containers should not enable more capabilities than needed as this
opens the door for malicious use. To disable the
capabilities, the appropriate Security Context Constraints (SCCs)
should set all capabilities as * or a list of capabilities in
requiredDropCapabilities. | Rationale: | By default, containers run with a default set of capabilities as assigned
by the Container Runtime which can include dangerous or highly privileged
capabilities. Capabilities should be dropped unless absolutely critical for
the container to run software as added capabilities that are not required
allow for malicious containers or attackers. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_scc_drop_container_capabilities | References: | | |
|
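The SCC rules in this group each check one field of the SecurityContextConstraints object. For orientation, a restrictive SCC sketch combining the fields referenced by this rule and the rules that follow (the name is illustrative; the built-in restricted-v2 SCC already sets most of these):
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: example-restricted
allowHostDirVolumePlugin: false
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowPrivilegeEscalation: false
allowPrivilegedContainer: false
allowedCapabilities: null
requiredDropCapabilities:
- ALL
runAsUser:
  type: MustRunAsRange
seLinuxContext:
  type: MustRunAs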
Rule
Limit Container Capabilities
[ref] |
Containers should not enable more capabilities than needed as this
opens the door for malicious use. To enable only the
required capabilities, the appropriate Security Context Constraints (SCCs)
should set capabilities as a list in allowedCapabilities.
In case an SCC outside the default allow list in the variable
var-sccs-with-allowed-capabilities-regex is being flagged,
create a TailoredProfile and add the additional SCC to the
regular expression in the variable var-sccs-with-allowed-capabilities-regex.
An example allowing an SCC named additional follows:
apiVersion: compliance.openshift.io/v1alpha1
kind: TailoredProfile
metadata:
name: cis-additional-scc
spec:
description: Allows an additional scc
setValues:
- name: ocp4-var-sccs-with-allowed-capabilities-regex
rationale: Allow our own custom SCC
value: ^privileged$|^hostnetwork-v2$|^restricted-v2$|^nonroot-v2$|^additional$
extends: ocp4-cis
title: Modified CIS allowing one more SCC
Finally, reference this TailoredProfile in a ScanSettingBinding
For more information on Tailoring the Compliance Operator, please consult the
OpenShift documentation:
https://docs.openshift.com/container-platform/latest/security/compliance_operator/co-scans/compliance-operator-tailor.html
Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the OCP API, retrieve the following:
/apis/security.openshift.io/v1/securitycontextconstraints
API endpoint, filter with the jq utility using the following filter
[.items[] | select(.metadata.name | test("{{.var_sccs_with_allowed_capabilities_regex}}"; "") | not) | select(.allowedCapabilities != null) | .metadata.name]
and persist it to the local
/kubernetes-api-resources/apis/security.openshift.io/v1/securitycontextconstraints#821f2c3e5c4100d5609ac6070d8a51075295651614a05c65a5f744ba751c15b0
file.
| Rationale: | By default, containers run with a default set of capabilities as assigned
by the Container Runtime which can include dangerous or highly privileged
capabilities. Capabilities should be dropped unless absolutely critical for
the container to run software as added capabilities that are not required
allow for malicious containers or attackers. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_scc_limit_container_allowed_capabilities | References: | | |
|
Rule
Limit Containers Ability to use the HostDir volume plugin
[ref] | Containers should not be allowed to use the hostPath volume type unless
necessary. To prevent containers from using the host filesystem,
the appropriate Security Context Constraints (SCCs) should set
allowHostDirVolumePlugin to false. | Rationale: | hostPath volumes allow workloads to access the host filesystem
from the workload. Access to the host filesystem can be used to
escalate privileges and access resources such as keys or access
tokens.
| Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_scc_limit_host_dir_volume_plugin | Identifiers: | CCE-86255-7 | References: | nist | AC-6, AC-6(1) | app-srg-ctr | SRG-APP-000142-CTR-000330, CNTR-OS-000660 | cis | 5.2.12 | bsi | APP.4.4.A4, APP.4.4.A9, SYS.1.6.A16, SYS.1.6.A18, SYS.1.6.A19, SYS.1.6.A21 | stigref | SV-257557r921614_rule |
| |
|
Rule
Limit Containers Ability to bind to privileged ports
[ref] | Containers should be limited to binding to non-privileged ports directly
on the host. To prevent containers from binding to privileged ports
on the host, the appropriate Security Context Constraints (SCCs)
should set allowHostPorts to false. | Rationale: | Privileged ports are those ports below 1024 that require system
privileges for their use. If containers are able to use these ports,
the container must be run as a privileged user. The container platform
must stop containers that try to map to these ports directly. Allowing
non-privileged ports to be mapped to the container-privileged port
is the allowable method when a certain port is needed. An example is
mapping port 8080 externally to port 80 in the container. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_scc_limit_host_ports | Identifiers: | CCE-86205-2 | References: | | |
|
Rule
Limit Access to the Host IPC Namespace
[ref] | Containers should not be allowed access to the host's Interprocess Communication (IPC)
namespace. To prevent containers from getting access to a host's
IPC namespace, the appropriate Security Context Constraints (SCCs)
should set allowHostIPC to false. | Rationale: | A container running in the host's IPC namespace can use IPC
to interact with processes outside the container potentially
allowing an attacker to exploit a host process thereby enabling an
attacker to exploit other services. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_scc_limit_ipc_namespace | Identifiers: | CCE-84042-1 | References: | nerc-cip | CIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1 | nist | CM-6, CM-6(1) | pcidss | Req-2.2 | app-srg-ctr | SRG-APP-000516-CTR-001325, CNTR-OS-000660 | cis | 5.2.3 | bsi | APP.4.4.A4, APP.4.4.A9, SYS.1.6.A16, SYS.1.6.A18, SYS.1.6.A21 | pcidss4 | 2.2.1, 2.2 | stigref | SV-257557r921614_rule |
| |
|
Rule
Limit Use of the CAP_NET_RAW
[ref] | Containers should not enable more capabilities than needed as this
opens the door for malicious use. CAP_NET_RAW enables a container
to launch a network attack on another container or cluster. To disable the
CAP_NET_RAW capability, the appropriate Security Context Constraints (SCCs)
should set NET_RAW in requiredDropCapabilities . | Rationale: | By default, containers run with a default set of capabilities as assigned
by the Container Runtime which can include dangerous or highly privileged
capabilities. If the CAP_NET_RAW is enabled, it may be misused
by malicious containers or attackers. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_scc_limit_net_raw_capability | References: | | |
|
Rule
Limit Access to the Host Network Namespace
[ref] | Containers should not be allowed access to the host's network
namespace. To prevent containers from getting access to a host's
network namespace, the appropriate Security Context Constraints (SCCs)
should set allowHostNetwork to false. | Rationale: | A container running in the host's network namespace could
access the host network traffic to and from other pods
potentially allowing an attacker to exploit pods and network
traffic. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_scc_limit_network_namespace | Identifiers: | CCE-83492-9 | References: | nerc-cip | CIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1 | nist | CM-6, CM-6(1) | pcidss | Req-2.2 | app-srg-ctr | SRG-APP-000142-CTR-000330, CNTR-OS-000660 | cis | 5.2.4 | bsi | APP.4.4.A4, APP.4.4.A9, SYS.1.6.A16, SYS.1.6.A18, SYS.1.6.A21 | pcidss4 | 2.2.1, 2.2 | stigref | SV-257557r921614_rule |
| |
|
Rule
Limit Containers Ability to Escalate Privileges
[ref] | Containers should be limited to only the privileges required
to run and should not be allowed to escalate their privileges.
To prevent containers from escalating privileges,
the appropriate Security Context Constraints (SCCs)
should set allowPrivilegeEscalation to false. | Rationale: | Privileged containers have access to more of the Linux Kernel
capabilities and devices. If a privileged container were
compromised, an attacker would have full access to the container
and host. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_scc_limit_privilege_escalation | Identifiers: | CCE-83447-3 | References: | | |
|
Rule
Limit Privileged Container Use
[ref] | Containers should be limited to only the privileges required
to run. To prevent containers from running as privileged containers,
the appropriate Security Context Constraints (SCCs) should set
allowPrivilegedContainer to false. | Rationale: | Privileged containers have access to all Linux Kernel
capabilities and devices. If a privileged container were
compromised, an attacker would have full access to the container
and host. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_scc_limit_privileged_containers | References: | nerc-cip | CIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1 | nist | CM-6, CM-6(1) | pcidss | Req-2.2 | app-srg-ctr | SRG-APP-000342-CTR-000775, SRG-APP-000142-CTR-000330, CNTR-OS-000660 | cis | 5.2.1 | bsi | APP.4.4.A4, APP.4.4.A9, SYS.1.6.A16, SYS.1.6.A18, SYS.1.6.A21 | pcidss4 | 2.2.1, 2.2 | stigref | SV-257557r921614_rule |
| |
|
Rule
Limit Access to the Host Process ID Namespace
[ref] | Containers should not be allowed access to the host's process
ID namespace. To prevent containers from getting access to a host's
process ID namespace, the appropriate Security Context Constraints (SCCs)
should set allowHostPID to false. | Rationale: | A container running in the host's PID namespace can inspect
processes running outside the container which can be used to
escalate privileges outside of the container. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_scc_limit_process_id_namespace | References: | nerc-cip | CIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1 | nist | CM-6, CM-6(1) | pcidss | Req-2.2 | app-srg-ctr | SRG-APP-000516-CTR-001325, CNTR-OS-000660 | cis | 5.2.2 | bsi | APP.4.4.A4, APP.4.4.A9, SYS.1.6.A16, SYS.1.6.A18, SYS.1.6.A21 | pcidss4 | 2.2.1, 2.2 | stigref | SV-257557r921614_rule |
| |
|
Rule
Limit Container Running As Root User
[ref] | Containers should run as a random non-privileged user.
To prevent containers from running as root user,
the appropriate Security Context Constraints (SCCs) should set
.runAsUser.type to MustRunAsRange. | Rationale: | It is strongly recommended that containers running on OpenShift
should support running as any arbitrary UID. OpenShift will then assign
a random, non-privileged UID to the running container instance. This
avoids the risk from containers running with specific UIDs that could
map to host service accounts, or an even greater risk of running as a
root-level service. OpenShift uses the default security context constraints
(SCC), restricted, to prevent containers from running as root or other
privileged user IDs. Pods may be configured to use an SCC policy
that allows the container to run as a specific UID, including root (0),
when approved. Only a cluster administrator may grant the change of
an SCC policy. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_scc_limit_root_containers | References: | nerc-cip | CIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1 | nist | CM-6, CM-6(1) | pcidss | Req-2.2 | app-srg-ctr | SRG-APP-000342-CTR-000775, CNTR-OS-000660 | cis | 5.2.6 | bsi | APP.4.4.A4, APP.4.4.A9, SYS.1.6.A16, SYS.1.6.A18, SYS.1.6.A21 | pcidss4 | 2.2.1, 2.2 | stigref | SV-257557r921614_rule |
| |
|
Group
Kubernetes Secrets Management
Group contains 2 rules |
[ref]
Secrets let you store and manage sensitive information,
such as passwords, OAuth tokens, and ssh keys.
Such information might otherwise be put in a Pod
specification or in an image. |
Rule
Consider external secret storage
[ref] | Consider the use of an external secrets storage and management system,
instead of using Kubernetes Secrets directly, if you have more complex
secret management needs. Ensure the solution requires authentication to
access secrets, has auditing of access to and use of secrets, and encrypts
secrets. Some solutions also make it easier to rotate secrets. | Rationale: | Kubernetes supports secrets as first-class objects, but care needs to be
taken to ensure that access to secrets is carefully limited. Using an
external secrets provider can ease the management of access to secrets,
especially where secrets are used across both Kubernetes and non-Kubernetes
environments. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_secrets_consider_external_storage | References: | | |
|
Rule
Do Not Use Environment Variables with Secrets
[ref] | Secrets should be mounted as data volumes instead of environment
variables. | Rationale: | Environment variables are very susceptible to
malicious hijacking methods by an adversary; as such,
environment variables should never be used for secrets. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_secrets_no_environment_variables | References: | | |
|
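A sketch of mounting a secret as a volume instead of exposing it through environment variables (all names are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest
    volumeMounts:
    - name: credentials
      mountPath: /etc/credentials
      readOnly: true
  volumes:
  - name: credentials
    secret:
      secretName: app-credentials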