Group
Guide to the Secure Configuration of Amazon Elastic Kubernetes Service
Group contains 9 groups and 17 rules |
Group
Kubernetes Settings
Group contains 8 groups and 17 rules |
[ref]
Each section of this configuration guide includes information about the
configuration of a Kubernetes cluster and a set of recommendations for
hardening the configuration. For each hardening recommendation, information
on how to implement the control and/or how to verify or audit the control
is provided. In some cases, remediation information is also provided.
Some of the settings in the hardening guide are in place by default. The
audit information for these settings is provided in order to verify that
the cluster administrator has not made changes that would be less secure.
A small number of items require configuration.
Finally, there are some recommendations that require decisions by the
system operator, such as audit log size, retention, and related settings. |
Group
Kubernetes - Account and Access Control
Group contains 1 rule |
[ref]
In traditional Unix security, if an attacker gains
shell access to a certain login account, they can perform any action
or access any file to which that account has access. The same
idea applies to cloud technology such as Kubernetes. Therefore,
making it more difficult for unauthorized people to gain shell
access to accounts, particularly to privileged accounts, is a
necessary part of securing a system. This section introduces
mechanisms for restricting access to accounts under
Kubernetes. |
Rule
Use Dedicated Service Accounts
[ref] | Kubernetes workloads should not use cluster node service accounts to
authenticate to Amazon EKS APIs. Each Kubernetes workload that needs to
authenticate to other AWS services using AWS IAM should be provisioned with a
dedicated service account. | Rationale: | Manual approaches for authenticating Kubernetes workloads running on Amazon
EKS against AWS APIs include storing service account keys as a Kubernetes secret
(which introduces manual key rotation and the potential for key compromise) and
using the underlying node's IAM role, which violates the principle of least
privilege on a multi-tenanted node: when one pod needs access to a service,
every other pod on the node gains the same access through the shared node
credentials. | Severity: | unknown | Rule ID: | xccdf_org.ssgproject.content_rule_dedicated_service_accounts | Identifiers: | CCE-87818-1 | References: | | |
|
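For example, with IAM Roles for Service Accounts (IRSA), a workload can be bound to a dedicated IAM role by annotating its Kubernetes service account with the role ARN. The sketch below assumes an IAM OIDC provider is already associated with the cluster; the account ID, role name, and namespace are placeholders:
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: my-workload
      namespace: my-app
      annotations:
        eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/my-workload-role
Pods that reference this service account via serviceAccountName then receive temporary credentials scoped to that role only.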
Group
Authentication
Group contains 1 rule |
[ref]
In cloud workloads, there are many ways to create and configure multiple
authentication services. Some of these authentication methods may not be
secure or follow common methodologies, or they may not be secure by default.
This section introduces mechanisms for configuring authentication systems
in Kubernetes. |
Rule
Manage Users with AWS IAM
[ref] | Amazon EKS uses IAM to provide authentication to your Kubernetes cluster
through the AWS IAM Authenticator for Kubernetes. You can configure the stock
kubectl client to work with Amazon EKS by installing the AWS IAM
Authenticator for Kubernetes and modifying your kubectl configuration file to
use it for authentication. | Rationale: | On- and off-boarding users is often difficult to automate and prone to error.
Using a single source of truth for user permissions reduces the number of
locations from which an individual must be off-boarded, and prevents users from
gaining unique permission sets that increase the cost of audit. | Severity: | unknown | Rule ID: | xccdf_org.ssgproject.content_rule_iam_integration | Identifiers: | CCE-86301-9 | References: | | |
|
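As an illustration, IAM principals are commonly mapped to Kubernetes groups through the aws-auth ConfigMap in the kube-system namespace; the role ARN, username, and group below are placeholders:
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: aws-auth
      namespace: kube-system
    data:
      mapRoles: |
        - rolearn: arn:aws:iam::111122223333:role/eks-cluster-admins
          username: cluster-admin
          groups:
            - system:masters
Newer EKS clusters can express the same mapping with EKS access entries instead of editing this ConfigMap directly.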
Group
Kubernetes - General Security Practices
Group contains 1 rule |
[ref]
Contains evaluations for general security practices for operating a Kubernetes environment. |
Rule
Consider Fargate for Untrusted Workloads
[ref] | It is best practice to restrict or fence off untrusted workloads when running in
a multi-tenant environment. | Rationale: | AWS Fargate is a technology that provides on-demand, right-sized compute
capacity for containers. With AWS Fargate, you no longer have to provision,
configure, or scale groups of virtual machines to run containers. This
removes the need to choose server types, decide when to scale your node
groups, or optimize cluster packing.
You can control which pods start on Fargate and how they run with Fargate
profiles, which are defined as part of your Amazon EKS cluster.
Amazon EKS integrates Kubernetes with AWS Fargate by using controllers that
are built by AWS using the upstream, extensible model provided by Kubernetes.
These controllers run as part of the Amazon EKS managed Kubernetes control
plane and are responsible for scheduling native Kubernetes pods onto Fargate.
The Fargate controllers include a new scheduler that runs alongside the
default Kubernetes scheduler in addition to several mutating and validating
admission controllers. When you start a pod that meets the criteria for
running on Fargate, the Fargate controllers running in the cluster recognize,
update, and schedule the pod onto Fargate.
Each pod running on Fargate has its own isolation boundary and does not share
the underlying kernel, CPU resources, memory resources, or elastic network
interface with another pod. | Severity: | unknown | Rule ID: | xccdf_org.ssgproject.content_rule_fargate | Identifiers: | CCE-89091-3 | References: | | |
|
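As a sketch, a Fargate profile can be declared in an eksctl ClusterConfig so that pods in a designated namespace (optionally matching labels) are scheduled onto Fargate; the cluster name, region, namespace, and label are placeholders:
    apiVersion: eksctl.io/v1alpha5
    kind: ClusterConfig
    metadata:
      name: my-cluster
      region: us-east-1
    fargateProfiles:
      - name: untrusted-workloads
        selectors:
          - namespace: untrusted
            labels:
              workload-class: untrusted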
Group
Kubernetes Kubelet Settings
Group contains 2 rules |
[ref]
The Kubernetes Kubelet is an agent that runs on each node in the cluster. It
makes sure that containers are running in a pod.
The kubelet takes a set of PodSpecs that are provided through various
mechanisms and ensures that the containers described in those PodSpecs are
running and healthy. The kubelet doesn’t manage containers which were not
created by Kubernetes. |
Rule
kubelet - Configure the Client CA Certificate
[ref] | By default, the kubelet is not configured with a CA certificate, which
can subject the kubelet to man-in-the-middle attacks.
To configure a client CA certificate, edit the kubelet configuration
file /etc/kubernetes/kubelet/kubelet-config.json
on the kubelet node(s) and set the parameter below:
    authentication:
      ...
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt
      ...
| Rationale: | Not having a CA certificate for the kubelet will subject the kubelet to possible
man-in-the-middle attacks, especially on unsafe or untrusted networks.
Configuring a client CA certificate allows the kubelet to verify the identity
of clients, such as the API server, that connect to its API. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_kubelet_configure_client_ca | References: | | |
|
Rule
kubelet - Ensure that the --read-only-port is secured
[ref] | Disable the read-only port. | Rationale: | The Kubelet process provides a read-only API in addition to the main Kubelet API.
This read-only API allows unauthenticated access and could expose potentially
sensitive information about the cluster. | Severity: | medium | Rule ID: | xccdf_org.ssgproject.content_rule_kubelet_read_only_port_secured | References: | nerc-cip | CIP-003-8 R6, CIP-004-6 R3, CIP-007-3 R6.1 | nist | CM-6, CM-6(1) | cis | 3.2.4 |
| |
|
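As a minimal sketch, assuming the kubelet reads a KubeletConfiguration file (for example /etc/kubernetes/kubelet/kubelet-config.json on EKS nodes), the read-only port is disabled by setting it to 0:
    readOnlyPort: 0
The equivalent kubelet command-line flag is --read-only-port=0.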
Group
Kubernetes - Logging Settings
Group contains 1 rule |
[ref]
Contains evaluations for the cluster's logging configuration settings. |
Rule
Ensure Audit Logging is Enabled
[ref] | The audit logs are part of the EKS managed Kubernetes control plane logs that
are managed by Amazon EKS. Amazon EKS is integrated with AWS CloudTrail, a
service that provides a record of actions taken by a user, role, or an AWS
service in Amazon EKS. CloudTrail captures all API calls for Amazon EKS as
events. The calls captured include calls from the Amazon EKS console and code
calls to the Amazon EKS API operations. | Rationale: | Exporting logs and metrics to a dedicated, persistent datastore such as
CloudTrail ensures availability of audit data following a cluster security
event, and provides a central location for analysis of log and metric data
collated from multiple sources. | Severity: | unknown | Rule ID: | xccdf_org.ssgproject.content_rule_audit_logging | Identifiers: | CCE-87445-3 | References: | | |
|
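One way to enable this at cluster creation, sketched below with eksctl and placeholder cluster metadata, is to request all control plane log types, including audit logs, in the ClusterConfig so they are delivered to CloudWatch Logs:
    apiVersion: eksctl.io/v1alpha5
    kind: ClusterConfig
    metadata:
      name: my-cluster
      region: us-east-1
    cloudWatch:
      clusterLogging:
        enableTypes:
          - api
          - audit
          - authenticator
          - controllerManager
          - scheduler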
Group
Kubernetes - Network Configuration and Firewalls
Group contains 6 rules |
[ref]
Most systems must be connected to a network of some
sort, and this brings with it the substantial risk of network
attack. This section discusses the security impact of decisions
about networking which must be made when configuring a system.
This section also discusses firewalls, network access
controls, and other network security frameworks, which allow
system-level rules to be written that can limit an attacker's ability
to connect to your system. These rules can specify that network
traffic should be allowed or denied from certain IP addresses,
hosts, and networks. The rules can also specify which of the
system's network services are available to particular hosts or
networks. |
Rule
Ensure that application Namespaces have Network Policies defined.
[ref] | Use network policies to isolate traffic in your cluster network. Warning:
This rule's check operates on the cluster configuration dump.
Therefore, you need to use a tool that can query the Kubernetes API, retrieve the following:
/apis/networking.k8s.io/v1/networkpolicies
API endpoint, filter with the jq utility using the following filter
[.items[] | select((.metadata.namespace | startswith("openshift") | not) and (.metadata.namespace | startswith("kube-") | not) and .metadata.namespace != "default" and ({{if ne .var_network_policies_namespaces_exempt_regex "None"}}.metadata.namespace | test("{{.var_network_policies_namespaces_exempt_regex}}") | not{{else}}true{{end}})) | .metadata.namespace] | unique
and persist it to the local
/kubernetes-api-resources/apis/networking.k8s.io/v1/networkpolicies#7400bb301fff2f7fc7b1b0fb7448b8e3f15222a8d23f992204315b19eeefa72f
file.
/api/v1/namespaces
API endpoint, filter with the jq utility using the following filter
[.items[] | select((.metadata.name | startswith("openshift") | not) and (.metadata.name | startswith("kube-") | not) and .metadata.name != "default" and ({{if ne .var_network_policies_namespaces_exempt_regex "None"}}.metadata.name | test("{{.var_network_policies_namespaces_exempt_regex}}") | not{{else}}true{{end}}))]
and persist it to the local
/kubernetes-api-resources/api/v1/namespaces#f673748db2dd4e4f0ad55d10ce5e86714c06da02b67ddb392582f71ef81efab2
file.
| Rationale: | Running different applications on the same Kubernetes cluster creates a risk of one
compromised application attacking a neighboring application. Network segmentation is
important to ensure that containers can communicate only with those they are supposed
to. When a network policy is introduced to a given namespace, all traffic not allowed
by the policy is denied. However, if there are no network policies in a namespace, all
traffic will be allowed into and out of the pods in that namespace. | Severity: | high | Rule ID: | xccdf_org.ssgproject.content_rule_configure_network_policies_namespaces | References: | nerc-cip | CIP-003-8 R4, CIP-003-8 R4.2, CIP-003-8 R5, CIP-003-8 R6, CIP-004-6 R2.2.4, CIP-004-6 R3, CIP-007-3 R2, CIP-007-3 R2.1, CIP-007-3 R2.2, CIP-007-3 R2.3, CIP-007-3 R5.1, CIP-007-3 R6.1 | nist | AC-4, AC-4(21), CA-3(5), CM-6, CM-6(1), CM-7, CM-7(1), SC-7, SC-7(3), SC-7(5), SC-7(8), SC-7(12), SC-7(13), SC-7(18), SC-7(10), SI-4(22) | pcidss | Req-1.1.4, Req-1.2, Req-1.2.1, Req-1.3.1, Req-1.3.2, Req-2.2 | app-srg-ctr | SRG-APP-000038-CTR-000105 | cis | 4.3.2 | bsi | APP.4.4.A7, APP.4.4.A18, SYS.1.6.A5, SYS.1.6.A21 | pcidss4 | 1.2.6, 1.2, 1.3.1, 1.3, 1.4.1, 1.4, 2.2.1, 2.2 |
| |
|
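For example, a default-deny policy applied to each application namespace blocks all ingress and egress until more specific policies allow the required traffic; the namespace name below is a placeholder:
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-deny-all
      namespace: my-app
    spec:
      podSelector: {}
      policyTypes:
        - Ingress
        - Egress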
Rule
Ensure Network Policy is Enabled
[ref] | Use Network Policy to restrict pod to pod traffic within a cluster and
segregate workloads. | Rationale: | By default, all pod to pod traffic within a cluster is allowed. Network
Policy creates a pod-level firewall that can be used to restrict traffic
between sources. Pod traffic is restricted by having a Network Policy that
selects it (through the use of labels). Once there is any Network Policy in a
namespace selecting a particular pod, that pod will reject any connections
that are not allowed by any Network Policy. Other pods in the namespace that
are not selected by any Network Policy will continue to accept all traffic.
Network Policies are managed via the Kubernetes Network Policy API and
enforced by a network plugin; simply creating the resource without a
compatible network plugin to implement it will have no effect. EKS supports
Network Policy enforcement through the use of Calico. | Severity: | unknown | Rule ID: | xccdf_org.ssgproject.content_rule_configure_network_policy | Identifiers: | CCE-88207-6 | References: | | |
|
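Assuming a network plugin that enforces NetworkPolicy (such as Calico) is installed, a policy like the following sketch would allow only labeled frontend pods to reach backend pods in the same namespace; all names, labels, and the port are placeholders:
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-frontend-to-backend
      namespace: my-app
    spec:
      podSelector:
        matchLabels:
          app: backend
      ingress:
        - from:
            - podSelector:
                matchLabels:
                  app: frontend
          ports:
            - protocol: TCP
              port: 8080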
Rule
Encrypt Traffic to Load Balancers and Workloads
[ref] | Encrypt traffic to HTTPS load balancers using TLS certificates. | Rationale: | Encrypting traffic between users and your Kubernetes workload is fundamental
to protecting data sent over the web. | Severity: | unknown | Rule ID: | xccdf_org.ssgproject.content_rule_configure_tls | Identifiers: | CCE-89133-3 | References: | | |
|
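As a sketch of TLS termination at an AWS load balancer, a Service of type LoadBalancer can reference an ACM certificate through annotations; the certificate ARN, ports, and selector are placeholders:
    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:111122223333:certificate/example-id
        service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
        service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    spec:
      type: LoadBalancer
      selector:
        app: my-app
      ports:
        - port: 443
          targetPort: 8080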
Rule
Restrict Access to the Control Plane Endpoint
[ref] | Enable Endpoint Private Access to restrict access to the cluster's control
plane to only an allowlist of authorized IPs. | Rationale: | Authorized networks are a way of specifying a restricted range of IP
addresses that are permitted to access your cluster's control plane.
Amazon EKS uses both Transport Layer Security (TLS) and authentication
to provide secure access to your cluster's control plane from the public
internet. This provides you the flexibility to administer your cluster from
anywhere; however, you might want to further restrict access to a set of IP
addresses that you control. You can set this restriction by specifying an
authorized network. Restricting access to an authorized network can provide
additional security benefits for your container cluster, including:
- Better protection from outsider attacks: Authorized networks provide an
additional layer of security by limiting external access to a specific set
of addresses you designate, such as those that originate from your
premises. This helps protect access to your cluster in the case of a
vulnerability in the cluster's authentication or authorization
mechanism.
- Better protection from insider attacks: Authorized networks help protect
your cluster from accidental leaks of master certificates from your
company's premises. Leaked certificates used from outside Amazon EC2 and
outside the authorized IP ranges (for example, from addresses outside your
company) are still denied access.
| Severity: | unknown | Rule ID: | xccdf_org.ssgproject.content_rule_control_plane_access | Identifiers: | CCE-86182-3 | References: | | |
|
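As an illustration, public access to the control plane endpoint can be limited to an allowlist of CIDR ranges in an eksctl ClusterConfig fragment; the CIDR below is a placeholder:
    vpc:
      clusterEndpoints:
        publicAccess: true
        privateAccess: true
      publicAccessCIDRs:
        - 203.0.113.0/24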
Rule
Ensure Private Endpoint Access
[ref] | Disable access to the Kubernetes API from outside the node network if it is
not required. | Rationale: | In a private cluster, the master node has two endpoints, a private and public
endpoint. The private endpoint is the internal IP address of the master,
behind an internal load balancer in the master's VPC network. Nodes
communicate with the master using the private endpoint. The public endpoint
enables the Kubernetes API to be accessed from outside the master's VPC
network.
Although the Kubernetes API requires an authorized token to perform sensitive
actions, a vulnerability could potentially expose the Kubernetes API publicly
with unrestricted access. Additionally, an attacker may be able to identify
the current cluster and Kubernetes API version and determine whether it is
vulnerable to an attack. Unless required, disabling the public endpoint will help
prevent such threats, and require the attacker to be on the master's VPC
network to perform any attack on the Kubernetes API. | Severity: | unknown | Rule ID: | xccdf_org.ssgproject.content_rule_endpoint_configuration | Identifiers: | CCE-88813-1 | References: | | |
|
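Where the public endpoint is not needed at all, the same eksctl vpc.clusterEndpoints fragment shown above can disable it entirely, leaving only the private endpoint (administrators then need connectivity into the cluster VPC, for example via a bastion host or VPN):
    vpc:
      clusterEndpoints:
        publicAccess: false
        privateAccess: true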
Rule
Ensure Cluster Private Nodes
[ref] | Disable public IP addresses for cluster nodes, so that they only have private
IP addresses. Private Nodes are nodes with no public IP addresses. | Rationale: | Disabling public IP addresses on cluster nodes restricts access to only
internal networks, forcing attackers to obtain local network access before
attempting to compromise the underlying Kubernetes hosts. | Severity: | unknown | Rule ID: | xccdf_org.ssgproject.content_rule_private_nodes | Identifiers: | CCE-88669-7 | References: | | |
|
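As a sketch, eksctl node groups can be restricted to private subnets by setting privateNetworking, so that nodes receive no public IP addresses; the node group name is a placeholder:
    managedNodeGroups:
      - name: private-workers
        privateNetworking: true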
Group
Kubernetes - Registry Security Practices
Group contains 4 rules |
[ref]
Contains evaluations for Kubernetes registry security practices, and cluster-wide registry configuration. |
Rule
Only use approved container registries
[ref] | Use approved container registries. | Rationale: | Allowing unrestricted access to external container registries provides the
opportunity for malicious or unapproved containers to be deployed into the
cluster. Allowlisting only approved container registries reduces this risk. | Severity: | unknown | Rule ID: | xccdf_org.ssgproject.content_rule_approved_registries | Identifiers: | CCE-86901-6 | References: | | |
|
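One possible enforcement mechanism, assuming OPA Gatekeeper and the community K8sAllowedRepos constraint template are installed in the cluster, is a constraint that admits only images from an approved registry; the registry prefix is a placeholder:
    apiVersion: constraints.gatekeeper.sh/v1beta1
    kind: K8sAllowedRepos
    metadata:
      name: approved-registries-only
    spec:
      match:
        kinds:
          - apiGroups: [""]
            kinds: ["Pod"]
      parameters:
        repos:
          - "111122223333.dkr.ecr.us-east-1.amazonaws.com/"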
Rule
Ensure Image Vulnerability Scanning
[ref] | Scan images being deployed to Amazon EKS for vulnerabilities. | Rationale: | Vulnerabilities in software packages can be exploited by hackers or malicious
users to obtain unauthorized access to local cloud resources. Amazon ECR and
other third party products allow images to be scanned for known
vulnerabilities. | Severity: | unknown | Rule ID: | xccdf_org.ssgproject.content_rule_image_scanning | Identifiers: | CCE-88990-7 | References: | | |
|
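As an illustration, scan-on-push can be enabled on an Amazon ECR repository, for example through a CloudFormation resource like the sketch below; the repository name is a placeholder:
    Resources:
      AppRepository:
        Type: AWS::ECR::Repository
        Properties:
          RepositoryName: my-app
          ImageScanningConfiguration:
            ScanOnPush: true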
Rule
Ensure Cluster Service Account with read-only access to Amazon ECR
[ref] | Configure the Cluster Service Account to allow only read-only access to
Amazon ECR. | Rationale: | The Cluster Service Account does not require administrative access to Amazon
ECR, only requiring pull access to containers to deploy onto Amazon EKS.
Restricting permissions follows the principles of least privilege and
prevents credentials from being abused beyond the required role. | Severity: | unknown | Rule ID: | xccdf_org.ssgproject.content_rule_read_only_registry_access | Identifiers: | CCE-86681-4 | References: | | |
|
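As a sketch, an eksctl node group can be limited to the AWS managed read-only ECR policy, alongside the policies worker nodes already require; the node group name is a placeholder and the exact policy set should be validated for your cluster:
    nodeGroups:
      - name: workers
        iam:
          attachPolicyARNs:
            - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
            - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
            - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly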
Rule
Minimize user access to Amazon ECR
[ref] | Restrict user access to Amazon ECR, limiting interaction with build images to
only authorized personnel and service accounts. | Rationale: | Weak access control to Amazon ECR may allow malicious users to replace built
images with vulnerable containers. | Severity: | unknown | Rule ID: | xccdf_org.ssgproject.content_rule_registry_access | Identifiers: | CCE-89643-1 | References: | | |
|
Group
Kubernetes Secrets Management
Group contains 1 rule |
[ref]
Secrets let you store and manage sensitive information,
such as passwords, OAuth tokens, and ssh keys.
Such information might otherwise be put in a Pod
specification or in an image. |
Rule
Ensure Kubernetes Secrets are Encrypted
[ref] | Encrypt Kubernetes secrets, stored in etcd, using secrets encryption feature
during Amazon EKS cluster creation. | Rationale: | Kubernetes can store secrets that pods can access via a mounted volume.
Today, Kubernetes secrets are stored with Base64 encoding only, so encrypting
them is the recommended approach. Amazon EKS clusters version 1.13 and higher support
the capability of encrypting your Kubernetes secrets using AWS Key Management
Service (KMS) Customer Managed Keys (CMK). The only requirement is to enable
the encryption provider support during EKS cluster creation.
Use AWS Key Management Service (KMS) keys to provide envelope encryption of
Kubernetes secrets stored in Amazon EKS. Implementing envelope encryption is
considered a security best practice for applications that store sensitive
data and is part of a defense in depth security strategy.
Application-layer Secrets Encryption provides an additional layer of security
for sensitive data, such as user defined Secrets and Secrets required for the
operation of the cluster, such as service account keys, which are all stored
in etcd.
Using this functionality, you can use a key that you manage in AWS KMS to
encrypt data at the application layer. This protects against attackers in the
event that they manage to gain access to etcd. | Severity: | unknown | Rule ID: | xccdf_org.ssgproject.content_rule_secret_encryption | Identifiers: | CCE-90708-9 | References: | | |
|
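As an illustration, envelope encryption of secrets can be requested at cluster creation in an eksctl ClusterConfig by referencing a KMS key; the key ARN and cluster metadata are placeholders:
    apiVersion: eksctl.io/v1alpha5
    kind: ClusterConfig
    metadata:
      name: my-cluster
      region: us-east-1
    secretsEncryption:
      keyARN: arn:aws:kms:us-east-1:111122223333:key/example-key-id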