Writing Kubernetes manifests for a secure application is not straightforward, and keeping all of the correct security requirements in your head is nearly impossible. Additionally, there are times when it feels easiest to loosen security restraints temporarily, such as granting overly permissive access to a container, and then forget to dial it back before deploying to production.
Overly permissive access leads to risk, and when you add the scalability of IaC into the mix, that risk multiplies quickly. We recently saw this in the wild with open-source modules: 47% of publicly accessible Helm charts in Artifact Hub contained a misconfiguration. This means every resource deployed using one of these charts also contained a misconfiguration unless the default configuration was updated.
In addition to analyzing the state of open source Helm charts, we wanted to dig into the most common misconfigurations found in Kubernetes overall. We took the results of thousands of security scans of Kubernetes manifests and runtime environments and aggregated the data to find the most common misconfiguration at each of the four levels of severity (Low, Medium, High, Critical), plus one bonus High severity issue. For each misconfiguration, we’ll walk through the issue, why it’s a security concern, and how to fix it. Although these misconfigurations are well documented, they remain common, even showing up in the recommended deployments of many popular services.
Low: The default namespace should not be used
Namespaces in Kubernetes create a logical separation for services that share the same cluster, creating “virtual” clusters. They are useful for separating services for security or resource allocation reasons when multiple applications share the same cluster or if there are multiple stages of applications in the same cluster (e.g., development, staging, production).
It shouldn’t come as a surprise that the most common Low misconfiguration (and in fact the most common across all severities) is using the default namespace. This isn’t a bad thing for a single application, but in shared clusters, you lose the logical separation if everything is deployed to the default namespace. This makes it easier for a bad actor to access other services, or for one service to hog resources from another team. Namespaces, along with other settings like resource limits, create those boundaries.
This could be the most common misconfig for a variety of reasons. First, it could be the power of defaults, where people simply apply a YAML file without adding a namespace. Second, it could be that many of the manifests scanned didn’t include a namespace, but the namespace was defined when applying the YAML (kubectl apply -f pod.yaml --namespace namespace1). The best practice, however, is to include it in the YAML file to avoid accidentally deploying to the default namespace. Third, if a cluster isn’t shared or there aren’t concerns about services talking to each other or hogging resources, there isn’t a need for custom namespaces.
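As a sketch of that best practice, a workload manifest can declare its namespace directly in metadata so it never silently lands in default (the pod, namespace, and image names here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app             # illustrative pod name
  namespace: development   # explicit namespace; without this line the pod lands in "default"
spec:
  containers:
    - name: my-app
      image: nginx:1.25    # illustrative image
```

With the namespace pinned in the file, a plain kubectl apply -f deploys the pod to the intended namespace with no extra flags.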
To fix this, create a namespace if you don’t already have one. You can do this using the CLI, but let’s follow the declarative path and create a development-namespace.yaml:
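A minimal sketch of such a file, assuming the namespace is simply named development:

```yaml
# development-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: development   # assumed name; match whatever your team uses
```

Apply it with kubectl apply -f development-namespace.yaml, then reference that namespace in each workload manifest.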