Securing a Container: How to Clean Up Docker Images and Why Limit Runtimes


Containers have shaken up the software engineering world no less than DevOps practices have. Containers are fantastic: deployable anywhere, they let you run previously incompatible setups on the same servers and move projects between servers smoothly. As always, though, there is a price to pay: running containers securely requires an understanding of the core fundamentals.

The fundamental problem with containers is that they are only as secure as the programs inside them. Knowing how to strengthen security on a specific platform, however, can improve the situation significantly. So let's cover the types of container platforms first.

Container Platforms

Docker is a popular Platform as a Service (PaaS) that allows you to create and deploy applications and services in the form of containers. It utilizes the host OS kernel instead of a hypervisor like VirtualBox. Since Docker works on top of the host OS, it's vital to update both regularly to cover all known vulnerabilities.

Kubernetes, also known as K8s, is an open-source container orchestration system for automating application deployment, scaling, and management. You can install it yourself or use a managed cloud offering.

Nomad and OpenShift are well-known Kubernetes alternatives. Nomad is a simple workload orchestrator that lets you deploy and manage both containerized and non-containerized applications, while OpenShift is a hybrid cloud foundation for building and scaling containerized apps.

Pretty much every cloud provides its own flavor of Kubernetes, but those aren't the only container platforms on offer. Among the non-Kubernetes cloud container solutions, we can mention ECS in AWS, Cloud Run in GCP, and Container Apps (as well as ACI) in Azure. They all apply automatic security patching that is hard to track, which means security in these cloud container solutions depends entirely on what you put inside them.

Legend says that updating platforms wreaks havoc and brings fresh vulnerabilities, so it's better not to update them at all. We want to dispel this myth and recommend updating regularly, since updates cover existing vulnerabilities. To avoid fresh issues brought in by new functionality, read the version changelog before upgrading.

Now that we've covered the larger systems that run apps, it's high time to mention a significant part of them: the container runtime, also called a container engine. Runtimes launch and manage containers; well-known examples include containerd, CRI-O, and rkt.

Holes in the runtime could compromise resources in all containers as well as the host operating system, so it's vital to keep container runtimes updated. As a rule, they are updated as part of the platform, but we recommend applying separate security updates more often to achieve maximum reliability.

Security Recommendations

As mentioned before, container systems are only as secure as the apps within them. The Corewide team is aware of all possible safety measures and provides the top security level to all our clients. We've prepared some recommendations that will help you protect your app as well as the container environment:

  • Firewall Configuration

First of all, configure a firewall. It will protect your environment against network attacks aimed at exploiting a vulnerability you might have in your app or runtime.
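
As a minimal sketch, assuming a Linux host with ufw available (the allowed ports are illustrative), the default-deny pattern looks like this:

```sh
# Deny all inbound traffic by default, allow only what the app actually needs
ufw default deny incoming
ufw default allow outgoing
ufw allow 22/tcp    # SSH for administration
ufw allow 443/tcp   # the application's public HTTPS port
ufw enable
```

Note that Docker manages its own iptables rules for published ports, so verify that your firewall policy actually applies to them.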

  • Runtime Access Restriction

Limit your applications' access to the container runtime. This is vital: if, for example, Kubernetes runs containers that have permission to manage it, an intruder who compromises one of them can get into Kubernetes itself and take over the entire system.

Some applications communicate with the Docker daemon to fetch its data or even manage containers. It's quite typical for them to do so by accessing the host system's Docker socket, but even an unprivileged user inside such a container can do a lot of damage to the host system if it's permitted to access the socket.

A lot of container images end up with curl installed at build time. Since the Docker API is HTTP-based, this tool alone is enough to hack your host server if the process inside the container runs as a privileged user; in other words, if your app has an RCE vulnerability, the entire OS is compromised. The Docker docs have great examples of how to run a container by means of simple HTTP queries to the Docker socket: breaking the isolation is easy as long as you can break the app that has access to the Docker API.
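
For illustration, something along these lines is all it takes once the socket is reachable (the payload is a hypothetical example):

```sh
# List running containers through the Docker API over the mounted socket
curl --unix-socket /var/run/docker.sock http://localhost/containers/json

# Create a container running an arbitrary command of the attacker's choice...
curl --unix-socket /var/run/docker.sock \
     -H 'Content-Type: application/json' \
     -d '{"Image": "alpine", "Cmd": ["touch", "/pwned"]}' \
     http://localhost/containers/create

# ...and start it, substituting the Id returned by the previous call
curl -X POST --unix-socket /var/run/docker.sock http://localhost/containers/<id>/start
```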

With root access, it’s way more dangerous. Let’s run a simple shell in a container and share the Docker socket:
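
A minimal sketch of that setup, using an Alpine image:

```sh
# Run an interactive shell in a container with the host's Docker socket shared into it
docker run -it --rm \
    -v /var/run/docker.sock:/var/run/docker.sock \
    alpine sh
```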

Okay, we're in: a harmless Alpine-based container with the host system's Docker socket available. Since we have root privileges in the container, we can install and use anything, from the aforementioned curl to the Docker CLI itself. Dream big, after all:
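
For instance (package name as in Alpine's repositories):

```sh
# Inside the container: install the Docker CLI and talk straight to the host daemon
apk add --no-cache docker-cli
docker ps
```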

Now we can see our own container running in the host system. Compare the container hostname with the ID you see in the docker ps output:
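
Something like this makes the match obvious:

```sh
# The container's hostname defaults to its short container ID...
hostname

# ...which is the same ID docker ps reports for the container we are sitting in
docker ps --format '{{.ID}}  {{.Image}}  {{.Names}}'
```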

How dangerous is this? As dangerous as you can imagine, maybe more. We can even get full access to the host's file system: all it takes is running another container with the host's root FS mounted inside:
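
A sketch of that escalation:

```sh
# Mount the host's root filesystem into a fresh container and chroot into it:
# from here on we are effectively root on the host
docker run -it --rm -v /:/host alpine chroot /host sh
```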

No more restrictions: as we communicate with the host’s Docker daemon, we can specify anything it can reach in the host file system, so nothing prevents us from spawning new containers with any kernel-level privileges.

We do have to note, though: to make this possible, your container must break several rules we mention in this article. Let’s list them:

* application has an RCE vulnerability (you’d be surprised how many of these slip into production)
* application container has access to the container runtime (the socket is exposed)
* application runs as root in the container (although depending on the runtime, it might not always be a mandatory condition).

  • Restricting Privileges

Another point that gets mentioned in every container security discussion is running containerized apps as non-root users. Many engineers claim it's safer, but those who know container anatomy can (and should!) argue with this statement. Running apps as root isn't harmful; granting extra privileges is (see the sketch after the list below). Root inside the container is not quite the same root you have on your host OS: it's limited to the kernel-level privileges you grant it. It can't access devices, manage low-level network parameters or touch other bits that might compromise security. There are only two ways for root inside a container to break its isolation:

- you let your container manage your environment/runtime (see Runtime Access Restriction)

- your kernel is out of date and has a critical vulnerability, meaning you've got a much worse problem to deal with than a container process running loose, and should upgrade your kernel ASAP.
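
To make the difference tangible, here's a small sketch (the image and commands are just illustrative):

```sh
# Root in a plain container sees only a minimal set of device nodes, not the host's disks
docker run --rm alpine ls /dev

# --privileged hands over the host's devices and kernel capabilities: this, not root
# itself, is what actually breaks the isolation
docker run --rm --privileged alpine ls /dev

# Going the other way, you can strip privileges the app doesn't need
docker run --rm --cap-drop ALL --user 1000:1000 alpine id
```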

  • Resource Quotas

Limit the resources, such as memory and CPU, available to each container. This improves the efficiency of the environment and prevents a single container from starving the others of resources.
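
With plain Docker, for example, that's a couple of flags (the image name and limits are illustrative):

```sh
# Cap the container at 512 MB of RAM and one CPU core
docker run -d --name api --memory 512m --cpus 1.0 example/api:latest
```

Kubernetes offers the same idea through resource requests and limits in the pod spec.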

  • Clean Images

Building images safely is vital, because what happens in Vegas stays in Vegas: everything that gets into an image stays there forever, so make sure sensitive data never does. We've had our share of badly crafted Dockerfiles; have a look at this example:
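
A representative sketch of such a Dockerfile (the packages and URL are illustrative):

```dockerfile
FROM ubuntu:22.04
RUN apt-get update
RUN apt-get install -y build-essential curl
RUN curl -o /tmp/app.tar.gz https://example.com/app.tar.gz
RUN mkdir -p /opt/app && tar -xzf /tmp/app.tar.gz -C /opt/app
# the "cleanup" below lives in layers of its own and shrinks nothing
RUN rm -rf /tmp/app.tar.gz /var/lib/apt/lists/*
RUN apt-get purge -y build-essential && apt-get autoremove -y
```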

And then you're left wondering why this Docker image weighs a ton despite the fact that you've cleaned up all the leftovers. The thing is, you haven't: every RUN statement creates a new file system layer in the image, and you can only see the content of the last one. So while it might seem like there's nothing left in the image, the underlying layers still contain the unnecessary data. After reading through the official best practices for writing Dockerfiles, you'll end up with this instead:
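
Roughly like this, keeping the same illustrative packages:

```dockerfile
FROM ubuntu:22.04
# a single RUN statement produces a single layer, so the cleanup is committed
# together with the work it cleans up after
RUN apt-get update \
    && apt-get install -y --no-install-recommends build-essential curl \
    && curl -o /tmp/app.tar.gz https://example.com/app.tar.gz \
    && mkdir -p /opt/app \
    && tar -xzf /tmp/app.tar.gz -C /opt/app \
    && apt-get purge -y build-essential \
    && apt-get autoremove -y \
    && rm -rf /tmp/app.tar.gz /var/lib/apt/lists/*
```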

And this is only the tip of the iceberg. Here's another place where people get burnt a lot by not following the same core principles when building images:
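
A sketch of the pattern (the key path and repository are hypothetical):

```dockerfile
FROM alpine:3.19
RUN apk add --no-cache git openssh
# the private key becomes a layer of its own here and stays in the image forever
COPY id_rsa /root/.ssh/id_rsa
# the rm at the end only hides the key from this layer, not from the one above
RUN chmod 600 /root/.ssh/id_rsa \
    && ssh-keyscan github.com >> /root/.ssh/known_hosts \
    && git clone git@github.com:example/private-repo.git /opt/app \
    && rm /root/.ssh/id_rsa
```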

Feeling safe already? You've added a private SSH key, used it to clone a private git repository, then wisely removed the key from the filesystem. Which, of course, is still there: the COPY statement created a layer where the SSH key still exists, and if an attacker gets access to the image you've built, they'll be able to extract the key and use it to reach all your private repositories. And the fix is trivial… multi-stage builds:
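
A minimal multi-stage version of the same sketch:

```dockerfile
# temporary build stage: its layers never make it into the final image
FROM alpine:3.19 AS temp-img
RUN apk add --no-cache git openssh
COPY id_rsa /root/.ssh/id_rsa
RUN chmod 600 /root/.ssh/id_rsa \
    && ssh-keyscan github.com >> /root/.ssh/known_hosts \
    && git clone git@github.com:example/private-repo.git /opt/app

# final stage: only the cloned code is copied over, the key never gets here
FROM alpine:3.19
COPY --from=temp-img /opt/app /opt/app
```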

This neat trick lets you build a temporary image before building the actual one: the layers of the temp-img stage never end up in the final image, so nothing from it is exposed once the build is finished. No credentials leaked.

  • Security for Pipelines

Consider shifting left: build security checks into the development, test, and build stages of your CI/CD pipeline. These checks help you detect known vulnerabilities before anything is deployed. Detecting malicious code, outdated packages, and similar threats at an early stage helps developers keep container images clean.
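
As one possible example (the scanner choice and image name are ours, purely for illustration), an image scanner such as Trivy can run as an extra pipeline step and fail the build on serious findings:

```sh
# Build the image as usual
docker build -t example/api:latest .

# Fail this pipeline step if HIGH or CRITICAL vulnerabilities are found in the image
trivy image --exit-code 1 --severity HIGH,CRITICAL example/api:latest
```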

  • Regular Reviews

The importance of checking the software inside a container for updates can't be overestimated if you want the containers to run like clockwork. Moreover, performing regular code reviews for vulnerabilities will increase the security level dramatically. And to make these check-ups easier, you can use runtime protection tools (RASP).

Summary

Unfortunately, containers aren't magically secure. But with the tips above, you can strengthen security smoothly and run a vast platform for containerized apps without second-guessing whether containers need an antivirus or how to resolve security issues.

Don't be afraid to trust this work to professionals: an expert audit and consulting will save you a lot of time and effort. If you want to protect your business and your clients, it's time to start taking container security seriously.



Daria Lobanova

Marketing Team Lead, Corewide

@corewide