Containers in the keep

Like the fortifications of a medieval castle, the multilayered practice of defense in depth can help protect your containers—no moats, drawbridges, or dragons required.
Part of Issue 17, May 2021: Containers

Containers, Kubernetes, and cloud native applications are transforming operations work into what we now call DevOps. Provisioning, configuration, and deployment tasks that used to be manual are now written in code so they can be automated, which helps businesses ship new functionality to their users more quickly. To ensure safety, security processes need to be automated, too—hence the emerging practice known as DevSecOps. This isn't quite as simple as writing a few scripts; DevSecOps requires a new set of tools, and a whole new way of thinking about how to apply security practices. The good news is that it's also an opportunity for your organization to implement stronger security.

Seen through the lens of traditional security approaches, adopting containers can seem like a terrifying change. Consider the number of things that have to be secured. A production deployment today will likely have hundreds, if not thousands, of containers. Each container might hold an entire copy of a Linux file system, with a set of packages installed to fulfill the dependencies of the application code it’s supposed to run. If Kubernetes is orchestrating your workloads, it’s deciding where in the cluster to place each container. Most likely, you’re scaling the number of containers automatically to cope with demand.

Imagine your job involves manually updating servers with vulnerability patches on a regular basis. Now, you're tasked with securing thousands of containers, each with their own set of dependencies. How are you supposed to keep all those containers up to date? The network firewall that controls traffic in or out of the deployment might still have a role to play, but "Patch Tuesday" once or twice a month doesn't cut it when containers are being created and destroyed automatically and have an average life span of around one day.

The key to enhanced security—and soothing your security team’s fears—is granularity. Containers encourage us to decouple parts of our application so we can deploy and scale them independently. In turn, we can apply security controls to each container, as well as the system as a whole. This embodies the security principle known as defense in depth.

Defense in depth means having layers of security. A medieval castle is a good example. Its thick stone walls and sturdy door are guarded by a moat and drawbridge. Archers firing through slit windows and boiling oil poured from the castellations add extra peril for would-be assailants. Inside the castle walls, a fortified structure called the keep acts as a refuge of last defense for the residents and their valuables; even if an attacker breaches the walls, they still have to fight their way into the keep. Each layer of the castle's defense is useful in its own right, but the combination is often even more powerful. Containers offer us a similar opportunity to use additional layers of defense to bolster the overall security of our applications and data.

A container image is a unit of deployable software that includes everything the container needs to run. It’s also an expression of the developer’s intent when writing the application code. As a developer, you know what the executable is called, what network traffic it accepts, and what traffic it should generate. By turning this knowledge into a security profile for each container image, container security tools can detect anomalous behavior that might indicate a compromise. This adds powerful security layers around each individual container. For example, network requests to a container have to arrive on a defined port number. Container networking allows us to map a container port to an (arbitrary) port on the host, but a firewall tool that understands container networking can ensure traffic only gets to the container if it’s on an expected port: We can permit traffic on port 12345 to container A and simultaneously block traffic on port 12345 to container B, even if both are running on the same host machine. This is much more fine-grained than a traditional firewall. (To get a feel for Kubernetes network policies, check out the visual editor at networkpolicy.io.)
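To make this concrete, here's a minimal sketch of a pair of Kubernetes network policies expressing that intent (the names, labels, and port are hypothetical). The first admits TCP traffic on port 12345 to pods labeled app: container-a; the second selects app: container-b without granting any ingress, so the same traffic to it is dropped:

```yaml
# Allow port 12345 only to container A (hypothetical names and labels).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-12345-to-a
spec:
  podSelector:
    matchLabels:
      app: container-a
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - protocol: TCP
          port: 12345
---
# Selecting container B with no ingress rules denies all inbound
# traffic to it, including port 12345.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-to-b
spec:
  podSelector:
    matchLabels:
      app: container-b
  policyTypes:
    - Ingress
```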

Service meshes enforce this concept at the service level. In modern deployments, identical containers are often grouped together to provide a service, and individual containers can respond to a request for that service. For example, a retail website might have a product search service that looks up relevant products in a database. The containers that run this service handle product lookup requests from the frontend web service and communicate with the product database to get results. If a container from the product search service starts trying to send messages to a payment service, that would be a red flag. 
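A service mesh can turn that red flag into a hard rule. Here's a sketch using an Istio AuthorizationPolicy (the namespace, labels, and service account are hypothetical): because an ALLOW policy attached to a workload denies any request that matches none of its rules, the payment service accepts calls only from the checkout identity, and a compromised product search container calling it is refused.

```yaml
# Only the checkout workload's identity may call the payment service;
# everything else, including product search, is denied by default.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: payments-allowed-callers
  namespace: shop
spec:
  selector:
    matchLabels:
      app: payments
  action: ALLOW
  rules:
    - from:
        - source:
            principals: ["cluster.local/ns/shop/sa/checkout"]
```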

We can apply similar thinking to the executables running inside a container. Though some containers might have an initialization phase where they run a different program, the majority only ever run a single executable program. Specialized commercial container security tools like Aqua or Prisma can ensure only the expected programs are permitted; in the open-source world, projects like Falco, Cilium, and Tracee take advantage of eBPF technology to gain visibility when unexpected events occur. 
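As an illustration, a Falco rule along these lines fires whenever a product search container starts any program other than its expected server binary. The spawned_process and container macros come from Falco's default rule set; the image repository and executable name are hypothetical.

```yaml
- rule: Unexpected process in product-search container
  desc: Detect any binary other than the expected server starting
  condition: >
    spawned_process
    and container
    and container.image.repository = "registry.example.com/product-search"
    and proc.name != "search-server"
  output: >
    Unexpected process in product-search container
    (command=%proc.cmdline image=%container.image.repository)
  priority: WARNING
```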

Runtime security tools, service meshes, and network policies enforce what a developer intends to have happen inside a container. However, this approach is not yet sufficient. We still need to think about patching vulnerable code. 

When it comes to patching dependencies in containers, the solution is… not to patch them at all. Instead, you need to identify container images with vulnerabilities, rebuild them with the fixed versions, and redeploy them based on these newly updated images. One advantage of a cloud native, container-based deployment is that killing running software and replacing it with something new isn’t a big deal. You’re supposed to be able to scale up and down at will, so replacing running code with a patched version only requires scaling down the old versions while scaling up the new ones. 
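In Kubernetes terms, this can be as simple as bumping the image tag in a Deployment manifest and re-applying it. Here's a sketch with hypothetical names and tags; the rolling update strategy brings up pods from the rebuilt image before the old ones are scaled away.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: product-search
spec:
  replicas: 3
  selector:
    matchLabels:
      app: product-search
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0  # keep full capacity while replacements start
      maxSurge: 1
  template:
    metadata:
      labels:
        app: product-search
    spec:
      containers:
        - name: search-server
          # Rebuilt against the patched base image; redeploying replaces
          # the vulnerable containers rather than patching them in place.
          image: registry.example.com/product-search:1.4.2
```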

Cloud native organizations include image vulnerability scanning as part of their continuous integration pipelines. A container image with serious vulnerabilities will be rejected, just as it would if it failed automated functional tests. There are several vulnerability scanning tools that you can easily incorporate into popular CI pipeline systems, including commercial solutions and open-source tools like Trivy or Anchore. Some scanners can also check for other security issues, like the presence of malware in an image. 
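As one example of such a gate, a CI step along these lines (shown in GitHub Actions style, with a placeholder image name) uses Trivy's exit code to turn serious findings into a failed build, just like a failing test:

```yaml
- name: Scan image for vulnerabilities
  run: |
    # Fail the pipeline if any HIGH or CRITICAL vulnerability is found.
    trivy image \
      --exit-code 1 \
      --severity HIGH,CRITICAL \
      registry.example.com/product-search:${GITHUB_SHA}
```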

New vulnerabilities are discovered in existing code all the time, so it’s important to scan images regularly to check against the latest set of known issues. Then, after scanning the images you want to use, run automated policy checks just before a container is deployed. These checks will confirm that the container image has been scanned recently and that the results don’t show any serious vulnerabilities or other security issues. Perhaps even more importantly, they can look for weaknesses in a container’s configuration. According to the principles of defense in depth, we need to behave as if an attack might happen despite our best efforts, and we have to make it difficult for an attacker to escape from the container onto the host. Containers share the kernel of the host they run on, and all the processes on a machine are visible to the host, whether they’re containerized or not. If the host gets compromised, so does every container on that host. 

From a security perspective, it's essential to use automated policy checks that ensure no one is deploying containers with insecure configurations, because a container escape is far more likely to happen through an insecure configuration than through a kernel vulnerability. Without additional controls, any user who can run a container can configure it to run in an insecure way. Allowing someone to run a container on a host machine is the equivalent of giving them root access. For instance, there's nothing stopping them from running a container with --privileged, which gives the container access to the host's entire file system through its device mounts. This is like having a master key to every door in the castle.
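In Kubernetes, that master key looks like this. The pod below (its names are hypothetical) is exactly the kind of manifest an automated check should reject:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: debug-shell
spec:
  containers:
    - name: shell
      image: alpine
      securityContext:
        privileged: true  # equivalent to `docker run --privileged`
```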

The good news is container configurations are usually defined in code. Configuration files can be scanned as part of the CI pipeline in the same way a container image is scanned for vulnerabilities. You can also run these configuration checks at admission control, the point at which a workload is about to be deployed. One approach to implementing configuration scanning is the open-source software Open Policy Agent, and even more powerful checks are possible with commercial security tools. 
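For instance, with Open Policy Agent's Gatekeeper project, a constraint like the following rejects any privileged pod at admission control, before it ever reaches a node. This sketch assumes the K8sPSPPrivilegedContainer template from the Gatekeeper policy library is installed in the cluster.

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPPrivilegedContainer
metadata:
  name: deny-privileged-containers
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
```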

Policy checks can be sufficient for many organizations. However, when the consequences of a successful attack might be severe—for example, in financial or health care applications—you may also want to explore hardening the security boundary around each container to protect against the possibility of a kernel container escape vulnerability. To go back to our castle analogy, this is a bit like locking your workloads in a dungeon inside the keep, making it extremely hard for an attacker to break out. This can be achieved with a sandbox, such as AWS's Firecracker or Google's gVisor. Rather than sharing the host's kernel, Firecracker creates a virtual machine for each container, albeit a minimal one. The gVisor approach is to reimplement the system call interface in user space so the container can't access the host's kernel directly, despite sharing it. Of course, nothing comes for free—these sandboxing techniques aren't compatible with every conceivable container image, and they may incur performance penalties in certain situations, but they work well in most cases. If you're running containers on a public cloud–managed service, your containers may already be benefiting from sandboxing.
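On Kubernetes, opting a workload into a sandbox can be a small, declarative change. As a sketch (it assumes the cluster's nodes have gVisor's runsc runtime installed), a RuntimeClass maps a name to the runtime, and a pod simply selects it:

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc  # the gVisor runtime binary configured on the nodes
---
apiVersion: v1
kind: Pod
metadata:
  name: payments
spec:
  runtimeClassName: gvisor  # this pod's containers run in the sandbox
  containers:
    - name: payments
      image: registry.example.com/payments:2.0.1  # hypothetical image
```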

By applying the principles of defense in depth, it’s possible to be more granular about container security than in a traditional monolithic deployment. Where we used to protect a network and its constituent servers, we can now also protect the containerized processes running on those servers. We can automate security checks for vulnerability scanning, policy checks, network policy, and runtime enforcement. We can even run each container in its own sandbox. This adds up to a powerful opportunity to run code more securely. 

Medieval castles weren’t identical—each one’s fortification was unique, the result of continued experimentation and evolution. New ideas were battle-tested in times of attack or siege. People saw what worked for and against their friends and foes, and rebuilt or extended their designs. Today, security professionals are the folks who build your castle. The move to containers will require redeveloping existing defenses, as well as embracing new tools and techniques. There’s a lot to absorb, including a cultural shift from manual effort to automation and increased collaboration between developers, operations, and security. This requires hard work, but the reward is a fine-grained, strong, and impregnable range of defenses that flexes to accommodate the workloads within. 

About the author

Liz Rice is the chief open-source officer at Isovalent, chair of the Cloud Native Computing Foundation’s Technical Oversight Committee, and author of O’Reilly’s Container Security.

@lizrice

Artwork by Myriam Wares (myriamwares.com)
