
Container Lifecycle Management


I wanted to share a big problem I see developing for many devs as they begin to adopt containers. To ground us in some fundamentals, I want to start by comparing virtual machines and containers.

The animation (above) shows a few significant differences that can confuse developers who are used to virtual machine lifecycles. Let's outline the benefits, or why you *want* to adopt containers:

  • On any compute instance, you can run 10x as many applications as you could as virtual machines
  • Faster startup and teardown mean better resource management

Now, in the days when you had separate teams, one running infrastructure and another handling application deployment, you learned to rely on one another. The application team would say ‘works for me’ and leave the friction for the infrastructure team to absorb. All of that disappears with containers…but…

By adopting containers, teams can overcome those problems by abstracting away the differences in environments, hardware, and frameworks. A container that works on a dev's laptop will work anywhere!

What is not made clear to the dev team is that they are now completely responsible for the lifecycle of that container. They must lay down the filesystem and include every library their application needs that is NOT provided by the host that runs it. This creates several new challenges that they are not familiar with.
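
To make that responsibility concrete, here is a minimal sketch of a Dockerfile (the base image and package names are just assumptions for illustration): every file the application needs at runtime has to be put there by the dev team, because the host contributes little more than the kernel.

    # Hypothetical minimal Dockerfile -- the dev team owns everything below.
    # The base image supplies the filesystem; the host it runs on supplies
    # nothing of the sort.
    FROM debian:12-slim

    # Any runtime libraries the app needs must be installed here, since the
    # host that runs the container will not provide them.
    RUN apt-get update && \
        apt-get install -y --no-install-recommends libpq5 ca-certificates && \
        rm -rf /var/lib/apt/lists/*

    # The application itself, copied into the image.
    COPY ./myapp /usr/local/bin/myapp
    CMD ["/usr/local/bin/myapp"]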

The most important part of using containers, and the one many dev teams fail to understand, is that they must update the container image as often as their chosen base image becomes vulnerable. (Container images are made up of layers, and the first one is the most important!) Your choice of base image filesystem comes with core components that the OS vendor patches regularly (which can be daily or even hourly!). When you choose a base image, treat it like a snapshot: those components develop vulnerabilities over time, and unless you rebuild, the fixes never reach your container image.
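
To make the snapshot point concrete, here is a hedged sketch (image and tag names are illustrative): the FROM line freezes whatever the base contained on the day you built, so the practical fix is to rebuild on a schedule and force a fresh pull of the base.

    # Dockerfile: the FROM line is a snapshot of the base filesystem taken at
    # build time. If the vendor patched it yesterday and you built last month,
    # your image still ships last month's components.
    FROM debian:12-slim
    COPY ./myapp /usr/local/bin/myapp

    # Shell: rebuild regularly; --pull forces a fresh copy of the base image
    # so the vendor's latest patches actually land in your image.
    docker build --pull -t myapp:latest .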

One approach some devs take is live patching the base image (running apt-get, dnf, or yum update during the build). Seasoned image developers soon realize this strategy is just a band-aid: it adds another layer (in addition to the first one) that replaces some of the components, at the cost of increasing the image size. Live patching can also leave behind cached package files that may or may not fully remove or replace the vulnerable ones. And even if you are diligent about cleaning those caches, you can easily forget others as you install and compile your application.
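
Here is a hedged sketch of that band-aid (the base image is illustrative): the upgrade runs in its own layer, so the vulnerable files still exist in the base layer underneath, the image only grows, and any package cache left in the same step gets baked in too.

    FROM debian:12-slim

    # "Live patching" the base adds a NEW layer on top of it. The outdated
    # packages are hidden from the running container, but they still exist in
    # the base layer below, and the image gets larger.
    RUN apt-get update && apt-get upgrade -y
    # The apt cache under /var/lib/apt/lists/ is now baked into this layer,
    # because it was not removed in the same RUN step.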

The second approach involves layer optimization. Dev teams often fail to reduce the size of their container images, which wastes bandwidth pulling those image layers and, in turn, storage on the nodes that cache them. Memory use stays efficient thanks in part to the overlay filesystem, but the other resources are clearly wasted.
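
As a small sketch of what layer optimization looks like in practice (package names are illustrative): doing the install and the cleanup in one RUN step keeps the package cache out of every layer, and fewer, smaller layers mean less to pull and less to cache on each node.

    FROM debian:12-slim

    # One RUN step produces one layer. Installing and cleaning up in the same
    # step keeps the apt cache out of the final image entirely, instead of
    # leaving it trapped in an earlier layer.
    RUN apt-get update && \
        apt-get install -y --no-install-recommends curl && \
        rm -rf /var/lib/apt/lists/*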

Dev teams also fail to see the build environment as an opportunity to use more than one image. A multi-stage build strategy uses several sacrificial images for compilation and transpilation. Assembling your binaries there and copying them into a new, clean image removes additional vulnerabilities, because those intermediate packages are never needed in the final running container image. It also reduces the attack surface and can extend the container's lifecycle.
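
A minimal multi-stage sketch (the Go toolchain and file names here are just assumptions for illustration): the first, sacrificial image does the compiling, and only the finished binary is copied into a clean final image, so the compiler and its dependencies never ship.

    # Stage 1: sacrificial build image -- compilers and build tools live here.
    FROM golang:1.22 AS build
    WORKDIR /src
    COPY . .
    RUN go build -o /out/myapp .

    # Stage 2: clean runtime image -- only the finished binary is copied over,
    # so the toolchain and intermediate packages never reach production.
    FROM debian:12-slim
    COPY --from=build /out/myapp /usr/local/bin/myapp
    CMD ["/usr/local/bin/myapp"]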

It takes a very mature team to realize that any application is only as secure as the base image it is built on. The really advanced ones ALSO know that, when dealing with containers, keeping your base image updated is just as important as keeping ALL your code secure.
