Archive
Container Lifecycle Management
I wanted to share a big problem I see developing for many devs as they begin to adopt containers. To ground us in some fundamentals, I want to compare the differences between virtual machines and containers.

The animation (above) shows a few significant differences that can confuse developers who are used to virtual machine lifecycles. We can outline the benefits, or why you *want* to adopt containers:
- On any given compute instance, you can run roughly 10x as many applications
- Faster initialization and teardown means better resource management
In the days when you had separate teams, one running infrastructure and another handling application deployment, you learned to rely on one another. The application team would say, ‘works for me’ and cause friction for the infrastructure team. All of that disappears with containers…but…
By adopting containers, teams can overcome those problems by abstracting away the differences in environments, hardware, and frameworks. A container that works on a dev’s laptop will work anywhere!
What is not made clear to the dev team is that they are now completely responsible for the lifecycle of that container. They must lay down the filesystem and include any libraries their application needs that are NOT provided by the host that runs it. This creates several new challenges they are not familiar with.
The most important part of utilizing containers, and the one many dev teams fail to understand, is that they must update the container image as often as their chosen base image becomes vulnerable. (Containers are made up of layers, and the first one is the most important!) Your choice of base image filesystem comes with core components that are usually updated whenever the OS vendor issues patches (which can be daily or even hourly!). When you choose a base image, consider it a snapshot: those components develop vulnerabilities that will never be fixed in your container image unless you rebuild it.
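As a minimal sketch (the image name, tag, and application path here are illustrative), a Dockerfile makes the snapshot nature of the base layer explicit — the `FROM` line is the frozen filesystem you inherit:

```dockerfile
# The base layer is a snapshot: the packages inside it stop
# receiving the vendor's fixes until you rebuild against a
# newer tag (or digest) of the base image.
FROM debian:12-slim

# Everything above this line belongs to the base image vendor;
# everything below is the dev team's responsibility.
COPY app /usr/local/bin/app
CMD ["/usr/local/bin/app"]
```

Rebuilding regularly against an updated base tag (or a pinned digest you bump deliberately) is what keeps that first layer from quietly accumulating known CVEs.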
One approach some devs use is live patching the base image (with apt-get upgrade, dnf update, or yum update). Seasoned image developers soon realize that this strategy is just a band-aid: it adds another layer (in addition to the first one) and replaces some of the components at the cost of increased size. Live patching can also leave cached package data behind that may or may not be fully removed or replaced, and even if you are diligent about removing those caches, you may forget others as you install and compile your application.
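A sketch of why the band-aid bloats the image (Debian-style packaging assumed; the tag is illustrative) — note that cleanup only helps when it happens in the *same* layer, because a later instruction cannot shrink an earlier one:

```dockerfile
FROM debian:12-slim

# "Live patching" the base adds a new layer on top of the old one;
# the vulnerable files still exist in the lower layer, hidden by
# the overlay. Removing the apt cache must happen in this SAME
# RUN instruction, or the cache is baked into the layer forever.
RUN apt-get update && \
    apt-get -y upgrade && \
    rm -rf /var/lib/apt/lists/*
```

Even done carefully, this only masks the stale base; the cleaner fix is rebuilding from an updated base image.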
The second approach involves layer optimization. Dev teams often fail to reduce the size of their container images, which wastes bandwidth pulling and caching those image layers and, in turn, wastes storage on the nodes that cache them. Memory use is still efficient, thanks in part to overlay filesystem optimization, but the other resources are clearly wasted.
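A small sketch of the idea (package names are illustrative): every instruction produces a layer that nodes must pull and cache, so chaining related commands into one `RUN` — and cleaning up inside it — trims both layer count and size:

```dockerfile
FROM debian:12-slim

# One RUN = one layer. Install, configure, and clean up together
# so no intermediate layer carries the package cache or the
# recommended-but-unused dependencies.
RUN apt-get update && \
    apt-get install -y --no-install-recommends ca-certificates curl && \
    rm -rf /var/lib/apt/lists/*
```

Splitting those three commands into three `RUN` instructions would ship the apt cache in a layer of its own, even though the final filesystem looks identical.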
Dev teams also fail to see the build environment as an opportunity to use more than one image. A multi-stage build strategy uses one or more sacrificial images to do compilation and transpilation. Assembling your binaries in those stages and copying only the results into a new, clean image removes additional vulnerabilities, since the intermediate packages are never present in the final running container image. It also reduces the attack surface and can extend the container’s lifecycle.
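The multi-stage approach might look like this — a sketch assuming a Go application, with illustrative image names:

```dockerfile
# Stage 1: a sacrificial build image with the full toolchain.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/app .

# Stage 2: copy ONLY the finished binary into a clean, minimal
# image. The compiler, headers, and intermediate packages from
# stage 1 never ship in the final image.
FROM gcr.io/distroless/static-debian12
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]
```

The build stage is discarded after `COPY --from=build`, so the running container carries only what the application actually needs.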
It takes a very mature team to realize that any application is only as secure as the base image you choose. The really advanced ones ALSO know that keeping your base updated is just as important as keeping ALL your code secure, when dealing with containers.
SLSA • Supply-chain Levels for Software Artifacts
Oh thank God I am not the only one who sees the next tech apocalypse (a 0-day apocalypse) coming as we all adopt Kubernetes as the orchestration platform and forget about *where* those container images come from… http://slsa.dev/
Bank API as Microservices with CQRS in TypeScript | Level Up Coding
Very seldom do we get to see several technologies used so well together. With the exception of how secrets management is illustrated, this article really shows what secure by design is all about.
https://levelup.gitconnected.com/microservices-with-cqrs-in-typescript-and-nestjs-5a8af0a56c3a
Are you a Secure Programmer?
Happy New Year to those of you who read this blog. To those who remember my prediction that we would pass 20,000 unique CVEs in 2019, I trust you will agree that 2019 was a banner year for vulnerabilities. Lucent/Alcatel are among the vendors whose CVEs took us over 20,000 this year (CVE-2019-20047, CVE-2019-20048).
It’s time to ask yourself: are the hackers getting better at ‘hacking’, or are coders just getting worse? When the last half of the decade has produced more than 10,000 unique vulnerabilities each year, and that number keeps increasing, we have to conclude that programmers just don’t know how to create programs that are secure by default!
Here is a chance for some of the best and brightest programmers to change course and learn how to avoid these vulnerabilities once and for all.
A California university (UC Davis) has created an online course that can help teach the principles of secure coding. In a series of four courses, developers can learn the fundamentals, identify vulnerabilities, and walk on the wild side as they learn how to hack just like a black hat!
Take one, two, or the whole set of four courses and really understand how pentesters exploit the way code works, so you can learn to avoid many of the common pitfalls. https://www.coursera.org/specializations/secure-coding-practices
New European rules for mobile banking apps coming to a device near you…
The world is clearly a better place now that we carry computers in our back pockets, but payment transactions need stronger security measures, and that will require more regulation, such as the PSD2 from the European Commission.
The Payment Services Directive mandates compliance by September 2019 and aims to regulate banks, payment service providers, and electronic payments, requiring security features that protect consumers across digital channels. The PSD2 legislation will push financial services in the European Union (EU) toward a more integrated, secure, and efficient payments ecosystem.
The PSD2 directive requires financial institutions to:
- Provide and implement a monitoring mechanism in their apps to detect and report signs of malware.
- Provide security measures in their apps to mitigate risk to the user’s device.
- Ensure consumers have a secure environment in which to execute their financial transactions.
In Article 2 and Article 9 of the directive, PSD2 highlights Strong Customer Authentication (SCA) and the Safe Execution Environment (SEE), which require de-risking across the various threat vectors impacting mobile apps.
These include detecting compromised devices (e.g., jailbroken or rooted), unsafe environments (such as a fake or malicious Wi-Fi network), and malware and vulnerabilities within the application execution environment. PSD2 also includes RTS (Regulatory Technical Standards), regulatory requirements set by the European Banking Authority (EBA) to ensure that payments across the EU are secure, fair, and efficient.
To meet these requirements, financial institutions should add strong security capabilities like binary protections to their mobile apps. These controls are designed to protect against known and unknown threats on users’ devices.
Mobile banking apps should also be able to detect when they are installed on risky devices and consider restricting access to high-value banking services until those risks have been remediated.