
Certificate Management may be hard, but you don’t have much choice any longer.

March 17, 2024

Ever since the 1990s, when Netscape₁ first introduced “Secure Sockets”, we have turned this thing called “The Internet” into an ecommerce engine worth over 3 trillion USD today. Statistics show that figure is expected to top 5 trillion USD by 2029₂. Efforts to secure the Internet have been going on for three decades, so why should we be alarmed now? Well, it involves two of the most popular subjects of our modern era: Artificial Intelligence and Quantum Computing.

AI has proven to be highly effective at finding defects in software₃, something that humans continue to create, and Quantum Computers promise to multiply computational power by a factor of 10x. Think of a hacker who never sleeps, has no preconceived notions about whether something can be accomplished, and simply sets itself on a target: guessing your password, or even breaking the encryption keys for your secure session with your bank. Is there any doubt that it will succeed…eventually, now that it is 10x faster? Does this sound like a George Orwell book? Well, it should, because that time has arrived!

Traditional certificates relied on the difficulty of factoring the product of large prime numbers. That is just a fancy way of saying 3 times 5 equals 15 (although this is an oversimplification). When the numbers involved are hundreds of digits long, computers are needed to solve these equations, and reversing them would take years or even centuries. Now enter the Quantum computer that performs these calculations at dizzying speeds, and you are no longer safe. The only answer to help treat those risks is to replace those keys more often than once or twice every few years.

The scope of the problem becomes apparent when you see how prevalent traditional certificates are in our electronic world. Major use cases are not limited to the SSL/TLS certificates that protect your ecommerce or banking sites. They are also used for integrity verification in encryption, to prove ownership and detect tampering, for identity (like secure shell or tokens), and in systems that rely on trust. With AI widely in use today and quantum computing approaching, these systems are at risk if you do not replace these certificates on a regular basis.

Google wants to shorten the lifecycle of certificates₄ to help manage the risk associated with SSL/TLS certificate usage on the Internet. By replacing the secrets more often, you make them harder to guess. Let’s Encrypt has been successful for nearly a decade at issuing 90-day certificates, and there are many client implementations₅ of the ACME standard that help accomplish this.

This begs the question, “How do we manage hundreds of thousands of certificates at speeds that would take an army to accomplish?”

Automation is the key! Maybe you can ask your friendly AI prompt to help you accomplish this before someone uses it to crack your password and empty your bank account? 😊
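As a concrete example of that automation, here is a minimal sketch assuming certbot as the ACME client and a host that can answer the HTTP-01 challenge on port 80; example.com is a placeholder domain.

# One-time issuance of a 90-day certificate from Let's Encrypt
sudo certbot certonly --standalone -d example.com

# Unattended renewal: certbot only renews certificates close to expiry,
# so it is safe to run on a schedule (many distros ship a systemd timer for it)
sudo certbot renew --quiet

# Example cron entry to check for renewals twice a day
# 17 3,15 * * * root certbot renew --quiet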

WSL Christmas with Fedora

December 24, 2023

Hey folks – Merry Xmas.

For those of you playing with the Windows Subsystem for Linux, I wanted to share a great recipe and the steps required for baking your very own Xmas Fedora-with-Docker image running on WSL. It makes a great holiday gift to share with your friends and family (because who doesn’t use containers these days!)

  1. Download your new base image from Koji at https://koji.fedoraproject.org/koji/packageinfo?packageID=26387

  2. Carefully open the package and extract the file called layer.tar

  3. Mix well for a few seconds (here we have renamed the layer.tar file and used ‘fedora39’ as the image name, but you may adapt this ‘to your own taste’)

  4. Bake in the oven for a few minutes (here we start the distro, perform an update and then install systemd)

  5. Decorate it with a custom file to activate systemd and let it cool down (restart)

  6. Package your cake with the following items (dnf-plugins-core)

  7. Add the Docker community repository for the Docker binaries and install the following components: docker-ce, docker-ce-cli, containerd.io, docker-buildx-plugin and docker-compose-plugin (a consolidated command sketch follows this list)
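For anyone who prefers the whole recipe on one card, here is a rough consolidated sketch. It assumes PowerShell on the Windows side and a root shell inside the distro, and it reuses the ‘fedora39’ name, a C:\Tools\WSL path and the layer.tar file from the steps above; adjust to your own taste.

# From Windows (PowerShell): import the extracted root filesystem
wsl --import fedora39 C:\Tools\WSL\fedora39 .\layer.tar

# Inside the distro (as root): update and install systemd
dnf -y update
dnf -y install systemd

# Activate systemd for WSL, then restart the distro (wsl --shutdown from Windows)
printf '[boot]\nsystemd=true\n' > /etc/wsl.conf

# Add Docker's community repository and install the engine plus plugins
dnf -y install dnf-plugins-core
dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo
dnf -y install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
systemctl enable --now docker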

That is it – Serve to friends and family!

Happy Holidays to all my readers and thanks for a great year!

Categories: Work related

Where *can* I put my secrets then?

September 9, 2023

I have spent a large portion of my IT career hacking other people’s software, so I thought it was time to give back to the community I work in and talk about secrets. Whether they are passwords, key material (SSH keys, asymmetric or symmetric keys) or configuration elements, these are all items that should be considered ‘sensitive’.

Whether you are an old timer who may still be modifying a monolithic codebase or you run a modern cloud-enabled shop that builds event-driven microservices, the Twelve-Factor App is a great place to start. The link provided is the “12 Factor App” methodology, which outlines best practices for building modern software-as-a-service applications. Adopted as a strategy, it can provide a basis for software development that transcends any language or shop size, and it should be part of any Secure Software Development Lifecycle. In Section III, Config, they explain the need to separate config from code, but I feel this needs further clarity.

There are two schools of thought among developers/engineers when it comes to handling secrets: you can load them into environment variables (as outlined in the methodology above), or you can persist them in protected files that are loaded from an external secret manager and mounted only where they are needed. One thing is clear: you should never persist them alongside your code.

Let’s explore the most common, and arguably the easiest, way to separate secrets from code: environment variables. Each bullet below is a risk that comes with them, and a short shell sketch follows the list.

  • Your build environment is implicitly available to the whole process of building/deploying your code; it can be difficult, but not impossible, for an attacker to track who accessed it and how the contents may be exposed (for example via ps eww <PID> or /proc/<PID>/environ).
  • Some applications or build platforms grab the whole environment and print it out for debugging or error reporting. This will require advanced post-processing, as your build engine must scrub the secrets from its infrastructure.
  • Child processes inherit environment variables by default, which may allow unintended access. This breaks the principle of least privilege: any tool or code branch you call to perform some action gets access to your entire environment.
  • Crash and debug logs can (and do) store environment variables in log files. This means plain-text secrets on disk, and it will require bespoke post-processing to scrub them.
  • Putting secrets in ENV variables quickly turns into tribal knowledge. New engineers who are not aware of the sensitive nature of specific environment variables will not handle them with appropriate care (filtering them from sub-processes, etc.).
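To make the first bullets concrete, here is a small shell sketch; the my-app process name and the /run/secrets path are hypothetical.

# Anyone who can read /proc for the process (same user, root, some debug tooling)
# can dump the environment of a running process:
cat /proc/"$(pgrep -o my-app)"/environ | tr '\0' '\n' | grep '^DB_PASSWORD='

# File-based alternative: the secret lives in a tmpfs file with tight permissions
install -d -m 0700 /run/secrets
printf '%s' 's3cr3t-from-your-vault' > /run/secrets/db_password   # placeholder value
chmod 0400 /run/secrets/db_password

# Read the secret only where it is needed; an unexported shell variable is not
# inherited by child processes the way an exported environment variable is.
db_password="$(cat /run/secrets/db_password)"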

Ref: https://blog.diogomonica.com//2017/03/27/why-you-shouldnt-use-env-variables-for-secret-data/

Secrets Management done right

Square created Keywhiz as far back as 2016 (it seems abandoned now), and many vaulting tools today make use of injectors that can dynamically populate variables OR create tmpfs mounts with files containing your secrets. When you choose to read secrets from a temporary file, you can manage their lifecycle more effectively: your application can check the file’s timestamp to learn if/when the contents have changed and signal the running process. This allows database connectors and service connections to transition gracefully whenever key material changes.
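Here is a minimal sketch of that file-watching idea, assuming the secret is injected at /run/secrets/db_password and that a hypothetical my-app process reloads its credentials on SIGHUP.

SECRET_FILE=/run/secrets/db_password
last_mtime="$(stat -c %Y "$SECRET_FILE")"

while sleep 30; do
  mtime="$(stat -c %Y "$SECRET_FILE")"
  if [ "$mtime" != "$last_mtime" ]; then
    last_mtime="$mtime"
    # Key material changed: tell the running process to re-read the file and
    # rebuild its database/service connections gracefully.
    pkill -HUP -x my-app
  fi
done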

Convenience should never trump security, but don’t let Perfect be the enemy of ‘Good’. If you have sensitive data like static strings, certificates used for protection or identity, or connection strings that could be misused, you need to weigh the impact to you or your organization of losing them against your convenience. Learn to set up and use vaulting technology that can provide just enough security to help mitigate the risks associated with credential theft. Like hard work and exercise, it might hurt now, but you will thank me later!

Additionally, here are some API key gotchas (a lost API key can be as dangerous as lost cash) that you should consider whenever you or your teams are building production software; a small pre-commit sketch follows the list.

  • Do not embed API keys directly in code or in your repo source tree:
    • When API keys are embedded in code they may become exposed to the public, when code is cloned. Consider environment variables or files outside of your application’s source tree.
  • Constrain API keys to the IP addresses, referrer URLs, and mobile apps that need them:
    • Limiting who the consumer can be reduces the impact of a compromised API key.
  • Limit specific API keys to be usable only for certain APIs:
    • Creating more keys may seem to increase your exposure, but if you have multiple APIs enabled in your project and scope each key to only the APIs it should be used with, you can easily detect and limit abuse of any one key.
  • Manage the Lifecycle of ALL your API keys:
    • To minimize your exposure to attack, delete any API keys that you no longer need.
  • Rotate your API keys periodically:
    • Rotate your API keys, even when they appear to be used by authorized parties. After the replacement keys are created, your applications should be designed to use the newly-generated keys and discard the old keys.
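As promised above, here is a rough pre-commit sketch for the first gotcha: refuse commits whose staged changes look like key material. The regexes are illustrative (a Google-style ‘AIza…’ prefix and an AWS-style ‘AKIA…’ access key id), not exhaustive; save it as .git/hooks/pre-commit and mark it executable.

#!/bin/sh
# Block commits that appear to contain API keys in the staged diff.
if git diff --cached -U0 | grep -qE 'AIza[0-9A-Za-z_-]{35}|AKIA[0-9A-Z]{16}'; then
  echo "Possible API key found in staged changes - aborting commit." >&2
  exit 1
fi
exit 0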

Ref: https://support.google.com/googleapi/answer/6310037?hl=en

Categories: General

Container Lifecycle Management

July 16, 2023

I wanted to share a big problem that I see developing for many devs as they begin to adopt containers. In an effort to cover some fundamentals, I want to compare virtual machines and containers.

Comparing the two lifecycles side by side shows a few significant differences that can confuse developers who are used to virtual machines. We can outline the benefits, or why you *want* to adopt containers:

  • On any compute instance, you can run 10x as many applications
  • Faster initialization and tear down means better resource management

Now, back in the days when you had separate teams, one running infrastructure and another handling application deployment, you learned to rely on one another. The application team would say ‘works for me’ and cause friction for the infrastructure team. All of that disappears with containers…but…

By adopting containers, teams can overcome those problems by abstracting away the differences of environments, hardware and frameworks. A container that works on a devs laptop, will work anywhere!

What is not made clear to the dev team is that they are now completely responsible for the lifecycle of that container. They must lay down the filesystem and include any libraries needed for their application that are NOT provided by the host that runs them. This creates several new challenges that they are not familiar with.

The most important part of utilizing containers, and the one many dev teams fail to understand, is that they must update the container image as often as the base image they chose becomes vulnerable. (Containers are made up of layers, and the first one is the most important!) Your choice of base image filesystem will come with some core components that are usually updated whenever the OS vendor issues patches (which can be daily or even hourly!). When you choose a base image, you should treat it like a snapshot: those components develop vulnerabilities that are never fixed in your container image unless you rebuild it.
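In practice that means rebuilding on a schedule, not just when your own code changes. Here is a minimal sketch, assuming your Dockerfile’s FROM line points at a mutable, vendor-patched tag; the registry path and tag scheme are placeholders.

# Force a fresh pull of the base image and a clean rebuild, tagged by date
docker build --pull --no-cache -t registry.example.com/my-app:$(date +%Y%m%d) .
docker push registry.example.com/my-app:$(date +%Y%m%d)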

One approach that some devs use is live patching the base image (apt-get, dnf or yum update). Seasoned image developers soon realize that this strategy is just a band-aid: it adds another layer (in addition to the first one) that replaces some of the components at the cost of increasing the image size. Live patching can also leave cached components behind that may or may not fully remove/replace the bad files. Even if you are effective at removing the cached components, you may forget others as you install and compile your application.

The second area is layer optimization. Dev teams fail to reduce the size of their container images, which uses more bandwidth pulling and caching those image layers, which in turn uses more storage on the nodes that cache them. Memory use is still efficient, thanks in part to overlay filesystem optimization, but the other resources are clearly wasted.

Dev teams also fail to see the build environment as an opportunity to use more than one image. A multi-stage build strategy uses several sacrificial images for compilation and transpilation: you assemble your binaries and copy them into a new, clean image, which removes additional vulnerabilities because the intermediate packages are not needed in the final running container image. It also reduces the attack surface and can extend the container’s lifecycle.
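Here is a small multi-stage sketch, assuming a Go service purely for illustration; the golang/alpine tags, module layout and my-app binary name are placeholders. The builder stage (compilers, caches, headers) is discarded, and only the compiled binary lands in the final image.

# Write the two-stage Dockerfile, then build it
cat > Dockerfile <<'EOF'
FROM golang:1.22 AS builder
WORKDIR /src
COPY . .
RUN mkdir -p /out && CGO_ENABLED=0 go build -o /out/my-app ./cmd/my-app

FROM alpine:3.19
COPY --from=builder /out/my-app /usr/local/bin/my-app
USER nobody
ENTRYPOINT ["/usr/local/bin/my-app"]
EOF

docker build -t my-app:multi-stage .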

It takes a very mature team to realize that any application is only as secure as the base image you choose. The really advanced ones ALSO know that keeping your base updated is just as important as keeping ALL your code secure, when dealing with containers.

Categories: General

Run Fedora WSL

June 11, 2023

Hi fellow WSL folks. I wanted to provide some updates for those of you who still want to run Fedora on your Windows Subsystem for Linux install. My aim here is to enable kind/minikube/k3d so you can run Kubernetes, and to do that, you need to enable systemd.

How do you run your own WSL image, you ask? Well, if you are a RedHat lover like I am, you can use the current Fedora Container Base image in just a few steps. All you need is the base filesystem to get started. I will demonstrate how I set up my WSL2 image (this presupposes that you have configured your Windows Subsystem already).

First, let’s start by downloading your container image. Depending on what tools you have, you need to obtain the root filesystem, and you may need to uncompress the files: the download will be a raw file compressed using xz, tar.gz or some other compression tooling. What we want is the filesystem itself, so look for the rootfs; the key is to extract the layer.tar file that contains the filesystem. I used the Fedora Container Base image from here (https://koji.fedoraproject.org/koji/packageinfo?packageID=26387). Once downloaded, you can extract the archive and then extract the layer (a folder with a random name) to get at the layer.tar file.
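A rough sketch of that extraction, assuming the Koji artifact is a .tar.xz archive with a single randomly-named layer directory inside; the filename pattern is a placeholder.

tar -xJf Fedora-Container-Base-*.x86_64.tar.xz   # unpack the compressed archive
layer="$(find . -name layer.tar | head -n 1)"    # the layer directory has a random name
cp "$layer" ~/Downloads/layer.tar                # this is the root filesystem WSL will import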

Then you can import your Fedora Linux into WSL using this command-line example:

wsl --import Fedora c:\Tools\WSL\fedora Downloads\layer.tar

wsl.exe               (usually in your path)

--import             (parameter to import your tarfile)

‘Fedora’              (the name I give it in ‘wsl -l -v’)

‘C:\Tools\WSL’   (the path where I will keep the filesystem)

‘Downloads\…’  (the path where I have my tar file)

If you were successful, you should be able to start your wsl linux using the following command

wsl -d Fedora

(Here I am root and I attempt to update the OS using dnf.)

dnf update

Fedora 38 – x86_64                                                   2.4 MB/s |  83 MB     00:34

Fedora 38 openh264 (From Cisco) – x86_64          2.7 kB/s   | 2.5 kB      00:00

Fedora Modular 38 – x86_64                                    2.9 MB/s | 2.8 MB     00:00

Fedora 38 – x86_64 – Updates                                  6.8 MB/s |  24 MB     00:03

Fedora Modular 38 – x86_64 – Updates                  1.0 MB/s | 2.1 MB     00:02

Dependencies resolved.

Nothing to do.

Complete!

You must now install systemd to add all of the required components:

dnf install systemd

The last part involves activating systemd in WSL. Create a file called /etc/wsl.conf and add the following:

[boot]

systemd=true

That is all of the preparation. Now you can restart the OS, and then check to verify that systemd is working:

systemctl

Categories: General

Build/Maintain your own golden container base images

August 3, 2022

Containers have become essential these days in the optimization of software delivery for any business. They can support the principles of least privilege and least access by removing most of the attack surface associated with exposing services for public consumption. They are the smallest unit that makes up the 4 Cs (CNCF uses this term to describe Cloud, Cluster, Container and Code) and have become an important part of Kubernetes management. Stripping away complexity while keeping the isolation benefits makes them portable, and it almost seems as though they have no downside, right? Containers (and Kubernetes) are ephemeral and support the idea of a fully automated workload, but we don’t patch them like we used to. So how do we ensure that the inevitable vulnerabilities that arise (daily if not weekly) can be mitigated or even remediated? You start over (and over) again and again by using the ‘latest and greatest’ base images. To understand this process, we need to compare it with traditional software deployment strategies and see how they differ.

First there was the base OS build, where we deployed an operating system and struggled to keep it updated. We applied patches to the OS to replace any software components that needed to be replaced. Many organizations struggled with patching cadence when the fleet of systems grew too large to manage, and the speed of patching needed to increase as more and more vulnerabilities were found, which presented a challenge for larger organizations.

Containers start with a very small base image that provides some of the libraries necessary for the code deployed with the image. Developers need to actively minimize the components down to the core capabilities they need (like openssl for https, glibc for OS file and device access, etc.). Failure to minimize the base image can result in adding more and more libraries rather than relying on the benefits of the shared kernel. Best practice requires understanding the OS being used so that the image can be smaller and the attack surface reduced. This results in fewer vulnerabilities introduced at the container level, which can mean a longer runtime for that container image.

In support of this model, it is suggested that we consider how to maintain an approved (secure) base image for any container development so our deployment strategy can make use of secure (known NOT vulnerable) images to start from. The OS manufacturers are always releasing patched versions of their base image file-systems complete with the updated components. If we consider how to turn those updated base OS images into approved secure base images, the benefits provided can increase our productivity while reducing our attack surfaces.

The process proposed here helps us obtain and build base images that have a unique hash associated with them. Since container filesystems (aufs, overlay) can be fingerprinted, we can validate the base image hash through the entire release lifecycle. This provides an added layer of detection against rogue container use and can act as an early-warning mechanism for both development and operations teams. Detecting who is using a known-vulnerable base image lets us send notifications to application owners until those vulnerable images are removed from all of our systems.
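As a sketch of what that validation could look like in a pipeline, assuming podman and the jsi-jammy-08-22 image built later in this post; where the approved hash lives (pipeline config, a signing service) is up to you.

approved="sha256:4d667a55fbdefddff7428b71ece752f7ccb7e881e4046ebf6e962d33ad4565cf"
actual="$(podman image inspect --format '{{.Id}}' jsi-jammy-08-22 | sed 's/^sha256://')"

if [ "$actual" != "${approved#sha256:}" ]; then
  echo "Base image hash does not match the approved golden image - failing the build." >&2
  exit 1
fi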

Let me show you how this can be accomplished for any of the base images that should be approved for consumption. We start with the ‘current build’ of whichever base OS image we want to use. (Remember, whether your nodes run RedHat, Debian, Ubuntu, Oracle, etc., to gain the best performance and make the best use of resources, your choice of base OS should match your node runtime version. Let’s grab the latest version of the Jammy base OS for amd64 – I will use podman to build my OCI-compatible image, but we can also do this with docker.)

Step 1 (we should repeat this whenever there is a major change in this release. The vendor will update this daily)

root : podman import http://cdimage.ubuntu.com/ubuntu-base/jammy/daily/current/jammy-base-amd64.tar.gz jsi-jammy-08-22

Downloading from "http://cdimage.ubuntu.com/ubuntu-base/jammy/daily/current/jammy-base-amd64.tar.gz"
Getting image source signatures
Copying blob 911f3e234304 skipped: already exists
Copying config 4d667a55fb done
Writing manifest to image destination
Storing signatures
sha256:4d667a55fbdefddff7428b71ece752f7ccb7e881e4046ebf6e962d33ad4565cf

(Notice the hash of the base container image above. My image was already downloaded.)

Step 2 (we save the image archive now as a container to be tested)

root : podman save -o jsi-jammy-08-22.tar --format oci-archive jsi-jammy-08-22

Copying blob bb2923fbc64c done
Copying config 4d667a55fb done
Writing manifest to image destination
Storing signatures

(we are using the name of the image and the date [mm/yy] to identify it. You may also use image tags but it is best practice to use unique naming)

Step 3 (let’s save some space and compress it)

root : gzip -9 jsi-jammy-08-22.tar    (results in the image named jsi-jammy-08-22.tar.gz)

The final step is to run it through a security scan to ensure there are no high or critical vulnerabilities contained in this base image.

C:\image>snyk container test oci-archive:jsi-jammy-08-22.tar.gz

Testing oci-archive:jsi-jammy-08-22.tar.gz…

✗ Low severity vulnerability found in tar
Description: NULL Pointer Dereference
  Info: https://snyk.io/vuln/SNYK-UBUNTU2204-TAR-2791257
Introduced through: meta-common-packages@meta
From: meta-common-packages@meta > tar@1.34+dfsg-1build3

✗ Low severity vulnerability found in shadow/passwd
Description: Time-of-check Time-of-use (TOCTOU)
  Info: https://snyk.io/vuln/SNYK-UBUNTU2204-SHADOW-2801886
Introduced through: shadow/passwd@1:4.8.1-2ubuntu2, adduser@3.118ubuntu5, shadow/login@1:4.8.1-2ubuntu2
From: shadow/passwd@1:4.8.1-2ubuntu2
From: adduser@3.118ubuntu5 > shadow/passwd@1:4.8.1-2ubuntu2
From: shadow/login@1:4.8.1-2ubuntu2

✗ Low severity vulnerability found in pcre3/libpcre3
Description: Uncontrolled Recursion
  Info: https://snyk.io/vuln/SNYK-UBUNTU2204-PCRE3-2799820
Introduced through: pcre3/libpcre3@2:8.39-13ubuntu0.22.04.1, grep@3.7-1build1
From: pcre3/libpcre3@2:8.39-13ubuntu0.22.04.1
From: grep@3.7-1build1 > pcre3/libpcre3@2:8.39-13ubuntu0.22.04.1

✗ Low severity vulnerability found in pcre2/libpcre2-8-0
Description: Out-of-bounds Read
  Info: https://snyk.io/vuln/SNYK-UBUNTU2204-PCRE2-2810786
Introduced through: meta-common-packages@meta
From: meta-common-packages@meta > pcre2/libpcre2-8-0@10.39-3build1

✗ Low severity vulnerability found in pcre2/libpcre2-8-0
Description: Out-of-bounds Read
  Info: https://snyk.io/vuln/SNYK-UBUNTU2204-PCRE2-2810797
Introduced through: meta-common-packages@meta
From: meta-common-packages@meta > pcre2/libpcre2-8-0@10.39-3build1

✗ Low severity vulnerability found in ncurses/libtinfo6
Description: Out-of-bounds Read
  Info: https://snyk.io/vuln/SNYK-UBUNTU2204-NCURSES-2801048
Introduced through: ncurses/libtinfo6@6.3-2, bash@5.1-6ubuntu1, ncurses/libncurses6@6.3-2, ncurses/libncursesw6@6.3-2, ncurses/ncurses-bin@6.3-2, procps@2:3.3.17-6ubuntu2, util-linux@2.37.2-4ubuntu3, ncurses/ncurses-base@6.3-2
From: ncurses/libtinfo6@6.3-2
From: bash@5.1-6ubuntu1 > ncurses/libtinfo6@6.3-2
From: ncurses/libncurses6@6.3-2 > ncurses/libtinfo6@6.3-2
and 10 more...

✗ Low severity vulnerability found in krb5/libkrb5support0
Description: Integer Overflow or Wraparound
  Info: https://snyk.io/vuln/SNYK-UBUNTU2204-KRB5-2797765
Introduced through: krb5/libkrb5support0@1.19.2-2, adduser@3.118ubuntu5, krb5/libk5crypto3@1.19.2-2, krb5/libkrb5-3@1.19.2-2, krb5/libgssapi-krb5-2@1.19.2-2
From: krb5/libkrb5support0@1.19.2-2
From: adduser@3.118ubuntu5 > shadow/passwd@1:4.8.1-2ubuntu2 > pam/libpam-modules@1.4.0-11ubuntu2 > libnsl/libnsl2@1.3.0-2build2 > libtirpc/libtirpc3@1.3.2-2ubuntu0.1 > krb5/libgssapi-krb5-2@1.19.2-2 > krb5/libkrb5support0@1.19.2-2
From: adduser@3.118ubuntu5 > shadow/passwd@1:4.8.1-2ubuntu2 > pam/libpam-modules@1.4.0-11ubuntu2 > libnsl/libnsl2@1.3.0-2build2 > libtirpc/libtirpc3@1.3.2-2ubuntu0.1 > krb5/libgssapi-krb5-2@1.19.2-2 > krb5/libk5crypto3@1.19.2-2 > krb5/libkrb5support0@1.19.2-2
and 8 more...

✗ Low severity vulnerability found in gmp/libgmp10
Description: Integer Overflow or Wraparound
  Info: https://snyk.io/vuln/SNYK-UBUNTU2204-GMP-2775169
Introduced through: gmp/libgmp10@2:6.2.1+dfsg-3ubuntu1, coreutils@8.32-4.1ubuntu1, apt@2.4.6
From: gmp/libgmp10@2:6.2.1+dfsg-3ubuntu1
From: coreutils@8.32-4.1ubuntu1 > gmp/libgmp10@2:6.2.1+dfsg-3ubuntu1
From: apt@2.4.6 > gnutls28/libgnutls30@3.7.3-4ubuntu1 > gmp/libgmp10@2:6.2.1+dfsg-3ubuntu1
and 1 more...

✗ Low severity vulnerability found in glibc/libc-bin
Description: Allocation of Resources Without Limits or Throttling
  Info: https://snyk.io/vuln/SNYK-UBUNTU2204-GLIBC-2801292
Introduced through: glibc/libc-bin@2.35-0ubuntu3.1, meta-common-packages@meta
From: glibc/libc-bin@2.35-0ubuntu3.1
From: meta-common-packages@meta > glibc/libc6@2.35-0ubuntu3.1

✗ Low severity vulnerability found in coreutils
Description: Improper Input Validation
  Info: https://snyk.io/vuln/SNYK-UBUNTU2204-COREUTILS-2801226
Introduced through: coreutils@8.32-4.1ubuntu1
From: coreutils@8.32-4.1ubuntu1

✗ Medium severity vulnerability found in perl/perl-base
Description: Improper Verification of Cryptographic Signature
  Info: https://snyk.io/vuln/SNYK-UBUNTU2204-PERL-278908
Introduced through: meta-common-packages@meta
From: meta-common-packages@meta > perl/perl-base@5.34.0-3ubuntu1
--------------------------------------------------------
Tested 102 dependencies for known issues, found 11 issues.

——————————————————————————-

(Look Ma, no high or critical findings!)

Now we have a base OS image ready to be used with any new/existing container build process. Best practices include the ability to digitally sign these images so that build pipelines can verify that any images being included are tested and approved. We can remove the previous version of the base OS image and provide a notice to current/future users that vulnerabilities have been found in the previous version. Dev teams can bump the version in any code they have and begin to test if there are any breaking changes that would require refactoring. Even if there is no change in the code, they must release their containers using these new base OS images to mitigate any vulnerabilities that are introduced.
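One way to implement that signing step is with a tool like cosign; this is my choice for illustration, not something prescribed here, and the registry path and key filenames are placeholders.

cosign generate-key-pair                                   # creates cosign.key / cosign.pub
podman push jsi-jammy-08-22 registry.example.com/base/jsi-jammy:08-22
cosign sign --key cosign.key registry.example.com/base/jsi-jammy:08-22

# In the build pipeline, refuse any base image that does not verify:
cosign verify --key cosign.pub registry.example.com/base/jsi-jammy:08-22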

Categories: security, Work related

After the breach…

July 26, 2022

Accidents happen, and in the security field they are usually called ‘0-days’.

There are (at least) three questions you may be asked by your board, about your AppSec program…

  • Was all the software tested using all of our controls & capabilities that were applicable?
  • Did all the findings that were produced measure below our acceptable risk ratings?
  • Were any/all of the vulnerabilities being fixed according to our accepted remediation timelines?

Let’s unpack that for everyone in an attempt to understand the motivations of some of our brightest ‘captains’. (If I were a board member…)

Misinformation – Does this event signal a lack of efficacy in our overall AppSec program? Do the controls work according to known practices? Perhaps this is an anomaly, an edge case that now requires additional investment? What guarantees do we have that any correction strategy will be effective? If changes are warranted, which part should we focus on: People, Process or Technology?

Jeff says – changing the program can take a large investment for any/all of these. Get back to basics and start with some metrics to see if you have effective coverage first. Prioritize making policy/configuration visible for each implementation of your security tools, and aim to get all of your results into one tool.

Liability – Is our security assessment program effective enough? Does this blind spot show us the inability to understand/avoid these threats at scale? Does this event indicate a systemic failure to detect/prevent this type of threat in the future?

Jeff says – Push results from Pentesting/Red Team/Security Ops back into the threat model and show if/how any improvement can be effective. Moving at the speed of DevOps means running more tests, more often, and correlating the findings to show value through velocity by catching and fixing them quickly.

Profit and Loss – Do we have a software quality problem that may require us to consider an alternative resource pool? If digitization is increasing in cost due to loss, maybe we need to improve our control capabilities to detect/prevent bad software from reaching production? Maybe we should take additional steps to ensure we have the right development teams to avoid mistakes?

Jeff says – to stop the bleeding, you might consider a different source of secure code? You might also consider an adjustment to your secure training programs? Maybe your security analysts are having their own quality issues? Consider raising the threshold of approved tools to be considered? Broker communication for your dev teams to take on more of the security responsibility.

For any leadership who is dealing with CyberSecurity these days, these are all very good questions. Security is Hard, Application Security, Cloud Security, Data Security – they are ALL hard individually so how does any one person/team understand them entirely?

I began to ask myself that question almost a decade ago, during my mobile penetration testing period, when Facebook had created React, which involved more than one software language in the same project. I found a cross-site scripting flaw in the mobile client during testing which I felt pretty confident was NOT a false positive, so I decided to check the static code findings to see if it could be correlated. (We can save the rest of that story for another blog post.)

A light went off in my head: ‘correlation between two or more security tools in a single pane of glass’. What an idea – you need something that can pull in all of the datasets (finding reports) and provide some deduplication (so we don’t give dev teams multiple findings from multiple tools for the same issue), reporting only the findings whose viability we are confident in. I investigated some of the tool vendors and worked with them for a few years while the capability began to mature in the industry.

Today, Gartner calls this space Application Security Orchestration and Correlation, a combination of ‘security orchestration’ (where you apply policy as code) and correlating/deduplicating the results. When done successfully, it also provides a single pane of glass for the operations team or any other orchestration or reporting software in use in your org. Think of it as the one endpoint with all the answers; a way to abstract away the API schemas and various lifecycle changes that come with new and existing tool-sets.

Whether you wish to interconnect all of your existing orchestration tooling for your pipelines & other infrastructure or perhaps you want to build out your security governance capabilities by conducting all of your own security testing, ASOC tools are capable of providing security at the speed of DevOps.

There really is no other way to accomplish it at scale!

Categories: security, Work related

Infosec not your job but your responsibility? How to be smarter than the average bear

July 25, 2022

Want to measure how beneficial it is for your software development teams to learn to think more like an adversary? Just look at the first 20 years of use against the last 10-20?

https://www.theregister.com/2022/07/25/infosec_not_your_job/

Cloud Vulnerabilities & Security Issues Database

July 22, 2022

For those of us who thought the rising list of new CVEs was bad enough, now comes a new list of cloud platform related vulnerabilities that only your choice of CSP is responsible for fixing.

Time to update your threat models folks!

https://www.cloudvulndb.org/

Categories: security

Zero-Day Exploitation of Atlassian Confluence | Volexity

June 3, 2022

There is another 0-day for Atlassian; they are having a tough time with RCEs:
https://www.volexity.com/blog/2022/06/02/zero-day-exploitation-of-atlassian-confluence/

Categories: General