For solutions to these and other problems please contact us at The-Techy.com

Before there was a Security Dept.

April 30, 2024

In response to my wife’s pleas to ‘clean up my room’, I stumbled upon some memorabilia from the early days of my security career.

Those ‘dial-up days’ kept IT Security pretty simple – and what even were CVEs (Common Vulnerabilities and Exposures) back then?

Anyone want my licenses?

Categories: General

Why your business should never accept a wildcard certificate.

April 19, 2024

When starting your web service journey, most developers will only see the benefits of using a certificate with *only* the domain name referenced (a.k.a. a wildcard certificate) and will disregard the risks. On the surface, creating a certificate that covers an infinite number of first-level subdomain (host) records seems like a successful pattern to follow. It is quick and easy to create a single certificate like *.mybank.com and then use it at the load balancer or in your backend-for-frontend (BFF), right? That certificate exists for the benefit of clients, to convince them that the public key it contains is indeed the public key of the genuine SSL server. With a wildcard certificate, the left-most label of the domain name is replaced with an asterisk. This is the literal “wildcard” character, and it tells web clients (browsers) that the certificate is valid for every possible name at that label.

What could possibly go wrong… 🙂

Let’s start at the beginning, with a standard: RFC-2818 – HTTP over TLS.

#1 – RFC-2818, Section 3.1 (Server Identity) clearly states: “If the hostname is available, the client MUST check it against the server’s identity as presented in the server’s Certificate message, in order to prevent man-in-the-middle attacks.”

How does a client check *which* server it is connecting to if the certificate never names a specific host? Maybe it is one of the authorized endpoints behind your load balancer, but maybe it is not. You would need another method of assurance to validate that connecting and sending your data to this endpoint is safe, because with one-way TLS, connecting to “any endpoint” that claims to be part of the group of endpoints you *think* you are connecting to is trivial if your attacker controls your DNS or any network device between you and your connection points.
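If you want to see exactly which names a server is claiming, a quick check like this prints the subject and Subject Alternative Names of the presented certificate (a sketch re-using mybank.com as the stand-in hostname; it assumes a reasonably recent OpenSSL for the -ext option). A wildcard entry such as *.mybank.com means the client will accept any host at that level.

# Inspect the names a live server actually presents (mybank.com is a stand-in)
openssl s_client -connect mybank.com:443 -servername mybank.com </dev/null 2>/dev/null | openssl x509 -noout -subject -ext subjectAltName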

#2 – The acceleration of Phishing began when wildcard certificates became free.

In 2018, Let’s Encrypt – soon to become the world’s largest Certificate Authority (https://www.linuxfoundation.org/resources/case-studies/lets-encrypt) – began to support wildcard certificates. Hackers would eventually use wildcard certificates to their advantage, hiding hostnames and making attacks like ransomware and spear-phishing more versatile.

#3 – Bypasses Certificate Transparency

The entire Web Public Key Infrastructure requires user agents (browsers) and domain owners (servers) to completely trust that Certificate Authorities are tying domains to the right domain owners. Every operating system and every browser must build (or bring) a trusted root store that contains the public keys for all the “trusted” root certificates and, as is often the case, mistakes can be made (https://www.feistyduck.com/ssl-tls-and-pki-history/#diginotar). Certificate Transparency logs can double as phishing-detection systems: a phisher who requests a certificate for a specific hostname, to enhance the legitimate appearance of a phishing site, leaves a public record and is easier to catch. A wildcard certificate bypasses that visibility, because the individual hostnames it will be used for never appear in the logs.
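As a concrete illustration, this is roughly how a defender (or a phishing-detection service) watches the logs for new certificates issued under a domain – a sketch that assumes the public crt.sh search front end and the jq tool are available. A single wildcard issuance tells you far less than per-hostname entries would.

# Query the public crt.sh Certificate Transparency search for certificates issued under a domain
curl -s 'https://crt.sh/?q=%25.mybank.com&output=json' | jq -r '.[].name_value' | sort -u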

#4 – Creates one big broad Trust level across all systems.

Unless all of the systems in your domain have the same trust level, using a wildcard cert to cover every system under your control is a bad idea. Wildcards do not traverse subdomain levels, so you can restrict a wildcard cert to a specific namespace (like *.cdn.mybank.com) and limit its trust by applying it more granularly. Otherwise, if one server or sub-domain is compromised, all sub-domains may be compromised through any number of web-based attacks (SSRF, XSS, CORS abuse, etc.).

#5 – Private Keys must not be shared across multiple hosts.

There are risks associated with using one key for multiple purposes. (Imagine if we all had the same front-door key.) Some companies *can* manage the private keys for you (https://www.entrust.com/sites/default/files/documentation/solution-briefs/ssl-private-key-duplication-wp.pdf), but when the same private key is duplicated across many endpoints, the blast radius increases: a compromise of one endpoint makes it easier to impersonate all of them. If cyber criminals gain access to a wildcard certificate’s private key, they may be able to impersonate any domain protected by that wildcard certificate. If cybercriminals trick a CA into issuing a wildcard certificate for a fictitious company, they can then use that wildcard certificate to create subdomains and establish phishing sites.

#6 – Application Layer Protocols Allowing Cross-Protocol Attack (ALPACA)

The NSA says [PDF] that “ALPACA is a complex class of exploitation techniques that can take many forms” and that it “will confer risk from poorly secured servers to other servers” within the same certificate’s scope. To exploit it, all an attacker needs to do is redirect a victim’s network traffic, intended for the target web app, to the second service (likely achieved through Domain Name System (DNS) poisoning or a man-in-the-middle compromise). Mitigating this vulnerability involves identifying every location where the wildcard certificate’s private key is stored and ensuring that the security posture of each location is commensurate with the requirements of all applications within the certificate’s scope. Not an easy task given you have unlimited choices!

While the jury is still out on whether wildcard certificates are worth the security risks, here are some questions you should ask yourself before taking this shortcut.

– Did you fully document the security risks?

How does the app owner plan to limit any use of wildcard certificates, perhaps to a specific purpose? What detection (or prevention) controls do you have in place to catch wildcard certificates being used in your software projects? Consider how limiting your use of wildcard certificates can help you keep control of your security.

– Are you trying to save time or claiming efficiencies?

Does your business find certificates too difficult to install or too time-consuming to get working? Are you planning many sites hosted on a small amount of infrastructure? Are you expecting to save money by issuing fewer certificates? Consider the tech debt of this decision – public certificate authorities are competing for your money by offering certificate lifecycle management tools, and cloud providers have already started offering Private Certificate Authority services so you can run your own CA!

References:
https://www.rfc-editor.org/rfc/rfc2818#section-3.1
https://venafi.com/blog/wildcard-certificates-make-encryption-easier-but-less-secure

Categories: General

Forget SPAM, why not backdoor software instead

March 30, 2024
XZ is used by secure shell (SSH)

Open Source software is used by so many people that it has become the target of a more sophisticated attacker – one so pernicious that it has its own entry in the Common Weakness Enumeration: CWE-506, Embedded Malicious Code.

Let’s face it: over the last few decades, as the world was being eaten by software*, we became too reliant on other people’s work. We ‘trusted’ what others were doing and relied too much on ‘good people doing great things’ so we could focus more on what we wanted.

The supposed strength of open source (that many eyes review the code instead of just a handful of coders in a cave) also makes us dependent on those same coders, so if they don’t catch the malicious additions, who will?

Luckily, a few posts by a couple of curious sysadmins on Thursday managed to catch this ‘feature’ being added to the source in remarkably short order, which quickly produced the CVE above (CVE-2024-3094), so patch your stuff before you, too, become backdoored.
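If you want a quick sanity check on your own machines, something like this will show what you are running (a sketch for a typical Linux host; the backdoored releases were xz/liblzma 5.6.0 and 5.6.1):

# Check the installed xz/liblzma version (5.6.0 and 5.6.1 were the backdoored releases)
xz --version
# Debian/Ubuntu:
dpkg -l liblzma5 2>/dev/null
# Fedora/RHEL:
rpm -q xz-libs 2>/dev/null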

For my readers, I can only say that if you use software in your business (that should be *all* of you), then you probably use open source, so ‘Verify, then Trust’ – this happens more often than you realize. Invest in your security program, which should include a threat-analysis channel. We are at war with the hacker community, and they only need to succeed once to win!

Old tech for an old Techy

March 29, 2024

How many of you remember the days of the Pentium processors? What about the 386, when we used 30-pin SIMMs (1 MB shown here) and this 5-pin DIN adapter for the PS/2 keyboard?

CPU, Memory and keyboard adapter

I think that piece of memory actually cost me $25 in 1980s dollars! Computers have come a long way since the early days.

Cloud makes it easier to use but remember, it also makes it easier for the hackers 😜.

Kubernetes and Certificates

March 21, 2024

Kubernetes has become the de facto way to run software these days, and many people are simply not aware of just how much it relies on cryptography. The nodes use certificates for almost every connection: the commands between them, the identity of services, encryption at rest and in transit, and, well… every part of the operating model. The cluster functions as its own Public Key Infrastructure, so its security is directly related to how safely and securely all of this key material is managed.

I wanted to raise awareness, for most of my readers, of how heavily many organizations use Kubernetes, and introduce the practices of Certificate Lifecycle Management as they apply to it.

If you have used KinD (Kubernetes in Docker) before, take a look at what happens when you use the latest version of the kind-control-plane (from this image):

kindest/node     v1.29.2   09c50567d34e   4 weeks ago     956MB

You should notice that the Certificate Authority certificate is *still* being created to last 10 years.

docker exec -t kind-control-plane openssl x509 -startdate -enddate -noout -in /etc/kubernetes/pki/ca.crt

notBefore=Mar 12 17:27:26 2024 GMT

notAfter=Mar 10 17:32:26 2034 GMT
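Since kind bootstraps its control plane with kubeadm, you can also ask kubeadm directly for the expiry of every certificate it manages (a quick sketch; it assumes the kubeadm binary ships inside the kindest/node image, which it does, since kind uses it to bootstrap):

docker exec -t kind-control-plane kubeadm certs check-expiration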

This demonstrates that the primary Certificate Authority used to stand up your kind cluster has a lifecycle of 10 years. That seems overly permissive, but for a root CA it is still well under the acceptable limit. Why don’t we try to determine what our production GKE clusters are using by executing this command…

gcloud container clusters describe --zone <zone> <clustername> --project <projectname> --format="value(masterAuth.clusterCaCertificate)" | base64 --decode | openssl x509 -startdate -enddate -noout

When you execute this command on your tenant, what do you think we should see as the output? Would we see that you used your own root CA to create an intermediate CA for Google to use in your clusters?

Ideally, a mature organization would be using a private CA (either external or, now, a Google service offering) to secure all certificates being created and verified inside each GKE cluster and its nodes. (This might be a little too much to ask – we must walk before we learn to run.)

What about letting Google, who is probably creating the cluster for you, create the Kubernetes root CA? Perhaps they would follow the best practice for root certificates and limit it to 15 years? (Note: Google is *also* proposing a maximum term limit of 7 years for any publicly accessible root CA certificate: https://www.chromium.org/Home/chromium-security/root-ca-policy/moving-forward-together/.)

Well, sadly, they continue to create the default term for their GKE Kubernetes clusters of…

notBefore=Mar 17 15:43:17 2022 GMT

notAfter=Mar  9 16:43:17 2052 GMT

30 years!

So now that we know the root CA anchoring all certificates, for *any* purpose, within your GKE cluster (at least this one) has a default lifetime of 30 years, you may be asking: is there anything we can do about that? I mean, if that key becomes compromised, we could no longer trust ANY activity: no service mesh, no identity, not even the nodes trying to communicate with etcd (the Kubernetes database)…. Well, surprisingly…. YES, you can do something about it – you can rotate the credentials yourself.

In this reference from the GKE help pages, we learn that we can perform a credential rotation as a way to mitigate the loss or compromise of your root CA keys! Great, but how hard is it?

gcloud container clusters update CLUSTER_NAME --region REGION_NAME --start-credential-rotation

gcloud container clusters upgrade CLUSTER_NAME --location=LOCATION --cluster-version=VERSION

You can ‘upgrade’ to the same version each cluster is already running to force each node to request and use newly created certificates that will be signed by the new Kubernetes root CA certificate. Once you are sure that all your nodes have been rotated and are using new credentials, you can complete the rotation.

gcloud container clusters update CLUSTER_NAME --region REGION_NAME --complete-credential-rotation
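Once the rotation is complete, re-running the earlier describe command should show a freshly minted cluster CA (a sketch re-using the same command with the placeholder names from above):

# Verify: the cluster CA should now show a fresh notBefore date and a new notAfter date
gcloud container clusters describe CLUSTER_NAME --region REGION_NAME --format="value(masterAuth.clusterCaCertificate)" | base64 --decode | openssl x509 -startdate -enddate -noout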

So why don’t more organizations do this if recovery is so easy? Well, if you are deploying your clusters using defaults, you probably aren’t aware of this little fact. Security people have learned that it is better to be safe than sorry. Mature organizations become aware of these gotchas early on in their use of Kubernetes (over 10 years old now), first learning how to correct bad outcomes and later learning how to avoid them by managing every part of the Kubernetes platform to help assure it.

Kubernetes *may* be the greatest invention since Virtualization, but with great power comes great responsibility. Don’t let an old hacker (like me) get the better of your organization!

Certificate Management may be hard, but you don’t have much choice any longer.

March 17, 2024

Ever since the 1990s, when Netscape₁ first introduced “Secure Sockets”, we have turned this thing called “The Internet” into an ecommerce engine worth over 3 trillion USD today. Statistics show that its growth is expected to top 5 trillion USD by 2029₂. Efforts to secure the Internet have been going on for three decades since then, so why should we be alarmed now? Well, it involves two of the most popular subjects of our modern era: Artificial Intelligence and Quantum Computing.

AI has proven to be highly effective at finding defects in software₃ – defects that humans continue to create – and Quantum Computers will speed up computation by a factor of 10x. Think of a hacker who never sleeps, has no preconceived notions about whether something can be accomplished, and simply sets itself on a target: guessing your password, or even breaking the encryption keys for your secure session with your bank. Is there any doubt that it will succeed… eventually, now that it is 10x faster? Does this sound like a George Orwell novel? It should; that time has arrived!

Traditional certificates rely on the factorization of products of prime numbers. That is just a fancy way of saying 3 times 5 equals 15 (although this is an oversimplification). When the numbers involved are hundreds of digits long, computers are needed to perform these calculations, and reversing them would take years or even centuries. Now enter the Quantum computer, which performs these calculations at dizzying speeds, and you are no longer safe. The only answer to help treat those risks is to replace those keys more often than once or twice every few years.

The scope of the problem becomes apparent when you see how prevalent traditional certificates are in our electronic world. Major use cases are not limited to the SSL/TLS certificates that protect your ecommerce or banking sites. Certificates are used for integrity verification, in encryption, and for proof of ownership or tampering. They are also used for identity (like secure shell or tokens) and in systems that rely on trust. With AI widely in use today and quantum computing approaching, these systems are at risk if you do not replace those certificates on a regular basis.

Google wants to shorten the lifecycle of certificates₄ to help manage the risk associated with SSL/TLS certificate usage on the Internet. By replacing the secrets more often, it becomes harder to guess them. Let’s Encrypt has been successfully issuing 90-day certificates for the better part of a decade, and there are many client implementations₅ of the ACME standard that help accomplish this.

This begs the question, “How do we manage hundreds of thousands of certificates at speeds that would take an army to accomplish?”

Automation is the key! Maybe you can ask your friendly AI prompt to help you accomplish this before someone uses it to crack your password and empty your bank account? 😊
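To give a flavour of what that automation looks like in practice, here is a minimal sketch (it assumes an ACME client such as certbot and a certificate at a hypothetical path; real deployments would run this from a scheduler):

# Renew when the certificate is within 30 days of expiry (path and tooling are assumptions)
CERT=/etc/letsencrypt/live/example.com/cert.pem
if ! openssl x509 -checkend $((30*24*3600)) -noout -in "$CERT"; then
  certbot renew --quiet    # the ACME client renews anything close to expiry
fi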

WSL Christmas with Fedora

December 24, 2023

Hey folks – Merry Xmas.

For those of you playing with the Windows Subsystem for Linux, I wanted to share a great recipe and the steps required for baking your very own Xmas Fedora-with-Docker distro running on WSL (a consolidated command sketch follows the recipe). It makes a great holiday gift to share with your friends and family (because who doesn’t use containers these days!)

1. Download your new base image from Koji at https://koji.fedoraproject.org/koji/packageinfo?packageID=26387

2. Carefully open the package and extract the file called layer.tar

3. Mix well for a few seconds (here we have renamed the layer.tar file and used ‘fedora39’ as the image name, but you may adapt this ‘to your own taste’)

4. Bake in the oven for a few minutes (here we start the distro, perform an update and then install systemd)

5. Decorate it with a custom file to activate systemd and let it cool down (restart)

6. Package your cake with the following items (dnf-plugins-core)

7. Add the Docker community repository for the Docker binaries and install the following components (docker-ce, docker-ce-cli, containerd.io, docker-buildx-plugin and docker-compose-plugin)
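For the cooks who prefer written-out measurements, here is a rough consolidation of the recipe above (a sketch only: the ‘fedora39’ name and C:\Tools\WSL path are my own choices, the repo URL follows Docker’s documented Fedora instructions, the first two commands run from Windows, and the rest run inside the distro):

# From Windows (PowerShell/cmd): import the extracted rootfs and start the distro
wsl --import fedora39 C:\Tools\WSL\fedora39 .\layer.tar
wsl -d fedora39

# Inside the distro: update, install systemd and enable it for WSL
# (then run 'wsl --shutdown' from Windows and start the distro again)
dnf update -y && dnf install -y systemd
printf '[boot]\nsystemd=true\n' > /etc/wsl.conf

# Add the Docker community repository and install the Docker components
dnf install -y dnf-plugins-core
dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo
dnf install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin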

That is it – Serve to friends and family!

Happy Holidays to all my readers and thanks for a great year!

Categories: Work related

Where *can* I put my secrets then?

September 9, 2023

I have spent a large portion of my IT career hacking other people’s software, so I thought it was time to give back to the community I work in and talk about secrets. Whether they are passwords, key material (SSH keys, asymmetric or symmetric keys) or configuration elements, these are all things that should be considered ‘sensitive’.

Whether you are an old timer who may still be modifying a monolithic codebase or you run a modern, cloud-enabled shop that builds event-driven microservices, the Twelve-Factor App is a great place to start. The link provided is the “12 Factor App” methodology, which outlines best practices for building modern software-as-a-service applications. Adopted as your strategy, it can provide a basis for software development that transcends any language or shop size, and it should play a part in any Secure Software Development Lifecycle. In Section III (Config), they explain the need to separate config from code, but I feel this needs further clarity.

There are two schools of thought among developers/engineers when it comes to handling secrets: you can load them into environment variables (as outlined in the methodology above), or you can persist them into protected files that are loaded from an external secret manager and mounted only where they are needed. One thing is clear: you should never persist them alongside your code.

Let’s explore the most common, and arguably the easiest, approach – environment variables – and the risks of someone gaining unauthorized access to your secrets:

  • Your environment is implicitly available to every process involved in building/deploying your code. It can be difficult to track who accessed it, and the contents can be exposed with something as simple as ps -eww <PID> (a quick demonstration follows this list).
  • Some applications or build platforms grab the whole environment and print it out for debugging or error reporting. This will require post-processing, as your build engine must scrub the secrets from its infrastructure.
  • Child processes inherit all environment variables by default, which may allow unintended access. This breaks the principle of least privilege whenever you call another tool or code branch that performs some action and quietly receives your whole environment.
  • Crash and debug logs can and do store environment variables in log files. That means plain-text secrets on disk, and bespoke post-processing to scrub them.
  • Putting secrets in ENV variables quickly turns into tribal knowledge. New engineers who are not aware of the sensitive nature of specific environment variables will not handle them with care (filtering them from sub-processes, etc.).
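As a quick demonstration of the first and third points (a sketch; DB_PASSWORD is a made-up secret), notice how an environment variable travels with every child process and can be read back out of procfs by anything running as the same user:

# Environment variables are inherited by children and readable via procfs
export DB_PASSWORD='s3cr3t'                          # hypothetical secret placed in the environment
sleep 300 &                                          # stand-in for a long-running app; it inherits DB_PASSWORD
tr '\0' '\n' < /proc/$!/environ | grep DB_PASSWORD   # visible to the process owner, root and debug tooling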

Ref: https://blog.diogomonica.com//2017/03/27/why-you-shouldnt-use-env-variables-for-secret-data/

Secrets Management done right

Keywhiz (from Square) has been around since at least 2016 (it seems abandoned now), and many vaulting tools today make use of injectors that can dynamically populate variables OR create tmpfs mounts with files containing your secrets. When you prefer to read secrets from a temporary file, you can manage the lifecycle more effectively: your application can check the file’s timestamp to learn if and when the contents have changed and signal the running process. This allows database connectors and service connections to gracefully transition whenever key material changes.
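A minimal sketch of that pattern (the paths and the HUP signal are assumptions; real injectors such as the Vault agent do this for you) looks something like this:

# Watch a mounted secret file and signal the app when the material is rotated
SECRET_FILE=/run/secrets/db_password                # e.g. a Docker/Kubernetes secret mount (assumed path)
LAST=$(stat -c %Y "$SECRET_FILE")
while sleep 10; do
  NOW=$(stat -c %Y "$SECRET_FILE")
  if [ "$NOW" != "$LAST" ]; then
    LAST=$NOW
    kill -HUP "$(cat /var/run/myapp.pid)"           # hypothetical: tell the running process to re-read the secret
  fi
done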

Convenience should never trump security, but don’t let perfect be the enemy of ‘good’. If you have sensitive data – static strings, certificates used for protection or identity, or connection strings that could be misused – you need to weigh the impact to you or your organization of losing them against your convenience. Learn to set up and use vaulting technology that can provide just enough security to help mitigate the risks associated with credential theft. Like hard work and exercise, it might hurt now, but you will thank me later!

Additionally, here are some API key gotchas (losing an API key is as dangerous as losing cash) that you should consider whenever you or your teams are building production software.

  • Do not embed API keys directly in code or in your repo source tree:
    • When API keys are embedded in code, they may become exposed to the public whenever the code is cloned. Consider environment variables or files outside of your application’s source tree (and see the scanning sketch after this list).
  • Constrain API keys to the IP addresses, referrer URLs, and mobile apps that need them:
    • Limiting who the consumer can be reduces the impact of a compromised API key.
  • Limit specific API keys to be usable only for certain APIs:
    • Issuing more keys may seem like it increases your exposure, but if you have multiple APIs enabled in your project and a key should only be used with some of them, scoping keys this way makes it easier to detect and limit abuse of any one API key.
  • Manage the lifecycle of ALL your API keys:
    • To minimize your exposure to attack, delete any API keys that you no longer need.
  • Rotate your API keys periodically:
    • Rotate your API keys even when they appear to be used only by authorized parties. After the replacement keys are created, your applications should switch to the newly generated keys and discard the old ones.
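One practical guard for that first point is to scan before code ever leaves your machine. Here is a sketch using the open-source gitleaks scanner (an assumption on my part; any secret scanner will do):

# Scan the repository (history included) for committed secrets, and guard staged changes
gitleaks detect --source . -v
gitleaks protect --staged -v     # suitable as a pre-commit hook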

Ref: https://support.google.com/googleapi/answer/6310037?hl=en

Categories: General

Container Lifecycle Management

July 16, 2023

I wanted to share a big problem that I see developing for many devs as they begin to adopt containers. To ground us in some fundamentals, I want to compare virtual machines and containers.

The animation (above) shows a few significant differences that can confuse many developers who are used to virtual machine lifecycles. We can outline the benefits, or why you *want* to adopt containers:

  • On any compute instance, you can run 10x as many applications
  • Faster initialization and tear down means better resource management

Now, in the days when you had separate teams – one running infrastructure and another handling application deployment – you learned to rely on one another. The application team would say ‘works for me’ and cause friction for the infrastructure team. All of that disappears with containers…but…

By adopting containers, teams can overcome those problems by abstracting away the differences in environments, hardware and frameworks. A container that works on a dev’s laptop will work anywhere!

What is not made clear to the dev team is that they are now completely responsible for the lifecycle of that container. They must lay down the filesystem and include any libraries their application needs that are NOT provided by the host that runs it. This creates several new challenges that they are not familiar with.

The most important part of utilizing containers, which many dev teams fail to understand, is that they must update the container image as often as the base image they chose becomes vulnerable. (Containers are made up of layers, and the first one is the most important!) Your choice of base image filesystem will come with core components that are normally updated whenever the OS vendor issues patches (which can be daily or even hourly!). A base image is effectively a snapshot: once you build on it, those components develop vulnerabilities that are never fixed in your container image unless you rebuild.
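A quick way to see how stale a chosen base already is, and to rebuild on a fresh one, could look like this (a sketch assuming the open-source Trivy scanner and a hypothetical python:3.12-slim base):

# Scan the base image for known CVEs, then rebuild on a freshly pulled base instead of live patching
trivy image python:3.12-slim            # lists outstanding vulnerabilities, grouped by layer
docker build --pull -t myapp:latest .   # --pull re-fetches the latest base image for the FROM line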

One approach some devs use is live patching the base image (apt-get, dnf or yum update). Seasoned image developers soon realize that this strategy is just a band-aid: it adds another layer (on top of the first one) that replaces some of the components at the cost of increasing the image size. Live patching can also leave cached components behind that may or may not fully remove or replace the bad files. Even if you are effective at removing the cached components, you may forget others as you install and compile your application.

The second approach involves layer optimization. Dev teams often fail to reduce the size of their container images, which uses more bandwidth pulling and caching those image layers, which in turn uses more storage on the nodes that cache them. Memory use is still efficient thanks in part to overlay filesystem optimization, but the other resources are clearly wasted.

Dev teams also fail to see the build environment as an opportunity to use more than one image. A multi-stage build strategy uses several sacrificial images to do the compilation and transpilation. Assembling your binaries and copying them into a new, clean image removes additional vulnerabilities, because the intermediate packages are not needed in the final running container image. It also reduces the attack surface and can extend the container’s lifecycle.
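A minimal sketch of that multi-stage pattern (Go and the distroless runtime image are stand-ins of my choosing; adapt to your own stack), written as a here-document so the whole thing is copy-and-paste-able:

cat > Dockerfile <<'EOF'
# Stage 1: a "sacrificial" build image carrying the full toolchain
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Stage 2: copy only the finished binary into a minimal runtime image
FROM gcr.io/distroless/static
COPY --from=build /app /app
ENTRYPOINT ["/app"]
EOF
docker build -t myapp:latest .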

It takes a very mature team to realize that any application is only as secure as the base image you choose. The really advanced ones ALSO know that keeping your base updated is just as important as keeping ALL your code secure, when dealing with containers.

Categories: General

Run Fedora WSL

June 11, 2023

Hi fellow WSL folks. I wanted to provide some updates for those of you who still want to run Fedora on your Windows Subsystem for Linux install. My aim here is to enable kind/minikube/k3d so you can run Kubernetes, and to do that, you need to enable systemd.

How do you run your own WSL image, you ask? Well, if you are a Red Hat lover like I am, you can use the current Fedora Cloud image in just a few steps. All you need is the base filesystem to get started. I will demonstrate how I set up my WSL2 image (this presupposes that you have configured your Windows Subsystem for Linux already).

First, let’s start by downloading your container image. Depending on what tools you have, you need to obtain the root filesystem and may then need to uncompress it; you will have downloaded a raw file compressed with xz, a tar.gz, or something produced by other compression tooling. What we want is the filesystem, so look for the rootfs. The key is to extract the layer.tar file that contains it. I used the Fedora Container Base image from here (https://koji.fedoraproject.org/koji/packageinfo?packageID=26387). Once downloaded, you can extract the tar file and then extract the layer (a folder with a random name) to get at the layer.tar file.

Then you can import your Fedora Linux for WSL using this command-line example:

wsl --import Fedora c:\Tools\WSL\fedora Downloads\layer.tar

wsl.exe               (usually in your path)

--import             (parameter to import your tarfile)

‘Fedora’              (the name I give it in ‘wsl -l -v’)

‘C:\Tools\WSL’   (the path where I will keep the filesystem)

‘Downloads\…’  (the path where I have my tar file)

If you were successful, you should be able to start your WSL Linux distro using the following command

wsl -d Fedora

(Here I am root and attempt to update the OS using dnf.)

dnf update

Fedora 38 – x86_64                                                   2.4 MB/s |  83 MB     00:34

Fedora 38 openh264 (From Cisco) – x86_64          2.7 kB/s   | 2.5 kB      00:00

Fedora Modular 38 – x86_64                                    2.9 MB/s | 2.8 MB     00:00

Fedora 38 – x86_64 – Updates                                  6.8 MB/s |  24 MB     00:03

Fedora Modular 38 – x86_64 – Updates                  1.0 MB/s | 2.1 MB     00:02

Dependencies resolved.

Nothing to do.

Complete!

You must install systemd now to add all of the components

dnf install systemd

The last part involves activating systemd in WSL. Create a file called /etc/wsl.conf and add the following:

[boot]

systemd=true

That is all of the preparation. Now restart the distro (run ‘wsl --shutdown’ from Windows, then start it again) and verify that systemd is working:

systemctl
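If you want to be doubly sure (a small optional check), PID 1 inside the distro should now be systemd:

ps -p 1 -o comm=     # should print 'systemd' once the [boot] setting has taken effect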

Categories: General