Archive

Archive for the ‘Work related’ Category

Build/Maintain your own golden container base images

August 3, 2022

Containers have become essential to optimizing software delivery for any business. They support the principles of least privilege and least access by removing much of the attack surface associated with exposing services for public consumption. They are the smallest unit of the '4Cs' (the term the CNCF uses for Cloud, Cluster, Container and Code) and have become an important part of Kubernetes management. Their isolation makes them portable while stripping away complexity, so it almost seems as though they have no downside, right? Containers (and Kubernetes) are ephemeral and support fully automated workloads, but we don't patch them the way we used to. So how do we ensure that the inevitable vulnerabilities that arise (weekly, if not daily) are mitigated or even remediated? You start over (and over) again, each time from the 'latest and greatest' base images. To understand this process, we need to compare it with traditional software deployment strategies and see how they differ.

First there was the base OS build, where we deployed an operating system and struggled to keep it updated. We applied patches to the OS to replace any software components that needed updating. Many organizations struggled with patching cadence once their fleet of systems grew too large to manage, and the pace of patching had to increase as more and more vulnerabilities were found, which presented a real challenge for larger organizations.

Containers start from a very small base image that provides only the libraries required by the code deployed with the image. Developers need to actively minimize the components down to the core capabilities (like openssl for HTTPS, glibc for OS file and device operations, etc.). Failure to minimize the base image results in adding more and more libraries rather than relying on the benefits of a shared kernel. Best practice requires understanding the OS in use so that the image can be kept small and the attack surface reduced. This means fewer vulnerabilities introduced at the container level, which can translate into a longer safe runtime for that container image.

In support of this model, consider maintaining an approved (secure) base image for all container development, so that your deployment strategy starts from images that are known NOT to be vulnerable. OS vendors regularly release patched versions of their base image filesystems, complete with updated components. If we turn those updated base OS images into approved secure base images, we can increase productivity while reducing our attack surface.

The process proposed here helps us obtain and build base images that carry a unique hash. Since container filesystems (aufs, overlay) can be fingerprinted, we can validate the base image hash through the entire release life-cycle. This provides an added layer of detection against rogue container use and an early warning mechanism for both development and operations teams. Detecting who is using a known vulnerable base image lets us notify application owners until those images are removed from all of our systems.
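As a sketch of how such a fingerprint check might work in a pipeline (the file names here are illustrative, not part of any standard tooling), the digest of a saved archive can be recorded in an allow-list once and re-verified every time the image is consumed:

```shell
# Record the digest of a saved image archive in an allow-list, then
# re-verify it before use. 'image.tar' stands in for a real OCI archive.
printf 'example image bytes' > image.tar
sha256sum image.tar > approved-digests.txt

# Later, before the archive is consumed by a build:
if sha256sum -c approved-digests.txt >/dev/null 2>&1; then
  echo "image approved"
else
  echo "REJECT: digest mismatch"
fi
```

Any modification to the archive, malicious or accidental, flips the check to the reject branch, which is exactly the early-warning behavior described above.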

Let me show you how this can be accomplished for any base image that should be approved for consumption. We start with the current build of whichever base OS image we want to use. (Remember: whether your nodes run Red Hat, Debian, Ubuntu, Oracle, etc., your choice of base OS should match your node runtime version to get the best performance and resource usage. Let's grab the latest version of the Jammy base OS for amd64 – I will use podman to build my OCI-compatible image, but we can also do this with docker.)

Step 1 (repeat this whenever there is a major change in the release; the vendor updates this daily)

root : podman import http://cdimage.ubuntu.com/ubuntu-base/jammy/daily/current/jammy-base-amd64.tar.gz jsi-jammy-08-22

Downloading from "http://cdimage.ubuntu.com/ubuntu-base/jammy/daily/current/jammy-base-amd64.tar.gz"
Getting image source signatures
Copying blob 911f3e234304 skipped: already exists
Copying config 4d667a55fb done
Writing manifest to image destination
Storing signatures
sha256:4d667a55fbdefddff7428b71ece752f7ccb7e881e4046ebf6e962d33ad4565cf

(Notice the hash of the base container image above. My image was already downloaded.)

Step 2 (we save the image archive now as a container to be tested)

root : podman save -o jsi-jammy-08-22.tar --format oci-archive jsi-jammy-08-22

Copying blob bb2923fbc64c done
Copying config 4d667a55fb done
Writing manifest to image destination
Storing signatures

(We name the image with the date [mm-yy] to identify it. You may also use image tags, but it is best practice to use unique naming.)

Step 3 (let's save some space and compress it)

root : gzip -9 jsi-jammy-08-22.tar (results in the image named jsi-jammy-08-22.tar.gz)
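Before distributing the compressed archive, it is worth confirming that the compression round-trips cleanly and noting a digest to store alongside the artifact (sketched below with a stand-in file name):

```shell
# Create a stand-in archive, compress it as in Step 3, then verify the
# gzip container is intact and note its digest for later verification.
printf 'layer data' > jsi-demo.tar
gzip -9 jsi-demo.tar                      # produces jsi-demo.tar.gz
gzip -t jsi-demo.tar.gz && echo "archive OK"
sha256sum jsi-demo.tar.gz                 # record this next to the artifact
```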

The final step is to run it through a security scan to ensure there are no high or critical vulnerabilities in this base image.

C:\image>snyk container test oci-archive:jsi-jammy-08-22.tar.gz

Testing oci-archive:jsi-jammy-08-22.tar.gz…

✗ Low severity vulnerability found in tar
Description: NULL Pointer Dereference
  Info: https://snyk.io/vuln/SNYK-UBUNTU2204-TAR-2791257
Introduced through: meta-common-packages@meta
From: meta-common-packages@meta > tar@1.34+dfsg-1build3

✗ Low severity vulnerability found in shadow/passwd
Description: Time-of-check Time-of-use (TOCTOU)
  Info: https://snyk.io/vuln/SNYK-UBUNTU2204-SHADOW-2801886
Introduced through: shadow/passwd@1:4.8.1-2ubuntu2, adduser@3.118ubuntu5, shadow/login@1:4.8.1-2ubuntu2
From: shadow/passwd@1:4.8.1-2ubuntu2
From: adduser@3.118ubuntu5 > shadow/passwd@1:4.8.1-2ubuntu2
From: shadow/login@1:4.8.1-2ubuntu2

✗ Low severity vulnerability found in pcre3/libpcre3
Description: Uncontrolled Recursion
  Info: https://snyk.io/vuln/SNYK-UBUNTU2204-PCRE3-2799820
Introduced through: pcre3/libpcre3@2:8.39-13ubuntu0.22.04.1, grep@3.7-1build1
From: pcre3/libpcre3@2:8.39-13ubuntu0.22.04.1
From: grep@3.7-1build1 > pcre3/libpcre3@2:8.39-13ubuntu0.22.04.1

✗ Low severity vulnerability found in pcre2/libpcre2-8-0
Description: Out-of-bounds Read
  Info: https://snyk.io/vuln/SNYK-UBUNTU2204-PCRE2-2810786
Introduced through: meta-common-packages@meta
From: meta-common-packages@meta > pcre2/libpcre2-8-0@10.39-3build1

✗ Low severity vulnerability found in pcre2/libpcre2-8-0
Description: Out-of-bounds Read
  Info: https://snyk.io/vuln/SNYK-UBUNTU2204-PCRE2-2810797
Introduced through: meta-common-packages@meta
From: meta-common-packages@meta > pcre2/libpcre2-8-0@10.39-3build1

✗ Low severity vulnerability found in ncurses/libtinfo6
Description: Out-of-bounds Read
  Info: https://snyk.io/vuln/SNYK-UBUNTU2204-NCURSES-2801048
Introduced through: ncurses/libtinfo6@6.3-2, bash@5.1-6ubuntu1, ncurses/libncurses6@6.3-2, ncurses/libncursesw6@6.3-2, ncurses/ncurses-bin@6.3-2, procps@2:3.3.17-6ubuntu2, util-linux@2.37.2-4ubuntu3, ncurses/ncurses-base@6.3-2
From: ncurses/libtinfo6@6.3-2
From: bash@5.1-6ubuntu1 > ncurses/libtinfo6@6.3-2
From: ncurses/libncurses6@6.3-2 > ncurses/libtinfo6@6.3-2
and 10 more...

✗ Low severity vulnerability found in krb5/libkrb5support0
Description: Integer Overflow or Wraparound
  Info: https://snyk.io/vuln/SNYK-UBUNTU2204-KRB5-2797765
Introduced through: krb5/libkrb5support0@1.19.2-2, adduser@3.118ubuntu5, krb5/libk5crypto3@1.19.2-2, krb5/libkrb5-3@1.19.2-2, krb5/libgssapi-krb5-2@1.19.2-2
From: krb5/libkrb5support0@1.19.2-2
From: adduser@3.118ubuntu5 > shadow/passwd@1:4.8.1-2ubuntu2 > pam/libpam-modules@1.4.0-11ubuntu2 > libnsl/libnsl2@1.3.0-2build2 > libtirpc/libtirpc3@1.3.2-2ubuntu0.1 > krb5/libgssapi-krb5-2@1.19.2-2 > krb5/libkrb5support0@1.19.2-2
From: adduser@3.118ubuntu5 > shadow/passwd@1:4.8.1-2ubuntu2 > pam/libpam-modules@1.4.0-11ubuntu2 > libnsl/libnsl2@1.3.0-2build2 > libtirpc/libtirpc3@1.3.2-2ubuntu0.1 > krb5/libgssapi-krb5-2@1.19.2-2 > krb5/libk5crypto3@1.19.2-2 > krb5/libkrb5support0@1.19.2-2
and 8 more...

✗ Low severity vulnerability found in gmp/libgmp10
Description: Integer Overflow or Wraparound
  Info: https://snyk.io/vuln/SNYK-UBUNTU2204-GMP-2775169
Introduced through: gmp/libgmp10@2:6.2.1+dfsg-3ubuntu1, coreutils@8.32-4.1ubuntu1, apt@2.4.6
From: gmp/libgmp10@2:6.2.1+dfsg-3ubuntu1
From: coreutils@8.32-4.1ubuntu1 > gmp/libgmp10@2:6.2.1+dfsg-3ubuntu1
From: apt@2.4.6 > gnutls28/libgnutls30@3.7.3-4ubuntu1 > gmp/libgmp10@2:6.2.1+dfsg-3ubuntu1
and 1 more...

✗ Low severity vulnerability found in glibc/libc-bin
Description: Allocation of Resources Without Limits or Throttling
  Info: https://snyk.io/vuln/SNYK-UBUNTU2204-GLIBC-2801292
Introduced through: glibc/libc-bin@2.35-0ubuntu3.1, meta-common-packages@meta
From: glibc/libc-bin@2.35-0ubuntu3.1
From: meta-common-packages@meta > glibc/libc6@2.35-0ubuntu3.1

✗ Low severity vulnerability found in coreutils
Description: Improper Input Validation
  Info: https://snyk.io/vuln/SNYK-UBUNTU2204-COREUTILS-2801226
Introduced through: coreutils@8.32-4.1ubuntu1
From: coreutils@8.32-4.1ubuntu1

✗ Medium severity vulnerability found in perl/perl-base
Description: Improper Verification of Cryptographic Signature
  Info: https://snyk.io/vuln/SNYK-UBUNTU2204-PERL-278908
Introduced through: meta-common-packages@meta
From: meta-common-packages@meta > perl/perl-base@5.34.0-3ubuntu1
--------------------------------------------------------
Tested 102 dependencies for known issues, found 11 issues.


(Look Ma, no high or critical findings!)

Now we have a base OS image ready to be used with any new or existing container build process. Best practices include digitally signing these images so that build pipelines can verify that any images being included are tested and approved. We can remove the previous version of the base OS image and notify current and future users that vulnerabilities were found in it. Dev teams can bump the version in their code and test whether there are any breaking changes that would require refactoring. Even if there is no change in the code, they must re-release their containers using the new base OS images to mitigate any vulnerabilities that were introduced.
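One hedged way a pipeline could enforce use of the approved image, sketched below, is to require that every Containerfile references the base by digest rather than by mutable tag. The registry path is illustrative; the digest is the one produced by the import in Step 1:

```shell
# Write a Containerfile that pins the base image by digest; a build will
# then fail closed if the referenced content ever changes.
cat > Containerfile <<'EOF'
FROM localhost/jsi-jammy-08-22@sha256:4d667a55fbdefddff7428b71ece752f7ccb7e881e4046ebf6e962d33ad4565cf
EOF

# A simple pipeline gate: reject any FROM line without a digest pin.
if grep -E '^FROM ' Containerfile | grep -qv '@sha256:'; then
  echo "REJECT: unpinned base image"
else
  echo "base image is digest-pinned"
fi
```

Paired with image signing, this makes it hard for a stale or rogue base image to slip into a build unnoticed.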

Categories: security, Work related

After the breach…

July 26, 2022

Accidents happen, and in the security field they are usually called a '0-day'.

There are (at least) three questions you may be asked by your board, about your AppSec program…

  • Was all the software tested using all of our controls & capabilities that were applicable?
  • Did all the findings that were produced measure below our acceptable risk ratings?
  • Were any/all of the vulnerabilities being fixed according to our accepted remediation timelines?

Let's unpack that for everyone in an attempt to understand the motivations of some of our brightest 'captains'. (If I were a board member…)

Misinformation – Does this event signal a lack of efficacy in our overall AppSec program? Do the controls work according to known practices? Perhaps this is an anomaly, an edge case that now requires additional investment? What guarantees do we have that any correction strategy will be effective? If changes are warranted, where should we focus: People, Process or Technology?

Jeff says – changing the program can require a large investment in any or all of these. Get back to basics and start with some metrics to see whether you have effective coverage first. Prioritize making policy and configuration visible for each implementation of your security tools, and aim to collect all of your results in one tool.

Liability – Is our security assessment program effective enough? Does this blind spot show us the inability to understand/avoid these threats at scale? Does this event indicate a systemic failure to detect/prevent this type of threat in the future?

Jeff says – Push results from Pentesting/Red Team/Security Ops back into the threat model and show if/how any improvement can be effective. Moving at the speed of DevOps means running more tests, more often, and correlating the findings to show value through velocity by catching and fixing them quickly.

Profit and Loss – Do we have a software quality problem that may require us to consider an alternative resource pool? If digitization is increasing in cost due to loss, maybe we need to improve our control capabilities to detect/prevent bad software from reaching production? Maybe we should take additional steps to ensure we have the right development teams to avoid mistakes?

Jeff says – to stop the bleeding, you might consider a different source of secure code, or an adjustment to your security training programs. Maybe your security analysts are having their own quality issues? Consider raising the approval threshold for the tools you use, and broker communication so your dev teams take on more of the security responsibility.

For any leadership dealing with cybersecurity these days, these are all very good questions. Security is Hard. Application security, cloud security, data security – they are ALL hard individually, so how can any one person or team understand them entirely?

I began asking myself that question almost a decade ago, during my mobile penetration-testing period, when Facebook had just created React and projects began mixing more than one software language. I found a cross-site scripting flaw in a mobile client during testing which I felt pretty confident was NOT a false positive, so I decided to check the static code findings to see if it could be correlated. (We can save the rest of that story for another blog post.)

A light went on in my head: 'correlation between two or more security tools in a single pane of glass'. What an idea – you need something that can pull in all of the datasets (finding reports) and deduplicate them (so dev teams don't get the same finding from multiple tools), keeping only the findings whose viability we are confident in. I investigated some of the tool vendors and worked with them for a few years while the capability matured in the industry.

Today, Gartner calls this space Application Security Orchestration and Correlation (ASOC): 'security orchestration' (where you apply policy as code) combined with correlating and deduplicating the results. Done well, it also provides a single pane of glass for the operations team or any other orchestration or reporting software in use in your org. Think of it as the one endpoint with all the answers; a way to abstract away the API schemas and the various life-cycle changes associated with new and existing tool-sets.

Whether you wish to interconnect all of your existing orchestration tooling for your pipelines & other infrastructure or perhaps you want to build out your security governance capabilities by conducting all of your own security testing, ASOC tools are capable of providing security at the speed of DevOps.

There really is no other way to accomplish it at scale!

Categories: security, Work related

Infosec not your job but your responsibility? How to be smarter than the average bear

July 25, 2022

Want to measure how beneficial it is for your software development teams to learn to think more like an adversary? Just look at the first 20 years of use against the last 10-20?

https://www.theregister.com/2022/07/25/infosec_not_your_job/

Web servers are still vulnerable…

April 28, 2018

In a survey published by Stack Overflow, the often-referenced support site for developers, JavaScript was recently confirmed as the most popular programming language for the sixth year in a row. Almost 70% of respondents say they visit the site looking for help on the subject, so it may come as no surprise that JavaScript is also a primary cause of vulnerabilities on websites today.

In a blog post from the vendor that brings us one of the most popular tools for hacking websites and finding vulnerabilities, PortSwigger details a number of methods that can be used to abuse JavaScript and bypass the cross-site scripting mitigations in most frameworks.

There are thousands of ways to bypass XSS defenses in websites, and web developers should already know this. XSS is the number one method for compromising a browser which, in combination with privilege escalation, can allow an attacker to take over your computer. Even script kiddies can capture session tokens or cookies from websites that lack proper security controls and use them to log in as you without ever knowing your password. Here is a list of the risks in order of importance to an attacker:

  1. Account hijacking
  2. Credential stealing
  3. Sensitive Data Leakage
  4. Drive by Downloading
  5. Keyloggers/Scanners
  6. Vandalism

Don't ignore these risks on your websites, public facing or not. If a website you log in to often in your organization is vulnerable to cross-site scripting, teach your users how to identify security risks that could be used to harvest credentials and expose them to malicious attacks. You will also want to make sure your sites are tested to ensure they are not vulnerable to this type of attack. With phishing being the number one way pentesters gain access to an organization, XSS is the primary technique used in those campaigns.
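One concrete, low-effort check you can run for any site you depend on is whether its session cookie carries the HttpOnly flag, which keeps injected scripts from reading it. A minimal sketch, run here against a captured response (the header contents are illustrative):

```shell
# Audit captured response headers for cookie hardening flags.
cat > response-headers.txt <<'EOF'
Set-Cookie: session=abc123; Path=/; HttpOnly; Secure; SameSite=Strict
EOF

if grep -i '^Set-Cookie' response-headers.txt | grep -qi 'HttpOnly'; then
  echo "session cookie is HttpOnly"
else
  echo "WARNING: cookie is readable by injected JavaScript"
fi
```

A cookie without HttpOnly is exactly what the session-stealing attacks above harvest, so this one flag closes off the easiest path.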

 

Categories: security, Work related

Playing with ASLR for Linux

November 5, 2016

I recently completed a certification with SANS where we were taught to create our own exploits. A well-behaved program has a start procedure called a prologue, cleans up and maintains itself and any objects it creates, and finally ends. During our testing we assumed that Address Space Layout Randomization (ASLR) was turned off so our exploits would be successful, but in the real world this is not always the case.

All programs have two different kinds of sections (instructions and data), and they need to be flagged differently in memory. A section where only ordinary data is stored should be marked non-executable, and executable code that does not change dynamically should be flagged read-only.

One of the tricks hackers use to get control is hijacking the stack pointer (a road map for where to go next). Evil programs perform a redirection in order to run their own malicious code. To protect a running program against this, we can use ASLR. Address space layout randomization relies on the low chance of an attacker guessing the locations of randomly placed areas; security increases as the search space grows.


Auditing

One of the easiest ways to find out if a Linux server has ASLR turned on is to look at the proc filesystem for the variable 'randomize_va_space'.

cat /proc/sys/kernel/randomize_va_space

If the value returned is zero (0), ASLR is off; one (1) means it is on for the stack. Ideally you want the value two (2), which also randomizes the data segments, but that requires all programs to be built as Position Independent Executables (PIE), along with all the shared object (.so) libraries they load.
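Those value meanings can be wrapped into a small self-describing audit script (a sketch; the interpretation of 0/1/2 follows the kernel's documented behavior, and the fallback covers systems where /proc is unavailable):

```shell
# Report the current ASLR mode in plain language.
VAL=$(cat /proc/sys/kernel/randomize_va_space 2>/dev/null || echo "?")
case "$VAL" in
  0) echo "ASLR disabled" ;;
  1) echo "ASLR partial: stack/mmap randomized" ;;
  2) echo "ASLR full: stack, mmap and data segments randomized" ;;
  *) echo "ASLR state unknown" ;;
esac
```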

Another way to see if ASLR has been activated is to run ldd against your executable and inspect the load address of the shared object libraries.

$ ldd -d -r demo
linux-vdso.so.1 => (0x00007fffe2fff000)
libc.so.6 => /lib64/libc.so.6 (0x00007f3e73cd8000)
/lib64/ld-linux-x86-64.so.2 (0x00007f3e74057000)
$ ldd -d -r demo
linux-vdso.so.1 => (0x00007fff053ff000)
libc.so.6 => /lib64/libc.so.6 (0x00007f82071ac000)
/lib64/ld-linux-x86-64.so.2 (0x00007f820752b000)

Notice the change in the memory addresses referenced by the C library and the dynamic linker (libc and ld-linux-x86-64)? This indicates that ASLR is in effect.
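You can observe the same effect without ldd by sampling the stack mapping of two freshly launched processes (a quick sketch; with ASLR enabled the base addresses should almost always differ between runs):

```shell
# Each invocation of sh is a new process, so each gets its own randomized
# stack base when ASLR is active; compare the two observations.
A=$(sh -c 'grep -m1 "\[stack\]" /proc/self/maps' | cut -d- -f1)
B=$(sh -c 'grep -m1 "\[stack\]" /proc/self/maps' | cut -d- -f1)
if [ -n "$A" ] && [ "$A" != "$B" ]; then
  echo "stack bases differ: ASLR is active"
else
  echo "stack bases identical: ASLR may be disabled"
fi
```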

To ensure that your Linux systems are more difficult to bypass, audit them and look for the value 2 (assuming your programs fully support PIE).

root@mybox:~$ sysctl -a --pattern "randomize"

kernel.randomize_va_space = 2


Bypass

I refer to 'bypassing ASLR' on a Linux system as finding a way to defeat it as a security mechanism. If all shared libraries were compiled as PIE along with all executables (services, if attacking remotely, and all executables if granted console access), then ASLR would be very difficult to beat, but that is not the case. The additional check required to be sure the attack footprint has been reduced to zero is to audit each publicly available service and make sure that every library and executable involved is 100% position independent.

There is a great open-source tool out there for checking whether your executables support PIE:

checksec.sh, maintained by slimm609 (https://github.com/slimm609/checksec.sh)
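Under the hood, the core of that check is the ELF header's e_type field. Here is a minimal sketch of the same idea using only od; the /bin/sh path is just an example target, and note that shared objects also report ET_DYN:

```shell
# Byte 16 of an ELF file holds the low byte of e_type:
# 2 = ET_EXEC (fixed load address), 3 = ET_DYN (position independent).
ETYPE=$(od -An -tu1 -j16 -N1 /bin/sh | tr -d ' ')
if [ "$ETYPE" = "3" ]; then
  echo "PIE (ET_DYN)"
elif [ "$ETYPE" = "2" ]; then
  echo "not PIE (ET_EXEC)"
else
  echo "unexpected e_type: $ETYPE"
fi
```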

Categories: security, Work related

Pwn2Own – it’s not just for browsers…

October 30, 2016

In a spectacular effort rewarded with over $200,000, a Chinese security team that goes by the name of 'Tencent Keen' managed to breach most of the mobile challenges in Trend Micro's Masters of Pwn contest in Tokyo this week (Thursday, Oct 27).

Misspelled and often mispronounced, the term 'pwn' refers to the verb 'own' and is generally regarded as the term that best describes domination of a rival. It comes from online video game culture and has been adopted by hackers as a way to describe taking control of a computer.

Both the Tencent Keen and MWR Labs teams (creators of the Drozer and Needle frameworks) were unable to successfully demonstrate remotely installing a rogue application persistently, but don't think all you smartphone users are safe just yet: both teams showed some success earlier and were simply unable to execute within the 20-minute testing periods.

Trend Micro still paid out almost $400,000 to help understand how these bugs work, so let's hope these vulnerabilities will be mitigated in their newest mobile security software. Learn more about the event here

Categories: Mobile, Work related

0-day in every Linux system introduced by Linus himself.

October 28, 2016

Last week, Linus Torvalds, the father of the Linux operating system, acknowledged a self-proclaimed mistake made more than a decade ago: a race condition that every version of Linux carries today. Referred to as a zero-day (0-day), this vulnerability is a bug in the kernel that allows write access to a read-only memory location. More info here

Introduced to fix another bug in the get_user_pages() system call, this 'fix by Torvalds' leaves any server currently running an open service port vulnerable to attack. That represents a staggering number of servers, routers, cameras, IP phones, Android smartphones, digital video recorders… the list of Linux uses today is endless, so swift patching is key. You may be surprised to learn that traffic control systems, high-speed trains, nuclear submarines, robotic systems, fridges and stoves, PlayStations and even the Hadron Collider run on Linux and would also be vulnerable.

The bug was witnessed by a keen observer inspecting his web server logs, so there are known exploits for this 0-day publicly available. The implications of this vulnerability are staggering, and the press has not given it much attention. This affects EVERY Linux-based system out there going back over a DECADE.
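As a first-pass triage (a sketch only; distributions backport fixes without changing the upstream version string, so a version check alone is not conclusive), you can compare the running kernel against 4.8.3, one of the first mainline releases carrying the fix for CVE-2016-5195:

```shell
# Compare the running kernel version against the first fixed mainline
# release; sort -V orders version strings numerically.
FIXED=4.8.3
KVER=$(uname -r | cut -d- -f1)
LOWEST=$(printf '%s\n%s\n' "$KVER" "$FIXED" | sort -V | head -n1)
if [ "$LOWEST" = "$KVER" ] && [ "$KVER" != "$FIXED" ]; then
  echo "kernel $KVER predates $FIXED: check distro patches for CVE-2016-5195"
else
  echo "kernel $KVER is at or above $FIXED"
fi
```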

The real shame is that there are already millions of embedded devices out there that will never receive patches and will remain vulnerable to this attack!

 

Categories: Mobile, Work related

Exploits are Everywhere

October 15, 2016

I recently went through and completed what I consider to be the hardest and most informative technical course and examination out there, the GIAC Exploit Researcher and Advanced Penetration Tester (GXPN). What I learned is that there is a lot of opportunity for the bad guys to get control.

As a White hat hacker, I am asked to engage in a variety of activities, most of which are network related. For some of the hackers out there, your goal is to utilize a wide variety of tools to identify weaknesses in the defenses and/or the applications that are running and to overcome the controls in place to protect the data.

For some security researchers out there, exploit writing is the next logical transition. As an attacker, if you are fixated on a target and have exhausted all of your tools and tricks, you are left with little else but to find some type of vulnerability and write an exploit for it. As we purchase and add more and more items to our digital world, the odds stack further in favour of the bad guy.

Many people have surmised that we are finding so many bugs now because programmers are making so many mistakes but I disagree. I feel that we are finding so many bugs because there ARE so many bugs. Some of us just got better at finding them.

Let's take the recent SSL vulnerability exposed in many Internet of Things (IoT) devices (https://www.wired.com/2016/10/akamai-finds-longtime-security-flaw-2-million-devices/). Akamai researchers would have you believe this is a recent find, but there are references to the dangers of SSH port forwarding from over a decade ago (http://www.informit.com/articles/article.aspx?p=602977).

Earlier in 2016 we had reports that the GNU C Library (glibc) has a critical vulnerability (https://security.googleblog.com/2016/02/cve-2015-7547-glibc-getaddrinfo-stack.html). Admittedly it is very hard to exploit, but as more and more people learn how to look for these types of bugs, we are going to find out about them.

My recent certification has taught me that bugs are everywhere: in the mobile devices we carry, in our cars, in our thermostats. We just have to get better at looking for them.

A word to the wise: learn about all the electronics you own, keep them up to date if they are recent purchases, and be prepared to give them up if they are not. As a pentester, I am looking for older vulnerable devices connected to your Wi-Fi or cabled networks at home or in the office as a beachhead that lets me get a foothold. There has never been a better time to discard those older routers and VoIP phones.

 

Categories: Work related Tags:

Computer Breach and what you can do about it

May 18, 2015

Security Breach can happen to you

Experts agree that 2015 will be a tipping point for most small to medium-sized businesses when it comes to computer security. The average organizational cost of a data breach is now over 6 million dollars. For most of my clients the loss won't be anywhere near those numbers, but to understand the cost to you or your organization, that works out to over $200 per record. Maybe it's a list of your clients, your employee wages, or usernames and passwords for your organization. Do the math – these can add up to large-scale loss for everyone.

Among the top 5 threats for computer networks today are;

  1. IoT – The Internet of Things brings convenience, but those IP-enabled devices are not without risk. As you purchase Wi-Fi enabled security systems, TVs, media devices, network-attached storage, etc., we are seeing an increase in vulnerabilities that expose your network and increase your attack surface. They need to be monitored and maintained because they are not as secure as a computer or a server.
  2. DDoS – The ability to overwhelm your network with traffic is quite common, and it can easily be done by most consumers with a home network connection. If you require the Internet to do business, evaluate whether you can operate without it. If not, consider protecting yourself against the real possibility that it could happen to you.
  3. Social Media Attacks – If your business uses any cloud-based or social media application, review your authentication and user management policies to avoid a potential breach of your accounts. Hackers are now targeting online applications in order to infect your users and gain access to your networks through cross-site scripting vulnerabilities. All it takes to be infected is for an email link to be clicked, and you can no longer rely on your antivirus to stop every Trojan from getting through.
  4. Mobile Malware – The volume of mobile devices entering your workplace and using your Internet connection adds a very real possibility that malware on a mobile device can reach your corporate network. If you already allow users onto your network with any computerized device, you are probably at risk. Consider controlling the access or monitoring all of the devices with a Mobile Device Management platform, or you risk a breach continuing without your knowledge.
  5. Third Party Attacks – Many companies allow third-party applications to connect with their own network assets, but how safe are they? Large-scale breaches have been shown to be caused by third-party vulnerabilities, and these occupy a 'grey area' when it comes to management (who is responsible for keeping all applications up to date on those systems?). Many user agreements do not cover damages caused by a lack of security practices, and once the vulnerabilities have been exploited, hackers use those systems to pivot onto your networks and wreak havoc.

There are several methods you can implement that can help mitigate the risks.

  1. Implement Monitoring – It is no longer safe practice to just implement a firewall you need to monitor all traffic coming into and out of your network. Hundreds of breaches in any network design have been traced to a failure to see IOC (Indicators of compromise). Not only do you need to record reams of data but you need to review them in order to determine what is normal behavior and what indicates a potential breach. There are devices available that can help you do that and although they can be complicated to implement, once properly deployed they can help you become aware of details that help you find attacks before they become too big.
  2. End User security awareness – If you don’t already have a program in place you should consider a large scale awareness campaign surrounding security at your organization. It can be as simple as a regular talk over lunch or it  can involve testing to be sure that your employees have taken the necessary steps and understand your policies. You need to train your users about the do’s and don’ts of all aspects of your security. Physical security, passwords, email questions, sharing account credentials, staffing questions, etc. You need to protect all aspects of information leakage whereas hackers only need one of them.
  3. Inventory all equipment – If you do not have an active list of your equipment, anything that is or was connected to your network, then take the time to make one and keep it up to date. Many organizations are leaking information that can be critical to your operations. Network devices that no longer are connected should be properly disposed of and /or their configurations need to be wiped. Improperly configured devices and anything with wireless access remain the largest risk to any organization – all of these devices need to be audited on an regular basis to manage the risk.
  4. Review your Protection – Make sure that you update ALL software (including operating systems and any third-party applications) actively used on networked computers. Update the firmware on devices that connect to your networks, and implement and maintain antivirus software on any computer used to open email or browse the Internet.
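The inventory step above lends itself to partial automation: once you have a known-device list, a short script can flag anything seen on the wire that isn’t on it. The sketch below is a minimal illustration with hard-coded sample addresses; in practice the ‘seen’ list would come from a scanner such as nmap or your switch’s ARP tables, and the inventory from whatever asset database you maintain.

```python
# Compare hosts observed on the network against a known inventory and
# report any device that has no entry. The addresses here are sample data.

def unknown_hosts(seen, inventory):
    """Return hosts observed on the network but absent from the inventory."""
    return sorted(set(seen) - set(inventory))

# Hypothetical asset list and scan results for illustration.
inventory = {"192.168.1.1", "192.168.1.10", "192.168.1.20"}
seen = ["192.168.1.1", "192.168.1.10", "192.168.1.77"]  # .77 is unexpected

for host in unknown_hosts(seen, inventory):
    print(f"not in inventory: {host}")
```

Run against a nightly scan, anything this prints is either a device to add to the inventory or a device that shouldn’t be there, and both cases are worth knowing about.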

There are many different ways you can help protect yourself from attack, but I wanted to point out the clearest ones. If you are aware of the different methods that can be used to gain access to your company or its information, you can manage them. Failing to see them coming is a surefire way to let the same attack succeed over and over again.


Imagine a single tool that hackers could use to break into your network…

March 12, 2015

…and you are probably thinking about Metasploit.

As a security specialist, I am saddened by how easy it has become to break into what was once considered a pretty safe way to conduct your business online. Years and years ago we all touted the necessity of a firewall with its ‘allow nothing in – allow everything out’ stance. Most sysadmins believed that a crunchy outer shell was enough to protect you from the bad guys outside your organization knocking on your proverbial door. We, as sysadmins, then debated the merits of network segmentation and egress filtering, and a lot of us agreed they would be too much work to implement and administer compared to the risk of simply leaving the network topology flat and open. Then along came WiFi: for most users it made connectivity easier, but as sysadmins we knew it would require some additional brain power to make it work securely. First WEP got cracked; then WPA-Personal and -Enterprise were introduced and, at the time, represented a pretty safe and seemingly uncrackable way to secure a wireless network. WPS made setup easy, but we found shortly after that WPS has flaws of its own.

Today any user with a computer and an extremely fast graphics card can crack a short password in a matter of hours. So we tell users to choose longer and better passwords; then would-be hackers build faster machines to crack longer passwords in a shorter period of time. To me it begins to look less like a question of if the bad guys get in and more like a question of when.
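The arithmetic behind that arms race is simple: the keyspace is the character-set size raised to the password length, and brute-force time is keyspace divided by guessing rate. The sketch below assumes an illustrative rate of ten billion guesses per second, in the rough ballpark of a GPU rig attacking a fast hash; the exact figure varies enormously with hardware and hash algorithm.

```python
# Rough brute-force time estimate: keyspace = charset_size ** length,
# worst-case time = keyspace / guesses_per_second.

def crack_time_seconds(charset_size, length, guesses_per_second):
    """Worst-case seconds to exhaust every password of the given length."""
    return (charset_size ** length) / guesses_per_second

RATE = 10e9  # assumed 10 billion guesses/sec; purely illustrative

for length in (8, 12, 16):
    secs = crack_time_seconds(62, length, RATE)  # 62 = a-z, A-Z, 0-9
    print(f"{length} chars: {secs / 86400:,.1f} days")
```

Each extra character multiplies the attacker’s work by the size of the character set, which is why length beats cleverness: an 8-character mixed-case alphanumeric password falls in under a day at this rate, while a 16-character one does not.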

It’s time to ask yourself how well your assets are protected. Does your network topology resemble a cookie (hard on the outside and soft on the inside), or have you taken steps to limit the damage that can be done once your walls fall? It’s hard to believe that you could come in one Monday morning and find your network having a really bad day, all the result of a little tool like Metasploit in the hands of a few skilled people. There are literally thousands of known vulnerabilities, at least one for nearly every hardware device that makes up your network, and they are all packaged and ready to be unleashed on your devices by this tool once the attackers get in. Network switches, IP phones and phone systems, routers and firewalls, printers, and so on. Let’s not forget the laptops, workstations, servers, tablets, iPads and, oh yes, the smartphones that we all know and love.

You home users are just as vulnerable, with your thermostats, IP cameras, WiFi adapters and home alarm systems, all web-enabled. Every day we hear about some vendor that has IP-enabled yet another appliance in your home, and do you think they worry about the safety of the device once you own it? As a consumer I am pleased when my new fridge can show me a picture on my cell phone of what is inside while I am standing in the local supermarket, but as a security researcher I am horrified by all the possibilities that poor security opens up. On the flip side, as a white hat (someone who hacks stuff to make it better), I am thrilled that there will soon be more things to test to ensure the vendor has created a safe, secure product for my fellow users to enjoy. The question these devices raise in my mind is: who is quality-controlling them, the vendor or you?
