I was cruising Exploit-DB.com today to see the latest exploits in the wild and noticed right away that a new Metasploit exploit was released on October 1st for Trend Micro’s Internet Security Pro 2010. It always chills me when I see exploits for security vendors. I guess I see them as being special or something. Maybe I shouldn’t put them on such a pedestal, since all programmers can make mistakes. Still, the question is… should we expect security vendors to have better security than their customers or other software companies? I wonder if NSS Labs will come up with a framework for assessing or certifying security vendors’ development processes. Hmm… that’d be nice to see.
While there are different stories about what cloud computing “is”, there is one specific direction virtualization is headed that could bring with it some additional problems for the security industry. The issue I want to focus on centers on vulnerability management and how it is implemented in a cloud environment. Many customers are faced with the need to scan their cloud, but are unable to do so.
Virtualization providers have been pushing their customers and hosting providers to adopt new infrastructure that automates the distribution of CPU processing time for their applications across multiple condensed hardware devices. This concept was originally conceived as “grid computing”, which was created to address the limits of processing power in single-CPU systems. This new wave of virtualization technology is meant to automatically distribute processing time to maximize the utilization of hardware for reduced CapEx (capital expenditures) and ongoing support costs. VMware’s Cloud Director is a good example of the direction virtualization is going and of how the definition of “cloud computing” is changing. Virtualized systems are quickly being condensed into combined multi-CPU appliances that integrate the network, application, and storage systems together for more harmonious and efficient IT operations.
The vulnerability management problem:
While cloud management is definitely becoming much more robust, one issue that is apparent for cloud providers is the management of the vulnerabilities inside a particular customer’s cloud. In a distributed environment, if the allocation of systems changes by either adding or removing virtual systems/instances from your cloud, you quickly face the fact that you may not be scanning the correct system for its vulnerabilities. This is especially important in environments that are “shared” across different customers. Since most vulnerability management products use CIDR blocks or CMDB databases to define the scanning profile, you could easily end up scanning an adjacent customer’s system and hitting their environment with scans, due either to a lag between CMDB updates or to static definitions of scan network address space.
The vulnerability management cloud solution:
My belief is that this vulnerability management problem will be addressed by the integration and sharing of asset information between the cloud and vulnerability scanning services. Cloud providers will more than likely need to provide application programming interfaces that allow the scan engines/management consoles to read in current asset or deployment information from the cloud and then dynamically update the IP address information before scans commence.
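To make the idea concrete, here is a minimal sketch of that read-in step. Everything here is an assumption for illustration: `FakeCloudAPI`, its `list_instances` method, and the field names stand in for whatever asset API a real cloud provider would expose; no actual product API is implied.

```python
# Hypothetical sketch: rebuild a scanner's target list from the cloud
# provider's live asset inventory instead of a static CIDR block.
# FakeCloudAPI and its fields are illustrative assumptions, not a real API.

class FakeCloudAPI:
    """Stand-in for a cloud provider's asset inventory API."""
    def __init__(self, instances):
        self.instances = instances

    def list_instances(self, customer):
        # Return only the VMs allocated to this customer, so an adjacent
        # customer's systems never enter the scan scope.
        return [vm for vm in self.instances if vm["customer"] == customer]

def refresh_scan_scope(cloud_api, customer, previous_scope):
    """Return (current targets, newly added IPs, IPs that left the cloud)."""
    current = {vm["ip"] for vm in cloud_api.list_instances(customer)}
    added = current - previous_scope     # new instances to begin scanning
    removed = previous_scope - current   # instances torn down or migrated away
    return sorted(current), added, removed
```

Running `refresh_scan_scope` immediately before each scan window is what closes the gap between CMDB updates and the actual allocation of instances: the scanner always receives the IP list as the cloud sees it right now.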
Furthermore, I feel that applications such as web, FTP, and databases will be increasingly distributed across these same virtualized environments and will automatically integrate with load distribution systems (load balancers) to ensure delivery of the application no matter where the applications move inside the cloud. The first signs of this trend are already apparent in the VN-Link functionality released as part of Cisco’s Unified Computing System; however, adoption has been slow due to legacy infrastructure and constrained capital spending during the global recession. This may even lead to multiple customer applications running on the same virtual host on different TCP/UDP port numbers.
This information would also need to roll down to the reporting and ticketing functionality of the vulnerability management suite so that reports and tickets are dynamically generated using the most up-to-date information and no adjacent customer data leaks into the report or your ticketing system for managing remediation efforts. Please let me know your thoughts….
Many corporations are now mandated by PCI to perform at least quarterly scans against their PCI in-scope computing systems. The main goal of this activity is to ensure vulnerabilities in systems are identified and fixed on a regular basis. I think this is one of the more important provisions of PCI, and one that is essential to maintaining a secure environment.
What most corporations initially do is start with simple scanning tools such as Nessus, GFI LANguard, or ISS Internet Scanner and perform on-demand scans. While this is all well and good and provides an immediate snapshot of a particular point in time, there are several major flaws that must be addressed through richer tools.
First, it is great to get vulnerability and patch data; however, handing a systems engineer or administrator a single report with dozens if not hundreds of things to fix quickly becomes unreasonable for them to track and respond to. We often forget that this systems engineer is usually tasked with many other duties they must prioritize, including new installs, troubleshooting, patching, administration, and configuration, which demand most of their time. These activities are often far more time-sensitive in their eyes, as projects have people regularly pressing them for completion. It is also important to note that the business is pushing them for ever greater functionality and features.
Given this, a simple scan report is just not viable for them to prioritize and track against their existing workload. This has given rise to vulnerability management, a.k.a. the process of managing vulnerabilities through to remediation via ticketing and reporting to management.
Secondly, another important flaw with simple scanning is the lack of overall metrics for measuring risk. Measuring risk is hard to do in security, but if you have an automated scanning process scheduled on a regularly occurring basis (i.e. more often than once every three months), your vulnerability data over that time can show systems becoming more or less exposed as they are patched or as new vulnerabilities are found. This is one way you can effectively measure the effectiveness of your patch management and your security program.
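The metric described above can be as simple as comparing open-finding counts across scheduled scans. The sketch below is illustrative only; the counts are made-up data, not output from any real scanner.

```python
# Minimal sketch of the trend metric described above: given the total open
# vulnerabilities reported by each scheduled scan (oldest first), report
# whether overall exposure is rising or falling. Data is illustrative.

def exposure_trend(scan_counts):
    """Summarize the direction of exposure across scheduled scans."""
    if len(scan_counts) < 2:
        return "insufficient data"
    delta = scan_counts[-1] - scan_counts[0]
    if delta < 0:
        return f"improving ({-delta} fewer open findings since first scan)"
    if delta > 0:
        return f"worsening ({delta} more open findings since first scan)"
    return "flat"
```

For example, quarterly counts of `[120, 95, 70]` summarize as improving, which is exactly the kind of datapoint that demonstrates a patch management program is working.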
Thirdly, this ensures your company clearly sees that security is a process and not just a one-time effort. This distinction is important because you, as a security practitioner, will need data to prove you need a consistent, ongoing supply of money to maintain security. Security is continuous and ever-changing; stagnation is a guarantee of breach.
Moral of this story… manage security, don’t just triage it and forget it.
Great tools for managing vulnerabilities are:
- Rapid7
- McAfee Vulnerability Manager
- Qualys
With all of today’s focus on securing for PCI or SOX, we often find ourselves leaving our broader security risk management priorities behind. As we all know, there are many ways to breach the security of a corporation and many safeguards to select from.
Which brings me to the many internal web applications used inside companies that we sometimes forget about, and that can cause the rest of our security to fail. Good examples are intranets, bug tracking apps, internal document websites, employee benefit portals, and time tracking portals.
It only takes one of these sites using an unencrypted session (i.e. no SSL) to render an entire corporate PCI or SOX security paradigm useless. A single run of the Cain & Abel sniffer along with ARP spoofing can suck down the passwords your privileged users type and give an attacker access to your sensitive data.
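A first-pass check for this weakness can be done from an application inventory alone: flag anything whose login URL is not served over HTTPS. The sketch below is a hedged illustration; the app names and URLs are invented examples, not a real inventory.

```python
# Illustrative sketch: flag internal applications whose login pages are
# served over plain HTTP -- the exact weakness that makes credential
# sniffing via ARP spoofing trivial. The inventory below is made up.

from urllib.parse import urlparse

def unencrypted_apps(inventory):
    """Return the names of apps whose URLs do not use HTTPS."""
    return [name for name, url in inventory.items()
            if urlparse(url).scheme != "https"]

internal_apps = {
    "intranet":  "http://intranet.corp.local/login",   # plain HTTP: exposed
    "timesheet": "https://time.corp.local/login",      # encrypted: OK
    "bugs":      "http://bugs.corp.local/account",     # plain HTTP: exposed
}
```

Running `unencrypted_apps(internal_apps)` would surface `intranet` and `bugs` as the weak links; a fuller check would also verify that the HTTP port actually redirects to HTTPS rather than serving the login form.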
Although most corporations ask employees to use different or more complex passwords on disparate applications, the move to centralized LDAP- or AD-authenticated environments means passwords are no longer different across these systems.
The moral of this story: please don’t ignore your weakest link. Security is end to end.