Vulnerability Management in the Cloud

Image source: ISACA.org

While there are different stories about what cloud computing “is”, there is one specific direction virtualization is headed that could bring some additional problems for the security industry. One issue I want to focus on is vulnerability management and how it is implemented in a cloud environment. Many customers are faced with the need to scan their cloud but are unable to do so.

Virtualization providers have been pushing their customers and hosting providers to adopt new infrastructure that automates the distribution of CPU processing time for applications across multiple condensed hardware devices. This concept was originally conceived as “grid computing,” which was created to address the limits of processing power in single-CPU systems. The new wave of virtualization technology is meant to automatically distribute processing time to maximize hardware utilization for reduced capital expenditures (CapEx) and ongoing support costs. VMware’s Cloud Director is a good example of the direction virtualization is going and how the definition of “cloud computing” is changing. Virtualized systems are quickly being condensed into combined multi-CPU appliances that integrate the network, application and storage systems for more harmonious and efficient IT operations.

The vulnerability management problem:

While cloud management is becoming much more robust, one issue that is apparent for cloud providers is managing the vulnerabilities inside a particular customer’s cloud. In a distributed environment, if the allocation of systems changes, by adding or removing virtual systems/instances from your cloud, you may quickly find that you are no longer scanning the correct system for its vulnerabilities. This is especially important in environments that are “shared” across different customers. Since most vulnerability management products use static CIDR blocks or a CMDB to define the scan profile, you could easily end up scanning an adjacent customer’s system and hitting their environment with scans, due to either a lag between CMDB updates or a static definition of the scan’s network address space.
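To make the failure mode concrete, here is a minimal Python sketch of that scope drift; the CIDR block, the inventory list and all names are hypothetical stand-ins for a real scan profile and CMDB export:

```python
import ipaddress

# Hypothetical static scan profile, defined once when the customer onboarded.
SCAN_SCOPE = ipaddress.ip_network("10.20.0.0/24")

# Hypothetical "live" inventory for this customer after the cloud has
# rebalanced workloads; one instance has moved out of the original block.
live_inventory = ["10.20.0.14", "10.20.0.57", "10.30.1.9"]

for addr in live_inventory:
    if ipaddress.ip_address(addr) not in SCAN_SCOPE:
        # This host is missed by the scan, and whatever now occupies its old
        # address inside 10.20.0.0/24 gets scanned instead -- potentially an
        # adjacent customer's system.
        print(f"{addr} is outside the static scan scope")
```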

The vulnerability management cloud solution:

My belief is that this vulnerability management problem will be addressed by the integration and sharing of asset information between the cloud and vulnerability scanning services. Cloud providers will most likely need to provide application programming interfaces (APIs) that allow the scan engines/management consoles to read in current asset or deployment information from the cloud and then dynamically update the IP address information before scans commence.
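As a rough sketch of what that integration could look like, the snippet below reads the current instance list for a single customer from a cloud API and hands it to the scanner just before a scan kicks off. The `tag:Customer` filter, the `scanner_api.set_targets` call and the function names are assumptions for illustration; AWS’s boto3 `describe_instances` is used only as a stand-in for whatever API a given cloud provider exposes:

```python
import boto3

def current_customer_ips(customer_tag):
    """Pull the live private IPs for one customer's instances from the cloud API."""
    ec2 = boto3.client("ec2")
    reservations = ec2.describe_instances(
        Filters=[{"Name": "tag:Customer", "Values": [customer_tag]}]
    )["Reservations"]
    return [
        inst["PrivateIpAddress"]
        for res in reservations
        for inst in res["Instances"]
        if "PrivateIpAddress" in inst
    ]

def refresh_scan_targets(scanner_api, customer_tag):
    """Replace the scanner's static scope with what the cloud reports right now."""
    ips = current_customer_ips(customer_tag)
    # Hypothetical scanner call: swap the stale CIDR definition for the
    # addresses this customer actually owns at scan time.
    scanner_api.set_targets(scan_group=customer_tag, targets=ips)
```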

Furthermore, I feel that applications such as web, FTP and database services will be increasingly distributed across these same virtualized environments and will automatically integrate with load-distribution systems (load balancers) to ensure delivery of the application no matter where it moves inside the cloud. The first signs of this trend are already apparent in the VN-Link functionality released as part of the Unified Computing System from Cisco, although adoption has been slow due to legacy deployments and constrained capital spending during the global recession. This may even lead to multiple customers’ applications running on the same virtual host under different TCP/UDP port numbers.
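If that happens, scan scope can no longer be defined per IP address alone; it has to be tracked per customer as (host, port) pairs. A minimal sketch of that idea follows, with every address, port and customer name hypothetical:

```python
# Hypothetical per-customer scan scope on a shared virtual host: the same IP
# appears for two customers, distinguished only by the port lists they own.
scan_scope = {
    "customer-a": {"10.20.0.14": [443, 8443]},
    "customer-b": {"10.20.0.14": [2121], "10.20.0.15": [5432]},
}

def targets_for(customer):
    """Expand one customer's scope into (ip, port) scan targets."""
    return [
        (ip, port)
        for ip, ports in scan_scope[customer].items()
        for port in ports
    ]

print(targets_for("customer-b"))  # [('10.20.0.14', 2121), ('10.20.0.15', 5432)]
```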

This information would also need to roll down to the reporting and ticketing functionality of the vulnerability management suite, so that reports and tickets are dynamically generated from the most up-to-date information and no adjacent customer’s data leaks into your reports or into the ticketing system used to manage remediation efforts.
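One way to picture that last step is to filter raw scan findings against the current ownership data before any ticket is cut. The finding fields and the ticket structure below are hypothetical:

```python
def tickets_for_customer(findings, owned_ips, customer):
    """Cut remediation tickets only for hosts the customer owns right now.

    `findings` is a hypothetical list of dicts from the scanner; `owned_ips`
    is the set of addresses the cloud API reports for this customer.
    """
    for f in findings:
        if f["ip"] not in owned_ips:
            continue  # adjacent customer's host; never reaches report or ticket
        yield {
            "customer": customer,
            "host": f["ip"],
            "vuln_id": f["vuln_id"],
            "severity": f["severity"],
        }
```

Please let me know your thoughts.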





Lawrence Presenting at SecureWorldExpo: Obtaining “Context” During a Forensic Investigation

Image Source: St. Stedwards University

Delve into the world of the pre-forensics work that must be done as an investigation kicks off. Learn to profile someone as best you can to identify the context for your forensic examination, discover how to properly interview someone, read body language and even interpret eye movements.

To go to the conference, click here.

For those who either attended or could not attend, we’ve uploaded a video here.



What is a “Separation of Duties”?

Often, in my job as a security professional, I find that I must explain what a separation of duties is and why it is necessary in an organization. The term “separation of duties” seems a little nebulous to many, but it is simply the act of separating a business process into several distinct parts. The parts that are normally separated are the execution, approval and audit functions of the process.

Examples of Separation of Duties:

1. Security incident investigation – Typically, an investigation is initiated by a security operations function (such as a CSIRT), and after the investigation is completed, the incident response process is assessed by IT Audit or an Internal Audit department to ensure it was executed according to the documented procedure.

2. Payroll – Typically, the payroll department is responsible for the distribution of checks, the manager of the employee being paid approves the hours worked, and an Internal Audit function may then check that all processes were followed and the appropriate payment amounts were issued.

3. Vulnerability management – Security operations or engineering functions typically use a scanning application such as Rapid7, McAfee Vulnerability Manager or a QualysGuard scan engine to scan devices on the network for security vulnerabilities; once a scan is completed, tickets may be created to address the findings. An IT Audit department may then request a “sample” of the tickets to ensure that proper remediation is being performed and that tickets are not being closed without the vulnerabilities actually being remediated.
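As a toy illustration of that audit step (every ticket, field and value below is hypothetical), an auditor might pull a random sample of closed tickets and flag any that were closed without recorded remediation evidence:

```python
import random

# Hypothetical closed remediation tickets exported from the ticketing system.
closed_tickets = [
    {"id": 101, "vuln": "CVE-2010-0001", "evidence": "patched, rescan clean"},
    {"id": 102, "vuln": "CVE-2010-0002", "evidence": ""},
    {"id": 103, "vuln": "CVE-2010-0003", "evidence": "config change, rescan clean"},
]

sample = random.sample(closed_tickets, k=2)  # the auditor's random "sample"
for ticket in sample:
    if not ticket["evidence"]:
        print(f"Ticket {ticket['id']} was closed with no remediation on record")
```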

The Goal:

The primary goal of implementing a proper separation of duties should always be the prevention of fraud. By separating a business process into these segments, we can ensure that the process is efficiently executed and then checked by an independent party that can verify the execution was appropriate. A separation of duties prevents a single business process from being completely managed by a single individual, and thus requires collusion to occur before fraud can take place. In most cases it is more difficult (though not impossible) to enlist the help of others to commit fraud. The primary goal is to reduce the risk of fraud, not to completely prevent it.

