http://www.computerworld.com/article/2975024/data-security/the-security-and-risk-management-of-shadow-it.html By Robert C. Covington Computerworld Aug 24, 2015 Most would agree that we in the information security industry are fighting an uphill battle. Many have even taken the extreme position that we cannot keep intruders out of our networks, so we should give up and focus on containment, an argument I strongly objected to in an earlier post, “Are we surrendering the cyberwar?” Regardless of your position on how best to control the threat, I think you will agree that it is a difficult problem to address. In the world of corporate IT, I have seen a definite shift toward better focus on network security, vulnerability management and governance. We are having success in locking networks and data down, even as more improvement is needed. Even as we succeed in deploying better security controls for the assets we know about, we are facing a growing threat from within — the challenge of shadow IT. According to Techopedia, the term “shadow IT” “is used to describe IT solutions and systems created and applied inside companies and organizations without their authorization.” The phenomenon usually begins with an enterprise department or team getting frustrated with the IT department’s perceived inability to deliver what they think they need, when they think they need it. As a result, they go off and do their own thing, usually without the knowledge of IT. The problem usually continues with IT unaware, until technical problems develop, or until integration with other corporate applications is needed. When IT is brought into the loop by users now needing help, it is not usually viewed as a pleasant surprise by the CIO or IT director. […]
http://www.fiercehealthit.com/story/office-inspector-general-audit-criticizes-hhs-access-controls/2014-07-29 By Susan D. Hall FierceHealthIT.com July 29, 2014 The U.S. Department of Health and Human Services must improve its security procedures for granting access to physical facilities as well as computer applications and files, according to an audit from the HHS Office of Inspector General that found security controls inadequate. The audit looked at how well the agency complied with Homeland Security Presidential Directive-12, which lays out access-management policy for government workers and contractors. It covered program and system-specific controls, encryption, change controls, Web vulnerability management and physical security. It found five areas it categorized as high risk and one […]
http://www.informationweek.com/security/vulnerabilities/unplug-universal-plug-and-play-security/240147226 By Mathew J. Schwartz InformationWeek January 29, 2013 More than 23 million Internet-connected devices are vulnerable to being exploited by a single UDP packet, while tens of millions more are at risk of being remotely exploited. That warning was issued Tuesday by vulnerability management and penetration testing firm Rapid7, which said its researchers spent six months studying how many universal plug and play (UPnP) devices are connected to the Internet — and what the resulting security implications might be. The full findings have been documented in a 29-page report, “Security Flaws In Universal Plug and Play.” “The results were shocking, to say the least,” according to a blog post from report author HD Moore, chief security officer of Rapid7 and the creator of the open source penetration testing toolkit Metasploit. “Over 80 million unique IPs were identified that responded to UPnP discovery requests from the Internet.” UPnP is a set of standardized protocols and procedures that are designed to make network-connected and wireless devices easy to use. Devices that use the protocol — which is aimed more at residential users than enterprises — include everything from routers and printers to network-attached storage devices and smart TVs. […]
While there are different stories about what cloud computing “is”, there is one specific direction in which virtualization is headed that could bring with it some additional problems for the security industry. The issue I want to focus on centers on vulnerability management and how it is implemented in a cloud environment. Many customers are faced with the need to scan their cloud but are unable to do so.
Virtualization providers have been pushing their customers and hosting providers to adopt new infrastructure that automates the distribution of CPU processing time for their applications across multiple condensed hardware devices. This concept was originally conceived as “grid computing,” created to address the limits of processing power in single-CPU systems. This new wave of virtualization technology is meant to automatically distribute processing time to maximize the utilization of hardware for reduced Cap Ex (capital expenditures) and ongoing support costs. VMware’s Cloud Director is a good example of the direction virtualization is going and how the definition of “cloud computing” is changing. Virtualized systems are quickly being condensed into combined multi-CPU appliances that integrate the network, application and storage systems together for more harmonious and efficient IT operations.
The vulnerability management problem:
While cloud management is definitely becoming much more robust, one issue that is apparent for cloud providers is the management of the vulnerabilities inside a particular customer’s cloud. In a distributed environment, if the allocation of systems changes by either adding or removing virtual systems/instances from your cloud, you quickly face the fact that you may not be scanning the correct system for its vulnerabilities. This is especially important in environments that are “shared” across different customers. Since most vulnerability management products use CIDR blocks or CMDB databases to define the scanning profile, you could easily end up hitting an adjacent customer’s environment with scans, due either to a lag between CMDB updates or to static definitions of scan network address space.
The vulnerability management cloud solution:
My belief is that this vulnerability management problem will be addressed by the integration and sharing of asset information between the cloud and vulnerability scanning services. Cloud providers will more than likely need to provide application programming interfaces that allow the scan engines/management consoles to read in current asset or deployment information from the cloud and then dynamically update the IP address information before scans commence.
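As a rough illustration of that integration (the inventory shape and field names here are my own assumptions for the sketch, not any real vendor’s API), the scanner’s target list could be rebuilt from the cloud’s live asset inventory just before each scan, instead of relying on a static CIDR block or a stale CMDB entry:

```python
# Hypothetical sketch: derive scan targets from a live cloud inventory
# so a scan never sweeps an adjacent customer's reallocated addresses.
# The asset-record fields ("ip", "customer", "state") are illustrative.

def select_scan_targets(assets, customer_id):
    """Return only the IPs currently owned by this customer and running."""
    return sorted(a["ip"] for a in assets
                  if a["customer"] == customer_id and a["state"] == "running")

# Inventory as a cloud inventory API might return it, moments before a scan:
inventory = [
    {"ip": "10.0.1.5", "customer": "acme",   "state": "running"},
    {"ip": "10.0.1.6", "customer": "acme",   "state": "terminated"},
    {"ip": "10.0.1.7", "customer": "globex", "state": "running"},  # neighbor
]

print(select_scan_targets(inventory, "acme"))  # only acme's live host
```

The point is simply that the filter runs against the provider’s inventory at scan time, so the address 10.0.1.7 (now owned by a neighboring customer) never makes it into this customer’s scan job.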
Furthermore, I feel that applications such as web, FTP and databases will be increasingly distributed across these same virtualized environments and automatically integrated with load distribution systems (load balancers) to ensure delivery of the application no matter where it moves inside the cloud. The first signs of this trend are already apparent in the VN-Link functionality released as part of Cisco’s Unified Computing System; however, adoption has been slow due to legacy infrastructure and capital already deployed during the global recession. This may even lead to multiple customer applications running on the same virtual host with different TCP/UDP port numbers.
This information would also need to roll down to the reporting and ticketing functionality of the vulnerability management suite, so that reports and tickets are dynamically generated using the most up-to-date information and no adjacent customer’s data leaks into the report or the ticketing system you use for managing remediation efforts. Please let me know your thoughts….
Often, I find in my job as a security professional that I must explain what separation of duties is and why it is necessary in an organization. The term “separation of duties” seems a little nebulous to many, but it is simply the act of separating a business process into several distinct parts. The parts of a business process that are normally separated are the execution, approval and audit functions of the process.
Examples of Separation of Duties:
1. Security incident investigation – Typically, an investigation is initiated by a security operations function (CSIRT). After the investigation is completed, the incident response process is assessed by an IT Audit or Internal Audit department to ensure it is being executed according to the documented procedure.
2. Payroll – Typically, the payroll department is responsible for distributing checks, the manager of the employee being paid approves the hours worked, and an Internal Audit function may then check that all processes are being followed and the correct payment amounts are being made.
3. Vulnerability Management – Security operations or engineering functions typically use a scanning application such as Rapid7 NeXpose, McAfee Vulnerability Manager or QualysGuard to scan devices on the network for security vulnerabilities. Once a scan is completed, tickets may be created to address these vulnerabilities. An IT Audit department may then request a “sample” of the tickets to ensure that proper remediation is being performed and that tickets are not being closed without an actual remediation of the vulnerabilities.
The primary goal of implementing proper separation of duties should always be the prevention of fraud. By separating business processes into these segments, we can ensure that a business process is efficiently executed and checked by an independent party that can assure the execution is appropriate. Implementing separation of duties prevents a single business process from being completely managed by a single individual, and thus requires collusion before fraud can take place. In most cases, it is more difficult (though not impossible) to enlist the help of others to perform fraud. The primary goal is to reduce the risk of fraud, not to completely prevent it.
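To make the execution/approval/audit split concrete, here is a minimal sketch (the Ticket class and the role names are purely illustrative, not taken from any real product) of how a workflow system might refuse to let one person hold more than one of the three functions:

```python
# Illustrative sketch of enforcing separation of duties in a workflow:
# the person who executes a step may not also approve or audit it.

class SeparationOfDutiesError(Exception):
    pass

class Ticket:
    def __init__(self, executed_by):
        self.executed_by = executed_by
        self.approved_by = None
        self.audited_by = None

    def approve(self, approver):
        # Approval must come from someone other than the executor.
        if approver == self.executed_by:
            raise SeparationOfDutiesError("executor may not approve own work")
        self.approved_by = approver

    def audit(self, auditor):
        # The auditor must be independent of both execution and approval.
        if auditor in (self.executed_by, self.approved_by):
            raise SeparationOfDutiesError("auditor must be independent")
        self.audited_by = auditor

t = Ticket(executed_by="alice")
t.approve("bob")    # fine: a different person approves
t.audit("carol")    # fine: independent of both
try:
    Ticket(executed_by="alice").approve("alice")
except SeparationOfDutiesError as e:
    print("blocked:", e)
```

The same single-person check is what forces collusion: alice alone cannot push her own work through approval and audit.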
Many corporations around the world are now mandated by PCI to perform at least quarterly scans against their PCI in-scope computing systems. The main goal of this activity is to ensure vulnerabilities in systems are identified and fixed on a regular basis. I myself think this is one of the more important provisions of PCI, and one that I believe is essential to maintaining a secure environment.
What most corporations initially do is start with simple scanning tools such as Nessus, GFI LANguard or ISS Scanner and perform on-demand scans. While this is all well and good, and provides an immediate snapshot of a particular point in time, there are several major flaws that must be addressed through richer tools.
First, it is great to get vulnerability and patch data, but handing a systems engineer or administrator a single report with dozens if not hundreds of things to fix quickly becomes unreasonable for them to track and respond to. We often forget that this systems engineer is usually tasked with many other duties they must prioritize, including new installs, troubleshooting, patching, administration and configuration, which demand most of their time. These activities are often far more time-sensitive in their eyes, as projects have people regularly pressing them for completion. It is also important to note that the business is pushing them for ever greater functionality and features.
Given this fact, a simple scan report is just not viable for them to prioritize and track against their existing workload. This has given rise to vulnerability management, i.e. the process of managing vulnerabilities through to remediation through the use of ticketing and reporting to management.
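As a rough sketch of what that ticketing step might look like (the finding fields, severity scores and hostnames below are my own illustrative assumptions, not output from any real scanner), a flat scan report can be ranked and batched into a ticket queue the engineer can actually work through:

```python
# Illustrative sketch: turn a flat scan report into a small, prioritized
# ticket batch instead of handing the admin hundreds of findings at once.

def prioritize(findings, batch_size=10):
    """Rank by severity (CVSS-like score) then asset criticality, and
    emit only the top batch as tickets."""
    ranked = sorted(findings,
                    key=lambda f: (f["severity"], f["asset_criticality"]),
                    reverse=True)
    return [{"ticket": i + 1, "host": f["host"], "vuln": f["vuln"]}
            for i, f in enumerate(ranked[:batch_size])]

# Made-up findings as a scanner might report them:
findings = [
    {"host": "db01",  "vuln": "VULN-101", "severity": 9.0, "asset_criticality": 3},
    {"host": "web01", "vuln": "VULN-102", "severity": 7.8, "asset_criticality": 2},
    {"host": "dev05", "vuln": "VULN-103", "severity": 2.1, "asset_criticality": 1},
]

for ticket in prioritize(findings, batch_size=2):
    print(ticket)
```

Whatever the exact scoring, the key idea is that the engineer receives a small ranked batch through the ticketing system rather than the raw report, and management gets the rest as a measurable backlog.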
Secondly, another important flaw with simple scanning is the lack of overall metrics for measuring risk. Measuring risk is hard to do in security, but if you have an automated scanning process scheduled on a regularly recurring basis (i.e. more often than once every three months), your vulnerability data over that time can be measured as systems become either more or less exposed as they are patched or new vulnerabilities are found. This is one way you can measure the effectiveness of your patch management and your security program.
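A toy example of the kind of trend metric recurring scans make possible (the scan dates and counts below are made up for illustration):

```python
# Illustrative sketch: with recurring scans, exposure can be trended.
# Each entry is (scan_month, count_of_open_vulnerabilities) -- made-up data.

history = [
    ("2011-01", 412),
    ("2011-02", 365),
    ("2011-03", 298),
    ("2011-04", 240),
]

def month_over_month_change(history):
    """Percentage change in open vulnerabilities between consecutive
    scans: a simple effectiveness metric for the patching program."""
    changes = []
    for (_, prev), (month, curr) in zip(history, history[1:]):
        changes.append((month, round(100.0 * (curr - prev) / prev, 1)))
    return changes

for month, pct in month_over_month_change(history):
    print(month, f"{pct:+.1f}%")
```

A steadily negative trend like this is the data point that shows the patch process is working; a flat or rising one is the argument for more resources.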
Thirdly, this ensures your company clearly sees that security is a process and not just a one-time effort. This distinction is important because you as a security practitioner will need data to prove you need a consistent and ongoing supply of money to maintain security. Security is continuous and ever-changing; stagnation is a guarantee of breach.
Moral of this story… manage security, don’t just triage it and forget it.
Great tools for managing vulnerabilities are:
- Rapid7 NeXpose
- QualysGuard Enterprise
- McAfee Vulnerability Manager
Best Enterprise Vulnerability Management Product: Rapid7 NeXpose
After reviewing the top players on my shortlist, it is my opinion that the most feature-rich, lowest-cost and safest deployment option currently available is the Rapid7 appliance. Qualys is my second choice based on the same criteria, mostly because I favor onsite deployment. Finally, McAfee comes in last for me, mostly due to its lack of web and database scanning. I just jotted down SWOT thoughts on the following vendors, so if there are any corrections please send them to me via my blog’s contact form.
Vendors I Selected for the SWOT
- Rapid7
- Qualys, Inc.
- McAfee, Inc.
Rapid7 – NeXpose
Strengths:
– Highly focused on just vulnerability management
– Quick deployment
– Fast customer adoption (high growth)
– Recent infusion of growth capital (VC funding)
– Enterprise ticketing integration
– Web application scanning
– Database scanning
– VMware capability
– Onsite deployment
– Low cost (depreciable)
Weaknesses:
– Small company
– Limited policy compliance functionality (ITGRC)
– Operations cost (management, power, rack space, etc.)
– Small research team
– Small support team
Opportunities:
– Take greater market share as larger vendors lag
– Expansion to policy management (ITGRC)
– Expand distribution channel
– Integration with 3rd-party blocking technology (web app firewalls)
– Integrate web app scanning ticketing with development bug tracking systems
Threats:
– Company acquisition
– Alternative technologies are developed
– Large players address weaknesses
Qualys – QualysGuard Enterprise
Strengths:
– SaaS and cloud adoption increasing
– Web application security
– Database security
– Quick deployment
– Enterprise ticket integration
– Highly focused on vulnerability management
Weaknesses:
– SaaS only (high cost for onsite deployment option)
– High ongoing fees (non-depreciable)
– Lower ROI due to continuous yearly subscription model
– Limited database scanning support
Opportunities:
– Commitment to an onsite deployment option
– Reduce yearly subscription renewals to address the ROI argument
– Move more toward a SaaS-based ITGRC platform
– Integrate web app scanning ticketing with development bug tracking systems
Threats:
– ITGRC vendors expand into the vulnerability management space
– Smaller (more nimble) companies develop better functionality
– Larger players lower pricing further
– Larger players match the SaaS offering
McAfee – McAfee Vulnerability Manager
Strengths:
– Large market share
– Countermeasure awareness
– VMware option available
– Foundstone research heritage
– Instant new-threat assessment reporting
– Onsite deployment option
Weaknesses:
– Limited web application scanning
– Limited database scanning
– Countermeasure awareness limitations (competitor products?)
– Console strategy unknown (ePO?)
– Some functionality requires a separate console
Opportunities:
– SaaS expansion to include ticketing and policy compliance (ITGRC)
– Consolidate existing SaaS offerings under one single website console
– Consolidate separately managed products into ePO (i.e., Vulnerability Manager, Risk and Compliance Manager and Remediation Manager)
Threats:
– Poor execution of consolidated console strategy
– Possibility of acquisition
– Reduced revenue due to commoditization
Note: The results of this analysis are not quantitative in nature and represent only the opinions of the author, not those of any other association, organization or person.