My latest Gartner research: Market Share Analysis: Unified Threat Management (SMB Multifunction Firewalls), Worldwide, 2014 Update

The market is consolidating, with Dell's acquisition of SonicWALL and Cassidian CyberSecurity's acquisition of NetASQ. Recent market entrant Huawei is disrupting parts of the UTM market with new products. Providers such as Dell, with comprehensive portfolios beyond security products, make it more …

Gartner customers may access this research by clicking here.





My Latest Gartner Research: Market Share: Unified Threat Management (SMB Multifunction Firewalls), Worldwide, 2013

This document was revised on 14 March 2013. For more information, see the Corrections page on gartner.com. Contents: 1. Market Size by Segment, Worldwide, 2010-2013; 2. Market Shares by Segment, Worldwide, 2010-2013; 3. Definitions; 1-1. Market Size: Unified Threat Management (SMB Multifunction Firewalls), ..

Gartner customers may access this research by clicking here.



Problems with running to SSL in fear of the NSA.

Recently, a whole host of companies have been rapidly implementing SSL across their entire websites in response to the NSA scandal. I, for one, don’t buy into the paranoia to the extent that the media and everyone else seem to. As an American citizen, my expectation is that my government is doing what it can to protect me, and as a technologist I am constantly advising organizations around the world on what they need to do to protect themselves. In that work, it is very common for technologies to be deployed that peer into user network traffic. The main goal of this inspection is to protect users, not to spy and snoop on their activities. I realize that organizations are a bit different from a government agency, but honestly, folks, when have you seen court cases built on NSA data? They are few and far between. Intelligence is about gathering information. Information is used as context in decision making; we all do this, seeking out information for every decision we make.

Now, I am not defending the NSA’s tromping through the US Constitution; I agree that our government should be tightly controlled and held to the constitutional standards set forth by our forefathers. I only want to shed some light on what “we” as organizations already do globally. We go well beyond tracking “metadata” about the users on our networks, and this is largely in order to protect ourselves from the evil presented by hackers and nation states that wish to get into our information or steal our intellectual property.

Now we come to the use of SSL. Although I do believe that anyone concerned with government monitoring, or with the transport of sensitive information over the internet, should encrypt that traffic, one thing organizations need to consider is the impact on the user experience and on their own infrastructure. Leveraging SSL for absolutely all content can carry a dramatic performance penalty; even though SSL is now much easier to implement thanks to hardware performance enhancements, deploying it everywhere can have huge impacts and must be weighed by everyone involved. I urge the community at large, and the IETF, to push for mixed-mode web content encryption and new browser standards that would allow encryption to be specified only for sensitive items such as the transport of cookies, forms, specifically called-out elements and other such information, without the need to transport absolutely everything over an encrypted channel. I realize that HTML does provide for this, but many browsers prompt users with warnings, making it difficult for web content providers to selectively encrypt the content that “must” be secured while leaving other content unencrypted. A concerted effort could eliminate the need for those browser warnings while also improving the security of “sensitive” content.

One major disadvantage is that, for organizations that wish to dramatically reduce network load by leveraging caching proxies, SSL must be terminated at the proxy for those caches to be effective. That actually diminishes security quite extensively and could introduce potential liability (I am not a lawyer, so this isn’t legal advice). The reason I bring this up is that I run a network proxy cache myself, and I really don’t want to pierce my SSL sessions en masse just to cache my network resources properly.
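A quick sketch of why that is (proxy.example.local is a hypothetical proxy host, and the X-Cache header shown is what Squid-style caches typically add; adjust for your own environment): HTTPS through an ordinary forward proxy is just an opaque CONNECT tunnel, so there is nothing for the cache to store, while plain HTTP passes through as a normal request that the cache can answer.

# HTTPS via a non-intercepting proxy: only a CONNECT tunnel is visible, nothing cacheable
curl -x http://proxy.example.local:3128 -v https://example.com/ 2>&1 | grep -i connect
# plain HTTP via the same proxy: the response can be served from cache
curl -x http://proxy.example.local:3128 -sI http://example.com/ | grep -i x-cache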

My two cents. What are yours?



Optimized TCP settings for Linux (openSUSE 13.1) via sysctl.conf

Thought this might be helpful to some of you out there in internet land. These are optimized settings for sysctl.conf on openSUSE 13.1 on a 100 megabit link. I have also created a small script called “initcwnd” that sets the initial congestion (and receive) window to 10 on every installed route:

/etc/rc.d # more initcwnd

#!/bin/sh
# Begin script: set initcwnd/initrwnd to 10 on every route currently in the table.
# $p is intentionally left unquoted so each route line expands back into arguments.
/sbin/ip route | while read -r p; do
    /sbin/ip route change $p initcwnd 10 initrwnd 10
done

Make it executable:

chmod +x initcwnd
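A quick way to confirm the change took effect (nothing openSUSE-specific here):

# modified routes should now list "initcwnd 10 initrwnd 10"
/sbin/ip route show | grep -E 'initcwnd|initrwnd'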

 

####
#
# /etc/sysctl.conf is meant for local sysctl settings
#
# sysctl reads settings from the following locations:
# /boot/sysctl.conf-<kernelversion>
# /lib/sysctl.d/*.conf
# /usr/lib/sysctl.d/*.conf
# /usr/local/lib/sysctl.d/*.conf
# /etc/sysctl.d/*.conf
# /run/sysctl.d/*.conf
# /etc/sysctl.conf
#
# To disable or override a distribution provided file just place a
# file with the same name in /etc/sysctl.d/
#
# See sysctl.conf(5), sysctl.d(5) and sysctl(8) for more information
#
####

# net.ipv6.conf.all.disable_ipv6 = 1
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
fs.file-max = 65535
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.ipv4.tcp_mem = 67108864 67108864 67108864
net.ipv4.tcp_low_latency = 1
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.tcp_max_syn_backlog = 250000
# increase TCP max buffer size settable using setsockopt()
net.core.rmem_max = 67108864
net.core.wmem_max = 67108864
# increase Linux autotuning TCP buffer limit
net.ipv4.tcp_rmem = 4096 87380 67108864
net.ipv4.tcp_wmem = 4096 65536 67108864
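# rough sizing check for the 100 megabit link this post targets (RTT assumed ~100 ms):
#   BDP = (100,000,000 bit/s / 8) * 0.1 s = ~1.25 MB per flow,
#   so the 64 MB (67108864-byte) ceilings above leave ample autotuning headroom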
# increase the length of the processor input queue
net.core.netdev_max_backlog = 250000
# congestion control algorithm (htcp is the commonly recommended default for
# high-speed paths; plain reno is what is actually set here)
net.ipv4.tcp_congestion_control=reno
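# the algorithms this kernel can use are listed in net.ipv4.tcp_available_congestion_control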
# MTU probing (a value of 1 is recommended for hosts with jumbo frames enabled;
# it is left disabled here)
net.ipv4.tcp_mtu_probing=0
vm.swappiness = 60
net.ipv4.tcp_ecn = 0
net.ipv4.tcp_window_scaling = 1
vm.vfs_cache_pressure=50
vm.dirty_background_ratio=25
vm.dirty_ratio=20
net.ipv4.tcp_rfc1337=1
net.ipv4.tcp_workaround_signed_windows=1
net.ipv4.tcp_sack=1
net.ipv4.tcp_fack=1
net.ipv4.ip_no_pmtu_disc=1
net.ipv4.tcp_frto=2
net.ipv4.tcp_keepalive_probes=3
vm.min_free_kbytes = 65536
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_early_retrans = 1
net.ipv4.tcp_slow_start_after_idle=0
net.ipv4.tcp_timestamps=0
## Netfilter
net.netfilter.nf_conntrack_acct = 0
net.netfilter.nf_conntrack_checksum = 1
net.netfilter.nf_conntrack_events = 1
net.netfilter.nf_conntrack_events_retry_timeout = 15
net.netfilter.nf_conntrack_expect_max = 4096
net.netfilter.nf_conntrack_generic_timeout = 600
net.netfilter.nf_conntrack_helper = 1
net.netfilter.nf_conntrack_icmp_timeout = 30
net.netfilter.nf_conntrack_log_invalid = 0
net.netfilter.nf_conntrack_max = 65536
net.netfilter.nf_conntrack_tcp_be_liberal = 1
net.netfilter.nf_conntrack_tcp_loose = 1
net.netfilter.nf_conntrack_tcp_max_retrans = 3
net.netfilter.nf_conntrack_tcp_timeout_close = 10
net.netfilter.nf_conntrack_tcp_timeout_close_wait = 60
net.netfilter.nf_conntrack_tcp_timeout_established = 432000
net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_last_ack = 30
net.netfilter.nf_conntrack_tcp_timeout_max_retrans = 300
net.netfilter.nf_conntrack_tcp_timeout_syn_recv = 60
net.netfilter.nf_conntrack_tcp_timeout_syn_sent = 120
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_unacknowledged = 300
net.netfilter.nf_conntrack_timestamp = 0
net.netfilter.nf_conntrack_udp_timeout = 30
net.netfilter.nf_conntrack_udp_timeout_stream = 180
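
To load the settings without a reboot and spot-check a few of them (standard sysctl usage, nothing distribution-specific assumed):

# apply everything in /etc/sysctl.conf immediately
sysctl -p /etc/sysctl.conf
# spot-check a couple of the values set above
sysctl net.ipv4.tcp_congestion_control net.core.rmem_max
# watch conntrack table usage against the nf_conntrack_max ceiling above
cat /proc/sys/net/netfilter/nf_conntrack_count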



[ISN] Law Firms Are Pressed on Security for Data

http://dealbook.nytimes.com/2014/03/26/law-firms-scrutinized-as-hacking-increases/
By MATTHEW GOLDSTEIN
Dealbook, The New York Times
March 26, 2014

A growing number of big corporate clients are demanding that their law firms take more steps to guard against online intrusions that could compromise sensitive information as global concerns about hacker threats mount.

Wall Street banks are pressing outside law firms to demonstrate that their computer systems are employing top-tier technologies to detect and deter attacks from hackers bent on getting their hands on corporate secrets either for their own use or sale to others, said people briefed on the matter who spoke on the condition of anonymity. Some financial institutions are asking law firms to fill out lengthy 60-page questionnaires detailing their cybersecurity measures, while others are doing on-site inspections.

Other companies are asking law firms to stop putting files on portable thumb drives, emailing them to nonsecure iPads or working on computers linked to a shared network in countries like China and Russia where hacking is prevalent, said the people briefed on the matter. In some cases, banks and companies are threatening to withhold legal work from law firms that balk at the increased scrutiny or requesting that firms add insurance coverage for data breaches to their malpractice policies.

“It is forcing the law firms to clean up their acts,” said Daniel B. Garrie, executive managing partner with Law & Forensics, a computer security consulting firm that specializes in working with law firms. “When people say, ‘We won’t pay you money because your security stinks,’ that carries weight.”

[…]



[ISN] How will Windows XP end of support affect health IT security?

http://healthitsecurity.com/2014/03/27/how-will-windows-xp-end-of-support-affect-health-it-security/
By Patrick Ouellette
Health IT Security
March 27, 2014

As is the case with most pending vendor support deadlines, the upcoming end of Microsoft Windows XP support on April 8, 2014 has been a polarizing topic in the enterprise and healthcare spaces. There are some organizations that may be unaware that Microsoft will no longer be providing security patches, and others that are building Fort Knox 2.0 because of the XP end of support.

However, a few IT security professionals within healthcare organizations told HealthITSecurity.com that they believe the biggest impact will likely be on smaller healthcare organizations. The reality for these organizations is that they must account for projects such as ICD-10 or Meaningful Use, and upgrading their XP machines may go on the back burner out of necessity. Without the proper funding and IT security talent available to some providers, these security concerns become that much more difficult to manage.

Stephen Person, Network & Security Engineer at North Valley Hospital and HealthCare Information Security and Privacy Practitioner (HCISPP), said he guarantees that many organizations are looking at the end-of-life of Windows XP.

[…]



[ISN] Patch management flubs facilitate cybercrime

http://www.networkworld.com/news/2014/032714-solutionary-280149.html
By Ellen Messmer
Network World
March 27, 2014

Failures in patch management of vulnerable systems have been a key enabler of cybercrime, according to the conclusions reached in Solutionary’s annual Global Threat Intelligence Report out today, which sees botnet attacks as the biggest single threat.

The managed security services provider, now part of NTT, compiled a year’s worth of scans of customers’ networks gathered through 139,000 network devices, such as intrusion-detection systems, firewalls and routers, and analyzed about 300 million events, along with 3 trillion collected logs associated with attacks. Solutionary says it relies on several types of vendor products for these scans, including Qualys, Nessus, Saint, Rapid7, nCircle and Retina.

Solutionary also looked at the latest exploit kits used by hackers, which include exploits from as far back as 2006. Solutionary found that half of the vulnerabilities identified in its scans of NTT customers last year were first identified and assigned CVE numbers between 2004 and 2011. “That is, half of the exploitable vulnerabilities we identified have been publicly known for at least two years, yet they remain open for an attacker to find and exploit,” Solutionary said in its Global Threat Intelligence Report. “The data indicates many organizations today are unaware, lack the capability, or don’t perceive the importance of addressing these vulnerabilities in a timely manner.”

[…]

