Big data analytics is now a foundational element of an emerging generation of more powerful security products and services. An increasing number of providers are leveraging security intelligence to deliver better controls that counteract the growing threat of advanced attacks. The boundaries …

Gartner clients can read this research by clicking here.


Performance Results of Free Public DNS Services

On April 9, 2014, in Personal, Security, by Lawrence Pingree

I ran some tests today to optimize my Internet performance, using DNSBench. The test included all of the major free public DNS services that provide some level of malicious-host protection. My results are below.

Please note: performance results may vary. This test was performed over a Comcast home broadband connection in Livermore, California, and the locations of the requestor and the servers can contribute significantly to overall performance. All users should perform their own tests to select an appropriate provider.
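If you want to spot-check a resolver by hand rather than with DNSBench, a rough approach is to time a few lookups with dig and read the reported query times. This is only a sketch of the idea; the resolver IPs come from the ranking below, and the domains are arbitrary examples, not what DNSBench actually queries.

for server in 199.85.127.10 208.67.220.220 8.26.56.26; do
  echo "== $server =="
  for name in google.com wikipedia.org gartner.com; do
    # dig prints a ";; Query time: N msec" line when +stats is enabled
    dig @"$server" "$name" A +noall +stats | grep 'Query time'
  done
done

Repeating the loop a few times roughly separates uncached (first run) from cached (subsequent runs) behavior.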

Secure Free Public DNS Test Performance Graphic


Performance Ranking by Provider and DNS Server

#1 - Norton ConnectSafe DNS
nameserver 199.85.127.10 (fastest DNS lookups in overall test)
nameserver 199.85.126.10 (4th place DNS lookups in overall test)

#2 - OpenDNS Home
nameserver 208.67.220.220 (2nd fastest DNS lookups in overall test)
nameserver 208.67.222.222 (3rd place DNS lookups in overall test)

#3 - Comodo Secure DNS
nameserver 8.26.56.26 (5th place DNS lookups in overall test)
nameserver 8.20.247.20 (6th place DNS lookups in overall test)
Below is the full report from the tool.

Final benchmark results, sorted by nameserver performance:
(average cached name retrieval speed, fastest to slowest)

199. 85.127. 10 |  Min  |  Avg  |  Max  |Std.Dev|Reliab%|
----------------+-------+-------+-------+-------+-------+
- Cached Name   | 0.014 | 0.016 | 0.020 | 0.001 | 100.0 |
- Uncached Name | 0.017 | 0.086 | 0.254 | 0.070 | 100.0 |
- DotCom Lookup | 0.035 | 0.077 | 0.127 | 0.023 | 100.0 |
---<-------->---+-------+-------+-------+-------+-------+
··· no official Internet DNS name ···
ULTRADNS - NeuStar, Inc.,US

208. 67.220.220 |  Min  |  Avg  |  Max  |Std.Dev|Reliab%|
----------------+-------+-------+-------+-------+-------+
- Cached Name   | 0.020 | 0.022 | 0.024 | 0.001 | 100.0 |
- Uncached Name | 0.021 | 0.146 | 0.590 | 0.136 | 100.0 |
- DotCom Lookup | 0.082 | 0.196 | 0.335 | 0.058 | 100.0 |
---<-------->---+-------+-------+-------+-------+-------+
resolver2.opendns.com
OPENDNS - OpenDNS, LLC,US

208. 67.222.222 |  Min  |  Avg  |  Max  |Std.Dev|Reliab%|
----------------+-------+-------+-------+-------+-------+
- Cached Name   | 0.020 | 0.022 | 0.025 | 0.001 | 100.0 |
- Uncached Name | 0.021 | 0.152 | 0.518 | 0.139 | 100.0 |
- DotCom Lookup | 0.078 | 0.189 | 0.351 | 0.070 | 100.0 |
---<-------->---+-------+-------+-------+-------+-------+
resolver1.opendns.com
OPENDNS - OpenDNS, LLC,US

199. 85.126. 10 |  Min  |  Avg  |  Max  |Std.Dev|Reliab%|
----------------+-------+-------+-------+-------+-------+
- Cached Name   | 0.030 | 0.032 | 0.037 | 0.001 | 100.0 |
- Uncached Name | 0.033 | 0.100 | 0.261 | 0.072 | 100.0 |
- DotCom Lookup | 0.061 | 0.105 | 0.159 | 0.022 | 100.0 |
---<-------->---+-------+-------+-------+-------+-------+
··· no official Internet DNS name ···
ULTRADNS - NeuStar, Inc.,US

8. 26. 56. 26   |  Min  |  Avg  |  Max  |Std.Dev|Reliab%|
----------------+-------+-------+-------+-------+-------+
- Cached Name   | 0.032 | 0.058 | 0.246 | 0.053 | 100.0 |
- Uncached Name | 0.035 | 0.130 | 0.423 | 0.100 | 100.0 |
- DotCom Lookup | 0.035 | 0.094 | 0.132 | 0.036 | 100.0 |
---<-------->---+-------+-------+-------+-------+-------+
ns1.recursive.dns.com
ELVATE - Elvate.com, LLC,US

8. 20.247. 20   |  Min  |  Avg  |  Max  |Std.Dev|Reliab%|
----------------+-------+-------+-------+-------+-------+
- Cached Name   | 0.032 | 0.066 | 0.253 | 0.054 | 100.0 |
- Uncached Name | 0.034 | 0.138 | 0.525 | 0.119 | 100.0 |
- DotCom Lookup | 0.034 | 0.099 | 0.130 | 0.031 | 100.0 |
---<-------->---+-------+-------+-------+-------+-------+
ns2.recursive.dns.com
ELVATE - Elvate.com, LLC,US
UTC: 2014-04-09, from 21:33:01 to 21:33:30, for 00:28.540

Interpreting your benchmark results above:

The following guide is only intended as a quick “get you going” reference and reminder.

To obtain a working understanding of this program’s operation, and to familiarize yourself with its many features, please see the main DNS Benchmark web page by clicking on the “Goto DNS Page” button below.

Referring to this sample:

64. 81.159. 2   |  Min  |  Avg  |  Max  |Std.Dev|Reliab%
----------------+-------+-------+-------+-------+-------
- Cached Name   | 0.001 | 0.001 | 0.001 | 0.000 | 100.0
- Uncached Name | 0.021 | 0.033 | 0.045 | 0.016 | 100.0
- DotCom Lookup | 0.021 | 0.022 | 0.022 | 0.001 | 100.0
---<O-OO---->---+-------+-------+-------+-------+-------
dns.chi1.speakeasy.net
Speakeasy

The Benchmark creates a table similar to the one above for each DNS resolver (nameserver) tested. The top line specifies the IP address of the nameserver for this table.

The first three numeric columns provide the minimum, average, and maximum query-response times in seconds. Note that these timings incorporate all network delays from the querying computer, across the Internet, to the nameserver, the nameserver’s own processing, and the return of the reply. Since the numbers contain three decimal digits of accuracy, the overall resolution of the timing is thousandths of a second, or milliseconds.

The fourth numeric column shows the “standard deviation” of the collected query-response times, which is a common statistical measure of the spread of the values: a smaller standard deviation means more consistency and less spread.

The fifth and last numeric column shows the reliability of the tested nameserver’s replies to queries. Since lost, dropped, or ignored queries introduce a significant lookup delay (typically a full second or more each), a nameserver’s reliability is an important consideration.
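As a quick illustration of what the first four columns represent, here is a small sketch (my own example, not DNSBench output) that computes Min, Avg, Max, and a plain population standard deviation over a handful of made-up query times; DNSBench’s exact formula may differ slightly.

printf '%s\n' 0.014 0.016 0.015 0.020 0.016 | awk '
  { x[NR] = $1; sum += $1
    if (NR == 1 || $1 < min) min = $1
    if ($1 > max) max = $1 }
  END {
    avg = sum / NR
    for (i = 1; i <= NR; i++) ss += (x[i] - avg) ^ 2
    printf "Min %.3f  Avg %.3f  Max %.3f  Std.Dev %.3f\n", min, avg, max, sqrt(ss / NR)
  }'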

The labels of the middle three lines are colored red, green, and blue to match their respective bars on the response time bar chart.

The “Cached Name” line presents the timings for queries that are answered from the server’s own local name cache without requiring it to forward the query to other name servers. Since the name caches of active public nameservers will always be full of the IPs of common domains, the vast majority of queries will be cached. Therefore, the Benchmark gives this timing the highest weight.

The “Uncached Name” line presents the timings for queries which could not be answered from the server’s local cache and required it to ask another name server for the data. Specifically, this measures the time required to resolve the IP addresses of the Internet’s 30 most popular web sites. The Benchmark gives this timing the second highest weight.

The “DotCom Lookup” line presents the timings for the resolution of dot com nameserver IP addresses. This differs from the Cached and Uncached tests above, since they measure the time required to determine a dot com’s IP, whereas the DotCom Lookup measures the time required to resolve the IP of a dot com’s nameserver, from which a dot com’s IP would then be resolved. This test presents a measure of how well the DNS server being tested is connected to the dot com nameservers.
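A rough way to get a feel for this by hand (only an approximation of what the tool measures, using an arbitrary example domain) is to ask the resolver for a dot com’s NS records and note the reported query time:

dig @199.85.127.10 example.com NS +noall +answer +stats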

The lower border of the table contains a set of eight indicators (O and -) representing non-routable networks whose IP addresses are actively blocked by the resolver to protect its users from DNS rebinding attacks: <O-OO---->. The “O” character indicates that blocking is occurring for the corresponding network, whereas the “-” character indicates that non-routable IP addresses are being resolved and rebinding protection is not present. The first four symbols represent the four IPv4 networks beginning with 10., 127., 172., and 192. respectively, and the second four symbols are the same networks but for IPv6.
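One rough way to approximate this check outside the Benchmark (an assumption about how to test it, not DNSBench’s actual method) is to ask the resolver for a name that deliberately resolves to a private address; wildcard DNS services such as nip.io answer names like 10.0.0.1.nip.io with the embedded address.

dig @208.67.220.220 10.0.0.1.nip.io A +short
# an answer of 10.0.0.1 suggests private responses are passed through;
# an empty or filtered answer suggests the resolver blocks them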

The final two lines at the bottom of each chart duplicate the information from the Name and Owner tabs on the Nameserver page:

dns.chi1.speakeasy.net
Speakeasy

The first line displays the “Reverse DNS” name of the server, if any. (This is the name looked up by the nameserver’s IP address.) The second line displays the ownership information, if any, of the network containing the nameserver.

The final line of the automatically generated chart is a timestamp that shows the date and time of the start, completion, and total elapsed time of the benchmark:

UTC: 2009-07-15 from 16:41:50 to 16:44:59 for 03:08.703

All times are given in Coordinated Universal Time (UTC), which is equivalent to GMT. In the sample shown above, the entire benchmark required 3 minutes, 8.703 seconds to run to completion.

All, or a marked portion, of the Tabular Data results on this page may be copied to the Windows clipboard or saved to a file for safekeeping, sharing, or later comparison.
• • •


Hilarious Marcus Ranum Interview at RSA 2014

On April 8, 2014, in Personal, Security, by Lawrence Pingree

I gotta say, there are very few days when I really laugh out loud about the security industry. Today is one of those days. I was sent this clip by a friend of mine, and apparently it was also tweeted by @riskybusiness. It’s a very good interview of Marcus by Patrick Gray about the RSA conference, and I agree with their comment: “I think Marcus Ranum (@mjranum) and I managed to sum up the RSA trade floor in 37 seconds…“

Click here to listen to the interview


The market consolidates with Dell’s acquisition of SonicWALL and Cassidian CyberSecurity’s acquisition of NetASQ. Recent market entrant Huawei disrupts some of the UTM market with new products. Providers such as Dell, with comprehensive portfolios beyond security products, make it more …

Gartner customers may access this research by clicking here.


This document was revised on 14 March 2013. For more information, see the Corrections page on gartner.com. 1 Market Size by Segment, Worldwide, 2010-2013 2 Market Shares by Segment, Worldwide, 2010-2013 3 Definitions 1-1 Market Size: Unified Threat Management (SMB Multifunction Firewalls), ..

Gartner customers may access this research by clicking here.


Problems with running to SSL in fear of the NSA.

On March 28, 2014, in Personal, Security, by Lawrence Pingree

Recently, a whole host of companies have been rapidly implementing SSL across their entire websites in response to the NSA scandal. I, for one, don’t buy into the paranoia to the extent that the media and everyone else do. As an American citizen, my expectation is that my government is doing what it can to protect me, and as a technologist I am constantly advising organizations around the globe on what they need to do to protect themselves. In that work, it is very common to see technologies deployed that peer into user network traffic. The main goal of this inspection is to protect users, not to spy and snoop on their activities. I realize that organizations are a bit different from government agencies, but honestly, folks, when have you seen court cases involving NSA data? They are few and far between. Intelligence is about gathering information, and information is used as context in decision making; we all do this, seeking information for every decision we make.

Now, I am not defending the NSA’s tromping through the US Constitution; I agree that our government should be tightly controlled and held to the constitutional standards set forth by our forefathers. I only want to shed some light on what “we” already do as organizations globally. As organizations, we go well beyond tracking “metadata” about the users of our networks, largely in order to protect ourselves from the evil presented by the hackers and nation-states that want to get at our information or steal our intellectual property.

Now we come to the use of SSL. Although I do believe that anyone concerned with government monitoring should encrypt sensitive information transported over the Internet, one thing organizations need to consider is the impact on the user experience and on their own infrastructure. Leveraging SSL for absolutely all content can carry a dramatic performance disadvantage; even though SSL is now much easier to implement thanks to hardware performance enhancements, implementing it everywhere can have huge impacts and must be weighed by everyone involved. I urge the community at large and the IETF to push for mixed-mode web content encryption and new browser standards that would allow encryption to be applied only to sensitive items such as cookies, forms, and other specifically called-out elements, without the need to transport absolutely everything over an encrypted channel. I realize that HTML does provide for this, but many browsers prompt users with warnings, making it difficult for web content providers to selectively encrypt the content that “must” be secured while leaving other content unencrypted. A concerted effort could eliminate the need for those browser warnings while also improving the security of “sensitive” content.

One major disadvantage is that, for organizations that wish to dramatically reduce network load by leveraging caching proxies, SSL must be terminated at the proxy for those caches to be effective. That actually diminishes security quite extensively and could introduce potential liability (I am not a lawyer, so this isn’t legal advice). The reason I bring up this topic is that I run a network proxy cache myself, and I really don’t want to pierce my SSL sessions en masse just to cache my network resources properly.

My two cents. What are yours?


Thought this might be helpful to some of you out there in Internet land. These are optimized sysctl.conf settings for openSUSE 13.1 on a 100 megabit link. I have also created a file called “initcwnd”.

Set the initial congestion window (cwnd) to 10. The script lives in /etc/rc.d:

/etc/rc.d # more initcwnd

#!/bin/sh
# Begin script: rewrite each existing route to use an initial congestion
# window (initcwnd) and initial receive window (initrwnd) of 10
/sbin/ip route | while read p; do ip route change $p initcwnd 10 initrwnd 10; done

Make the script executable:

chmod +x initcwnd
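Assuming the script above has been run against the current routing table, the change should be visible directly in the route entries:

/sbin/ip route show
# each modified route should now end with "... initcwnd 10 initrwnd 10"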

 

####
#
# /etc/sysctl.conf is meant for local sysctl settings
#
# sysctl reads settings from the following locations:
# /boot/sysctl.conf-<kernelversion>
# /lib/sysctl.d/*.conf
# /usr/lib/sysctl.d/*.conf
# /usr/local/lib/sysctl.d/*.conf
# /etc/sysctl.d/*.conf
# /run/sysctl.d/*.conf
# /etc/sysctl.conf
#
# To disable or override a distribution provided file just place a
# file with the same name in /etc/sysctl.d/
#
# See sysctl.conf(5), sysctl.d(5) and sysctl(8) for more information
#
####

# net.ipv6.conf.all.disable_ipv6 = 1
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
fs.file-max = 65535
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.ipv4.tcp_mem = 67108864 67108864 67108864
net.ipv4.tcp_low_latency = 1
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.tcp_max_syn_backlog = 250000
# increase TCP max buffer size settable using setsockopt()
net.core.rmem_max = 67108864
net.core.wmem_max = 67108864
# increase Linux autotuning TCP buffer limit
net.ipv4.tcp_rmem = 4096 87380 67108864
net.ipv4.tcp_wmem = 4096 65536 67108864
# increase the length of the processor input queue
net.core.netdev_max_backlog = 250000
# congestion control (htcp is often recommended for fast links; reno is set here)
net.ipv4.tcp_congestion_control=reno
# MTU probing (a value of 1 is recommended for hosts with jumbo frames; disabled here)
net.ipv4.tcp_mtu_probing=0
vm.swappiness = 60
net.ipv4.tcp_ecn = 0
net.ipv4.tcp_window_scaling = 1
vm.vfs_cache_pressure=50
vm.dirty_background_ratio=25
vm.dirty_ratio=20
net.ipv4.tcp_rfc1337=1
net.ipv4.tcp_workaround_signed_windows=1
net.ipv4.tcp_sack=1
net.ipv4.tcp_fack=1
net.ipv4.ip_no_pmtu_disc=1
net.ipv4.tcp_frto=2
net.ipv4.tcp_keepalive_probes=3
vm.min_free_kbytes = 65536
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_early_retrans = 1
net.ipv4.tcp_slow_start_after_idle=0
net.ipv4.tcp_timestamps=0
## Netfilter
net.netfilter.nf_conntrack_acct = 0
net.netfilter.nf_conntrack_checksum = 1
net.netfilter.nf_conntrack_events = 1
net.netfilter.nf_conntrack_events_retry_timeout = 15
net.netfilter.nf_conntrack_expect_max = 4096
net.netfilter.nf_conntrack_generic_timeout = 600
net.netfilter.nf_conntrack_helper = 1
net.netfilter.nf_conntrack_icmp_timeout = 30
net.netfilter.nf_conntrack_log_invalid = 0
net.netfilter.nf_conntrack_max = 65536
net.netfilter.nf_conntrack_tcp_be_liberal = 1
net.netfilter.nf_conntrack_tcp_loose = 1
net.netfilter.nf_conntrack_tcp_max_retrans = 3
net.netfilter.nf_conntrack_tcp_timeout_close = 10
net.netfilter.nf_conntrack_tcp_timeout_close_wait = 60
net.netfilter.nf_conntrack_tcp_timeout_established = 432000
net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_last_ack = 30
net.netfilter.nf_conntrack_tcp_timeout_max_retrans = 300
net.netfilter.nf_conntrack_tcp_timeout_syn_recv = 60
net.netfilter.nf_conntrack_tcp_timeout_syn_sent = 120
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_unacknowledged = 300
net.netfilter.nf_conntrack_timestamp = 0
net.netfilter.nf_conntrack_udp_timeout = 30
net.netfilter.nf_conntrack_udp_timeout_stream = 180
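To load the settings above without a reboot, run sysctl as root against the file (the stock /etc/sysctl.conf path is assumed here):

sysctl -p /etc/sysctl.conf
# on systemd-based openSUSE releases, "sysctl --system" reloads every
# location listed in the header comment above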


Forwarded from: security curmudgeon

On Wed, 26 Mar 2014, InfoSec News wrote:

: http://www.au.af.mil/au/ssq/digital/pdf/spring_2014/Libicki.pdf
:
: Strategic Studies Quarterly (SSQ)
: The Strategic Journal of the United States Air Force
: Volume 8, Issue 1 - Spring 2014
: By Martin C. Libicki
:
: Even assuming the cyber domain has yet to stop evolving, it is not clear
: a classic strategic treatment of cyber war is possible, or, if it were,
: it would be particularly beneficial. The salutary effects of such
: classics are limited, the basic facts of cyberspace and cyber war do not
: suggest it would be as revolutionary as airpower has been, and if there
: were a classic on cyber war, it would likely be pernicious.

The subject is interesting, the link to af.mil is intriguing. Oh wait, Libicki? I know that name…

“The following hints may be indicative. Private hackers are more likely to use techniques that have been circulating throughout the hacker community. While it is not impossible that they have managed to generate a novel exploit to take advantage of a hitherto unknown vulnerability, they are unlikely to have more than one.”
