Tag Archives: end

My latest Gartner research: Forecast Snapshot: Endpoint Detection and Response, Worldwide, 2017

3 March 2017  |  The EDR market will present large opportunities and grow at a CAGR of 45.27% from 2015 through 2020, dwarfing overall IT security and endpoint protection growth rates. Buyer demand for improved detection and response to augment failing protection methods is fueling growth….

Gartner clients can access this research by clicking here.





Optimized Squid Proxy Configuration for 4.0.17

Optimized Squid Configuration for Version 4.0.17
Results: all non-dynamic HTTP traffic cached

System Requirements:

Intel dual-core CPU, 3 GHz

Memory: 16 GB

2 SSD drives, 200 GB each

Directory Structure:

/ssd

/ssd2

Note: Make sure to edit IP addresses to your environment.
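The twelve numbered cache subdirectories used by the `cache_dir` lines further down (`/ssd/0` through `/ssd/5` and `/ssd2/0` through `/ssd2/5`) must exist and be writable by the `squid` user before Squid will start. A small illustrative helper, sketched in Python (the `create_cache_dirs` name and the base paths are hypothetical, not part of the config):

```python
import os

def create_cache_dirs(bases, count=6):
    """Create numbered cache subdirectories 0..count-1 under each base path."""
    created = []
    for base in bases:
        for i in range(count):
            path = os.path.join(base, str(i))
            os.makedirs(path, exist_ok=True)  # no error if it already exists
            created.append(path)
    return created

# Illustrative run; on the real host you would pass ["/ssd", "/ssd2"]
# and then chown the directories to the squid user.
dirs = create_cache_dirs(["/tmp/ssd", "/tmp/ssd2"])
print(len(dirs))  # 12
```

After creating the directories (and before first start), remember that Squid also needs to initialize its swap structures in them, which is normally done with `squid -z`.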

 

#BEGIN CONFIG
#
# Recommended minimum Access Permission configuration:
#
# Only allow cachemgr access from localhost

#no_cache deny noscan
#http_access deny block-googlezip-dcp
#always_direct allow noscan
#no_cache deny video
#always_direct allow video

# Deny requests to certain unsafe ports

# Deny CONNECT to other than secure SSL ports

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on .localhost. is a local user
#http_access deny to_localhost

#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#
#cache_peer 192.168.1.1 parent 8080 0 default no-query no-digest no-netdb-exchange
#never_direct allow all

# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed

http_access allow all

# allow localhost always proxy functionality

# And finally deny all other access to this proxy
# Squid normally listens to port 3128
pipeline_prefetch 99
read_ahead_gap 1024 MB
client_request_buffer_max_size 16384 KB
request_header_max_size 8192 KB
reply_header_max_size 8192 KB
#quick_abort_min -1 KB
#quick_abort_pct 100
#range_offset_limit 32 MB
#refresh_stale_hit 60 seconds
eui_lookup off
http_port 0.0.0.0:8080 intercept disable-pmtu-discovery=always
http_port 0.0.0.0:3128
tcp_outgoing_address 192.168.2.2

client_persistent_connections on
server_persistent_connections on
detect_broken_pconn on

# We recommend you to use at least the following line.
#hierarchy_stoplist cgi-bin ?
# Uncomment and adjust the following to add a disk cache directory.
#cache_dir diskd /ssd/0 54000 32 256 Q1=256 Q2=144
#cache_dir diskd /ssd/1 54000 32 256 Q1=256 Q2=144
#cache_dir diskd /ssd/3 54000 32 256 Q1=256 Q2=144

#cache_dir diskd /ssd2/0 68000 32 256 Q1=256 Q2=144
#cache_dir diskd /ssd2/1 68000 32 256 Q1=256 Q2=144
#cache_dir diskd /ssd2/3 68000 32 256 Q1=256 Q2=144

cache_dir ufs /ssd/0 40000 1024 256
cache_dir ufs /ssd/1 40000 1024 256
cache_dir ufs /ssd/2 40000 1024 256
cache_dir ufs /ssd/3 40000 1024 256
cache_dir ufs /ssd/4 40000 1024 256
cache_dir ufs /ssd/5 40000 1024 256
cache_dir ufs /ssd2/0 40000 1024 256
cache_dir ufs /ssd2/1 40000 1024 256
cache_dir ufs /ssd2/2 40000 1024 256
cache_dir ufs /ssd2/3 40000 1024 256
cache_dir ufs /ssd2/4 40000 1024 256
cache_dir ufs /ssd2/5 40000 1024 256
store_dir_select_algorithm round-robin
#cache_replacement_policy heap LFUDA
#memory_replacement_policy heap GDSF

# Leave coredumps in the first cache dir
coredump_dir /var/cache/squid
# Add any of your own refresh_pattern entries above these.
# General Rules
#cache images

refresh_pattern -i \.(gif|png|ico|jpg|jpeg|jp2|webp)$ 100000 90% 200000 override-expire reload-into-ims ignore-no-store ignore-private
refresh_pattern -i \.(jpx|j2k|j2c|fpx|bmp|tif|tiff|bif)$ 100000 90% 200000 override-expire reload-into-ims ignore-no-store ignore-private
refresh_pattern -i \.(pcd|pict|rif|exif|hdr|bpg|img|jif|jfif|lsr)$ 100000 90% 200000 override-expire reload-into-ims ignore-no-store ignore-private
refresh_pattern -i \.(woff|woff2|eps|ttf|otf|svg|svgi|svgz|ps|ps1|acsm|eot)$ 100000 90% 200000 override-expire reload-into-ims ignore-no-store ignore-private

#cache content
refresh_pattern -i \.(swf|js|ejs)$ 100000 90% 200000 override-expire reload-into-ims ignore-no-store ignore-private
refresh_pattern -i \.(wav|css|class|dat|zsci|ver|advcs)$ 100000 90% 200000 override-expire reload-into-ims ignore-no-store ignore-private

#cache videos
refresh_pattern -i \.(mpa|m2a|mpe|avi|mov|mpg|mpeg|mpg3|mpg4|mpg5)$ 0 90% 200000 reload-into-ims ignore-no-store ignore-private
refresh_pattern -i \.(m1s|mp2v|m2v|m2s|m2ts|wmx|rm|rmvb|3pg|3gpp|omg|ogm|asf|war)$ 0 90% 200000 reload-into-ims ignore-no-store ignore-private
refresh_pattern -i \.(asx|mp2|mp3|mp4|mp5|wmv|flv|mts|f4v|f4|pls|midi|mid)$ 0 90% 200000 reload-into-ims ignore-no-store ignore-private
refresh_pattern -i \.(htm|html)$ 9440 90% 200000 reload-into-ims ignore-no-store ignore-private
refresh_pattern -i \.(xml|flow|asp|aspx)$ 0 90% 200000
refresh_pattern -i \.(json)$ 0 90% 200000
refresh_pattern -i (/cgi-bin/|\?) 0 90% 200000

#live video cache rules
refresh_pattern -i \.(m3u8|ts)$ 0 90% 200000

#cache specific sites
refresh_pattern -i ^http:\/\/liveupdate.symantecliveupdate.com.*\.(zip)$ 0 0% 0
refresh_pattern -i ^http:\/\/premium.avira-update.com.*\.(gz)$ 0 0% 0
refresh_pattern -i microsoft.com/.*\.(cab|exe|msi|msu|msf|asf|wma|dat|zip)$ 4320 80% 43200
refresh_pattern -i windowsupdate.com/.*\.(cab|exe|msi|msu|msf|asf|wma|wmv|dat|zip)$ 4320 80% 43200
refresh_pattern -i windows.com/.*\.(cab|exe|msi|msu|msf|asf|wmv|wma|dat|zip)$ 4320 80% 43200
refresh_pattern -i apple.com/.*\.(cab|exe|msi|msu|msf|asf|wmv|wma|dat|zip|dist)$ 0 80% 4320

#cache binaries
refresh_pattern -i \.(app|bin|deb|rpm|drpm|exe|zip|zipx|tar|tgz|tbz2|tlz|iso|arj|cfs|dar|jar)$ 100000 90% 200000 override-expire reload-into-ims ignore-no-store ignore-no-cache ignore-private
refresh_pattern -i \.(bz|bz2|ipa|ram|rar|uxx|gz|msi|dll|lz|lzma|7z|s7z|Z|z|zz|sz)$ 100000 90% 200000 override-expire reload-into-ims ignore-no-store ignore-private
refresh_pattern -i \.(exe|msi)$ 0 90% 200000
refresh_pattern -i \.(cab|psf|vidt|apk|wtex|hz|ova|ovf)$ 100000 90% 200000 override-expire reload-into-ims ignore-no-store ignore-private

#cache microsoft and adobe and other documents
refresh_pattern -i \.(ppt|pptx|doc|docx|docm|docb|dot|pdf|pub|ps)$ 100000 90% 200000 override-expire reload-into-ims ignore-no-store ignore-no-cache ignore-private
refresh_pattern -i \.(xls|xlsx|xlt|xlm|xlsm|xltm|xlw|csv|txt)$ 100000 90% 200000 override-expire reload-into-ims ignore-no-store ignore-no-cache ignore-private
#refresh_pattern -i ^ftp: 100000 90% 200000
#refresh_pattern -i ^gopher: 1440 0% 1440

#allow caching of other things based on cache control headers with some exceptions
refresh_pattern -i . 0 90% 200000 reload-into-ims
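As a quick sanity check that these patterns match what you expect, the extension regexes can be exercised outside Squid. A small Python sketch (the URLs are made up for illustration) applying the image pattern and the cgi-bin/query-string pattern from above, case-insensitively as Squid's `-i` flag does:

```python
import re

# Two of the refresh_pattern regexes from the config above
image_pat   = re.compile(r"\.(gif|png|ico|jpg|jpeg|jp2|webp)$", re.IGNORECASE)
dynamic_pat = re.compile(r"(/cgi-bin/|\?)", re.IGNORECASE)

urls = [
    "http://example.com/logo.PNG",      # image rule: cached aggressively
    "http://example.com/cgi-bin/view",  # dynamic rule: min age 0
    "http://example.com/page?id=1",     # dynamic rule: query string
    "http://example.com/index.html",    # neither; falls through to later rules
]

for url in urls:
    if image_pat.search(url):
        print(url, "-> image rule")
    elif dynamic_pat.search(url):
        print(url, "-> dynamic rule")
    else:
        print(url, "-> falls through")
```

Note that Squid evaluates `refresh_pattern` lines in order and uses the first match, so the catch-all `.` pattern must stay last.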
reload_into_ims on
log_icp_queries off
icp_port 0
htcp_port 0
acl snmppublic snmp_community public
snmp_port 3401
snmp_incoming_address 192.168.2.2
snmp_incoming_address 127.0.0.1
snmp_access allow snmppublic all
minimum_object_size 0 KB
cache_effective_user squid
#header_replace User-Agent Mozilla/5.0 (X11; U;) Gecko/20080221 Firefox/2.0.0.9
#vary_ignore_expire on
cache_swap_low 90
cache_swap_high 95
visible_hostname shadow
unique_hostname shadow-DHS
shutdown_lifetime 0 second
request_entities on
half_closed_clients off
max_filedesc 65535
connect_timeout 5 seconds
connect_retries 2
cache_effective_group squid
buffered_logs on
#access_log stdio:/var/log/squid/access.log squid
access_log daemon:/var/log/squid/access.log
#access_log none
netdb_filename none
client_db off
dns_nameservers 127.0.0.1 192.168.2.2 192.168.1.96 192.168.1.89 192.168.1.92
ipcache_size 4000
ipcache_low 90
ipcache_high 95
dns_v4_first on
negative_ttl 5 minutes
positive_dns_ttl 30 days
negative_dns_ttl 5 minutes
dns_retransmit_interval 2 seconds
check_hostnames off
forwarded_for delete
via off
httpd_suppress_version_string on
# mem and cache size
#collapsed_forwarding on
cache_mem 6 GB
memory_cache_mode disk
maximum_object_size 3 GB
maximum_object_size_in_memory 3 GB
#store_objects_per_bucket 40
digest_generation off
#digest_bits_per_entry 8
pinger_enable off
memory_pools on
max_stale 4 months

#END CONFIG
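As a back-of-the-envelope check on the disk layout: `cache_dir` sizes are in megabytes, so the twelve 40000 MB directories commit roughly 480 GB of cache in total, and `cache_swap_low`/`cache_swap_high` (90/95) mean Squid begins evicting objects once a directory passes 90% full. A quick illustrative calculation:

```python
# Each "cache_dir ufs <path> 40000 1024 256" line allocates 40000 MB.
dirs_per_drive = 6
size_mb = 40000
swap_low, swap_high = 90, 95  # percent thresholds from the config

per_drive_mb = dirs_per_drive * size_mb
total_mb = 2 * per_drive_mb

print(f"Per SSD: {per_drive_mb} MB (~{per_drive_mb / 1024:.0f} GiB)")
print(f"Total:   {total_mb} MB (~{total_mb / 1024:.0f} GiB)")
print(f"Eviction runs between {swap_low}% and {swap_high}% of each dir")
```

One caveat worth flagging: six 40000 MB directories come to about 234 GiB per drive, which is more than the 200 GB SSDs listed in the system requirements, so you may need to trim the per-directory size to fit your actual disks.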



Configuring Logstash and Kibana to receive and Dashboard Sonicwall Logs

Note: If you want to quickly download my Logstash config and Kibana dashboards, see the end of this post.

Locate and Update your Logstash.conf File
First, you must update your logstash configuration file, generally located in /etc/logstash or /etc/logstash/conf.d/ and named logstash.conf

Add a logstash input
In logstash.conf, first add an input that allows Logstash to receive the syslog feed from your Sonicwall appliance on a designated "listening" port; for my configuration, I set this to port 5515. My Logstash instance also runs Suricata SELKS, so you will see a file input for that ahead of the Sonicwall syslog input below.

input {
  file {
    path => ["/var/log/suricata/eve.json"]
    #sincedb_path => ["/var/lib/logstash/"]
    sincedb_path => ["/var/cache/logstash/sincedbs/since.db"]
    codec => json
    type => "SELKS"
  }
  syslog {
    type => "Sonicwall"
    port => 5515
  }
}
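Once Logstash is restarted with this input, you can verify the listener is receiving data before involving the Sonicwall at all. A minimal sketch, assuming Logstash's syslog input is listening on UDP port 5515 on the same host (the message content below is invented, merely mimicking the key=value style Sonicwall emits):

```python
import socket

# An RFC 3164-style syslog line (PRI 134 = facility local0, severity info).
msg = ('<134>Jan  1 12:00:00 testhost id=firewall sn=TEST '
       'src=10.0.0.1:1234 dst=8.8.8.8:53 msg="test event"')

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(msg.encode("utf-8"), ("127.0.0.1", 5515))
sock.close()
print("sent", len(msg), "bytes")
```

If everything is wired up, the event should appear in your access index a few seconds later; if not, check that the port matches and that nothing else is bound to 5515.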

Insert a logstash Filter
The next step is to insert a new filter for parsing your Sonicwall logs, so that Logstash knows how to create fields automatically and you can filter on specific syslog fields. Below is the text I added to the configuration file. Important: if you have pre-existing filters, make sure your opening and closing curly braces still match up, and incorporate the text below into the existing filter block.

if [type] == "Sonicwall" {
  kv {
    exclude_keys => [ "c", "id", "m", "n", "pri" ]
  }
  grok {
    match => [ "src", "%{IP:srcip}:%{DATA:srcinfo}" ]
  }
  grok {
    match => [ "dst", "%{IP:dstip}:%{DATA:dstinfo}" ]
  }
  mutate {
    remove_field => [ "srcinfo", "dstinfo" ]
  }
  geoip {
    add_tag => [ "geoip" ]
    source => "srcip"
    database => "/opt/logstash/vendor/geoip/GeoLiteCity.dat"
  }
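To see what the kv and grok stages do to a real message, you can emulate them offline. A rough Python sketch (the sample log line is invented, and this only approximates Logstash's kv and grok semantics rather than reproducing them):

```python
import re

sample = ('id=firewall sn=TEST c=262144 pri=6 '
          'src=10.0.0.1:1234:X1 dst=8.8.8.8:53:X2 msg="Connection opened"')

# kv stage: split key=value pairs, dropping the excluded keys
excluded = {"c", "id", "m", "n", "pri"}
pairs = re.findall(r'(\w+)=("[^"]*"|\S+)', sample)
event = {k: v.strip('"') for k, v in pairs if k not in excluded}

# grok stage: pull the bare IP out of src/dst (format is IP:port:interface)
ip_re = re.compile(r"(\d{1,3}(?:\.\d{1,3}){3}):(.*)")
for field, ip_field in (("src", "srcip"), ("dst", "dstip")):
    m = ip_re.match(event.get(field, ""))
    if m:
        event[ip_field] = m.group(1)

print(event["srcip"], event["dstip"])  # 10.0.0.1 8.8.8.8
```

The stripped-out `srcip`/`dstip` fields are what the geoip filter and the Kibana dashboards key on, which is why the grok stage matters even though the kv stage already produced `src` and `dst`.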

Configure the Parsed Output Location
Finally, you need to configure the output section of the config file, which tells Logstash where to send the parsed events. Below is the configuration; in this case Logstash sends to Elasticsearch on localhost because both run on the same box.

}

output {
  elasticsearch {
    host => "127.0.0.1"
    protocol => transport
  }
}

Configure the Sonicwall
Next you will need to configure your Sonicwall to send syslog messages to the Logstash server. Log in to your Sonicwall, go to "Log->Syslog", and add a server x.x.x.x with port 5515.

Next you’ll need to turn on Sonicwall Name Resolution for Logs
Go to Log->Name Resolution and make sure to set up a DNS server to resolve names. Otherwise, the src and dst fields in the Kibana dashboards will not have names and will show double IP address entries.

Finally, you’ll need to configure dashboards in Kibana. To make all of this easier, I’ve included all my files below that can be easily downloaded.

Logstash Configuration *Use Right-Click and Save As*

Kibana Dashboards
(To import, go into Kibana, select "Load", then go to "Advanced" and click "Load File".)

  • Sonic-Alerts (filters the top alert messages from the Sonicwall syslog)
  • Sonic Top (filters the top source and destination hosts and events associated with your Sonicwall)


My latest Gartner research: Competitive Landscape: Endpoint Detection and Response Tools

5 January 2017  |  …EPP providers starting to offer EDR features. At least 50% of endpoint detection and response providers will incorporate enhanced analytics of user and attacker… the next 12 to 24 months, up from less than 15% today. The endpoint detection and response (EDR…

Gartner clients can access this research by clicking here.



My latest Gartner research: Vendor Rating: Huawei

Huawei has established itself as a solid provider of ICT infrastructure technologies across consumer, carrier and enterprise markets worldwide. CIOs and IT leaders should utilize this research to familiarize themselves with Huawei’s “all-cloud” strategy and ecosystem development….

Gartner subscribers can access this research by clicking here.



My latest Gartner Research: SWOT: Check Point Software Technologies, Network Security, Worldwide

Check Point remains a leading security vendor, with a strong and broad portfolio that has improved with the pace of innovation. However, its product leaders need better marketing and refined renewal pricing strategies to sustain its growth and leadership in the firewall market….

Gartner subscribers can access this research by clicking here.



My Latest Gartner Research: Enterprise Firewall and Unified Threat Management Products Impact End-User Buying Behavior

This document helps product developers and managers of security providers prepare enterprise firewall and unified threat management products for the impact of digital business, mobile and the Internet of Things on end-user buying behavior….

Gartner Subscribers can access this research by clicking here.



My latest Gartner Research: Competitive Landscape: Distributed Deception Platforms, 2016

4 August 2016  |  Distributed deception platforms are now a viable option for enhancing detection within enterprise security programs. Product marketing managers must understand the competitive positioning of their products and crucial market dynamics in order to compete effectively in the DDP market.

Gartner clients can access this research by clicking here.

