
Mrtg Config File for Squid Proxy

Below is my MRTG configuration file for monitoring Squid.

 

######################################################################
# Multi Router Traffic Grapher -- Squid Configuration File
######################################################################
# This file is for use with mrtg-2.0
#
# Customized for monitoring Squid Cache
# by Chris Miles http://chrismiles.info/
# http://chrismiles.info/unix/mrtg/
# To use:
# - change WorkDir and LoadMIBs settings
# - change all "shadow" occurrences to your squid host
# - change all "chris" occurrences to your name/address
# - change the community strings if required (eg: "public")
# - change the snmp port if required (eg: 3401)
#
# Note:
#
# * Keywords must start at the beginning of a line.
#
# * Lines which follow a keyword line and start
# with a blank are appended to the keyword line
#
# * Empty Lines are ignored
#
# * Lines starting with a # sign are comments.
# ####################
# Global Configuration
# ####################

# Where should the logfiles, and webpages be created?
WorkDir: /srv/www/htdocs/squid-mrtg

# --------------------------
# Optional Global Parameters
# --------------------------

# How many seconds apart should the browser (Netscape) be
# instructed to reload the page? If this is not defined, the
# default is 300 seconds (5 minutes).

# Refresh: 600

# How often do you call mrtg? The default is 5 minutes. If
# you call it less often, you should specify it here. This
# does two things:

# a) the generated HTML page contains the right
# information about the calling interval ...

# b) a META header in the generated HTML page will instruct
# caches about the time to live of this page ...

# In this example we tell mrtg that we will be calling it
# every 10 minutes. If you are calling mrtg every 5
# minutes, you can leave this line commented out.

# Interval: 10
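# For context: mrtg is normally run from cron at the interval declared
# above. A typical crontab entry for a 5-minute schedule might look like
# the sketch below (the paths to the mrtg binary and to this config file
# are illustrative; adjust them for your system):

```
*/5 * * * * /usr/bin/mrtg /etc/mrtg/mrtg.cfg
```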

# With this switch mrtg will generate .meta files for CERN
# and Apache servers which contain Expiration tags for the
# html and gif files. The *.meta files will be created in
# the same directory as the other files, so you might have
# to set "MetaDir ." in your srm.conf file for this to work
#
# NOTE: If you are running Apache-1.2 you can use the mod_expire
# to achieve the same effect … see the file htaccess-dist

WriteExpires: Yes

# If you want to keep the mrtg icons in some place other than the
# working directory, use the IconDir variable to give its URL.

# IconDir: /mrtgicons/
IconDir: /images/

LoadMIBs: /usr/share/squid/mib.txt

# #################################################
# Configuration for each Target you want to monitor
# #################################################

# The configuration keyword "Target" must be followed by a
# unique name. This will also be the name used for the
# webpages, logfiles and gifs created for that target.

# Note that the "Target" sections can be auto-generated with
# the cfgmaker tool. Check readme.html for instructions.
# ========

##
## Target ----------------------------------------
##

# With the "Target" keyword you tell mrtg what it should
# monitor. The "Target" keyword takes arguments in a wide
# range of formats:

# * The most basic format is "port:community@router"
# This will generate a traffic graph for port 'port'
# of the router 'router' and it will use the community
# 'community' for the snmp query.

# Target[ezwf]: 2:public@wellfleet-fddi.ethz.ch

# * Sometimes you are sitting on the wrong side of the
# link, and you would like to have mrtg report incoming
# traffic as outgoing and vice versa. This can be achieved
# by adding a '-' sign in front of the "Target"
# description. It flips the in and outgoing traffic rates.

# Target[ezci]: -1:public@ezci-ether.ethz.ch

# * You can also explicitly define the OIDs to query by using the
# following syntax: 'OID_1&OID_2:community@router'
# The following example will retrieve error input and output
# octets/sec on interface 1. MRTG needs to graph two values, so
# you need to specify two OIDs, such as temperature and humidity
# or error input and error output.

# Target[ezwf]: 1.3.6.1.2.1.2.2.1.14.1&1.3.6.1.2.1.2.2.1.20.1:public@myrouter

# * mrtg knows a number of symbolic SNMP variable
# names. See the file mibhelp.txt for a list of known
# names. One example is the ifInErrors and ifOutErrors
# pair. This means you can specify the above as:

# Target[ezwf]: ifInErrors.1&ifOutErrors.1:public@myrouter
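To make the pieces of the Target syntax concrete, here is a small illustrative Python parser (not part of MRTG; the helper name `parse_target` is my own) that splits an 'OID_1&OID_2:community@host[:port]' spec like the ones used later in this file:

```python
def parse_target(spec):
    """Split an MRTG 'OID_1&OID_2:community@host[:port]' target spec."""
    oids_part, rest = spec.split(":", 1)    # OIDs vs community@host[:port]
    oid_in, oid_out = oids_part.split("&")  # the two values to graph
    community, host = rest.split("@", 1)    # SNMP community vs host
    port = 161                              # default SNMP port
    if ":" in host:
        host, port = host.split(":", 1)
        port = int(port)
    return {"in": oid_in, "out": oid_out,
            "community": community, "host": host, "port": port}

print(parse_target("cacheHttpHits&cacheHttpHits:public@shadow:3401"))
```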

# * if you want to monitor something which does not provide
# data via snmp you can use some external program to do
# the data gathering.

#
# The external command must return 4 lines of output:
# Line 1 : current state of the 'incoming bytes counter'
# Line 2 : current state of the 'outgoing bytes counter'
# Line 3 : string, telling the uptime of the target.
# Line 4 : string, telling the name of the target.

# Depending on the type of data your script returns you
# might want to use the 'gauge' or 'absolute' arguments
# for the "Options" keyword.

# Target[ezwf]: `/usr/local/bin/df2mrtg /dev/dsk/c0t2d0s0`
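The four-line contract above is easy to satisfy from any language. Below is a minimal sketch of an external gatherer in Python (my own example, not the df2mrtg script referenced above); it reports used and free bytes of a filesystem, so the matching target would want 'gauge' in its Options:

```python
#!/usr/bin/env python3
# Minimal MRTG external data gatherer: prints the four lines MRTG expects.
import shutil
import socket

def mrtg_report(path="/"):
    usage = shutil.disk_usage(path)   # total, used, free in bytes
    print(usage.used)                 # line 1: 'incoming' value
    print(usage.free)                 # line 2: 'outgoing' value
    print("unknown")                  # line 3: uptime string of the target
    print(socket.gethostname())       # line 4: name of the target

if __name__ == "__main__":
    mrtg_report()
```

It would then be wired in with a backtick target such as Target[disk]: `/usr/local/bin/disk2mrtg.py` (a hypothetical path).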

# * You can also combine several statements in a mathematical
# expression. This could be used to aggregate both B channels
# in an ISDN connection, or multiple T1s that are aggregated
# into a single channel for greater bandwidth.
# Note the whitespace around the target definitions.

# Target[ezwf]: 2:public@wellfleetA + 1:public@wellfleetA
# * 4:public@ciscoF

##
## RouterUptime ---------------------------------------
##
#
# In cases where you calculate the used bandwidth from
# several interfaces you normally don't get the router uptime
# and router name displayed on the web page.
# If these interfaces are on the same router and the uptime and
# name should be displayed nevertheless, you have to specify
# its community and address again with the RouterUptime keyword.

# Target[kacisco]: 1:public@194.64.66.250 + 2:public@194.64.66.250
# RouterUptime[kacisco]: public@194.64.66.250

##
## MaxBytes -------------------------------------------
##

# How many bytes per second can this port carry? Since most
# links are rated in bits per second, you need to divide
# their maximum bandwidth (in bits) by eight (8) in order to get
# bytes per second. This is very important to make your
# unscaled graphs display realistic information.
# T1 = 193000, 56K = 7000, Ethernet = 1250000. The "MaxBytes"
# value will be used by mrtg to decide whether it got a
# valid response from the router. If a number higher than
# "MaxBytes" is returned, it is ignored. Also read the section
# on AbsMax for further info.

# MaxBytes[ezwf]: 1250000
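The divide-by-eight arithmetic behind those sample values is worth double-checking when you add a new link type; a quick sanity check in Python (illustrative, not part of MRTG):

```python
# MaxBytes is in bytes/second, but link speeds are quoted in bits/second.
def max_bytes(bits_per_second):
    return bits_per_second // 8

print(max_bytes(1_544_000))   # T1 (1.544 Mbit/s)  -> 193000
print(max_bytes(56_000))      # 56K modem          -> 7000
print(max_bytes(10_000_000))  # 10 Mbit Ethernet   -> 1250000
```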

##
## Title -----------------------------------------------
##

# Title for the HTML page which gets generated for the graph.

# Title[ezwf]: Traffic Analysis for ETZ C 95.1

##
## PageTop ---------------------------------------------
##

# Things to add to the top of the generated HTML page. Note
# that you can have several lines of text as long as the
# first column is empty.
# Note that the continuation lines will all end up on the same
# line in the html page. If you want linebreaks in the generated
# html use the '\n' sequence.

# PageTop[ezwf]: <H1>Traffic Analysis for ETZ C95.1</H1>
# Our Campus Backbone runs over an FDDI line\n
# with a maximum transfer rate of 12.5 Mega Bytes per
# Second.

##
## PageFoot ---------------------------------------------
##

# Things to add at the very end of the mrtg generated html page

# PageFoot[ezwf]: <HR size=2 noshade>This page is managed by Blubber

# --------------------------------------------------
# Optional Target Configuration Tags
# --------------------------------------------------

##
## AddHead -----------------------------------------
##

# Use this tag like the PageTop header, but its contents
# will be added between </TITLE> and </HEAD>.

# AddHead[ezwf]: <!-- Just a comment for fun -->

##
## AbsMax ------------------------------------------
##

# If you are monitoring a link which can handle more traffic
# than the MaxBytes value (eg, a line which uses compression
# or some frame relay link), you can use the AbsMax keyword
# to give the absolute maximum value ever to be reached. We
# need to know this in order to sort out unrealistic values
# returned by the routers. If you do not set AbsMax, rateup
# will ignore values higher than MaxBytes.

# AbsMax[ezwf]: 2500000

##
## Unscaled ------------------------------------------
##

# By default each graph is scaled vertically to make the
# actual data visible even when it is much lower than
# MaxBytes. With the "Unscaled" variable you can suppress
# this. Its argument is a string containing one letter
# for each graph you don't want to be scaled: d=day w=week
# m=month y=year. In the example I suppress scaling for the
# yearly and the monthly graph.

# Unscaled[ezwf]: ym

##
## WithPeak ------------------------------------------
##

# By default the graphs only contain the average transfer
# rates for incoming and outgoing traffic. The
# following option instructs mrtg to display the peak
# 5 minute transfer rates in the [w]eekly, [m]onthly and
# [y]early graph. In the example we define the monthly
# and the yearly graph to contain peak as well as average
# values.

# WithPeak[ezwf]: ym

##
## Suppress ------------------------------------------
##

# By default mrtg produces 4 graphs. With this option you
# can suppress the generation of selected graphs. The format
# is analogous to the above option. In this example we suppress
# the yearly graph as it is quite empty in the beginning.

# Suppress[ezwf]: y

##
## Directory
##

# By default, mrtg puts all the files that it generates for each
# router (the GIFs, the HTML page, the log file, etc.) in WorkDir.
# If the "Directory" option is specified, the files are instead put
# into a directory under WorkDir. (For example, given the options in
# this mrtg.cfg-dist file, the "Directory" option below would cause all
# the ezwf files to be put into /usr/tardis/pub/www/stats/mrtg/ezwf .)
#
# The directory must already exist; mrtg will not create it.

# Directory[ezwf]: ezwf

##
## XSize and YSize ------------------------------------------
##

# By default mrtg's graphs are 100 by 400 pixels (plus
# some more for the labels). In the example we get almost
# square graphs ...
# Note: XSize must be between 20 and 600
# YSize must be larger than 20

# XSize[ezwf]: 300
# YSize[ezwf]: 300

##
## XZoom YZoom -------------------------------------------------
##

# If you want your graphs to have larger pixels, you can
# "Zoom" them.

#XZoom[ezwf]: 2.0
#YZoom[ezwf]: 2.0

##
## XScale YScale -------------------------------------------------
##

# If you want your graphs to be actually scaled use XScale
# and YScale. (Beware: while this works, the results look ugly,
# to be frank, so if someone wants to fix this, patches are
# welcome.)

# XScale[ezwf]: 1.5
# YScale[ezwf]: 1.5
##
## Step -----------------------------------------------------------
##

# Change the default step width from 5 * 60 seconds to
# something else. I have not tested this well ...

# Step[ezwf]: 60

##
## Options ------------------------------------------
##

# The "Options" keyword allows you to set some boolean
# switches:
#
# growright - The graph grows to the left by default.
#
# bits - All the numbers printed are in bits instead
# of bytes ... looks much more impressive :-)
#
# noinfo - Suppress the information about uptime and
# device name in the generated webpage.
#
# absolute - This is for data sources which reset their
# value when they are read. This means that
# rateup does not have to build the difference between
# this and the last value read from the data
# source. Useful for external data gatherers.
#
# gauge - Treat the values gathered from the target as absolute
# and not as counters. This would be useful to
# monitor things like diskspace, load and so
# on ...
#
# nopercent - Don't print usage percentages
#
# integer - Print only integers in the summary ...
#

# Options[ezwf]: growright, bits

##
## Colours ------------------------------------------
##

# The "Colours" tag allows you to override the default colour
# scheme. Note: all 4 of the required colours must be
# specified here. The colour name ('Colourx' below) is the
# legend name displayed, while the RGB value is the real
# colour used for the display, both on the graph and in the
# html doc.

# Format is: Colour1#RRGGBB,Colour2#RRGGBB,Colour3#RRGGBB,Colour4#RRGGBB
# where: Colour1 = Input on default graph
# Colour2 = Output on default graph
# Colour3 = Max input
# Colour4 = Max output
# RRGGBB = 2 digit hex values for Red, Green and Blue

# Colours[ezwf]: GREEN#00eb0c,BLUE#1000ff,DARK GREEN#006600,VIOLET#ff00ff
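Each RRGGBB value is just two hex digits per channel; a tiny illustrative decoder (my own, for checking colour values):

```python
def rgb(rrggbb):
    """Decode an MRTG colour value like '00eb0c' into an (R, G, B) tuple."""
    return tuple(int(rrggbb[i:i + 2], 16) for i in range(0, 6, 2))

print(rgb("00eb0c"))  # the GREEN entry above  -> (0, 235, 12)
print(rgb("ff00ff"))  # the VIOLET entry above -> (255, 0, 255)
```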

##
## Background ------------------------------------------
##

# With the "Background" tag you can configure the background
# colour of the generated HTML page.

# Background[ezwf]: #a0a0a0

##
## YLegend, ShortLegend, Legend[1234] ------------------
##

# The following keywords allow you to override the text
# displayed for the various legends of the graph and in the
# HTML document
#
# * YLegend : The Y-Axis of the graph
# * ShortLegend: The 'b/s' string used for Max, Average and Current
# * Legend[1234IO]: The strings for the colour legend
#
#YLegend[ezwf]: Bits per Second
#ShortLegend[ezwf]: b/s
#Legend1[ezwf]: Incoming Traffic in Bits per Second
#Legend2[ezwf]: Outgoing Traffic in Bits per Second
#Legend3[ezwf]: Maximal 5 Minute Incoming Traffic
#Legend4[ezwf]: Maximal 5 Minute Outgoing Traffic
#LegendI[ezwf]: &nbsp;In:
#LegendO[ezwf]: &nbsp;Out:
# Note: if LegendI or LegendO is set to an empty string, as in
# LegendO[ezwf]:
# the corresponding line below the graph will not be printed at all.

# If you live in an international world, you might want to
# generate the graphs in different timezones. This is set in the
# TZ variable. Under certain operating systems like Solaris,
# this will provoke the localtime call to give the time in
# the selected timezone ...

# Timezone[ezwf]: Japan

# The Timezone is the standard Solaris timezone, ie Japan, Hongkong,
# GMT, GMT+1 etc etc.

# By default, mrtg (actually rateup) uses the strftime(3) '%W' option
# to format week numbers in the monthly graphs. The exact semantics
# of this format option vary between systems. If you find that the
# week numbers are wrong, and your system's strftime(3) routine
# supports it, you can try another format option. The POSIX '%V'
# option seems to correspond to a widely used week numbering
# convention. The week format character should be specified as a
# single letter; either W, V, or U.

# Weekformat[ezwf]: V
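The difference between the W, U and V conventions shows up near a year boundary; a quick check with Python's datetime module (isocalendar gives the ISO week number that '%V' reports on systems supporting it):

```python
import datetime

d = datetime.date(2016, 1, 3)  # a Sunday just after New Year
print(d.strftime("%W"))        # Monday-first weeks ('W'):  00
print(d.strftime("%U"))        # Sunday-first weeks ('U'):  01
print(d.isocalendar()[1])      # ISO week ('V' convention): 53
```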

# #############################
# Two very special Target names
# #############################

# To save yourself some typing you can define a target
# called '^'. The text of every Keyword you define for this
# target will be PREPENDED to the corresponding Keyword of
# all the targets defined below this line. The same goes for
# a Target called '$' but its options will be APPENDED.
#
# The example will make mrtg use a common header and a
# common contact person in all the pages generated from
# targets defined later in this file.
#
#PageTop[^]: <H1>Traffic Stats</H1><HR>
#PageTop[$]: Contact Peter Norton if you have any questions<HR>

PageFoot[^]: <i>Page managed by GeekGuy</i>

Target[cacheServerRequests]: cacheServerRequests&cacheServerRequests:public@shadow:3401
MaxBytes[cacheServerRequests]: 10000000
Title[cacheServerRequests]: Server Requests @ shadow
Options[cacheServerRequests]: growright, nopercent
PageTop[cacheServerRequests]: <h1>Server Requests @ shadow</h1>
YLegend[cacheServerRequests]: requests/sec
ShortLegend[cacheServerRequests]: req/s
LegendI[cacheServerRequests]: Requests&nbsp;
LegendO[cacheServerRequests]:
Legend1[cacheServerRequests]: Requests
Legend2[cacheServerRequests]:

Target[cacheServerErrors]: cacheServerErrors&cacheServerErrors:public@shadow:3401
MaxBytes[cacheServerErrors]: 10000000
Title[cacheServerErrors]: Server Errors @ shadow
Options[cacheServerErrors]: growright, nopercent
PageTop[cacheServerErrors]: <H1>Server Errors @ shadow</H1>
YLegend[cacheServerErrors]: errors/sec
ShortLegend[cacheServerErrors]: err/s
LegendI[cacheServerErrors]: Errors&nbsp;
LegendO[cacheServerErrors]:
Legend1[cacheServerErrors]: Errors
Legend2[cacheServerErrors]:

Target[cacheServerInOutKb]: cacheServerInKb&cacheServerOutKb:public@shadow:3401 * 1024
MaxBytes[cacheServerInOutKb]: 1000000000
Title[cacheServerInOutKb]: Server In/Out Traffic @ shadow
Options[cacheServerInOutKb]: growright, nopercent
PageTop[cacheServerInOutKb]: <H1>Server In/Out Traffic @ shadow</H1>
YLegend[cacheServerInOutKb]: Bytes/sec
ShortLegend[cacheServerInOutKb]: Bytes/s
LegendI[cacheServerInOutKb]: Server In&nbsp;
LegendO[cacheServerInOutKb]: Server Out&nbsp;
Legend1[cacheServerInOutKb]: Server In
Legend2[cacheServerInOutKb]: Server Out

Target[cacheClientHttpRequests]: cacheClientHttpRequests&cacheClientHttpRequests:public@shadow:3401
MaxBytes[cacheClientHttpRequests]: 10000000
Title[cacheClientHttpRequests]: Client Http Requests @ shadow
Options[cacheClientHttpRequests]: growright, nopercent
PageTop[cacheClientHttpRequests]: <H1>Client Http Requests @ shadow</H1>
YLegend[cacheClientHttpRequests]: requests/sec
ShortLegend[cacheClientHttpRequests]: req/s
LegendI[cacheClientHttpRequests]: Requests&nbsp;
LegendO[cacheClientHttpRequests]:
Legend1[cacheClientHttpRequests]: Requests
Legend2[cacheClientHttpRequests]:

Target[cacheHttpHits]: cacheHttpHits&cacheHttpHits:public@shadow:3401
MaxBytes[cacheHttpHits]: 10000000
Title[cacheHttpHits]: HTTP Hits @ shadow
Options[cacheHttpHits]: growright, nopercent
PageTop[cacheHttpHits]: <H1>HTTP Hits @ shadow</H1>
YLegend[cacheHttpHits]: hits/sec
ShortLegend[cacheHttpHits]: hits/s
LegendI[cacheHttpHits]: Hits&nbsp;
LegendO[cacheHttpHits]:
Legend1[cacheHttpHits]: Hits
Legend2[cacheHttpHits]:

Target[cacheHttpErrors]: cacheHttpErrors&cacheHttpErrors:public@shadow:3401
MaxBytes[cacheHttpErrors]: 10000000
Title[cacheHttpErrors]: HTTP Errors @ shadow
Options[cacheHttpErrors]: growright, nopercent
PageTop[cacheHttpErrors]: <H1>HTTP Errors @ shadow</H1>
YLegend[cacheHttpErrors]: errors/sec
ShortLegend[cacheHttpErrors]: err/s
LegendI[cacheHttpErrors]: Errors&nbsp;
LegendO[cacheHttpErrors]:
Legend1[cacheHttpErrors]: Errors
Legend2[cacheHttpErrors]:

Target[cacheIcpPktsSentRecv]: cacheIcpPktsSent&cacheIcpPktsRecv:public@shadow:3401
MaxBytes[cacheIcpPktsSentRecv]: 10000000
Title[cacheIcpPktsSentRecv]: ICP Packets Sent/Received
Options[cacheIcpPktsSentRecv]: growright, nopercent
PageTop[cacheIcpPktsSentRecv]: <H1>ICP Packets Sent/Received @ shadow</H1>
YLegend[cacheIcpPktsSentRecv]: packets/sec
ShortLegend[cacheIcpPktsSentRecv]: pkts/s
LegendI[cacheIcpPktsSentRecv]: Pkts Sent&nbsp;
LegendO[cacheIcpPktsSentRecv]: Pkts Received&nbsp;
Legend1[cacheIcpPktsSentRecv]: Pkts Sent
Legend2[cacheIcpPktsSentRecv]: Pkts Received

Target[cacheIcpKbSentRecv]: cacheIcpKbSent&cacheIcpKbRecv:public@shadow:3401 * 1024
MaxBytes[cacheIcpKbSentRecv]: 1000000000
Title[cacheIcpKbSentRecv]: ICP Bytes Sent/Received
Options[cacheIcpKbSentRecv]: growright, nopercent
PageTop[cacheIcpKbSentRecv]: <H1>ICP Bytes Sent/Received @ shadow</H1>
YLegend[cacheIcpKbSentRecv]: Bytes/sec
ShortLegend[cacheIcpKbSentRecv]: Bytes/s
LegendI[cacheIcpKbSentRecv]: Sent&nbsp;
LegendO[cacheIcpKbSentRecv]: Received&nbsp;
Legend1[cacheIcpKbSentRecv]: Sent
Legend2[cacheIcpKbSentRecv]: Received

Target[cacheHttpInOutKb]: cacheHttpInKb&cacheHttpOutKb:public@shadow:3401 * 1024
MaxBytes[cacheHttpInOutKb]: 1000000000
Title[cacheHttpInOutKb]: HTTP In/Out Traffic @ shadow
Options[cacheHttpInOutKb]: growright, nopercent
PageTop[cacheHttpInOutKb]: <H1>HTTP In/Out Traffic @ shadow</H1>
YLegend[cacheHttpInOutKb]: Bytes/second
ShortLegend[cacheHttpInOutKb]: Bytes/s
LegendI[cacheHttpInOutKb]: HTTP In&nbsp;
LegendO[cacheHttpInOutKb]: HTTP Out&nbsp;
Legend1[cacheHttpInOutKb]: HTTP In
Legend2[cacheHttpInOutKb]: HTTP Out

Target[cacheCurrentSwapSize]: cacheCurrentSwapSize&cacheCurrentSwapSize:public@shadow:3401
MaxBytes[cacheCurrentSwapSize]: 1000000000
Title[cacheCurrentSwapSize]: Current Swap Size @ shadow
Options[cacheCurrentSwapSize]: gauge, growright, nopercent
PageTop[cacheCurrentSwapSize]: <H1>Current Swap Size @ shadow</H1>
YLegend[cacheCurrentSwapSize]: swap size
ShortLegend[cacheCurrentSwapSize]: Bytes
LegendI[cacheCurrentSwapSize]: Swap Size&nbsp;
LegendO[cacheCurrentSwapSize]:
Legend1[cacheCurrentSwapSize]: Swap Size
Legend2[cacheCurrentSwapSize]:

Target[cacheNumObjCount]: cacheNumObjCount&cacheNumObjCount:public@shadow:3401
MaxBytes[cacheNumObjCount]: 10000000
Title[cacheNumObjCount]: Num Object Count @ shadow
Options[cacheNumObjCount]: gauge, growright, nopercent
PageTop[cacheNumObjCount]: <H1>Num Object Count @ shadow</H1>
YLegend[cacheNumObjCount]: # of objects
ShortLegend[cacheNumObjCount]: objects
LegendI[cacheNumObjCount]: Num Objects&nbsp;
LegendO[cacheNumObjCount]:
Legend1[cacheNumObjCount]: Num Objects
Legend2[cacheNumObjCount]:

Target[cacheCpuUsage]: cacheCpuUsage&cacheCpuUsage:public@shadow:3401
MaxBytes[cacheCpuUsage]: 100
AbsMax[cacheCpuUsage]: 100
Title[cacheCpuUsage]: CPU Usage @ shadow
Options[cacheCpuUsage]: absolute, gauge, noinfo, growright, nopercent
Unscaled[cacheCpuUsage]: dwmy
PageTop[cacheCpuUsage]: <H1>CPU Usage @ shadow</H1>
YLegend[cacheCpuUsage]: usage %
ShortLegend[cacheCpuUsage]: %
LegendI[cacheCpuUsage]: CPU Usage&nbsp;
LegendO[cacheCpuUsage]:
Legend1[cacheCpuUsage]: CPU Usage
Legend2[cacheCpuUsage]:

Target[cacheMemUsage]: cacheMemUsage&cacheMemUsage:public@shadow:3401 * 1024
MaxBytes[cacheMemUsage]: 2000000000
Title[cacheMemUsage]: Memory Usage
Options[cacheMemUsage]: gauge, growright, nopercent
PageTop[cacheMemUsage]: <H1>Total memory accounted for @ shadow</H1>
YLegend[cacheMemUsage]: Bytes
ShortLegend[cacheMemUsage]: Bytes
LegendI[cacheMemUsage]: Mem Usage&nbsp;
LegendO[cacheMemUsage]:
Legend1[cacheMemUsage]: Mem Usage
Legend2[cacheMemUsage]:

Target[cacheSysPageFaults]: cacheSysPageFaults&cacheSysPageFaults:public@shadow:3401
MaxBytes[cacheSysPageFaults]: 10000000
Title[cacheSysPageFaults]: Sys Page Faults @ shadow
Options[cacheSysPageFaults]: growright, nopercent
PageTop[cacheSysPageFaults]: <H1>Sys Page Faults @ shadow</H1>
YLegend[cacheSysPageFaults]: page faults/sec
ShortLegend[cacheSysPageFaults]: PF/s
LegendI[cacheSysPageFaults]: Page Faults&nbsp;
LegendO[cacheSysPageFaults]:
Legend1[cacheSysPageFaults]: Page Faults
Legend2[cacheSysPageFaults]:

Target[cacheSysVMsize]: cacheSysVMsize&cacheSysVMsize:public@shadow:3401 * 1024
MaxBytes[cacheSysVMsize]: 1000000000
Title[cacheSysVMsize]: Storage Mem Size @ shadow
Options[cacheSysVMsize]: gauge, growright, nopercent
PageTop[cacheSysVMsize]: <H1>Storage Mem Size @ shadow</H1>
YLegend[cacheSysVMsize]: mem size
ShortLegend[cacheSysVMsize]: Bytes
LegendI[cacheSysVMsize]: Mem Size&nbsp;
LegendO[cacheSysVMsize]:
Legend1[cacheSysVMsize]: Mem Size
Legend2[cacheSysVMsize]:

Target[cacheSysStorage]: cacheSysStorage&cacheSysStorage:public@shadow:3401
MaxBytes[cacheSysStorage]: 1000000000
Title[cacheSysStorage]: Storage Swap Size @ shadow
Options[cacheSysStorage]: gauge, growright, nopercent
PageTop[cacheSysStorage]: <H1>Storage Swap Size @ shadow</H1>
YLegend[cacheSysStorage]: swap size (KB)
ShortLegend[cacheSysStorage]: KBytes
LegendI[cacheSysStorage]: Swap Size&nbsp;
LegendO[cacheSysStorage]:
Legend1[cacheSysStorage]: Swap Size
Legend2[cacheSysStorage]:

Target[cacheSysNumReads]: cacheSysNumReads&cacheSysNumReads:public@shadow:3401
MaxBytes[cacheSysNumReads]: 10000000
Title[cacheSysNumReads]: HTTP I/O number of reads @ shadow
Options[cacheSysNumReads]: growright, nopercent
PageTop[cacheSysNumReads]: <H1>HTTP I/O number of reads @ shadow</H1>
YLegend[cacheSysNumReads]: reads/sec
ShortLegend[cacheSysNumReads]: reads/s
LegendI[cacheSysNumReads]: I/O&nbsp;
LegendO[cacheSysNumReads]:
Legend1[cacheSysNumReads]: I/O
Legend2[cacheSysNumReads]:

Target[cacheCpuTime]: cacheCpuTime&cacheCpuTime:public@shadow:3401
MaxBytes[cacheCpuTime]: 1000000000
Title[cacheCpuTime]: Cpu Time
Options[cacheCpuTime]: gauge, growright, nopercent
PageTop[cacheCpuTime]: <H1>Amount of cpu seconds consumed @ shadow</H1>
YLegend[cacheCpuTime]: cpu seconds
ShortLegend[cacheCpuTime]: cpu seconds
LegendI[cacheCpuTime]: CPU Time&nbsp;
LegendO[cacheCpuTime]:
Legend1[cacheCpuTime]: CPU Time
Legend2[cacheCpuTime]:

Target[cacheMaxResSize]: cacheMaxResSize&cacheMaxResSize:public@shadow:3401 * 1024
MaxBytes[cacheMaxResSize]: 1000000000
Title[cacheMaxResSize]: Max Resident Size
Options[cacheMaxResSize]: gauge, growright, nopercent
PageTop[cacheMaxResSize]: <H1>Maximum Resident Size @ shadow</H1>
YLegend[cacheMaxResSize]: Bytes
ShortLegend[cacheMaxResSize]: Bytes
LegendI[cacheMaxResSize]: Size&nbsp;
LegendO[cacheMaxResSize]:
Legend1[cacheMaxResSize]: Size
Legend2[cacheMaxResSize]:

Target[cacheCurrentLRUExpiration]: cacheCurrentLRUExpiration&cacheCurrentLRUExpiration:public@shadow:3401
MaxBytes[cacheCurrentLRUExpiration]: 1000000000
Title[cacheCurrentLRUExpiration]: LRU Expiration Age
Options[cacheCurrentLRUExpiration]: gauge, growright, nopercent
PageTop[cacheCurrentLRUExpiration]: <H1>Storage LRU Expiration Age @ shadow</H1>
YLegend[cacheCurrentLRUExpiration]: expir (days)
ShortLegend[cacheCurrentLRUExpiration]: days
LegendI[cacheCurrentLRUExpiration]: Age&nbsp;
LegendO[cacheCurrentLRUExpiration]:
Legend1[cacheCurrentLRUExpiration]: Age
Legend2[cacheCurrentLRUExpiration]:

Target[cacheCurrentUnlinkRequests]: cacheCurrentUnlinkRequests&cacheCurrentUnlinkRequests:public@shadow:3401
MaxBytes[cacheCurrentUnlinkRequests]: 1000000000
Title[cacheCurrentUnlinkRequests]: Unlinkd Requests
Options[cacheCurrentUnlinkRequests]: growright, nopercent
PageTop[cacheCurrentUnlinkRequests]: <H1>Requests given to unlinkd @ shadow</H1>
YLegend[cacheCurrentUnlinkRequests]: requests/sec
ShortLegend[cacheCurrentUnlinkRequests]: reqs/s
LegendI[cacheCurrentUnlinkRequests]: Unlinkd requests&nbsp;
LegendO[cacheCurrentUnlinkRequests]:
Legend1[cacheCurrentUnlinkRequests]: Unlinkd requests
Legend2[cacheCurrentUnlinkRequests]:

Target[cacheCurrentUnusedFileDescrCount]: cacheCurrentUnusedFileDescrCount&cacheCurrentUnusedFileDescrCount:public@shadow:3401
MaxBytes[cacheCurrentUnusedFileDescrCount]: 1000000000
Title[cacheCurrentUnusedFileDescrCount]: Available File Descriptors
Options[cacheCurrentUnusedFileDescrCount]: gauge, growright, nopercent
PageTop[cacheCurrentUnusedFileDescrCount]: <H1>Available number of file descriptors @ shadow</H1>
YLegend[cacheCurrentUnusedFileDescrCount]: # of FDs
ShortLegend[cacheCurrentUnusedFileDescrCount]: FDs
LegendI[cacheCurrentUnusedFileDescrCount]: File Descriptors&nbsp;
LegendO[cacheCurrentUnusedFileDescrCount]:
Legend1[cacheCurrentUnusedFileDescrCount]: File Descriptors
Legend2[cacheCurrentUnusedFileDescrCount]:

Target[cacheCurrentReservedFileDescrCount]: cacheCurrentReservedFileDescrCount&cacheCurrentReservedFileDescrCount:public@shadow:3401
MaxBytes[cacheCurrentReservedFileDescrCount]: 1000000000
Title[cacheCurrentReservedFileDescrCount]: Reserved File Descriptors
Options[cacheCurrentReservedFileDescrCount]: gauge, growright, nopercent
PageTop[cacheCurrentReservedFileDescrCount]: <H1>Reserved number of file descriptors @ shadow</H1>
YLegend[cacheCurrentReservedFileDescrCount]: # of FDs
ShortLegend[cacheCurrentReservedFileDescrCount]: FDs
LegendI[cacheCurrentReservedFileDescrCount]: File Descriptors&nbsp;
LegendO[cacheCurrentReservedFileDescrCount]:
Legend1[cacheCurrentReservedFileDescrCount]: File Descriptors
Legend2[cacheCurrentReservedFileDescrCount]:

Target[cacheClients]: cacheClients&cacheClients:public@shadow:3401
MaxBytes[cacheClients]: 1000000000
Title[cacheClients]: Number of Clients
Options[cacheClients]: gauge, growright, nopercent
PageTop[cacheClients]: <H1>Number of clients accessing cache @ shadow</H1>
YLegend[cacheClients]: clients/sec
ShortLegend[cacheClients]: clients/s
LegendI[cacheClients]: Num Clients&nbsp;
LegendO[cacheClients]:
Legend1[cacheClients]: Num Clients
Legend2[cacheClients]:

Target[cacheHttpAllSvcTime]: cacheHttpAllSvcTime.5&cacheHttpAllSvcTime.60:public@shadow:3401
MaxBytes[cacheHttpAllSvcTime]: 1000000000
Title[cacheHttpAllSvcTime]: HTTP All Service Time
Options[cacheHttpAllSvcTime]: gauge, growright, nopercent
PageTop[cacheHttpAllSvcTime]: <H1>HTTP all service time @ shadow</H1>
YLegend[cacheHttpAllSvcTime]: svc time (ms)
ShortLegend[cacheHttpAllSvcTime]: ms
LegendI[cacheHttpAllSvcTime]: Median Svc Time (5min)&nbsp;
LegendO[cacheHttpAllSvcTime]: Median Svc Time (60min)&nbsp;
Legend1[cacheHttpAllSvcTime]: Median Svc Time
Legend2[cacheHttpAllSvcTime]: Median Svc Time

Target[cacheHttpMissSvcTime]: cacheHttpMissSvcTime.5&cacheHttpMissSvcTime.60:public@shadow:3401
MaxBytes[cacheHttpMissSvcTime]: 1000000000
Title[cacheHttpMissSvcTime]: HTTP Miss Service Time
Options[cacheHttpMissSvcTime]: gauge, growright, nopercent
PageTop[cacheHttpMissSvcTime]: <H1>HTTP miss service time @ shadow</H1>
YLegend[cacheHttpMissSvcTime]: svc time (ms)
ShortLegend[cacheHttpMissSvcTime]: ms
LegendI[cacheHttpMissSvcTime]: Median Svc Time (5min)&nbsp;
LegendO[cacheHttpMissSvcTime]: Median Svc Time (60min)&nbsp;
Legend1[cacheHttpMissSvcTime]: Median Svc Time
Legend2[cacheHttpMissSvcTime]: Median Svc Time

Target[cacheHttpNmSvcTime]: cacheHttpNmSvcTime.5&cacheHttpNmSvcTime.60:public@shadow:3401
MaxBytes[cacheHttpNmSvcTime]: 1000000000
Title[cacheHttpNmSvcTime]: HTTP Near Miss Service Time
Options[cacheHttpNmSvcTime]: gauge, growright, nopercent
PageTop[cacheHttpNmSvcTime]: <H1>HTTP near miss service time @ shadow</H1>
YLegend[cacheHttpNmSvcTime]: svc time (ms)
ShortLegend[cacheHttpNmSvcTime]: ms
LegendI[cacheHttpNmSvcTime]: Median Svc Time (5min)&nbsp;
LegendO[cacheHttpNmSvcTime]: Median Svc Time (60min)&nbsp;
Legend1[cacheHttpNmSvcTime]: Median Svc Time
Legend2[cacheHttpNmSvcTime]: Median Svc Time

Target[cacheHttpHitSvcTime]: cacheHttpHitSvcTime.5&cacheHttpHitSvcTime.60:public@shadow:3401
MaxBytes[cacheHttpHitSvcTime]: 1000000000
Title[cacheHttpHitSvcTime]: HTTP Hit Service Time
Options[cacheHttpHitSvcTime]: gauge, growright, nopercent
PageTop[cacheHttpHitSvcTime]: <H1>HTTP hit service time @ shadow</H1>
YLegend[cacheHttpHitSvcTime]: svc time (ms)
ShortLegend[cacheHttpHitSvcTime]: ms
LegendI[cacheHttpHitSvcTime]: Median Svc Time (5min)&nbsp;
LegendO[cacheHttpHitSvcTime]: Median Svc Time (60min)&nbsp;
Legend1[cacheHttpHitSvcTime]: Median Svc Time
Legend2[cacheHttpHitSvcTime]: Median Svc Time

Target[cacheIcpQuerySvcTime]: cacheIcpQuerySvcTime.5&cacheIcpQuerySvcTime.60:public@shadow:3401
MaxBytes[cacheIcpQuerySvcTime]: 1000000000
Title[cacheIcpQuerySvcTime]: ICP Query Service Time
Options[cacheIcpQuerySvcTime]: gauge, growright, nopercent
PageTop[cacheIcpQuerySvcTime]: <H1>ICP query service time @ shadow</H1>
YLegend[cacheIcpQuerySvcTime]: svc time (ms)
ShortLegend[cacheIcpQuerySvcTime]: ms
LegendI[cacheIcpQuerySvcTime]: Median Svc Time (5min)&nbsp;
LegendO[cacheIcpQuerySvcTime]: Median Svc Time (60min)&nbsp;
Legend1[cacheIcpQuerySvcTime]: Median Svc Time
Legend2[cacheIcpQuerySvcTime]: Median Svc Time

Target[cacheIcpReplySvcTime]: cacheIcpReplySvcTime.5&cacheIcpReplySvcTime.60:public@shadow:3401
MaxBytes[cacheIcpReplySvcTime]: 1000000000
Title[cacheIcpReplySvcTime]: ICP Reply Service Time
Options[cacheIcpReplySvcTime]: gauge, growright, nopercent
PageTop[cacheIcpReplySvcTime]: <H1>ICP reply service time @ shadow</H1>
YLegend[cacheIcpReplySvcTime]: svc time (ms)
ShortLegend[cacheIcpReplySvcTime]: ms
LegendI[cacheIcpReplySvcTime]: Median Svc Time (5min)&nbsp;
LegendO[cacheIcpReplySvcTime]: Median Svc Time (60min)&nbsp;
Legend1[cacheIcpReplySvcTime]: Median Svc Time
Legend2[cacheIcpReplySvcTime]: Median Svc Time

Target[cacheDnsSvcTime]: cacheDnsSvcTime.5&cacheDnsSvcTime.60:public@shadow:3401
MaxBytes[cacheDnsSvcTime]: 1000000000
Title[cacheDnsSvcTime]: DNS Service Time
Options[cacheDnsSvcTime]: gauge, growright, nopercent
PageTop[cacheDnsSvcTime]: <H1>DNS service time @ shadow</H1>
YLegend[cacheDnsSvcTime]: svc time (ms)
ShortLegend[cacheDnsSvcTime]: ms
LegendI[cacheDnsSvcTime]: Median Svc Time (5min)&nbsp;
LegendO[cacheDnsSvcTime]: Median Svc Time (60min)&nbsp;
Legend1[cacheDnsSvcTime]: Median Svc Time
Legend2[cacheDnsSvcTime]: Median Svc Time

Target[cacheRequestHitRatio]: cacheRequestHitRatio.5&cacheRequestHitRatio.60:public@shadow:3401
MaxBytes[cacheRequestHitRatio]: 100
AbsMax[cacheRequestHitRatio]: 100
Title[cacheRequestHitRatio]: Request Hit Ratio @ shadow
Options[cacheRequestHitRatio]: absolute, gauge, noinfo, growright, nopercent
Unscaled[cacheRequestHitRatio]: dwmy
PageTop[cacheRequestHitRatio]: <H1>Request Hit Ratio @ shadow</H1>
YLegend[cacheRequestHitRatio]: %
ShortLegend[cacheRequestHitRatio]: %
LegendI[cacheRequestHitRatio]: Median Hit Ratio (5min)&nbsp;
LegendO[cacheRequestHitRatio]: Median Hit Ratio (60min)&nbsp;
Legend1[cacheRequestHitRatio]: Median Hit Ratio
Legend2[cacheRequestHitRatio]: Median Hit Ratio

Target[cacheRequestByteRatio]: cacheRequestByteRatio.5&cacheRequestByteRatio.60:public@shadow:3401
MaxBytes[cacheRequestByteRatio]: 100
AbsMax[cacheRequestByteRatio]: 100
Title[cacheRequestByteRatio]: Byte Hit Ratio @ shadow
Options[cacheRequestByteRatio]: absolute, gauge, noinfo, growright, nopercent
Unscaled[cacheRequestByteRatio]: dwmy
PageTop[cacheRequestByteRatio]: <H1>Byte Hit Ratio @ shadow</H1>
YLegend[cacheRequestByteRatio]: %
ShortLegend[cacheRequestByteRatio]: %
LegendI[cacheRequestByteRatio]: Median Hit Ratio (5min)&nbsp;
LegendO[cacheRequestByteRatio]: Median Hit Ratio (60min)&nbsp;
Legend1[cacheRequestByteRatio]: Median Hit Ratio
Legend2[cacheRequestByteRatio]: Median Hit Ratio

Target[cacheBlockingGetHostByAddr]: cacheBlockingGetHostByAddr&cacheBlockingGetHostByAddr:public@shadow:3401
MaxBytes[cacheBlockingGetHostByAddr]: 1000000000
Title[cacheBlockingGetHostByAddr]: Blocking gethostbyaddr
Options[cacheBlockingGetHostByAddr]: growright, nopercent
PageTop[cacheBlockingGetHostByAddr]: <H1>Blocking gethostbyaddr count @ shadow</H1>
YLegend[cacheBlockingGetHostByAddr]: blocks/sec
ShortLegend[cacheBlockingGetHostByAddr]: blocks/s
LegendI[cacheBlockingGetHostByAddr]: Blocking&nbsp;
LegendO[cacheBlockingGetHostByAddr]:
Legend1[cacheBlockingGetHostByAddr]: Blocking
Legend2[cacheBlockingGetHostByAddr]:
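
The comments at the top of this config note that MRTG's default polling interval is 5 minutes; the usual way to drive a file like this is from cron. A minimal sketch, assuming MRTG is installed at /usr/bin/mrtg and this file is saved as /etc/mrtg/squid.cfg (both paths are assumptions; adjust to your install). MRTG also expects LANG=C in its environment:

```
# /etc/cron.d/mrtg-squid -- poll the Squid SNMP agent every 5 minutes
*/5 * * * * root env LANG=C /usr/bin/mrtg /etc/mrtg/squid.cfg
```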





[ISN] Fueled by super botnets, DDoS attacks grow meaner and ever-more powerful

http://arstechnica.com/security/2013/04/fueled-by-super-botnets-ddos-attacks-grow-meaner-and-ever-more-powerful/
By Dan Goodin, Ars Technica, Apr 17 2013

Coordinated attacks used to knock websites offline grew meaner and more powerful in the past three months, with an eight-fold increase in the average amount of junk traffic used to take sites down, according to a company that helps customers weather the so-called distributed denial-of-service campaigns.

The average amount of bandwidth used in DDoS attacks mushroomed to an astounding 48.25 gigabits per second in the first quarter, with peaks as high as 130 Gbps, according to Hollywood, Florida-based Prolexic. During the same period last year, bandwidth in the average attack was 6.1 Gbps, and in the fourth quarter of last year it was 5.9 Gbps. The average duration of attacks also grew to 34.5 hours, compared with 28.5 hours last year and 32.2 hours during the fourth quarter of 2012. Earlier this month, Prolexic engineers saw an attack that exceeded 160 Gbps, and officials said they wouldn't be surprised if peaks break the 200 Gbps threshold by the end of June.

The spikes are brought on by new attack techniques that Ars first chronicled in October. Rather than using compromised PCs in homes and small offices to flood websites with torrents of traffic, attackers are relying on Web servers, which often have orders of magnitude more bandwidth at their disposal. As Ars reported last week, an ongoing attack on servers running the WordPress blogging application is actively seeking new recruits that can also be harnessed to form never-before-seen botnets to bring still more firepower. Also fueling the large-scale assaults are well-financed attackers who are increasingly able to coordinate with fellow crime organizations, Prolexic officials wrote in a quarterly global DDoS report published Wednesday. [...]

______________________________________________
Visit the InfoSec News Security Bookstore
Best Selling Security Books and More!
http://www.shopinfosecnews.org



[ISN] Secret footsoldier targeting banks reveals meaner, leaner face of DDoS

http://arstechnica.com/security/2013/01/secret-footsoldier-targeting-banks-reveals-meaner-leaner-face-of-ddos/
By Dan Goodin, Ars Technica, Jan 8 2013

Over the past two weeks, a new wave of Web attacks has battered major US banks, causing disruptions for many of their online services. Now, an Israel-based security firm has uncovered one of the secret footsoldiers behind the mass assault: a compromised website that was rigged to unleash a torrent of junk traffic on three of the world's biggest financial institutions.

The discovery by Web application security firm Incapsula helps explain the strategy behind the four-month-old campaign, which has been carried out under the flag of a group calling itself Izz ad-Din al-Qassam: rather than compromise and recruit thousands or tens of thousands of end-user PCs to carry out the distributed denial-of-service attacks, why not target a handful of Web servers that have orders of magnitude more bandwidth and processing power?

Over the weekend, Incapsula researchers noticed a general-interest website located in the UK that was exhibiting suspicious behavior. They quickly discovered a backdoor that had been planted on it, programmed to receive instructions from remote attackers. An analysis showed the website, which had just recently contracted with Incapsula, was being directed to send a flood of HTTP and UDP packets to major banks including PNC Financial Services, HSBC, and Fifth Third Bank.

"Since the commands were blocked by our service the attack was mitigated even before it started, so we can't be absolutely sure about the scope of damage this attack would cause," Incapsula Security Analyst Ronen Atias wrote in a blog post published Tuesday. "Still, it is safe to assume that it would be enough to seriously harm an average medium-sized website." [...]


Metasploit Exploit released for Trend Internet Security 2010

I was cruising the Exploit-DB.com site today just to see the latest exploits in the wild, and noticed right away that a new Metasploit exploit was released on October 1st for Trend Micro's Internet Security Pro 2010. It always chills me when I see exploits for security vendors. I guess I see them as being special or something. Maybe I shouldn't put them so much on a pedestal, since all programmers can make mistakes. However, the question is: should we expect security vendors to have better security than their customers or other software companies? I wonder if NSS Labs will come up with a framework for assessing or certifying security product vendors' development processes? Hmm... That'd be nice to see.

See the exploit below:

http://www.exploit-db.com/exploits/15168/



Vulnerability Management in the cloud

Vulnerability Management - Source (ISACA.org)

While there are different stories about what cloud computing "is", there is one specific direction virtualization is headed that could bring with it some additional problems for the security industry. One issue I want to focus on is vulnerability management and how it is implemented in a cloud environment. Many customers are faced with the need to scan their cloud, but are unable to do so.

Virtualization providers have been pushing their customers and hosting providers to adopt new infrastructure to automate the distribution of CPU processing time for their applications across multiple condensed hardware devices. This concept was originally conceived as “Grid-Computing” which was created to address the limits of processing power in single CPU systems. This new wave of virtualization technology is meant to automatically distribute processing time to maximize the utilization of hardware for reduced Cap Ex (Capital Expenditures) and ongoing support costs. VMware’s Cloud Director is a good example of the direction that virtualization is going and how the definition of “cloud computing” is changing.  Virtualized systems are quickly being condensed into combined multi-CPU appliances that integrate the network, application and storage systems together for more harmonious and efficient IT operations.

The vulnerability management problem:

While cloud management is definitely becoming much more robust, one issue that is apparent for cloud providers is managing the vulnerabilities inside a particular customer's cloud. In a distributed environment, if the allocation of systems changes by adding or removing virtual systems/instances from your cloud, you quickly face the fact that you may not be scanning the correct system for its vulnerabilities. This is especially important in environments that are "shared" across different customers. Since most vulnerability management products use CIDR blocks or CMDB databases to define the scanning profile, you could easily end up scanning an adjacent customer's system and hitting their environment with scans, due either to a lag between CMDB updates or to static definitions of scan network address space.

The vulnerability management cloud solution:

My belief is that this vulnerability management problem will be addressed by the integration and sharing of asset information between the cloud and vulnerability scanning services. Cloud providers will more than likely need to provide application programming interfaces that allow the scan engines/management consoles to read in current asset or deployment information from the cloud and then dynamically update the IP address information before scans commence.
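
As a rough illustration of that integration, the scanner side could periodically pull the provider's asset inventory and rebuild its target list before each scan, rather than relying on a static CIDR definition. A minimal sketch in Python; the inventory format here is hypothetical, since no such API is standardized:

```python
# Rebuild a scan target list from a cloud provider's asset inventory,
# so scans track VMs as they are added to or removed from a tenant's cloud.

def build_scan_targets(inventory, tenant_id):
    """Return the IPs to scan for one tenant only.

    `inventory` is a hypothetical asset feed: a list of dicts such as
    {"tenant": "acme", "ip": "10.0.1.5", "state": "running"}.
    Filtering on the tenant prevents scanning an adjacent customer's
    systems; filtering on state skips instances that were torn down.
    """
    return sorted(
        asset["ip"]
        for asset in inventory
        if asset["tenant"] == tenant_id and asset["state"] == "running"
    )

inventory = [
    {"tenant": "acme",  "ip": "10.0.1.5", "state": "running"},
    {"tenant": "acme",  "ip": "10.0.1.9", "state": "terminated"},
    {"tenant": "other", "ip": "10.0.2.7", "state": "running"},
]
print(build_scan_targets(inventory, "acme"))  # → ['10.0.1.5']
```

Refreshing this list immediately before each scan run is what closes the gap a static CMDB entry leaves open.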

Furthermore, I feel that applications such as web, FTP, and databases will be increasingly distributed across these same virtualized environments and will automatically integrate with load distribution systems (load balancers) to ensure delivery of the application no matter where it moves inside the cloud. The first signs of this trend are already apparent in the VN-Link functionality released as part of Cisco's Unified Computing System; however, adoption has been slow due to legacy infrastructure and constrained capital spending during the global recession. This may even lead to multiple customer applications running on the same virtual host on different TCP/UDP port numbers.

This information would also need to roll down to the reporting and ticketing functionality of the vulnerability management suite so that reports and tickets are dynamically generated using the most up-to-date information and no adjacent customer data leaks into the report or your ticketing system for managing remediation efforts. Please let me know your thoughts….



Lawrence Presenting at SecureWorldExpo: Obtaining “Context” During a Forensic Investigation

Image Source: St. Edward's University

Delve into the world of the pre-forensics work that must be done as an investigation kicks off.  Learn to profile someone as best you can to identify the context for your forensic examination, discover how to properly interview someone, read their body language and even read their eye movements.

To go to the conference click here.

For those who either attended or cannot attend, we’ve uploaded a video here.



What’s so good about vulnerability management?

Image Source: Darkreading

Many corporations in the world are now mandated by PCI to perform at least quarterly scans against their PCI in-scope computing systems. The main goal of this activity is to ensure vulnerabilities in systems are identified and fixed on a regular basis. I think this is one of the more important provisions of PCI, and one that I believe is essential to maintaining a secure environment.

What most corporations initially do is start by using simple scanning tools such as Nessus, GFI LANguard, ISS scanner, etc., and perform on-demand scans. While this is all well and good, and provides an immediate snapshot of a particular point in time, there are several major flaws that must be addressed through richer tools.

First, it is great to get vulnerability and patch data; however, handing a systems engineer or administrator a single report with tens if not hundreds of things to fix quickly becomes unreasonable for them to track and respond to. We often forget that this systems engineer is usually tasked with many other duties they must prioritize, including new installs, troubleshooting, bug patching, administration, and configuration, which demand most of their time. These activities are often far more time sensitive in their eyes, as projects have people bugging them regularly for completion. It is also important to note that the business is pushing them for ever greater functionality/features.

Given this fact, a simple scan report is just not viable for them to prioritize and track against their existing workload. This has given rise to vulnerability management, a.k.a. the process of managing vulnerabilities through to remediation via ticketing and reporting to management.

Secondly, another important flaw with simple scanning is the lack of overall metrics for measuring risk. Measuring risk is hard to do in security, but if you have an automated scanning process that runs on a regularly recurring schedule (i.e. more often than once every 3 months), your vulnerability data over that time can be tracked as systems become more or less exposed, as they are patched or new vulnerabilities are found. This is one way you can measure the effectiveness of your patch management and your security program.
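
To make that concrete, even a trivial metric such as open vulnerabilities per host, tracked across scheduled scans, shows whether exposure is trending up or down. A minimal sketch in Python, using made-up scan results in place of real scanner output:

```python
# Track a simple exposure metric across regularly scheduled scans:
# average open vulnerabilities per host, per scan date.

def exposure_trend(scans):
    """Given scan results keyed by date, return (date, vulns_per_host) pairs.

    `scans` maps a scan date to {host: open_vuln_count} -- a stand-in
    for real scanner output.
    """
    trend = []
    for date in sorted(scans):
        hosts = scans[date]
        per_host = sum(hosts.values()) / len(hosts)
        trend.append((date, round(per_host, 2)))
    return trend

scans = {
    "2010-01": {"web1": 12, "db1": 8},
    "2010-02": {"web1": 9,  "db1": 5},
    "2010-03": {"web1": 4,  "db1": 3},
}
print(exposure_trend(scans))
# a falling per-host count suggests patching is keeping pace
```

A real deployment would feed this from the scanner's export rather than a hand-built dict, but the shape of the metric is the same.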

Thirdly, this ensures your company clearly sees that security is a process and not just a one-time effort. This distinction is important because you, as a security practitioner, will need data to prove you need a consistent and ongoing supply of money to maintain security. Security is continuous and ever changing; stagnation is a guarantee of breach.

Moral of this story… manage security, don’t just triage it and forget it.

Great tools for managing vulnerabilities are:
- Rapid7
- McAfee Vulnerability Manager
- Qualys



Don’t forget the weakest link

Image Source:PCISecurityStandards.org

With all of today’s focus on securing for PCI or SOX we often find ourselves leaving our security risk management priorities behind. As we all know there are many ways to breach the security of a corporation and many safeguards we have to select from.

Which brings me to the many internal web applications used inside companies that we sometimes forget about, and that can cause the rest of our security to fail. Good examples of such sites are intranets, bug tracking apps, internal document websites, employee benefit portals, time tracking portals, etc.

It only takes one of these sites using a non-encrypted session (i.e. no SSL) to render an entire corporate PCI or SOX security paradigm useless. A single use of the Cain & Abel sniffer tool along with ARP spoofing can capture the passwords your privileged users type, giving an attacker access to your sensitive data.

Although most corporations ask employees to use different or more complex passwords on disparate applications, the move to centralized LDAP or AD authenticated environments means passwords are no longer different across these systems.

The moral of this story is, please don’t ignore your weakest link. Security is end to end.

