Tag Archives: channels

Mrtg Config File for Squid Proxy

Below is my MRTG config file for monitoring Squid.
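The config polls Squid's built-in SNMP agent on port 3401 with the community string "public", so the agent has to be enabled on the Squid side first. A minimal squid.conf fragment for that might look like the following (the ACL name is my choice, and the localhost restriction assumes MRTG runs on the cache box itself):

acl snmppublic snmp_community public
snmp_port 3401
snmp_access allow snmppublic localhost
snmp_access deny all

After a squid -k reconfigure, the agent answers SNMP queries on UDP port 3401.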

 

######################################################################
# Multi Router Traffic Grapher -- Squid Configuration File
######################################################################
# This file is for use with mrtg-2.0
#
# Customized for monitoring Squid Cache
# by Chris Miles http://chrismiles.info/
# http://chrismiles.info/unix/mrtg/
# To use:
# - change WorkDir and LoadMIBs settings
# - change all "shadow" occurrences to your squid host
# - change all "chris" occurrences to your name/address
# - change the community strings if required (eg: "public")
# - change the snmp port if required (eg: 3401)
#
# Note:
#
# * Keywords must start at the beginning of a line.
#
# * Lines that follow a keyword line and start
# with a blank are appended to the keyword line
#
# * Empty lines are ignored
#
# * Lines starting with a # sign are comments.
# ####################
# Global Configuration
# ####################

# Where should the logfiles, and webpages be created?
WorkDir: /srv/www/htdocs/squid-mrtg

# --------------------------
# Optional Global Parameters
# --------------------------

# How many seconds apart should the browser (Netscape) be
# instructed to reload the page? If this is not defined, the
# default is 300 seconds (5 minutes).

# Refresh: 600

# How often do you call mrtg? The default is 5 minutes. If
# you call it less often, you should specify it here. This
# does two things:

# a) the generated HTML page contains the right
# information about the calling interval ...

# b) a META header in the generated HTML page will instruct
# caches about the time to live of this page ...

# In this example we tell mrtg that we will be calling it
# every 10 minutes. If you are calling mrtg every 5
# minutes, you can leave this line commented out.

# Interval: 10

# With this switch mrtg will generate .meta files for CERN
# and Apache servers which contain Expiration tags for the
# html and gif files. The *.meta files will be created in
# the same directory as the other files, so you might have
# to set "MetaDir ." in your srm.conf file for this to work
#
# NOTE: If you are running Apache 1.2 you can use mod_expires
# to achieve the same effect ... see the file htaccess-dist

WriteExpires: Yes

# If you want to keep the mrtg icons in some place other than the
# working directory, use the IconDir variable to give its URL.

# IconDir: /mrtgicons/
IconDir: /images/

LoadMIBs: /usr/share/squid/mib.txt
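
# Before wiring up all the targets it is worth checking that the agent
# answers. A quick net-snmp test (same host, port, community and MIB
# path as used in this file; 1.3.6.1.4.1.3495 is Squid's enterprise
# OID) might look like:
#
# snmpwalk -v1 -c public -m /usr/share/squid/mib.txt shadow:3401 1.3.6.1.4.1.3495.1
#
# If the walk returns cacheSysVMsize, cacheSysStorage and friends,
# mrtg will be able to read the same values.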

# #################################################
# Configuration for each Target you want to monitor
# #################################################

# The configuration keyword "Target" must be followed by a
# unique name. This will also be the name used for the
# webpages, logfiles and gifs created for that target.

# Note that the "Target" sections can be auto-generated with
# the cfgmaker tool. Check readme.html for instructions.
# ========

##
## Target ----------------------------------------
##

# With the "Target" keyword you tell mrtg what it should
# monitor. The "Target" keyword takes arguments in a wide
# range of formats:

# * The most basic format is "port:community@router"
# This will generate a traffic graph for port 'port'
# of the router 'router' and it will use the community
# 'community' for the snmp query.

# Target[ezwf]: 2:public@wellfleet-fddi.ethz.ch

# * Sometimes you are sitting on the wrong side of the
# link, and you would like to have mrtg report incoming
# traffic as outgoing and vice versa. This can be achieved
# by adding a '-' sign in front of the "Target"
# description. It flips the in and outgoing traffic rates.

# Target[ezci]: -1:public@ezci-ether.ethz.ch

# * You can also explicitly define the OID to query by using the
# following syntax 'OID_1&OID_2:community@router'
# The following example will retrieve error input and output
# octets/sec on interface 1. MRTG needs to graph two values, so
# you need to specify two OIDs, such as temperature and humidity
# or error input and error output.

# Target[ezwf]: 1.3.6.1.2.1.2.2.1.14.1&1.3.6.1.2.1.2.2.1.20.1:public@myrouter

# * mrtg knows a number of symbolic SNMP variable
# names. See the file mibhelp.txt for a list of known
# names. Examples are the ifInErrors and ifOutErrors
# names. This means you can specify the above as:

# Target[ezwf]: ifInErrors.1&ifOutErrors.1:public@myrouter

# * If you want to monitor something which does not provide
# data via snmp you can use some external program to do
# the data gathering.

#
# The external command must return 4 lines of output:
# Line 1 : current state of the 'incoming bytes counter'
# Line 2 : current state of the 'outgoing bytes counter'
# Line 3 : string, telling the uptime of the target.
# Line 4 : string, telling the name of the target.

# Depending on the type of data your script returns you
# might want to use the 'gauge' or 'absolute' arguments
# for the "Options" keyword.

# Target[ezwf]: `/usr/local/bin/df2mrtg /dev/dsk/c0t2d0s0`
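
# For reference, a minimal sketch (in Python; the disk-usage source
# and the target name are purely illustrative) of what such an
# external gatherer has to print -- exactly four lines on stdout:
#
#   #!/usr/bin/env python3
#   import shutil
#   usage = shutil.disk_usage("/")  # bytes used/free on the root fs
#   print(usage.used)         # line 1: value for the "incoming" variable
#   print(usage.free)         # line 2: value for the "outgoing" variable
#   print("unknown")          # line 3: uptime string (not tracked here)
#   print("disk usage on /")  # line 4: name of the target
#
# Since these are point-in-time values rather than ever-growing
# counters, such a target would also want 'gauge' in its Options line.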

# * You can also use several statements in a mathematical
# expression. This could be used to aggregate both B channels
# in an ISDN connection or multiple T1s that are aggregated
# into a single channel for greater bandwidth.
# Note the whitespace around the target definitions.

# Target[ezwf]: 2:public@wellfleetA + 1:public@wellfleetA
# * 4:public@ciscoF

##
## RouterUptime ---------------------------------------
##
#
# In cases where you calculate the used bandwidth from
# several interfaces you normally don't get the router uptime
# and router name displayed on the web page.
# If these interfaces are on the same router and the uptime and
# name should be displayed nevertheless, you have to specify
# its community and address again with the RouterUptime keyword.

# Target[kacisco]: 1:public@194.64.66.250 + 2:public@194.64.66.250
# RouterUptime[kacisco]: public@194.64.66.250

##
## MaxBytes -------------------------------------------
##

# How many bytes per second can this port carry. Since most
# links are rated in bits per second, you need to divide
# their maximum bandwidth (in bits) by eight (8) in order to get
# bytes per second. This is very important to make your
# unscaled graphs display realistic information.
# T1 = 193000, 56K = 7000, Ethernet = 1250000. The "MaxBytes"
# value will be used by mrtg to decide whether it got a
# valid response from the router. If a number higher than
# "MaxBytes" is returned, it is ignored. Also read the section
# on AbsMax for further info.

# MaxBytes[ezwf]: 1250000
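
# A worked example of the divide-by-eight rule: a 100 Mbit/s Fast
# Ethernet link carries 100000000 / 8 = 12500000 bytes per second,
# so (with a hypothetical target name) it would get:
#
# MaxBytes[fe0]: 12500000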

##
## Title -----------------------------------------------
##

# Title for the HTML page which gets generated for the graph.

# Title[ezwf]: Traffic Analysis for ETZ C 95.1

##
## PageTop ---------------------------------------------
##

# Things to add to the top of the generated HTML page. Note
# that you can have several lines of text as long as the
# first column is empty.
# Note that the continuation lines will all end up on the same
# line in the html page. If you want linebreaks in the generated
# html use the '\n' sequence.

# PageTop[ezwf]: <H1>Traffic Analysis for ETZ C95.1</H1>
# Our Campus Backbone runs over an FDDI line\n
# with a maximum transfer rate of 12.5 Mega Bytes per
# Second.

##
## PageFoot ---------------------------------------------
##

# Things to add at the very end of the mrtg generated html page

# PageFoot[ezwf]: <HR size=2 noshade>This page is managed by Blubber

# --------------------------------------------------
# Optional Target Configuration Tags
# --------------------------------------------------

##
## AddHead -----------------------------------------
##

# Use this tag like the PageTop header, but its contents
# will be added between </TITLE> and </HEAD>.

# AddHead[ezwf]: <!-- Just a comment for fun -->

##
## AbsMax ------------------------------------------
##

# If you are monitoring a link which can handle more traffic
# than the MaxBytes value, e.g. a line which uses compression
# or some frame relay link, you can use the AbsMax keyword
# to give the absolute maximum value ever to be reached. We
# need to know this in order to sort out unrealistic values
# returned by the routers. If you do not set AbsMax, rateup
# will ignore values higher than MaxBytes.

# AbsMax[ezwf]: 2500000

##
## Unscaled ------------------------------------------
##

# By default each graph is scaled vertically to make the
# actual data visible even when it is much lower than
# MaxBytes. With the "Unscaled" variable you can suppress
# this. Its argument is a string containing one letter
# for each graph you don't want to be scaled: d=day w=week
# m=month y=year. In the example I suppress scaling for the
# yearly and the monthly graph.

# Unscaled[ezwf]: ym

##
## WithPeak ------------------------------------------
##

# By default the graphs only contain the average transfer
# rates for incoming and outgoing traffic. The
# following option instructs mrtg to display the peak
# 5 minute transfer rates in the [w]eekly, [m]onthly and
# [y]early graph. In the example we define the monthly
# and the yearly graph to contain peak as well as average
# values.

# WithPeak[ezwf]: ym

##
## Suppress ------------------------------------------
##

# By default mrtg produces 4 graphs. With this option you
# can suppress the generation of selected graphs. The format
# is analogous to the above option. In this example we suppress
# the yearly graph as it is quite empty in the beginning.

# Suppress[ezwf]: y

##
## Directory
##

# By default, mrtg puts all the files that it generates for each
# router (the GIFs, the HTML page, the log file, etc.) in WorkDir.
# If the "Directory" option is specified, the files are instead put
# into a directory under WorkDir. (For example, given the WorkDir
# set above, the "Directory" option below would cause all the
# ezwf files to be put into /srv/www/htdocs/squid-mrtg/ezwf .)
#
# The directory must already exist; mrtg will not create it.

# Directory[ezwf]: ezwf

##
## XSize and YSize ------------------------------------------
##

# By default mrtg's graphs are 100 by 400 pixels in size (plus
# some more for the labels). In the example we get almost
# square graphs ...
# Note: XSize must be between 20 and 600
# YSize must be larger than 20

# XSize[ezwf]: 300
# YSize[ezwf]: 300

##
## XZoom YZoom -------------------------------------------------
##

# If you want your graphs to have larger pixels, you can
# "Zoom" them.

#XZoom[ezwf]: 2.0
#YZoom[ezwf]: 2.0

##
## XScale YScale -------------------------------------------------
##

# If you want your graphs to be actually scaled use XScale
# and YScale. Beware: while this works, the results look ugly
# (to be frank), so if someone wants to fix this, patches are
# welcome.

# XScale[ezwf]: 1.5
# YScale[ezwf]: 1.5

##
## Step ----------------------------------------------------------
##

# Change the default step width from 5 * 60 seconds to
# something else. I have not tested this well ...

# Step[ezwf]: 60

##
## Options ------------------------------------------
##

# The "Options" keyword allows you to set some boolean
# switches:
#
# growright - The graph grows to the left by default;
# this makes it grow to the right instead.
#
# bits - All the numbers printed are in bits instead
# of bytes ... looks much more impressive :-)
#
# noinfo - Suppress the information about uptime and
# device name in the generated webpage.
#
# absolute - This is for data sources which reset their
# value when they are read. This means that
# rateup does not have to build the difference between
# this and the last value read from the data
# source. Useful for external data gatherers.
#
# gauge - Treat the values gathered from the target as absolute
# and not as counters. This is useful for
# monitoring things like disk space, load and so
# on ...
#
# nopercent - Don't print usage percentages
#
# integer - Print only integers in the summary ...
#

# Options[ezwf]: growright, bits

##
## Colours ------------------------------------------
##

# The "Colours" tag allows you to override the default colour
# scheme. Note: all 4 of the required colours must be
# specified here. The colour name ('Colourx' below) is the
# legend name displayed, while the RGB value is the real
# colour used for the display, both on the graph and in the
# html doc.

# Format is: Colour1#RRGGBB,Colour2#RRGGBB,Colour3#RRGGBB,Colour4#RRGGBB
# where: Colour1 = Input on default graph
# Colour2 = Output on default graph
# Colour3 = Max input
# Colour4 = Max output
# RRGGBB = 2 digit hex values for Red, Green and Blue

# Colours[ezwf]: GREEN#00eb0c,BLUE#1000ff,DARK GREEN#006600,VIOLET#ff00ff

##
## Background ------------------------------------------
##

# With the “Background” tag you can configure the background
# colour of the generated HTML page

# Background[ezwf]: #a0a0a0

##
## YLegend, ShortLegend, Legend[1234] ------------------
##

# The following keywords allow you to override the text
# displayed for the various legends of the graph and in the
# HTML document
#
# * YLegend : The Y-Axis of the graph
# * ShortLegend: The 'b/s' string used for Max, Average and Current
# * Legend[1234IO]: The strings for the colour legend
#
#YLegend[ezwf]: Bits per Second
#ShortLegend[ezwf]: b/s
#Legend1[ezwf]: Incoming Traffic in Bits per Second
#Legend2[ezwf]: Outgoing Traffic in Bits per Second
#Legend3[ezwf]: Maximal 5 Minute Incoming Traffic
#Legend4[ezwf]: Maximal 5 Minute Outgoing Traffic
#LegendI[ezwf]: &nbsp;In:
#LegendO[ezwf]: &nbsp;Out:
# Note, if LegendI or LegendO are set to an empty string with
# LegendO[ezwf]:
# The corresponding line below the graph will not be printed at all.

# If you live in an international world, you might want to
# generate the graphs in different timezones. This is set in the
# TZ variable. Under certain operating systems like Solaris,
# this will provoke the localtime call to give the time in
# the selected timezone ...

# Timezone[ezwf]: Japan

# The Timezone is the standard Solaris timezone, i.e. Japan, Hongkong,
# GMT, GMT+1, etc.

# By default, mrtg (actually rateup) uses the strftime(3) '%W' option
# to format week numbers in the monthly graphs. The exact semantics
# of this format option vary between systems. If you find that the
# week numbers are wrong, and your system's strftime(3) routine
# supports it, you can try another format option. The POSIX '%V'
# option seems to correspond to a widely used week numbering
# convention. The week format character should be specified as a
# single letter; either W, V, or U.

# Weekformat[ezwf]: V

# #############################
# Two very special Target names
# #############################

# To save yourself some typing you can define a target
# called '^'. The text of every keyword you define for this
# target will be PREPENDED to the corresponding keyword of
# all the targets defined below this line. The same goes for
# a target called '$', but its options will be APPENDED.
#
# The example will make mrtg use a common header and a
# common contact person in all the pages generated from
# targets defined later in this file.
#
#PageTop[^]: <H1>Traffic Stats</H1><HR>
#PageTop[$]: Contact Peter Norton if you have any questions<HR>

PageFoot[^]: <i>Page managed by GeekGuy</i>

Target[cacheServerRequests]: cacheServerRequests&cacheServerRequests:public@shadow:3401
MaxBytes[cacheServerRequests]: 10000000
Title[cacheServerRequests]: Server Requests @ shadow
Options[cacheServerRequests]: growright, nopercent
PageTop[cacheServerRequests]: <h1>Server Requests @ shadow</h1>
YLegend[cacheServerRequests]: requests/sec
ShortLegend[cacheServerRequests]: req/s
LegendI[cacheServerRequests]: Requests&nbsp;
LegendO[cacheServerRequests]:
Legend1[cacheServerRequests]: Requests
Legend2[cacheServerRequests]:

Target[cacheServerErrors]: cacheServerErrors&cacheServerErrors:public@shadow:3401
MaxBytes[cacheServerErrors]: 10000000
Title[cacheServerErrors]: Server Errors @ shadow
Options[cacheServerErrors]: growright, nopercent
PageTop[cacheServerErrors]: <H1>Server Errors @ shadow</H1>
YLegend[cacheServerErrors]: errors/sec
ShortLegend[cacheServerErrors]: err/s
LegendI[cacheServerErrors]: Errors&nbsp;
LegendO[cacheServerErrors]:
Legend1[cacheServerErrors]: Errors
Legend2[cacheServerErrors]:

Target[cacheServerInOutKb]: cacheServerInKb&cacheServerOutKb:public@shadow:3401 * 1024
MaxBytes[cacheServerInOutKb]: 1000000000
Title[cacheServerInOutKb]: Server In/Out Traffic @ shadow
Options[cacheServerInOutKb]: growright, nopercent
PageTop[cacheServerInOutKb]: <H1>Server In/Out Traffic @ shadow</H1>
YLegend[cacheServerInOutKb]: Bytes/sec
ShortLegend[cacheServerInOutKb]: Bytes/s
LegendI[cacheServerInOutKb]: Server In&nbsp;
LegendO[cacheServerInOutKb]: Server Out&nbsp;
Legend1[cacheServerInOutKb]: Server In
Legend2[cacheServerInOutKb]: Server Out

Target[cacheClientHttpRequests]: cacheClientHttpRequests&cacheClientHttpRequests:public@shadow:3401
MaxBytes[cacheClientHttpRequests]: 10000000
Title[cacheClientHttpRequests]: Client Http Requests @ shadow
Options[cacheClientHttpRequests]: growright, nopercent
PageTop[cacheClientHttpRequests]: <H1>Client Http Requests @ shadow</H1>
YLegend[cacheClientHttpRequests]: requests/sec
ShortLegend[cacheClientHttpRequests]: req/s
LegendI[cacheClientHttpRequests]: Requests&nbsp;
LegendO[cacheClientHttpRequests]:
Legend1[cacheClientHttpRequests]: Requests
Legend2[cacheClientHttpRequests]:

Target[cacheHttpHits]: cacheHttpHits&cacheHttpHits:public@shadow:3401
MaxBytes[cacheHttpHits]: 10000000
Title[cacheHttpHits]: HTTP Hits @ shadow
Options[cacheHttpHits]: growright, nopercent
PageTop[cacheHttpHits]: <H1>HTTP Hits @ shadow</H1>
YLegend[cacheHttpHits]: hits/sec
ShortLegend[cacheHttpHits]: hits/s
LegendI[cacheHttpHits]: Hits&nbsp;
LegendO[cacheHttpHits]:
Legend1[cacheHttpHits]: Hits
Legend2[cacheHttpHits]:

Target[cacheHttpErrors]: cacheHttpErrors&cacheHttpErrors:public@shadow:3401
MaxBytes[cacheHttpErrors]: 10000000
Title[cacheHttpErrors]: HTTP Errors @ shadow
Options[cacheHttpErrors]: growright, nopercent
PageTop[cacheHttpErrors]: <H1>HTTP Errors @ shadow</H1>
YLegend[cacheHttpErrors]: errors/sec
ShortLegend[cacheHttpErrors]: err/s
LegendI[cacheHttpErrors]: Errors&nbsp;
LegendO[cacheHttpErrors]:
Legend1[cacheHttpErrors]: Errors
Legend2[cacheHttpErrors]:

Target[cacheIcpPktsSentRecv]: cacheIcpPktsSent&cacheIcpPktsRecv:public@shadow:3401
MaxBytes[cacheIcpPktsSentRecv]: 10000000
Title[cacheIcpPktsSentRecv]: ICP Packets Sent/Received
Options[cacheIcpPktsSentRecv]: growright, nopercent
PageTop[cacheIcpPktsSentRecv]: <H1>ICP Packets Sent/Received @ shadow</H1>
YLegend[cacheIcpPktsSentRecv]: packets/sec
ShortLegend[cacheIcpPktsSentRecv]: pkts/s
LegendI[cacheIcpPktsSentRecv]: Pkts Sent&nbsp;
LegendO[cacheIcpPktsSentRecv]: Pkts Received&nbsp;
Legend1[cacheIcpPktsSentRecv]: Pkts Sent
Legend2[cacheIcpPktsSentRecv]: Pkts Received

Target[cacheIcpKbSentRecv]: cacheIcpKbSent&cacheIcpKbRecv:public@shadow:3401 * 1024
MaxBytes[cacheIcpKbSentRecv]: 1000000000
Title[cacheIcpKbSentRecv]: ICP Bytes Sent/Received
Options[cacheIcpKbSentRecv]: growright, nopercent
PageTop[cacheIcpKbSentRecv]: <H1>ICP Bytes Sent/Received @ shadow</H1>
YLegend[cacheIcpKbSentRecv]: Bytes/sec
ShortLegend[cacheIcpKbSentRecv]: Bytes/s
LegendI[cacheIcpKbSentRecv]: Sent&nbsp;
LegendO[cacheIcpKbSentRecv]: Received&nbsp;
Legend1[cacheIcpKbSentRecv]: Sent
Legend2[cacheIcpKbSentRecv]: Received

Target[cacheHttpInOutKb]: cacheHttpInKb&cacheHttpOutKb:public@shadow:3401 * 1024
MaxBytes[cacheHttpInOutKb]: 1000000000
Title[cacheHttpInOutKb]: HTTP In/Out Traffic @ shadow
Options[cacheHttpInOutKb]: growright, nopercent
PageTop[cacheHttpInOutKb]: <H1>HTTP In/Out Traffic @ shadow</H1>
YLegend[cacheHttpInOutKb]: Bytes/second
ShortLegend[cacheHttpInOutKb]: Bytes/s
LegendI[cacheHttpInOutKb]: HTTP In&nbsp;
LegendO[cacheHttpInOutKb]: HTTP Out&nbsp;
Legend1[cacheHttpInOutKb]: HTTP In
Legend2[cacheHttpInOutKb]: HTTP Out

Target[cacheCurrentSwapSize]: cacheCurrentSwapSize&cacheCurrentSwapSize:public@shadow:3401
MaxBytes[cacheCurrentSwapSize]: 1000000000
Title[cacheCurrentSwapSize]: Current Swap Size @ shadow
Options[cacheCurrentSwapSize]: gauge, growright, nopercent
PageTop[cacheCurrentSwapSize]: <H1>Current Swap Size @ shadow</H1>
YLegend[cacheCurrentSwapSize]: swap size
ShortLegend[cacheCurrentSwapSize]: Bytes
LegendI[cacheCurrentSwapSize]: Swap Size&nbsp;
LegendO[cacheCurrentSwapSize]:
Legend1[cacheCurrentSwapSize]: Swap Size
Legend2[cacheCurrentSwapSize]:

Target[cacheNumObjCount]: cacheNumObjCount&cacheNumObjCount:public@shadow:3401
MaxBytes[cacheNumObjCount]: 10000000
Title[cacheNumObjCount]: Num Object Count @ shadow
Options[cacheNumObjCount]: gauge, growright, nopercent
PageTop[cacheNumObjCount]: <H1>Num Object Count @ shadow</H1>
YLegend[cacheNumObjCount]: # of objects
ShortLegend[cacheNumObjCount]: objects
LegendI[cacheNumObjCount]: Num Objects&nbsp;
LegendO[cacheNumObjCount]:
Legend1[cacheNumObjCount]: Num Objects
Legend2[cacheNumObjCount]:

Target[cacheCpuUsage]: cacheCpuUsage&cacheCpuUsage:public@shadow:3401
MaxBytes[cacheCpuUsage]: 100
AbsMax[cacheCpuUsage]: 100
Title[cacheCpuUsage]: CPU Usage @ shadow
Options[cacheCpuUsage]: absolute, gauge, noinfo, growright, nopercent
Unscaled[cacheCpuUsage]: dwmy
PageTop[cacheCpuUsage]: <H1>CPU Usage @ shadow</H1>
YLegend[cacheCpuUsage]: usage %
ShortLegend[cacheCpuUsage]: %
LegendI[cacheCpuUsage]: CPU Usage&nbsp;
LegendO[cacheCpuUsage]:
Legend1[cacheCpuUsage]: CPU Usage
Legend2[cacheCpuUsage]:

Target[cacheMemUsage]: cacheMemUsage&cacheMemUsage:public@shadow:3401 * 1024
MaxBytes[cacheMemUsage]: 2000000000
Title[cacheMemUsage]: Memory Usage
Options[cacheMemUsage]: gauge, growright, nopercent
PageTop[cacheMemUsage]: <H1>Total memory accounted for @ shadow</H1>
YLegend[cacheMemUsage]: Bytes
ShortLegend[cacheMemUsage]: Bytes
LegendI[cacheMemUsage]: Mem Usage&nbsp;
LegendO[cacheMemUsage]:
Legend1[cacheMemUsage]: Mem Usage
Legend2[cacheMemUsage]:

Target[cacheSysPageFaults]: cacheSysPageFaults&cacheSysPageFaults:public@shadow:3401
MaxBytes[cacheSysPageFaults]: 10000000
Title[cacheSysPageFaults]: Sys Page Faults @ shadow
Options[cacheSysPageFaults]: growright, nopercent
PageTop[cacheSysPageFaults]: <H1>Sys Page Faults @ shadow</H1>
YLegend[cacheSysPageFaults]: page faults/sec
ShortLegend[cacheSysPageFaults]: PF/s
LegendI[cacheSysPageFaults]: Page Faults&nbsp;
LegendO[cacheSysPageFaults]:
Legend1[cacheSysPageFaults]: Page Faults
Legend2[cacheSysPageFaults]:

Target[cacheSysVMsize]: cacheSysVMsize&cacheSysVMsize:public@shadow:3401 * 1024
MaxBytes[cacheSysVMsize]: 1000000000
Title[cacheSysVMsize]: Storage Mem Size @ shadow
Options[cacheSysVMsize]: gauge, growright, nopercent
PageTop[cacheSysVMsize]: <H1>Storage Mem Size @ shadow</H1>
YLegend[cacheSysVMsize]: mem size
ShortLegend[cacheSysVMsize]: Bytes
LegendI[cacheSysVMsize]: Mem Size&nbsp;
LegendO[cacheSysVMsize]:
Legend1[cacheSysVMsize]: Mem Size
Legend2[cacheSysVMsize]:

Target[cacheSysStorage]: cacheSysStorage&cacheSysStorage:public@shadow:3401
MaxBytes[cacheSysStorage]: 1000000000
Title[cacheSysStorage]: Storage Swap Size @ shadow
Options[cacheSysStorage]: gauge, growright, nopercent
PageTop[cacheSysStorage]: <H1>Storage Swap Size @ shadow</H1>
YLegend[cacheSysStorage]: swap size (KB)
ShortLegend[cacheSysStorage]: KBytes
LegendI[cacheSysStorage]: Swap Size&nbsp;
LegendO[cacheSysStorage]:
Legend1[cacheSysStorage]: Swap Size
Legend2[cacheSysStorage]:

Target[cacheSysNumReads]: cacheSysNumReads&cacheSysNumReads:public@shadow:3401
MaxBytes[cacheSysNumReads]: 10000000
Title[cacheSysNumReads]: HTTP I/O number of reads @ shadow
Options[cacheSysNumReads]: growright, nopercent
PageTop[cacheSysNumReads]: <H1>HTTP I/O number of reads @ shadow</H1>
YLegend[cacheSysNumReads]: reads/sec
ShortLegend[cacheSysNumReads]: reads/s
LegendI[cacheSysNumReads]: I/O&nbsp;
LegendO[cacheSysNumReads]:
Legend1[cacheSysNumReads]: I/O
Legend2[cacheSysNumReads]:

Target[cacheCpuTime]: cacheCpuTime&cacheCpuTime:public@shadow:3401
MaxBytes[cacheCpuTime]: 1000000000
Title[cacheCpuTime]: Cpu Time
Options[cacheCpuTime]: gauge, growright, nopercent
PageTop[cacheCpuTime]: <H1>Amount of cpu seconds consumed @ shadow</H1>
YLegend[cacheCpuTime]: cpu seconds
ShortLegend[cacheCpuTime]: cpu seconds
LegendI[cacheCpuTime]: Cpu Time&nbsp;
LegendO[cacheCpuTime]:
Legend1[cacheCpuTime]: Cpu Time
Legend2[cacheCpuTime]:

Target[cacheMaxResSize]: cacheMaxResSize&cacheMaxResSize:public@shadow:3401 * 1024
MaxBytes[cacheMaxResSize]: 1000000000
Title[cacheMaxResSize]: Max Resident Size
Options[cacheMaxResSize]: gauge, growright, nopercent
PageTop[cacheMaxResSize]: <H1>Maximum Resident Size @ shadow</H1>
YLegend[cacheMaxResSize]: Bytes
ShortLegend[cacheMaxResSize]: Bytes
LegendI[cacheMaxResSize]: Size&nbsp;
LegendO[cacheMaxResSize]:
Legend1[cacheMaxResSize]: Size
Legend2[cacheMaxResSize]:

Target[cacheCurrentLRUExpiration]: cacheCurrentLRUExpiration&cacheCurrentLRUExpiration:public@shadow:3401
MaxBytes[cacheCurrentLRUExpiration]: 1000000000
Title[cacheCurrentLRUExpiration]: LRU Expiration Age
Options[cacheCurrentLRUExpiration]: gauge, growright, nopercent
PageTop[cacheCurrentLRUExpiration]: <H1>Storage LRU Expiration Age @ shadow</H1>
YLegend[cacheCurrentLRUExpiration]: expir (days)
ShortLegend[cacheCurrentLRUExpiration]: days
LegendI[cacheCurrentLRUExpiration]: Age&nbsp;
LegendO[cacheCurrentLRUExpiration]:
Legend1[cacheCurrentLRUExpiration]: Age
Legend2[cacheCurrentLRUExpiration]:

Target[cacheCurrentUnlinkRequests]: cacheCurrentUnlinkRequests&cacheCurrentUnlinkRequests:public@shadow:3401
MaxBytes[cacheCurrentUnlinkRequests]: 1000000000
Title[cacheCurrentUnlinkRequests]: Unlinkd Requests
Options[cacheCurrentUnlinkRequests]: growright, nopercent
PageTop[cacheCurrentUnlinkRequests]: <H1>Requests given to unlinkd @ shadow</H1>
YLegend[cacheCurrentUnlinkRequests]: requests/sec
ShortLegend[cacheCurrentUnlinkRequests]: reqs/s
LegendI[cacheCurrentUnlinkRequests]: Unlinkd requests&nbsp;
LegendO[cacheCurrentUnlinkRequests]:
Legend1[cacheCurrentUnlinkRequests]: Unlinkd requests
Legend2[cacheCurrentUnlinkRequests]:

Target[cacheCurrentUnusedFileDescrCount]: cacheCurrentUnusedFileDescrCount&cacheCurrentUnusedFileDescrCount:public@shadow:3401
MaxBytes[cacheCurrentUnusedFileDescrCount]: 1000000000
Title[cacheCurrentUnusedFileDescrCount]: Available File Descriptors
Options[cacheCurrentUnusedFileDescrCount]: gauge, growright, nopercent
PageTop[cacheCurrentUnusedFileDescrCount]: <H1>Available number of file descriptors @ shadow</H1>
YLegend[cacheCurrentUnusedFileDescrCount]: # of FDs
ShortLegend[cacheCurrentUnusedFileDescrCount]: FDs
LegendI[cacheCurrentUnusedFileDescrCount]: File Descriptors&nbsp;
LegendO[cacheCurrentUnusedFileDescrCount]:
Legend1[cacheCurrentUnusedFileDescrCount]: File Descriptors
Legend2[cacheCurrentUnusedFileDescrCount]:

Target[cacheCurrentReservedFileDescrCount]: cacheCurrentReservedFileDescrCount&cacheCurrentReservedFileDescrCount:public@shadow:3401
MaxBytes[cacheCurrentReservedFileDescrCount]: 1000000000
Title[cacheCurrentReservedFileDescrCount]: Reserved File Descriptors
Options[cacheCurrentReservedFileDescrCount]: gauge, growright, nopercent
PageTop[cacheCurrentReservedFileDescrCount]: <H1>Reserved number of file descriptors @ shadow</H1>
YLegend[cacheCurrentReservedFileDescrCount]: # of FDs
ShortLegend[cacheCurrentReservedFileDescrCount]: FDs
LegendI[cacheCurrentReservedFileDescrCount]: File Descriptors&nbsp;
LegendO[cacheCurrentReservedFileDescrCount]:
Legend1[cacheCurrentReservedFileDescrCount]: File Descriptors
Legend2[cacheCurrentReservedFileDescrCount]:

Target[cacheClients]: cacheClients&cacheClients:public@shadow:3401
MaxBytes[cacheClients]: 1000000000
Title[cacheClients]: Number of Clients
Options[cacheClients]: gauge, growright, nopercent
PageTop[cacheClients]: <H1>Number of clients accessing cache @ shadow</H1>
YLegend[cacheClients]: clients/sec
ShortLegend[cacheClients]: clients/s
LegendI[cacheClients]: Num Clients&nbsp;
LegendO[cacheClients]:
Legend1[cacheClients]: Num Clients
Legend2[cacheClients]:

Target[cacheHttpAllSvcTime]: cacheHttpAllSvcTime.5&cacheHttpAllSvcTime.60:public@shadow:3401
MaxBytes[cacheHttpAllSvcTime]: 1000000000
Title[cacheHttpAllSvcTime]: HTTP All Service Time
Options[cacheHttpAllSvcTime]: gauge, growright, nopercent
PageTop[cacheHttpAllSvcTime]: <H1>HTTP all service time @ shadow</H1>
YLegend[cacheHttpAllSvcTime]: svc time (ms)
ShortLegend[cacheHttpAllSvcTime]: ms
LegendI[cacheHttpAllSvcTime]: Median Svc Time (5min)&nbsp;
LegendO[cacheHttpAllSvcTime]: Median Svc Time (60min)&nbsp;
Legend1[cacheHttpAllSvcTime]: Median Svc Time
Legend2[cacheHttpAllSvcTime]: Median Svc Time

Target[cacheHttpMissSvcTime]: cacheHttpMissSvcTime.5&cacheHttpMissSvcTime.60:public@shadow:3401
MaxBytes[cacheHttpMissSvcTime]: 1000000000
Title[cacheHttpMissSvcTime]: HTTP Miss Service Time
Options[cacheHttpMissSvcTime]: gauge, growright, nopercent
PageTop[cacheHttpMissSvcTime]: <H1>HTTP miss service time @ shadow</H1>
YLegend[cacheHttpMissSvcTime]: svc time (ms)
ShortLegend[cacheHttpMissSvcTime]: ms
LegendI[cacheHttpMissSvcTime]: Median Svc Time (5min)&nbsp;
LegendO[cacheHttpMissSvcTime]: Median Svc Time (60min)&nbsp;
Legend1[cacheHttpMissSvcTime]: Median Svc Time
Legend2[cacheHttpMissSvcTime]: Median Svc Time

Target[cacheHttpNmSvcTime]: cacheHttpNmSvcTime.5&cacheHttpNmSvcTime.60:public@shadow:3401
MaxBytes[cacheHttpNmSvcTime]: 1000000000
Title[cacheHttpNmSvcTime]: HTTP Near Miss Service Time
Options[cacheHttpNmSvcTime]: gauge, growright, nopercent
PageTop[cacheHttpNmSvcTime]: <H1>HTTP near miss service time @ shadow</H1>
YLegend[cacheHttpNmSvcTime]: svc time (ms)
ShortLegend[cacheHttpNmSvcTime]: ms
LegendI[cacheHttpNmSvcTime]: Median Svc Time (5min)&nbsp;
LegendO[cacheHttpNmSvcTime]: Median Svc Time (60min)&nbsp;
Legend1[cacheHttpNmSvcTime]: Median Svc Time
Legend2[cacheHttpNmSvcTime]: Median Svc Time

Target[cacheHttpHitSvcTime]: cacheHttpHitSvcTime.5&cacheHttpHitSvcTime.60:public@shadow:3401
MaxBytes[cacheHttpHitSvcTime]: 1000000000
Title[cacheHttpHitSvcTime]: HTTP Hit Service Time
Options[cacheHttpHitSvcTime]: gauge, growright, nopercent
PageTop[cacheHttpHitSvcTime]: <H1>HTTP hit service time @ shadow</H1>
YLegend[cacheHttpHitSvcTime]: svc time (ms)
ShortLegend[cacheHttpHitSvcTime]: ms
LegendI[cacheHttpHitSvcTime]: Median Svc Time (5min)&nbsp;
LegendO[cacheHttpHitSvcTime]: Median Svc Time (60min)&nbsp;
Legend1[cacheHttpHitSvcTime]: Median Svc Time
Legend2[cacheHttpHitSvcTime]: Median Svc Time

Target[cacheIcpQuerySvcTime]: cacheIcpQuerySvcTime.5&cacheIcpQuerySvcTime.60:public@shadow:3401
MaxBytes[cacheIcpQuerySvcTime]: 1000000000
Title[cacheIcpQuerySvcTime]: ICP Query Service Time
Options[cacheIcpQuerySvcTime]: gauge, growright, nopercent
PageTop[cacheIcpQuerySvcTime]: <H1>ICP query service time @ shadow</H1>
YLegend[cacheIcpQuerySvcTime]: svc time (ms)
ShortLegend[cacheIcpQuerySvcTime]: ms
LegendI[cacheIcpQuerySvcTime]: Median Svc Time (5min)&nbsp;
LegendO[cacheIcpQuerySvcTime]: Median Svc Time (60min)&nbsp;
Legend1[cacheIcpQuerySvcTime]: Median Svc Time
Legend2[cacheIcpQuerySvcTime]: Median Svc Time

Target[cacheIcpReplySvcTime]: cacheIcpReplySvcTime.5&cacheIcpReplySvcTime.60:public@shadow:3401
MaxBytes[cacheIcpReplySvcTime]: 1000000000
Title[cacheIcpReplySvcTime]: ICP Reply Service Time
Options[cacheIcpReplySvcTime]: gauge, growright, nopercent
PageTop[cacheIcpReplySvcTime]: <H1>ICP reply service time @ shadow</H1>
YLegend[cacheIcpReplySvcTime]: svc time (ms)
ShortLegend[cacheIcpReplySvcTime]: ms
LegendI[cacheIcpReplySvcTime]: Median Svc Time (5min)&nbsp;
LegendO[cacheIcpReplySvcTime]: Median Svc Time (60min)&nbsp;
Legend1[cacheIcpReplySvcTime]: Median Svc Time
Legend2[cacheIcpReplySvcTime]: Median Svc Time

Target[cacheDnsSvcTime]: cacheDnsSvcTime.5&cacheDnsSvcTime.60:public@shadow:3401
MaxBytes[cacheDnsSvcTime]: 1000000000
Title[cacheDnsSvcTime]: DNS Service Time
Options[cacheDnsSvcTime]: gauge, growright, nopercent
PageTop[cacheDnsSvcTime]: <H1>DNS service time @ shadow</H1>
YLegend[cacheDnsSvcTime]: svc time (ms)
ShortLegend[cacheDnsSvcTime]: ms
LegendI[cacheDnsSvcTime]: Median Svc Time (5min)&nbsp;
LegendO[cacheDnsSvcTime]: Median Svc Time (60min)&nbsp;
Legend1[cacheDnsSvcTime]: Median Svc Time
Legend2[cacheDnsSvcTime]: Median Svc Time

Target[cacheRequestHitRatio]: cacheRequestHitRatio.5&cacheRequestHitRatio.60:public@shadow:3401
MaxBytes[cacheRequestHitRatio]: 100
AbsMax[cacheRequestHitRatio]: 100
Title[cacheRequestHitRatio]: Request Hit Ratio @ shadow
Options[cacheRequestHitRatio]: absolute, gauge, noinfo, growright, nopercent
Unscaled[cacheRequestHitRatio]: dwmy
PageTop[cacheRequestHitRatio]: <H1>Request Hit Ratio @ shadow</H1>
YLegend[cacheRequestHitRatio]: %
ShortLegend[cacheRequestHitRatio]: %
LegendI[cacheRequestHitRatio]: Median Hit Ratio (5min)&nbsp;
LegendO[cacheRequestHitRatio]: Median Hit Ratio (60min)&nbsp;
Legend1[cacheRequestHitRatio]: Median Hit Ratio
Legend2[cacheRequestHitRatio]: Median Hit Ratio

Target[cacheRequestByteRatio]: cacheRequestByteRatio.5&cacheRequestByteRatio.60:public@shadow:3401
MaxBytes[cacheRequestByteRatio]: 100
AbsMax[cacheRequestByteRatio]: 100
Title[cacheRequestByteRatio]: Byte Hit Ratio @ shadow
Options[cacheRequestByteRatio]: absolute, gauge, noinfo, growright, nopercent
Unscaled[cacheRequestByteRatio]: dwmy
PageTop[cacheRequestByteRatio]: <H1>Byte Hit Ratio @ shadow</H1>
YLegend[cacheRequestByteRatio]: %
ShortLegend[cacheRequestByteRatio]: %
LegendI[cacheRequestByteRatio]: Median Hit Ratio (5min)&nbsp;
LegendO[cacheRequestByteRatio]: Median Hit Ratio (60min)&nbsp;
Legend1[cacheRequestByteRatio]: Median Hit Ratio
Legend2[cacheRequestByteRatio]: Median Hit Ratio

Target[cacheBlockingGetHostByAddr]: cacheBlockingGetHostByAddr&cacheBlockingGetHostByAddr:public@shadow:3401
MaxBytes[cacheBlockingGetHostByAddr]: 1000000000
Title[cacheBlockingGetHostByAddr]: Blocking gethostbyaddr
Options[cacheBlockingGetHostByAddr]: growright, nopercent
PageTop[cacheBlockingGetHostByAddr]: <H1>Blocking gethostbyaddr count @ shadow</H1>
YLegend[cacheBlockingGetHostByAddr]: blocks/sec
ShortLegend[cacheBlockingGetHostByAddr]: blocks/s
LegendI[cacheBlockingGetHostByAddr]: Blocking&nbsp;
LegendO[cacheBlockingGetHostByAddr]:
Legend1[cacheBlockingGetHostByAddr]: Blocking
Legend2[cacheBlockingGetHostByAddr]:
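
# To put this file to work, run mrtg against it every five minutes
# (matching the default interval) and build an index page once.
# The paths below are assumptions; adjust to where the config lives.
#
# crontab entry:
# */5 * * * * /usr/bin/mrtg /etc/mrtg/squid-mrtg.cfg
#
# overview page linking all the graphs:
# indexmaker --output=/srv/www/htdocs/squid-mrtg/index.html /etc/mrtg/squid-mrtg.cfg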





[ISN] The greatest John McAfee email ever

http://venturebeat.com/2014/09/18/the-greatest-john-mcafee-email-ever/ By Richard Byrne Reilly VentureBeat September 18, 2014

At the Defcon security conference in Las Vegas in early August, I waited in line with my esteemed colleague Dean Takahashi for 40 minutes in order to get our pictures taken with perhaps the most unabashed instigator in the history of technology. John McAfee.

McAfee, of course, is the security software legend who founded McAfee, Inc. For nearly a month I had been reaching out to McAfee in order to score an interview about his latest security startup, Brownlist. Brownlist, which aims to help the little guy battle big government, was unveiled at Defcon to a packed house of nearly 700 people who were hanging on McAfee’s every word.

McAfee and I exchanged digits and posed for the picture. And he promised to be in touch. But he never emailed. Or called. Weeks passed. I began to lose interest. Meanwhile, McAfee had been on CNN, Bloomberg, and other channels, railing about technology and how he would once again change the paradigm. […]



[ISN] Attackers poison legitimate apps to infect sensitive industrial control systems

http://arstechnica.com/security/2014/06/attackers-poison-legitimate-apps-to-infect-sensitive-industrial-control-systems/ By Dan Goodin Ars Technica June 24, 2014

Corporate spies have found an effective way to plant their malware on the networks of energy companies and other industrial heavyweights—by hacking the websites of software companies and waiting for the targets to install trojanized versions of legitimate apps.

That’s what operators of the Havex malware family have done with aplomb, according to a report published Tuesday by researchers from antivirus provider F-Secure. Over the past few months, the malware group has taken a specific interest in the types of industrial control systems (ICS) used to automate everything from switches in electrical substations to sensitive equipment in nuclear power plants. In addition to the normal infection channels of spam e-mail, the malware operators have added a new tack—replacing the normal installation files of third-party software with tainted copies that surreptitiously install a remote access trojan (RAT) on the computers of targeted companies.

“It appears the attackers abuse vulnerabilities in the software used to run the websites to break in and replace legitimate software installers available for download to customers,” F-Secure researchers Daavid Hentunen and Antti Tikkanen wrote. “Our research uncovered three software vendor sites that were compromised in this manner. The software installers available on the sites were trojanized to include the Havex RAT. We suspect more similar cases exist but have not been identified yet.” […]



[ISN] The Death and Re-birth of the Full-Disclosure Mail List

http://blog.osvdb.org/2014/03/26/the-death-and-re-birth-of-the-full-disclosure-mail-list/ By jerichoattrition March 26, 2014

After John Cartwright abruptly announced the closure of the Full Disclosure mail list, there was a lot of speculation as to why. I mailed John Cartwright the day after and asked some general questions. In so many words he indicated it was essentially the emotional wear and tear of running the list. While he did not name anyone specifically, the two biggest names being speculated were ‘NetDev’ due to years of being a headache, and the more recent thread started by Nicholas Lemonias.

Through other channels, not via Cartwright, I obtained a copy of a legal threat made against at least one hosting provider for having copies of the mails he sent. This mail was no doubt sent to Cartwright among others. As such, I believe this is the “straw that broke the camel’s back” so to speak. A copy of that mail can be found at the bottom of this post and it should be a stark lesson that disclosure mail list admins are not only facing threats from vendors trying to stifle research, but now security researchers. This includes researchers who openly post to a list, have a full discussion about the issue, desperately attempt to defend their research, and then change their mind and want to erase it all from public record.

As I previously noted, relying on Twitter and Pastebin dumps is not a reliable alternative to a mail list. Others agree with me, including Gordon Lyon, the maintainer of seclists.org and author of Nmap. He has launched a replacement Full Disclosure list to pick up the torch. Note that if you were previously subscribed, the list users were not transferred. You will need to subscribe to the new list if you want to continue participating. The new list will be lightly moderated by a small team of volunteers.

The community owes great thanks to both John and now Gordon for their service in helping to ensure that researchers have an outlet to disclose. Remember, it is a mail list on the surface; behind the scenes, they deal with an incredible number of trolls, headaches, and legal threats. Until you run a list or service like this, you won’t know how emotionally draining it is.

Note: The following mail was voluntarily shared with me and I was granted permission to publish it by a receiving party. It is entirely within my legal right to post this mail.

From: Nicholas Lemonias. (lem.nikolas@googlemail.com)
Date: Tue, Mar 18, 2014 at 9:11 PM
Subject: Abuse from $ISP hosts
To: abuse@

Dear Sirs,

I am writing you to launch an official complaint relating to Data Protection Directives / and Data Protection Act (UK). Therefore my request relates to the retention of personal and confidential information by websites hosted by Secunia. These same information are also shared by UK local and governmental authorities and financial institutions, and thus there are growing concerns of misuse of such information.

Consequently we would like to request that you please delete ALL records containing our personal information (names, emails, etc..) in whole, from your hosted websites (seclists.org) and that distribution of our information is ceased . We have mistakenly posted to the site, and however reserve the creation rights to that thread, and also reserve the right to have all personal information deleted, and ceased from any electronic dissemination, use either partially or in full. I hope that the issue is resolved urgently without the involvement of local authorities.

I look forward to hearing from you soon. Thanks in advance, *Nicholas Lemonias*

Update 7:30P EST: Andrew Wallace (aka NetDev) has released a brief statement regarding Full Disclosure. Further, Nicholas Lemonias has threatened me in various ways in a set of emails, all public now. […]



[ISN] Sloppy Handling Of Patient Data Always A Danger

http://www.informationweek.com/healthcare/security-and-privacy/sloppy-handling-of-patient-data-always-a-danger/d/d-id/899835 By Alex Kane Rudansky InformationWeek 11/18/2013

The rules of the privacy game have changed and the stakes are higher than ever before when protecting patient information in transit. With advancements in both consumer and healthcare technology, protection of patient information is critically important and equally challenging to achieve. Providers want to get information from point A to point B in the easiest way possible, even if it means using insecure email channels and violating the Health Insurance Portability and Accountability Act (HIPAA).

“If it’s going to be secure, it’s going to be harder to deal with,” said Aaron Titus, chief privacy officer and counsel at Identity Finder, a sensitive data management firm. “Doctors and end-users will always find a way to do their jobs following the path of least resistance.”

Most HIPAA violations occur accidentally, usually due to a lack of understanding of the law, which was enacted in 1996 and updated under the 2009 Health Information Technology for Economic and Clinical Health Act (HITECH). […]



[ISN] Defcon Survey: Hackers want more crypto, less NSA

http://www.batemanbanter.com/2013/10/defcon-survey-hackers-want-more-crypto-less-nsa/ By Elinor Mills Bateman Banter October 1, 2013

Hackers are an interesting bunch and somewhat predictable, if I may be so bold as to generalize. Before Defcon this summer, I asked all the hackers I know to participate in a survey about their opinions on a variety of security industry-related topics, and I asked them to spread the word through social channels. It’s taken me a month, but I’ve finally tabulated the results. Many of the findings aren’t shocking, but the passion the respondents have for their work is, frankly, inspiring.

The first thing I learned is that hackers don’t like long surveys. Actually, few people do. Maybe I needed to offer a reward for completing it. Granted, the survey had 34 questions, most multiple choice with a number of them soliciting essay answers. A whopping 96% of the respondents who started the online survey finished it. So, I’d like to say a heartfelt “Thank You!” to those 53 people who took the time to answer all the questions.

And before I dive into the results, I should probably get the demographic data out of the way first because I’d probably be seeing somewhat different responses from younger or less experienced hackers. The respondents’ ages ranged from a low of 27 years to a high of 68, with most in the 25-35 range and a median of 39. The average number of years of experience was 13.5, nearly evenly divided between researcher, IT professional, engineer/programmer and VP-level executive, and more than one-third work at a security provider. So this is a very savvy crowd. Now for the results… […]



[ISN] Detangling the $45 Million Cyberheist

http://www.bankinfosecurity.com/detangling-45-million-cyberheist-a-5759 By Tracy Kitten Bank Info Security May 15, 2013

In the aftermath of the recent news about an international $45 million cyberheist and ATM cash-out scheme, experts say pinpointing the source of such a massive breach can prove to be extremely difficult. That’s because so many different entities are now involved in the global payments chain.

“There are so many parties in the payments chain that it is very difficult to assign blame in these types of breaches,” says financial fraud expert Avivah Litan, an analyst with consultancy Gartner Inc., who blogged about the attack. “There can easily be seven roundtrip hops or more between an ATM cash disbursement request and the cash disbursement. The leakage can happen at any of those points or hops.”

News reports this week named two payments processors that had their networks hacked, leading to the card data compromises in the $45 million cyberheist. But one is claiming it had no data intercepted, and the other has yet to make a statement. Al Pascual, senior security, risk and fraud analyst for Javelin Strategy & Research, says card data could have been obtained through any number of channels. “Couldn’t these criminals just buy the cards legitimately and then breach the processor to alter the limits?” he asks. “Seems easier to me. Obtaining card data is less challenging for criminals than gaining access to a processor and altering their internal controls, though.” […]



[ISN] Defense White Paper Outlines French Cyberwarfare Priorities

http://defense-update.com/20130504_france_livre_blanc_cybersecurity.html Defense Update May 4, 2013

France has recently published a white paper on defense, the ‘Livre Blanc’. It outlines the priorities planned for the next five years in the areas of national defense (land, air, maritime and space) as well as in the areas of homeland security and cyberwarfare. This article covers the main areas addressed by the Livre Blanc’s Cybersecurity and Cyber Warfare sections, highlighting specific emphasis and opportunities. Future posts will also cover other aspects of France’s national security. Defense-Update reports:

The fight against cyber threats was already outlined in the White Paper 2008, addressing the development and implementation of response measures. With the increasing scope and persistence of cyber threats, the need for updating and enhancing those measures is evident, protecting critical infrastructure systems and information databases. Securing and protecting information systems is needed to maintain the uninterrupted and healthy state of the French economy and employment.

Overall, France considers implementing information and cyber security, including detection and identification of perpetrators, as measures of national sovereignty; thus, the French legislators are ready to implement tougher means of cyber surveillance, compared to other European nations such as the UK and Germany. “To achieve this, the government should support scientific skills and technological performance,” the Livre Blanc stated.

As part of the implementation of higher security within government, state and public sector networks, security measures are to be implemented, along with communications channels linking commercial contractors, suppliers and services in the civil sector. A legislative and regulatory framework will be implemented, setting safety standards addressing security threats, guiding operators to take the necessary measures to detect and treat any incident involving their sensitive computer systems. […]

