Tag Archives: managed

My latest Gartner research: Market Opportunity Map: Security and Risk Management Software, Worldwide

20 April 2017  |  The security software market is transforming through four vectors: analytics, adoption of SaaS and managed services, expanded ecosystems, and regulations. Technology business unit leaders must realign their product and go-to-market strategies to address these key forces….

Gartner clients can access this research by clicking here.





My latest Gartner research: Market Insight: Security Market Transformation Disrupted by the Emergence of Smart, Pervasive and Efficient Security

1 February 2017  |  …fits into/addresses these situations. Analysis by Perry Carpenter and Lawrence Pingree … Technologies such as cloud, software-defined networking (SDN), network…or managed services. Analysis by Ruggero Contu, Perry Carpenter and Lawrence Pingree … By 2020, integrated security models, such as…

Gartner clients can access this research by clicking here.



[ISN] The ZeroAccess botnet is back in business

http://www.computerworld.com/article/2877923/the-zeroaccess-botnet-is-back-in-business.html By Lucian Constantin IDG News Service Jan 30, 2015 A peer-to-peer botnet called ZeroAccess came out of a six-month hibernation this month after having survived two takedown attempts by law enforcement and security researchers. At its peak in 2013, ZeroAccess, also known as Sirefef, consisted of more than 1.9 million infected computers that were primarily used for click fraud and Bitcoin mining. That was until security researchers from Symantec found a flaw in the botnet’s resilient peer-to-peer architecture. This architecture allowed the bots to exchange files, instructions and information with each other without the need for central command-and-control servers, which are the Achilles’ heel of most botnets. By exploiting the flaw, Symantec managed to detach over half a million computers from ZeroAccess in July 2013 and launched an effort to clean them up in cooperation with ISPs and CERTs. […]



[ISN] How a 7-year-old girl hacked a public Wi-Fi network in 10 minutes

http://www.information-age.com/technology/security/123458891/how-7-year-old-girl-hacked-public-wi-fi-network-10-minutes By Ben Rossi Information Age 21 January 2015 Free Wi-Fi at a coffee shop or other public space is a welcome sign for the millions of people every day who want to get some work done, make a video call, or just catch up on a bit of online shopping. However, as the results of a new experiment prove, public Wi-Fi is so insecure it can even be hacked by a seven-year-old child – and in just over ten minutes. The ethical hacking experiment was conducted as part of a new Wi-Fi safety public awareness campaign by VPN provider www.hidemyass.com, which aims to highlight just how effortlessly hackers can compromise any of the UK’s almost 270,000 public Wi-Fi spots. With the consent of her family and in a controlled environment, IT-savvy seven-year-old Betsy Davies managed to hack a willing participant’s laptop while they were connected to a purpose-made open Wi-Fi network – designed to replicate those found on the high street. […]



[ISN] ICANN HACKED: Intruders poke around global DNS innards

http://www.theregister.co.uk/2014/12/17/icann_hacked_admin_access_to_zone_files/ By Kieren McCarthy The Register 17 Dec 2014 Domain-name overseer ICANN has been hacked and its DNS zone database compromised, the organization has said. Attackers sent staff spoofed emails appearing to come from icann.org. The organization notes it was a “spear phishing” attack, suggesting employees clicked on a link in the messages that took them to a bogus login page – into which staff typed their usernames and passwords, providing hackers with the keys to their work email accounts. No sign of two-factor authentication, then. “The attack resulted in the compromise of the email credentials of several ICANN staff members,” ICANN’s statement on the matter reads, noting that the attack happened in late November and was discovered a week later. With those details, the hackers then managed to access a number of systems within ICANN, including the Centralized Zone Data System (CZDS), the wiki pages of the Governmental Advisory Committee (GAC), the domain registration Whois portal, and the organization’s blog. […]



[ISN] Cyberattack at JPMorgan Chase Also Hit Website of Bank’s Corporate Race

http://dealbook.nytimes.com/2014/10/15/cyberattack-at-jpmorgan-chase-also-hit-website-of-banks-corporate-race/ By MATTHEW GOLDSTEIN, NICOLE PERLROTH and JESSICA SILVER-GREENBERG The New York Times OCTOBER 15, 2014 The JPMorgan Chase Corporate Challenge, a series of charitable races held each year in big cities across the world, is one of those feel-good events that bring together professionals from scores of big companies. It was also a target for the same cyberthieves who successfully breached the bank’s digital perimeters, compromising the accounts of 76 million households and seven million small businesses, according to people with knowledge of the matter. The JPMorgan Chase Corporate Challenge website, which is managed by an outside vendor, has been conspicuously inaccessible since early August, with visitors to the site seeing only a lonely list of coming races. The link between the breach on that website and the broader attack, which the bank said did not compromise any financial information, has not been previously reported. The bank said it discovered the breach in the Corporate Challenge website on Aug. 7, about a week after it learned of the broader intrusion into its computer network. By infiltrating the race website, hackers were able to gain access to participants’ passwords and contact information, the bank informed those affected. […]



[ISN] How are hospitals handling medical device security?

http://healthitsecurity.com/2014/09/30/how-are-hospitals-handling-medical-device-security/ By Patrick Ouellette Health IT Security September 30, 2014 Dale Nordenberg, moderator of the medical device security panel discussion at this year’s HIMSS Privacy and Security Forum, made an interesting point in saying that medical devices fit somewhere between BioMed, IT and security. Given the likelihood that they fall through the cracks, what are the best ways for healthcare organizations to monitor the risks associated with these devices? Nordenberg, a medical device expert, discussed security experiences and safeguard tactics with panelists Kristopher Kusche, VP of Information Services, Technology Services at Albany Medical Center, and Darren Lacey, Chief Information Security Officer (CISO) of Johns Hopkins University and Johns Hopkins Medicine. The first major topic of conversation was the manner in which Kusche approaches risk assessments for medical devices. Kusche said he had 20,000 medical devices across two hospitals, which outnumber the 18,000 managed IT products, such as computers, the organization has on the network. As a Joint Commission accredited hospital, he said that Albany Medical Center has been assessing every device for risk for a long time because it was a Joint Commission requirement. The only major difference now is the addition of cybersecurity to that risk assessment. “When the FDA released its cybersecurity recommendations in June 2013, we took them to heart,” he said. “After having done full cybersecurity assessments for our IT components and systems for HIPAA, the next logical step was to perform assessments on medical devices.” […]



MRTG Config File for Squid Proxy

Below is my MRTG configuration file for monitoring Squid.
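
The targets below poll a Squid host named shadow on SNMP port 3401 with the community string public, using the Squid MIB loaded via LoadMIBs. For the agent to answer at all, Squid must be built with SNMP support (the --enable-snmp configure option) and have its SNMP agent switched on in squid.conf. A minimal sketch, matching the host, port and community this config assumes:

# squid.conf sketch: enable Squid's built-in SNMP agent
snmp_port 3401
acl snmppublic snmp_community public
snmp_access allow snmppublic localhost
snmp_access deny all

If MRTG polls from a machine other than the cache itself, replace localhost with an ACL matching the poller's address.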


######################################################################
# Multi Router Traffic Grapher -- Squid Configuration File
######################################################################
# This file is for use with mrtg-2.0
#
# Customized for monitoring Squid Cache
# by Chris Miles http://chrismiles.info/
# http://chrismiles.info/unix/mrtg/
# To use:
# - change WorkDir and LoadMIBs settings
# - change all "shadow" occurrences to your squid host
# - change all "chris" occurrences to your name/address
# - change the community strings if required (e.g. "public")
# - change the snmp port if required (e.g. 3401)
#
# Note:
#
# * Keywords must start at the beginning of a line.
#
# * Lines that follow a keyword line and start
# with a blank are appended to the keyword line
#
# * Empty Lines are ignored
#
# * Lines starting with a # sign are comments.
# ####################
# Global Configuration
# ####################

# Where should the logfiles, and webpages be created?
WorkDir: /srv/www/htdocs/squid-mrtg

# --------------------------
# Optional Global Parameters
# --------------------------

# How many seconds apart should the browser (Netscape) be
# instructed to reload the page? If this is not defined, the
# default is 300 seconds (5 minutes).

# Refresh: 600

# How often do you call mrtg? The default is 5 minutes. If
# you call it less often, you should specify it here. This
# does two things:

# a) the generated HTML page contains the right
# information about the calling interval …

# b) a META header in the generated HTML page will instruct
# caches about the time to live of this page …..

# In this example we tell mrtg that we will be calling it
# every 10 minutes. If you are calling mrtg every 5
# minutes, you can leave this line commented out.

# Interval: 10
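
# As an illustration (the paths here are assumptions; adjust them to
# your install), a crontab entry matching the default 5-minute
# interval could look like:
#
# */5 * * * * /usr/bin/mrtg /etc/mrtg/squid.cfg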

# With this switch mrtg will generate .meta files for CERN
# and Apache servers which contain Expiration tags for the
# html and gif files. The *.meta files will be created in
# the same directory as the other files, so you might have
# to set "MetaDir ." in your srm.conf file for this to work
#
# NOTE: If you are running Apache-1.2 you can use the mod_expire
# to achieve the same effect … see the file htaccess-dist

WriteExpires: Yes

# If you want to keep the mrtg icons in some place other than the
# working directory, use the IconDir variable to give its URL.

# IconDir: /mrtgicons/
IconDir: /images/

LoadMIBs: /usr/share/squid/mib.txt

# #################################################
# Configuration for each Target you want to monitor
# #################################################

# The configuration keyword "Target" must be followed by a
# unique name. This will also be the name used for the
# webpages, logfiles and gifs created for that target.

# Note that the "Target" sections can be auto-generated with
# the cfgmaker tool. Check readme.html for instructions.
# ========

##
## Target ----------------------------------------
##

# With the "Target" keyword you tell mrtg what it should
# monitor. The "Target" keyword takes arguments in a wide
# range of formats:

# * The most basic format is "port:community@router"
# This will generate a traffic graph for port 'port'
# of the router 'router' and it will use the community
# 'community' for the snmp query.

# Target[ezwf]: 2:public@wellfleet-fddi.ethz.ch

# * Sometimes you are sitting on the wrong side of the
# link, and you would like mrtg to report incoming
# traffic as outgoing and vice versa. This can be achieved
# by adding a '-' sign in front of the "Target"
# description. It flips the incoming and outgoing traffic rates.

# Target[ezci]: -1:public@ezci-ether.ethz.ch

# * You can also explicitly define the OID to query by using the
# following syntax 'OID_1&OID_2:community@router'
# The following example will retrieve error input and output
# octets/sec on interface 1. MRTG needs to graph two values, so
# you need to specify two OIDs, such as temperature and humidity
# or error input and error output.

# Target[ezwf]: 1.3.6.1.2.1.2.2.1.14.1&1.3.6.1.2.1.2.2.1.20.1:public@myrouter

# * mrtg knows a number of symbolic SNMP variable
# names. See the file mibhelp.txt for a list of known
# names. Two examples are the ifInErrors and ifOutErrors
# names. This means you can specify the above as:

# Target[ezwf]: ifInErrors.1&ifOutErrors.1:public@myrouter

# * If you want to monitor something which does not provide
# data via snmp, you can use an external program to do
# the data gathering.

#
# The external command must return 4 lines of output:
# Line 1 : current state of the 'incoming bytes counter'
# Line 2 : current state of the 'outgoing bytes counter'
# Line 3 : string, telling the uptime of the target.
# Line 4 : string, telling the name of the target.

# Depending on the type of data your script returns you
# might want to use the 'gauge' or 'absolute' arguments
# for the "Options" keyword.

# Target[ezwf]: `/usr/local/bin/df2mrtg /dev/dsk/c0t2d0s0`
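
# As an illustration, the four lines printed by such a script
# (df2mrtg above is just an example name, not a tool shipped with
# mrtg) might look like:
#
# 1234567
# 7654321
# 42 days, 3:14
# shadow.example.com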

# * You can also use several statements in a mathematical
# expression. This could be used to aggregate both B channels
# in an ISDN connection or multiple T1s that are aggregated
# into a single channel for greater bandwidth.
# Note the whitespace around the target definitions.

# Target[ezwf]: 2:public@wellfleetA + 1:public@wellfleetA
# * 4:public@ciscoF

##
## RouterUptime ---------------------------------
##
#
# In cases where you calculate the used bandwidth from
# several interfaces, you normally don't get the router uptime
# and router name displayed on the web page.
# If these interfaces are on the same router and the uptime and
# name should be displayed nevertheless, you have to specify
# its community and address again with the RouterUptime keyword.

# Target[kacisco]: 1:public@194.64.66.250 + 2:public@194.64.66.250
# RouterUptime[kacisco]: public@194.64.66.250

##
## MaxBytes -------------------------------------
##

# How many bytes per second can this port carry. Since most
# links are rated in bits per second, you need to divide
# their maximum bandwidth (in bits) by eight (8) in order to get
# bytes per second. This is very important to make your
# unscaled graphs display realistic information.
# T1 = 193000, 56K = 7000, Ethernet = 1250000. The "MaxBytes"
# value will be used by mrtg to decide whether it got a
# valid response from the router. If a number higher than
# "MaxBytes" is returned, it is ignored. Also read the section
# on AbsMax for further info.

# MaxBytes[ezwf]: 1250000

##
## Title -----------------------------------------
##

# Title for the HTML page which gets generated for the graph.

# Title[ezwf]: Traffic Analysis for ETZ C 95.1

##
## PageTop ---------------------------------------
##

# Things to add to the top of the generated HTML page. Note
# that you can have several lines of text as long as the
# first column is empty.
# Note that the continuation lines will all end up on the same
# line in the html page. If you want linebreaks in the generated
# html use the '\n' sequence.

# PageTop[ezwf]: <H1>Traffic Analysis for ETZ C95.1</H1>
# Our Campus Backbone runs over an FDDI line\n
# with a maximum transfer rate of 12.5 Mega Bytes per
# Second.

##
## PageFoot ---------------------------------------
##

# Things to add at the very end of the mrtg generated html page

# PageFoot[ezwf]: <HR size=2 noshade>This page is managed by Blubber

# --------------------------------------------------
# Optional Target Configuration Tags
# --------------------------------------------------

##
## AddHead -----------------------------------
##

# Use this tag like the PageTop header, but its contents
# will be added between </TITLE> and </HEAD>.

# AddHead[ezwf]: <!-- Just a comment for fun -->

##
## AbsMax ------------------------------------
##

# If you are monitoring a link which can handle more traffic
# than the MaxBytes value, e.g. a line which uses compression
# or a frame relay link, you can use the AbsMax keyword
# to give the absolute maximum value ever to be reached. We
# need to know this in order to sort out unrealistic values
# returned by the routers. If you do not set AbsMax, rateup
# will ignore values higher than MaxBytes.

# AbsMax[ezwf]: 2500000

##
## Unscaled ------------------------------------
##

# By default each graph is scaled vertically to make the
# actual data visible even when it is much lower than
# MaxBytes. With the "Unscaled" variable you can suppress
# this. Its argument is a string containing one letter
# for each graph you don't want to be scaled: d=day w=week
# m=month y=year. In the example I suppress scaling for the
# yearly and the monthly graph.

# Unscaled[ezwf]: ym

##
## WithPeak ------------------------------------
##

# By default the graphs only contain the average transfer
# rates for incoming and outgoing traffic. The
# following option instructs mrtg to display the peak
# 5 minute transfer rates in the [w]eekly, [m]onthly and
# [y]early graph. In the example we define the monthly
# and the yearly graph to contain peak as well as average
# values.

# WithPeak[ezwf]: ym

##
## Suppress ------------------------------------
##

# By default mrtg produces 4 graphs. With this option you
# can suppress the generation of selected graphs. The format
# is analogous to the above option. In this example we suppress
# the yearly graph as it is quite empty in the beginning.

# Suppress[ezwf]: y

##
## Directory
##

# By default, mrtg puts all the files that it generates for each
# router (the GIFs, the HTML page, the log file, etc.) in WorkDir.
# If the "Directory" option is specified, the files are instead put
# into a directory under WorkDir. (For example, given the options in
# this mrtg.cfg-dist file, the "Directory" option below would cause all
# the ezwf files to be put into /usr/tardis/pub/www/stats/mrtg/ezwf .)
#
# The directory must already exist; mrtg will not create it.

# Directory[ezwf]: ezwf

##
## XSize and YSize ------------------------------------
##

# By default mrtg's graphs are 100 pixels high by 400 wide (plus
# some more for the labels). In the example we get almost
# square graphs …
# Note: XSize must be between 20 and 600
# YSize must be larger than 20

# XSize[ezwf]: 300
# YSize[ezwf]: 300

##
## XZoom YZoom -------------------------------------------------
##

# If you want your graphs to have larger pixels, you can
# "Zoom" them.

#XZoom[ezwf]: 2.0
#YZoom[ezwf]: 2.0

##
## XScale YScale -------------------------------------------------
##

# If you want your graphs to be actually scaled use XScale
# and YScale. (Beware: while this works, the results look ugly,
# to be frank, so if someone wants to fix this, patches are
# welcome.)

# XScale[ezwf]: 1.5
# YScale[ezwf]: 1.5
##
## Step -----------------------------------------------------------
##

# Change the default step from 5 * 60 seconds to
# something else. I have not tested this well …

# Step[ezwf]: 60

##
## Options ------------------------------------
##

# The "Options" keyword allows you to set some boolean
# switches:
#
# growright - The graph grows to the left by default;
# this option makes it grow to the right instead.
#
# bits - All the numbers printed are in bits instead
# of bytes … looks much more impressive :-)
#
# noinfo - Suppress the information about uptime and
# device name in the generated webpage.
#
# absolute - This is for data sources which reset their
# value when they are read. This means that
# rateup does not have to build the difference between
# this and the last value read from the data
# source. Useful for external data gatherers.
#
# gauge - Treat the values gathered from the target as absolute
# and not as counters. This would be useful to
# monitor things like diskspace, load and so
# on ….
#
# nopercent - Don't print usage percentages.
#
# integer - Print only integers in the summary.
#

# Options[ezwf]: growright, bits

##
## Colours ------------------------------------
##

# The "Colours" tag allows you to override the default colour
# scheme. Note: All 4 of the required colours must be
# specified here. The colour name ('Colourx' below) is the
# legend name displayed, while the RGB value is the real
# colour used for the display, both on the graph and in the
# html doc.

# Format is: Colour1#RRGGBB,Colour2#RRGGBB,Colour3#RRGGBB,Colour4#RRGGBB
# where: Colour1 = Input on default graph
# Colour2 = Output on default graph
# Colour3 = Max input
# Colour4 = Max output
# RRGGBB = 2 digit hex values for Red, Green and Blue

# Colours[ezwf]: GREEN#00eb0c,BLUE#1000ff,DARK GREEN#006600,VIOLET#ff00ff

##
## Background ------------------------------------
##

# With the "Background" tag you can configure the background
# colour of the generated HTML page

# Background[ezwf]: #a0a0a0

##
## YLegend, ShortLegend, Legend[1234] ------------------
##

# The following keywords allow you to override the text
# displayed for the various legends of the graph and in the
# HTML document
#
# * YLegend : The Y-Axis of the graph
# * ShortLegend: The 'b/s' string used for Max, Average and Current
# * Legend[1234IO]: The strings for the colour legend
#
#YLegend[ezwf]: Bits per Second
#ShortLegend[ezwf]: b/s
#Legend1[ezwf]: Incoming Traffic in Bits per Second
#Legend2[ezwf]: Outgoing Traffic in Bits per Second
#Legend3[ezwf]: Maximal 5 Minute Incoming Traffic
#Legend4[ezwf]: Maximal 5 Minute Outgoing Traffic
#LegendI[ezwf]: &nbsp;In:
#LegendO[ezwf]: &nbsp;Out:
# Note, if LegendI or LegendO are set to an empty string with
# LegendO[ezwf]:
# The corresponding line below the graph will not be printed at all.

# If you live in an international world, you might want to
# generate the graphs in different timezones. This is set in the
# TZ variable. Under certain operating systems like Solaris,
# this will provoke the localtime call to give the time in
# the selected timezone …

# Timezone[ezwf]: Japan

# The Timezone is the standard Solaris timezone, i.e. Japan, Hongkong,
# GMT, GMT+1 etc.

# By default, mrtg (actually rateup) uses the strftime(3) '%W' option
# to format week numbers in the monthly graphs. The exact semantics
# of this format option vary between systems. If you find that the
# week numbers are wrong, and your system's strftime(3) routine
# supports it, you can try another format option. The POSIX '%V'
# option seems to correspond to a widely used week numbering
# convention. The week format character should be specified as a
# single letter; either W, V, or U.

# Weekformat[ezwf]: V

# #############################
# Two very special Target names
# #############################

# To save yourself some typing you can define a target
# called '^'. The text of every Keyword you define for this
# target will be PREPENDED to the corresponding Keyword of
# all the targets defined below this line. The same goes for
# a Target called '$' but its options will be APPENDED.
#
# The example will make mrtg use a common header and a
# common contact person in all the pages generated from
# targets defined later in this file.
#
#PageTop[^]: <H1>Traffic Stats</H1><HR>
#PageTop[$]: Contact Peter Norton if you have any questions<HR>

PageFoot[^]: <i>Page managed by GeekGuy</i>

Target[cacheServerRequests]: cacheServerRequests&cacheServerRequests:public@shadow:3401
MaxBytes[cacheServerRequests]: 10000000
Title[cacheServerRequests]: Server Requests @ shadow
Options[cacheServerRequests]: growright, nopercent
PageTop[cacheServerRequests]: <h1>Server Requests @ shadow</h1>
YLegend[cacheServerRequests]: requests/sec
ShortLegend[cacheServerRequests]: req/s
LegendI[cacheServerRequests]: Requests&nbsp;
LegendO[cacheServerRequests]:
Legend1[cacheServerRequests]: Requests
Legend2[cacheServerRequests]:

Target[cacheServerErrors]: cacheServerErrors&cacheServerErrors:public@shadow:3401
MaxBytes[cacheServerErrors]: 10000000
Title[cacheServerErrors]: Server Errors @ shadow
Options[cacheServerErrors]: growright, nopercent
PageTop[cacheServerErrors]: <H1>Server Errors @ shadow</H1>
YLegend[cacheServerErrors]: errors/sec
ShortLegend[cacheServerErrors]: err/s
LegendI[cacheServerErrors]: Errors&nbsp;
LegendO[cacheServerErrors]:
Legend1[cacheServerErrors]: Errors
Legend2[cacheServerErrors]:

Target[cacheServerInOutKb]: cacheServerInKb&cacheServerOutKb:public@shadow:3401 * 1024
MaxBytes[cacheServerInOutKb]: 1000000000
Title[cacheServerInOutKb]: Server In/Out Traffic @ shadow
Options[cacheServerInOutKb]: growright, nopercent
PageTop[cacheServerInOutKb]: <H1>Server In/Out Traffic @ shadow</H1>
YLegend[cacheServerInOutKb]: Bytes/sec
ShortLegend[cacheServerInOutKb]: Bytes/s
LegendI[cacheServerInOutKb]: Server In&nbsp;
LegendO[cacheServerInOutKb]: Server Out&nbsp;
Legend1[cacheServerInOutKb]: Server In
Legend2[cacheServerInOutKb]: Server Out

Target[cacheClientHttpRequests]: cacheClientHttpRequests&cacheClientHttpRequests:public@shadow:3401
MaxBytes[cacheClientHttpRequests]: 10000000
Title[cacheClientHttpRequests]: Client Http Requests @ shadow
Options[cacheClientHttpRequests]: growright, nopercent
PageTop[cacheClientHttpRequests]: <H1>Client Http Requests @ shadow</H1>
YLegend[cacheClientHttpRequests]: requests/sec
ShortLegend[cacheClientHttpRequests]: req/s
LegendI[cacheClientHttpRequests]: Requests&nbsp;
LegendO[cacheClientHttpRequests]:
Legend1[cacheClientHttpRequests]: Requests
Legend2[cacheClientHttpRequests]:

Target[cacheHttpHits]: cacheHttpHits&cacheHttpHits:public@shadow:3401
MaxBytes[cacheHttpHits]: 10000000
Title[cacheHttpHits]: HTTP Hits @ shadow
Options[cacheHttpHits]: growright, nopercent
PageTop[cacheHttpHits]: <H1>HTTP Hits @ shadow</H1>
YLegend[cacheHttpHits]: hits/sec
ShortLegend[cacheHttpHits]: hits/s
LegendI[cacheHttpHits]: Hits&nbsp;
LegendO[cacheHttpHits]:
Legend1[cacheHttpHits]: Hits
Legend2[cacheHttpHits]:

Target[cacheHttpErrors]: cacheHttpErrors&cacheHttpErrors:public@shadow:3401
MaxBytes[cacheHttpErrors]: 10000000
Title[cacheHttpErrors]: HTTP Errors @ shadow
Options[cacheHttpErrors]: growright, nopercent
PageTop[cacheHttpErrors]: <H1>HTTP Errors @ shadow</H1>
YLegend[cacheHttpErrors]: errors/sec
ShortLegend[cacheHttpErrors]: err/s
LegendI[cacheHttpErrors]: Errors&nbsp;
LegendO[cacheHttpErrors]:
Legend1[cacheHttpErrors]: Errors
Legend2[cacheHttpErrors]:

Target[cacheIcpPktsSentRecv]: cacheIcpPktsSent&cacheIcpPktsRecv:public@shadow:3401
MaxBytes[cacheIcpPktsSentRecv]: 10000000
Title[cacheIcpPktsSentRecv]: ICP Packets Sent/Received
Options[cacheIcpPktsSentRecv]: growright, nopercent
PageTop[cacheIcpPktsSentRecv]: <H1>ICP Packets Sent/Received @ shadow</H1>
YLegend[cacheIcpPktsSentRecv]: packets/sec
ShortLegend[cacheIcpPktsSentRecv]: pkts/s
LegendI[cacheIcpPktsSentRecv]: Pkts Sent&nbsp;
LegendO[cacheIcpPktsSentRecv]: Pkts Received&nbsp;
Legend1[cacheIcpPktsSentRecv]: Pkts Sent
Legend2[cacheIcpPktsSentRecv]: Pkts Received

Target[cacheIcpKbSentRecv]: cacheIcpKbSent&cacheIcpKbRecv:public@shadow:3401 * 1024
MaxBytes[cacheIcpKbSentRecv]: 1000000000
Title[cacheIcpKbSentRecv]: ICP Bytes Sent/Received
Options[cacheIcpKbSentRecv]: growright, nopercent
PageTop[cacheIcpKbSentRecv]: <H1>ICP Bytes Sent/Received @ shadow</H1>
YLegend[cacheIcpKbSentRecv]: Bytes/sec
ShortLegend[cacheIcpKbSentRecv]: Bytes/s
LegendI[cacheIcpKbSentRecv]: Sent&nbsp;
LegendO[cacheIcpKbSentRecv]: Received&nbsp;
Legend1[cacheIcpKbSentRecv]: Sent
Legend2[cacheIcpKbSentRecv]: Received

Target[cacheHttpInOutKb]: cacheHttpInKb&cacheHttpOutKb:public@shadow:3401 * 1024
MaxBytes[cacheHttpInOutKb]: 1000000000
Title[cacheHttpInOutKb]: HTTP In/Out Traffic @ shadow
Options[cacheHttpInOutKb]: growright, nopercent
PageTop[cacheHttpInOutKb]: <H1>HTTP In/Out Traffic @ shadow</H1>
YLegend[cacheHttpInOutKb]: Bytes/second
ShortLegend[cacheHttpInOutKb]: Bytes/s
LegendI[cacheHttpInOutKb]: HTTP In&nbsp;
LegendO[cacheHttpInOutKb]: HTTP Out&nbsp;
Legend1[cacheHttpInOutKb]: HTTP In
Legend2[cacheHttpInOutKb]: HTTP Out

Target[cacheCurrentSwapSize]: cacheCurrentSwapSize&cacheCurrentSwapSize:public@shadow:3401
MaxBytes[cacheCurrentSwapSize]: 1000000000
Title[cacheCurrentSwapSize]: Current Swap Size @ shadow
Options[cacheCurrentSwapSize]: gauge, growright, nopercent
PageTop[cacheCurrentSwapSize]: <H1>Current Swap Size @ shadow</H1>
YLegend[cacheCurrentSwapSize]: swap size
ShortLegend[cacheCurrentSwapSize]: Bytes
LegendI[cacheCurrentSwapSize]: Swap Size&nbsp;
LegendO[cacheCurrentSwapSize]:
Legend1[cacheCurrentSwapSize]: Swap Size
Legend2[cacheCurrentSwapSize]:

Target[cacheNumObjCount]: cacheNumObjCount&cacheNumObjCount:public@shadow:3401
MaxBytes[cacheNumObjCount]: 10000000
Title[cacheNumObjCount]: Num Object Count @ shadow
Options[cacheNumObjCount]: gauge, growright, nopercent
PageTop[cacheNumObjCount]: <H1>Num Object Count @ shadow</H1>
YLegend[cacheNumObjCount]: # of objects
ShortLegend[cacheNumObjCount]: objects
LegendI[cacheNumObjCount]: Num Objects&nbsp;
LegendO[cacheNumObjCount]:
Legend1[cacheNumObjCount]: Num Objects
Legend2[cacheNumObjCount]:

Target[cacheCpuUsage]: cacheCpuUsage&cacheCpuUsage:public@shadow:3401
MaxBytes[cacheCpuUsage]: 100
AbsMax[cacheCpuUsage]: 100
Title[cacheCpuUsage]: CPU Usage @ shadow
Options[cacheCpuUsage]: absolute, gauge, noinfo, growright, nopercent
Unscaled[cacheCpuUsage]: dwmy
PageTop[cacheCpuUsage]: <H1>CPU Usage @ shadow</H1>
YLegend[cacheCpuUsage]: usage %
ShortLegend[cacheCpuUsage]: %
LegendI[cacheCpuUsage]: CPU Usage&nbsp;
LegendO[cacheCpuUsage]:
Legend1[cacheCpuUsage]: CPU Usage
Legend2[cacheCpuUsage]:

Target[cacheMemUsage]: cacheMemUsage&cacheMemUsage:public@shadow:3401 * 1024
MaxBytes[cacheMemUsage]: 2000000000
Title[cacheMemUsage]: Memory Usage
Options[cacheMemUsage]: gauge, growright, nopercent
PageTop[cacheMemUsage]: <H1>Total memory accounted for @ shadow</H1>
YLegend[cacheMemUsage]: Bytes
ShortLegend[cacheMemUsage]: Bytes
LegendI[cacheMemUsage]: Mem Usage&nbsp;
LegendO[cacheMemUsage]:
Legend1[cacheMemUsage]: Mem Usage
Legend2[cacheMemUsage]:

Target[cacheSysPageFaults]: cacheSysPageFaults&cacheSysPageFaults:public@shadow:3401
MaxBytes[cacheSysPageFaults]: 10000000
Title[cacheSysPageFaults]: Sys Page Faults @ shadow
Options[cacheSysPageFaults]: growright, nopercent
PageTop[cacheSysPageFaults]: <H1>Sys Page Faults @ shadow</H1>
YLegend[cacheSysPageFaults]: page faults/sec
ShortLegend[cacheSysPageFaults]: PF/s
LegendI[cacheSysPageFaults]: Page Faults&nbsp;
LegendO[cacheSysPageFaults]:
Legend1[cacheSysPageFaults]: Page Faults
Legend2[cacheSysPageFaults]:

Target[cacheSysVMsize]: cacheSysVMsize&cacheSysVMsize:public@shadow:3401 * 1024
MaxBytes[cacheSysVMsize]: 1000000000
Title[cacheSysVMsize]: Storage Mem Size @ shadow
Options[cacheSysVMsize]: gauge, growright, nopercent
PageTop[cacheSysVMsize]: <H1>Storage Mem Size @ shadow</H1>
YLegend[cacheSysVMsize]: mem size
ShortLegend[cacheSysVMsize]: Bytes
LegendI[cacheSysVMsize]: Mem Size&nbsp;
LegendO[cacheSysVMsize]:
Legend1[cacheSysVMsize]: Mem Size
Legend2[cacheSysVMsize]:

Target[cacheSysStorage]: cacheSysStorage&cacheSysStorage:public@shadow:3401
MaxBytes[cacheSysStorage]: 1000000000
Title[cacheSysStorage]: Storage Swap Size @ shadow
Options[cacheSysStorage]: gauge, growright, nopercent
PageTop[cacheSysStorage]: <H1>Storage Swap Size @ shadow</H1>
YLegend[cacheSysStorage]: swap size (KB)
ShortLegend[cacheSysStorage]: KBytes
LegendI[cacheSysStorage]: Swap Size&nbsp;
LegendO[cacheSysStorage]:
Legend1[cacheSysStorage]: Swap Size
Legend2[cacheSysStorage]:

Target[cacheSysNumReads]: cacheSysNumReads&cacheSysNumReads:public@shadow:3401
MaxBytes[cacheSysNumReads]: 10000000
Title[cacheSysNumReads]: HTTP I/O number of reads @ shadow
Options[cacheSysNumReads]: growright, nopercent
PageTop[cacheSysNumReads]: <H1>HTTP I/O number of reads @ shadow</H1>
YLegend[cacheSysNumReads]: reads/sec
ShortLegend[cacheSysNumReads]: reads/s
LegendI[cacheSysNumReads]: I/O&nbsp;
LegendO[cacheSysNumReads]:
Legend1[cacheSysNumReads]: I/O
Legend2[cacheSysNumReads]:

Target[cacheCpuTime]: cacheCpuTime&cacheCpuTime:public@shadow:3401
MaxBytes[cacheCpuTime]: 1000000000
Title[cacheCpuTime]: Cpu Time
Options[cacheCpuTime]: gauge, growright, nopercent
PageTop[cacheCpuTime]: <H1>Amount of cpu seconds consumed @ shadow</H1>
YLegend[cacheCpuTime]: cpu seconds
ShortLegend[cacheCpuTime]: cpu seconds
LegendI[cacheCpuTime]: CPU Time&nbsp;
LegendO[cacheCpuTime]:
Legend1[cacheCpuTime]: CPU Time
Legend2[cacheCpuTime]:

Target[cacheMaxResSize]: cacheMaxResSize&cacheMaxResSize:public@shadow:3401 * 1024
MaxBytes[cacheMaxResSize]: 1000000000
Title[cacheMaxResSize]: Max Resident Size
Options[cacheMaxResSize]: gauge, growright, nopercent
PageTop[cacheMaxResSize]: <H1>Maximum Resident Size @ shadow</H1>
YLegend[cacheMaxResSize]: Bytes
ShortLegend[cacheMaxResSize]: Bytes
LegendI[cacheMaxResSize]: Size&nbsp;
LegendO[cacheMaxResSize]:
Legend1[cacheMaxResSize]: Size
Legend2[cacheMaxResSize]:

Target[cacheCurrentLRUExpiration]: cacheCurrentLRUExpiration&cacheCurrentLRUExpiration:public@shadow:3401
MaxBytes[cacheCurrentLRUExpiration]: 1000000000
Title[cacheCurrentLRUExpiration]: LRU Expiration Age
Options[cacheCurrentLRUExpiration]: gauge, growright, nopercent
PageTop[cacheCurrentLRUExpiration]: <H1>Storage LRU Expiration Age @ shadow</H1>
YLegend[cacheCurrentLRUExpiration]: expir (days)
ShortLegend[cacheCurrentLRUExpiration]: days
LegendI[cacheCurrentLRUExpiration]: Age&nbsp;
LegendO[cacheCurrentLRUExpiration]:
Legend1[cacheCurrentLRUExpiration]: Age
Legend2[cacheCurrentLRUExpiration]:

Target[cacheCurrentUnlinkRequests]: cacheCurrentUnlinkRequests&cacheCurrentUnlinkRequests:public@shadow:3401
MaxBytes[cacheCurrentUnlinkRequests]: 1000000000
Title[cacheCurrentUnlinkRequests]: Unlinkd Requests
Options[cacheCurrentUnlinkRequests]: growright, nopercent
PageTop[cacheCurrentUnlinkRequests]: <H1>Requests given to unlinkd @ shadow</H1>
YLegend[cacheCurrentUnlinkRequests]: requests/sec
ShortLegend[cacheCurrentUnlinkRequests]: reqs/s
LegendI[cacheCurrentUnlinkRequests]: Unlinkd requests&nbsp;
LegendO[cacheCurrentUnlinkRequests]:
Legend1[cacheCurrentUnlinkRequests]: Unlinkd requests
Legend2[cacheCurrentUnlinkRequests]:

Target[cacheCurrentUnusedFileDescrCount]: cacheCurrentUnusedFileDescrCount&cacheCurrentUnusedFileDescrCount:public@shadow:3401
MaxBytes[cacheCurrentUnusedFileDescrCount]: 1000000000
Title[cacheCurrentUnusedFileDescrCount]: Available File Descriptors
Options[cacheCurrentUnusedFileDescrCount]: gauge, growright, nopercent
PageTop[cacheCurrentUnusedFileDescrCount]: <H1>Available number of file descriptors @ shadow</H1>
YLegend[cacheCurrentUnusedFileDescrCount]: # of FDs
ShortLegend[cacheCurrentUnusedFileDescrCount]: FDs
LegendI[cacheCurrentUnusedFileDescrCount]: File Descriptors&nbsp;
LegendO[cacheCurrentUnusedFileDescrCount]:
Legend1[cacheCurrentUnusedFileDescrCount]: File Descriptors
Legend2[cacheCurrentUnusedFileDescrCount]:

Target[cacheCurrentReservedFileDescrCount]: cacheCurrentReservedFileDescrCount&cacheCurrentReservedFileDescrCount:public@shadow:3401
MaxBytes[cacheCurrentReservedFileDescrCount]: 1000000000
Title[cacheCurrentReservedFileDescrCount]: Reserved File Descriptors
Options[cacheCurrentReservedFileDescrCount]: gauge, growright, nopercent
PageTop[cacheCurrentReservedFileDescrCount]: <H1>Reserved number of file descriptors @ shadow</H1>
YLegend[cacheCurrentReservedFileDescrCount]: # of FDs
ShortLegend[cacheCurrentReservedFileDescrCount]: FDs
LegendI[cacheCurrentReservedFileDescrCount]: File Descriptors&nbsp;
LegendO[cacheCurrentReservedFileDescrCount]:
Legend1[cacheCurrentReservedFileDescrCount]: File Descriptors
Legend2[cacheCurrentReservedFileDescrCount]:

Target[cacheClients]: cacheClients&cacheClients:public@shadow:3401
MaxBytes[cacheClients]: 1000000000
Title[cacheClients]: Number of Clients
Options[cacheClients]: gauge, growright, nopercent
PageTop[cacheClients]: <H1>Number of clients accessing cache @ shadow</H1>
YLegend[cacheClients]: clients
ShortLegend[cacheClients]: clients
LegendI[cacheClients]: Num Clients&nbsp;
LegendO[cacheClients]:
Legend1[cacheClients]: Num Clients
Legend2[cacheClients]:

Target[cacheHttpAllSvcTime]: cacheHttpAllSvcTime.5&cacheHttpAllSvcTime.60:public@shadow:3401
MaxBytes[cacheHttpAllSvcTime]: 1000000000
Title[cacheHttpAllSvcTime]: HTTP All Service Time
Options[cacheHttpAllSvcTime]: gauge, growright, nopercent
PageTop[cacheHttpAllSvcTime]: <H1>HTTP all service time @ shadow</H1>
YLegend[cacheHttpAllSvcTime]: svc time (ms)
ShortLegend[cacheHttpAllSvcTime]: ms
LegendI[cacheHttpAllSvcTime]: Median Svc Time (5min)&nbsp;
LegendO[cacheHttpAllSvcTime]: Median Svc Time (60min)&nbsp;
Legend1[cacheHttpAllSvcTime]: Median Svc Time
Legend2[cacheHttpAllSvcTime]: Median Svc Time

Target[cacheHttpMissSvcTime]: cacheHttpMissSvcTime.5&cacheHttpMissSvcTime.60:public@shadow:3401
MaxBytes[cacheHttpMissSvcTime]: 1000000000
Title[cacheHttpMissSvcTime]: HTTP Miss Service Time
Options[cacheHttpMissSvcTime]: gauge, growright, nopercent
PageTop[cacheHttpMissSvcTime]: <H1>HTTP miss service time @ shadow</H1>
YLegend[cacheHttpMissSvcTime]: svc time (ms)
ShortLegend[cacheHttpMissSvcTime]: ms
LegendI[cacheHttpMissSvcTime]: Median Svc Time (5min)&nbsp;
LegendO[cacheHttpMissSvcTime]: Median Svc Time (60min)&nbsp;
Legend1[cacheHttpMissSvcTime]: Median Svc Time
Legend2[cacheHttpMissSvcTime]: Median Svc Time

Target[cacheHttpNmSvcTime]: cacheHttpNmSvcTime.5&cacheHttpNmSvcTime.60:public@shadow:3401
MaxBytes[cacheHttpNmSvcTime]: 1000000000
Title[cacheHttpNmSvcTime]: HTTP Near Miss Service Time
Options[cacheHttpNmSvcTime]: gauge, growright, nopercent
PageTop[cacheHttpNmSvcTime]: <H1>HTTP near miss service time @ shadow</H1>
YLegend[cacheHttpNmSvcTime]: svc time (ms)
ShortLegend[cacheHttpNmSvcTime]: ms
LegendI[cacheHttpNmSvcTime]: Median Svc Time (5min)&nbsp;
LegendO[cacheHttpNmSvcTime]: Median Svc Time (60min)&nbsp;
Legend1[cacheHttpNmSvcTime]: Median Svc Time
Legend2[cacheHttpNmSvcTime]: Median Svc Time

Target[cacheHttpHitSvcTime]: cacheHttpHitSvcTime.5&cacheHttpHitSvcTime.60:public@shadow:3401
MaxBytes[cacheHttpHitSvcTime]: 1000000000
Title[cacheHttpHitSvcTime]: HTTP Hit Service Time
Options[cacheHttpHitSvcTime]: gauge, growright, nopercent
PageTop[cacheHttpHitSvcTime]: <H1>HTTP hit service time @ shadow</H1>
YLegend[cacheHttpHitSvcTime]: svc time (ms)
ShortLegend[cacheHttpHitSvcTime]: ms
LegendI[cacheHttpHitSvcTime]: Median Svc Time (5min)&nbsp;
LegendO[cacheHttpHitSvcTime]: Median Svc Time (60min)&nbsp;
Legend1[cacheHttpHitSvcTime]: Median Svc Time
Legend2[cacheHttpHitSvcTime]: Median Svc Time

Target[cacheIcpQuerySvcTime]: cacheIcpQuerySvcTime.5&cacheIcpQuerySvcTime.60:public@shadow:3401
MaxBytes[cacheIcpQuerySvcTime]: 1000000000
Title[cacheIcpQuerySvcTime]: ICP Query Service Time
Options[cacheIcpQuerySvcTime]: gauge, growright, nopercent
PageTop[cacheIcpQuerySvcTime]: <H1>ICP query service time @ shadow</H1>
YLegend[cacheIcpQuerySvcTime]: svc time (ms)
ShortLegend[cacheIcpQuerySvcTime]: ms
LegendI[cacheIcpQuerySvcTime]: Median Svc Time (5min)&nbsp;
LegendO[cacheIcpQuerySvcTime]: Median Svc Time (60min)&nbsp;
Legend1[cacheIcpQuerySvcTime]: Median Svc Time
Legend2[cacheIcpQuerySvcTime]: Median Svc Time

Target[cacheIcpReplySvcTime]: cacheIcpReplySvcTime.5&cacheIcpReplySvcTime.60:public@shadow:3401
MaxBytes[cacheIcpReplySvcTime]: 1000000000
Title[cacheIcpReplySvcTime]: ICP Reply Service Time
Options[cacheIcpReplySvcTime]: gauge, growright, nopercent
PageTop[cacheIcpReplySvcTime]: <H1>ICP reply service time @ shadow</H1>
YLegend[cacheIcpReplySvcTime]: svc time (ms)
ShortLegend[cacheIcpReplySvcTime]: ms
LegendI[cacheIcpReplySvcTime]: Median Svc Time (5min)&nbsp;
LegendO[cacheIcpReplySvcTime]: Median Svc Time (60min)&nbsp;
Legend1[cacheIcpReplySvcTime]: Median Svc Time
Legend2[cacheIcpReplySvcTime]: Median Svc Time

Target[cacheDnsSvcTime]: cacheDnsSvcTime.5&cacheDnsSvcTime.60:public@shadow:3401
MaxBytes[cacheDnsSvcTime]: 1000000000
Title[cacheDnsSvcTime]: DNS Service Time
Options[cacheDnsSvcTime]: gauge, growright, nopercent
PageTop[cacheDnsSvcTime]: <H1>DNS service time @ shadow</H1>
YLegend[cacheDnsSvcTime]: svc time (ms)
ShortLegend[cacheDnsSvcTime]: ms
LegendI[cacheDnsSvcTime]: Median Svc Time (5min)&nbsp;
LegendO[cacheDnsSvcTime]: Median Svc Time (60min)&nbsp;
Legend1[cacheDnsSvcTime]: Median Svc Time
Legend2[cacheDnsSvcTime]: Median Svc Time

Target[cacheRequestHitRatio]: cacheRequestHitRatio.5&cacheRequestHitRatio.60:public@shadow:3401
MaxBytes[cacheRequestHitRatio]: 100
AbsMax[cacheRequestHitRatio]: 100
Title[cacheRequestHitRatio]: Request Hit Ratio @ shadow
Options[cacheRequestHitRatio]: absolute, gauge, noinfo, growright, nopercent
Unscaled[cacheRequestHitRatio]: dwmy
PageTop[cacheRequestHitRatio]: <H1>Request Hit Ratio @ shadow</H1>
YLegend[cacheRequestHitRatio]: %
ShortLegend[cacheRequestHitRatio]: %
LegendI[cacheRequestHitRatio]: Median Hit Ratio (5min)&nbsp;
LegendO[cacheRequestHitRatio]: Median Hit Ratio (60min)&nbsp;
Legend1[cacheRequestHitRatio]: Median Hit Ratio
Legend2[cacheRequestHitRatio]: Median Hit Ratio

Target[cacheRequestByteRatio]: cacheRequestByteRatio.5&cacheRequestByteRatio.60:public@shadow:3401
MaxBytes[cacheRequestByteRatio]: 100
AbsMax[cacheRequestByteRatio]: 100
Title[cacheRequestByteRatio]: Byte Hit Ratio @ shadow
Options[cacheRequestByteRatio]: absolute, gauge, noinfo, growright, nopercent
Unscaled[cacheRequestByteRatio]: dwmy
PageTop[cacheRequestByteRatio]: <H1>Byte Hit Ratio @ shadow</H1>
YLegend[cacheRequestByteRatio]: %
ShortLegend[cacheRequestByteRatio]: %
LegendI[cacheRequestByteRatio]: Median Hit Ratio (5min)&nbsp;
LegendO[cacheRequestByteRatio]: Median Hit Ratio (60min)&nbsp;
Legend1[cacheRequestByteRatio]: Median Hit Ratio
Legend2[cacheRequestByteRatio]: Median Hit Ratio

Target[cacheBlockingGetHostByAddr]: cacheBlockingGetHostByAddr&cacheBlockingGetHostByAddr:public@shadow:3401
MaxBytes[cacheBlockingGetHostByAddr]: 1000000000
Title[cacheBlockingGetHostByAddr]: Blocking gethostbyaddr
Options[cacheBlockingGetHostByAddr]: growright, nopercent
PageTop[cacheBlockingGetHostByAddr]: <H1>Blocking gethostbyaddr count @ shadow</H1>
YLegend[cacheBlockingGetHostByAddr]: blocks/sec
ShortLegend[cacheBlockingGetHostByAddr]: blocks/s
LegendI[cacheBlockingGetHostByAddr]: Blocking&nbsp;
LegendO[cacheBlockingGetHostByAddr]:
Legend1[cacheBlockingGetHostByAddr]: Blocking
Legend2[cacheBlockingGetHostByAddr]:
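
# Once mrtg has run against this file at least once, a summary index
# page linking all of the graphs above can be built with the
# indexmaker tool that ships with mrtg. A sketch, assuming this file
# is saved as /etc/mrtg/squid.cfg (the output path matches the
# WorkDir set at the top of this file):
#
# indexmaker --output=/srv/www/htdocs/squid-mrtg/index.html /etc/mrtg/squid.cfg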

