Friday, December 27, 2013

How can I chroot sftp-only SSH users into their homes?

All this pain is thanks to several security issues as detailed here.

Basically the chroot directory has to be owned by root and can't have any group-write access. Lovely. So you essentially need to turn your chroot into a holding cell, and within that you can have your editable content.
sudo chown root /home/bob
sudo chmod go-w /home/bob
sudo mkdir /home/bob/writeable
sudo chown bob:sftponly /home/bob/writeable
sudo chmod ug+rwX /home/bob/writeable

And bam, you can log in and write in /writeable.
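For context, the chroot itself is typically enabled in /etc/ssh/sshd_config with a Match block along these lines (a sketch: the sftponly group matches the chown above, the rest is the stock internal-sftp setup and should be adapted to your system):

```
Subsystem sftp internal-sftp

Match Group sftponly
    ChrootDirectory %h
    ForceCommand internal-sftp
    AllowTcpForwarding no
```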

found at:

selinux blocks access via sftp for chrooted user

You may want to install setroubleshoot; audit2allow is installed as part of that install.

If SELinux blocks access via sftp for the chrooted user, run:

grep denied /var/log/audit/audit.log | audit2allow -M postgreylocal

This is what postgreylocal will look like; it grants permission to remove, rename, delete, and create files and directories:

module postgreylocal 1.0;

require {
        type user_home_t;
        type chroot_user_t;
        class dir { rename write rmdir remove_name create add_name };
        class file { write create unlink link setattr };
}

#============= chroot_user_t ==============

#!!!! This avc can be allowed using one of the these booleans:
#     ssh_chroot_rw_homedirs, ssh_chroot_full_access
allow chroot_user_t user_home_t:dir { rename rmdir };

#!!!! This avc is allowed in the current policy
allow chroot_user_t user_home_t:dir { write remove_name create add_name };

#!!!! This avc can be allowed using one of the these booleans:
#     ssh_chroot_rw_homedirs, ssh_chroot_full_access
allow chroot_user_t user_home_t:file { unlink link };

#!!!! This avc is allowed in the current policy
allow chroot_user_t user_home_t:file { write create setattr };

After that, run:
semodule -i postgreylocal.pp

Friday, December 20, 2013

Find and kill a process in one line using bash and regex

How can I extract the process id automatically and kill it in the same line?

In bash, you should be able to do:

kill $(ps aux | grep '[p]ython' | awk '{print $2}')

Details on its workings are as follows:
  • The ps gives you the list of all the processes.
  • The grep filters that based on your search string, [p] is a trick to stop you picking up the actual grep process itself.
  • The awk just gives you the second field of each line, which is the PID.
  • The $(x) construct means to execute x then take its output and put it on the command line. The output of that ps pipeline inside that construct above is the list of process IDs so you end up with a command like kill 1234 1122 7654.
Here's a transcript showing it in action:
pax> sleep 3600 &
[1] 2225
pax> sleep 3600 &
[2] 2226
pax> sleep 3600 &
[3] 2227
pax> sleep 3600 &
[4] 2228
pax> sleep 3600 &
[5] 2229
pax> kill $(ps aux | grep '[s]leep' | awk '{print $2}')
[5]+  Terminated              sleep 3600
[1]   Terminated              sleep 3600
[2]   Terminated              sleep 3600
[3]-  Terminated              sleep 3600
[4]+  Terminated              sleep 3600
pax> _

and you can see it terminating all the sleepers.

Explaining the grep '[p]ython' bit in a bit more detail:
When you do sleep 3600 & followed by ps -ef | grep sleep, you tend to get two processes with sleep in them: the sleep 3600 and the grep sleep (both have sleep in them; that's not rocket science).
However, ps -ef | grep '[s]leep' won't create a process with sleep in it. It instead creates grep '[s]leep', and here's the tricky bit: the grep doesn't find it, because it's looking for the regular expression "any character from the character class [s] (which is s) followed by leep".
In other words, it's looking for sleep, but the grep process is grep '[s]leep', which doesn't have sleep in it.
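You can see the trick in miniature without starting any processes (an illustrative aside, not from the original answer):

```shell
# grep -c counts matching lines: a line containing the literal "sleep"
# matches the regex [s]leep, but a line containing "[s]leep" does not
echo "sleep 3600" | grep -c '[s]leep'              # prints 1
echo "grep '[s]leep'" | grep -c '[s]leep' || true  # prints 0 (no match)
```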
When I was shown this (by someone here on SO), I immediately started using it because
  • it's one less process than adding | grep -v grep; and
  • it's elegant and sneaky, a rare combination :-)
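As an aside (not part of the original answer), procps also ships pgrep and pkill, which fold the search and the kill into one command and sidestep the grep trick entirely:

```shell
# start a disposable background job, then kill it by matching its command line
sleep 300 &
pkill -f 'sleep 300'   # same effect as kill $(ps aux | grep '[s]leep 300' | awk '{print $2}')
```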
found at

Monday, November 25, 2013

Centos 6.4 how to check if sshd is infected with Fokirtor


First, you need to install:

yum install python-psutil

after that, run:

cp /sbin/pidof /bin/pidof

and then download and execute this script:

#!/bin/sh
# A simple check to see if running ssh processes contain any string that has
# been designated an indication of Fokirtor by Symantec.
# More info here:
# (c) 2013, Kumina bv, [email protected]
# You are free to use, modify and distribute this check in any way you see
# fit. Just don't say you wrote it.
# This check is created for Debian Squeeze/Wheezy, no idea if it'll work in
# other distros. You'll need gdb-minimal (for gcore) installed.

# We need to be root
if [ `/usr/bin/id -u` -ne 0 ]; then
        echo "You need root for this script. Sorry."
        exit 1
fi

# For all pids of the ssh process, do the check
for pid in `/bin/pidof sshd`; do
        # dump a core of the running process to a temporary file
        t=$(mktemp)
        /usr/bin/gdb </dev/null --nx --batch \
          -ex "set pagination off" -ex "set height 0" -ex "set width 0" \
          -ex "attach $pid" -ex "gcore $t" -ex detach -ex quit
        # count how many of the six indicator strings appear in the core
        i=0
        for str in hbt= key= dhost= sp= sk= dip=; do
                /usr/bin/strings $t | /bin/grep "${str}[[:digit:]]"
                if [ $? -eq 0 ]; then
                        i=$(($i + 1))
                fi
        done
        /bin/rm $t
        if [ $i -eq 6 ]; then
                echo "CRITICAL: Fokirtor strings found in sshd process ${pid}!"
                exit 2
        fi
done

echo "OK: No indication of Fokirtor found."
exit 0
After that you will see output like this:
[Thread debugging using libthread_db enabled]
0x00007f5b2e4d7513 in __select_nocancel () from /lib64/
Saved corefile /tmp/tmp.Q89Sku0vPN
[Thread debugging using libthread_db enabled]
0x00007f5b2e4d1630 in __read_nocancel () from /lib64/
Saved corefile /tmp/tmp.QLWtlfoMok
[Thread debugging using libthread_db enabled]
0x00007f5eb920d513 in __select_nocancel () from /lib64/
Saved corefile /tmp/tmp.1d41QbCaA3
[Thread debugging using libthread_db enabled]
0x00007f5eb9207630 in __read_nocancel () from /lib64/
Saved corefile /tmp/tmp.lXIzRAYB4g
[Thread debugging using libthread_db enabled]
0x00007eff8f06c513 in __select_nocancel () from /lib64/
Saved corefile /tmp/tmp.e4QmwlYJtT
OK: No indication of Fokirtor found.  

Thursday, November 14, 2013

Bash: Timestamp in bash history


The bash history is a useful thing to remember commands which were entered on a system. But it's not only useful to help your mind – you can also keep track of the entered commands. This is especially interesting on multi-user systems: you are able to check the executed commands after the user logs out. That is extra interesting when you have spotted problems like missing files on a system – you would be able to check if someone removed that file.
But by default you can only see which commands were entered, not when they were entered, which could be very important. Thankfully, since Bash version 3.0 there has been a way to add timestamps to the bash history.

See how to configure your bash to save the timestamp for each command execution…

It is quite easy to configure. You just need to set one environment variable: HISTTIMEFORMAT. The HISTTIMEFORMAT variable needs to be added to your bashrc scripts. I prefer to add it to a system-wide script rather than a user-specific script, so I append the code to /etc/bash.bashrc on my Ubuntu system.


The HISTTIMEFORMAT uses the format of strftime. You can find the available macros in man 3 strftime or for example here.
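For example, the following setting (one illustrative choice; any strftime format works) produces timestamps in the same form as the history output shown below:

```shell
# %F = YYYY-MM-DD, %T = HH:MM:SS (see man 3 strftime); the trailing space
# keeps the timestamp separated from the command in `history` output
export HISTTIMEFORMAT="%F %T "
```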

After modifying your file start a new shell, type some commands, call history and see the magic:

:> history
  501  2009-01-29 21:12:16 history
  502  2009-01-29 21:12:54 sudo vi /etc/bash.bashrc 
  503  2009-01-29 21:13:04 /bin/bash 
  504  2009-01-29 21:13:11 history

The timestamps are saved directly above each command in the ~/.bash_history file after you exit the shell:

sudo vi /etc/bash.bashrc 
less .bash_history 
You have just made your system a little easier to audit.

found at:

A bunch of commands to change UIDS and GIDS


Here are the commands to run as root to change the UID and GID for a user.
Simply change the variables in angle brackets to match your settings:
usermod -u <NEWUID> <LOGIN>    
groupmod -g <NEWGID> <GROUP>
find / -user <OLDUID> -exec chown -h <NEWUID> {} \;
find / -group <OLDGID> -exec chgrp -h <NEWGID> {} \;
usermod -g <NEWGID> <LOGIN>

usermod and groupmod simply change the UID and GID for their respective named counterparts. usermod also changes the UID of the files in the home directory, but naturally we can't assume the only place files have been created is in the user's home directory.
The find commands recurse the filesystem from / and change everything with a UID of OLDUID to be owned by NEWUID, then change the group of the files owned by OLDGID to NEWGID.
The final usermod command changes the login group for the user.
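Before running the recursive chown/chgrp, it can be worth previewing which files will be touched. A sketch, using a hypothetical old UID of 1001 and limiting the search to /home for speed:

```shell
# list (up to 20 of) the files owned by the old UID, changing nothing
find /home -user 1001 2>/dev/null | head -n 20
```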

found at :

Tuesday, November 5, 2013

Reset “Use Secure in Front End or Admin” in Database – Magento


by Nick Cron
I ran into an issue this week where I switched on SSL on a development site and then realized the SSL cert was not installed correctly. 

This is a big issue in Magento because there is no way to get back to the admin to switch it back off.

If this ever happens do the following to switch back:
1. Open up your admin panel (cPanel or other)
2. Go to phpMyAdmin (if MySql)
3. Find your Magento Database
4. Find table “core_config_data”
5. Look for the rows whose "path" is "web/secure/use_in_frontend" or "web/secure/use_in_adminhtml"
6. Edit both "value" fields, setting them to "0"

After this is done you will be back in action.

found at

Friday, October 25, 2013

Measure Hard Disk Data Transfer Speed


Log in as root (or use sudo) and enter the command that matches your disk device:

$ sudo hdparm -tT /dev/sda
$ sudo hdparm -tT /dev/hda

Sample outputs:
 Timing cached reads:   7864 MB in  2.00 seconds = 3935.41 MB/sec
 Timing buffered disk reads:  204 MB in  3.00 seconds =  67.98 MB/sec

The "cached reads" figure shows the speed of reading directly from the Linux buffer cache without disk access; it is essentially an indication of the throughput of the processor, cache, and memory of the system under test. For meaningful results, this operation should be repeated 2-3 times.

Here is a for loop example, to run the test 3 times in a row:

for i in 1 2 3; do hdparm -tT /dev/hda; done

  • -t :perform device read timings
  • -T : perform cache read timings
  • /dev/sda : Hard disk device file
To find out SATA hard disk speed, enter:

sudo hdparm -I /dev/sda | grep -i speed

    * Gen1 signaling speed (1.5Gb/s)
    * Gen2 signaling speed (3.0Gb/s)
The above output indicates that my hard disk can use both the 1.5Gb/s and 3.0Gb/s speeds. Please note that your BIOS / motherboard must have support for SATA-II.

dd Command

You can use the dd command as follows to get speed info too:

dd if=/dev/zero of=/tmp/output.img bs=8k count=256k
rm /tmp/output.img

Sample outputs:
262144+0 records in
262144+0 records out
2147483648 bytes (2.1 GB) copied, 23.6472 seconds, 90.8 MB/s 
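One caveat (a common refinement, not in the original post): dd as shown measures buffered writes, so the page cache can inflate the number. Adding conv=fdatasync makes dd flush the data to disk before reporting; a smaller size is used here for brevity:

```shell
# write 64 MB and force it to disk before dd prints its throughput figure
dd if=/dev/zero of=/tmp/output.img bs=1M count=64 conv=fdatasync
rm /tmp/output.img
```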
found at 

Wake on Lan in Centos 6

Start a computer from a remote machine by Wake on Lan.
[1]     Configuration of the computer you'd like to turn on from a remote machine.

 yum -y install ethtool

ethtool -s eth0 wol g
vi /etc/sysconfig/network-scripts/ifcfg-eth0
# add at the last line
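On CentOS, the line conventionally used in ifcfg-eth0 to persist the Wake-on-LAN setting is the following (an assumption matching the ethtool command above; verify against your distro's docs):

```
ETHTOOL_OPTS="wol g"
```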


check for MAC
ifconfig eth0 | grep HWaddr | awk '{print $5}'

# take a memo

shutdown -h now

[2]     Operation on the computer at a remote place.

yum -y install net-tools
# ether-wake [MAC address of the computer you'd like to turn on]

ether-wake 00:22:68:5E:34:06
 # send magic packets


If you have more than one interface, you need to specify it, for example:

ether-wake -i eth0 00:22:68:5E:34:06

found at:

Thursday, October 17, 2013

How to Increase the size of a Linux LVM by expanding the virtual machine disk


This post will cover how to increase the disk space for a VMware virtual machine running Linux that is using logical volume manager (LVM). Firstly we will be increasing the size of the actual disk on the VMware virtual machine, so at the hardware level – this is the VM’s .vmdk file. Once this is complete we will get into the virtual machine and make the necessary changes through the operating system in order to take advantage of the additional space that has been provided by the hard drive being extended. This will involve creating a new partition with the new space, expanding the volume group and logical group, then finally resizing the file system.

As there are a number of different ways to increase disk space I have also posted some different methods here:
Important Note: Be very careful when working with the commands in this article as they have the potential to cause a lot of damage to your data. If you are working with virtual machines make sure you take a snapshot of your virtual machine beforehand, or otherwise have some other form of up to date backup before proceeding. Note that a snapshot must not be taken until after the virtual disk has been increased, otherwise you will not be able to increase it. It could also be worth cloning the virtual machine first and testing out this method on the clone.
Prerequisites: As this method uses the additional space to create a primary partition, you must not already have 4 partitions as you will not be able to create more than 4. If you do not have space for another partition then you will need to consider a different method, there are some others in the above list.
Throughout my examples I will be working with a VMware virtual machine running Debian 6, this was set up with a 20gb disk and we will be increasing it by 10gb for a total final size of 30gb.

Identifying the partition type

As this method focuses on working with LVM, we will first confirm that our partition type is actually Linux LVM by running the below command.
fdisk -l
As you can see in the above image /dev/sda5 is listed as “Linux LVM” and it has the ID of 8e. The 8e hex code shows that it is a Linux LVM, while 83 shows a Linux native partition. Now that we have confirmed we are working with an LVM we can continue. For increasing the size of a Linux native partition (hex code 83) see this article.
Below is the disk information showing that our initial setup only has the one 20gb disk currently, which is under the logical volume named /dev/mapper/Mega-root – this is what we will be expanding with the new disk.
disk free
Note that /dev/mapper/Mega-root is the volume made up from /dev/sda5 currently – this is what we will be expanding.

Increasing the virtual hard disk

First off we increase the allocated disk space on the virtual machine itself. This is done by right clicking the virtual machine in vSphere, selecting edit settings, and then selecting the hard disk. In the below image I have changed the previously set hard disk of 20gb to 30gb while the virtual machine is up and running. Once complete click OK, this is all that needs to be done in VMware for this process.
vSphere settings
If you are not able to modify the size of the disk, the provisioned size setting is greyed out. This can happen if the virtual machine has a snapshot in place, these will need to be removed prior to making the changes to the disk. Alternatively you may need to shut down the virtual machine if it does not allow you to add or increase disks on the fly, if this is the case make the change then power it back on.

Detect the new disk space

Once the physical disk has been increased at the hardware level, we need to get into the operating system and create a new partition that makes use of this space to proceed.
Before we can do this we need to check that the new unallocated disk space is detected by the server, you can use “fdisk -l” to list the primary disk. You will most likely see that the disk space is still showing as the same original size, at this point you can either reboot the server and it will detect the changes on boot or you can rescan your devices to avoid rebooting by running the below command. Note you may need to change host0 depending on your setup.
echo "- - -" > /sys/class/scsi_host/host0/scan
Below is an image after performing this and confirming that the new space is displaying.

Partition the new disk space

As outlined in my previous images the disk in my example that I am working with is /dev/sda, so we use fdisk to create a new primary partition to make use of the new expanded disk space. Note that we do not have 4 primary partitions already in place, making this method possible.
fdisk /dev/sda
We are now using fdisk to create a new partition, the inputs I have entered in are shown below in bold. Note that you can press ‘m’ to get a full listing of the fdisk commands.
‘n’ was selected for adding a new partition.
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): n
‘p’ is then selected as we are making a primary partition.
Command action
   l   logical (5 or over)
   p   primary partition (1-4)
As I already have /dev/sda1 and /dev/sda2 as shown in previous images, I have gone with using ’3′ for this new partition which will be created as /dev/sda3
Partition number (1-4): 3
We just press enter twice above as by default the first and last cylinders of the unallocated space should be correct. After this the partition is then ready.
First cylinder (2611-3916, default 2611): "enter"
Using default value 2611
Last cylinder, +cylinders or +size{K,M,G} (2611-3916, default 3916): "enter"
Using default value 3916
‘t’ is selected to change to a partition’s system ID, in this case we change to ’3′ which is the one we just created.
Command (m for help): t
Partition number (1-5): 3
The hex code ’8e’ was entered as this is the code for a Linux LVM which is what we want this partition to be, as we will be joining it with the original /dev/sda5 Linux LVM.
Hex code (type L to list codes): 8e
Changed system type of partition 3 to 8e (Linux LVM)
‘w’ is used to write the table to disk and exit, basically all the changes that have been done will be saved and then you will be exited from fdisk.
Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
You will see a warning which basically means in order to use the new table with the changes a system reboot is required. If you can not see the new partition using “fdisk -l” you may be able to run “partprobe -s” to rescan the partitions. In my test I did not require either of those things at this stage (I do a reboot later on), straight after pressing ‘w’ in fdisk I was able to see the new /dev/sda3 partition of my 10gb of space as displayed in the below image.
That’s all for partitioning, we now have a new partition which is making use of the previously unallocated disk space from the increase in VMware.

Increasing the logical volume

We use the pvcreate command which creates a physical volume for later use by the logical volume manager (LVM). In this case the physical volume will be our new /dev/sda3 partition.
root@Mega:~# pvcreate /dev/sda3
  Device /dev/sda3 not found (or ignored by filtering).
In order to get around this I believe a reboot is required, as in this instance the disk does not appear to be there correctly despite showing in “fdisk -l”. After a reboot use the same command which will succeed.
root@Mega:~# pvcreate /dev/sda3
  Physical volume "/dev/sda3" successfully created
Next we need to confirm the name of the current volume group using the vgdisplay command. The name will vary depending on your setup, for me it is the name of my test server. vgdisplay provides lots of information on the volume group, I have only shown the name and the current size of it for this example.
root@Mega:~# vgdisplay
  --- Volume group ---
  VG Name               Mega
VG Size               19.76 GiB
Now we extend the ‘Mega’ volume group by adding in the physical volume of /dev/sda3 which we created using the pvcreate command earlier.
root@Mega:~# vgextend Mega /dev/sda3
  Volume group "Mega" successfully extended
Using the pvscan command we scan all disks for physical volumes, this should confirm the original /dev/sda5 partition and the newly created physical volume /dev/sda3
root@Mega:~# pvscan
  PV /dev/sda5   VG Mega   lvm2 [19.76 GiB / 0    free]
  PV /dev/sda3   VG Mega   lvm2 [10.00 GiB / 10.00 GiB free]
  Total: 2 [29.75 GiB] / in use: 2 [29.75 GiB] / in no VG: 0 [0   ]
Next we need to increase the logical volume (rather than the physical volume) which basically means we will be taking our original logical volume and extending it over our new partition/physical volume of /dev/sda3.
Firstly confirm the name of the logical volume using lvdisplay. This name will vary depending on your setup.
root@Mega:~# lvdisplay
  --- Logical volume ---
  LV Name                /dev/Mega/root
The logical volume is then extended using the lvextend command.
root@Mega:~# lvextend /dev/Mega/root /dev/sda3
  Extending logical volume root to 28.90 GiB
  Logical volume root successfully resized
There is then one final step which is to resize the file system so that it can take advantage of this additional space, this is done using the resize2fs command. Note that this may take some time to complete, it took about 30 seconds for my additional space.
root@Mega:~# resize2fs /dev/Mega/root
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/Mega/root is mounted on /; on-line resizing required
old desc_blocks = 2, new_desc_blocks = 2
Performing an on-line resize of /dev/Mega/root to 7576576 (4k) blocks.
The filesystem on /dev/Mega/root is now 7576576 blocks long.
That’s it, now with the ‘df’ command we can see that the total available disk space has been increased.
disk free after expansion


With this method we have increased the virtual disk drive through VMware, created a new partition out of this newly unallocated space within the guest OS, turned it into a physical volume, extended the volume group, and then finally extended the original logical volume over the newer physical volume resulting in overall disk space being increased successfully.

found at:

Thursday, August 29, 2013

Raspberry PI - How to change desktop wallpaper?

When I right click on the desktop, a submenu pops up with these choices: terminal emulator, web browser, desktops, obconf, reconfigure, restart, exit. How can I change the desktop wallpaper? I don't see an option to do this with this submenu?

Normally, when you are using Raspbian "wheezy" with the LXDE desktop environment, right-clicking should pop up:

Create New
Select All
Invert Selection
Sort Files
Desktop Preferences

If not:


Open a terminal and enter pcmanfm --desktop-pref.

When the desktop preferences window pops up, click on the "Advanced" tab and deselect "Show menus provided by window managers when desktop is clicked."

That's it, and the PCManFM File Manager behavior that we had by default is back!

Cisco how to check interface index using snmp

Try the command:

$ snmpwalk -v2c -c community-string HOST

The output will be:

IF-MIB::ifName.1 = STRING: Fa0
IF-MIB::ifName.2 = STRING: Fa1
             This -^- is the interface-number

So when using, for example, the Nagios plugin check_itraffic, use this number as the interface parameter.

[SOLVED] Nagios / Centreon: This plugin must be either run as root or setuid root.

 Warning: This plugin must be either run as root or setuid root. 
To run as root, you can use a tool like sudo. 
To set the setuid permissions, use the command: 
chmod u+s yourpluginfile 

1. chown root:nagios check_dhcp
2. chmod u+s check_dhcp 
Some plugins need chown apache:nagios
(otherwise the plugin's exit code is "out of bounds": 255)

Wednesday, August 28, 2013

Colored bash in CentOS

The following is ripped from the Gentoo /etc/bash/bashrc with minor modifications for slight differences in CentOS:

# Set colorful PS1 only on colorful terminals.
# dircolors --print-database uses its own built-in database
# instead of using /etc/DIR_COLORS.  Try to use the external file
# first to take advantage of user additions.  Use internal bash
# globbing instead of external grep binary.
use_color=false
safe_term=${TERM//[^[:alnum:]]/?}   # sanitize TERM
match_lhs=""
[[ -f ~/.dir_colors   ]] && match_lhs="${match_lhs}$(<~/.dir_colors)"
[[ -f /etc/DIR_COLORS ]] && match_lhs="${match_lhs}$(</etc/DIR_COLORS)"
[[ -z ${match_lhs}    ]] \
    && type -P dircolors >/dev/null \
    && match_lhs=$(dircolors --print-database)
[[ $'\n'${match_lhs} == *$'\n'"TERM "${safe_term}* ]] && use_color=true

if ${use_color} ; then
    if [[ ${EUID} == 0 ]] ; then
        PS1='\[\033[01;31m\]\h\[\033[01;34m\] \W \$\[\033[00m\] '
    else
        PS1='\[\033[01;32m\]\u@\h\[\033[01;34m\] \w \$\[\033[00m\] '
    fi
else
    if [[ ${EUID} == 0 ]] ; then
        # show user@host when we don't have colors
        PS1='\u@\h \W \$ '
    else
        PS1='\u@\h \w \$ '
    fi
fi

unset use_color safe_term match_lhs
This script should be run after /etc/bashrc is run. Specifically, it depends on COLORS being set. This is done in /etc/profile.d/, which is sourced at the end of /etc/bashrc. One could put this in the user's .bashrc after /etc/bashrc is sourced, or, for new users, in /etc/skel/.bashrc.

found at:

Tuesday, August 27, 2013

Disable Ads on YouTube With This Simple Command

There are a lot of ways to block ads, but with a simple command in the developer console, you can disable all ads on YouTube via an experiment.
Google frequently tries out new features with experiments via TestTube. A less advertised experiment can disable all ads on the site. Here's how to turn it on:
  1. Open up a YouTube video (any will do).
  2. Open up the developer console (Ctrl-Shift-J for Chrome, Ctrl-Shift-K for Firefox)
  3. Enter the following code:
document.cookie="VISITOR_INFO1_LIVE=oKckVSqvaGw; path=/;";window.location.reload();

Boom. No more ads. Since this is something that Google is allowing, it's possible it could go away in the future, but while it works, you get a lovely ad-free viewing experience without any plugins. It even works on those pesky video ads.

Monday, August 26, 2013

Unable to load dynamic library '/usr/lib/php/modules/'

When I run the command
php -v

this error comes up:
PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php/modules/' - /usr/lib/php/modules/: cannot open shared object file: No such file or directory in Unknown on line 0
PHP 5.3.3 (cli) (built: Feb 22 2013 02:37:06)


This is caused by the mcrypt extension.
Edit /etc/php.d/mcrypt.ini
and change

; Enable mcrypt extension module
extension=module.so

to this

; Enable mcrypt extension module
extension=mcrypt.so
found at

Monday, August 5, 2013

Increasing the size of a virtual disk vmware on guest CentOS

  1. Power off the virtual machine.
  2. Edit the virtual machine settings and extend the virtual disk size. 
  3. Power on the virtual machine.
  4. Identify the device name, which is by default /dev/sda, and confirm the new size by running the command:# fdisk -l
  5. Create a new primary partition:
    1. Run the command: # fdisk /dev/sda (depending on the results of step 4)
    2. Press p to print the partition table to identify the number of partitions. By default there are 2: sda1 and sda2.
    3. Press n to create a new primary partition. 
    4. Press p for primary.
    5. Press 3 for the partition number, depending on the output of the partition table print.
    6. Press Enter two times.
    7. Press w to write the changes to the partition table.
  6. Restart the virtual machine.
  7. Run this command to verify that the changes were saved to the partition table and that the new partition has an 83 type:

    # fdisk -l
  8. Run this command to convert the new partition to a physical volume:

    # pvcreate /dev/sda3
  9. Run this command to extend the physical volume:

    # vgextend VolGroup00 /dev/sda3

    Note: To determine which volume group to extend, use the command vgdisplay.
  10. Run this command to verify how many physical extents are available to the Volume Group:# vgdisplay VolGroup00 | grep "Free"
  11. Run the following command to extend the Logical Volume:

    # lvextend -L+#G /dev/VolGroup00/LogVol00

    Where # is the amount of free space in GB available as per the previous command.

    Note: to determine which logical volume to extend, use the command lvdisplay.
  12. Run the following command to expand the ext3 filesystem online, inside of the Logical Volume:

    # ext2online /dev/VolGroup00/LogVol00
    Note: Use resize2fs instead of ext2online if it is not a Red Hat virtual machine.
  13. Run the following command to verify that the / filesystem has the new space available:

    # df -h 

Monday, June 24, 2013

How to find out which process is listening upon a port?

To discover the process name, ID (pid), and other details you need to run:
lsof -i :port
So to see which process is listening upon port 80 we can run:
# lsof -i :80
This gives us the following output:
apache2 10437     root    3u  IPv6 22890556       TCP *:www (LISTEN)
apache2 10438 www-data    3u  IPv6 22890556       TCP *:www (LISTEN)
apache2 10439 www-data    3u  IPv6 22890556       TCP *:www (LISTEN)
apache2 10440 www-data    3u  IPv6 22890556       TCP *:www (LISTEN)
apache2 10441 www-data    3u  IPv6 22890556       TCP *:www (LISTEN)
apache2 10442 www-data    3u  IPv6 22890556       TCP *:www (LISTEN)
apache2 25966 www-data    3u  IPv6 22890556       TCP *:www (LISTEN)
apache2 25968 www-data    3u  IPv6 22890556       TCP *:www (LISTEN)
Here you can see the command running (apache2), the username it is running as www-data, and some other details.
Similarly we can see which process is bound to port 22:
# lsof -i :22
sshd     8936 root    3u  IPv6 12161280       TCP *:ssh (LISTEN)
To see all the ports open for listening upon the current host you can use another command netstat (contained in the net-tools package):
# netstat -a |grep LISTEN |grep -v unix
tcp        0      0 *:2049                  *:*                     LISTEN     
tcp        0      0 *:743                   *:*                     LISTEN     
tcp        0      0 localhost.localdo:mysql *:*                     LISTEN     
tcp        0      0 *:5900                  *:*                     LISTEN     
tcp        0      0 localhost.locald:sunrpc *:*                     LISTEN     
tcp        0      0 *:8888                  *:*                     LISTEN     
tcp        0      0 localhost.localdom:smtp *:*                     LISTEN     
tcp6       0      0 *:www                   *:*                     LISTEN     
tcp6       0      0 *:distcc                *:*                     LISTEN     
tcp6       0      0 *:ssh                   *:*                     LISTEN     
Here you can see that there are processes listening upon ports 2049, 743, 5900, and several others.
(The second grep we used above was to ignore Unix domain sockets).
If you're curious to see which programs and services are used in those sockets you can look them up as we've already shown:
# lsof -i :8888
gnump3d 25834 gnump3d    3u  IPv4 61035200       TCP *:8888 (LISTEN)
This tells us that the process bound to port 8888 is the gnump3d MP3 streamer.
Port 2049 and 743 are both associated with NFS. The rest can be tracked down in a similar manner. (You'll notice that some ports actually have their service names printed next to them, such as the smtp entry for port 25).
lsof is a very powerful tool which can be used for lots of jobs. If you're unfamiliar with it I recommend reading the manpage via:
man lsof
If you do so you'll discover that the -i flag can take multiple different types of arguments, to allow you to check more than one port at a time, and use IPv6 addresses too.
It's often used to see which files are open upon mounted devices, so you can kill the processes and unmount them cleanly.

found at

Wednesday, June 19, 2013

How To Autocomplete Commands Preceded By 'sudo'

When writing a command in the terminal, you can autocomplete it by pressing the TAB key. Example: type "nau" in the terminal and press TAB -> "nautilus" should show up (if you have Nautilus installed, obviously).

However, the autocomplete doesn't work if you are trying to run a command with "sudo". For example, typing "sudo nau" and then pressing the TAB key will not autocomplete the command to "sudo nautilus".

Here is how to get autocomplete to work in the Terminal while using "sudo". Simply open the ".bashrc" hidden file from your home folder. If you use GNOME, paste this in a terminal to open it:

gedit ~/.bashrc

Then paste this at the bottom of the file:
if [ "$PS1" ]; then
    complete -cf sudo
fi

Then type this in a terminal to reload it:

source ~/.bashrc

Now try the example from the beginning of this post: type "sudo nau" and press TAB. It should now work.
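You can also check that the completion spec was actually registered: `complete -p` prints the current definition for a command (run here via `bash -c` so it works from any shell):

```shell
# Register command-name (-c) and filename (-f) completion for sudo's
# arguments, then print the registered spec back out
bash -c 'complete -cf sudo && complete -p sudo'
```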

found at

How do I change bash history completion to complete what's already on the line?

# ~/.inputrc
"\e[A": history-search-backward
"\e[B": history-search-forward

or equivalently,
# ~/.bashrc
bind '"\e[A": history-search-backward'
bind '"\e[B": history-search-forward'
Normally, Up and Down are bound to the Readline functions previous-history and next-history respectively. I prefer to bind PgUp/PgDn to these functions, instead of displacing the normal operation of Up/Down.
# ~/.inputrc
"\e[5~": history-search-backward
"\e[6~": history-search-forward

After you modify ~/.inputrc, restart your shell or use Ctrl+X, Ctrl+R to tell it to re-read ~/.inputrc.
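The \e in those bindings is simply the ESC byte (octal 033) that the arrow and PgUp/PgDn keys send, followed by the rest of the key's escape sequence. You can inspect the raw bytes with od; a small sketch for the Up-arrow sequence:

```shell
# Up-arrow sends ESC [ A; \033 is the octal escape for the ESC byte
printf '\033[A' | od -An -c
```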

found at

Monday, May 27, 2013

Installing NFS on CentOS 6.2

This is a guide on how to install the NFS service on a Linux CentOS 6.2 box and make it accessible to other hosts. The scenario is the following:
  • Grant read-only access to the /home/public directory to all networks
  • Grant read/write access to the /home/common directory to all networks 
At the end of this guide you will have:
  • A running NFS server with various LAN-shared directories
  • An active set of firewall rules allowing access to the NFS ports
  • A permanently mounted NFS share on a CentOS / Ubuntu client     
I assume you already have:

  • a fresh running Linux CentOS 6.2 server 
  • a sudoer user, named bozz on this guide
  • an accessible RPM repository / mirror
  • a Linux client with CentOS / Ubuntu


  1. Log in as the bozz user on the server
  2. Check if rpcbind is installed:
    $ rpm -q rpcbind
    if not, install it:
    $ sudo yum install rpcbind
  3. Install NFS-related packages:
    $ sudo yum install nfs-utils nfs-utils-lib
  4. Once installed, configure the nfs, nfslock and rpcbind services to run as daemons:
    $ sudo chkconfig --level 35 nfs on
    $ sudo chkconfig --level 35 nfslock on 
    $ sudo chkconfig --level 35 rpcbind on
    then start the rpcbind and nfs daemons:
    $ sudo service rpcbind start
    $ sudo service nfslock start 
    $ sudo service nfs start 
    NFS daemons
    • rpcbind: (portmap in older versions of Linux) the primary daemon upon which all the others rely, rpcbind manages connections for applications that use the RPC specification. By default, rpcbind listens to TCP port 111 on which an initial connection is made. This is then used to negotiate a range of TCP ports, usually above port 1024, to be used for subsequent data transfers. You need to run rpcbind on both the NFS server and client. 
    • nfs: starts the RPC processes needed to serve shared NFS file systems. The nfs daemon needs to be run on the NFS server only. 
    • nfslock: Used to allow NFS clients to lock files on the server via RPC processes. The nfslock daemon needs to be run on both the NFS server and client.

  5. Test whether NFS is running correctly with the rpcinfo command. You should get a listing of running RPC programs that must include mountd, portmapper, nfs, and nlockmgr:
    $ rpcinfo -p localhost
       program vers proto   port  service
        100000    4   tcp    111  portmapper
        100000    3   tcp    111  portmapper
        100000    2   tcp    111  portmapper
        100000    4   udp    111  portmapper
        100000    3   udp    111  portmapper
        100000    2   udp    111  portmapper
        100024    1   udp  40481  status
        100024    1   tcp  49796  status
        100011    1   udp    875  rquotad
        100011    2   udp    875  rquotad
        100011    1   tcp    875  rquotad
        100011    2   tcp    875  rquotad
        100003    2   tcp   2049  nfs
        100003    3   tcp   2049  nfs
        100003    4   tcp   2049  nfs
        100227    2   tcp   2049  nfs_acl
        100227    3   tcp   2049  nfs_acl
        100003    2   udp   2049  nfs
        100003    3   udp   2049  nfs
        100003    4   udp   2049  nfs
        100227    2   udp   2049  nfs_acl
        100227    3   udp   2049  nfs_acl
        100021    1   udp  32769  nlockmgr
        100021    3   udp  32769  nlockmgr
        100021    4   udp  32769  nlockmgr
        100021    1   tcp  32803  nlockmgr
        100021    3   tcp  32803  nlockmgr
        100021    4   tcp  32803  nlockmgr
        100005    1   udp    892  mountd
        100005    1   tcp    892  mountd
        100005    2   udp    892  mountd
        100005    2   tcp    892  mountd
        100005    3   udp    892  mountd
        100005    3   tcp    892  mountd

  6. The /etc/exports file is the main NFS configuration file, and it consists of two columns. The first column lists the directories you want to make available to the network. The second column has two parts: the first lists the networks or DNS domains that can get access to the directory, and the second lists NFS options in brackets. Edit /etc/exports and append the desired shares:
    $ sudo nano /etc/exports
    then append:
    /home/public *(ro,sync,all_squash)
    /home/common *(rw,sync,all_squash)
    • /home/public: directory to share  with read-only access to all networks
    • /home/common: directory to share with read/write access to all networks
    • *: allow access from all networks
    • ro: read-only access
    • rw: read/write access 
    • sync: synchronous access 
    • root_squash: prevents remote root users from having root privileges and maps them to the user nfsnobody. This effectively "squashes" the power of the remote root user to the lowest local user, preventing unauthorized alteration of files on the server. Alternatively, the no_root_squash option turns off root squashing. To squash every remote user, including root, use the all_squash option (as in the exports above). To specify the user and group IDs to use for remote users from a particular host, use the anonuid and anongid options, respectively, i.e. (anonuid=<uid>,anongid=<gid>), where <uid> is the user ID number and <gid> is the group ID number.

  7. Create the directories to be published with the correct permissions:
    $ sudo mkdir -p /home/public
    $ sudo chown nfsnobody:nfsnobody /home/public
    $ sudo mkdir -p /home/common
    $ sudo chown nfsnobody:nfsnobody /home/common
    it should end like this:
    $ ls -l /home/
    drwxr-xr-x. 2 nfsnobody nfsnobody  4096 Feb 20 12:55 common
    drwxr-xr-x. 7 nfsnobody nfsnobody  4096 Feb 17 14:44 public
  8. [OPTIONAL] Allow the bozz user to locally write to the created directories by appending it to the nfsnobody group and granting write permissions to the group:
    $ sudo usermod -a -G nfsnobody bozz
    $ sudo chmod g+w /home/public
    $ sudo chmod g+w /home/common
    it should end like this:
    $ ls -l /home/
    drwxrwxr-x. 2 nfsnobody nfsnobody  4096 Feb 20 12:40 common
    drwxrwxr-x. 7 nfsnobody nfsnobody  4096 Feb 17 14:44 public
  9. Security issues. To allow remote access, some firewall rules and other NFS settings must be changed. You need to open the following ports:
    • TCP/UDP 111 - RPC 4.0 portmapper
    • TCP/UDP 2049 - NFSD (nfs server)
    • Portmap static ports: various TCP/UDP ports defined in the /etc/sysconfig/nfs file.
    The portmapper assigns each NFS service a port dynamically at service startup time, but dynamic ports cannot be protected by iptables. First, you need to configure the NFS services to use fixed ports. Edit /etc/sysconfig/nfs:
    $ sudo nano /etc/sysconfig/nfs
    and set the static ports to match the iptables rules below, e.g.:
    RQUOTAD_PORT=875
    LOCKD_TCPPORT=32803
    LOCKD_UDPPORT=32769
    MOUNTD_PORT=892
    STATD_PORT=662
    then restart nfs daemons:
    $ sudo service rpcbind restart
    $ sudo service nfs restart
    update iptables rules by editing /etc/sysconfig/iptables, enter:
    $ sudo nano /etc/sysconfig/iptables
    and append the following rules, where <network> is the subnet allowed to access NFS (e.g. 192.168.1.0/24):
    -A INPUT -s <network> -m state --state NEW -p udp --dport 111 -j ACCEPT
    -A INPUT -s <network> -m state --state NEW -p tcp --dport 111 -j ACCEPT
    -A INPUT -s <network> -m state --state NEW -p tcp --dport 2049 -j ACCEPT
    -A INPUT -s <network> -m state --state NEW -p tcp --dport 32803 -j ACCEPT
    -A INPUT -s <network> -m state --state NEW -p udp --dport 32769 -j ACCEPT
    -A INPUT -s <network> -m state --state NEW -p tcp --dport 892 -j ACCEPT
    -A INPUT -s <network> -m state --state NEW -p udp --dport 892 -j ACCEPT
    -A INPUT -s <network> -m state --state NEW -p tcp --dport 875 -j ACCEPT
    -A INPUT -s <network> -m state --state NEW -p udp --dport 875 -j ACCEPT
    -A INPUT -s <network> -m state --state NEW -p tcp --dport 662 -j ACCEPT
    -A INPUT -s <network> -m state --state NEW -p udp --dport 662 -j ACCEPT
    restart iptables daemon:
    $ sudo service iptables restart
  10. Mount the NFS shared directories. Install the client NFS packages first:
    on an Ubuntu client:
    $ sudo apt-get install nfs-common
    on a CentOS client:
    $ sudo yum install nfs-utils nfs-utils-lib
    query the server for the list of all shared directories:
    $ showmount -e SERVERADDRESS
    mount server's /home/public on client's /public:
    $ sudo mkdir -p /public
    $ sudo mount SERVERADDRESS:/home/public /public
    $ df -h
    mount server's /home/common on client's /common:
    $ sudo mkdir -p /common
    $ sudo mount SERVERADDRESS:/home/common /common
    $ df -h
  11. Mount NFS automatically after reboot on the client. Edit /etc/fstab:
    $ sudo nano /etc/fstab
    append the following line:
    #Directory                   Mount Point    Type   Options       Dump   FSCK
    SERVER_IP_ADDRESS:/home/public /public nfs hard 0 0
    SERVER_IP_ADDRESS:/home/common /common nfs hard 0 0
    to test the correctness of /etc/fstab before restarting, you can try to manually mount /public and /common:
    $ sudo mount /public
    $ sudo mount /common

    found at:

Friday, May 24, 2013

How To Break Into A Cisco ASA If You Do Not Have The Enable Password

From time to time, I get a service call asking me to break into a Cisco router or an ASA or a PIX. In most cases, the device was deployed a long time ago and nobody remembers the password. Or they have a copy of the config but the password was stored in the encrypted format.
If you have the password in encrypted format, you might luck out if it is a commonly-used value such as 8Ry2YjIyt7RRXU24 (password is blank) or 2KFQnbNIdI.2KYOU (password is “cisco”). You can try to brute force it with John the Ripper, or Cain and Abel, or some precomputed rainbow table. The time required to brute force a complex password will depend on the character set used in the password, the length of the password, and the speed of the computer that is running Cain & Abel. Might take an ice age to brute force it. Would it be worth the time?
You might have better luck with a bit of lateral thinking. Just paste the encrypted password into Google and see if anyone has posted their own config in some Google-indexed forum somewhere. If their encrypted password is the same value as your mystery password, they are using that same password. Can you ask the poster what their password is?
Other lateral puzzle approaches include: looking for other places that the password may have been stored. Let’s hope it’s not on a Post-It note underneath the keyboard. On a typical network, the documentation is all stored in the same place; a file share, a local directory, a KeePass archive. Maybe a hard copy in the server room. Some of it may not be encrypted. Many admins will use the same password in different systems. The ASA enable password might be the same as the domain administrator password. Might be in the old admin’s email archive. You never know, the sort of sensitive shit people email unencrypted to themselves. That’s the main reason I have to nuke lost Blackberrys from the corporate BES. No screen lock password on your Blackberry AND you emailed naughty photos to yourself? Dude.
You can try to guess the password. Name of company. Name of admin. Name of admin’s dog/cat/child/soccer team/favorite pornstar.
If you have physical access to the ASA, you can probably reset the password. Pretty painless. Just boot into ROMMON mode and change the configuration register value to 0x41 so that the ASA boots without loading the startup config. This means you're in without needing a password. Then you can copy the startup config into the running config and change the password.

Step-by-Step Instructions

Reboot the ASA. When you see the following text, press the BREAK or ESC key.
Use BREAK or ESC to interrupt boot.
Use SPACE to begin boot immediately.
You are now in ROMMON mode, as indicated by the prompt.
rommon #0>
Type confreg.
rommon #0> confreg

Current Configuration Register: 0x00000001
Configuration Summary:
 boot default image from Flash
Take note of the value of the Current Configuration Register. You are going to be prompted to answer several questions, and based on your answers, the ASA's Configuration Register is going to be changed to a different value. You'll want to set the Configuration Register back to its original value after you have reset the ASA password.
Do you wish to change this configuration? y/n [n]: y
enable boot to ROMMON prompt? y/n [n]:
enable TFTP netboot? y/n [n]:
enable Flash boot? y/n [n]: y
select specific Flash image index? y/n [n]:
disable system configuration? y/n [n]: y
go to ROMMON prompt if netboot fails? y/n [n]:
enable passing NVRAM file specs in auto-boot mode? y/n [n]:
disable display of BREAK or ESC key prompt during auto-boot? y/n [n]:

Current Configuration Register: 0x00000041
Configuration Summary:
 boot default image from Flash
 ignore system configuration

Update Config Register (0x41) in NVRAM...
Type boot. Now the ASA is going to boot the OS, but it will load the default config instead of the startup config.
rommon #1> boot
Get into privileged EXEC mode and hit ENTER when prompted for the enable password. Then copy the startup config into the running config.
ciscoasa> en
ciscoasa# copy start run

Destination filename [running-config]?
Get into global configuration mode and make the changes that you want, e.g. change the enable password. You have total access now, so you can change anything that you want.
ciscoasa# conf t
ciscoasa(config)# enable password cisco
When you have finished making all the changes to the config, reset the Configuration Register back to its original value and save the config.
ciscoasa(config)# config-register 0x1
ciscoasa(config)# wr mem

What is the Configuration Register?

The Configuration Register value is a hex value that specifies various boot parameters for the ASA, such as which boot image to use, whether or not to boot the startup config, or whether to perform the ROMMON countdown.
You can set it while you are in ROMMON mode with the confreg command. For example, you could type confreg 0x41 and you won't be prompted to answer all those questions in the instructions above. (The questions only serve as a human-friendly way to formulate the value of the Configuration Register; by specifying 0x41, you have already provided the value.) However, if you type confreg with no argument, it will display the current value of the Configuration Register. This is useful when you need to find out the existing value.
rommon #0> confreg 0x41
You can also set the value of the Configuration Register while you are in the global configuration mode with the config-register command.
ciscoasa# conf t
ciscoasa(config)# config-register 0x1
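The register value is a bitmask, which is why 0x41 does the trick: going by the two configuration summaries shown above, 0x1 is "boot default image from Flash" and the extra 0x40 bit is "ignore system configuration", so their OR gives the value you typed:

```shell
# 0x1 (boot default image) OR 0x40 (ignore startup config) = 0x41
printf '0x%x\n' $(( 0x01 | 0x40 ))   # -> 0x41
```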
found at 

Thursday, May 23, 2013

QOS Priority Levels

One of the most feared technologies by CCIE candidates is QOS (Quality of Service). This is understandable, because most first-world countries seldom have problems with bandwidth, or with getting more if needed. So the necessity for juggling traffic around by means of QOS strategies is almost nonexistent. On the other hand, engineers in developing countries tend to be familiar with various QOS technologies, because of frequent bandwidth shortages as a result of high bandwidth costs.

Here is a concise table listing all the values for both BYTE fields:

TOS-BYTE = (3 bits IP PREC + 5 bits legacy)

IP Precedence Description   IP PREC Binary (3 bits)   IP PREC Decimal Value
ROUTINE                     000                       0
PRIORITY                    001                       1
IMMEDIATE                   010                       2
FLASH                       011                       3
FLASH OVERRIDE              100                       4
CRITICAL                    101                       5
INTERNETWORK CONTROL        110                       6
NETWORK CONTROL             111                       7


DiffServ Field = (6 bits DSCP + 2 bits ECN)

DSCP PHB    DSCP Binary   DSCP Decimal   DS-Field Binary   DS-Field   DS-Field
(8x + 2y)   (6 bits)      (6 bits)       (1 byte)          Decimal    Hex
Default     000 000       0              000 000 00        0          0x00
CS1         001 000       8              001 000 00        32         0x20
AF11        001 010       10             001 010 00        40         0x28
AF12        001 100       12             001 100 00        48         0x30
AF13        001 110       14             001 110 00        56         0x38
CS2         010 000       16             010 000 00        64         0x40
AF21        010 010       18             010 010 00        72         0x48
AF22        010 100       20             010 100 00        80         0x50
AF23        010 110       22             010 110 00        88         0x58
CS3         011 000       24             011 000 00        96         0x60
AF31        011 010       26             011 010 00        104        0x68
AF32        011 100       28             011 100 00        112        0x70
AF33        011 110       30             011 110 00        120        0x78
CS4         100 000       32             100 000 00        128        0x80
AF41        100 010       34             100 010 00        136        0x88
AF42        100 100       36             100 100 00        144        0x90
AF43        100 110       38             100 110 00        152        0x98
CS5         101 000       40             101 000 00        160        0xA0
EF          101 110       46             101 110 00        184        0xB8
CS6         110 000       48             110 000 00        192        0xC0
CS7         111 000       56             111 000 00        224        0xE0
The CS (Class-Selector) codepoints above are in the form 'xxx000'. The first three bits 'xxx' are the IP precedence bits for backwards compatibility, while the last three bits are set to zero. Each IP precedence value is mapped to a DiffServ value known as a Class-Selector codepoint. If a packet is received from a non-DiffServ-aware router that used IP precedence markings, the DiffServ router can still understand the encoding as a Class-Selector codepoint.
The DiffServ model also introduced two types of forwarding classes: AF and EF.
The EF (Expedited Forwarding) traffic is often given strict priority queuing above all other traffic classes. The design aim of EF is to provide a low-loss, low-latency, low-jitter, end-to-end expedited service through the network. These characteristics are suitable for voice, video and other real-time services.
The AF (Assured Forwarding) behavior allows the operator to provide assurance of delivery as long as the traffic does not exceed some subscribed rate. Traffic that exceeds the subscription rate faces a higher probability of being dropped if congestion occurs. The AF per-hop behavior group defines four separate AF classes. Within each class (1 to 4), packets are given a drop precedence (high = 3, medium = 2 or low = 1). The first three bits of the six-bit DSCP field define the class, the next two bits define the drop probability, and the last bit is reserved (zero). AF values are written in the format AFxy, where 'x' represents the AF class (a HIGHER class value is more PREFERRED) and 'y' represents the drop probability (a HIGHER value is more likely to be DROPPED).
AF23, for example, denotes class 2 and a high drop preference of 3. If AF23 was competing with AF21, AF23 will be dropped before AF21, since they are in the same class. But if you had AF33 and AF21, AF33 is a more important class, therefore AF21 will be dropped first.
A nice formula to work out the decimal value of the AF bits is 8x + 2y. Example: AF31 = (8*3) + (2*1), thus AF31 = 26.
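The formula is easy to sanity-check against the table; a throwaway shell function (a hypothetical helper, just for illustration):

```shell
# Decimal DSCP for AFxy: x = class (1-4), y = drop precedence (1-3)
af_dscp() {
    echo $(( 8 * $1 + 2 * $2 ))
}

af_dscp 3 1   # AF31 -> 26
af_dscp 2 3   # AF23 -> 22
```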
Optionally, you don't have to match any of the predefined DiffServ values. You can match any of the 64 DSCP values (0-63) by configuring just that decimal value.

  • The second-last heading, "DS-Field Decimal", is synonymous with the TOS-Byte decimal field. This is the value used in extended pings to generate ICMP traffic with a specific QOS value.
  • The last heading, "DS-Field Hex", is the value you will see/use in a verbose NetFlow output.
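Those two DS-field columns follow mechanically from the DSCP value: the DS-field byte is just the 6-bit DSCP shifted left past the two ECN bits. For example, for EF (DSCP 46):

```shell
# DS-field decimal and hex for EF: DSCP 46 shifted left 2 bits
printf '%d 0x%X\n' $(( 46 << 2 )) $(( 46 << 2 ))   # -> 184 0xB8
```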
found at:

Tuesday, April 30, 2013

ASA Smart Call Home common uses and periodic monitoring


Purpose of this document

Smart Call Home is a feature introduced in ASA firewall version 8.2 that allows for periodic monitoring of the firewall device. This document describes how to leverage this feature to monitor and troubleshoot network issues.

Configuring Smart Call Home

To configure Smart Call Home, use the following document:

Common Uses

Configuration Backups

Gathering configuration backups periodically is useful in case of device replacement or change control. It helps to identify the last working configuration and archives changes made to the firewall.

hostname (config)# service call-home
hostname (config)# call-home
hostname (cfg-call-home)# contact-email-addr [email protected]
hostname (cfg-call-home)# mail-server priority 1
hostname (cfg-call-home)# profile ConfigBackup-1
hostname (cfg-call-home-profile)# destination address email [email protected]
hostname (cfg-call-home-profile)# destination transport-method email
hostname (cfg-call-home-profile)# subscribe-to-alert-group configuration export full periodic monthly

The configuration alert-group (as configured above with the export full, non-default option) includes the commands:
- show call-home registered-module status | exclude disabled
- show running-config
- show startup-config
- show access-list | include elements

In the above example, the firewall will send these outputs to the email address [email protected] monthly.

Network Profiling using Snapshots

Network profiling is an important process that allows a network administrator to understand current utilization levels of their network. This is important for monitoring current load and feature usage, as well as anomalous behaviour. Having good archived historical network profile data helps to troubleshoot the most complex networking problems, such as oversubscription and load issues. Additionally, it provides an early warning system to help net admins understand when their network is reaching capacity.

Snapshots are a Smart Call Home feature that allows the user to customize which commands are sent by the ASA.

In the below example, the network administrator is interested in understanding the network utilization of their ASA. As a result, the snapshot profile is built to gather outputs relevant to network utilization:
hostname (config)# service call-home
hostname (config)# call-home
hostname (cfg-call-home)# contact-email-addr [email protected]
hostname (cfg-call-home)# mail-server priority 1
hostname (cfg-call-home)# alert-group-config snapshot
hostname (cfg-call-home-snapshot)# add-command "show traffic"
hostname (cfg-call-home-snapshot)# add-command "show interface detail"
hostname (cfg-call-home-snapshot)# add-command "show perfmon"
hostname (cfg-call-home-snapshot)# add-command "show conn count"
hostname (cfg-call-home-snapshot)# add-command "show xlate count"
hostname (cfg-call-home-snapshot)# add-command "show service-policy"
hostname (cfg-call-home)# profile NetworkProfiling-1
hostname (cfg-call-home-profile)# destination address email [email protected]
hostname (cfg-call-home-profile)# destination transport-method email
hostname (cfg-call-home-profile)# subscribe-to-alert-group snapshot periodic interval 120

These outputs will be gathered periodically every 120 minutes as emails, which the network administrator can then parse and format into graphs or charts. In the above example, the network administrator will be able to graph the current traffic rate through all the interfaces, the current rate of connections, as well as the current connection and xlate counts. Additionally, the net admin was interested in knowing how much traffic through the firewall was being sent through the service-policy, which is the last output included in the snapshot.

Device Oversubscription Issues

Network profiling is very useful to monitor the current status of a network. But when there is a network-load-related issue, snapshots can be used to more efficiently isolate the problem.

When a network administrator suspects that the firewall is reaching a load limit, they can leverage Smart Call Home and the snapshot feature to provide very specific data that helps to isolate oversubscription-related issues. For more information regarding this specific issue, please refer to the following document:

Specific to Smart Call Home, the following snapshot profile will help to gather the necessary data:
hostname (config)# service call-home
hostname (config)# call-home
hostname (cfg-call-home)# contact-email-addr [email protected]
hostname (cfg-call-home)# mail-server priority 1
hostname (cfg-call-home)# alert-group-config snapshot
hostname (cfg-call-home-snapshot)# add-command "show cpu detailed"
hostname (cfg-call-home-snapshot)# add-command "show processes cpu-usage"
hostname (cfg-call-home-snapshot)# add-command "show processes cpu-hog"
hostname (cfg-call-home-snapshot)# add-command "show interface detail | i line|overrun|no buffer"
hostname (cfg-call-home-snapshot)# add-command "show memory detail"
hostname (cfg-call-home)# profile Oversubscription-1
hostname (cfg-call-home-profile)# destination address email [email protected]
hostname (cfg-call-home-profile)# destination transport-method email
hostname (cfg-call-home-profile)# subscribe-to-alert-group snapshot periodic interval 120
By using the document linked above, the net admin understands that oversubscription is primarily caused by CPU utilization and network load. Since the net admin is already gathering network profile information, the only additional information required concerns device-level utilization. The snapshot profile above gathers information regarding CPU utilization, interface oversubscription and memory levels.

The Smart Call Home information gathered in both the network profiling and device oversubscription can be graphed to better understand whether the oversubscription behaviour is periodic or consistent. A consistent problem may indicate a network attack or infected host, while a periodic behaviour tends to be caused by network load.

VPN Utilization

Since VPN features are licensed on the ASA platforms, it is important for a network administrator to understand the utilization levels of the VPN deployment. This will help to forecast VPN expansion requirements to accommodate network growth.

Below is a profile that provides the necessary VPN information:
hostname (config)# service call-home
hostname (config)# call-home
hostname (cfg-call-home)# contact-email-addr [email protected]
hostname (cfg-call-home)# mail-server priority 1
hostname (cfg-call-home)# alert-group-config snapshot
hostname (cfg-call-home-snapshot)# add-command "show vpn-sessiondb"
hostname (cfg-call-home-snapshot)# add-command "show crypto ipsec sa"
hostname (cfg-call-home-snapshot)# add-command "show crypto isakmp sa"
hostname (cfg-call-home-snapshot)# add-command "show webvpn statistics"
hostname (cfg-call-home-snapshot)# add-command "show crypto protocol statistics all"
hostname (cfg-call-home)# profile VPNUtilization-1
hostname (cfg-call-home-profile)# destination address email [email protected]
hostname (cfg-call-home-profile)# destination transport-method email
hostname (cfg-call-home-profile)# subscribe-to-alert-group snapshot periodic interval 120

found at:


Internet Storm Center Infocon Status
