Netscaler VIP Realtime traffic usage

I googled everywhere and checked the documentation, but I couldn't find any information on how to measure how much traffic is passing through a load balancer's individual VIPs.

So my solution is to measure RX/TX bytes/s via SNMP. To accomplish this, follow the steps below.

Requirements :
An SNMP browser ( I used iReasoning MIB Browser )
The Netscaler SNMPv2 MIBs ( you can download them from your Netscaler appliance )

I assume that SNMP is enabled on your appliance and functioning normally.

Load the MIB you downloaded from the Netscaler and enter your SNMP settings. Navigate to nsVserverGroup in the MIB browser and send an SNMP walk.


You will get the names starting with vsvrName on the left side. Select your favorite VIP :)

Now let's take the first line in our example. The full OID will look like the one below.


Crop off the vsvrName part and you will have the unique OID suffix for your VIP:

. < this is our VIP

Now, we have to put the RX or TX rate OID in front of
. Let's find the RX OID.

Check the list below and find the lines starting with vsvrRxBytesRate. You can pick any of them, because what we want to learn is which OID represents vsvrRxBytesRate.


As you can see, . is the whole OID and . is the part representing vsvrRxBytesRate.

Now, if you combine both OIDs, you will get the RX bytes/s of that VIP. Note that this is an instantaneous rate value, not a cumulative counter. It may not be as good as a traffic counter, but it is the best you can do :)


Once you have the value, multiply it by 8 to get bits and divide by 1024 to get kbit/s.
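The conversion can be sketched as a small shell helper. The snmpget line in the comments is illustrative only: the community string, host, and combined OID are placeholders you must substitute with your own values.

```shell
#!/bin/sh
# Convert an SNMP bytes/s reading to kbit/s.
bytes_to_kbit() {
    # bytes/s * 8 = bits/s; bits/s / 1024 = kbit/s
    echo $(( $1 * 8 / 1024 ))
}

# Worked example: 1048576 bytes/s -> 8192 kbit/s
bytes_to_kbit 1048576

# In practice the reading would come from snmpget, e.g.
# (community, host, and the combined OID are placeholders):
# RATE=$(snmpget -v2c -c public -Oqv <netscaler-ip> <vsvrRxBytesRate-OID>.<vip-suffix>)
# bytes_to_kbit "$RATE"
```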

There are also lots of other OIDs you can pick using this method.




Automating Fortigate Backups

As you know, we can't schedule backups on a Fortigate. So you can schedule a cron job to back up your Fortigate box and send the backup via FTP.

Requirements :
Any Linux server
An FTP server ( may be the same Linux machine )

First we need an expect script to send commands to the Fortigate box; then we will execute it via a sh script. I wish someone would port it to PowerShell :)
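The scheduling side is a plain crontab entry. The paths below are hypothetical examples; the expect script is what actually logs in to the Fortigate and triggers the FTP upload.

```
# Hypothetical crontab entry: run the expect-based backup nightly at 02:00.
# The expect script logs into the Fortigate and sends the backup via FTP.
0 2 * * * /usr/bin/expect /opt/scripts/fortigate-backup.exp >> /var/log/fortigate-backup.log 2>&1
```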
Continue reading “Automating Fortigate Backups”

SFTP – subsystem request failed on channel X

You are using SFTP to connect to your server in a chrooted environment and you hit the error below.

subsystem request failed on channel 0
Couldn’t read packet: Connection reset by peer

This happens because an external sftp-server binary is configured in your sshd_config, and that binary is not available inside the chroot. Edit your sshd_config and use internal-sftp instead: OpenSSH already has built-in SFTP functionality, so you don't need any external binary.

Find the line beginning with “Subsystem”, comment it out, add the following line, and restart the sshd service.

Subsystem sftp internal-sftp
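For a chrooted setup, a minimal sshd_config sketch looks like this. The group name and chroot path are examples only; adjust them to your environment and restart sshd afterwards.

```
# /etc/ssh/sshd_config -- built-in SFTP server, no external binary needed
Subsystem sftp internal-sftp

# Example chroot for an "sftponly" group (group name and path are examples)
Match Group sftponly
    ChrootDirectory /srv/sftp/%u
    ForceCommand internal-sftp
    AllowTcpForwarding no
```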

Veeam – Unable to truncate transaction logs RPC 1726

After a Veeam 6.5 to 7.0 upgrade, some VMs with application-aware processing enabled gave the following error.

Unable to truncate transaction logs. Details: RPC function call failed. Function name: [BlobCall]. Target machine:
[xx.xx.xx.xx]. RPC error:The remote procedure call failed. Code: 1726

You may also see Veeam guest agent services terminating unexpectedly in your Windows event logs:

Faulting application name: VeeamGuestAgent.exe, version:, time stamp: 0x520b81e7
Faulting module name: dbghelp.dll, version: 6.1.7601.17514, time stamp: 0x4ce7c5ac
Exception code: 0xc00000fd

I checked everything on the VM side ( permissions, VSS writers, etc. ) with no success.

My solution: I disabled application-aware processing on the backup job and took a backup, then re-enabled application-aware processing, and it is now working fine. I think the Veeam server pushed a new guest agent to the VM. I no longer see service termination events, and the logs are truncated successfully. Hope this helps in your case.

The federation certificate expiration in VMware vCloud Director Orgs

Organization administrators may receive a message like the one below:

The federation certificate expiration is mm/dd/yyyy hh:mm:ss . An expired certificate may disable federation with the identity provider setup with your organization. The certificate can be regenerated from the Federation Settings page.

To fix it, enter the Organization and regenerate the certificate.

Go to the corresponding Organization / Administration / Federation
Click Regenerate




Installing Adaptec Storage Manager (ASM) on Linux (Ubuntu/Debian)

To manage the RAID controller on your Linux server from the CLI, follow the steps below. In this scenario we are using an Adaptec 5805 RAID controller.

STEP 1 – download the necessary files and untar them

root@lnx:/home# wget
--2013-07-15 11:08:05--
HTTP request sent, awaiting response... 200 OK
Length: 116573943 (111M) [application/x-tgz]
Saving to: `asm_linux_x64_v7_31_18856.tgz'
100%[===================================================>] 116,573,943 12.3M/s in 16s
2013-07-15 11:08:22 (6.79 MB/s) - `asm_linux_x64_v7_31_18856.tgz' saved [116573943/116573943]
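The untar step itself is a single tar command. The sketch below builds a stand-in archive so it runs anywhere; with the real download you would extract asm_linux_x64_v7_31_18856.tgz the same way (the manager directory and its contents are made up for the demo).

```shell
#!/bin/sh
# Build a stand-in .tgz so the example is self-contained.
mkdir -p src/manager && echo demo > src/manager/setup.bin
tar -czf asm_demo.tgz -C src manager

# The actual step, same form as for the real download:
#   tar -xzf asm_linux_x64_v7_31_18856.tgz
tar -xzf asm_demo.tgz
ls manager   # prints: setup.bin
```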

Continue reading “Installing adaptec storage manager(asm) on linux (ubuntu/debian)”

Arcserve Linux Agent and requirement

If you hit the following error while installing the Arcserve Linux Agent:

"The components you selected require this library file: Typically, Linux comes with this library file. It is located in the /usr/lib path."

try installing the compat libraries.
For CentOS:
yum install compat-libstdc++*
For Debian/Ubuntu:
apt-get install libstdc++*
Then rerun the installation script.
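Before rerunning the installer, you can check whether the runtime is already visible to the dynamic linker. This is just a quick sanity check, not part of the Arcserve documentation:

```shell
#!/bin/sh
# Ask the dynamic linker's cache whether libstdc++ is already visible.
if ldconfig -p 2>/dev/null | grep -q 'libstdc++'; then
    echo "libstdc++ present"
else
    echo "libstdc++ missing - install the compat package"
fi
```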

Parallels Plesk 11 installation failed ‘missing an installation prerequisite’

If you hit the error about a missing installation prerequisite and the sql-smo package fails, download and install the “Microsoft® System CLR Types for Microsoft® SQL Server® 2012” component manually, then rerun ai.exe. It should not fail again.

Failed to install ‘C:\ParallelsInstaller\parallels\PANEL-WIN_11.0.9\thirdparty-msi-Windows-any-x86_64\sqlsmo_x64.msi’: Fatal error during installation. (Error code 1603)
Error: Action sequence ‘install: installing MSSQL Server Management Objects…’ of package ‘sql-smo’ has been failed with code 1603
Not all packages were installed


This RRD was created on another architecture

I just moved a Cacti DB and its RRD files to a new server. After moving RRDs from a 32-bit OS to a 64-bit OS you will hit
"ERROR: This RRD was created on another architecture" in your HTTP error logs, and you will not see your historic graphs.

You should dump your RRD files to XML on the 32-bit OS, transfer them to the new (64-bit) server, and then restore the XMLs there.

on the 32-bit OS:
for i in `find . -name "*.rrd"`; do path_to_rrdtool dump "$i" > "$i.xml"; done

on the 64-bit OS:
for i in `find . -name "*.xml"`; do path_to_rrdtool restore "$i" "${i%.xml}"; done