ceph-deploy error : GPG error trusty InRelease: Clearsigned file isn’t valid, got ‘NODATA’

Just tried to install the ceph hammer version on Ubuntu 14.04.5 for test purposes. First of all, ceph-deploy versions newer than 1.5.25 don't install hammer even if you pass the "--release hammer" parameter.
So, I installed that version.
sudo pip install ceph-deploy==1.5.25
But in this version the repo domain is defined as "ceph.com", and ceph-deploy adds the following line to the apt sources:
deb https://ceph.com/debian-hammer/ trusty main

Now the problem is that ceph.com redirects to download.ceph.com. When ceph-deploy runs apt-get, apt-get fails with a GPG error like the following:

E: GPG error: http://ceph.com trusty InRelease: Clearsigned file isn't valid, got 'NODATA' (does the network require authentication?)
99% [1 Sources bzip2 0 B] [7 InRelease gpgv 618 B] [5 Packages 5489 kB/5859 kB]
Splitting up /var/lib/apt/lists/partial/ceph.com_debian-hammer_dists_trusty_InRelease
Ign http://ceph.com trusty InRelease

To fix it, I edited the file below.
/usr/local/lib/python2.7/dist-packages/ceph_deploy/hosts/debian/install.py
Find the line below and replace ceph.com with download.ceph.com. Afterwards, you should be able to install the ceph hammer packages on the nodes without the GPG error.

if version_kind == 'stable':
    url = 'http://ceph.com/debian-{version}/'.format(
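The same substitution can also be scripted with sed instead of edited by hand. Below it is demonstrated on a copy of the relevant line so you can see exactly what changes; the same expression can be pointed at the install.py path above (back the file up first). This is a sketch, not part of the original fix.

```shell
# The line ceph-deploy uses to build the repo URL, before the fix
line="url = 'http://ceph.com/debian-{version}/'"

# Same replacement the post describes: point the URL at download.ceph.com
echo "$line" | sed 's|http://ceph.com|http://download.ceph.com|'
# prints: url = 'http://download.ceph.com/debian-{version}/'
```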

How to install Calamari for Ceph Cluster on Ubuntu 14.04

Calamari is a web-based monitoring and management tool for Ceph. In this post we will install Calamari on a working Ceph cluster. The Calamari node and all Ceph nodes run Ubuntu 14.04, and we will use the ceph-deploy utility to install packages. This article is for test purposes and should give you an idea of how a Calamari installation works.
We have 3 nodes in the Ceph cluster:
cpm01 – Ceph Mon
cpm02 – Ceph OSD
cpm03 – Ceph OSD

Prepare an Ubuntu 14.04 machine (it can be a VM) with a default installation and follow the steps below.
Step 1
Edit your /etc/hosts file and add all of your ceph nodes:
10.4.4.1 cpm01
10.4.4.2 cpm02
10.4.4.3 cpm03
...
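The entries above can also be appended in one command. The snippet below writes to a scratch file so it is safe to try as-is; for the real thing, pipe into `sudo tee -a /etc/hosts` instead:

```shell
# Example node entries from this post; the scratch path is just for demonstration
printf '%s\n' \
  '10.4.4.1 cpm01' \
  '10.4.4.2 cpm02' \
  '10.4.4.3 cpm03' | tee -a /tmp/hosts.demo
```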

Create a user on your ceph nodes (use the same username as on the Calamari node):
ssh user@ceph-server
sudo useradd -d /home/calamariuser -m calamariuser
sudo passwd calamariuser

On the ceph nodes, add your new user to sudoers so it can run sudo without a password prompt:
echo "calamariuser ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/calamariuser
sudo chmod 0440 /etc/sudoers.d/calamariuser

On the Calamari node, generate an ssh key and copy it to all your ceph nodes.
Do not run this command with sudo or as root; run it as calamariuser. Accept the defaults and leave the passphrase empty.
ssh-keygen

Now copy the generated key to your ceph nodes:
ssh-copy-id calamariuser@cpm01
ssh-copy-id calamariuser@cpm02
ssh-copy-id calamariuser@cpm03
...

To keep things tidy and in one place:
cd ~
mkdir calamarifiles
cd calamarifiles

Now download and install the ceph-deploy utility. Don't install ceph-deploy from the default Ubuntu repos (ver 1.40), because that build doesn't have the calamari commands. Instead, we will install the latest deb package from ceph.
wget http://download.ceph.com/debian-firefly/pool/main/c/ceph-deploy/ceph-deploy_1.5.28trusty_all.deb
sudo dpkg -i ceph-deploy_1.5.28trusty_all.deb
Run "sudo apt-get -f install" if you hit any dependency problems.

Download the calamari deb packages
wget http://download.ceph.com/calamari/1.3.1/ubuntu/trusty/pool/main/c/calamari/calamari-server_1.3.1.1-1trusty_amd64.deb
wget http://download.ceph.com/calamari/1.3.1/ubuntu/trusty/pool/main/c/calamari-clients/calamari-clients_1.3.1.1-1trusty_all.deb
wget http://download.ceph.com/calamari/1.3.1/ubuntu/trusty/pool/main/d/diamond/diamond_3.4.67_all.deb

Step 2
Installing Salt packages
wget -O - https://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest/SALTSTACK-GPG-KEY.pub | sudo apt-key add -
Add the following line to /etc/apt/sources.list:
deb http://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest trusty main
Then run the commands below
sudo apt-get update
sudo apt-get install salt-syndic
sudo apt-get install salt-minion
sudo apt-get install salt-master
sudo apt-get install -y apache2 libapache2-mod-wsgi libcairo2 supervisor python-cairo libpq5 postgresql

sudo dpkg -i calamari-server*.deb calamari-clients*.deb
After this step you may see some python dependency errors, like "python-gevent python-twisted python-greenlet python-txamqp".
That's normal; just run sudo apt-get -f install to fix the dependencies.

Once everything is OK, run the following command and follow the instructions. It will ask you for some account information for management:
sudo calamari-ctl initialize
Finally, open your web browser, visit the Calamari node address, and log into the Calamari user interface with the account you created.

Now you should see the Calamari first-run screen.

Let’s connect our nodes.

Step 1
As mentioned in the web interface, we will use ceph-deploy to connect the nodes and install the required packages on them.
Edit cephdeploy.conf and add the master definition to the file. This information will be used by your ceph nodes to connect.

On calamari node,
nano ~/.cephdeploy.conf
Add these lines to the top of the file, change the master address, then save and exit:
[ceph-deploy-calamari]
master = your calamari FQDN address
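With a concrete value filled in, the top of the file would look like this; the FQDN below is a made-up example, use your own:

```ini
[ceph-deploy-calamari]
master = calamari01.example.local
```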

Now run the following command and follow the debug messages. Everything should complete without errors.
ceph-deploy calamari connect <node> [<node> ...]
Example
ceph-deploy calamari connect cpm01 cpm02 cpm03

Step 2
Now we should copy the diamond_3.4.67_all.deb file to all ceph nodes and install it. Diamond is a python daemon that collects system metrics.

On calamari node,
cd ~/calamarifiles
scp diamond_3.4.67_all.deb calamariuser@cephnode:/tmp/

Now SSH to each ceph node and install it.
cd /tmp
sudo dpkg -i diamond_3.4.67_all.deb

Fix any dependencies as before, if needed.
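The copy-and-install steps can be scripted from the calamari node for all nodes at once. The loop below is a dry run (using this post's example node names): it only prints the commands it would run; remove the echo to execute them for real.

```shell
# Dry run: show the scp/ssh commands for each ceph node in this example setup
for node in cpm01 cpm02 cpm03; do
    echo scp diamond_3.4.67_all.deb "calamariuser@$node:/tmp/"
    echo ssh "calamariuser@$node" "sudo dpkg -i /tmp/diamond_3.4.67_all.deb"
done
```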

Step 3

SSH to each ceph node and check that the following config file contains the line "master: <your calamari node FQDN>". If it doesn't, add it and restart the minion service:
sudo nano /etc/salt/minion.d/calamari.conf
sudo service salt-minion restart
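For reference, the file holds a single master line of this form (the FQDN is a made-up example):

```yaml
master: calamari01.example.local
```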

Make sure the ceph nodes can resolve your calamari node's FQDN. Otherwise, add it to their /etc/hosts files.
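A quick way to verify name resolution from a ceph node; the FQDN below is a made-up example, substitute your own. getent consults the same sources (hosts file, DNS) that most services use:

```shell
# Check whether the Calamari node's name resolves from this host
calamari_fqdn="calamari01.example.local"
if getent hosts "$calamari_fqdn" > /dev/null 2>&1; then
    echo "$calamari_fqdn resolves"
else
    echo "$calamari_fqdn does not resolve; add it to /etc/hosts"
fi
```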

Step 4
Now refresh your calamari web interface. You should see your nodes requesting to register. Follow the screen instructions.
That's all. You now have a working monitoring and management system for Ceph.
Feel free to ask any questions.

CEPH IOError: connexion already closed

While deploying a ceph cluster with the ceph-deploy utility, if you receive the error below, you should add the user that is deploying ceph to sudoers on the remote node.
In this scenario, "cephusr" is the account deploying ceph on the remote host. Run these commands on each node you are deploying ceph to. The OS in this example is Ubuntu 14.04.
echo "cephusr ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephusr
sudo chmod 0440 /etc/sudoers.d/cephusr
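The chmod matters: sudo can ignore or reject files under /etc/sudoers.d with unsafe permissions, and 0440 (read-only for root and group) is the expected mode. A quick sanity check on a scratch copy of the fragment:

```shell
# Write the fragment to a scratch file and confirm the mode ends up as 0440
f=/tmp/cephusr.sudoers.demo
echo "cephusr ALL = (root) NOPASSWD:ALL" > "$f"
chmod 0440 "$f"
stat -c '%a' "$f"
# prints: 440
```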

ERROR
Traceback (most recent call last):
File "/usr/bin/ceph-deploy", line 21, in <module>
sys.exit(main())
File "/usr/lib/python2.7/dist-packages/ceph_deploy/util/decorators.py", line 62, in newfunc
return f(*a, **kw)
File "/usr/lib/python2.7/dist-packages/ceph_deploy/cli.py", line 136, in main
return args.func(args)
File "/usr/lib/python2.7/dist-packages/ceph_deploy/install.py", line 37, in install
distro = hosts.get(hostname, username=args.username)
File "/usr/lib/python2.7/dist-packages/ceph_deploy/hosts/__init__.py", line 37, in get
conn.import_module(remotes)
File "/usr/lib/python2.7/dist-packages/ceph_deploy/lib/remoto/connection.py", line 47, in import_module
self.remote_module = ModuleExecute(self.gateway, module, self.logger)
File "/usr/lib/python2.7/dist-packages/ceph_deploy/lib/remoto/connection.py", line 53, in __init__
self.channel = gateway.remote_exec(module)
File "/usr/lib/python2.7/dist-packages/ceph_deploy/lib/remoto/lib/execnet/gateway.py", line 117, in remote_exec
channel = self.newchannel()
File "/usr/lib/python2.7/dist-packages/ceph_deploy/lib/remoto/lib/execnet/gateway_base.py", line 967, in newchannel
return self._channelfactory.new()
File "/usr/lib/python2.7/dist-packages/ceph_deploy/lib/remoto/lib/execnet/gateway_base.py", line 743, in new
raise IOError("connexion already closed: %s" % (self.gateway,))
IOError: connexion already closed: Gateway id='gw0' not-receiving, thread model, 0 active channels

PHP OpCache – worth to use

PHP 5.5 comes with a caching engine named OPcache, which stores precompiled scripts in memory, like APC. After the first execution of a script, the precompiled version is served from memory, which leads to a performance boost in your PHP application.

You can find how to enable and configure OPcache on the web. Here I simply want to show you whether OPcache affects PHP execution and performance or not.
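For reference, OPcache is controlled from php.ini; a minimal fragment looks like this (the directive names are real, the values are just illustrative):

```ini
; enable OPcache and give it shared memory for compiled scripts
opcache.enable=1
opcache.memory_consumption=128
opcache.max_accelerated_files=4000
```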

Continue reading “PHP OpCache – worth to use”

GHOST: glibc gethostbyname buffer overflow – CVE-2015-0235

You can test your system against GHOST, the glibc gethostbyname buffer overflow (CVE-2015-0235).

wget http://www.cirgan.net/GHOST.c
or the compiled binary
wget http://www.cirgan.net/GHOST

root@testme /home # gcc GHOST.c -o GHOST
root@testme /home # ./GHOST
not vulnerable


GNU Bash (ShellShock) Vulnerability – CVE-2014-6271

A critical vulnerability was discovered recently. Check the link below for more information:

http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-6271

You can also check your server from the shell with the command below.

env x='() { :;}; echo vulnerable!' bash -c ""

A patched system's output looks like this:

bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
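The test can be wrapped in a small script that interprets the result for you. A vulnerable bash prints "vulnerable!" on stdout; a patched one prints nothing there (the warnings above go to stderr):

```shell
# CVE-2014-6271 check: on a patched bash the injected function body never runs
if env x='() { :;}; echo vulnerable!' bash -c "" 2>/dev/null | grep -q vulnerable; then
    echo "bash is VULNERABLE to CVE-2014-6271"
else
    echo "bash looks patched"
fi
```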


How to install OpenVPN Server

Below you will find how to install an OpenVPN server. At the end of the article:

1- We will have a VPN server running under Linux.
2- We will use Linux PAM accounts to authenticate clients.
3- All connected clients can access the local network and each other.
4- All clients will use the VPN server to access the internet.
5- The VPN server will act as a remote-to-site gateway.
6- We will have a sample Windows client configuration to connect with.

Setup
Ubuntu Server 12.04
WAN 192.168.1.33/30
LAN 172.16.70.0/24
VPN 10.8.0.0/24
Continue reading “How to install OpenVPN Server”

SFTP – subsystem request failed on channel X

You are using sftp to connect to your server in a chrooted environment and you hit the error below.

subsystem request failed on channel 0
Couldn’t read packet: Connection reset by peer

This is caused by a wrong external sftp subsystem configured in your sshd_config. Edit your sshd_config and use internal-sftp instead: OpenSSH already has built-in sftp functionality, so you don't need any external binary.

Find the line beginning with "Subsystem", comment it out, and add the following line:

Subsystem sftp internal-sftp
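The edit itself is a one-liner with sed. It is shown here against a scratch copy of the typical Debian/Ubuntu default line so you can see the before and after; point the same expression at /etc/ssh/sshd_config for real:

```shell
# Scratch copy of a typical stock Subsystem line (Debian/Ubuntu default path)
echo 'Subsystem sftp /usr/lib/openssh/sftp-server' > /tmp/sshd_config.demo

# Swap the external binary for the built-in implementation
sed -i 's|^Subsystem sftp .*|Subsystem sftp internal-sftp|' /tmp/sshd_config.demo
cat /tmp/sshd_config.demo
# prints: Subsystem sftp internal-sftp
```

After changing the real file, restart the service (sudo service ssh restart on Ubuntu) and retry the sftp connection.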

Installing Adaptec Storage Manager (ASM) on Linux (Ubuntu/Debian)

To manage your RAID controller from the CLI on your Linux server, follow the steps below. In this scenario we are using an Adaptec 5805 RAID controller.

STEP 1 – First, download the necessary files and untar them

root@lnx:/home# wget http://download.adaptec.com/raid/storage_manager/asm_linux_x64_v7_31_18856.tgz
--2013-07-15 11:08:05-- http://download.adaptec.com/raid/storage_manager/asm_linux_x64_v7_31_18856.tgz
Resolving download.adaptec.com (download.adaptec.com)... 93.184.221.133
Connecting to download.adaptec.com (download.adaptec.com)|93.184.221.133|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 116573943 (111M) [application/x-tgz]
Saving to: `asm_linux_x64_v7_31_18856.tgz'
100%[===================================================>] 116,573,943 12.3M/s in 16s
2013-07-15 11:08:22 (6.79 MB/s) - `asm_linux_x64_v7_31_18856.tgz' saved [116573943/116573943]

Continue reading “Installing adaptec storage manager(asm) on linux (ubuntu/debian)”

Arcserve Linux Agent and libstdc++-libc6.1-1.so.2 requirement

If you encounter the following error while installing the Arcserve Linux Agent:

"The components you selected require this library file: libstdc++-libc6.1-1.so.2. Typically, Linux comes with this library file. It is located in the /usr/lib path."

try installing the compat libraries.
For CentOS:
yum install compat-libstdc++*
For Debian/Ubuntu:
apt-get install libstdc++*
then rerun the installation script.
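You can also check up front whether the library is already present; the path comes straight from the error message:

```shell
# Report whether the legacy compat library exists before touching packages
lib=/usr/lib/libstdc++-libc6.1-1.so.2
if [ -e "$lib" ]; then
    echo "$lib is present"
else
    echo "$lib is missing; install the compat package"
fi
```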