ceph-deploy error : GPG error trusty InRelease: Clearsigned file isn’t valid, got ‘NODATA’

I just tried to install the Ceph Hammer release on Ubuntu 14.04.5 for test purposes. First of all, ceph-deploy versions newer than 1.5.25 don't install Hammer even if you pass the "--release hammer" parameter.
So, I installed that version:
sudo pip install ceph-deploy==1.5.25
But in this version the repo domain is defined as "ceph.com", so ceph-deploy adds the following line to the apt sources:
deb https://ceph.com/debian-hammer/ trusty main

Now the problem is that ceph.com redirects to download.ceph.com. When ceph-deploy executes apt-get, apt-get gives a GPG error like the following:

E: GPG error: http://ceph.com trusty InRelease: Clearsigned file isn't valid, got 'NODATA' (does the network require authentication?)
99% [1 Sources bzip2 0 B] [7 InRelease gpgv 618 B] [5 Packages 5489 kB/5859 kB]
Splitting up /var/lib/apt/lists/partial/ceph.com_debian-hammer_dists_trusty_InRelease
Ign http://ceph.com trusty InRelease

To fix it, I edited the file below:
/usr/local/lib/python2.7/dist-packages/ceph_deploy/hosts/debian/install.py
Find the lines below and replace ceph.com with download.ceph.com. Afterwards, you should be able to install the Ceph Hammer packages on the nodes without the GPG error.

if version_kind == 'stable':
    url = 'http://ceph.com/debian-{version}/'.format(
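
Instead of editing it by hand, a one-line sed against the install path given above should do the same replacement:
sudo sed -i 's#http://ceph.com#http://download.ceph.com#g' /usr/local/lib/python2.7/dist-packages/ceph_deploy/hosts/debian/install.py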

Fix “The name limit for the local computer network adapter card was exceeded”

One of our Hyper-V cluster nodes ran out of TCP sockets. We couldn't migrate or update the node or the VMs on it. By the way, don't think this socket problem is Hyper-V specific; you can hit it on any Windows Server 2008/2012 machine.

Anyway, I started to investigate.
The server had over 20k TIME_WAIT connections.
C:\>netstat -ano | find /c "TIME_WAIT"
20150

Most of the connections were on TCP 80. Strange, since there is no IIS on this box and the listener runs under the System PID :)
C:\>netstat -ano | findstr "TIME_WAIT" | more


TCP 10.88.56.70:80 x.x.x.x:33023 TIME_WAIT 0
TCP 10.88.56.70:80 x.x.x.x:33023 TIME_WAIT 0
TCP 10.88.56.70:80 x.x.x.x:33023 TIME_WAIT 0
TCP 10.88.56.70:80 x.x.x.x:33023 TIME_WAIT 0
TCP 10.88.56.70:80 x.x.x.x:33023 TIME_WAIT 0
TCP 10.88.56.70:80 x.x.x.x:33023 TIME_WAIT 0

Since it is a system process, I tried my luck with netsh to understand better what is going on with TCP 80. It turned out to be a Microsoft module: WSMAN!
netsh http show servicestate

Server session ID: FF00000A20000002
    Version: 1.0
    State: Active
    Properties:
        Max bandwidth: 4294967295
        Timeouts:
            Entity body timeout (secs): 120
            Drain entity body timeout (secs): 120
            Request queue timeout (secs): 120
            Idle connection timeout (secs): 120
            Header wait timeout (secs): 120
            Minimum send rate (bytes/sec): 150
    URL groups:
        URL group ID: FE00000A40000002
            State: Active
            Request queue name: Request queue is unnamed.
            Properties:
                Max bandwidth: inherited
                Max connections: inherited
                Timeouts:
                    Timeout values inherited
            Number of registered URLs: 3
            Registered URLs:
                HTTP://+:80/WSMAN/
                HTTP://+:5985/WSMAN/
                HTTP://+:47001/WSMAN/
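
Those URLs belong to WinRM / WS-Management. As an extra check (not part of the original investigation), you can list the WinRM listeners directly and see the same ports:
C:\>winrm enumerate winrm/config/listener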

I had to increase the dynamic port range of the server in order to migrate the VMs off of it, because the management services couldn't talk to each other and even DNS wasn't working.
The Windows Server default dynamic port count is 16384:
C:\>netsh int ipv4 show dynamicport tcp

Protocol tcp Dynamic Port Range
---------------------------------
Start Port : 49152
Number of Ports : 16384

Change the port range to something you like. Afterwards, everything should work and you can open new connections. When your work is done, reboot the server, because the old TIME_WAIT entries will still be hanging around :)
C:\>netsh int ipv4 set dynamicport tcp start=30000 numberofports=30000
Ok.

This problem can occur on any Windows machine that handles socket-intensive communication. I just tried to give you an example of what I ran into. Windows file servers, application servers, or MS-SQL servers that receive lots of TCP connections may face the same problem.

How to install Calamari for Ceph Cluster on Ubuntu 14.04

[Image: Ceph Calamari dashboard]
Calamari is a web-based monitoring and management tool for Ceph. In this post we will install Calamari on a working Ceph cluster. The Calamari node and all Ceph nodes are running Ubuntu 14.04. We will use the ceph-deploy utility to install packages. This article is for test purposes only and should give you an idea of the Calamari installation.
We have 3 nodes in the Ceph cluster:
cpm01 – Ceph Mon
cpm02 – Ceph OSD
cpm03 – Ceph OSD

Prepare an Ubuntu 14.04 machine (it can be a VM) with a default installation and follow the steps below.
Step 1
Edit your /etc/hosts file and add all of your Ceph nodes:
10.4.4.1 cpm01
10.4.4.2 cpm02
10.4.4.3 cpm03
...

Create a user on your Ceph nodes (the same username as you use on the Calamari node):
ssh user@ceph-server
sudo useradd -d /home/calamariuser -m calamariuser
sudo passwd calamariuser

On the Ceph nodes, add your new user to sudoers so it can run sudo without a password prompt:
echo "calamariuser ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/calamariuser
sudo chmod 0440 /etc/sudoers.d/calamariuser

On the Calamari node, generate an SSH key and copy it to all your Ceph nodes.
Do not run this command with sudo or as root; run it as calamariuser. Accept the defaults and leave the passphrase empty.
ssh-keygen

Now copy the generated key to your Ceph nodes:
ssh-copy-id calamariuser@cpm01
ssh-copy-id calamariuser@cpm02
ssh-copy-id calamariuser@cpm03
...

To keep things tidy and in one place:
cd ~
mkdir calamarifiles
cd calamarifiles

Now download and install the ceph-deploy utility. Don't install ceph-deploy from the default Ubuntu repos, because that version (1.40) doesn't have the calamari commands. Instead, we will install the latest deb package from Ceph.
wget http://download.ceph.com/debian-firefly/pool/main/c/ceph-deploy/ceph-deploy_1.5.28trusty_all.deb
sudo dpkg -i ceph-deploy_1.5.28trusty_all.deb
run "sudo apt-get -f install" if you meet any dependency problem.

Download the Calamari deb packages:
wget http://download.ceph.com/calamari/1.3.1/ubuntu/trusty/pool/main/c/calamari/calamari-server_1.3.1.1-1trusty_amd64.deb
wget http://download.ceph.com/calamari/1.3.1/ubuntu/trusty/pool/main/c/calamari-clients/calamari-clients_1.3.1.1-1trusty_all.deb
wget http://download.ceph.com/calamari/1.3.1/ubuntu/trusty/pool/main/d/diamond/diamond_3.4.67_all.deb

Step 2
Installing Salt packages
wget -O - https://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest/SALTSTACK-GPG-KEY.pub | sudo apt-key add -
Add the following line to /etc/apt/sources.list:
deb http://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest trusty main
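If you prefer a separate file instead of editing sources.list directly (my own habit, not required), this does the same thing:
echo "deb http://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest trusty main" | sudo tee /etc/apt/sources.list.d/saltstack.list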
Then run the commands below:
sudo apt-get update
sudo apt-get install salt-syndic
sudo apt-get install salt-minion
sudo apt-get install salt-master
sudo apt-get install -y apache2 libapache2-mod-wsgi libcairo2 supervisor python-cairo libpq5 postgresql

sudo dpkg -i calamari-server*.deb calamari-clients*.deb
After this step you may see some Python dependency errors, such as "python-gevent python-twisted python-greenlet python-txamqp".
That's normal; just run sudo apt-get -f install to fix the dependencies.

Once everything is OK, run the following command and follow the instructions. It will ask you for some account information for management:
sudo calamari-ctl initialize
Finally, open your web browser, visit the Calamari node address, and log into the Calamari user interface with the account you created.

Now you should see a screen like the one below:
[Screenshot: Calamari first-run screen]

Let’s connect our nodes.

Step 1
As mentioned in the web interface, we will use ceph-deploy to connect the nodes and install the required packages on them.
Edit cephdeploy.conf and add the master definition to the file. This information will be used by your Ceph nodes to connect.

On the Calamari node:
nano ~/.cephdeploy.conf
Add these lines to the top of the file, change the master address, then save and exit:
[ceph-deploy-calamari]
master = your calamari FQDN address

Now run the following command and follow the debug messages. Everything should complete without errors.
ceph-deploy calamari connect <node> [<node> ...]
Example:
ceph-deploy calamari connect cpm01 cpm02 cpm03

Step 2
Now we should copy the diamond_3.4.67_all.deb file to all Ceph nodes and install it. Diamond is a Python daemon that collects system metrics.

On the Calamari node:
cd ~/calamarifiles
scp diamond_3.4.67_all.deb calamariuser@cephnode:/tmp/

Now SSH to each Ceph node and install it:
cd /tmp
sudo dpkg -i diamond_3.4.67_all.deb

Fix any dependency problems as before if you hit them.
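
To avoid repeating the copy and install on every node by hand, a small loop like the one below does both in one pass. It assumes the three example node names from above and the passwordless sudo we set up earlier:
for node in cpm01 cpm02 cpm03; do
  scp diamond_3.4.67_all.deb calamariuser@$node:/tmp/
  ssh calamariuser@$node "sudo dpkg -i /tmp/diamond_3.4.67_all.deb || sudo apt-get -f install -y"
done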

Step 3

SSH to each Ceph node and check that the following config file contains "master: <your Calamari node FQDN>". If it doesn't, add it and restart the minion service:
sudo nano /etc/salt/minion.d/calamari.conf
sudo service salt-minion restart
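
For reference, the file only needs a single line pointing the Salt minion at the Calamari master; the FQDN below is a placeholder:
master: calamari01.example.local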

Make sure the Ceph nodes can resolve your Calamari node's FQDN. Otherwise, add it to their /etc/hosts file.
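For example, an entry like this on each Ceph node would do (the IP and hostname are placeholders):
10.4.4.10   calamari01.example.local   calamari01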

Step 4
Now refresh your Calamari web interface. You should see your nodes requesting to register; follow the on-screen instructions.
That's all, you now have a working monitoring and management system for Ceph.
Feel free to ask any questions.

CEPH IOError: connexion already closed

If you receive the error below while deploying a Ceph cluster with the ceph-deploy utility, you should add the user that is deploying Ceph on the remote node to sudoers.
In this scenario, "cephusr" is the account that is deploying Ceph on the remote host. Run these commands on each node you are deploying Ceph to. The OS in this example is Ubuntu 14.04.
echo "cephusr ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephusr
sudo chmod 0440 /etc/sudoers.d/cephusr
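
Before re-running ceph-deploy, a quick way to verify that the passwordless sudo is in place (the hostname is a placeholder):
ssh cephusr@cephnode01 sudo whoami
It should print "root" without asking for a password.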

ERROR
Traceback (most recent call last):
  File "/usr/bin/ceph-deploy", line 21, in <module>
    sys.exit(main())
  File "/usr/lib/python2.7/dist-packages/ceph_deploy/util/decorators.py", line 62, in newfunc
    return f(*a, **kw)
  File "/usr/lib/python2.7/dist-packages/ceph_deploy/cli.py", line 136, in main
    return args.func(args)
  File "/usr/lib/python2.7/dist-packages/ceph_deploy/install.py", line 37, in install
    distro = hosts.get(hostname, username=args.username)
  File "/usr/lib/python2.7/dist-packages/ceph_deploy/hosts/__init__.py", line 37, in get
    conn.import_module(remotes)
  File "/usr/lib/python2.7/dist-packages/ceph_deploy/lib/remoto/connection.py", line 47, in import_module
    self.remote_module = ModuleExecute(self.gateway, module, self.logger)
  File "/usr/lib/python2.7/dist-packages/ceph_deploy/lib/remoto/connection.py", line 53, in __init__
    self.channel = gateway.remote_exec(module)
  File "/usr/lib/python2.7/dist-packages/ceph_deploy/lib/remoto/lib/execnet/gateway.py", line 117, in remote_exec
    channel = self.newchannel()
  File "/usr/lib/python2.7/dist-packages/ceph_deploy/lib/remoto/lib/execnet/gateway_base.py", line 967, in newchannel
    return self._channelfactory.new()
  File "/usr/lib/python2.7/dist-packages/ceph_deploy/lib/remoto/lib/execnet/gateway_base.py", line 743, in new
    raise IOError("connexion already closed: %s" % (self.gateway,))
IOError: connexion already closed: Gateway id='gw0' not-receiving, thread model, 0 active channels

ldap_bind: Invalid credentials (49)

I was trying to find AD objects with ldapsearch under Linux, but somehow it always returned "ldap_bind: Invalid credentials (49)". I solved it by binding with the object@domain (UPN) format instead of a DN. Here is an example; it returns nothing by itself, but at least you will see that authentication is working. The first attempt below fails with the DN format; the working form follows.

ldapsearch -H ldap://your.domain.com/ -x -D cn=youruid,dc=your,dc=domain -W
ldap_initialize( ldap://your.domain.com:389/??base )
Enter LDAP Password:
ldap_bind: Invalid credentials (49)
additional info: 80090308: LdapErr: DSID-0C0903AA, comment: AcceptSecurityContext error, data 525, v1772

Working one:
ldapsearch -H ldap://your.domain.com/ -x -D "user@domain" -vvvvvvvv -W
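
Once the bind works, you can query for an actual object; the base DN, filter, and attributes below are just examples:
ldapsearch -H ldap://your.domain.com/ -x -D "user@domain" -W -b "dc=your,dc=domain" "(sAMAccountName=jdoe)" cn mail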

SafeNET PKCS Keypair generation failed

If you run a SafeNET HSM box with software 6.0 and firmware 6.22 in FIPS mode, you will hit errors while generating RSA PKCS key pairs. As you can see in the following test, the PKCS mechanism gives "Key pair generation failed".
Also, the HSM always returns CKR_MECHANISM_INVALID to the requesting application. For example, the Sun Java PKCS#11 provider returns something like this:
sun.security.pkcs11.wrapper.PKCS11Exception: CKR_MECHANISM_INVALID

C:\Program Files\SafeNet\LunaClient>Cmu.exe gen
Please enter password for token in slot 0 : ****************
Enter key type - [1] RSA [2] DSA [3] ECDSA : 1

Select RSA Mechanism Type -
[1] PKCS [2] FIPS 186-3 Only Primes [3] FIPS 186-3 Auxiliary Primes : 1
Enter modulus length (8 bit multiple) : 2048
Select public exponent - [1] 3 [2] 17 [3] 65537 : 3
Key pair generation failed

CKM_RSA_PKCS_KEY_PAIR_GEN is disabled in FIPS mode in 6.0/6.22. I haven't tried it, but there is a "Mechanism Remap for FIPS Compliance" option; please refer to your HSM guide. If you apply a firmware or software update, be careful with this setting: it makes it appear that you are getting a new, secure mechanism when really you are getting an outdated, insecure one. Anyway, it is better to do what FIPS says. Don't play around :)
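
In practice, following FIPS here means picking one of the FIPS 186-3 mechanisms in the same cmu dialog and a 65537 public exponent (option [3]), since FIPS 186-3 does not allow 3 or 17. I didn't capture that run, so the session below is only a sketch of what the choices would look like:
C:\Program Files\SafeNet\LunaClient>Cmu.exe gen
Please enter password for token in slot 0 : ****************
Enter key type - [1] RSA [2] DSA [3] ECDSA : 1

Select RSA Mechanism Type -
[1] PKCS [2] FIPS 186-3 Only Primes [3] FIPS 186-3 Auxiliary Primes : 2
Enter modulus length (8 bit multiple) : 2048
Select public exponent - [1] 3 [2] 17 [3] 65537 : 3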

Here is the chart of supported mechanisms:
[Chart: HSM mechanisms supported in FIPS mode]

Creating an NTL Between SafeNET HSM LunaSA and a Client

Below you will find the steps for registering the clients defined on our HSM device to the relevant partitions (slots) using NTL. I assume the partitions have already been created on the device and the clients have already been registered with the HSM.

You can list the partitions on the device with the following command:
[HSM] lunash:>partition list
                                        Storage (bytes)
                                   ----------------------------
   Partition       Name            Objects   Total     Used   Free
   ================================================================
   1110641543200   testpartition   0         1039288   0      1039288

Command Result : 0 (Success)
Continue reading “Creating an NTL Between SafeNET HSM LunaSA and a Client”

SafeNET HSM LunaSA Client Registration Steps

Before following this document, you need to have installed the latest Java version from http://java.com/en/download/manual.jsp and the corresponding LUNA Client.

The machine on which we will apply the steps below runs Microsoft Windows (the client side). The communication between the HSM and the Luna client will be over NTL.
Before applying the steps below, confirm that your HSM device has finished initialization, that the NTLS service is running, that the server certificate has been created correctly, and that you have SSH access to the device.

Continue reading “SafeNET HSM LunaSA Client Registration Steps”

Apache MaxClients/MaxRequestWorkers Calculation

In order to set optimal values for MaxClients or MaxRequestWorkers, we have to know how much memory Apache consumes per process. The script below gives you the maximum memory usage of a single process and the average memory usage across processes. The values change under server load, so stress test your server to fill up the memory before settling on numbers. This won't give you an exact, absolute value, but it does most of the job; setting something between the average and the maximum should keep you safe.
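
The post's own script is behind the link below; as a rough sketch of the same idea (assuming a prefork MPM and the apache2 process name), something like this reports the per-worker average and maximum resident memory:
# average and maximum RSS of apache2 workers, in MB (process name is an assumption)
ps -C apache2 -o rss= | awk '{ sum += $1; if ($1 > max) max = $1; n++ } END { if (n) printf "procs=%d avg=%.1f MB max=%.1f MB\n", n, sum/n/1024, max/1024 }'
MaxRequestWorkers is then roughly the RAM you can spare for Apache divided by that per-process figure.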

Continue reading “Apache MaxClients/MaxRequestWorkers Calculation”

PHP OpCache – worth using

PHP 5.5 comes with a caching engine named OPcache, which stores precompiled scripts in memory, much like APC. After the first execution of a script, the precompiled version is kept in memory, which leads to a performance boost in your PHP application.

You can find how to enable and configure OPcache on the web. Here I simply want to show you whether OPcache affects PHP execution and performance or not.
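
For reference, enabling it usually comes down to a few php.ini directives like the following; the values are illustrative defaults, not tuned settings from this post:
; minimal OPcache configuration (illustrative values)
zend_extension=opcache.so
opcache.enable=1
opcache.memory_consumption=128
opcache.max_accelerated_files=4000
opcache.revalidate_freq=60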

Continue reading “PHP OpCache – worth using”