RHEV setup

This blog post comes a little late: I did this RHEV setup at our company more than 6 months ago and it has been sitting in my drafts folder ever since. With the RHEV 3.0 Beta now released, I thought it was time to publish it.

About a year and a half ago we started looking at alternatives to our VMWare ESXi setup because we wanted to add hypervisor nodes to the 2 existing ESXi nodes, and we also wanted the ability to live migrate VMs between nodes. Around the same time Red Hat released RHEV 2.1, and being a Red Hat partner we decided to evaluate it.

We extended our existing setup with 2 Supermicro servers and a Supermicro SATA-disk-based SAN box configured as an iSCSI target, providing around 8TB of usable storage.

###Migration

To migrate our existing VMs running on VMWare we used the virt-v2v tool, which converts and moves VMWare machines to RHEV. The procedure can be scripted so you can migrate a defined set of VMs in one go; a sketch of how that can look follows below. Unfortunately these VMs need to be powered down during the conversion. I also noticed that if your vmdk folders/files are scattered around on your storage, with inconsistent folder names, virt-v2v in some cases bails out. In our case I could understand why the tool refused to migrate some machines (it was quite a mess).
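
A minimal sketch of how such a batch migration could be scripted, assuming an NFS based Export Domain; the hostnames, the export path and the VM list are placeholders, and the exact virt-v2v flags depend on the virt-v2v version you have installed.

```python
#!/usr/bin/env python
# Batch conversion of powered-down VMWare guests into a RHEV Export Domain.
# All hostnames/paths below are placeholders; check the virt-v2v man page
# for the flags your version supports.
import subprocess

ESX_URI = "esx://esx01.example.com/?no_verify=1"   # source VMWare host
EXPORT_DOMAIN = "nfs01.example.com:/rhev/export"   # NFS Export Domain
VMS_TO_MIGRATE = ["webserver01", "dbserver01", "mail01"]

for vm in VMS_TO_MIGRATE:
    cmd = ["virt-v2v",
           "-ic", ESX_URI,          # input connection: the VMWare host
           "-o", "rhev",            # output method: RHEV Export Domain
           "-os", EXPORT_DOMAIN,    # output storage (the Export Domain path)
           "--network", "rhevm",    # map the guest NICs to the rhevm bridge
           vm]
    print("Converting %s ..." % vm)
    if subprocess.call(cmd) != 0:
        print("virt-v2v failed for %s, skipping" % vm)
```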

###Hypervisors

You have 2 options to install the hypervisor nodes:

  • RHEV-H: a stripped-down RHEL with a 100MB footprint that provides just enough to function as a hypervisor node.
  • RHEL: a default RHEL install you can configure yourself.

We created a custom profile on our Kickstart server so we could easily deploy hypervisor nodes based on a standard RHEL; a simplified example is sketched below. By using a standard RHEL you can still install additional packages later on, which is not the case with a RHEV-H based install.
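
As a rough illustration, this is the kind of thing our Kickstart profile generation boils down to; the directives, hostnames and IPs below are a stripped-down example rather than our actual profile.

```python
#!/usr/bin/env python
# Generate a kickstart file per hypervisor node from a common template.
# The template is deliberately minimal and all names/addresses are made up;
# our real profile contains more partitioning, network and %post logic.
KICKSTART_TEMPLATE = """\
install
url --url=http://kickstart.example.com/rhel/x86_64
lang en_US.UTF-8
keyboard us
rootpw --iscrypted $1$replaceme$
network --device eth0 --bootproto static --ip {ip} --netmask 255.255.255.0 --gateway 192.168.1.1 --hostname {hostname}
clearpart --all --initlabel
autopart
reboot

%packages
@base
%end
"""

NODES = {
    "rhevh01.example.com": "192.168.1.11",
    "rhevh02.example.com": "192.168.1.12",
}

for hostname, ip in NODES.items():
    with open("%s.ks" % hostname, "w") as ks:
        ks.write(KICKSTART_TEMPLATE.format(hostname=hostname, ip=ip))
```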

Once a node is installed you can add it to your cluster from within the manager interface; the manager then automatically installs the necessary packages and the node becomes active in the cluster.

###Storage

After adding hypervisor nodes you need to create “Storage Domains” backed by either NFS, FC or iSCSI. Besides Storage Domains you also need to define an ISO Domain to store your installation images, and if you want to migrate VMs from VMWare or other RHEV clusters you need to create an Export Domain.
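
For the iSCSI part, a quick check like the one below (run from one of the hypervisor nodes) helps confirm that the SAN box is actually exposing its targets before you define the Storage Domain; RHEV-M handles the actual login when the domain is attached, and the portal address here is a placeholder.

```python
#!/usr/bin/env python
# Discover the iSCSI targets the SAN box advertises before creating an
# iSCSI Storage Domain in the manager. Purely a verification step; the
# portal address is a placeholder for our Supermicro iSCSI target.
import subprocess

PORTAL = "192.168.1.50:3260"

subprocess.check_call(["iscsiadm", "-m", "discovery",
                       "-t", "sendtargets", "-p", PORTAL])
```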

In each cluster one hypervisor node is automatically given the SPM (Storage Pool Manager) role. This host keeps track of where storage is assigned. As soon as it is put into maintenance or becomes unavailable, another host in the cluster takes over the SPM role.

VMs can use Preallocated disks (RAW) or Thin Provisioning (QCOW); the difference is illustrated below. For best performance, Preallocated is recommended.
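
To make the difference concrete, here is a small sketch using plain qemu-img outside of RHEV (file names and size are arbitrary): a preallocated RAW disk consumes its full size up front, while a thin-provisioned QCOW image only grows as the guest writes to it.

```python
#!/usr/bin/env python
# Compare a thin-provisioned (QCOW2) image with a fully preallocated RAW
# disk. Created with plain qemu-img/dd for illustration, not through RHEV.
import subprocess

# Thin provisioned: only metadata is written, the file grows on demand
subprocess.check_call(["qemu-img", "create", "-f", "qcow2",
                       "thin.qcow2", "10G"])

# Preallocated: write out all 10G so every block is allocated up front
subprocess.check_call(["dd", "if=/dev/zero", "of=prealloc.raw",
                       "bs=1M", "count=10240"])

# 'qemu-img info' shows virtual size vs. actual disk size for both images
for image in ("thin.qcow2", "prealloc.raw"):
    subprocess.check_call(["qemu-img", "info", image])
```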

###Conclusion

We have been running this setup for more than a year now and haven’t had any real issues with it. We actually filed 2 support cases, both of which have been resolved in newer releases of RHEV. At the moment we run around 100 VMs and, although I haven’t run any benchmarks yet, I see no real difference from our VMWare setup using FC storage. Although the product still has some drawbacks, I believe it has a solid base to build on and already offers some nice features like Live Migration, Load Balancing and Thin Provisioning.

###Cons

  • RHEV-M (manager) runs on Windows
  • RHEV-M can only be accessed via IE (will probably change in 3.1)
  • Storage part is quite confusing at first.
  • API only accessible via PowerShell
  • No live snapshots

In a few weeks I’ll probably start testing RHEV 3.0, where the manager now runs on Linux on JBoss. This makes me wonder whether JBoss clustering could be used to get RHEV-M running in an HA setup.