
Managing Live and Offline Migrations with Linux's KVM

One of the best features of virtualization is being able to move guests between virtual machine hosts as needed. Using KVM, you can do that with a mouse click (after a bit of setup). Today we'll learn about doing live and offline migrations using KVM on Linux.

Prerequisites

As we all know, KVM is the Linux kernel's built-in virtualization solution. Using KVM, you can run virtual machines on top of Linux without having to modify the guest operating systems. You can run Linux, Windows, and other operating systems on top of Linux using KVM.

One of KVM's most useful features is migrating guest operating systems to different hosts. The type of migration we're going to do is not permanent, which makes it perfect for short-term chores like hardware maintenance or emergency load balancing. You can keep your guest running while it migrates, or accept a short downtime with an offline migration.

You must already have KVM and the graphical Virtual Machine Manager installed and working, and know how to install guest operating systems. Small distros like Jeoss, SliTaz, and Arch are great for testing as KVM guests because they install quickly. If you need installation help, try my Crash course: Virtualization with KVM on Ubuntu Server, which applies to other distros as well.

You'll need a total of three PCs: one for your KVM server, and two with some NFS (network file system) shares for testing, because migrating KVM guests requires shared network storage accessible to at least two different hosts, and NFS is fast to set up for testing. Each host must have the same processor architecture, either Intel or AMD. You can install both 32- and 64-bit guests on 64-bit hosts, but only 32-bit guests on 32-bit hosts. All hosts must be in the same subnet.
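If you need a quick NFS share for testing, one common way to set it up on the NFS server is an entry in /etc/exports. This is a sketch only; the /export/kvm directory and the 192.168.1.0/24 subnet are placeholders for your own paths and network:

```
/export/kvm  192.168.1.0/24(rw,sync,no_root_squash)
```

After editing the file, run exportfs -ra to re-export the shares. The no_root_squash option is often needed because libvirt may access the disk images as root.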

KVM can access NFS shares, Fibre Channel, iSCSI, GFS2, and SCSI RDMA protocol (SRP), so if you already have any of these to play with you may use them instead. The steps for some of them are a little different, so to learn about these consult openSUSE Virtualization with KVM or the Guide to Virtualization on Red Hat Enterprise Linux 6.

NFS Storage Pools

A storage pool is a directory, hard disk, iSCSI target, LVM volume group, or netfs (NFS or GlusterFS) share. Storage pools are sliced up into storage volumes for your guests to use. So let's create an NFS-based storage pool. Fire up Virtual Machine Manager, click Edit > Details, and open the Storage tab. On the bottom left you'll see a button with a green cross, the Add Pool button. Click it to open the Add a New Storage Pool window (figure 1). Type in whatever name you want, select netfs: Network Exported Directory, and then click the Forward button.

Figure 1: Creating a new NFS-based storage pool.

In Step 2 (figure 2) the Target Path is the local mountpoint on the KVM server for your NFS share. The default local storage directory is /var/lib/libvirt/images/, but you may use whatever you want. The Format is NFS, Host Name is the hostname of the server the NFS share is on, and Source Path is the NFS directory you want to use. When you click Finish it will try to mount the NFS share. You'll get either a success or error message, so you'll know if it worked or not. The error messages are detailed and usually give enough information to figure out where you went wrong.
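Behind the scenes, Virtual Machine Manager hands libvirt a netfs pool definition. As a rough sketch (the pool name, host name, and paths here are made up, so adjust them to your setup), the XML looks something like this, and could also be loaded by hand with virsh pool-define:

```xml
<pool type='netfs'>
  <name>nfs-pool</name>
  <source>
    <host name='nfs-server.example.com'/>
    <dir path='/export/kvm'/>
    <format type='nfs'/>
  </source>
  <target>
    <!-- Local mountpoint on the KVM server, i.e. the Target Path field -->
    <path>/var/lib/libvirt/images/nfs-pool</path>
  </target>
</pool>
```

After defining the pool you would start it with virsh pool-start and, optionally, virsh pool-autostart.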

When your new storage pool is created you must create a storage volume in it. Go back to the Storage tab, click on your new storage pool in the left pane to select it, and click the New Volume button at the bottom of the right pane (figure 3). Enter whatever name you want and then select an image format.

Figure 2: Configuring the NFS share mountpoint and shared directory.

qcow2 is the native QEMU copy-on-write image that supports snapshots, AES encryption, zlib compression, and sparse files. qcow2 is the one I use the most, but you have several others to choose from.

  • qcow— the old QEMU copy-on-write format, superseded by qcow2.
  • vmdk— VMware's image format; use this for any VMware images you have lying around.
  • vdi— VirtualBox's format, for compatibility with VirtualBox images.
  • cow— an even older copy-on-write format. It still works, but I can't think of a reason to use it.
  • cloop— compressed loop format, for reading compressed ISO images.
  • raw— a plain binary image, and the most portable format.

Make the volume large enough to accommodate the guest operating system you want to install on it. When you enter the size you want for your image in the Max Capacity field, the Allocation field determines whether you get a sparse file or a static allocation. If you make both values the same, the maximum size is reserved as soon as the volume is created. If the Allocation value is smaller, it creates a sparse file that grows as data are written, up to the maximum size. I prefer a static allocation, because if the disk fills up and a sparse file can no longer grow, the guest can suffer data corruption.
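The sparse-versus-static distinction is easy to see at the filesystem level with nothing KVM-specific at all. This little coreutils demonstration creates one sparse file and one fully written file, then compares their apparent size with the blocks actually allocated on disk:

```shell
# A 100 MiB sparse file: the size is recorded, but no data blocks are written.
truncate -s 100M sparse.img

# A fully allocated 4 MiB file, created by actually writing zeros.
dd if=/dev/zero of=static.img bs=1M count=4 status=none

# Compare apparent size against blocks actually allocated.
stat -c '%n: %s bytes, %b blocks' sparse.img
stat -c '%n: %s bytes, %b blocks' static.img
```

The sparse file reports 100 MiB but occupies almost no blocks, while the static file's blocks match its size. A storage volume whose Allocation is smaller than its Max Capacity behaves analogously.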

Figure 3: Creating a new storage volume.

Click Finish, and you will see your new volume on the Storage tab. Then repeat the whole process to create a storage pool and storage volume on your second NFS server.
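The same volume can be described in libvirt's volume XML and created from the command line with virsh vol-create nfs-pool volume.xml. The name and sizes below are only illustrative; note how setting allocation equal to capacity requests a static allocation, while a smaller allocation yields a sparse file:

```xml
<volume>
  <name>guest1.qcow2</name>
  <capacity unit='G'>8</capacity>
  <!-- Equal to capacity: reserve all the space up front (static allocation) -->
  <allocation unit='G'>8</allocation>
  <target>
    <format type='qcow2'/>
  </target>
</volume>
```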

Install New Guests

Now you need to install a new guest operating system on each of your new volumes. Get them up and running, and then you can practice both live and offline migrations. To migrate a guest, go back to the main Virtual Machine Manager console where it shows all of your guests.

Right-click on the guest you want to migrate; it must be running or paused, not stopped. This opens the Migration window, which shows the name of the guest's current host and has a dropdown menu for selecting the destination host. There is a checkbox for an offline migration; leave it unchecked for a live migration.

In either type of migration only the guest's in-memory state is moved; its image on disk stays where it is. In a live migration the memory contents are copied over to the new host, including any changes made while the copy is in progress, and then, after the default 10 milliseconds of inactivity, the old guest is shut down and the new one started. If the guest is very busy, a live migration may take a long time to complete. In an offline migration the guest is shut down, moved, and then restarted, which is usually faster.
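The same migrations can be triggered from the command line with virsh. This is a sketch only: the guest name guest1 and host desthost are made up, and both hosts need to reach each other over SSH (or another libvirt transport) with the shared storage mounted in the same place:

```
# Live-migrate a running guest to the destination host:
virsh migrate --live guest1 qemu+ssh://desthost/system

# Offline migration of the guest's definition:
virsh migrate --offline --persistent guest1 qemu+ssh://desthost/system
```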

This type of migration is not permanent, but lasts only until the guest is shut down on its new host. For a permanent move, look at cloning.



