Planning Best Virtualization Strategies for Your Enterprise


So far this month we’ve covered the basics of virtualization, the differences between container and hypervisor virtualization, and the basics of cloud computing. In this final installment, we’ll cover some of the things you need to consider when planning your organization’s virtualization strategy.

Planning for Virtualization

Before settling on a strategy or choosing technologies, you have to be clear about what you’re hoping to accomplish. Is your organization looking to consolidate hardware, reduce energy costs, or find ways to be more flexible about deploying workloads? Of course these are not mutually exclusive, but you need a well-defined set of goals before embarking on any virtualization project.

Why are you looking at virtualization, setting aside the fact that everyone else is doing it? Are you trying to consolidate workloads because your existing server capacity is underutilized? Or is it more about conserving energy and reducing cooling and power costs? Or the goal may be high availability, with reduced energy bills as a welcome secondary win rather than the main objective.

Odds are you have virtualization deployed already, but may need to expand, take on new projects, or consolidate further. Or perhaps you’re ready to start working with cloud technologies. The first step should be defining your goals, and then coming up with a detailed requirements document that outlines exactly what you need. This includes an inventory and description of your current environment. Identify the hardware that will be freed up if you’re consolidating, and determine whether it will fit into the new infrastructure or should be phased out.

If you’re going to be consolidating existing workloads, you need to start getting baselines of those workloads if you don’t already have them. At least 30 days if possible, and running at peak demand. Then start sketching out the hardware that you’ll need, the storage requirements, management requirements and the possible solutions.

As mentioned in the previous article about containers and hypervisors, some workloads work better with hypervisor-based technologies like VMware, Xen, Parallels Bare Metal and KVM. If you’re working with a mixed environment, like Windows and Linux on the same servers, then hypervisors are the way to go. Many organizations are best served by hypervisor-based technologies because their mix of operating systems requires the flexibility to deploy more than one OS per server.

The flip side is that if you’re an all-Linux shop, then container-based virtualization may be just what the doctor ordered. If so, consider Parallels Virtuozzo Containers or OpenVZ, depending on your budget and needs for management tools. OpenVZ is the open source foundation for Virtuozzo, and lacks the management tools you get with Virtuozzo but is free to deploy.

Virtual is Not Like Physical

One of the mistakes organizations make is assuming that today’s policies and procedures will apply equally well to a virtualized environment. Often they won’t.

First of all, decide how purchasing and acquisition of new hardware will be handled. In many organizations, each department or business unit has its own IT budget, which makes it easy for each cost center to acquire new resources. What happens if the entire IT infrastructure is virtualized? It’s no longer about billing each department for the hardware it purchases; virtual machines for several departments may sit on a single server or group of servers. Early on, it’s important to decide how those costs will be split.
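One simple model, and it is only one of many, is to split a host’s cost in proportion to the resources allocated to each department’s virtual machines. A minimal sketch with invented numbers, weighting by allocated RAM alone:

    # chargeback.py -- split a shared host's monthly cost by allocated RAM.
    # Illustrative only: the cost, the allocations, and the RAM-only
    # weighting are assumptions; real models often weight CPU and storage too.
    HOST_MONTHLY_COST = 1200.00  # hardware amortization + power + cooling

    # GB of RAM allocated to each department's VMs on this host (hypothetical)
    allocations = {"finance": 16, "engineering": 48, "marketing": 8}

    total = sum(allocations.values())
    for dept, ram in sorted(allocations.items()):
        share = HOST_MONTHLY_COST * ram / total
        print(f"{dept:12s} {ram:3d} GB -> ${share:,.2f}/month")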

Another consideration is the procedure for deploying new virtual machines. With physical resources, procurement processes are relatively straightforward. It’s easy to deploy new virtual machines and workloads, which means it’s tempting to just fire up another virtual machine without necessarily considering all options. The machines may not require physical hardware, but they still need to be managed. The entire lifecycle of a machine needs to be taken into consideration. Is the virtual machine going to require the same amount of resources for a set period of time, or is it possible that you’re going to outgrow the system that it’s hosted on? Don’t abandon longstanding policies for acquiring new “servers” just because it’s suddenly easier to deploy them.
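To keep that lifecycle visible, some shops record an owner and a review-by date for every virtual machine, so nothing gets deployed without an end-of-life plan. A minimal sketch of such a record; the fields here are assumptions, not a standard:

    # vm_registry.py -- minimal per-VM lifecycle record (illustrative).
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class VMRecord:
        name: str
        owner: str        # department or person accountable for the VM
        purpose: str
        provisioned: date
        review_by: date   # date by which the VM must be re-justified or retired

    def overdue(records, today=None):
        """Return VMs past their review date -- candidates for reclamation."""
        today = today or date.today()
        return [r for r in records if r.review_by < today]

    registry = [
        VMRecord("web01", "marketing", "campaign site",
                 date(2024, 1, 5), date(2024, 7, 1)),
    ]
    print(overdue(registry))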

Virtualization also changes disaster recovery plans. On the one hand, this can be a good thing. If you are storing your virtual machines and appliances on a SAN, this can make recovering from a dead server trivial. But it also means that when a server goes down it’s not just taking one workload down – it might be taking two or twenty with it.

Network and Capacity Planning

If you’re consolidating workloads, you need to also consider the network and storage needs for your virtual machines. Remember that you’re going to be combining not only the memory and CPU requirements for each virtual server, but also the network and storage requirements.

For workloads with high network traffic, or spiky traffic, you need to take special measures. It may be necessary to equip servers with multiple network adapters to ensure that the link isn’t saturated when one virtual machine is being hammered.
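As a back-of-the-envelope check, sum the peak demands from your baselines and compare them against a candidate host. A sketch in which every figure is invented for illustration:

    # sizing.py -- rough consolidation check: do these workloads fit one host?
    # All capacities and peak figures are invented for illustration.
    import math

    # Peak demand per workload from the 30-day baselines:
    # (name, CPU cores, RAM GB, network Mbps, storage GB)
    workloads = [
        ("mail", 2, 8, 200, 300),
        ("web",  4, 6, 600, 100),
        ("db",   4, 16, 150, 500),
    ]

    HOST_CORES, HOST_RAM_GB, NIC_MBPS = 16, 48, 1000  # one GbE link

    cores = sum(w[1] for w in workloads)
    ram = sum(w[2] for w in workloads)
    net = sum(w[3] for w in workloads)
    print(f"cores: {cores}/{HOST_CORES}  ram: {ram}/{HOST_RAM_GB} GB  net: {net} Mbps")

    # How many GbE adapters would the combined peak traffic need, leaving
    # ~30% headroom so one hammered VM can't saturate a shared link?
    nics = math.ceil(net / (NIC_MBPS * 0.7))
    print(f"suggested network adapters: {nics}")

Summing raw peaks is conservative, since workloads rarely peak simultaneously, but it’s a safe starting point.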

Is the workload going to have relatively static storage requirements, or will it continue to grow? If you’re running workloads with minimal storage requirements, relying on local storage should be sufficient. But if you have heavy storage requirements, you might need to set up virtual storage as well.
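A naive linear projection from your baseline data can flag workloads that will outgrow local disks. A sketch, with assumed figures:

    # storage_forecast.py -- naive linear storage projection (illustrative).
    # The usage and growth figures are assumptions; use measured trends.
    DISK_GB = 500          # local storage available to the VM (hypothetical)
    used_gb = 220          # current usage from the baseline
    growth_gb_month = 18   # average monthly growth observed in the baseline

    months = int((DISK_GB - used_gb) / growth_gb_month)
    print(f"local disk lasts about {months} more months at current growth")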

Forecast for Clouds?

Further complicating things is the decision whether to add cloud computing to the mix. Do you have workloads that can be deployed on services like Amazon EC2? Or is your organization considering taking up Software as a Service (SaaS) offerings?

Cloud services are still in their infancy, and few organizations look poised to go whole-hog into integrating off-site cloud services into their IT infrastructure. Which is probably as it should be. However, it’s probably a good idea to start thinking about how cloud services may integrate with your organization in the next five years.

Rather than relying on outside services, it might be a good idea to start some test deployments on Eucalyptus or other private cloud solutions.
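Because Eucalyptus exposes an EC2-compatible API, the same client tooling can drive a private cloud. A sketch using the boto library, in which the endpoint, credentials, and image ID are all placeholders for your own deployment:

    # euca_test.py -- launch a test instance on a private Eucalyptus cloud.
    # Sketch only: the endpoint, port, keys, and emi-XXXXXXXX image ID are
    # placeholders; substitute values from your own cloud controller.
    import boto
    from boto.ec2.regioninfo import RegionInfo

    region = RegionInfo(name="eucalyptus", endpoint="ecc.example.com")
    conn = boto.connect_ec2(
        aws_access_key_id="YOUR_ACCESS_KEY",
        aws_secret_access_key="YOUR_SECRET_KEY",
        is_secure=False,
        region=region,
        port=8773,
        path="/services/Eucalyptus",
    )

    # Start one small test instance from a registered Eucalyptus image (EMI)
    reservation = conn.run_instances("emi-XXXXXXXX", instance_type="m1.small")
    print(reservation.instances[0].id)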

Devil is in the Details

It’s impossible to offer a generic template for enterprise virtualization strategies. The variety of workloads and the possible combinations of software, hardware, and technologies means that there are no shortcuts. If your organization hasn’t already worked virtualization into its IT planning, the best thing to do is start with a test pilot program and start cautiously in moving workloads from physical servers onto virtualized servers or into the cloud.

Virtualization can be a massive money saver and enhance IT services tremendously, as long as it’s deployed thoughtfully and with a good roadmap.