Infrastructure providers aim to deliver excellent customer service and provide a flexible and cost-efficient infrastructure, as we learned in part one of this series.
Cloud computing, then, is driven by a very simple motivation from the infrastructure provider’s perspective: “Do as much work as possible only once and automate it afterwards.”
In cloud environments, the provider simply supplies infrastructure that allows customers to do most of the work on their own through a simple interface. After the initial setup, the provider’s main task is to ensure that the whole setup has enough resources; if the provider runs out, they simply add more capacity. Thus another advantage of automation is that it facilitates flexibility.
In this article, we’ll contrast what we learned in part two about conventional, un-automated infrastructure offerings with what happens in the cloud.
The Fundamental Components of Clouds
From afar, clouds are automated virtualization and storage environments. But if you look closer, you’ll start seeing a lot more detail. So let’s break the cloud down into its fundamental components.
First and foremost, a cloud must be easy to use. Starting and stopping virtual machines (VMs) and commissioning online storage is easy for professionals, but not for the Average Joe! Users must be able to start VMs by pointing and clicking. So any cloud software must provide a way for users to do just that, but without the learning curve.
Installing a fresh operating system on a newly created virtual machine is a tedious process that is, once again, hard for non-professionals. Thus, clouds need pre-made images, so that users do not have to install operating systems on their own.
Conventional data centers are heterogeneous environments which grow to meet the organic needs of an organization. While components may have some automation tools available, there is not a consistent framework to deploy resources. Various teams such as storage, networking, backup, and security, each bring their own infrastructure, which must be integrated by hand. A cloud deployment must integrate and automate all of these components.
Customer organizations typically have their own organizational hierarchy. A cloud environment must provide an authorization scheme that is flexible enough to match that hierarchy. For instance, there may be managers who are allowed to start and stop VMs or to add administrator accounts, while interns might only be allowed to view the list of running VMs.
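At its core, such an authorization scheme maps roles to permitted actions. The following is a minimal sketch of that idea; the role names, actions, and function names here are illustrative assumptions, not part of any real cloud API (OpenStack, for example, implements this in its identity service with projects and role assignments):

```python
# Illustrative role-based authorization check. Role and action names
# ("manager", "intern", "start_vm", ...) are hypothetical examples
# matching the scenario in the text, not a real cloud's vocabulary.

# Which actions each role may perform.
PERMISSIONS = {
    "manager": {"start_vm", "stop_vm", "add_admin", "list_vms"},
    "intern": {"list_vms"},
}

def is_allowed(role: str, action: str) -> bool:
    """Return True if the given role may perform the action."""
    return action in PERMISSIONS.get(role, set())
```

With this in place, `is_allowed("manager", "stop_vm")` succeeds, while `is_allowed("intern", "add_admin")` is refused. A real cloud evaluates the same kind of check on every API call.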
When a user starts a new VM, presumably from the aforementioned easy-to-use interface, it must be set up automatically. When the user terminates it, the VM itself must be deleted, also automatically.
A bonus of implementing this particular kind of automation is that, with a little more effort (usually a component that knows which VMs are running on which servers), the cloud can also provide automatic load balancing.
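Conceptually, that component is a scheduler: it records VM-to-server placement and picks the least-loaded server for each new VM. Here is a minimal sketch under that assumption; the class and method names are invented for illustration and do not correspond to any real cloud scheduler's API:

```python
# Minimal sketch of a placement component that tracks which VMs run on
# which servers and schedules new VMs onto the least-loaded host.
# All names here (Scheduler, place, terminate) are hypothetical.

class Scheduler:
    def __init__(self, hosts):
        # Map each host to the set of VMs currently running on it.
        self.placement = {host: set() for host in hosts}

    def place(self, vm):
        """Start a VM on the host currently running the fewest VMs."""
        host = min(self.placement, key=lambda h: len(self.placement[h]))
        self.placement[host].add(vm)
        return host

    def terminate(self, vm):
        """Delete a VM, freeing capacity on whichever host ran it."""
        for vms in self.placement.values():
            vms.discard(vm)
```

Real schedulers weigh far more than VM count (RAM, CPU, affinity rules), but the principle is the same: because the cloud already automates VM creation and deletion, it can make the placement decision for the user as well.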
Online storage is an important part of the cloud. As such, it must be fully automated and easy to use (like Dropbox or Google Drive).
There are a number of cloud solutions, such as Eucalyptus, OpenQRM, OpenNebula, and of course, OpenStack. Open source implementations typically share some design concepts, which we will discuss in part four.
Cloud-like solutions have existed since the mid-1960s. Mainframes provided virtualized resources but tended to be proprietary, expensive, and difficult to manage. Since then there have been midrange and PC-architecture solutions, which also tend to be expensive and proprietary. These interim solutions also may not provide all of the resources now available through OpenStack.
The Essentials of OpenStack Administration course teaches you everything you need to know to create and manage private and public clouds with OpenStack. Download a sample chapter today!