Building a virtualized cloud infrastructure rests on four major elements that shape how a virtual infrastructure is derived from its physical counterpart: processors, memory, storage, and the network. Once the design is in place comes the life of the infrastructure, that is, day-to-day operations. Should a virtual infrastructure be administered day to day the same way as a physical environment? Pooling many resources brings other parameters into play, which we will examine here.
Monitoring processor performance
Although the concept of a virtual processor (vCPU) on a hypervisor (VMware or Microsoft Hyper-V) is close to that of a physical core, a virtual processor is noticeably less powerful than a physical processor. The observed CPU load on a virtual server can therefore be markedly higher than in the physical environment. This is nothing to worry about if the server has been sized to handle peak loads. On the other hand, keeping the same alert thresholds often no longer makes sense, and it is wise to adjust them.
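One common way to adjust alerting for an environment where short peaks are normal is to alert only on sustained load. The sketch below is a minimal, hypothetical illustration of that idea; the threshold and sample count are assumptions, not values from the text.

```python
from collections import deque

def make_sustained_alert(threshold_pct, samples_required):
    """Return a checker that alerts only when CPU usage stays above
    the threshold for several consecutive samples, so that normal
    peak loads do not trigger false alarms."""
    window = deque(maxlen=samples_required)

    def check(cpu_pct):
        window.append(cpu_pct > threshold_pct)
        # Alert only once the window is full and every sample exceeded
        # the threshold.
        return len(window) == samples_required and all(window)

    return check

# Hypothetical usage: alert at 85% sustained over 3 consecutive samples.
check = make_sustained_alert(85.0, 3)
readings = [90, 92, 70, 88, 91, 95]
alerts = [check(r) for r in readings]  # only the last reading alerts
```

A single spike above 85% is ignored; the alert fires only after three consecutive high samples, which matches the idea of tolerating designed-for peaks while still catching sustained saturation.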
Moreover, the hypervisors within a server farm are not necessarily identical. They may differ at the processor level (generation, frequency, cache) or in other technical characteristics. This matters because a virtual server can be migrated live from one hypervisor to another (vMotion in VMware, Live Migration in Microsoft Hyper-V). After such a live migration, the processor utilization of the virtual server may therefore change.
Should the alert threshold be defined taking these context changes into account?
The hypervisor's scheduler arbitrates each virtual server's access to the physical processors. A virtual server may not get access immediately: the more virtual servers there are, the more contention there is, and the resulting waiting time (latency) is important to consider. In addition, performance monitoring is not limited to identifying how power is used; it must also detect whether power is actually available. In extreme cases, a virtual server can show a low CPU load yet still perform poorly because it is waiting for processor time. This processor latency indicator, measured for each virtual server, is a good gauge of how well the available power is being used. Do not assume that adding virtual processors to a server increases its power; the reality is more complex, and the effect is often the opposite of what is expected. This problem is usually addressed by analyzing the total number of virtual processors running on each hypervisor.
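In VMware environments this latency indicator corresponds to the CPU ready-time counter, which vSphere reports as milliseconds accumulated over a sampling interval (20 seconds for real-time statistics). A common rule of thumb converts it into a percentage per vCPU, as in this small sketch:

```python
def cpu_ready_percent(ready_ms, interval_s=20, vcpus=1):
    """Convert a CPU ready-time summation (milliseconds accumulated
    over one sampling interval) into a percentage per vCPU.
    20 s is the real-time sampling interval in vSphere."""
    return ready_ms / (interval_s * 1000 * vcpus) * 100

# A VM with 2 vCPUs accumulating 2000 ms of ready time over 20 s
# is waiting for the scheduler 5% of the time per vCPU.
pct = cpu_ready_percent(2000, interval_s=20, vcpus=2)  # 5.0
```

The useful point is that a VM can show low CPU *usage* while its ready percentage is high: the guest is not busy, it is queued. Alerting on ready time rather than raw load catches exactly this case.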
Monitoring memory utilization
Since memory is shared between the virtual servers running on the same hypervisor, it is important to distinguish used from unused memory. Used memory is generally allocated statically to a virtual server, while unused memory is pooled. For this reason it is possible to run virtual servers whose total configured memory exceeds the physical memory capacity of the hypervisor. Over-provisioning the memory of virtual servers on a hypervisor is not trivial: it is a form of risk taking, a bet that all servers will not use their full memory at the same time. It is not a matter of "this is right" or "this is wrong"; much depends on the design of the virtual servers. Monitoring, however, will keep it from becoming a source of performance degradation.
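The degree of this bet can be expressed as a simple overcommit ratio, total configured VM memory over host physical memory. The sketch below is an illustration with made-up sizes:

```python
def memory_overcommit_ratio(vm_mem_mb, host_mem_mb):
    """Ratio of total configured VM memory to host physical memory.
    A value above 1.0 means the hypervisor is over-provisioned:
    it is betting the VMs will not all use their memory at once."""
    return sum(vm_mem_mb) / host_mem_mb

# Hypothetical host with 32 GB of RAM carrying four VMs (8+8+16+4 GB):
ratio = memory_overcommit_ratio([8192, 8192, 16384, 4096], 32768)
# 36864 / 32768 = 1.125 -> over-provisioned
```

Tracking this ratio alongside actual consumption is what turns over-provisioning from blind risk taking into a monitored trade-off.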
A first important consequence is that over-provisioning prevents starting all virtual servers simultaneously. Indeed, a hypervisor verifies that it can back the entire memory of a virtual server before starting it. It is possible to stagger start-ups with a timer so that each server releases its unused memory in a specific order.
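That admission check can be modeled very simply. The following sketch is a deliberate simplification (real hypervisors account for overheads and reclaimable memory), but it shows why the last VMs in an over-provisioned start-up must wait:

```python
def start_order(vms, host_free_mb):
    """Simplified model of the start-up admission check: a hypervisor
    only starts a VM if it can back the VM's full configured memory.
    vms is a list of (name, configured_mb) pairs."""
    started, waiting = [], []
    free = host_free_mb
    for name, mem in vms:
        if mem <= free:
            started.append(name)
            free -= mem
        else:
            # Not enough backing memory yet: this VM must wait until
            # others release unused memory.
            waiting.append(name)
    return started, waiting

# Hypothetical example: 32 GB free, three VMs of 16, 8 and 16 GB.
started, waiting = start_order(
    [("db", 16384), ("web", 8192), ("batch", 16384)], 32768
)
# started == ["db", "web"], waiting == ["batch"]
```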
Slower memory access is the second consequence. VMware has implemented a memory reclamation mechanism (called ballooning) inside the virtual servers. It comes into play when the hypervisor must provide memory to a virtual server while the available capacity is insufficient: the hypervisor forces other virtual servers to release memory, and the freed memory is then redistributed to the servers that need it. This mechanism is not immediate; there is some latency between the memory allocation request and its fulfillment. This is why it slows down memory access and undermines server performance.
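The redistribution step can be sketched as follows. This is a loose, hypothetical model of the accounting only, not of the balloon driver itself; real reclamation is driven by per-VM shares and reservations:

```python
def balloon_reclaim(requested_mb, donors):
    """Simplified sketch of ballooning accounting: the hypervisor asks
    the balloon driver in each donor VM to release unused memory until
    the request is satisfied. donors maps VM name -> reclaimable MB."""
    reclaimed = {}
    remaining = requested_mb
    for name, avail in donors.items():
        if remaining <= 0:
            break
        take = min(avail, remaining)
        reclaimed[name] = take
        remaining -= take
    # remaining > 0 means ballooning alone could not satisfy the request.
    return reclaimed, remaining

# Hypothetical request for 3000 MB against two donor VMs:
taken, shortfall = balloon_reclaim(3000, {"web": 2048, "db": 2048})
# taken == {"web": 2048, "db": 952}, shortfall == 0
```

Each call to the balloon driver takes time inside the guest, which is exactly the latency the text describes between the allocation request and its fulfillment.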
Another consequence of ballooning is that it can completely change a server's apparent memory behavior. Linux systems are known to use available memory as cache and to keep it cached even after processes release it (memory that does not appear to be free). The memory occupancy rate of such a server is normally high and relatively stable. With ballooning, this rate will decrease and will no longer be stable (frequent releases and reallocations).
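This is why a naive reading of Linux memory figures is misleading inside a virtual server. The sketch below, using field names from /proc/meminfo with made-up values, contrasts the apparent free memory with the memory that is effectively reclaimable:

```python
def apparent_vs_effective_free(meminfo):
    """On Linux, page-cache memory looks 'used' even though it can be
    reclaimed. meminfo maps /proc/meminfo field names to kB values."""
    naive_free = meminfo["MemFree"]
    effective_free = meminfo["MemFree"] + meminfo["Buffers"] + meminfo["Cached"]
    return naive_free, effective_free

# Hypothetical 4 GB Linux guest whose cache holds 1.5 GB:
naive, effective = apparent_vs_effective_free(
    {"MemTotal": 4_000_000, "MemFree": 200_000,
     "Buffers": 100_000, "Cached": 1_500_000}
)
# naive: only 200000 kB look free, yet 1800000 kB are effectively available
```

Ballooning drains exactly this cached portion, which is why the occupancy curve of a ballooned Linux guest drops and oscillates instead of sitting at its usual high, stable level.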
There are also cases where even used memory ends up being pooled between virtual servers. This occurs when ballooning is not enough. Performance is then severely degraded, which makes it a situation to be avoided.