This is the fourth article in our series of Linux+ certification tutorials. In this installment, two Linux storage schemes, LVM and RAID, are discussed and contrasted in order to understand the conditions and circumstances under which each is most appropriately applied to a storage solution. While both schemes have been in use for many years, they differ in their approaches and utilization profiles. With the growing feature set of the Linux kernel, LVM and RAID can now be used together to provide a very flexible yet inexpensive disk storage environment.
RAID (redundant array of inexpensive disks) is a storage scheme that addresses the increasing size, availability, and cost demands placed on hard drive storage in today's workstation and server markets. With several levels supported by the current Linux kernel (RAID 0, RAID 1, RAID 5, and layered RAID) and two different implementations, software and hardware, RAID offers the Linux practitioner numerous deployment choices.
LVM (Logical Volume Manager) is software that supports flexible virtual volumes, which can be resized without the need for major reconstructive surgery on disk partitions if the filesystems they hold grow beyond initial planning limits. LVM, now in its second version (LVM2), can be used in conjunction with RAID to combine the performance and reliability of RAID with the flexibility of virtual volumes.
However, before implementing either RAID or LVM, Todd needs to be clear about which problem he is trying to solve: disk performance/reliability or partitioning flexibility. Simply because Linux supports a technology does not mean it should be deployed without understanding its uses and applicability.
Our favorite budding Linux admin, Todd, would like to increase the reliability and performance of his disk storage environment and needs to understand what the RAID levels mean and provide:
Linear: While most would not consider this RAID at all, linear mode is simply the sequential "stitching together" of disk devices of varying sizes, where data is written from disk0 to disk1 to disk2 and so on up to diskN. No data redundancy or parity protection is provided.
RAID 0: also known as data striping; RAID 0 is simply the "chunking" of data into blocks that are then striped across the disks in the array. No redundancy is provided, but write performance improves if the disks are on separate controllers and channels.
RAID 1: also known as data mirroring; RAID 1 duplicates the data on two or more disks in the array, providing data redundancy at the cost of write performance, since more than a single write is necessary to effect the duplication.
RAID 5: provides data striping with the addition of parity for recovery from a single device failure. The data being written is striped across a minimum of three disks in the array, with the parity striped along with the data.
RAID 6: RAID 5 with dual parity, supporting recovery from two simultaneous device failures.
RAID 10: in-kernel support for RAID 1 combined with RAID 0 (RAID 10 = RAID 1 + RAID 0), giving the advantages of both reliability and performance. Supported since the 2.6 kernel.
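As a rough sketch of how Todd might build software RAID arrays at these levels with the mdadm tool, the following commands illustrate the idea. The device names (/dev/sdb1, /dev/sdc1, and so on) are placeholders, not taken from Todd's actual setup; substitute the 0xFD-type partitions on your own system.

```shell
# Placeholder devices: replace /dev/sdb1 etc. with your own RAID partitions.

# RAID 1: mirror the same data across two partitions.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

# RAID 5: stripe data plus parity across a minimum of three partitions.
mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sdb2 /dev/sdc2 /dev/sdd2

# Put a filesystem on the new array and mount it like any block device.
mkfs -t ext3 /dev/md1
mount /dev/md1 /srv/data
```

Once created, the md device is used exactly like an ordinary disk partition, which is what later allows it to serve as an LVM physical volume.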
RAID Issues: Todd must understand the uses and weaknesses of RAID in a Linux environment. For a single workstation setup, RAID may appear to be overkill; however, for a shared server solution such as a database application, web serving, or email support, RAID can significantly improve both reliability and performance without the capital expense of "commercial grade" disk storage. RAID 5 is the most frequently utilized level because it enhances performance in addition to providing parity-protected data reliability.
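Part of weighing RAID's reliability benefits is knowing how to check on array health after deployment. Two commands Todd would use, shown here as a brief sketch (the array name /dev/md0 is a placeholder):

```shell
# Summary of all active md arrays, including rebuild/resync progress.
cat /proc/mdstat

# Detailed state of one array: member devices, failed disks, spare count.
mdadm --detail /dev/md0
```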
For another server environment, Todd believes that the filesystems especially /usr and /home could grow beyond his initial estimates if the Linux phenomenon in his company takes off and Windows users begin to move over to Linux. He has heard that re-partitioning device partitions could be both time-consuming and data-dangerous so he would like to implement a more flexible partitioning scheme, and LVM/LVM2 appears to be the ticket.
Basically, LVM, or logical volume support, provides Todd with the flexibility of resizing his filesystems on the fly. The process for implementing LVM is straightforward and consists of the following steps:
1. Todd needs to mark the existing partitions or RAID devices as physical volumes with the pvcreate command. The partitions should be marked with type 0x8E (Linux LVM), unless they are RAID member partitions, which are marked with type 0xFD.
2. Todd would then merge the physical volumes into volume groups with the vgcreate command.
3. Finally, he would carve logical volumes out of a volume group with the lvcreate command and build filesystems on them.
To extend an existing volume group, Todd would use the vgextend command to add new physical volumes to it. He can also view the status and details of the physical volumes (pvdisplay command), the volume groups (vgdisplay command), and the logical volumes (lvdisplay command). Todd can find the use of these commands in the man pages after he installs the LVM2 packages on his system.
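The steps above, plus the later "grow /home on the fly" scenario Todd is worried about, can be sketched end to end as follows. The device names (/dev/sdb1, /dev/sdd1), the volume group name vg00, the logical volume name lv_home, and the sizes are all illustrative placeholders, not prescribed values.

```shell
# 1. Mark the 0x8E-type partitions as physical volumes.
pvcreate /dev/sdb1 /dev/sdc1

# 2. Merge the physical volumes into one volume group.
vgcreate vg00 /dev/sdb1 /dev/sdc1

# 3. Carve a logical volume out of the group and build a filesystem on it.
lvcreate --name lv_home --size 20G vg00
mkfs -t ext3 /dev/vg00/lv_home
mount /dev/vg00/lv_home /home

# Later, when /home outgrows the estimate: add a new disk to the group,
# extend the logical volume, then grow the filesystem to match.
pvcreate /dev/sdd1
vgextend vg00 /dev/sdd1
lvextend --size +10G /dev/vg00/lv_home
resize2fs /dev/vg00/lv_home
```

The point of the last four commands is exactly the flexibility Todd is after: no repartitioning, and no copying data off and back on.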
Resources from linux.com:
LVM “how to” – a bit dated, but very solid coverage.
Todd needs to decide which of these technologies, one or both, he needs to improve his Linux installations.
Good luck, Todd, and I will see you back here very soon.