If you've ever looked in your /etc/fstab file, you may have seen an entry that looks like
UUID=62fa5eac-3df4-448d-a576-916dd5b432f2 instead of a more familiar disk drive designation, such as /dev/hda1. Such entries use universally unique identifiers (UUIDs). You can use these 128-bit numbers to make hard disk management easier.
Suppose that you have a system that contains two hard drives, traditionally known as /dev/hda and /dev/hdb. /dev/hda contains the root partition and swap, while /dev/hdb1 contains your home directory and encompasses the entire drive. Now say that you want to add another hard drive to the system, and some constraint forces you to add it between /dev/hda and /dev/hdb, thereby moving /dev/hdb to /dev/hdc. Anyone who has ever tried something like this knows the problem: the mount command checks /etc/fstab and tries to mount the new drive as /home. In this example, you would have to boot into single-user mode as root and edit /etc/fstab before making the change in drive order active; otherwise the system would likely return an error when you tried to log in or, in the worst case for a root drive, throw a kernel panic. A situation like this would be a nuisance with two drives, but on a machine with several hard drives holding directories such as /var, /opt, /home, /boot, /usr, and /usr/local, the problem becomes far more complex.
However, if the system administrator opts to use UUIDs, the problem is virtually nonexistent, because the /dev/sd* or /dev/hd* nomenclature almost goes away. Each hard drive is instead given a UUID, stored within the filesystem, and referenced by /etc/fstab, giving the sysadmin the freedom to put any device anywhere within the BIOS chain without affecting where the device is mounted within the Unix filesystem tree.
Under the old system, using JFS, a typical /etc/fstab entry would look something like this:
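Here is an illustrative entry, assuming a JFS root filesystem on /dev/sda1; your device, options, and mount points will differ:

```
/dev/sda1    /    jfs    noatime    0 1
```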
Under the new system, the same entry would look something like this:
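An illustrative entry, assuming the same JFS root filesystem:

```
UUID=1c0653cd-e897-41af-bd30-55f3a195ff33    /    jfs    noatime    0 1
```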
The only difference is the first field of the entry. Instead of /dev/sda1, the UUID 1c0653cd-e897-41af-bd30-55f3a195ff33 now designates the drive. Because of this, it wouldn't matter if the drive were /dev/sdi1; as long as you made the appropriate modifications to the bootloader config file, it would still mount as root and function as expected.
To get started using UUIDs, look at your /etc/fstab file. If there is a line similar to the above UUID example, then you are probably already using UUIDs to mount your drives. To find out for sure, run the command
cat /proc/cmdline. If the response contains a UUID, then your system's bootloader passed the command to mount the root filesystem by a UUID, and you are in fact using them. Unfortunately, the mount command itself does not give such information yet -- at least the one on my system doesn't. It and the /etc/mtab file both show the hard drives using the /dev/[hs]d* system. Because of this limitation, consider keeping notes on where each drive and partition is mounted for your own use, though there are commands that show this information, as we'll see in a moment.
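On a system that boots by UUID, cat /proc/cmdline might return something like this (an illustrative line, reusing the example UUID from above; the other kernel parameters will vary):

```
root=UUID=1c0653cd-e897-41af-bd30-55f3a195ff33 ro quiet
```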
If you are not using UUIDs and you would like to, then I suggest that you dig out a hard drive that you can use for testing purposes; do not tinker with a live filesystem, as you may inadvertently damage your system. You also need to know how to relate BIOS drive positions to device nodes, and how to partition and create filesystems. If you do not know what I am talking about, then consult with someone more knowledgeable or brush up with a tutorial.
Once you're ready to proceed, mount the hard drive in your system in such a way that it will not disrupt your existing hard drive nodes, then boot the computer and create a partitioning scheme and filesystem on the new drive. It does not matter what filesystem you use; all native Linux filesystems should support UUIDs. I have personally checked ReiserFS, ext2/3, JFS, and XFS, and each has support for UUIDs. FAT and NTFS, by the way, may not have good support for UUIDs;
blkid shows a UUID for them, but Microsoft's technical references for NTFS and FAT don't even mention UUIDs. Now, in addition to the filesystem, create a mount point for the new drive. Once these tasks are complete, run the following command:
sudo vol_id /dev/your_hard_drive. It should come back with output similar to the following:
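(The exact output depends on your udev version and filesystem; the type and UUID values here are illustrative.)

```
ID_FS_USAGE=filesystem
ID_FS_TYPE=ext3
ID_FS_VERSION=1.0
ID_FS_UUID=9b2162e5-bd65-4a88-a4a0-f08f38efa0a8
ID_FS_LABEL=
ID_FS_LABEL_SAFE=
```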
The only information we are currently interested in is the ID_FS_UUID. If the vol_id command did not give you a UUID, then you must generate one and assign it to the drive.
uuid is a command that generates UUIDs according to DCE 1.1 methods 1, 3, 4, and 5. Method 1, which is the default, uses a combination of the system clock time and the MAC address of an Ethernet card on your system. MAC addresses are supposed to be unique, but for some people they raise privacy or security concerns, as they identify part of the machine they were generated on. Method 3 uses a name-based MD5 hash, method 4 is random-number-based, and method 5 uses a name-based SHA-1 hash. Methods 3 and 5 require a namespace, such as a URL, in order to function. For most people, methods 1 and 4 suffice.
If you want to generate a random UUID without having to remember which method is random, then use the command uuidgen. It defaults to random, but has the option to generate a UUID based on time and MAC address.
Once you have a UUID, open /etc/fstab and copy that information to a new line. Use the lines already in /etc/fstab as an example, copying the filesystem options to the new line, in a format similar to this:
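An illustrative line, assuming an ext3 filesystem, a mount point of /mnt/newdrive, and a made-up UUID:

```
UUID=9b2162e5-bd65-4a88-a4a0-f08f38efa0a8    /mnt/newdrive    ext3    defaults    0 2
```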
Make sure to include
UUID= at the beginning of the line. Once you are finished, save your work and exit the text editor, and run a command like
sudo mount -U /your/mount/point. If all was successful, the command should exit without error. Now run the
mount command without parameters and see what the computer shows as being mounted. If your drive shows up in the list, then you were successful!
If you want to see what good your new UUID will do for you, shut down the computer, move the hard drive connection to a new location that will result in a new device node name, and reboot the computer. The new drive should mount in the same location without error.
reiserfstune -u UUID /dev/node,
tune2fs -U UUID /dev/node,
jfs_tune -U UUID /dev/node, and
xfs_admin -U UUID /dev/node can all change the UUID on their respective filesystems. If you do this, make sure that /etc/fstab and, where applicable, your bootloader config file match the new UUID, lest you make the new filesystem unmountable or, in the case of the root partition, the system unbootable.
If you want to see the /dev/[hs]d* names along with the UUIDs, you can run
sudo blkid. This command shows the devices connected to the system along with their respective UUIDs, and requires root access to work properly. Without root access, it won't error out, but it won't show any information either. Just to be clear: this is not a replacement for mount. It shows a device as long as it is connected to the system, mounted or not.
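The output looks something like this; the device names, filesystem types, and UUIDs below are examples only:

```
/dev/sda1: UUID="1c0653cd-e897-41af-bd30-55f3a195ff33" TYPE="jfs"
/dev/sda2: TYPE="swap" UUID="a3d5b442-6c00-4ea0-a092-41281c9bf410"
/dev/sdb1: UUID="9b2162e5-bd65-4a88-a4a0-f08f38efa0a8" TYPE="ext3"
```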
sudo findfs UUID=1c0653cd-e897-41af-bd30-55f3a195ff33 returns the device node for the device that matches that UUID. This is handy if you didn't keep your own notes on which device node corresponds to each UUID in /etc/fstab.
All of this information on what UUIDs are and how to use them should make administering your disks easier. Just remember to work carefully so that you don't end up with a big headache.