
How to Manage Btrfs Storage Pools, Subvolumes And Snapshots on Linux (part 1)

Before we dive into using Btrfs, how is it pronounced? Lots of ways, like Bee Tree Eff Ess and Bee Tee Arr Eff Ess. That's too many syllables, so I favor Butter Eff Ess. It sounds nice, and everyone likes butter. In this two-part series we'll build a three-node Btrfs storage pool and learn all about managing snapshots, rollbacks, and subvolumes. Part 1 covers installing Btrfs, creating a simple test lab, creating a storage volume, and what commands to use to see what's in it. In Part 2 we'll create and manage subvolumes, snapshots and rollbacks.

What's the Big Deal about Btrfs?

Btrfs is the next-generation Linux filesystem all cram-full of advanced features designed for maximum data protection and massive scalability such as copy-on-write, storage pools, checksums, support for 16, count 'em, 16-exabyte filesystems, journaling, online grow and shrink, and space-efficient live snapshots. If you're accustomed to using LVM and RAID to manage your data storage, Btrfs can replace these.
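
As a taste of that online resizing, once the filesystem we build below is mounted at /btrfs you can grow and shrink it on the fly. This is just a sketch of the resize subcommand, not part of the lab setup, and the sizes are arbitrary:

# btrfs filesystem resize -2g /btrfs
# btrfs filesystem resize max /btrfs

The first command shrinks the mounted filesystem by two gigabytes while it stays online, and the second grows it back to fill its device.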

A snapshot is a copy of a Btrfs subvolume at a particular point in time. It's many times faster than making a traditional backup, and incurs no downtime. You can make snapshots of a filesystem whenever you want, and then quickly roll back to any of them.
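
As a little preview of part 2, making a snapshot is a single command. This example is hypothetical -- it assumes a Btrfs filesystem mounted at /btrfs and picks an arbitrary snapshot name:

# btrfs subvolume snapshot /btrfs /btrfs/snap1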

Prerequisites

To use Btrfs you need a recent version of Debian, Arch Linux, Ubuntu, openSUSE, SUSE Linux Enterprise Server, Red Hat Enterprise Linux, or Fedora Linux, and an extra empty hard disk to play with, or ~50GB of free space on a hard disk. Btrfs is already supported in the kernels of these distros (run cat /proc/filesystems to check; if Btrfs is built as a module it won't show up there until the module is loaded), so you only need to install the user-space tools btrfs-progs, which is packaged as btrfs-tools on Debian/Ubuntu/Mint/etc.
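
Checking for support and installing the tools looks something like this; the module check is handy on distros that build Btrfs as a module rather than into the kernel, and the two package commands are examples for the Debian and Fedora families respectively:

# grep btrfs /proc/filesystems
# ls /lib/modules/$(uname -r)/kernel/fs/btrfs/
# apt-get install btrfs-tools
# yum install btrfs-progs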

You'll see a lot of warnings in the Btrfs documentation, and even in the output of some commands, that it is not ready for production systems and that you should not trust it with anything important. However, the good people at SUSE claim the opposite, and have supported Btrfs on production systems since SUSE Linux Enterprise Server 11 SP2. I use it on my openSUSE and Ubuntu systems without drama. But, as they say, your mileage may vary and you should do your own testing. Meanwhile, it's free to test and learn, so let's get cracking.

Creating a Btrfs Storage Pool

First, create three partitions of equal size for a simple testing environment. GParted is a great graphical app for this, and it partitions and creates the filesystem at the same time (figure 1). The Btrfs documentation recommends a minimum partition size of one gigabyte. In the examples for this tutorial they are 12 gigabytes each. I'm using a blank 150GB SATA hard disk for this article (/dev/sdd) because it makes me feel a little safer using a separate hard drive for filesystem testing. You can use any hard disk on your PC that has enough free space to play with, and 50GB gives you plenty of room for mad Btrfs experiments. Do be careful not to destroy stuff you want to keep, like your root filesystem and data.

Figure 1: Creating three Btrfs partitions with GParted.
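
If you prefer the command line, a rough equivalent with parted looks like this. It's a sketch that assumes your blank test disk is /dev/sdd; mkfs.btrfs will create the actual filesystems in the next step:

# parted /dev/sdd mklabel gpt
# parted /dev/sdd mkpart primary 1MiB 12GiB
# parted /dev/sdd mkpart primary 12GiB 24GiB
# parted /dev/sdd mkpart primary 24GiB 36GiB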

Now that we have three Btrfs partitions to play with, we will combine them into a Btrfs storage pool with the mkfs.btrfs command:

# mkfs.btrfs -f -L testbtrfs  /dev/sdd1 /dev/sdd2 /dev/sdd3
WARNING! - Btrfs v0.20-rc1 IS EXPERIMENTAL
WARNING! - see http://btrfs.wiki.kernel.org before using
adding device /dev/sdd2 id 2
adding device /dev/sdd3 id 3
fs created label testbtrfs on /dev/sdd1
        nodesize 4096 leafsize 4096 sectorsize 4096 size 35.16GB
Btrfs v0.20-rc1

The -f option forces an overwrite of any existing filesystems. -L creates a filesystem label, which is any name you want to give it. With no other options this command creates a three-node RAID array, using RAID0 for data and RAID1 for metadata. The RAID in Btrfs has some differences from the old-fashioned RAID we're used to. In Btrfs RAID0 stripes your data across all available devices with no redundancy. RAID1 mirrors your data in pairs, round-robin across all available devices, so there are always two copies of your metadata regardless of how many devices are in the storage pool.
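
If you'd rather pick your own RAID levels, the -d option sets the data profile and -m sets the metadata profile. For example, this variation (a sketch using the same test partitions) mirrors both data and metadata:

# mkfs.btrfs -f -L testbtrfs -d raid1 -m raid1 /dev/sdd1 /dev/sdd2 /dev/sdd3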

Seeing Your Partitions and UUIDs

You can use the familiar old blkid command to see your new Btrfs filesystems (the UUIDs are abbreviated in this example):

# blkid  /dev/sdd* 
/dev/sdd: UUID="e9b11649" UUID_SUB="af7ce22c" TYPE="btrfs" 
/dev/sdd1: LABEL="testbtrfs" UUID="b6a05243" UUID_SUB="4770cbfb" TYPE="btrfs" 
/dev/sdd2: LABEL="testbtrfs" UUID="b6a05243" UUID_SUB="b4524e3d" TYPE="btrfs" 
/dev/sdd3: LABEL="testbtrfs" UUID="b6a05243" UUID_SUB="7e279107" TYPE="btrfs"

Mounting the Btrfs Storage Volume

Notice that the UUIDs on the three partitions in our storage volume are the same, but the UUID_SUBs are unique. If you run the blkid command before creating the storage pool, the UUIDs will also be unique. I like to create a special testing directory -- in this example, /btrfs -- so I don't accidentally gum up something important. Mounting any single device mounts the whole works, like this:

# mkdir /btrfs
# mount /dev/sdd3 /btrfs

You can create an /etc/fstab entry in the same way as for any filesystem. Use your label or the UUID (not the UUID_SUB) like one of these examples:

LABEL=testbtrfs  /btrfs  btrfs  defaults  0 0
UUID=b6a05243  /btrfs  btrfs  defaults  0 0
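
One caveat for multi-device volumes: if the kernel hasn't registered all the member devices when you try to mount (which can happen on older systems, or when mounting very early at boot), the mount fails until you run a device scan:

# btrfs device scan

After the scan, mounting by label, UUID, or any member device works as usual.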

What are my RAID Levels?

You can check your RAID levels with the btrfs command:

# btrfs filesystem df /btrfs
Data, RAID0: total=3.00GB, used=0.00
Data: total=8.00MB, used=0.00
System, RAID1: total=8.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, RAID1: total=1.00GB, used=24.00KB
Metadata: total=8.00MB, used=0.00

Measuring Available Space

You can't use our good ole du and df commands to measure used and free space on a mounted Btrfs volume, because they don't understand Btrfs metadata, RAID, and multi-device storage pools; that's what makes measuring available space on Btrfs tricky. I copied 7GB of files into my little test volume, and this is what it looks like with the btrfs command:

# btrfs filesystem df /btrfs
Data, RAID0: total=9.00GB, used=6.90GB
Data: total=8.00MB, used=0.00
System, RAID1: total=8.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, RAID1: total=1.00GB, used=46.01MB
Metadata: total=8.00MB, used=0.00

You could also try it on any raw device in the storage pool (the /dev/sr0 complaint below is just the command probing an empty optical drive):

# btrfs filesystem show /dev/sdd1
failed to open /dev/sr0: No medium found
Label: 'testbtrfs'  uuid: b6a05243
        Total devices 3 FS bytes used 6.95GB
        devid    3 size 11.72GB used 4.01GB path /dev/sdd3
        devid    2 size 11.72GB used 3.01GB path /dev/sdd2
        devid    1 size 11.72GB used 4.02GB path /dev/sdd1

Alrighty then, we have a nice Btrfs storage pool to play with, and we know how to poke around in it. Come back for part 2 to learn how to create, remove, and manage snapshots and subvolumes.

Comments


  • Jo-Erlend Schinstad Said:

    Sounds like you're trying to use BtrFS with LVM? With LVM, I know you need to have a RAID1 device to boot from, rather than something like RAID5. But does that apply to BtrFS RAID levels as well?

  • Trent Whaley Said:

    Ubuntu server 13.10 wouldn't boot for me with a multi-drive btrfs root unless it has a raid1 /boot

    To install ubuntu with a multi-drive btrfs raid1 root do this:

    (assuming blank drives on /dev/sda and /dev/sdb are most likely to be reliable, more blank drives on /dev/sdc etc...)

    boot the installer and go through to the manual partitioner.

    Create filesystems as follows:

    sda1 8GB MD physical, bootable
    sda2 8GB swap
    sda3 remainder btrfs mountpoint=/
    
    sdb1 8GB MD physical, bootable
    sdb2 8GB swap
    sdb3 remainder ext4 do not format no mountpoint
    
    sdc1 8GB MD physical, bootable
    sdc2 8GB swap
    sdc3 remainder ext4 do not format no mountpoint
    
    md0 (raid1+spare sda1 sdb1 sdc1) ext4 mountpoint=/boot
    

    Continue the installer; when installing the bootloader it should offer to install on sda, sdb, and sdc. Also, YES, you want to boot to a degraded RAID.

    Boot to the system. It should now have / as btrfs on sda3.

    Now add sdb3 and sdc3:

    sudo btrfs device add /dev/sdb3 /
    sudo btrfs device add /dev/sdc3 /
    

    Cool, now your fs is basically a big raid0 that you can add devices to on the fly and the devices can be any size... BUT if any one device dies you lose data (and probably kill the system)... so do this:

    sudo btrfs balance start -dconvert=raid1 -mconvert=raid1 /
    

    Cool... now you have two copies of all your data, you can add random-sized disks to the pool, you can live snapshot the system...

    Note that with this setup I have used multiple swap partitions, which is functionally equivalent (but with fewer layers) to having swap on a multi-drive raid0, and gives the best swap performance. However, having any one of the drives with swap on it fail during operation will cause a system crash (but it should recover, with less swap, on reboot). If you need better uptime you can put swap on a raid1 (simpler, no fragmentation) or in a file on the btrfs (more flexible). Or install copious amounts of RAM and use zswap and no swap partition.

  • Carla Schroder Said:

    That is amazing, thanks! Putting the root filesystem on any kind of RAID always means more hurdles to jump to boot the system. Thank you!

  • Aaron Echols Said:

    This is almost identical to what you have to do with mdadm: if the drive with the bootloader fails, the system won't boot and you'll have to do some grub or bootstrap magic to get back up.

  • trent Whaley Said:

    With an md /boot partition the Ubuntu installer will automatically offer to install grub in the MBR of every drive the /boot resides on. It takes some jiggery-pokery to replace a drive, but booting after failure was not a problem in my tests.

  • MetaPhaze Said:

    You forgot about Gentoo. Gentoo also has support for btrfs.

  • ifadey Said:

    Most of the Linux distros with new kernel versions have btrfs support :)

  • GoinEasy9 Said:

    Thanks for the article. I'm looking forward to Part 2.

  • François Said:

    I like your pronunciation idea for BTRFS, "Butter Eff Ess". "Better Eff Ess" sounds more interesting, does it not?

  • NikTh Said:

    I cannot find any reference to btrfs in /proc/filesystems (Ubuntu 14.04). Instead, a better way to look (imho) would be: ls /lib/modules/$(uname -r)/kernel/fs/btrfs which should return btrfs.ko. So, the module is already there.

