July 3, 2006

Reduce network storage cost, complexity with ATA over Ethernet

Author: Paul Virijevich

Today, Fibre Channel is the dominant enterprise storage technology, but as with all technologies, eventually something better comes along. If you're lucky, that something is also less complex and less expensive. For storage, that something may be ATA over Ethernet (AoE), a simple and open network protocol that allows storage to be accessed over Ethernet. Here's how you can set up a test server to provide shared storage using AoE.

AoE got its start when a company named Coraid launched a series of storage devices based on AoE in the summer of 2004. AoE's inclusion in the Linux kernel came a year later with kernel 2.6.11. The entire protocol specification is only nine pages long.

AoE is conceptually similar to the more widely known iSCSI (Internet Small Computer System Interface). The promise of iSCSI was that Fibre Channel storage area networks (SAN) could be replaced by much cheaper IP-based storage networks. iSCSI encapsulates SCSI commands in IP packets, allowing storage to be placed anywhere on the LAN or even the Internet. The problem with iSCSI is the extra processing required for wrapping and unwrapping the packets in IP. All things being equal, the performance of iSCSI SANs falls short of Fibre Channel unless you start adding expensive TCP offload cards to the mix, which significantly reduces the cost savings of using iSCSI in the first place.

AoE is out to change all that. AoE sends ATA commands over Ethernet frames without the overhead of IP. Communication is done via MAC addresses and is non-routable. Best of all, AoE devices appear as regular block storage. That means you can do with them whatever you would do with local storage. You can manage AoE storage with LVM, create RAID arrays out of your AoE devices, or put a cluster file system on top of them.

Getting started

Not much is required to start getting your feet wet with AoE. Just about every recent distribution has a kernel that supports AoE. As far as "enterprise" distributions go, to install on SUSE Linux Enterprise Server 9 (SLES 9) or Red Hat Enterprise Linux 4 you need to have your kernel's source installed so that you can build the standalone AoE driver.

The storage server, known in storage parlance as the target, needs to have the open source vblade package installed. vblade is the daemon that allows exporting partitions or hard drives for client machines to access. The syntax is straightforward as long as you remember that vblade stands for virtual blade. It treats devices as if they were Coraid EtherDrive appliances. To install vblade, just extract the source and compile with make. When it's ready to go, you can export a disk or a partition using a command like:
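The build is a simple fetch-and-make affair. A sketch, using an illustrative tarball name (version numbers vary):

```shell
# Unpack and build vblade from source (vblade-14.tgz is a made-up
# example name; use whatever version you downloaded).
tar xzf vblade-14.tgz
cd vblade-14
make
# Optionally copy the binary and its daemonizing wrapper onto root's PATH:
cp vblade vbladed /usr/local/sbin/
```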

./vbladed 0 0 eth0 /dev/sda4

This breaks down to shelf, slot, Ethernet device, storage device. The shelf and slot part comes from the fact that vblade emulates an EtherDrive storage appliance. In this case, we are treating the device (/dev/sda4) as if it is located in the first slot (0) in the first shelf (0) of an EtherDrive using the Ethernet device eth0. The resulting output should look something like this:

pid 12905: e0.0, 436614507 sectors

Here we see the process ID of the vblade daemon, the shelf and slot address, and the number of sectors on the exported device. If everything went well, you now have shared storage set up for clients to access.

Accessing AoE storage is just as easy as exporting devices. To be able to see the device, clients need to have the AoE kernel module loaded. Run modprobe aoe from a command prompt; if you don't get any error messages, confirm that the module is loaded with lsmod | grep aoe. The last thing to do is to install the userspace aoetools package. Once installed (with a simple make; make install), the tools will allow your clients to discover AoE devices on the network.
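The whole client-side setup can be sketched as follows, assuming your kernel ships the aoe module and using an illustrative aoetools tarball name:

```shell
# Load the AoE driver and verify it registered.
modprobe aoe
lsmod | grep aoe

# Build and install the userspace discovery tools
# (aoetools-14.tar.gz is an example name; versions vary).
tar xzf aoetools-14.tar.gz
cd aoetools-14
make && make install
```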

Go ahead and issue aoe-discover; aoe-stat. These commands probe the network for available AoE devices and list them. Discovery also automatically creates the entry /dev/etherd/e0.0 on the client machine.

Because AoE is block-level storage, you can do whatever you want to with it. Let's put a filesystem on it and mount it with:

mkfs.ext3 /dev/etherd/e0.0; mount /dev/etherd/e0.0 /mnt

Of course, you are not limited to ext3; any filesystem Linux supports will work just fine.
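If you want the mount to survive reboots, an /etc/fstab entry along these lines should work, provided the aoe module is loaded and discovery has run before the mount is attempted (the /mnt mount point matches the example above; whether your distribution's init scripts honor the _netdev option for AoE devices varies):

```
/dev/etherd/e0.0   /mnt   ext3   defaults,_netdev   0 0
```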

Another option is to manage AoE storage with LVM. This does require a slight change to the file /etc/lvm/lvm.conf: add the line types = [ "aoe", 16 ] to the devices section. Now LVM treats e0.0 the same as any other block device.
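With that line in place, the usual LVM commands apply to the AoE device. A sketch, with made-up volume group and logical volume names:

```shell
# Initialize the AoE device as an LVM physical volume.
pvcreate /dev/etherd/e0.0

# Create a volume group and a 10GB logical volume on it
# (vg_aoe and lv_data are hypothetical names).
vgcreate vg_aoe /dev/etherd/e0.0
lvcreate -L 10G -n lv_data vg_aoe

# The logical volume takes a filesystem like any other block device.
mkfs.ext3 /dev/vg_aoe/lv_data
```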

In my informal testing, I found the performance of AoE limited only by the speed of the 100Mbps switched network. This was borne out both by running bonnie++ from a directory mounted on the AoE device and by copying large files to and from the AoE device. To achieve maximum performance, AoE devices need to be on a separate, dedicated storage network. The latest AoE driver supports jumbo frames when used with Gigabit Ethernet switches.
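Enabling jumbo frames comes down to raising the MTU on both ends of the link. A sketch using ifconfig, assuming your NICs and switch all support 9000-byte frames:

```shell
# Raise the MTU on the dedicated storage interface, on the
# server and on each client (9000 is a common jumbo-frame size).
ifconfig eth0 mtu 9000
# Exports may need to be restarted (vblade re-run) so the daemon
# picks up the larger frame size.
```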

AoE brings a new, low-cost option to your storage environment. Coraid's EtherDrive appliances are priced far lower than Fibre Channel alternatives. The vblade daemon lets you become comfortable with the technology at no cost. It also makes it possible to fill up a server with disks and get going on the cheap. Any environment that makes heavy use of Linux will want to take a closer look at AoE.

