Fault-tolerant Web hosting on a shoestring

Author: Alaric Snell-Pym

The words “fault-tolerant Web hosting” bring to mind hosting centers with multiple redundant power supplies, complex networking, and big bills. However, by taking advantage of the underlying fault-tolerance of the Internet, you can get a surprising level of reliability for little cost.

There are a worrying number of things that can go wrong and take your Web site down. Your connection to the Internet can fail anywhere between your servers and the rest of the world. If the power supply goes down in your hosting center, then all the routers, switches, and servers go down with it. Finally, any individual router, switch, server, firewall, or other piece of network infrastructure can fail.

Large sites usually have multiple Internet connections into their server farm via separate leased lines following independent paths through different telephone exchanges. To defend against power outages, they have multiple incoming power feeds, uninterruptible power supplies, and generators. Finally, they carefully design their network so that there are always multiple routes between the servers and the Internet, allowing any single cable or piece of equipment to fail without causing a problem.

An easier way of getting nearly the same level of redundancy is to install two servers at different hosting centers, with different Internet service providers (ISPs) — but there’s a problem with that. If your servers are connected to two different ISPs, they will end up with two different Internet Protocol (IP) addresses, and not even on the same subnet. When a Web server’s hostname has more than one A record, a Web browser will end up being given one IP address from the list effectively at random. So if you set up two identical servers, about half of the requests will go to each. This is great for load balancing, but if one of the servers goes down, about half of the requests will still be routed to the broken server, and simply fail.
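To make the problem concrete, the round-robin approach is just a hostname with one A record per server in the zone file (the addresses here are only placeholders); resolvers hand the addresses out more or less at random, so there is no failover:

  www  IN A 1.2.3.4
  www  IN A 5.6.7.8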

However, you may be able to get around that issue by taking advantage of the fact that resolvers are required by the DNS specification to look up all of a zone's name servers (listed as NS records in the DNS) and try them in turn until one responds. So if one server is down, clients will automatically try another until they find one that works.

Let’s imagine we want to host a Web site at www.example.com. We have two servers at two different ISPs — let’s call them bob.example.com (IP 1.2.3.4) and bill.example.com (IP 5.6.7.8). We install BIND (or another DNS server) and Apache (or another Web server) on both.

As usual, we tell our domain registrar that we want them both to be name servers for our new domain. Rather than configuring one as a primary name server and setting the other up as a secondary, we tell the servers that they are both primary for example.com. Using BIND, that would mean giving both an identical named.conf containing something like this:

  zone "example.com" {
    type master;
    file "example.com";
  };

We give the two servers different zone files for example.com. For bob.example.com, we use:

  $TTL 60

  @  IN  SOA  bob.example.com. hostmaster.example.com. (
    2007010100  ; serial
    60          ; refresh
    7200        ; retry
    604800      ; expire
    60 )        ; minimum (negative-caching TTL)

  @    IN NS bob.example.com.
  @    IN NS bill.example.com.

  @    IN A 1.2.3.4
  www  IN A 1.2.3.4

  bob  IN A 1.2.3.4
  bill IN A 5.6.7.8

We would probably add some MX records as well for mail servers, but that is beyond the scope of this article.

Then for bill.example.com, we’d use a slightly different zone file:

  $TTL 60

  @  IN  SOA  bill.example.com. hostmaster.example.com. (
    2007010100  ; serial
    60          ; refresh
    7200        ; retry
    604800      ; expire
    60 )        ; minimum (negative-caching TTL)

  @    IN NS bob.example.com.
  @    IN NS bill.example.com.

  @    IN A 5.6.7.8
  www  IN A 5.6.7.8

  bob  IN A 1.2.3.4
  bill IN A 5.6.7.8

You will notice that each zone file specifies the server hosting that copy of the zone as the Web server. We put an A record on example.com itself (the @ IN A ... line) for people who type just example.com into their browser, as well as an A record for www.example.com; in both cases, the address we give is that of the DNS server the zone is hosted on.
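Before reloading BIND on each server, it is worth checking your work. Assuming a stock BIND installation (configuration paths vary between distributions), the bundled named-checkconf and named-checkzone tools will catch most typos:

  named-checkconf /etc/named.conf          # check the named.conf syntax
  named-checkzone example.com example.com  # check the zone file, run from BIND's working directory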

The Web server configuration is standard, and should be identical on both servers, with the same files in their document roots.
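As a rough sketch, assuming Apache 2 and a document root of /var/www/example.com (both just examples), the relevant virtual host section on each server might look like this:

  <VirtualHost *:80>
    ServerName www.example.com
    ServerAlias example.com
    DocumentRoot /var/www/example.com
  </VirtualHost>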

Now, when a user enters www.example.com into a browser, it (with the cooperation of the local name server) will find a working name server for example.com and ask it for the A record for www.example.com. Whichever server it gets in touch with will answer with its own IP address.
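If you have dig installed, you can watch this happening by asking each server directly; each should answer with its own address:

  dig +short @1.2.3.4 www.example.com A    # bob answers with 1.2.3.4
  dig +short @5.6.7.8 www.example.com A    # bill answers with 5.6.7.8
  dig +short example.com NS                # both servers are listed as name servers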

If one of the servers is down, or the network connection to it is broken, it will no longer count as a working name server, so all the requests will go to the other server, which will (hopefully) still be up. In practice, this means that if a server becomes inaccessible, it also disappears from the DNS, and all the requests are routed to the live server.

This scheme can be scaled to three or more servers.

The gotchas

The first problem with this approach is that name servers tend to cache things they find out from other servers, in order to decrease the load on the remote servers and to be able to answer queries faster in the future. If somebody requests www.example.com, his local DNS server will pick either bill or bob as the Web server. When the user requests www.example.com again, his server will send him to the same Web server, since it will have the A record cached. If that server has gone down, this is a problem.

To reduce this problem, we set very low values for the time-to-live, refresh, and minimum timeouts in the zone files above; just 60 seconds. This means that other name servers are requested to never cache any records from this zone for more than one minute. We could set the value lower, but the lower we set it, the more often name servers will have to contact ours, which increases the load on bill and bob, and makes the site seem slower.
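You can check that the low TTL is being honoured by querying your usual resolver a couple of times with dig; the second field of each line in the answer section is the remaining TTL in seconds, and it should never be higher than 60:

  dig www.example.com A    # note the TTL in the answer section
  sleep 30
  dig www.example.com A    # the cached TTL should have counted down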

This technique relies on the name server becoming unreachable if the Web service is broken. If Apache crashes on one of the servers, then BIND will continue to run happily and hand out its own IP address for the A record, but incoming requests will fail. This technique only protects against network failures, or whole-machine disasters like power outages, crashes, and hardware failure.
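You can see this failure mode for yourself: if Apache is stopped on bob but BIND is left running, the DNS still answers while the Web request fails (dig and curl assumed to be available):

  dig +short @1.2.3.4 www.example.com A    # BIND still replies with 1.2.3.4...
  curl --max-time 5 http://1.2.3.4/        # ...but the HTTP request is refused or times out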

And as with any system for replicating a Web application across multiple servers, you still need some way of dealing with data storage. It’s no use putting a copy of your read-write database on each server, since any user that makes a change to something will make the change in only that one copy, and the copies will soon diverge. However, if your site mainly provides read-only access to some body of information, you can put copies of it on both servers and take care to update them both with any new versions.

Tidying it up

Having two nearly identical zone files is a bit untidy. For a start, if you add a third host, ben.example.com for instance, you need to add it to both the other zone files. If a server changes IP address, you need to update all the zone files individually. Likewise, if something needs changing in the Apache configuration, it has to be done on all the servers. And the document root’s contents need to be kept identical, not to mention other files and databases. As the number of servers rises, the scope for error increases.

If we were using a normal DNS setup, we’d just change information on the master, and the secondaries would pick the changes up automatically. We can do something similar here.

Firstly, we can generate each server’s zone file from a single master by using a macro processor like M4 — or just plain sed — to convert a string such as LOCAL-IP to the IP address of that server. We can replace the first two A records in the zone file with:

  @    IN A LOCAL-IP
  www  IN A LOCAL-IP

Then on each server we can generate the zone file with the following command:

  sed s/LOCAL-IP/1.2.3.4/ < example.com.template > example.com

or:

  sed s/LOCAL-IP/5.6.7.8/ < example.com.template > example.com

Once we’ve done that, we just have a set of files we need to keep identical on each server: the zone file template, named.conf, the Web server configuration, the contents of the document root, and maybe some data files (or database dump files). There is a standard solution to this problem: we can use a version control system such as CVS or Subversion. Move named.conf, the zone file template, the Web server configuration, and the document root directory into a single parent directory, and put it under version control. Check it out into, for example, /var/configuration on all of your servers, then configure BIND and Apache to look for their configuration and data files in there. Whenever something needs changing, after committing the change, update /var/configuration on every server (don’t forget to rerun the sed command to generate the zone file from the template; it might be wise to make a shell script that contains svn update followed by this command to make sure it isn’t missed). Another advantage of this approach is that you have a complete change history of your configuration, so disastrous ideas can be quickly reverted.
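That shell script might look something like the following sketch. The working-copy location, file names, and reload commands are assumptions that will need adjusting for your own setup, and the LOCAL-IP substitution obviously differs on each server:

  #!/bin/sh
  # update.sh -- run on each server after committing a configuration change
  set -e
  cd /var/configuration
  svn update
  # Regenerate this server's zone file (use 5.6.7.8 on bill)
  sed s/LOCAL-IP/1.2.3.4/ < example.com.template > example.com
  # Tell BIND and Apache to pick up any changes
  rndc reload example.com
  apachectl graceful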

Alaric Snell-Pym is a freelance software engineer who lives in Gloucestershire, England. He has volunteered for a joint ISO/ITU-T working group on data encodings, edited technical books, and dabbled in open source projects.