Building a distro

Author: Farrell J. McGovern

You download a CD or maybe a diskette image, transfer it to the appropriate media, boot your computer with it, and voilà, you’re running Linux. It sounds so simple — but a great deal of work goes into creating that software. Beginning about two years ago, I spent a year and a half building a desktop-oriented GNU/Linux distribution named MfxLinux, designed to be tightly integrated with Crowell Systems’ Medformix medical office management system. Along the way, as with any project, a lot of design and implementation decisions had to be made — some of which worked out better than others.

My main job at Crowell was as the primary developer of its distribution. Most of the design decisions were mine, but some were dictated to me, and some came from a consensus of the technical staff at Crowell.

The goal of the project was to replace Windows workstations in order to save time on support, remove the constant threat of malware, and increase reliability. Additionally, the company’s developers wanted to get away from the moving target of Windows APIs. GNU/Linux’s Unix heritage means that its APIs are stable.

The first choice any would-be distro creator must make is whether to base the distro on an existing one or to build it from scratch. Unless you have encyclopedic knowledge of Unix, a great deal of spare time, or the need to serve an extremely specialized environment, it is easier to start from an existing distro.

Most derivative distros are based upon Red Hat, Debian, or Slackware. Of them, Slackware is the oldest. I started using Slackware in 1993, and therefore I was very familiar with it. Slackware is easy to modify. The package system is simple to work with and build packages for. It has an unencumbered init script structure, and you can install software from source without worrying about breaking dependency databases. Plus, Slackware author Patrick Volkerding does a good job of keeping even older versions of the software up-to-date with security revisions.
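
To give a sense of how simple that package format is: a Slackware package is just a gzipped tarball of files laid out relative to /, and the whole toolchain is a handful of shell scripts. A minimal sketch, with a made-up package name (makepkg, installpkg, and removepkg are the standard Slackware tools):

    # Build a package from a staging directory whose layout mirrors /
    cd /tmp/mfx-tools-staging
    /sbin/makepkg -l y -c n /tmp/mfx-tools-1.0-i386-1.tgz

    # Install it on a target box; the package database is plain text
    # under /var/log/packages, so nothing exotic can break
    installpkg /tmp/mfx-tools-1.0-i386-1.tgz
    removepkg mfx-tools-1.0-i386-1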

Crowell already had a tool that would take a functioning system, turn it into a series of tarballs, and, with a basic install program, burn it onto a bootable install CD. It had been developed to install a server Linux distro I had been writing for the company a couple of years previously. All that was needed was a working system to clone.
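
Crowell’s tool itself was in-house code, but the general technique is easy to sketch: tar up the golden system’s filesystem, then wrap the tarballs, a kernel, and a small installer in a bootable ISO. Paths below are placeholders, and the real tool differed in the details:

    # Archive the cloned system (its root is mounted read-only at /mnt/golden)
    tar czpf /build/cd/rootfs.tar.gz -C /mnt/golden \
        --exclude=proc --exclude=tmp .

    # Build a bootable install CD image around it with isolinux;
    # isolinux.bin, a boot kernel, and the installer live under /build/cd
    mkisofs -o mfxlinux-install.iso -R -J \
        -b isolinux/isolinux.bin -c isolinux/boot.cat \
        -no-emul-boot -boot-load-size 4 -boot-info-table /build/cd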

I began by installing everything from the Slackware 9.0 CDs, then slowly removed packages as I decided what we did and didn’t need.

Configuring Slackware is fairly easy, but configuring the X Window System was a more arcane process. I had played with a Slackware derivative called College Linux that had a nice X configuration setup, so I took parts of it and adapted them. With some glue scripts, I created a hybrid of the College Linux and Slackware configuration scripts that was easy to use and worked well. With that setup, you could load a system and have it functional in 20 minutes.
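
The glue scripts are long gone, but Slackware 9.0 shipped XFree86 4.x, and the basic moves any such script automates look roughly like this:

    # Probe the hardware; the server writes /root/XF86Config.new
    XFree86 -configure

    # Try the generated configuration without installing it
    XFree86 -xf86config /root/XF86Config.new

    # If a test screen comes up, put the file where the server looks for it
    cp /root/XF86Config.new /etc/X11/XF86Config

The scripts’ real value was wrapping steps like these, plus the resolution and mouse questions, in a few friendly dialogs.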

The desktop side

I chose KDE as the desktop environment because it has the most Windows-like interface available for Linux, which would make transitioning Windows users to Linux easier. Although the KOffice suite has a good word processor, it takes a long time to load, so I suggested we use AbiWord, then the fastest-loading GUI word processor, and one with plenty of features. As the system evolved, we needed some features that AbiWord didn’t have, so Crowell paid Dominic (Dom) Lachowicz, AbiWord’s maintainer, to add them, but this didn’t work out, and we ended up switching to OpenOffice.org.

My next challenge was support for the flatbed scanners used for imaging things like medical insurance cards. SANE kept my sanity in place, and Kooka, KDE’s scanner application, gave us a consistent interface. Doctors’ offices do a lot of scanning and thus continually replace their scanners. The Windows scanning programs bundled with scanners have no standard interface, which meant rewriting the integration software and retraining staff every time an office purchased a new scanner. Kooka gave us one standard interface for all scanners.
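
Kooka sits on top of SANE, and the same backends can be driven from the command line, which made bringing up a new scanner model easy to debug. A quick sketch (the resolution option is backend-specific and the file name is arbitrary):

    # List the scanners SANE can see
    scanimage -L

    # Grab a page from the default scanner as a TIFF
    scanimage --format=tiff --resolution 300 > insurance-card.tiff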

A topic somewhat related to scanners was digital still cameras. When connected to a workstation, some cameras appear as USB storage devices, others have SANE backends, and a whole bunch are supported by gPhoto2. Together, these give Linux very good digital camera support.
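
gPhoto2 is just as scriptable from the command line; a minimal sketch, assuming a camera that gPhoto2 recognizes (cameras that show up as USB storage simply get mounted instead):

    # Check whether gPhoto2 recognizes the attached camera
    gphoto2 --auto-detect

    # Copy every image off the camera into the current directory
    gphoto2 --get-all-files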

Next, I tackled integrating multimedia support with a Web browser. Linux has wonderful tools for creating and playing back audio and video, but until recently, getting them properly integrated with a Web browser was a challenge. The alternative was to get things working using MIME types and launching separate applications from within the browser. Tighter integration can be time-consuming and involves a myriad of changes to allow the browser to access the various multimedia devices. I wanted a stricter file-permission model, but I was overruled, and we simply gave all the devices global read/write/execute permissions. Thus anyone who could log onto the system could access these devices. This was not a good idea from a security point of view, but the CIO considered our environment a “trusted” one, so that’s how we set things up.
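
For the record, the two permission models look roughly like this on a 2.4-era static /dev (device names and the user name are examples and vary by machine):

    # The permissive model we ended up shipping: world-accessible devices
    chmod 666 /dev/dsp /dev/mixer /dev/video0

    # The stricter model I argued for: access through group membership
    chgrp audio /dev/dsp /dev/mixer
    chgrp video /dev/video0
    chmod 660 /dev/dsp /dev/mixer /dev/video0
    usermod -G audio,video frontdesk    # grant access per user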

The next piece of the multimedia puzzle was integration with the Web browser, Mozilla. At the time, only RealPlayer could be properly “swallowed” into a Web page, rather than being spawned in a separate window running the multimedia application. I had to fight with MIME types and the browser’s support for spawning outside programs to handle the media types we needed. These days, free programs like mplayerplug-in and gxine let you properly integrate most multimedia types into a Web page. CodeWeavers’ CrossOver Plugin can also do the trick, but it costs money.
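
Mozilla kept those helper mappings in its own preferences, but the shape of the problem is the old MIME-type-to-handler table; in mailcap terms, the kind of entries we were fighting with looked something like this (types and players are examples only):

    # mailcap-style mapping of media types to external players
    video/mpeg;      mplayer %s
    audio/x-wav;     xmms %s
    application/pdf; xpdf %s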

We made Firefox, rather than Mozilla, the distro’s default browser, since it has a smaller footprint and loads and runs noticeably faster. After a few deployments, we found a problem: many of the doctors did online billing to insurance companies via a Web interface, and a good number of those insurance companies ran Web sites that demanded Internet Explorer or Netscape, not Firefox.

After a bit of searching, I ran across an extension for Firefox that solved the problem. User Agent Switcher adds a pull-down menu that changes the identity string the browser sends to Web sites to essentially anything you want, which allowed Firefox to masquerade as Netscape.
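
The extension is really just a front end for a standard Mozilla/Firefox preference, so the same spoofing could also be baked into a user.js file at install time; the Netscape string below is only an example:

    // user.js: make Firefox identify itself as Netscape to picky billing sites
    user_pref("general.useragent.override",
              "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7.2) Gecko/20040804 Netscape/7.2");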

The server side

Early on we decided to build one distribution for both the desktops and the servers. We wrote a script, to be run after the install, that would configure a machine for one role or the other. This would replace the Red Hat-based server distro that I had developed for the company previously.
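
The real post-install script belongs to Crowell, but on Slackware the role split mostly comes down to the default runlevel and which /etc/rc.d scripts are left executable. A stripped-down, hypothetical sketch:

    #!/bin/sh
    # set-role.sh -- hypothetical sketch of a post-install role selector
    case "$1" in
      server)
        # text console by default (Slackware runlevel 3); individual services
        # get switched on by making their /etc/rc.d/rc.* scripts executable
        sed 's/^id:4:initdefault:/id:3:initdefault:/' /etc/inittab > /etc/inittab.new
        mv /etc/inittab.new /etc/inittab
        ;;
      workstation)
        # boot straight to the graphical login manager (Slackware runlevel 4)
        sed 's/^id:3:initdefault:/id:4:initdefault:/' /etc/inittab > /etc/inittab.new
        mv /etc/inittab.new /etc/inittab
        ;;
      *)
        echo "usage: $0 server|workstation" >&2
        exit 1
        ;;
    esac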

We used IBM’s xSeries eServers. For storage, these servers use ServeRAID controller cards, whose Linux drivers are maintained by Adaptec. As development on the 2.6 kernel ramped up, the code in the 2.4 kernel started to lag behind, and I often had to patch the kernel by hand with the latest sources emailed to us by the driver’s maintainer. Not fun. We should have chosen the 2.6 kernel from the start, but as always, hindsight is 20/20.
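
Patching by hand meant running through the classic 2.4 build drill each time a new ips (ServeRAID) driver drop arrived; roughly the following, with a made-up patch file name:

    # Apply the updated driver and rebuild (2.4-era sequence)
    cd /usr/src/linux
    patch -p1 < /tmp/ips-driver-update.diff
    make oldconfig
    make dep bzImage modules
    make modules_install

    # Install the kernel and re-run LILO (assumes lilo.conf already
    # points at /boot/vmlinuz-custom)
    cp arch/i386/boot/bzImage /boot/vmlinuz-custom
    lilo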

We also had to deal with the WAN interfaces for connecting offices via T1 or frame relay connections. Originally we used Cyclades WAN interface cards, but we found the company very bad at maintaining its 2.4 kernel code and not very responsive to our inquiries. I did some research and found that Sangoma had good-quality WAN cards with good Linux support: their code was always up-to-date in the 2.4 kernels, the developers loved Linux, and they were always willing to give us a hand. So we switched. We also needed to support a range of DigiBoards; some had support in the 2.4 kernel, while others had to be patched in. It took a lot of compiles to get the right combination, with some drivers built as loadable modules and others compiled into the kernel. We needed two kernels: one with SMP support for the servers, and another without SMP for the workstations and single-processor servers. Toward the end, I also updated the kernel for Serial ATA, as the newest workstations were using it.
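
The two configurations differed mainly in CONFIG_SMP; a fragment of the server kernel’s .config gives the flavor (option names are from the 2.4 tree as I recall them, and the DigiBoard line is one example among several board families):

    CONFIG_SMP=y
    CONFIG_SCSI=y
    CONFIG_SCSI_IPS=y      # IBM ServeRAID, built in so the boot disk is visible
    CONFIG_DIGIEPCA=m      # one family of DigiBoard serial cards, as a module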

The far side

One of the last things that I did with MfxLinux was enable desktop sharing in KDE with password protection. KDE’s desktop sharing is a compatible derivative of VNC, and with it we could connect to a system and actually see the problem a user was reporting. As the clients’ workstations were behind firewalls, standard VNC was unusable. TightVNC, on the other tentacle, could transparently tunnel through the firewall to the workstation. It’s also easy on bandwidth, thanks to OpenSSH’s compression. TightVNC rocks.
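
The tunneling itself was plain OpenSSH port forwarding with compression turned on; host names, ports, and accounts below are placeholders:

    # From the support desk: log into the office's reachable gateway and
    # forward a local port to the desktop-sharing port on the workstation
    ssh -C -L 5901:workstation12:5900 support@office-gateway.example.com

    # In another terminal, point the viewer at the forwarded port
    vncviewer localhost:1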

With all that software in place, virtually everything the customers used to do on Windows systems could now be done on GNU/Linux. MfxLinux solved real problems, saved people time and money, and provided a more secure computing environment suitable for HIPAA and Sarbanes-Oxley compliance. The close integration of MfxLinux and Medformix is a great example of how GNU/Linux can be used to build an enterprise-caliber FOSS system that reduces problems for both users and support staff.

When I parted ways with Crowell Systems, I had delivered an easy-to-install, easy-to-use Linux distro, tightly integrated with Medformix. Despite the compromises, it was, quite literally, just what the doctors ordered!
