Weekend Project: Ensure a Hassle-Free Linux Upgrade

Linux's long-term stability means that users can go for years simply upgrading packages without ever doing a re-install from scratch. Believe it or not, that is not always a good thing. It is the recommended practice for servers, naturally, but a peculiar side-effect is that when you do eventually re-install (a desktop or a server), you have ages of old tweaks and customizations built up, and reproducing them can be a confusing hassle. I recently undertook a from-scratch-reinstall, so some of the lessons I learned could be valuable when you tackle your next migration.

To be precise, I started experiencing hard drive problems on one of my desktop machines (on the disk where my root partition lived), and seeing the writing on the wall, I decided to replace the aging disk. A new release of that machine's distro (Ubuntu) had just dropped, so a fresh install was not much worse than an upgrade plus a migration to a new disk. I also took the opportunity to install / to an SSD, which does not affect the process much, and to finally move that machine from a 32-bit to a 64-bit build, which does affect the process. It means I cannot simply copy binaries from one machine to another; instead I have to actually sort out what is installed on the old machine, and replicate it.

From the bird's eye view, migrating your settings and "presence" to a completely new machine means taking stock of everything locally-installed and customized at the system level (namely application packages), determining the stock package-set you need to replicate onto the new machine, isolating any specialty applications like databases, and preserving your configuration and settings (system-wide and personal). Obviously you need to migrate your actual files, too, but that is hardly an unsolved problem. For the moment we will just stick to the software and non-data portions of the OS.

Support your /local/ Hierarchy

The first thing to do is take inventory of all of the packages on your old machine that you cannot simply re-install through the new machine's package management service. For starters, this includes everything that you have compiled and installed locally. According to the Filesystem Hierarchy Standard (FHS), locally-installed software belongs in /usr/local/bin/ (and, for system administration programs, /usr/local/sbin/). You can take stock of the contents of those directories to make sure you don't forget something. However, if you built RPM or Debian packages from source rather than installing the programs with make install, the packages may be installed in the normal /usr/bin/ and /usr/sbin/ directories instead.

In that case, you will need to turn to the package manager for help. Apt front-ends like Synaptic can show you locally-installed packages for Debian-based systems (including Ubuntu and all of its derivatives). As near as I can tell, YUM does not yet implement a similar feature for RPM-based distributions, although it has been discussed.

You can glean similar information with a bit of elbow grease, though: on the YUM list, Tom Mitchell suggests running rpm -qia and looking for the "Build Host" field -- a locally-compiled package should have your local machine's hostname. Mitchell suggested piping the output to less, but rpm -qia | grep -B 3 'yourhostname' | less will find only those packages that match your local machine's name. Hopefully you've picked an unusual one.
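That one-liner depends on how many lines separate the Name and Build Host fields in the output, so here is a sketch of the same idea in awk, which captures each package name directly. The here-document stands in for real rpm -qia output, and myhost is a placeholder for your machine's hostname:

```shell
# Sketch: pull the names of locally-built packages out of rpm -qia
# output. The here-document below is stand-in data; on a real system,
# pipe the real command in instead: rpm -qia | awk '...'
# "myhost" is a placeholder hostname.
locally_built=$(awk '/^Name/ {name=$3}
                     /^Build Host/ && /myhost/ {print name}' <<'EOF'
Name        : mytool
Version     : 1.0
Build Host  : myhost.example.com
Name        : bash
Version     : 4.2
Build Host  : buildvm-01.phx2.fedoraproject.org
EOF
)
echo "$locally_built"
```

On a real system, drop the here-document and pipe rpm -qia straight into the awk filter.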

Finally, you should always be wary of accidentally overlooking proprietary applications -- particularly when they come in installation shell scripts rather than standard packages. Independent software vendors have a tendency to drop binaries and configurations in peculiar places. Many go in /opt/, but there is no foolproof way to find them all. You can use either rpm -qf /some/particular/pathname or dpkg -S /some/file/name to discover what (if any) package a suspicious-looking file belongs to, but you are just as likely to be successful by reading through your "Applications" system menu and taxing your memory.

Manifest Destiny

Next up, you will want to generate a list of the installed packages on your old system, which you can use to replicate the installed-package-list on the new machine. This is a little easier on Debian-based systems, because dpkg has a built-in command for the purpose, but it is easy enough on RPM distros, too.

On a Debian system, run dpkg --get-selections > my-software-packages.txt. This will write a list of all installed packages on your system to the my-software-packages.txt file. All means all; not just the packages you chose, but all of the libraries, data packages, and other dependencies that they pull in as well. It will be a long list.

You can edit it by hand and remove things you do not care about, but be careful not to erase one line while leaving a package elsewhere in the file that depends on it; that would create a conflict. It is also up to you what to do about the locally-installed Debian packages (if any) discussed in the preceding section. The simple solution might be to uninstall them before generating the file, but that makes life harder if the migration takes longer than you expect. Excising the lines in question in a text editor is probably just as easy; most people don't have more than a handful of local packages.
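For a handful of known names, the excision can also be scripted. A minimal sketch, where mylocal-tool is a hypothetical locally-built package and the selections file is toy stand-in data:

```shell
# Sketch: remove a locally-built package from the selections list
# before replaying it on the new machine. The file contents here are
# toy data; generate the real list with dpkg --get-selections.
cat > my-software-packages.txt <<'EOF'
bash            install
mylocal-tool    install
vim             install
EOF
grep -v '^mylocal-tool' my-software-packages.txt > trimmed-packages.txt
cat trimmed-packages.txt
```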

On the new machine, you can flag the full list of packages by running dpkg --set-selections < my-software-packages.txt -- then run dselect to start the installation, and enjoy a good book while the process churns.

RPM distro users can generate a comprehensive package list with rpm -qa > my-software-packages.txt. You will also need to polish up the list before feeding it into the new machine's package manager; all of the same caveats apply, and note that rpm -qa emits full name-version-release.arch strings, which the new release's repositories will not match. Once you are satisfied, however, run yum -y install $(cat my-software-packages.txt) on the new machine. Enjoy an equally good book.
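The trimming is worth a sketch: each name-version-release.arch entry needs to be cut down to the bare package name before yum can resolve it against the upgraded release. The sample strings below stand in for real rpm -qa output; on a live system, rpm -qa --qf '%{NAME}\n' produces a name-only list directly.

```shell
# Sketch: strip the version-release.arch suffix from rpm -qa entries,
# leaving just the package names for yum to install. The two sample
# strings stand in for real rpm -qa output.
names=$(printf '%s\n' 'bash-4.2.46-34.el7.x86_64' 'zlib-1.2.7-18.el7.i686' \
    | sed 's/-[^-]*-[^-]*\.[^.]*$//')
echo "$names"
```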

Files You Actually Want

I said in the introduction that I was not going to address migrating data files, since that is in some ways a simpler process — and because no two users have the same data, it is harder to generalize. However, there are some system-wide files you need to make special provisions for, such as content in /var/ and configuration files in /etc/.

Some of these files will potentially be different enough on the new machine that you will definitely not want to simply copy the old file over into the new /etc/ directory; /etc/modprobe.d/, /etc/fstab, and /etc/mdadm.conf are prime examples. On the other hand, you may want to preserve some locally-honed configurations, such as any custom cron entries in /etc/cron.d/, LAN hostname information in /etc/hosts, network configuration in /etc/network/, or any firewall configuration you might have saved (such as in /etc/iptables.rules).

The tricky part is that Linux allows for so much flexibility in the naming and location of these configuration files that it is hard to write general-purpose rules. For example, you can store your firewall rules in /etc/iptables.rules, but you can just as easily keep them in /etc/network/if-pre-up.d/iptablesload. If you are not sure of the history of your customizations, the safe play is to make a copy of your /etc/ directory in a different location on the new machine, and resolve the differences one at a time. In the future, you might consider a configuration-version-control system like etckeeper for the new machine, so that the next migration will be better documented.
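A minimal sketch of that compare-and-resolve approach, using stand-in old_etc/ and new_etc/ directories in place of the real copies:

```shell
# Sketch: keep a copy of the old /etc beside the new one and resolve
# differences one file at a time. old_etc/ and new_etc/ are stand-in
# directories; point diff at your real copies instead.
mkdir -p old_etc new_etc
echo '127.0.0.1 localhost myhost' > old_etc/hosts
echo '127.0.0.1 localhost'        > new_etc/hosts
diff -ru new_etc old_etc > etc-differences.txt || true
cat etc-differences.txt
```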

The /var/ directory is similarly inconsistent. Many game packages use /var/games/ to store settings and data that needs to be visible to multiple users — forget to migrate it, and the worst that can happen is you lose all of your high scores. On the other hand, /var/www/ is historically where web applications are installed, and you probably do not want to move to a new machine without them. You may want to re-install local web utilities on the new machine if you are doing an upgrade at the same time -- that depends on whether new versions of the scripting engines and libraries used are coming with the upgrade. Even if you do not, however, you probably need to preserve the data in the web apps.

In any event, this is another topic where you will have to devise a strategy on a case-by-case basis. You should have no trouble reinstalling straightforward web utilities like the CUPS web front-end or phpMyAdmin on the new machine. A web-based accounting package that you use every day, however, needs to be treated carefully to avoid data loss.

All Your Data Are Belong to Base

Local web content leads into another important subject: databases. Migrating databases from one machine to another requires consideration of the database system used, the storage format, and the architectures involved. As with /var/www/, there is an important distinction between applications where you want to start over on the new machine because the context is different (such as Webmin), and applications where you need to migrate your data (such as a GIS app).

Fortunately, the need arises often enough that a lot of RDBMS-backed applications will bundle their own database-migration tools, just as they do backup tools. MythTV, for example, includes backup and restoration scripts that can be used to migrate from one machine to another without a loss in service.

Other apps might not be so generous, in which case you should start by consulting your database's documentation. MySQL has extensive online docs dedicated to migrating databases between machines. There is a good chance that your old and new desktop machines will use the same binary storage format (most PC architectures do), in which case the procedure is straightforward: you can copy the underlying .frm, .MYI, and .MYD files from one machine to another -- provided that you copy the mysql database itself, and replicate the MySQL users to the new machine.

Of course, you'll need to find the aforementioned MySQL binary files before you can copy them. If you need to locate yours, run grep datadir /etc/my.cnf. Even for very large databases, this direct copy method is faster than the old-fashioned migration process, in which you dump the database on the old machine, transfer the dump to the new machine, and import it into a new database. If you happen to use a weird, non-default storage engine, you will need to do a little extra digging, but for most users, the migration process is painless. Just be sure to run a few tests on the new machine before you erase the old one.
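A sketch of that datadir lookup, using a stand-in config file since the real path varies by distro (/etc/my.cnf on Red Hat derivatives, /etc/mysql/my.cnf on Debian-based systems):

```shell
# Sketch: read datadir out of the MySQL configuration. sample-my.cnf
# stands in for the real config file, whose path varies by distro.
cat > sample-my.cnf <<'EOF'
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
EOF
datadir=$(grep '^datadir' sample-my.cnf | cut -d= -f2)
echo "$datadir"
```

If the storage formats do turn out to differ, mysqldump on the old machine followed by an import with the mysql client on the new one is the slower but portable route.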

Sweeping Out The Corners

There are likely to be peculiarities on any system that won't get caught by the general-purpose migration hints, but if you have the luxury of migrating to your new machine (or simply your new root partition and filesystem) while your old one is still around, you can catch most of them within a few weeks of running the new system. That list might include locally-installed packages that did not drop any executables into /usr/local/bin/ or /usr/local/sbin/ (such as window manager or icon themes), Java applications that ended up in unusual directories, and programs that use peculiar locations to save their data and settings (such as /usr/share/ instead of /var/ — the fact that they shouldn't doesn't guarantee that they won't).

Naturally, even after you manage to replicate the software environment and system configuration from your old machine onto your new one, you still have to move your personal files. I have been assuming thus far that this content is either stored in one location (within /home/), or else on other mount points that you know well enough to re-attach properly. But when your migration also happens at the same time as a system upgrade (as my example did), simply mounting the old /home/ on the new machine can introduce its own share of problems because of deprecated and upgraded settings in the desktop environment and applications.

Consequently, it may be better to rename old configuration folders like .gconfd and .kde4 and use them only as reference when tweaking the new environment. The good news is that older apps and lower-level utilities are more stable than the desktop environments and far less likely to introduce API changes. So dependable favorites like .bashrc, .vimrc, .emacs.d/, and .ssh/ are liable to sail through any upgrade process, no matter how prickly, and be waiting for you on the other side.
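A sketch of the rename-and-reference approach; home/ is a stand-in for your real home directory, and the two dot-directories are just common examples:

```shell
# Sketch: park the old desktop configuration directories out of the
# way so the new environment starts clean, keeping the old files
# around for reference. home/ is a stand-in for your home directory.
mkdir -p home/.kde4 home/.gconfd
for dir in .kde4 .gconfd; do
    mv "home/$dir" "home/$dir.old"
done
ls -A home
```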

 
