The Kernel Newbie Corner: Building and Running a New Kernel

This week, we're going to take a bit of a detour from building and loading modules to discuss how to configure, build, install and reboot a new kernel. Very soon, we'll be covering debugging, and there's only so much you can do without being able to configure and reboot under a new kernel with extra debugging features. And while most of you have probably already done this once or twice, I'll try to throw in a couple of surprises even for the hardened veterans out there.

As with all previous articles, I'll be doing this on a fully-updated, 64-bit version of Fedora 11, so translating what happens here to your favorite Linux distro is up to you. (The archive of all previous "Kernel Newbie Corner" articles can be found here.)

This is ongoing content from the Linux Foundation training program. If you want more content, please consider signing up for one of these classes.

Getting a Fresh Kernel Source Tree

For the sake of consistency, let's assume you have a working copy of the latest git pull of the kernel source tree. (If you don't, one of the earlier articles explains how to get one.)

Let's start with a perfectly clean and pristine version of the kernel source, which you can get using one of:

 $ make clean        [kind of clean]
$ make mrproper [even cleaner]
$ make distclean [there we go, THAT's clean]

It's that last variation that should clean your source tree right back to the "distribution" state, so let's go with that.

Configuring Your Kernel

As most folks know, the point of "configuring" your kernel is to select precisely which features you want in the kernel you're about to build, and whether those features should be built into the kernel image proper, or left as loadable modules. And, as most folks know, the end result of any kernel configuration process is the .config file that will be left at the top level of the kernel directory tree.

Assuming that most readers have seen this done once or twice, let's just summarize the most common configuration targets you're likely to see. First, if you're configuring from scratch, well, one of the following is probably what you want:

 $ make config
$ make menuconfig
$ make gconfig
$ make xconfig
$ make defconfig

If you want to use an existing .config file for the configuration, simply copy that file into the top level of the kernel source tree and run one of:

 $ make oldconfig
$ make silentoldconfig

And, finally, there are a number of configuration targets that aren't quite so well-known, but are wildly useful for testing since they attempt to stress the configuration process with weirdly extreme and unusual choices:

 $ make allyesconfig
$ make allnoconfig
$ make allmodconfig
$ make randconfig

Got all that? No? No problem, since you can get a quick listing of all of the make targets with:

 $ make help

and digging into some of them is left as an exercise for the reader.

Where All That "Help" Comes From

It's useful to know how all that "help" information is generated, since it's not as simple as it seems. There is, not surprisingly, a "help" make target in the top-level kernel source Makefile, but how it produces its output is worth seeing. (You can follow along if you want starting at about line 1270 in that Makefile.)

The first part of the output is hard-coded but, very quickly, that top-level file dives into the scripts directory to pick up more content, then returns to hard-coded output and, finally, includes other makefiles for packaging, documentation and, most interestingly, architecture-specific content.

That last part is worth explaining a bit more. Some make targets only make sense in the context of certain architectures, and you can see that if you ask for target help for a specific architecture, as in:

 $ make help             [for the current arch]
$ make ARCH=arm help
$ make ARCH=powerpc help
$ make ARCH=blackfin help

and so on. This is something we'll see again when we cover cross-compiling a kernel for different architectures, but you can at least see how the help information is constructed, and how some of it is clearly architecture-dependent.

Configuring in a Remote Directory

Here's one of the most useful tricks when it comes to configuring and building a kernel--doing all that work in a directory other than the source directory. Sure, if you're the only one working with the source, and you're never doing more than one configuration and build at a time, you can do everything in the source directory itself. But, still working from the top of the source tree, you can instead configure and build a new kernel and send all the generated output to a remote destination directory thus:

 $ make O=destdir menuconfig   [configure]
$ make O=destdir [build]

The advantages of this technique should be obvious:

  • Several users can be sharing the same kernel source tree.
  • Even if there is only one user, that user can be working with multiple configurations, perhaps even multiple architectures, simultaneously.
  • This may be your only choice if the kernel source tree was installed by someone else and you have only read access to it.
  • This approach leaves your source tree clear to, perhaps, continue searching for strings or phrases without having to wade through all of the object files that are generated as part of a build.

However, there are a few things you need to know about this approach:

  • All of your make commands must be run from the top of the source tree, not the destination tree.
  • Once you start the remote configuration/build with the O= variable, all subsequent processing for that configuration must use the same O= value. In short, once you start a remote build, you have to stick with it--there's no changing your mind halfway through.
  • Finally, the source tree being used for a remote build must be clean. That is, you can't use the same source tree for both a local build and a remote build. Why this is necessary is not entirely clear, but that it's enforced is obvious if you check the top-level Makefile where, down around line 953, the prepare3 target does the following checking:
     prepare3: include/config/kernel.release
    ifneq ($(KBUILD_SRC),)                       [doing a remote build?]
    	@$(kecho) '  Using $(srctree) as source for kernel'
    	$(Q)if [ -f $(srctree)/.config -o -d $(srctree)/include/config ]; then \
    		echo "  $(srctree) is not clean, please run 'make mrproper'"; \
    		echo "  in the '$(srctree)' directory."; \
    		/bin/false; \
    	fi;
    endif

    What the above is doing is checking for the existence of either a top-level .config file in the source tree, or the existence of the generated include/config directory. If either exists, the source tree is considered not clean enough to be used as the basis of a remote build, so just run make distclean if you want to rectify the situation.
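    The check is easy to reproduce outside the kernel build, which makes its logic obvious. Here's a standalone sketch run against a throwaway scratch directory (not a real kernel tree), deliberately "dirtied" with a .config file:

    ```shell
    # Simulate the prepare3 cleanliness test from the top-level Makefile.
    # The scratch directory stands in for the kernel source tree.
    srctree=$(mktemp -d)
    touch "${srctree}/.config"   # pretend someone configured the tree in place

    if [ -f "${srctree}/.config" -o -d "${srctree}/include/config" ]; then
        echo "  ${srctree} is not clean, please run 'make mrproper'"
    else
        echo "  ${srctree} is clean enough for a remote (O=) build"
    fi

    rm -rf "${srctree}"
    ```

    Since the scratch tree contains a .config file, the first branch fires, just as it would if you tried an O= build from a locally-configured kernel tree.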

But wait--we're not done yet. The final advantage of this approach is that it makes it easier to see exactly what is generated as part of a configuration process. Recall that, in order to use a kernel source tree to build modules against, you need to both configure the source tree and run (at the very least) make modules_prepare. But what exactly does that last step do? If you do the above and dump the results into a remote directory, then it should be trivially easy to examine what's generated by those steps since you won't have all the source cluttering up the place. Useful, no?

Building and Installing Your New Kernel

Once you're happy with your configuration, doing the actual kernel build is typically fairly simple. Here's the recipe on my current 64-bit Fedora 11 system:

 $ make 
# make modules_install [install the modules]
# make install [install the new kernel]

Note that the last two steps will almost certainly need root privilege. But what did the above do?

The modules_install step will normally install all of your new modules under the /lib/modules directory, in a subdirectory whose name matches the built kernel version. In my case, since I was working with the latest git pull, I'll have a new modules directory /lib/modules/2.6.31-rc3.
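    You can see the same naming convention on any running system, since the modules directory for the current kernel is derived from the release string the same way. A small sketch (the listing is guarded, since minimal installs such as containers may not have /lib/modules at all):

    ```shell
    # Derive the module directory name from the running kernel's release
    # string, just as "make modules_install" derives it from the built tree.
    release=$(uname -r)
    moddir="/lib/modules/${release}"
    echo "modules_install for this kernel would populate: ${moddir}"

    # Guarded listing: the directory may be absent on minimal systems.
    if [ -d "${moddir}" ]; then
        ls "${moddir}" | head -3
    else
        echo "(no ${moddir} on this system)"
    fi
    ```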

The actual kernel install does a little more work, the end result being the following new files being created under the /boot directory:

  • vmlinuz-2.6.31-rc3
  • initrd-2.6.31-rc3.img
  • System.map-2.6.31-rc3

and a new entry stuffed into my GRUB configuration file:

 title Fedora (2.6.31-rc3)
	root (hd0,0)
	kernel /vmlinuz-2.6.31-rc3 ro root=/dev/mapper/f11-root nomodeset rhgb quiet
	initrd /initrd-2.6.31-rc3.img

We won't spend any more time on this process, since most readers should already be familiar with it. All we wanted to do with this column was quickly review the kernel configuration and build process to prepare for subsequent columns where we'll be configuring new kernels with specific features, mainly for debugging. If you're not familiar with building and rebooting under a new kernel, you probably want to try that a few times in the near future.

So, are we done here? Not quite.

Any Other Useful make Targets?

In fact, there are. Certainly, you're welcome to peruse the output of make help to see what looks interesting, where you should notice the documentation-related targets:

  Linux kernel internal documentation in different formats:
    htmldocs        - HTML
    pdfdocs         - PDF
    psdocs          - Postscript
    xmldocs         - XML DocBook
    mandocs         - man pages
    installmandocs  - install man pages generated by mandocs
    cleandocs       - clean all generated DocBook files

What the above targets do is build documentation based on the "in-kernel" documentation you can find embedded in various source and header files throughout the tree. If you want to see the various documents that can be generated by any of those targets, check out the directory Documentation/DocBook.

And That "headers_install" Target? What's Up With That?

There's one more make target that's worth discussing and, while it technically has nothing to do with building and rebooting under a new kernel, I couldn't figure out where else to cover it so I'm tossing it in here. Consider it bonus kernel goodness at no extra charge.

Until now, we've been very careful to distinguish between user space programming and kernel space programming, particularly in the area of available header files. User space programmers are used to including header files found under the system-level directory /usr/include, home of standard headers such as stdio.h, string.h and so on. Kernel programmers, on the other hand, know that they should be pulling in header files from the subdirectories under the kernel tree's include directory, such as linux/kernel.h, or from any of the other subdirectories there, including video, scsi, etc. In short, user space programmers know where to get their header files, kernel programmers know where to get theirs, and never the two shall mix.

Well, not quite.

It turns out that quite a number of kernel header files are useful for user space programming, since they contain content that is relevant in both places. It's the job of the headers_install target to identify those header files, "sanitize" them of their kernel-only content, then collect them all in one place, ready to be bundled into a package that is installed (typically also under /usr/include), where they're available for any user space programming. And, typically, these header files will come as part of a single package, as they do on this Fedora system:

 $ rpm -q kernel-headers
kernel-headers-2.6.29.5-191.fc11.x86_64
$

So where did that package of exported kernel headers come from, and who decided what would be part of it? Let's pick a simplified example and follow the construction.

Consider some of the directories under the kernel tree include/ directory that might contain header files available for exporting:

 $ ls -1F include
acpi/
asm-arm/
asm-generic/
asm-x86/
config/
crypto/
drm/
Kbuild
keys/
linux/
math-emu/
media/
mtd/
net/
pcmcia/
rdma/
rxrpc/
scsi/
sound/
trace/
video/
xen/
$

So which of those directories contain header files, some of which might be part of that export? Consider the contents of the video subdirectory, which contains a couple dozen header files. Whatever is exported from that directory would typically end up in the corresponding user space directory /usr/include/video, whose contents are simply:

 $ ls /usr/include/video
edid.h sisfb.h uvesafb.h
$

In other words, only a small subset of all of those video-related kernel header files ends up being exported to user space since they've been identified as the only ones that have any relevance to user space programming. And who gets to decide that list? Why, the Kbuild file in that very directory:

 $ cat include/video/Kbuild
unifdef-y += sisfb.h uvesafb.h
unifdef-y += edid.h
$
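These Kbuild files are simple enough that you can pull the export list out of them yourself. Here's a sketch that runs against a made-up copy of that file (written to a temporary path rather than read from a real kernel tree):

 ```shell
 # A hypothetical Kbuild fragment, modeled on include/video/Kbuild above.
 cat > /tmp/Kbuild.demo <<'EOF'
 unifdef-y += sisfb.h uvesafb.h
 unifdef-y += edid.h
 EOF

 # Print every header named on a header-y or unifdef-y line; this is the
 # set of files headers_install would export from that directory.
 awk '/^(header|unifdef)-y/ { for (i = 3; i <= NF; i++) print $i }' /tmp/Kbuild.demo
 # -> sisfb.h, uvesafb.h, edid.h, one per line
 ```

Note that a single directive line can name several headers, which is why the loop walks every field after the "+=".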

Each kernel include subdirectory that has any content to be exported needs to define a similar Kbuild file listing which headers to export. If there's nothing to export, that directory won't have such a file. And one level higher up, the include directory itself will have a list of which directories are even worth processing:

 $ cat include/Kbuild
# Top-level Makefile calls into asm-$(ARCH)
# List only non-arch directories below

header-y += asm-generic/
header-y += linux/
header-y += sound/
header-y += mtd/
header-y += rdma/
header-y += video/
header-y += drm/
header-y += xen/
header-y += scsi/

In short, the file include/Kbuild defines which directories are even of interest and, inside those directories, further Kbuild files identify only those header files to be exported. So you run the commands:

 $ make defconfig
$ make headers_install

and the appropriate header files are processed and collected... where? Here:

 $ find usr/include     [note: no leading "/", this is in the kernel directory]
usr/include
usr/include/asm
usr/include/asm/ioctls.h
usr/include/asm/mce.h
usr/include/asm/byteorder.h
...
usr/include/video
usr/include/video/..install.cmd
usr/include/video/edid.h
usr/include/video/sisfb.h
usr/include/video/uvesafb.h
usr/include/video/.install
...

Look familiar? Those are exactly the set of kernel header files that were marked for export, while the rest of them were quietly left behind. But wait--there's more. There always is.

In addition to simply being collected for export, these header files are also "sanitized" of any content that is relevant only in kernel space. Such content is normally protected by conditional inclusion using the #ifdef __KERNEL__ preprocessor macro, which is used to identify content that has no value whatsoever in user space and which is stripped from the exported files. As an example, consider an original kernel header file:

 $ cat include/video/edid.h
#ifndef __linux_video_edid_h__
#define __linux_video_edid_h__

#if !defined(__KERNEL__) || defined(CONFIG_X86)

struct edid_info {
unsigned char dummy[128];
};

#ifdef __KERNEL__ <-- kernel-only content!
extern struct edid_info edid_info;
#endif /* __KERNEL__ */

#endif

#endif /* __linux_video_edid_h__ */

Then consider its exported content:

 $ cat /usr/include/video/edid.h
#ifndef __linux_video_edid_h__
#define __linux_video_edid_h__

#if !defined(__KERNEL__) || defined(CONFIG_X86)

struct edid_info {
unsigned char dummy[128];
};

#endif

#endif /* __linux_video_edid_h__ */

As you can see, this snippet was tossed:

 #ifdef __KERNEL__
extern struct edid_info edid_info;
#endif /* __KERNEL__ */

since it was clearly marked as kernel-only and therefore had no value in user space.
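The stripping itself is done by the unifdef utility, but its effect is easy to mimic. Here's a crude stand-in that assumes a single, un-nested #ifdef __KERNEL__ block (the real unifdef handles nesting, #else and #elif; this sketch does not), run against a toy header written to a temporary file:

 ```shell
 # A toy header (hypothetical file) with one kernel-only section.
 cat > /tmp/edid_demo.h <<'EOF'
 struct edid_info {
 	unsigned char dummy[128];
 };

 #ifdef __KERNEL__
 extern struct edid_info edid_info;
 #endif /* __KERNEL__ */
 EOF

 # Drop everything from "#ifdef __KERNEL__" through the next "#endif",
 # passing all other lines through untouched.
 awk '/^#ifdef __KERNEL__/ { skip = 1; next }
      skip && /^#endif/    { skip = 0; next }
      !skip' /tmp/edid_demo.h
 ```

The output keeps the struct definition but loses the extern declaration, which is exactly the transformation shown in the edid.h example above.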

In summary, then, after you configure your kernel tree, the job of make headers_install is to read the appropriate Kbuild files, identify the header files to export, collect them, strip (sanitize) them of any content which is clearly relevant only to the kernel, then place the results under the kernel directory usr/include, where all that content is now available to be bundled as an appropriate "kernel headers" package that will be available to user space programmers.

Exported Headers Afterthought

After review, I realize I neglected to mention one aspect of the Kbuild files that you find in various kernel header file directories that dictate which headers are to be exported to user space.

In those Kbuild files, you'll see directives in one of two forms:

  • unifdef-y += ...
  • header-y += ...

That first form is used to identify the header files that have some kernel content (defined by the __KERNEL__ macro), and which have to be "cleaned" of that content before exporting. This cleaning is done via the unifdef utility; hence, the variable name of unifdef-y.

The second form, header-y, is used to identify those header files that are known to have no kernel-only content and can be exported exactly as is. In fact, it's perfectly normal to have a single Kbuild file that has a mixture of those settings, such as:

 $ cat include/sound/Kbuild
header-y += asound_fm.h
header-y += hdsp.h
header-y += hdspm.h
header-y += sfnt_info.h
header-y += sscape_ioctl.h

unifdef-y += asequencer.h
unifdef-y += asound.h
unifdef-y += emu10k1.h
unifdef-y += sb16_csp.h
$

However, the original rationale for distinguishing between the two--that you might save precious milliseconds not having to "unifdef" header files that had no kernel-only content--really doesn't make much sense anymore, so there's a proposal to simplify this construct to just list the files to be exported and unifdef all of them, no matter what.

Given the speed of modern processors, this would seem to make sense.

P.S. There is, in fact, a third form of a Kbuild entry, and that is:

header-y += dirname/

which simply directs the build to recurse into that subdirectory and continue processing header files there. As an example, check out include/linux/Kbuild.

Robert P. J. Day is a Linux consultant and long-time corporate trainer who lives in Waterloo, Ontario. He provides assistance to the Linux Foundation's Training Program, and can be followed at http://twitter.com/rpjday.
