For reasons beyond the scope of this entry, today I feel like writing down a broad and simplified overview of how modern Linux systems boot. Due to being a sysadmin who has stubbed his toe here repeatedly, I’m going to especially focus on points of failure.
- The system loads and starts the basic bootloader somehow, through either BIOS MBR booting or UEFI. This can involve many steps on its own and any number of things can go wrong, such as unsigned UEFI bootloaders on a Secure Boot system. Generally these failures are the most total; the system reports there’s nothing to boot, or it repeatedly reboots, or the bootloader aborts with what is generally a cryptic error message.
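Before debugging any of this, it helps to know which of the two paths your system actually took. One reliable check (a minimal sketch): the kernel exposes /sys/firmware/efi only when it was booted through UEFI, so its absence means a legacy BIOS MBR boot.

```shell
#!/bin/sh
# Report whether this system booted via UEFI or legacy BIOS.
# The kernel creates /sys/firmware/efi only on UEFI boots, so
# its absence implies a BIOS (MBR) boot.
if [ -d /sys/firmware/efi ]; then
    echo "UEFI boot"
else
    echo "BIOS (legacy MBR) boot"
fi
```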
On a UEFI system, the bootloader needs to live in the EFI system partition, which is a FAT filesystem (in practice almost always FAT32). Some people have had luck making this a software RAID mirror with the right superblock format; see the comments on this entry.
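As a sketch of that RAID-mirror trick (device names here are hypothetical examples): the key is superblock format 1.0, which puts the RAID metadata at the end of the partition instead of the start, so UEFI firmware that knows nothing about Linux software RAID still sees what looks like an ordinary FAT32 filesystem.

```
# Mirror the ESP across two disks; with metadata 1.0 the RAID
# superblock sits at the end of each partition, so the firmware
# reads a plain FAT32 filesystem from the beginning.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --metadata=1.0 /dev/sda1 /dev/sdb1
mkfs.vfat -F 32 /dev/md0
```

The caveat is that anything that writes to the ESP outside of Linux (the firmware itself, or a firmware updater) touches one underlying partition directly without the RAID layer knowing, which can quietly desynchronize the mirror.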
- The bootloader loads its configuration file and perhaps additional modules from somewhere, usually your /boot but also perhaps your UEFI system partition. Failures here can result in extremely cryptic errors, dropping you into a GRUB shell, or ideally a message saying ‘can’t find your menu file’. The configuration file location is usually hardcoded, which is sometimes unfortunate if your distribution has picked a bad spot.
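If you do land in a GRUB shell, you can often recover interactively by finding the configuration file yourself. A rough sketch (the partition names are examples; yours will differ):

```
# From the grub> prompt: find the partition holding grub.cfg,
# then tell GRUB to load it.
grub> ls                               # list detected disks and partitions
grub> ls (hd0,gpt2)/grub/              # look for grub.cfg here
grub> configfile (hd0,gpt2)/grub/grub.cfg
```

In the more limited ‘grub rescue>’ shell you first need something like ‘set prefix=(hd0,gpt2)/grub’, then ‘insmod normal’ and ‘normal’, before most commands work at all.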