“When the cost of obtaining the information exceeds the value realized by its possession, the solution is an effective one.” - A Practical Guide to Red Hat Linux by Mark G. Sobell, Third Edition (Prentice Hall), page 989.
After forty years in the commercial computing business, the one idea that has been drilled into me by security professionals is that there is no such thing as a secure computer system, only levels of insecurity. Therefore, the cost of keeping the information and system secure has to be balanced against the cost of losing that information or system, or having it damaged. Unfortunately, the speed and availability of the Internet, combined with the low cost of very powerful computers and network services, have made the cost of “cracking” go down and the cost of “securing” go up.
The most important thing in a secure system is to have a good security policy. Without one, you are lost and will wander ineffectively. Therefore you have to give thought to who will be able to do what, whether those limitations are discretionary or mandatory, and how you will implement and enforce those policies. A good example of a bad policy is the company that forces all of its employees to have long, complicated passwords that change once a week, but tolerates people writing their passwords on sticky notes and pasting them on their LCD panels “because they cannot remember the passwords.”
The next most important things are a good set of security tools and people trained to deploy them and monitor their output.
Many desktop systems hide behind a “firewall” these days in the corporate or even home environment. The firewall is a specialized system that accepts data from the Internet and routes it to the desktops or servers. The hope is that the firewall will isolate the bad people from the people inside the firewall, and therefore the systems inside can be more “relaxed” in their security. Unfortunately, in the days of mobile computing, laptops can move from inside the protected firewall perimeter to the unprotected “wild west” of Starbucks (as an example), where people sipping lattes and “surfing the net” have their notebooks infected with viruses and Trojans that they then bring back to the office. And these days, the attacks sometimes come from within the organization (where the firewall gives no protection), not from the outside.
Other systems cannot hide behind firewalls; these are called “bastion” systems. They are the systems that run your web servers, email servers, and other “service” machines, and they are the systems that have to be hardened as much as possible.
Finally, constant monitoring of security mailing lists and sites, and rapid application of patches, is key to system security. Having the source code for your system available means that you do not have to wait for your distribution vendor to supply you with the engineered, compiled and tested patch. You can make the decision of applying a fix yourself, depending on the criticality of the attack.
Given the philosophies and issues above, I believe that the Free and Open Source Software space is the best base to allow your insecure systems to reach towards security, and that is what this blog is about.
What this blog is NOT about is being an in-depth explanation of network security, nor how to block SPAM, nor to be the key-stroke-by-key-stroke cookbook of system security. Security is an art form as well as a science, and this blog entry cannot make you a Michelangelo in 3000 words. If I can show you here that your system is currently at the “finger painting” level and that with Free Software you can aspire to do “water colors,” “oil paints” and beyond, then I figure I have done a good job.
Some day your work may be in the "museum of security fine art."
History and the Architecture of Unix
In 1969 Ken Thompson and Dennis Ritchie started developing the Unix operating system, “just for fun.” Whether or not Unix was intended to be a timesharing system in the first place, it quickly became a system that allowed for multiple people to share the system, with multiple processes for each person. This immediately set the design to be more robust and secure than a single-user system since the concept of stability and security had to be built in.
Granted, in the early years at Bell Laboratories not a lot of attention was paid to password strength or security on a personal level, but over the years things like password aging, password strength enforcement, and shadowed passwords were put into the system to keep improving the security.
Unix was criticized for its early model of “superuser” versus “everyone else” in the way of running programs (particularly administrative programs), and the groupings of ownership for “owner,” “group” and “other” (i.e. everyone else in the world) having “read,” “write” and/or “execute” capabilities on the file. While this relatively simplistic, yet elegant permission structure worked well for a number of years, over time access control lists (ACLs) were enabled, allowing people to create classes of execution privileges and access to files and directories on a finer-grained level.
As Unix escaped from Bell Labs and entered into the academic environment at universities, it came under the classic “trial by fire” as students tried to break into the system and developers tried to keep them out. Unix became the de facto operating system for serious computer science study, and therefore (in a lot of cases) for serious studies of computer security.
The Architecture of Linux
As I mentioned before, the architecture of Linux closely follows that of the Unix systems: a relatively small monolithic kernel, with libraries and utilities that add functionality to it.
This alone adds security value, since it allows the end user to turn off a lot of services (both hosted and network services) that they do not need, and if left to run on the system would create more avenues and possibilities for attacks.
For example, the average desktop system acts as a client for services, not as a server. Turning off these services means that other people across the network cannot attach to them. In the early days of Linux, many distributions shipped with these services turned on when you installed and booted them the first time. This was under the mistaken impression that having the services running would make them easier to administer, but security people quickly pointed out that having the services running at installation time (before needed patches could be applied) also left the systems, however briefly, open to attack. Now most, if not all, distributions install with these services turned off, and you are instructed to turn them on at the proper time, hopefully after you have applied the needed patches.
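Before turning services off, it helps to see what is actually listening on a machine. The sketch below parses /proc/net/tcp directly (a Linux-specific interface); on modern systems `ss -tln` or `netstat -tln` shows the same information more readably:

```shell
#!/bin/sh
# List locally listening TCP ports by reading /proc/net/tcp.
# Socket state 0A means LISTEN; the hex port number follows the colon
# in the local-address field (field 2).
listening_ports=$(awk 'FNR > 1 && $4 == "0A" { split($2, a, ":"); print a[2] }' \
        /proc/net/tcp |
    while read -r hex; do printf 'port %d\n' "0x$hex"; done |
    sort -n -k2 | uniq)

echo "Listening TCP ports:"
echo "$listening_ports"
```

Once you know what is listening, disabling a service is distribution-specific; on systemd-based systems, for example, `systemctl disable --now <service>` turns one off and keeps it off across reboots.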
Another example is the concept of removing compilers and other software development tools from the system, as these tools give system crackers more tools to exploit your system. Removing these tools means the cracker has to use other methods to break in.
Added to this base functionality have been several FOSS packages over the years that have given Linux even greater security.
The first is “PAM,” or “Pluggable Authentication Modules.” In any system, “authentication” means that you have identified yourself in such a way that the system gives you access to services. As you log in with your username and your password, you are being “authenticated,” typically by the login program checking them against the entry in the /etc/passwd file (or the shadowed /etc/shadow file on modern systems). Likewise ftpd and other “service” programs would authenticate you the same way.
If you are networked, however, you may be authenticated by any number of methods, whether it be LDAP, DCE, Kerberos or even newer methods, and any number of programs might have to be changed to reflect the new method of authentication. PAM was provided to allow new methods of authentication to be applied to all the programs in the system that need authentication without having to change and integrate each new authentication method.
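As a sketch of how this looks in practice, each service gets a stack of modules in a file under /etc/pam.d/. The fragment below is illustrative only (real stacks vary considerably by distribution), with pam_unix.so doing the traditional passwd/shadow check:

```
# /etc/pam.d/login -- illustrative fragment only. Each line names a
# management group (auth, account, password, session), a control flag,
# and the module that implements the check.
auth      required    pam_unix.so
auth      optional    pam_faildelay.so delay=3000000
account   required    pam_unix.so
password  required    pam_unix.so sha512 shadow
session   required    pam_unix.so
```

Swapping in LDAP or Kerberos then becomes a matter of editing these files (adding a module such as pam_ldap.so or pam_krb5.so) rather than rebuilding login, ftpd, and every other program that authenticates users.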
Another access-control mechanism previously mentioned was “Access Control Lists,” or “ACLs.” An ACL grants “access” to a file or directory based on an extension of the traditional Unix permissions of “owner/group/other” and “rwx” mentioned above. Since ACLs are implemented as part of the filesystem structure, you have to make sure that your kernel has been built to support them, that the filesystem you are using supports them, and that the filesystem has been mounted with ACLs turned on. However, once that is done you may assign permissions to multiple users on an individual user basis, multiple groups on a group-by-group basis, and so forth.
This would allow you to easily set up a group of operators who could start or stop an individual database engine or do backups, but could not shut the entire system down, as an example.
Finally, you have to be aware that not all of the Linux utilities support ACLs. If you are copying files from one directory to another with the cp command, you should use the -p (preserve) or the -a (archive) option. Some of the stalwart “Unix” commands, such as cpio and tar, do not support copying ACLs, and therefore the ACL information would be lost.
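A sketch of the operator example using the ACL tools (setfacl and getfacl, from the acl package); group 0 stands in here for a hypothetical “operators” group, and the filename is made up:

```shell
#!/bin/sh
# Grant an extra group access to one script without changing its owner
# or primary group. Requires setfacl/getfacl and an ACL-enabled mount.
workdir=$(mktemp -d)
touch "$workdir/db-control.sh"
chmod 750 "$workdir/db-control.sh"

# g:0 (the root group) stands in for an "operators" group here.
if setfacl -m g:0:rx "$workdir/db-control.sh" 2>/dev/null; then
    getfacl "$workdir/db-control.sh"      # shows the extra group entry
    # cp needs -p (or -a) to carry the ACL along with the copy.
    cp -p "$workdir/db-control.sh" "$workdir/db-control.copy.sh"
else
    echo "ACLs not supported here (or setfacl not installed)"
fi
rm -rf "$workdir"
```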
Encrypting your data should be part of your security policy in a world of USB thumb-drives, portable drives and stolen laptops, and Linux allows you to encrypt individual files, filesystems, swap partitions and even filesystems held inside of single files.
Some of these encryption methods also work with user-level filesystems, which means you can configure them while the system is up and running.
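At the single-file level, encryption can be as simple as a symmetric OpenSSL pass. This is a minimal sketch, assuming the openssl command is installed; the passphrase is embedded only for illustration (in practice you would be prompted for it, or use a tool like GnuPG):

```shell
#!/bin/sh
# Encrypt and decrypt a single file with a passphrase using OpenSSL.
command -v openssl >/dev/null 2>&1 || { echo "openssl not installed"; exit 0; }

echo "confidential data" > secret.txt

# -pbkdf2 strengthens the passphrase-to-key derivation (OpenSSL 1.1.1+).
openssl enc -aes-256-cbc -pbkdf2 -salt \
    -in secret.txt -out secret.txt.enc -pass pass:example-passphrase

openssl enc -d -aes-256-cbc -pbkdf2 \
    -in secret.txt.enc -out secret.decrypted.txt -pass pass:example-passphrase

cmp -s secret.txt secret.decrypted.txt && echo "round trip OK"
rm -f secret.txt secret.txt.enc secret.decrypted.txt
```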
Loop-AES uses a loop-back technique to allow the block device to do encryption without having to change anything in the kernel at all. Loop-back techniques are also useful for supporting filesystems being held in a single file, so this method can be used to create an encrypted filesystem that is contained in a single file on your machine.
DM-Crypt uses the device-mapper functionality (also useful for software RAID, snapshotting and other features) of the kernel to encrypt filesystems.
CryptoFS is a filesystem in user space (FUSE) that allows you to mount a filesystem over a directory, and then every file stored in that directory is encrypted, including the file name. When you unmount the filesystem, the files remain encrypted and will not be decrypted until the filesystem is mounted again using the same key.
There are even more methods of encrypting files and filesystems, such as EncFS and TrueCrypt.
As an aside, recently a Microsoft Windows administrator I know booted a Live CD on one of their machines and was astonished that Linux could read and write the Microsoft Windows filesystems, even though he had set the directories as private under the Microsoft operating system. I explained to him that it was a different operating system and unless he encrypts all of the data in his filesystems, he should expect that anyone using a different operating system on his machine would be able to see, change and delete data in his Microsoft Windows filesystems. I did not “make his day” with that news....
In most of the authentication methods the access control is discretionary. The owner of the object (whether program or data) can change the permissions for other people and groups.
Several years ago the National Security Agency (NSA) created a project to enforce Mandatory Access Control (MAC) inside the Linux Kernel. This project became known as “Security Enhanced Linux” or SELinux. MAC enforces the security policies that limit what a user or program can do, and what files, ports, devices and directories a program or user can access.
SELinux has three modes: disabled, permissive, and enforcing. In disabled mode no policy is loaded and nothing is enforced; this is so you can have your policies set up and ready to go, but do not necessarily wish to have them acted upon. Permissive mode logs violations of the policy to log files for you to inspect or otherwise monitor, but does not block them. Enforcing means that any action violating the security policy is denied and logged.
SELinux imposes some performance overhead when in permissive or enforcing mode, often cited at around 5-10 percent.
Likewise SELinux can run in a “targeted” policy or a “strict” policy. Targeted means that the MAC controls only apply to certain processes. Strict means that the MAC controls apply to all processes.
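On Red Hat-style systems, the mode and policy type typically live in /etc/selinux/config; the fragment below is illustrative:

```
# /etc/selinux/config (typical Red Hat-style layout)
# SELINUX= can be enforcing, permissive, or disabled.
SELINUX=permissive
# SELINUXTYPE= selects the policy: "targeted" confines only selected
# daemons, while "strict" (where shipped) confines every process.
SELINUXTYPE=targeted
```

At run time, the getenforce command reports the current mode, and setenforce 1 or setenforce 0 switches between enforcing and permissive without a reboot (switching to or from disabled does require one).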
People should be warned that indiscriminate use of the strict policy of SELinux can render a system almost unusable for some users. There has to be a trade-off of keeping the system secure and allowing the users to do their work.
It is argued that SELinux is “overkill” on a “single-user system” but with modern-day exploits and the power of “single-user systems,” we may find more and more applications of SELinux on a single-user desktop.
AppArmor is another system for Mandatory Access Control, but one that is based more on a program-by-program basis than SELinux and allows you to mix enforcing and permissive policies in the same system at the same time.
Through its “profiles” for each program, AppArmor can limit what a program can do and what files it can access, write or execute.
Some people feel that AppArmor is easier to configure and control than SELinux.
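As a sketch of why people feel that way, an AppArmor profile is a plain-text file under /etc/apparmor.d/ listing what one program may touch. The daemon name and paths below are made up for illustration:

```
# Hypothetical profile for /usr/bin/example-daemon (illustrative only).
/usr/bin/example-daemon {
  #include <abstractions/base>

  /etc/example-daemon.conf    r,    # may read its own configuration
  /var/log/example-daemon.log w,    # may write its log
  /usr/bin/example-daemon     mr,   # may memory-map and execute itself
}
```

The aa-status, aa-complain, and aa-enforce utilities (from the AppArmor tools) report and switch modes on a per-profile basis, which is what allows enforcing and permissive profiles to coexist on one system.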
Making Files “Immutable”
If someone breaks into your system, they may change various control files, such as the passwd file. You can stop this by making the file “immutable.” When a file is “immutable” it cannot be written to, deleted, or renamed, nor can hard links be made to it, even by the “superuser.” The immutable attribute first has to be removed, and then the file can be changed. The command used to make a file immutable is chattr, and it has syntax of this form: chattr +i <filename>
Using the chattr command with an “a” instead of an “i” makes the file append-only. This is useful for log files, where you only want the system to add new information, not change or delete old information.
Once the chattr command has been executed against a file, even root cannot change or delete that file until the attribute has been removed with “-i” or “-a”.
Again, you have to check to make sure the filesystem you are using supports this functionality; the ext2, ext3, and ext4 filesystems do.
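A sketch of the round trip, meant to be run as root on an attribute-capable filesystem (it exits quietly otherwise); the filename is illustrative:

```shell
#!/bin/sh
# Make a file immutable, show that writes fail, then undo it.
# Requires root and an attribute-capable filesystem (e.g. ext2/3/4).
[ "$(id -u)" = "0" ] || { echo "run as root"; exit 0; }

touch /tmp/demo.conf
if chattr +i /tmp/demo.conf 2>/dev/null; then
    lsattr -d /tmp/demo.conf                  # the 'i' flag is now set
    echo test >> /tmp/demo.conf 2>/dev/null \
        && echo "unexpected: write succeeded" \
        || echo "write refused, as intended"
    chattr -i /tmp/demo.conf                  # back to normal
fi
rm -f /tmp/demo.conf
```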
Unix and Linux systems have log files. These are files that log different types of events, everything from process start up and ending to messages explicitly about your email server or your database engines. Most Unix and Linux systems have the ability to route various levels of information from “nice to know” to “critical” into a central repository. There the systems administrator can create filters and scripts to help them monitor these log files for activities that would indicate people breaking into the system.
These log files, of course, should be protected using the chattr command mentioned above with the “+a” (append-only) option.
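As a sketch, the routing is usually configured in /etc/rsyslog.conf (or the older /etc/syslog.conf) with facility.priority selectors; the loghost name below is an assumption for illustration:

```
# /etc/rsyslog.conf fragment -- facility.priority   destination
authpriv.*    /var/log/secure       # authentication-related events
*.crit        /var/log/critical     # anything critical or worse
*.emerg       :omusrmsg:*           # emergencies to all logged-in users
# Also forward a copy to a central loghost (hypothetical name), so a
# cracker who wipes the local logs does not erase the only record:
*.*           @loghost.example.com
```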
Intrusion Detection Systems
There are various Intrusion Detection Systems available for Linux. SNORT (http://www.snort.org/) is one of them. SNORT uses a set of rules to decide what counts as an intrusion and how to escalate it.
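As a sketch of what such a rule looks like (the SID and message text are made up for illustration), this one would raise an alert on any inbound connection attempt to the telnet port:

```
# Illustrative Snort rule -- alert on inbound telnet connection attempts.
alert tcp $EXTERNAL_NET any -> $HOME_NET 23 \
    (msg:"Inbound telnet attempt"; flags:S; sid:1000001; rev:1;)
```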
Despite all your work, time, sweat and tears, eventually your system will be compromised. That is when you have to figure out when it was compromised, how it was compromised and be ready to recover whatever was damaged without allowing other possible viruses and Trojans to remain on your system.
With a lot of work, you may be able to use tools to sweep your system looking for these viruses and Trojans. Or you can re-install from the original CD-ROM or known good ISO image and all of its associated patches.
A final way is to have a really good system-level backup of all of the system work that you have done and periodically update that backup to make sure you have captured all the security patches that have occurred since the last one. If you can determine accurately when you were compromised, you might be able to restore the system from one of those system-level backups. Otherwise you may have to go back to installing from the distributed code.
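Short of a full Tripwire or AIDE deployment, even a simple checksum baseline helps answer the “when and what changed” question. The sketch below records SHA-256 sums of a directory tree and verifies them later; the paths and file contents are illustrative:

```shell
#!/bin/sh
# A minimal integrity baseline: checksum files after a known-good
# install, keep the list somewhere safe (ideally off the machine),
# and compare later. Tools such as Tripwire or AIDE do this far more
# thoroughly; this only illustrates the idea.
workdir=$(mktemp -d)
mkdir "$workdir/etc"
echo "root:x:0:0::/root:/bin/sh" > "$workdir/etc/passwd"

# Take the baseline after a known-good install.
( cd "$workdir" && find etc -type f -exec sha256sum {} + > baseline.sums )

# ... later: verify. A modified or missing file is reported as FAILED.
( cd "$workdir" && sha256sum -c baseline.sums )

rm -rf "$workdir"
```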
I am sure that a lot of security professionals will look at this blog entry and say “really elementary.”
Other people might look at some of these features and say “how can I possibly keep up with knowing all of these policies and commands on a system as complex as Unix or Linux?” The answer is that you probably cannot keep up with all of these considerations on every system, and that is where your security policies come into play. Make each system as secure as it has to be for its particular job and for the information stored on it, while allowing that you still have to get work done.
Besides studying the resources listed below, you should also look at the website for your specific distribution. Because there are many overlapping ways to do file encryption, compile a kernel, and secure a system, your distribution may have developed a general security architecture that would complement your policies and make trying to be more secure a lot easier.
There are many good books on both general computer security and security on Linux in specific. I found these two to be very good:
In addition, there are web sites: