In part one of this four-part series, Xen Project Advisory Board Chairman Lars Kurth takes a look at the theories behind cloud security and how they relate to The Walking Dead -- yes, the TV show. Read on to find out more.
With vulnerabilities like last year’s Heartbleed and, more recently, VENOM, the software that runs the modern Internet and cloud systems has never been more at risk. Many assume that to keep a system as secure as possible, you must eliminate every possible entry point for an attacker. However, this is simply not the case. The key is that IT teams need to assess the probability that an attacker knows of an exploitable vulnerability.
Let’s examine this idea a little more closely through understanding the nature of risk as it relates to virtual and cloud environments. Once we have this framework, we’ll dive into putting this philosophy into practice.
What Does Risk Mean and How Does It Relate to Vulnerabilities and Exploits?
When we say a system is “secure,” it’s easy to fall into the trap of thinking that security is binary: a system that is “insecure” can be broken into, and a system that is “secure” cannot. Flip this around and consider risk instead, and it quickly becomes clear that there’s a whole spectrum of risk and security to consider. A “secure” system is one with a relatively low risk of being broken into; an “insecure” system is one with a relatively high risk. Some amount of risk is tolerable, even if it’s not ideal. So, what exactly is the nature of this risk? Where does it come from?
In cloud computing and virtualization (as in computing in general), input and output to and from the system are the primary routes for malicious payloads. However, workloads are diverse and depend on users doing the right thing (e.g., running security software, regularly updating their OS and applications, etc.). We cannot assume that all cloud users will do these things, so we have to focus on two other techniques to mitigate risk: compartmentalization (or separation of privileges) and the principle of least privilege.
Compartmentalization separates access to resources such as virtual machines, processes, users, and data, and helps contain problems when they do occur. The principle of least privilege grants users only the privileges essential to doing their work. For example, a regular user on a server does not need root access and, in some cases, does not need to be allowed to install software.
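As a small illustration of least privilege in practice, here is a minimal Python sketch (the file and its purpose are hypothetical) that grants a sensitive file read/write access to its owner only, rather than relying on a more permissive default:

```python
import os
import stat
import tempfile

# Create a file standing in for sensitive data (the path is illustrative).
fd, path = tempfile.mkstemp(prefix="secrets-")
os.close(fd)

# Least privilege: the owner may read and write; group and world get nothing.
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)  # i.e., mode 0o600

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # -> 0o600

os.remove(path)
```

The same idea scales up: whether it is file modes, sudo rules, or VM device models, grant only what the job requires.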
Virtual machines and containers are the most basic form of compartmentalization in cloud computing and data centers today. They form “trust domains,” which rely on either a hypervisor or the Linux kernel to enforce separation of the most basic privileges. Now, let’s try to evaluate the risk of someone breaking through the virtualization layer and accessing data or resources in other VMs or containers. Although “breaking through” is probably the best description, it is also a bit misleading: it conjures up an image of a brute force strong enough to overcome software’s virtual tensile strength.
In reality, the source of this type of risk in software is vulnerabilities. A vulnerability is a weakness -- a bug somewhere in the code or in the configuration -- that an attacker can take advantage of within a trust domain. The code or technique that attackers use to take advantage of a vulnerability is called an exploit. If there is a vulnerability, and the attacker knows of it, then the attacker can get into your system. If there is no vulnerability, or the attacker does not know of it, then the attacker cannot get in. So, this virtual break-in requires not strength but, first and foremost, the presence and knowledge of a vulnerability.
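This “presence and knowledge” point can be made concrete with a toy probability model; the numbers below are purely illustrative assumptions, not measurements:

```python
# Toy model: a break-in requires that a vulnerability exists AND that
# the attacker knows of it. Treating the two as independent events:
p_vuln_exists = 0.5      # illustrative: chance a weakness is present
p_attacker_knows = 0.25  # illustrative: chance the attacker knows of it

p_break_in = p_vuln_exists * p_attacker_knows
print(p_break_in)  # -> 0.125, lower than either factor on its own
```

The practical takeaway is that you can reduce risk from both directions: fewer vulnerabilities (better code and configuration) and less attacker knowledge (prompt patching before exploits circulate).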
Evaluating Vulnerabilities to Protect Your Systems
As noted above, a vulnerability in software is a mistake. It could be a mistake in the code itself (i.e., the software is not functioning as the developer intended), or in its configuration (i.e., the software is functioning as the developer intended, but because it is not configured properly, it is not functioning as the administrator intended). Both are important to consider when evaluating the security of a system.
Let’s use CVE-2015-3456, or VENOM, as an example. This vulnerability is interesting because it has both a code angle and a configuration angle. VENOM is a vulnerability in QEMU’s Floppy Disk Controller (FDC); QEMU is used in Xen, KVM, VirtualBox, and derived solutions. VENOM allows a local guest user to cause a denial of service or potentially execute arbitrary code. The Xen toolstack automatically configures QEMU so that the Floppy Disk Controller is disabled, and KVM users could configure their systems manually so that the FDC is not used. Unfortunately, there was an additional bug in QEMU, which meant the FDC was not actually disabled when asked.
In a nutshell, the lesson for administrators is to disable everything that can be disabled and is not used. Of course, the same lesson applies to software: to avoid vulnerabilities like VENOM, the Xen toolstack disables a wide range of QEMU devices that are not used.
What IT Teams Can Learn From The Walking Dead
Security vulnerabilities in today’s complex software environments are a fact of life. IT professionals are constantly on alert for attackers who might identify and exploit a vulnerability. This risk is real and ever-present for companies in any industry across the world. Although the analogy of The Walking Dead is far afield from technology, it’s useful in that it might scare companies enough to take action and increase their defenses.
Imagine you and your motley crew are the last remnants of humanity, as far as you know. You're going from place to place, living in the ruins of the old civilization. You stay in one place until you use up its resources, then you move on.
Here are the rules for the Walkers in this analogy:
They are active day and night and are usually attracted by sound, which draws a few Walkers together; these small groups find and merge with others, forming a herd that grows ever larger and more ferocious.
They're strong enough to break down a door or smash through a window easily...
But, they're too stupid to recognize a door, window, fence, or wall for what it is. A herd, however, will eventually find an unsecured window or build up enough mass to topple a fence or wall.
So, all you have to do is keep quiet and make sure that every door, window, or opening of any kind is properly closed/boarded and that your fencing is tall and structurally sound. If you leave a single crack open, and the Walkers find it, then that's the end of the story.
Although it's not that hard to secure any given door or window, you're only human, and often tired, stressed, or in a hurry. So, despite your best efforts, occasionally a door or window is left open, which, with luck, the Walkers won’t find. Like Walkers, computer attackers are looking for an opening they can break through, and you can't do away with all the openings.
Small, simple doors and windows are easy to secure, whereas bigger ones are much more difficult to protect. The same goes for fences: the smaller the surface area, the better. Given your time constraints, boarding up five small windows is a lot easier and less error-prone than boarding up one big one.
Multiple layers of protection, sometimes called defense-in-depth, are best. If you can secure the building and the fence around it, and close and lock the doors within the house, that is best, because attackers need to find *several* mistakes to break through. If you have time, you can strengthen doors or add new ones. This would be akin to improving the “compartmentalization” of your system architecture.
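A toy probability model (with purely illustrative numbers) shows why defense-in-depth pays off: if an attacker slips past each independent layer with some probability, the chance of getting through every layer shrinks multiplicatively:

```python
# Toy model: each layer is breached independently with probability p;
# an attacker must get through all n layers, so the overall risk is p ** n.
p_layer = 0.5  # illustrative chance of slipping past a single layer

for n_layers in (1, 2, 3):
    print(n_layers, "layer(s):", p_layer ** n_layers)
# 1 layer(s): 0.5
# 2 layer(s): 0.25
# 3 layer(s): 0.125
```

Real layers are rarely fully independent, so the true benefit is smaller than the model suggests, but the direction holds: every additional layer forces the attacker to find another mistake.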
In the next post, we will dive deeper into security vulnerabilities and how they differ with hypervisors compared to containers. Read Open Source Security Process Part 2: Containers vs. Hypervisors - Protecting Your Attack Surface.
Lars Kurth had his first contact with the open source community in 1997 when he worked on various parts of the ARM toolchain. This experience led Lars to become a passionate open source enthusiast who has worked with and for many open source communities over the past 19 years. Lars contributed to projects such as GCC, Eclipse, Symbian, and Xen. He became the open source community manager for Xen.org in 2011 and later chairman of the Xen Project Advisory Board.