NFV automation replaces manual network configuration with software-driven processes; NFV orchestration provides the blueprint for deploying and automating those virtual network functions.
NFV automation and NFV orchestration have overlapping and interrelated capabilities, which are essential to the deployment of virtual network services. Both automation and orchestration are part of the critical management and orchestration (MANO) layer. The lack of MANO standards has hindered network functions virtualization deployments by many leading service providers.
A software environment’s attack surface is defined as the sum of points at which an unauthorized user or malicious adversary can enter a system or extract data from it. The smaller the attack surface, the better. We recently sat down with Doug Goldstein (https://github.com/cardoe or @doug_goldstein) to discuss how companies can use hypervisors to reduce attack surfaces and why the Xen Project hypervisor is a perfect choice for security-first environments. Doug is a principal software engineer at Star Lab, a company focused on providing software protection and integrity solutions for embedded systems.
Linux.com: Tell us a little bit about what your company does and your areas of expertise.
Doug Goldstein: Star Lab is a software security provider dedicated to researching, developing, testing, and delivering embedded security solutions for both commercial and government customers. We address at-rest, boot, and runtime system protections even in the face of successful privilege escalation attacks. Star Lab maintains a Xen-based and security-focused virtualization product called Crucible that targets embedded markets.
As for me, I’ve been a Gentoo Linux developer for about 15 years and have been interested in virtualization for some time. I started my foray into virtualization with KVM for test environments and have contributed back to that community. I’m a contributor to the Xen Project and have worked to make the hypervisor modular at compile time through Kconfig, a build-configuration system borrowed from the Linux kernel. Kconfig allows developers to create a more lightweight hypervisor, which reduces the attack surface and is beneficial in security-first environments, microservice architectures, IoT, and industries with heavy compliance and certification requirements (such as the automotive sector). Kconfig support was released as part of Xen Project Hypervisor 4.7.
Recently, my coworkers and I have been looking more closely into IoT security. With an ever-increasing number of connected devices being brought to market, there are more vectors than ever for attackers to get into remote systems and access sensitive data—even with protective measures such as firewalls in place. For example, someone could attack your laptop through your smart TV. Given the current security landscape, the same types of software protections and system integrity solutions we’ve been creating for the government are now recognized as necessary for consumers and B2B.
Linux.com: When it comes to securing devices, what is the approach that you use?
Goldstein: Using a multi-layered protection and detection approach is the only way to ensure systems stay secure. Many organizations approach security with a network-based intrusion detection system or a firewall and believe that is enough. We believe that security must take a more holistic, proactive approach. If you only have an intrusion detection system, you can only see attacks at a very high level. For example, you might be able to see someone attempting to exploit a service, but what if they somehow got valid login credentials? To begin implementing truly secure architectures, you first need to lock down and control capabilities in the edge systems. This is where the hypervisor comes into play.
A hypervisor is an important piece in the multi-layered security approach. By separating system components into different VMs, an attacker would have to compromise each VM to modify or access sensitive data, versus compromising one service and having access to all the data. It is also possible to run redundant VMs using technologies like COLO (COarse-grained LOck-stepping). This allows two VMs to process the same input and ensure they agree on a valid response, thereby preventing an attacker from permanently modifying the data.
Also, hypervisors can provide multiple levels of privilege so that a service with sensitive data could run as a VM alongside a service with less sensitive information, and they would never be able to see or modify each other’s data. This is one way we are able to reduce costs in government projects that have data at different classification levels. We allow them to use a single system with one or more VMs, instead of two physical systems.
Finally, by using a hypervisor, you can prevent certain classes of persistent attacks because some of these attacks rely on direct access to hardware, flash storage, or BIOS memory. By running services inside constrained VMs, an attacker has no direct access to the system hardware without first defeating the hypervisor.
Linux.com: Why did you choose the Xen Project hypervisor, and how does it differ in terms of security compared to something like KVM?
Goldstein: One of the main reasons why we chose to focus on the Xen Project (and likely why many other security-first projects choose the Xen Project over KVM) is that its architecture allows for strong isolation and privilege separation. This means that the Xen Project hypervisor is able to live separately from the Linux kernel, and the Linux kernel itself can be separated into less privileged pieces. For example, an attack on the Linux kernel will not impact Xen the way it would impact KVM.
Another example of this is network card drivers that can be separated into their own driver domains, such as what OpenXT and Qubes do with Xen. The whole idea is, even if an attacker can exploit the driver, the attacker does not gain full access to the hypervisor. This isn’t the case with KVM, where a single kernel or driver compromise can undermine the entire system security posture.
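To sketch what a driver domain looks like in practice, an xl domain configuration can pass a NIC through to a dedicated, unprivileged VM so that only that domain touches the network hardware. The kernel path and PCI address below are placeholders, not values from any real deployment:

```
# Illustrative xl config for a network driver domain.
# Paths and the PCI address are placeholders for your own hardware.
name   = "net-driverdomain"
kernel = "/boot/vmlinuz-driverdomain"   # hypothetical driver-domain kernel
memory = 256
vcpus  = 1
# Pass the NIC through so only this domain can touch it; if an attacker
# exploits the NIC driver, they are contained inside this VM.
pci    = [ '0000:03:00.0' ]
```

Guest domains then attach their virtual network interfaces to this domain instead of to dom0, keeping the driver attack surface out of the privileged control domain.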
Linux.com: The Xen Project serves a lot of different use cases. What can you do to make it better serve a security use case?
Goldstein: As mentioned above, Kconfig gives you the ability to make the Xen Project hypervisor more lightweight, thereby reducing the attack surface. For example, with Kconfig you can take out some of the migration features that might be essential in cloud environments, but don’t make sense for security-first environments.
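As a rough sketch of what such trimming looks like, a security-first Xen build configuration might disable features an embedded deployment never uses. The symbol names below are drawn from Xen’s Kconfig tree but vary by version, so treat this as illustrative:

```
# Illustrative fragment of a trimmed xen/.config -- check the symbols
# available in your Xen version before relying on any of these.
CONFIG_XSM=y          # keep Xen Security Modules (FLASK) available
CONFIG_KEXEC=n        # no kexec'ing into a replacement image
CONFIG_LIVEPATCH=n    # no runtime hypervisor patching machinery
CONFIG_CRASH_DEBUG=n  # drop crash-debugging hooks from the build
```

Each feature compiled out is code an attacker can no longer reach, which is the point of making the hypervisor modular at build time.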
Other examples include VM introspection and fine-grained access controls. VM introspection allows you to check the state of your VM in real time without having to run any code inside the VM. This allows you to ensure that no one has modified a critical piece of software running inside the VM. The fine-grained access controls that XSM FLASK provides can allow or restrict communications between VMs, or between a VM and the hypervisor. This provides a way to reduce the attack surface of the VMs and the hypervisor.
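For illustration, XSM FLASK policy rules look roughly like the following. The type names come from Xen’s example policy, and the exact permission set shown here is an assumption; consult the policy shipped with your Xen version:

```
# Illustrative XSM FLASK rules (default-deny: anything not listed is refused).
# Let the control domain create and inspect unprivileged domains:
allow dom0_t domU_t:domain { create getdomaininfo shutdown };
# Note there is deliberately no rule letting one domU touch another --
# inter-VM communication must be granted explicitly.
```

Because FLASK is default-deny, writing the policy amounts to enumerating exactly which hypercalls and inter-domain channels each VM is permitted, which is what shrinks the attack surface.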
Linux.com: How about containers? How would containers fit into this?
Goldstein: Some of the people that we work with wanted to put containers into their infrastructure because they believed it was more secure. On one hand, it is a little better because the software is carved up into separate containers, making it easier to ship updates without affecting the other software running on the system. In this sense, containers can be good from a security standpoint.
In many cases, however, people don’t actually deliver these updates and fixes. People build up containers, but don’t update the software within the containers. For example, there could be multiple copies of OpenSSL in different containers that are all out of date. In this scenario, even though a system administrator updated packages in a server’s OS, the machine would still have vulnerabilities. Containers without good software management practices can actually reduce the system’s overall security posture.
It might be interesting to note at this point that containers do not address kernel-level attacks. Containers limit the attack surface of an individual service, preventing one service from affecting other services on the machine. However, containers are less secure than virtual machines since the kernel has a much larger attack surface than a hypervisor and is more vulnerable to privilege escalation attacks—possibly leading to an escape from the container. The smaller attack surface of the hypervisor decreases the probability that a privilege escalation attack will allow an attacker to break out of a virtual machine and affect other VMs on the system.
Linux.com: Which areas do you think could be improved from a security standpoint currently?
Goldstein: One area I would love to see more interest in, from a Xen Project perspective, is XSM FLASK. It is not the default access control mechanism in the Xen Project hypervisor at present. Many people within the community recognize the benefits of switching to XSM FLASK, but there is a lot of inertia around the existing model. There are issues in the current model that can be elegantly solved by creating the right policies using XSM FLASK. Focusing on using XSM FLASK and enhancing Xen’s default policy, rather than adding more knobs to the existing access control model, could potentially benefit the entire Xen Project community.
Additionally, a lot of maintainers in the security group have overlapping use cases that specifically focus on cloud hosting environments. It would be great to get more members from the security community to engage the Xen Project and even potentially become maintainers focused on use cases other than cloud-hosted environments. For example, virtualization is a great solution for addressing functional safety, multi-level access, system integrity, and cybersecurity problems in the embedded systems space. This broadening of the community would allow a larger set of use cases to be covered by the Xen Project, in a single open source project, rather than siloed within several separate projects.
If you want to learn more about this topic, check out Doug’s presentation from the Xen Project Developer Summit.
In this talk from Embedded Linux Conference, Marc Kleine-Budde of Pengutronix describes the architecture and strategies of a recently developed verified boot scheme for a single-core, Cortex-A9 NXP i.MX6 running on the RIoTboard SBC.
The IT world is turning to containers, but to control them you need container management programs. That’s where Kubernetes, Mesosphere, and Docker Swarm step in.
Containers, a lightweight way to virtualize applications, are an important element of any DevOps plan. But how are you going to manage all of those containers? Container orchestration programs—Kubernetes, Mesosphere Marathon, and Docker Swarm—make it possible to manage containers without tearing your hair out.
Before jumping into those, let’s review the basics. Containers, according to 451 Research, are the fastest growing cloud-enabling technology. The reason for their appeal is that they use far fewer system resources than do virtual machines (VMs). After all, a VM runs not merely an operating system, but also a virtual copy of all the hardware that the OS needs to run. In contrast, containers demand just enough operating system and system resources for an application instance to run.
… To be fair, all enterprise solutions will, by default, create a lock-in, as once they are deployed they are very difficult to replace. The benefit of open source is not that it removes lock-in altogether; instead of being wed to a single vendor and a single technology, one is connected to a platform/API. While the open source solution may be driven by a single organization, there is a wider range of technology choices underneath.
Two large open source organizations, the Open Networking Foundation and ON.LAB, are merging, signaling that they understand the business need for a more fully baked open source solution instead of just projects. The idea that “If we build it, they will come” is stepping aside, being replaced by the realization that “We can build it, but if we don’t make it easy and interoperable, they won’t come.”
Books are very personal and subjective possessions. And programming books are no exception. But regardless of their style, focus, or pace, good C++ programming books take the reader on a compelling journey, opening eyes to the capabilities of the language, and showing how it can be used to build just about anything.
I have carefully selected C++ books which all share the virtue of being compelling to read. I recommend 9 books which are released under public copyright licenses. Before doing so, I’ll give a brief introduction to C++.
Ubuntu 16.04 LTS was released by Canonical back in April last year. Among some of the key new features it brought, one was a new packaging format dubbed Snap. To refresh, here’s an excerpt from our Ubuntu 16.04 overview tutorial that explains the what and why of Snap:
So, why Snap? Well, this new packaging system is aimed at making package installation and maintenance easier. For example, unlike the existing system, wherein it’s on you to resolve all version-related conflicts of dependencies for a software being installed, Snaps allow developers to put in everything on which their software depends in the package itself, effectively making them self-contained and independent of the system on which they are being installed.
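To make that concrete, here is a minimal snapcraft.yaml sketch. The application name, part name, and staged package are placeholders; the key point is the stage-packages field, which is how a developer bundles a dependency into the snap instead of relying on the host system’s copy:

```
# Hypothetical snapcraft.yaml -- names are placeholders for illustration.
name: hello-server
version: '1.0'
summary: Example service packaged as a snap
description: Ships its own copy of every library it depends on.
parts:
  hello-server:
    plugin: autotools
    source: .
    stage-packages:
      - libssl1.0.0   # bundled into the snap, independent of the host's OpenSSL
```

Because the library travels inside the snap, updating the host OS cannot break the application, and the snap can be installed on any distribution that supports snapd.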
After Microsoft fell in love with Linux (what has popularly come to be known as “Microsoft Loves Linux”), PowerShell, which was originally a Windows-only component, was open-sourced and made cross-platform on 18 August 2016, becoming available on Linux and macOS.
PowerShell is a task automation and configuration management system developed by Microsoft. It is made up of a command-language interpreter (shell) and a scripting language built on the .NET Framework. It offers complete access to COM (Component Object Model) and WMI (Windows Management Instrumentation), thereby allowing system administrators to carry out administrative tasks on both local and remote Windows systems.
If you’re interested in running a complex Kubernetes system across several different cloud environments, you should check out what Bob Wise and his team at Samsung SDS call “Control Plane Engineering.”
Wise, during his keynote at CloudNativeCon last year, explained the concept of building a system that sits on top of the server nodes to ensure better uptime and performance across multiple clouds, creates a deployment that’s easily scaled by the ClusterOps team, and covers long-running cluster requirements.
“[If you believe] the notion of Kubernetes as a great way to run the same systems on multiple clouds, multiple public clouds, and multiple kinds of private clouds is really important, and if you care about that, you care about control plane engineering,” Wise said.
By focusing on that layer, and sharing configuration and performance information with the Kubernetes community, Wise said larger Kubernetes deployments can become easier and more manageable.
“One of the things we’re trying to foster, trying to build some tooling and make some contribution around, is a way for members of the community to grab their cluster configuration, what they have including things like settings of the cluster, be able to grab that, dump that, and capture that and export it for sharing, and also to take performance information from that cluster and do the same,” Wise said. “The goal here is, across a wide range of circumstances, to be able to start to compare notes across the community.”
For the work Wise and his team have done, the Control Plane involves four separate parts that sit atop the nodes to make sure things work optimally despite occasional machine failure and broken nodes.
The Control Plane includes:
An API Server on the front end through which all the components interact,
A Scheduler to assign pods to nodes,
etcd, a distributed key-value store where cluster state is maintained, and
A Controller Manager, which is the home for embedded control loops like replica sets, deployments, jobs, etc.
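The control loops the Controller Manager runs can be sketched in miniature: a reconcile function repeatedly compares the desired state declared in a spec with the state actually observed, and creates or deletes resources until they converge. This is an illustrative model in plain Python, not the real Kubernetes client API:

```python
# Toy model of a Kubernetes-style reconcile loop for a replica set.
# All names here are illustrative, not real Kubernetes APIs.
from dataclasses import dataclass, field

@dataclass
class ReplicaSet:
    name: str
    desired: int                               # replicas requested in the spec
    pods: list = field(default_factory=list)   # pods currently observed

def reconcile(rs: ReplicaSet) -> ReplicaSet:
    """One pass of the control loop: act until observed matches desired."""
    while len(rs.pods) < rs.desired:           # too few pods: create
        rs.pods.append(f"{rs.name}-pod-{len(rs.pods)}")
    while len(rs.pods) > rs.desired:           # too many pods: delete
        rs.pods.pop()
    return rs

rs = ReplicaSet(name="web", desired=3)
reconcile(rs)
print(len(rs.pods))   # 3
rs.desired = 1        # spec changed: scale down
reconcile(rs)
print(len(rs.pods))   # 1
```

The real controllers follow the same shape, except that "observe" means watching the API server and "act" means issuing API calls, which is why every component in the list above interacts through the API server.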
The best way to run the system so that it has some level of allocation automation is through Kubernetes self-hosting, Wise said. But that requires some “tricky bootstrapping” to build. In the end, however, it’s worth it if you’re running a large cluster.
“The idea here is it’s a system entirely running as Kubernetes objects,” he said. “You have this common operation set. It’s going to make scaling … and HA easier.”
One piece that is perhaps better not to try to build on your own is a load balancer for the API Server, which can get bogged down because it’s a bottleneck into the system. Wise said using a cloud provider’s load balancer is the easiest, and in the end, probably best solution.
“This load balancer, this is a very key part to the overall system performance and availability,” Wise said. “The public cloud providers have put enormous investment into really great solutions here. Use them and be happy.
“It’s worth the configuration drift that happens between multiple deployments,” Wise continued. “I’d also say again, if you have on premises and you’re trying to do deployments and you already have these load balancers then they work well, they’re pretty simple to configure usually. The configurations that Kubernetes requires for support are not especially complicated. If you have them, use them, be happy but I wouldn’t recommend going and buying those appliances new.”
Watch the complete presentation for the full details.