
Running Containers with LinuxKit

Some genuinely exciting news piqued my interest at this year’s DockerCon: LinuxKit, a new operating system (OS) announced, and immediately made available, by the undisputed heavyweight container company, Docker. The container giant has unveiled a flexible, extensible OS in which system services run inside containers for portability. That, you might be surprised to hear, even includes the Docker runtime daemon itself.

In this article, I’ll take a quick look at what LinuxKit promises, show how to try it out for yourself, and also consider ever-shrinking, optimized containers.

Less Is More

There’s no denying that users have been looking for a stripped-down version of Linux on which to run their microservices. With containerization, you’re trying your hardest to minimize each application so that it becomes a standalone process which sits inside a container of its own. However, constantly shifting containers around because you’re patching the host that the containers reside on causes issues. In fact, without an orchestrator like Kubernetes or Docker Swarm that container-shuffling is almost always going to cause downtime.
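As a hypothetical illustration of what an orchestrator buys you here (assuming a Kubernetes cluster and the standard kubectl CLI; the node name is made up), you can evacuate a host before patching it, without downtime:

```shell
# Cordon the node so no new Pods are scheduled onto it, then evict
# its existing Pods so the orchestrator reschedules them elsewhere.
kubectl drain node-01 --ignore-daemonsets

# ...patch and reboot the host while it carries no workloads...

# Mark the node schedulable again once it is back up.
kubectl uncordon node-01
```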

Needless to say, that’s just one of many reasons to keep your OS as minuscule as possible.

A favorite quote I’ve repeated on a number of occasions comes from the talented Dutch programmer Wietse Venema, who brought us the email stalwart Postfix and TCP Wrappers, amongst other renowned software.

The Postfix website states that even a coder as careful as Wietse will, for “every 1000 lines[,] introduce one additional bug into Postfix.” From my professional DevSecOps perspective, given the mention of “bug,” I might be forgiven for loosely extending that definition to cover security issues, too.

From a security perspective, it’s precisely for this reason that less is more in the world of code. Simply put, there are a number of benefits to shipping fewer lines of code, namely security, administration time, and performance. For starters, there are fewer security bugs, less time spent updating packages, and faster boot times.

Look deeper inside

Think about what runs your application from inside a container.

A good starting point is Alpine Linux, a low-fat, boiled-down, reduced OS commonly preferred over more bloated host favorites such as Ubuntu or CentOS. Alpine also provides a miniroot filesystem (for use within containers) which, at last check, comes in at a staggering 1.8MB. Indeed, the ISO download for a fully working Linux operating system comes in at a remarkable 80MB.

If you decide to utilize a Docker base image from Alpine Linux, you can find one on Docker Hub, where Alpine Linux describes itself as “A minimal Docker image based on Alpine Linux with a complete package index and only 5 MB in size!”
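You can see that size claim for yourself, assuming a working Docker installation, by pulling the image and comparing it with a heavier base:

```shell
# Pull the minimal Alpine base image and a bulkier Ubuntu image for comparison.
docker pull alpine
docker pull ubuntu

# List the local images with their sizes; Alpine weighs in at a few MB,
# Ubuntu at many times that.
docker images alpine
docker images ubuntu
```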

It’s been said, and I won’t attempt to verify this meme, that the ubiquitous Windows Start button is around the same file size! I’ll refrain from commenting further.

In all seriousness, I hope that gives you an idea of the power of innovative Unix-type OSs like Alpine Linux.

Lock everything up

What’s more, the Alpine Linux site explains that the distribution is (not surprisingly) based on BusyBox, the famous neatly packaged set of Linux commands which, unbeknownst to many people, sits inside their broadband router, smart television, and of course many of the IoT devices in their homes as they read this.

The About page of the Alpine Linux site states:

“Alpine Linux was designed with security in mind. The kernel is patched with an unofficial port of grsecurity/PaX, and all userland binaries are compiled as Position Independent Executables (PIE) with stack smashing protection. These proactive security features prevent exploitation of entire classes of zero-day and other vulnerabilities.”

In other words, the boiled-down binaries bundled inside Alpine Linux builds, which give the system its functionality, have already been passed through clever, industry-standard security hardening in order to help mitigate buffer-overflow attacks.

Odd socks

Why do the innards of containers matter when we’re dealing with Docker’s new OS you may ask?

Well, as you might have guessed, when it comes to containers, their construction is all about losing bloat. It’s about not including anything unless it’s absolutely necessary. It’s about having confidence so that you can reap the rewards of decluttering your cupboards, garden shed, garage, and sock drawer with total impunity.

Docker certainly deserve some credit for their foresight. Reportedly, in early 2016, Docker hired a key driving force behind Alpine Linux, Natanael Copa, who helped switch the default official image library away from Ubuntu to Alpine. The bandwidth that Docker Hub saved thanks to the newly streamlined image downloads alone must have been welcome.

And, bringing us up to date, that work will stand arm-in-arm with the latest container-based OS work: Docker’s LinuxKit.

For clarity, LinuxKit is not destined to replace Alpine but rather to sit underneath the containers and act as a stripped-down OS on which you can happily spin up your runtime daemon (in this case, the Docker daemon, which spawns your containers).

Blondie’s Atomic

A finely tuned host is by no means a new thing (I mentioned the household devices embedded with Linux previously), and the evil geniuses who have been optimizing Linux for the last couple of decades realized some time ago that the underlying OS was key to churning out a server estate full of hosts brimming with containers.

For example, the mighty Red Hat have long been touting Red Hat Atomic, having contributed to Project Atomic. The latter explains:

“Based on proven technology either from Red Hat Enterprise Linux or the CentOS and Fedora projects, Atomic Host is a lightweight, immutable platform, designed with the sole purpose of running containerized applications.”

There’s good reason that the underlying, immutable Atomic OS is put forward as the recommended choice for Red Hat’s OpenShift PaaS (Platform as a Service) product. It’s minimal, performant, and sophisticated.

Features

The mantra that less is more was evident throughout Docker’s announcement of LinuxKit. The project to realize the vision of LinuxKit was apparently no small undertaking. With the guiding hand of Justin Cormack, a Docker veteran and master of unikernels, and in partnership with HPE, Intel, ARM, IBM, and Microsoft, LinuxKit can run on everything from mainframes to IoT-based fridge freezers.

The configurable, pluggable, and extensible nature of LinuxKit will appeal to many projects looking for a baseline upon which to build their services. By open-sourcing the project, Docker are wisely inviting every man and his dog to contribute to its functionality, which will undoubtedly mature over time like a good cheese.

Proof of the pudding

Having promised to point eager readers at this new OS, let us wait no longer. If you want to get your hands on LinuxKit, you can do so from the GitHub page here: LinuxKit

On the GitHub page, there are instructions on how to get up and running along with some features.
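As a sketch of the workflow described in the project README (command names and the sample YAML file may have changed since this was written, so treat this as indicative rather than definitive), building and booting an image looks roughly like this:

```shell
# Fetch and build the LinuxKit tooling from source (requires Go and make).
git clone https://github.com/linuxkit/linuxkit.git
cd linuxkit
make

# Build a bootable image from a YAML spec that declares the kernel,
# init, and the system-service containers to bake in.
./bin/linuxkit build linuxkit.yml

# Boot the resulting image locally under a hypervisor for a test drive.
./bin/linuxkit run linuxkit
```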

Time permitting, I plan to get my hands much dirtier with LinuxKit. The somewhat-contentious Kubernetes versus Docker Swarm orchestration capabilities will be interesting to try out. I’d like to see memory footprints, boot times, and diskspace-usage benchmarking, too.

If the promises are true then pluggable system services which run as containers is a fascinating way to build an OS. Docker blogged the following on its tiny footprint: “Because LinuxKit is container-native, it has a very minimal size – 35MB with a very minimal boot time. All system services are containers, which means that everything can be removed or replaced.”

I don’t know about you, but that certainly whets my appetite.

Call the cops

Features aside, with my DevSecOps hat on I will be keen to see how the promise of security looks in reality.

Docker quotes the National Institute of Standards and Technology (NIST), claiming that:

“Security is a top-level objective and aligns with NIST, stating in their draft Application Container Security Guide: ‘Use container-specific OSes instead of general-purpose ones to reduce attack surfaces. When using a container-specific OS, attack surfaces are typically much smaller than they would be with a general-purpose OS, so there are fewer opportunities to attack and compromise a container-specific OS.’”

Possibly the most important container-to-host and host-to-container security innovation will be the fact that system containers (system services) are apparently heavily sandboxed into their own unprivileged space, given just the external access that they need.

Couple that functionality with the collaboration of the Kernel Self Protection Project (KSPP) and, with a resounding thumbs-up from me, it looks like Docker have focused on something very worthwhile. For those unfamiliar, KSPP’s raison d’être is as follows:

“This project starts with the premise that kernel bugs have a very long lifetime, and that the kernel must be designed in ways to protect against these flaws.”

The KSPP site goes on to state admirably that:

“Those efforts are important and on-going, but if we want to protect our billion Android phones, our cars, the International Space Station, and everything else running Linux, we must get proactive defensive technologies built into the upstream Linux kernel. We need the kernel to fail safely, instead of just running safely.”

And even if Docker initially take only baby steps with LinuxKit, the benefits it brings as it matures will likely mean great strides in the container space.

The end is far from nigh

As the powerhouse that is Docker continues to grow, there’s no doubt whatsoever that these giant-sized leaps in the direction of solid progress will benefit users and other software projects alike.

Chris Binnie is a Technical Consultant with 20 years of Linux experience and a writer for Linux Magazine and Admin Magazine. His new book Linux Server Security: Hack and Defend teaches you how to launch sophisticated attacks, make your servers invisible and crack complex passwords.

Advance your career in Linux System Administration! Check out the Essentials of System Administration course from The Linux Foundation.

This article originally appeared on DevSecOps.

Submit Your Talk for MesosCon NA: CFP Closes June 30

The MesosCon program committee is now seeking your fresh ideas, enlightening case studies, best practices, or deep technical knowledge to share with the Apache Mesos community at MesosCon North America and Europe in 2017.

Submit a proposal to speak at MesosCon North America » The deadline is June 30.

MesosCon is an annual conference held in three locations around the globe and organized by the Apache Mesos community in partnership with The Linux Foundation. The events bring together users and developers of the open source orchestration framework to share knowledge and learn about the project and its growing ecosystem.

Best practices, lessons learned, and case studies are among the topics the program committee is seeking for 2017. Sample topics include:  

  • Best practices and lessons on deploying and running Mesos at scale

  • Deep dives and tutorials into Mesos

  • Interesting extensions to Mesos (e.g., new communication models, support for new containerizers, new resource types and allocation models, etc.)

  • Improvements/additions to the Mesos ecosystem (packaging systems, monitoring, log aggregation, load balancing, service discovery, etc.)

  • New frameworks

  • Microservice design

  • Continuous Delivery / DevOps (automating into production)

This list is by no means an exhaustive set of topics for submissions, and we welcome you to submit proposals that fall outside the mentioned areas. Check out these videos of previous talks to see the types of presentations that have been accepted in the past.

All 2017 MesosCon events will be held directly following Open Source Summit events in China, North America, and Europe. Dates are as follows:

MesosCon Asia June 21 – 22, 2017 in Beijing, China

MesosCon North America September 14 – 15, 2017 in Los Angeles, California, USA

MesosCon Europe October 26 – 27, 2017 in Prague, Czech Republic

Not interested in speaking but want to attend? Linux.com readers receive 5% off the “attendee” registration with code LINUXRD5.

Apache, Apache Mesos, and Mesos are either registered trademarks or trademarks of the Apache Software Foundation (ASF) in the United States and/or other countries. MesosCon is run in partnership with the ASF.

FPGAs and the New Era of Cloud-Based ‘Hardware Microservices’

What is that FPGA-powered future going to look like and how are developers going to use it?

FPGAs aren’t a new technology by any means. Traditionally, they have been reserved for specialized applications where the need for custom processing hardware that can be updated as very demanding algorithms evolve outweighs the complexity of programming the hardware.

…FPGAs can offer massive parallelism targeted only for a specific algorithm, and at much lower power compared to a GPU. And unlike an application-specific integrated circuit (ASIC), they can be reprogrammed when you want to change that algorithm (that’s the field-programmable part).

Read more at The New Stack

Crypto Tokens: A Breakthrough in Open Network Design

Today, tech companies like Facebook, Google, Amazon, and Apple are stronger than ever, whether measured by market cap, share of top mobile apps, or pretty much any other common measure. These companies also control massive proprietary developer platforms. The dominant operating systems — iOS and Android — charge 30% payment fees and exert heavy influence over app distribution. The dominant social networks tightly restrict access, hindering the ability of third-party developers to scale. Startups and independent developers are increasingly competing from a disadvantaged position.

Tokens are a breakthrough in open network design that enable: 1) the creation of open, decentralized networks that combine the best architectural properties of open and proprietary networks, and 2) new ways to incentivize open network participants, including users, developers, investors, and service providers. By enabling the development of new open networks, tokens could help reverse the centralization of the internet, thereby keeping it accessible, vibrant and fair, and resulting in greater innovation.

Read more at Medium

An Introduction to Timekeeping in Linux VMs

Keeping time in Linux is not simple, and virtualization adds additional challenges and opportunities. In this article, I’ll review KVM, Xen, and Hyper-V related time-keeping techniques and the corresponding parts of the Linux kernel.

Timekeeping is the process or activity of recording how long something takes. We need “instruments” to measure time. The Linux kernel has several abstractions to represent such devices:

  • Clocksource is a device that can give a timestamp whenever you need it. In other words, Clocksource is any ticking counter that allows you to get its value.
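On a running Linux system you can inspect the kernel’s clocksource selection via sysfs (paths as exposed by mainline kernels; a VM guest will often show a paravirtualized source such as kvm-clock):

```shell
# The clocksource the kernel is currently using (e.g. tsc, kvm-clock, hpet).
cat /sys/devices/system/clocksource/clocksource0/current_clocksource

# All clocksources this system could use, in priority order.
cat /sys/devices/system/clocksource/clocksource0/available_clocksource
```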

Read more at OpenSource.com

OpenDaylight Carbon SDN Platform Expands Performance and Scalability

Carbon is the sixth major release from OpenDaylight and follows the Boron release that debuted in September 2016. OpenDaylight first started in April 2013 as Linux Foundation Collaborative Project, with the goal of building an open SDN platform.

Phil Robb, Interim Executive Director for OpenDaylight, told EnterpriseNetworkingPlanet that from the beginning, OpenDaylight was designed to be a general-purpose development platform for building programmable networks. He noted that early releases of OpenDaylight put the core platform in place.

Among the use cases that OpenDaylight Carbon helps to support is the Internet of Things (IoT). Robb explained that Carbon adds the standards-based IoTDM plugin infrastructure to allow easy implementation of new device plugins.

Read more at EnterpriseNetworkingPlanet

The 30 Highest Velocity Open Source Projects

Open Source projects exhibit natural increasing returns to scale. That’s because most developers are interested in using and participating in the largest projects, and the projects with the most developers are more likely to quickly fix bugs, add features and work reliably across the largest number of platforms. So, tracking the projects with the highest developer velocity can help illuminate promising areas in which to get involved, and what are likely to be the successful platforms over the next several years. (If the embedded version below isn’t clear enough, you can view the chart directly on Google Sheets.)

Read more at CNCF

Securing Private Keys on a Linux Sysadmin Workstation

In this last article of our ongoing Linux workstation security series for sysadmins, we’ll lay out our recommendations for how to secure your private keys. If you’re interested in more security tips and a list of resources for more reading (to go further down the rabbit hole of Linux security), I recommend that you download our free security guide for sysadmins.

Personal encryption keys, including SSH and PGP private keys, are going to be the most prized items on your Linux workstation. Attackers will be most interested in obtaining them, as that would allow them to further attack your infrastructure or impersonate you to other admins. As a Linux sysadmin, you should take extra steps to ensure that your private keys are well protected against theft:

  • Strong passphrases are used to protect private keys (Essential)

  • The PGP master key is stored on removable storage (Nice-to-have)

  • Auth, Sign, and Encrypt subkeys are stored on a smartcard device (Nice-to-have)

  • SSH is configured to use the PGP Auth key as its SSH private key (Nice-to-have)

Private key security best practices

The best way to prevent private key theft is to use a smartcard to store your encryption private keys and never copy them onto the workstation. There are several manufacturers that offer OpenPGP capable devices:

  • Kernel Concepts, where you can purchase both the OpenPGP compatible smartcards and the USB readers, should you need one.

  • Yubikey, which offers OpenPGP smartcard functionality in addition to many other cool features (U2F, PIV, HOTP, etc).

  • NitroKey, which is based on open-source software and hardware.

It is also important to make sure that the master PGP key is not stored on the main workstation, and that only subkeys are used. The master key will only be needed when signing someone else’s keys or creating new subkeys, operations which do not happen very frequently. You may follow Debian’s subkeys guide to learn how to move your master key to removable storage and how to create subkeys.

You should then configure your GnuPG agent to act as your SSH agent and use the smartcard-based PGP Auth key as your SSH private key. We publish a detailed guide on how to do that using either a smartcard reader or a Yubikey NEO.

If you are not willing to go that far, at least make sure you have a strong passphrase on both your PGP private key and your SSH private key, which will make it harder for attackers to steal and use them.
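As a sketch of that agent setup (option and socket names are the standard GnuPG 2.1 ones; adjust paths for your distribution):

```shell
# Tell gpg-agent to also speak the ssh-agent protocol.
echo "enable-ssh-support" >> ~/.gnupg/gpg-agent.conf

# Point SSH at gpg-agent's socket instead of the stock ssh-agent.
export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"

# Restart the agent, then confirm the smartcard-held Auth key
# is being offered to SSH as an identity.
gpgconf --kill gpg-agent
ssh-add -L
```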

Workstation Security

Read more:

Part 8:  Best Practices for 2-Factor Authentication and Password Creation on Linux

Part 1: 3 Security Features to Consider When Choosing a Linux Workstation

​Why You Must Patch the New Linux sudo Security Hole

If you want your Linux server to be really secure, you defend it with SELinux. Many sysadmins don’t bother because SELinux can be difficult to set up. But, if you really want to nail down your server, you use SELinux. This makes the newly discovered Linux security hole — with the sudo command that only hits SELinux-protected systems — all the more annoying.

Sudo enables users to run commands as root or another user, while simultaneously providing an audit trail of these commands. It’s essential for day-in, day-out Linux work. Qualys, a well-regarded security company, discovered this essential command — but only on systems with SELinux enabled — can be abused to give the user full root-user capabilities.

Or, as they’d say on The Outer Limits, “We will control the horizontal, we will control the vertical.” This is not what you want to see on your Linux server.
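If you want a quick look at whether a given host fits the vulnerable profile (the flaw only bites where SELinux is enabled), a rough check might be:

```shell
# Show the installed sudo version, to compare against your
# distribution's advisory for the fixed release.
sudo -V | head -n 1

# Report whether SELinux is enforcing on this host (the affected configuration).
getenforce 2>/dev/null || echo "SELinux tooling not installed"
```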

Read more at ZDNet

Core Kubernetes: Jazz Improv over Orchestration

This is the first in a series of blog posts that details some of the inner workings of Kubernetes. If you are simply an operator or user of Kubernetes you don’t necessarily need to understand these details. But if you prefer depth-first learning and really want to understand the details of how things work, this is for you.

This article assumes a working knowledge of Kubernetes. I’m not going to define what Kubernetes is or the core components (e.g. Pod, Node, Kubelet).

Read more at Heptio