Operating a Kubernetes Network

I’ve been working on Kubernetes networking a lot recently. One thing I’ve noticed is that, while there’s a reasonable amount written about how to set up your Kubernetes network, I haven’t seen much about how to operate your network and be confident that it won’t create a lot of production incidents for you down the line.

In this post I’m going to try to convince you of three things (all of which I think are pretty reasonable :)):

  • Avoiding networking outages in production is important
  • Operating networking software is hard
  • It’s worth thinking critically about major changes to your networking infrastructure and the impact they will have on your reliability, even if very fancy Googlers say “this is what we do at Google”. (Google engineers are doing great work on Kubernetes!! But I think it’s important to still look at the architecture and make sure it makes sense for your organization.)

I’m definitely not a Kubernetes networking expert by any means, but I have run into a few issues while setting things up and definitely know a LOT more about Kubernetes networking than I used to.

Read more at Julia Evans

A Free Guide to Participating in Open Source Communities

The Linux Foundation’s free online guide Participating in Open Source Communities can help organizations successfully navigate these open source waters. The detailed guide covers what it means to contribute to open source as an organization and what it means to be a good corporate citizen. It explains how open source projects are structured, how to contribute, why it’s important to devote internal developer resources to participation, as well as why it’s important to create a strategy for open source participation and management.

One of the most important first steps is to rally leadership behind your community participation strategy. “Support from leadership and acknowledgement that open source is a business critical part of your strategy is so important,” said Nithya Ruff, Senior Director, Open Source Practice at Comcast. “You should really understand the company’s objectives and how to enable them in your open source strategy.”

Read more at The Linux Foundation

Running Non-Root Containers on OpenShift

In this blog post, we look at what a Bitnami non-root Dockerfile looks like, using the Bitnami Nginx Docker image as a reference. As an example of how non-root containers can be used, we walk through deploying Ghost on OpenShift. Finally, we cover some of the issues we faced while moving all of these containers to non-root containers.

What Are Non-Root Containers?

By default, Docker containers run as the root user. This means that you can do whatever you want in your container, such as install system packages, edit configuration files, bind privileged ports, adjust permissions, create system users and groups, or access networking information.

With a non-root container you can’t do any of this. A non-root container should be configured for its main purpose, for example, running the Nginx server.
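As a rough illustration (not taken from the Bitnami post; the image and UID below are arbitrary examples), you can see the difference by forcing an image to run as an unprivileged user:

# Run as root, the Docker default: full control inside the container
docker run --rm ubuntu id

# Run the same image as an arbitrary unprivileged UID (roughly what OpenShift
# does): installing packages, binding port 80, and so on will now fail
docker run --rm --user 1001 ubuntu id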

Read more at Bitnami

Ledger Systems Today Are Siloed and Disconnected. Hyperledger Quilt Wants to Solve That

Hyperledger Quilt started over a year ago and is a Java implementation of the Interledger Protocol. We talked with Adrian Hope-Bailie, Standards Officer at Ripple and maintainer of Hyperledger Quilt, about the problem this project wants to solve, its benefits, limitations, and more.

JAXenter: What is Hyperledger Quilt and what problem does it want to solve?

Adrian Hope-Bailie: Hyperledger Quilt offers interoperability between ledger systems by implementing the Interledger Protocol (ILP), which is primarily a payments protocol and is designed to transfer value across systems – both distributed ledgers and non-distributed ledgers. It is a simple protocol that establishes a global namespace for accounts, as well as a protocol for synchronized atomic swaps between different systems.

Read more at Jaxenter

2 Ways to Better Secure Your Linux Home Directory

One often-forgotten area of Linux security is the home directory, otherwise known as ~/. Something to keep in mind is that this particular directory houses user data. In other words, this is the default directory where documents are stored. If this machine is used in a business environment, there could be sensitive information stored within.

Let’s see what we can do to that home directory to make it more secure. We’ll start with the easy tip first. I’ll be demonstrating on a freshly installed Ubuntu 17.10 desktop.
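The article walks through the specifics, but as a quick sketch of the general idea (this may or may not be one of the two methods TechRepublic covers), the simplest win is tightening the permissions on your home directory so other local users can’t browse it:

# See the current permissions; on many desktop installs the home
# directory is world-readable by default
ls -ld ~

# Remove all access for group and others
chmod 700 ~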

Read more at Tech Republic

This Week in Open Source News: Open Source Summit Europe Is Platform for Several Important Announcements

In this special Open Source Summit Europe edition of the Linux.com weekly digest, we revisit stories that broke at the annual gathering of open source experts and enthusiasts. Here’s what you might have missed in Prague.

1) The annual Linux Kernel Development Report has been released, detailing the voices behind the kernel and its strength in today’s technological landscape.

Report: Interest in the Linux Kernel Remains Strong – SDTimes

Who’s Building Linux in 2017? – ZDNet

2) “The Linux Foundation has announced the Community Data License Agreement (CDLA) family of open data agreements.”

CDLA Announced by Linux Foundation – App Developer Magazine

3) CNCF adds Docker-incubated Notary and The Update Framework (TUF), which was “originally developed by professor Justin Cappos and his team at NYU’s Tandon School of Engineering.”

The Cloud Native Computing Foundation Adds Two Security Projects to its Open Source Stable – TechCrunch

4) Heather Kirksey, Director of OPNFV, talks about the newly announced Euphrates release and the open source project’s latest movements.

OPNFV Supports Containerized OpenStack and Kubernetes – SDxCentral

Void Linux: A Salute to Old-School Linux

I’ve been using Linux for a very long time. Most days I’m incredibly pleased with where Linux is now, but every so often I wish I could step into a time machine and remind myself where the open source platform came from. Of late, I’ve experimented with a few throwback distributions, but none have come as close to what Linux once was as Void Linux.

Void Linux (created in 2008) is a rolling-release, general-purpose Linux distribution available for Intel, ARM, and MIPS architectures. Void offers a few perks that will appeal to Linux purists:

  • Void isn’t a fork of another distribution.

  • Void uses runit as the init system.

  • Void replaced OpenSSL with LibreSSL (due to the Heartbleed fiasco).

  • Void uses its own built-from-scratch package manager (called xbps).

Most of all, Void makes you feel like you’re using Linux of old (especially if you opt for the Xfce take on the desktop). With Void, you can opt to download a release with one of the following desktops:

  • Xfce

  • Cinnamon

  • Enlightenment

  • LXDE

  • LXQt

You can also download a GUI-less version and install your desktop of choice.

With the exception of Cinnamon, the options are all focused on creating a very lightweight desktop. To that end, Void Linux will run very well on your hardware. I should mention here that working with Void Linux in VirtualBox is an exercise in frustration. I use VirtualBox for all my testing purposes, and Void does not play well with the VirtualBox Guest Additions. Because of this, Void runs terribly slowly in VirtualBox (even after following the official Void Linux installation instructions). With that warning in mind, if you want to test Void Linux, install it on a desktop machine and save yourself an hour or two of hair pulling.

That old-school installation

Regardless of which Void Linux desktop you opt to install, you’re going to get a taste of what it was like to install Linux “back in the day”. No, it’s not a perfect recreation, but it’s close enough. So download Void Linux, with your desktop of choice, and get ready.

When you boot the live ISO image, you will find yourself on whatever desktop you’ve chosen. One thing you won’t find is a tried-and-true Install icon on the desktop, for simplified installation. Oh no. The installation of Void is handled through the terminal window, thanks to a lovely ncurses-based system.

Upon boot, you must open up a terminal window, su to the root user (the default root user password is voidlinux), and then issue the command void-installer. This will fire up the ncurses-based installer, where you must walk through the various installation steps (Figure 1).

Figure 1: The Void Linux installation menu on the Xfce desktop version.
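In other words, once the live session is up, the whole thing boils down to a couple of commands:

# Become root from a terminal on the live desktop
# (the live image's default root password is voidlinux)
su

# Launch the ncurses-based installer
void-installer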

You can use the arrow keys on your keyboard to move up and down and hit Enter to select a menu entry to configure. However, if you just hit Enter on the first entry, and then configure that option, you will automatically be moved down to the next step. Most of these steps are very intuitive. It’s not until you get to the Partitioning and Filesystems sections that you might find cause to raise an eyebrow. Of course, any user who remembers the process of installing Linux in the early days shouldn’t have a problem with these steps. But if you’re used to, say, the Ubuntu installer (which makes installing the platform as simple as installing an application), you might have trouble.

When you reach the partition section of the installation (Figure 2), you’ll want to tab down to New, hit Enter, and then define the size for the partition. Mark the partition bootable, tab to Write, and hit Enter.

Figure 2: Partitioning in Void Linux.

Once the partition is written, tab to Quit and hit Enter. In the filesystem section (Figure 3), you must first select a filesystem type and then specify the mount point.

Figure 3: Selecting the file system that best suits your needs.

The mount point for your filesystem will most likely be /. Enter that in the section to specify the mount point for /dev/sda1 (Figure 4), tab down to OK, and then hit Enter.

Figure 4: Specifying your mount point.

Once you have your filesystem and mount point taken care of, you can then move down to Install and run the installer. This section will take about two minutes. When the installation completes, you can then reboot and enjoy your newly installed Void Linux distribution.

Post installation

With Void Linux installed, you’ll find a fairly minimal set of tools available. Out of the box, there is no office suite, no email client, no image editor, not even a graphical package manager. What you have is a bare-bones desktop with a nice command-line installation tool that allows you to install exactly what you want.

What many Linux faithful will appreciate most about Void Linux is that it opts for runit over systemd. The runit system is incredibly fast and easily configured. For example, where systemd requires comparatively complex unit files, runit can start a process with a single line of code. That not only makes runit very easy to configure, but goes a long way toward speeding things up. For more information on runit, check out the official page.
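To give a sense of what that looks like, here is a minimal sketch of a runit run script (the service name is made up; check the runit documentation for the exact layout Void expects):

#!/bin/sh
# Saved as an executable run script, e.g. /etc/sv/mydaemon/run, where
# "mydaemon" is a hypothetical service. runit supervises whatever this
# script execs; no PID files or complex unit files are required.
exec mydaemon --foreground 2>&1

On Void, a service set up this way is typically enabled by symlinking its directory into /var/service.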

If you don’t happen to like the desktops offered by Void, you can install, say, GNOME using the xbps-install command like so:

xbps-install -S gnome

It just so happens that the version of GNOME available in the Void Linux repositories is 3.26, so you’re getting the latest and greatest GNOME desktop. There are thousands of other applications you can install on Void. You can query the package manager like so:

xbps-query -Rs PACKAGENAME

Where PACKAGENAME is the name of the software you want to find.
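For example, to hunt down and then install Firefox (assuming the package is simply named firefox, which the search output will confirm):

# Search the remote repositories for anything matching "firefox"
xbps-query -Rs firefox

# Install it, syncing the repository index first
xbps-install -S firefox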

Who should enter the Void?

I can’t say I’d recommend Void Linux to just anyone. In fact, I think it’s safe to say that new-to-Linux users need not apply. Out of the box, Void doesn’t really offer enough in the way of user-facing applications to appease the new crowd. And because there isn’t a GUI package manager, new users would find themselves frustrated very quickly.

However, if you’re wise to the ways of Linux (especially the command line), Void is a refreshing change from the same ol’ same ol’. Void offers just the right amount of old-school Linux to make you feel like you’ve traveled back in time, while still maintaining enough modernity to remain current.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Secrets of Writing Good Documentation

Taylor Barnett, a Community Engineer at Keen IO, says practice and constant iteration are key to writing good documentation. At the upcoming API Strategy & Practice Conference 2017, Oct. 31-Nov. 2 in Portland, OR, Barnett will explain the different types of docs and describe some best practices.

In her talk — Things I Wish People Told Me About Writing Docs — Barnett will look at how people consume documentation and discuss tools and tactics to enable other team members to write documentation.  Barnett explains more in this edited interview.

The Linux Foundation: What led you to this talk? Have you encountered projects with bad documentation?

Taylor Barnett: For the last year, my teammate, Maggie Jan, and I have been leading work to improve the developer content and documentation experience at Keen IO. It’s no secret that developers love excellent documentation, but many API companies aren’t always equipped with the resources to make that happen. 

Read more at The Linux Foundation

How to Rethink Project Management for DevOps

As DevOps boosts your organization’s agility, how does the project manager role need to change? Explore this expert advice.

As DevOps culture spreads, so does its impact on other areas of the organization. Take project management: DevOps fundamentally changes how IT teams approach projects, shifting away from monolithic, multi-month (or multi-year, in some cases) initiatives in pursuit of greater speed and agility in the software development lifecycle. That means changes for project managers, too.

But make no mistake: Project managers can still be valuable in the DevOps age.

“A need for speed and velocity – and cutting-edge DevOps technologies and processes – does not replace the need for knowing what you’re going to do with them,” says Josh Collins, technology architect at Janeiro Digital. “A strong project management practice is required in order to keep projects moving on schedule with a clear focus on dependencies.”

Read more at Enterprisers Project

Microsoft Launches Brigade: An Event-Driven Scripting Tool for Kubernetes

To this end, Microsoft has been populating the container space with open source tools that make containerized workloads faster to adopt, easier to use — and, increasingly, reliably automated. So far, 2017 has seen Microsoft acquire Deis, which developed Helm. Helm is a package manager to install and manage the lifecycle of Kubernetes applications, as well as an efficient tool for finding, using and sharing K8s tools and software. The company also introduced Draft, a tool for streamlining application development and deployment by monitoring the live-code, pre-commit “inner loop” of the developer’s workflow to detect the application language and write a simple Dockerfile and Helm chart into the source tree.
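To make “package manager for Kubernetes” concrete, here is a minimal sketch using the Helm 2-era syntax and the then-default stable chart repository (the chart name is just an example, not something from the article):

# Install a packaged application (a "chart") onto the current cluster
helm install stable/wordpress

# List the releases Helm is managing
helm list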

Now the same team has introduced Brigade, a framework for scripting together workflow tasks to be executed inside containers. The Kubernetes-native tool allows devs to build an ordered workflow of K8s containers at any scale, from one container to multitudes, that then idles while listening for arbitrary trigger events. When triggered, Brigade comes charging in.  …

Containers to the left of them, Containers to the right of them, boldly they ride and well…

Read more at The New Stack