
Heptio Launches New Open Source Load-Balancing Project with Kubernetes in Mind

Heptio added a new load balancer to its stable of open-source projects Monday, targeting Kubernetes users who are managing multiple clusters of the container-orchestration tool alongside older infrastructure.

Gimbal, developed in conjunction with Heptio customer Actapio, was designed to route network traffic within Kubernetes environments set up alongside OpenStack, said Craig McLuckie, co-founder and CEO of Heptio. It can replace expensive hardware load-balancers — which manage the flow of incoming internet traffic across multiple servers — and allow companies with outdated but stable infrastructure to take advantage of the scale that Kubernetes can allow.

“We’re just at the start of figuring out what are the things (that) we can build on top of Kubernetes,” said McLuckie in an interview last week at Heptio’s offices in downtown Seattle. The startup, founded by McLuckie and fellow Kubernetes co-creator Joe Beda, has raised $33.5 million to build products and services designed to make Kubernetes more prevalent and easy to use.

Read more at GeekWire

Why Is the Kernel Community Replacing iptables with BPF?

Author Note: This is a post by long-time Linux kernel networking developer and creator of the Cilium project, Thomas Graf.

The Linux kernel community recently announced bpfilter, which will replace the long-standing in-kernel implementation of iptables with high-performance network filtering powered by Linux BPF, all while guaranteeing a non-disruptive transition for Linux users.

From humble roots as the packet filtering capability underlying popular tools like tcpdump and Wireshark, BPF has grown into a rich framework to extend the capabilities of Linux in a highly flexible manner without sacrificing key properties like performance and safety. This powerful combination has led forward-leaning users of Linux kernel technology like Google, Facebook, and Netflix to choose BPF for use cases ranging from network security and load balancing to performance monitoring and troubleshooting. Brendan Gregg of Netflix first called BPF “superpowers” for Linux. This post will cover how these “superpowers” render long-standing kernel sub-systems like iptables redundant while simultaneously enabling new in-kernel use cases that few would have previously imagined were possible…
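As a small illustration of those packet-filtering roots, tcpdump will print the classic BPF program it compiles from a capture filter if you pass -d instead of capturing packets; the filter expression below is just an example:

# Dump the compiled BPF packet-matching code for a filter, without capturing anything
sudo tcpdump -d 'tcp dst port 80'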

Over the years, iptables has been both a blessing and a curse: a blessing for its flexibility and quick fixes, and a curse when you are debugging a 5,000-rule iptables setup in an environment where multiple system components are fighting over who gets to install which rules.

Read more at Cilium

What’s the Most Popular Linux of Them All?

Let’s cut to the chase. Android is the most popular of all Linux distributions. Period. End of statement. But that’s not the entire story.

But, setting Android aside, what’s the most popular Linux? It’s impossible to work that out. The website-based analysis tools, such as those used by StatCounter, NetMarketShare, and the federal government’s Digital Analytics Program (DAP), can’t tell the difference between Fedora, openSUSE, and Ubuntu.

DAP does give one insightful measurement the other sites don’t. While not nearly as popular as Android, Chrome OS is more popular than all the other Linux-based desktops combined, by a score, in April 2018, of 1.3 percent to 0.6 percent of end users.

As for what most people think of as “Linux distros,” the best data we have comes from DistroWatch’s Page Hit Ranking. DistroWatch is the most comprehensive desktop Linux data and news site.

Read more at ZDNet

6 DevOps Trends to Watch in 2018

Communicating with key subject matter experts in the DevOps space plays an important role in helping us understand where the industry is headed. To gain insight into trends for 2018, we caught up with six DevOps experts and asked them:

What’s the number-one trend you see for log analysis and monitoring in 2018?

Here’s what our panel of influencers had to say.


1. Joe Beda

“By far the biggest trend that I see is the application itself getting more involved in exporting structured metrics and logs. These logs are built so that they can be filtered and aggregated. Being able to say ‘show me all the logs impacting customer X’ is a huge, powerful step forward. Doing this in a way that crosses multiple microservices is even more powerful.”
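As a rough sketch of what that kind of filtering looks like in practice (the deployment names and the customer_id field here are hypothetical, assuming each service emits one JSON object per log line):

# Filter one service's structured logs for a single customer
kubectl logs deploy/checkout | jq 'select(.customer_id == "X")'

# The same filter applied across several microservices' log streams
for d in checkout payments shipping; do kubectl logs deploy/$d; done | jq 'select(.customer_id == "X")'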

Read more at Loggly

Viperr Linux Keeps Crunchbang Alive with a Fedora Flair

Do you remember Crunchbang Linux? Crunchbang (often referred to as #!) was a fan-favorite, Debian-based distribution that focused on using a bare minimum of resources. This was accomplished by discarding the standard desktop environment and using a modified version of the Openbox Window Manager. For some, Crunchbang was a lightweight Linux dream come true. It was lightning fast, easy to use, and hearkened back to the Linux of old.

However, back in 2015, Philip Newborough made this announcement:

For anyone who has been involved with Linux for the past ten years or so, I’m sure they’ll agree that things have moved on. Whilst some things have stayed exactly the same, others have changed beyond all recognition. It’s called progress, and for the most part, progress is a good thing. That said, when progress happens, some things get left behind, and for me, CrunchBang is something that I need to leave behind. I’m leaving it behind because I honestly believe that it no longer holds any value, and whilst I could hold on to it for sentimental reasons, I don’t believe that would be in the best interest of its users, who would benefit from using vanilla Debian.

Almost immediately, developers began their own efforts to keep Crunchbang alive. One such effort is Viperr. Viperr is a Fedora respin that follows in the footsteps of its inspiration by using the Openbox window manager. By merging some of the qualities that made Crunchbang popular with the Fedora distribution, Viperr creates a unique Linux distribution that feels very much old school, with a bit of new-school technology under the hood.

The one thing to keep in mind is that Viperr development is incredibly slow. At the moment, the most recent stable release is Viperr 9, based on Fedora 24. I read in the forums that, as of 2017, work was started on Viperr 10, but it’s still in alpha. So using Viperr might seem a bit of a mixed bag. After installing, I ran an update to find the running kernel at 4.7.5. That’s a pretty old kernel (relatively speaking). Even still, Viperr is a worthwhile distribution that might appeal to users looking for a lightweight Linux akin to Crunchbang.

Let’s install Viperr and see what gives this distribution its bite.

Installation

We’ve reached the point in Linux where walking through an installation is almost pointless—the installs are that easy. That being said, if you’ve installed Fedora or CentOS, you’ve installed Viperr. The Anaconda Installer makes installing any distribution incredibly simple. It’s all point and click, with a minimum of user interaction and steps. The only difference with Viperr is the post-Anaconda installation. Once you’ve completed the installation and rebooted the system, you’ll be greeted with a terminal window, in which a post-install script is run (Figure 1).

Figure 1: The post-install script in action.

That script will first prompt you for your user password (created during the installation). Once you’ve authenticated, it will ask you a number of questions regarding software to be installed. During the run of the script, you can have LibreOffice installed (Figure 2), as well as other applications.

Figure 2: Installing LibreOffice by way of the post-install script.

You will also be asked if you want to include the free and non-free RPM Fusion repositories. These repositories are filled with software that Fedora or Red Hat doesn’t want to ship (such as Audacity, MPlayer, Streamripper, MythTV, GStreamer, Bombono-DVD, Xtables, Pianobar, LiVES, Telegram-Desktop, Ndiswrapper, VLC, some games, and more). It’s not a huge number of titles, but there are some items many Linux users consider must-haves.

Once the script completes its run, you can close out the terminal and start using Viperr.

Usage

As you probably expect, using Viperr is incredibly simple. The combination of the Openbox window manager and Conky, which gives a real-time readout of system resources (Figure 3), is certainly a throwback to old-school Linux that many users will appreciate.

Figure 3: The default Viperr desktop.

Click on the Viperr start button to gain access to all of the installed applications. Open an application and use it. That start menu, however, isn’t the only route to starting applications. If you right-click anywhere on the desktop, you gain access to the same menu (Figure 4).

Figure 4: The Viperr right-click desktop menu.

I’ve always been a big fan of this type of menu system, as it makes interacting with that main menu incredibly efficient.

If you want to bring Viperr even further into the new world order, you can open up a terminal window and install Flatpak with the command sudo yum install flatpak (or sudo dnf install flatpak). Once you’ve installed Flatpak, you’ll find even more software can be installed, via Flathub.
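For example, once Flatpak is in place, adding the Flathub remote and installing an application looks roughly like this (the application ID is just an example):

flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo

flatpak install flathub org.videolan.VLC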

Updates needed

Obviously, the one glaring problem is that Viperr is way out of date. However, you could go through the process of doing a distribution upgrade, via the dnf command. To do this, you would first have to install the DNF plugin with the command:

sudo dnf install dnf-plugin-system-upgrade

Once that command completes, you can upgrade from a base of Fedora 24 to 25 with the command:

sudo dnf system-upgrade download --releasever=25

When that command completes, reboot with the command:

sudo dnf system-upgrade reboot

The above command does take some time to complete (I had 2339 packages to upgrade), but it will eventually land you back on your Viperr desktop. I successfully completed that upgrade (which upgraded the kernel to 4.13), but I didn’t continue with the process to upgrade from 25 to 26 and then 26 to 27. Theoretically, it could work.
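If you did want to continue, the process should simply repeat with the next release number; this is untested here, so treat it as a sketch:

sudo dnf system-upgrade download --releasever=26

sudo dnf system-upgrade reboot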

Outside of that, you’d be hard-pressed to find anything really wrong with a lightweight distribution like Viperr. It’s a fast, reliable throwback to a distribution so many users quickly fell in love with. With Crunchbang long gone, for those longing to return to the days of a more basic version of the operating system, Viperr fits that bill to a tee.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

This Week in Open Source News: Hyperledger Bug Bounty Program, The Linux Foundation’s Networking Harmonization Initiative & More

This week in Linux and open source news, Arpit Joshipura sheds light on the networking harmonization initiative, Hyperledger opens the doors of its bug bounty program to the public & more! Read on to stay abreast of the latest open source news. 

1) Arpit Joshipura, General Manager of Networking at The Linux Foundation, speaks about the Harmonization 1.0 initiative

Linux Foundation Seeks to Harmonize Open Source and Standards Development– TelecomTV

2) “The open-source blockchain project is now asking the public to help in the quest to squash bugs impacting the platform.”

Hyperledger Bug Bounty Program Goes Public– ZDNet

3) Nextcloud “has announced it will be supplying the German federal government with a private, on-premises cloud platform as part of a three-year contract.”

German Government Goes Open Source With Cloud Firm Nextcloud– TechRadar Pro

4) “For the first time, Microsoft has released its own Linux kernel in a new Linux-based product: Azure Sphere.”

Microsoft Releases Its First Linux Product– ZDNet

5) “Open source is being heavily adopted in China and many companies are now trying to figure out how to best contribute to these kind of projects. Joining a foundation is an obvious first step.”

Cloud Foundry Foundation Looks East as Alibaba Joins As a Gold Member– TechCrunch

CRI: The Second Boom of Container Runtimes

Harry (Lei) Zhang, together with the CTO of HyperHQ, Xu Wang, will present “CRI: The Second Boom of Container Runtimes” at KubeCon + CloudNativeCon EU 2018, May 2-4 in Copenhagen, Denmark. The presentation will clarify more about CRI, container runtimes, KataContainers, and where they are going. Please join them if you are interested in learning more.

When was the first “boom” of container runtimes in your mind?

Harry (Lei) Zhang: At the end of 2013, one of my former colleagues at an extremely successful cloud computing company introduced me to a small project on GitHub and claimed: “this thing’s gonna kill us.”

We all know the rest of the story. Docker started a revolution in containers, which soon swept the whole world with its slogan of “Build, Ship, Run.” The brilliant idea of the Docker image reconstructed the base of cloud computing by changing the way software was delivered and how developers consumed the cloud. The following years, 2014-2015, were dominated by the word “container”: it was hot and fancy, and it swept through all the companies in the industry, just like AI today.

What happened to container runtimes after that?

Zhang: After this period of prosperity, container runtimes, Docker included, gradually became a secondary issue. Instead, people were more eager to quarrel over container scheduling and orchestration, which eventually brought Kubernetes to the center of the container world. This is not surprising: although the container itself is highly creative, once it is separated from the upper-level orchestration framework, the significance of that innovation is greatly reduced. This also explains why CNCF, which is led by platform players like Google and Red Hat, eventually became the winner of the dispute: container runtimes are “boring.”

So you are claiming a new prosperity comes for container runtimes now?

Zhang: Yes. In 2017, the Kubernetes community started pushing container runtimes back to the forefront with projects like cri-o, containerd, Kata, and frakti.

In fact, this second boom is no longer driven by technological competition, but by the birth of a generic Container Runtime Interface (CRI), which gave developers the freedom to build container runtimes for Kubernetes however they wanted, using whatever technology they chose. This is further evidence that Kubernetes is winning the infrastructure software industry.

Could you please explain more about CRI?

Zhang: The creation of CRI can be dated back to a pretty old issue in the Kubernetes repo, which, thanks to Brendan Burns and Dawn Chen, brought out the idea of a “client-server mode container runtime.” The reason we began to discuss this approach in sig-node was mainly because, although Docker was the most successful container runtime at that time (and even today), we could see it gradually evolving into a much more complicated platform project, which brought uncertainty to Kubernetes itself. At the same time, new candidates like CoreOS rkt and Hyper runV (a hypervisor-based container runtime) had been introduced into Kubernetes as PoC runtimes, bringing extra maintenance effort to sig-node. We needed to find a way to balance user options and feasibility in Kubernetes container runtimes, and to save users from any potential vendor lock-in at this layer.

What does CRI look like from a developer view?

Zhang: The core idea of CRI is simple: can we summarize a limited group of APIs which Kubernetes can rely on to talk to containers and images, regardless of what container runtime it is using?

Sig-node eventually defined around 20 APIs in protobuf format based on the existing operations in kubelet (the component that talks to the container runtime in Kubernetes). If you are familiar with Docker, you can see that we indeed extracted the most frequently used APIs from its CLI, and also defined the concept of a “sandbox” to match the Pod in Kubernetes, which is a group of tightly coupled user containers. But the key is that once you have this interface, you have the freedom to choose how to implement this “sandbox,” either with namespaces (Docker) or with a hypervisor (Kata). Soon after the CRI spec was ready, we worked together to deliver the first CRI implementation, named “dockershim,” for Docker, and then “frakti” for hypervisor runtimes (at that time, runV).
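To make the sandbox-then-container flow concrete, here is a rough sketch using crictl, the command-line client for CRI from the Kubernetes cri-tools project (crictl is not mentioned in the interview, and pod-config.json and container-config.json are placeholder CRI config files):

# Pull an image, create a pod sandbox, then create and start a container inside it
crictl pull busybox
POD_ID=$(crictl runp pod-config.json)
CONTAINER_ID=$(crictl create $POD_ID container-config.json pod-config.json)
crictl start $CONTAINER_ID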

How does CRI work with Kubernetes?


Zhang: The core implementation of CRI on the kubelet side is a GenericRuntime, which hides CRI from the rest of kubelet; from the Kubernetes side, it does not need to know about CRI or which container runtime is in use. All container and image operations are called through GenericRuntime, just as they were for Docker or rkt.

Container functionalities like networking and storage are decoupled from the container runtime by standard interfaces such as CNI (Container Network Interface) and CSI (Container Storage Interface). Thus, a CRI implementation (e.g., dockershim) is able to call the standard CNI interfaces to allocate the network for the target container without knowing any details of the underlying network plugins. The allocation result is returned to kubelet after the CRI call.

What’s more, the latest design of hardware accelerators in Kubernetes, like GPUs and FPGAs, also relies on CRI to allocate devices to the corresponding containers. The core idea of this design is included in another extension system named Device Plugin, which is responsible for generating device information and dependency directories for each device allocation request and returning them to kubelet. Kubelet then injects this information into the CRI create-container call. That’s also the secret of why the latest Kubernetes releases do not rely on Docker, or any other specific container runtime, to manage GPUs and other devices.

What are CRI shims? I’ve been hearing this a lot recently.

Zhang: Actually, CRI implementations are called CRI shims within the Kubernetes community. Developers are free to decide how to implement those CRI interfaces, and this has triggered another innovation storm at the container runtime level, one that the community had almost forgotten about in recent years.

The most straightforward idea is: can I just implement a shim for runC, which is the building block of the existing Docker project? Of course. Not long after CRI was released, maintainers of the runC project proposed a design that lets Kubernetes users run workloads on runC without installing Docker at all. That project is cri-o.

Unlike Docker, cri-o is much simpler and focuses only on container and image management, as well as serving CRI requests. Specifically, it “implements CRI using OCI conformant runtimes” (runC, for example), so the scope of cri-o is always tied to the scope of the CRI. Beyond a very basic CLI, cri-o does not expect users to use it the same way as Docker.

Besides these Linux operating system level containers (mostly based on cgroups and namespaces), we can also implement CRI by using hardware virtualization to achieve higher levels of security and isolation. These efforts rely on the newly created KataContainers project and its corresponding CRI shim, frakti.

The CRI has been so successful that many other container projects and legacy container runtimes are beginning to provide implementations for it. Alibaba’s internal container engine Pouch, for example, is a container runtime that has been used inside Alibaba for years and battle-tested at unbelievable scale, such as serving 1.48 billion transactions in 24 hours during the Singles’ Day (11/11) sale.

What’s the next direction of CRI and container runtimes in Kubernetes?

Zhang: I believe the second boom of container runtimes is still continuing, but this time it is being led by the Kubernetes community.

At the end of 2017, it was announced that Intel Clear Containers and Hyper runV would merge into one new project, KataContainers, under the governance of the OpenStack Foundation and OCI. Those two teams are well known for leading the effort to bring hypervisor-based container runtimes to Kubernetes since 2015, and they finally joined forces with the help of CRI.

This will not be the only story in this area. The maintainers of container runtimes have noticed that expressing their capabilities through Kubernetes, rather than competing at the container runtime level, is an effective way to promote the success of these projects. This has already been proven by cri-o, which is now a core component in Red Hat’s container portfolio and is known for its simplicity and better performance. Not to mention Windows Containers with CRI support, which has already been promoted to a high-priority task in sig-node.

But there’s also concern. With container runtime innovation continuing, users now have to face the same problem again: which container runtime should I choose?

Luckily, this problem has already been partially solved by Kubernetes itself, since it’s the Kubernetes API that users need to care about, not the container runtimes. But even for people who do not rely on Kubernetes, there is still a choice to be made. “No default” should be the “official” attitude from Kubernetes maintainers. One good thing is that we can see the maintainers of runtimes trying their best to eliminate unnecessary burden for users: runV has merged with Clear Containers, and frakti will eventually be refactored to be containerd-based. After all, as a user of Kubernetes, I expect a future in which there are only a few container runtimes, with essential differences, for me to choose from; if that’s the case, my life will be much easier.

Harry (Lei) Zhang

Harry (Lei) Zhang is an engineer at HyperHQ and a Microsoft MVP in cloud computing. He is a feature maintainer of the Kubernetes project, working mainly on scheduling, CRI, and hypervisor-based container runtimes, i.e., KataContainers, with a focus on kube-scheduler, kubelet, and secure container runtimes in Kubernetes upstream as well as the Hypernetes project. He is an active community advocate and a tech speaker at LinuxCon, KubeCon, OpenStack Summit, and other events, and he published the book “Docker and Kubernetes Under The Hood,” a best seller in the container cloud area in China.

Learn more at KubeCon + CloudNativeCon Europe, coming up May 2-4 in Copenhagen, Denmark.

Learn to use GitHub with GitHub Learning Lab

Want to join the 27 million open-source programmers who develop on GitHub? Here’s how to get your start.

The most popular open-source development site in the world is GitHub. It’s used by tens of millions of developers to work on over 80 million projects.

It’s not just a site where people use Linus Torvalds’ Git open-source distributed version control system. It’s also an online home for collaboration, a sandbox for testing, a launchpad for deployment, and a platform for learning new skills. The GitHub Training Team has now released an app, GitHub Learning Lab, so you can join the programming party.

GitHub Learning Lab is not a tutorial or webcast. It’s an app that gives you a hands-on learning experience within GitHub. 

Read more at ZDNet

Managing OPA

OPA is a general-purpose policy engine that lets you offload decisions from your service. To do so, OPA needs to have access to policies and data that it can use to make decisions.

Prior to v0.8, OPA only exposed low-level HTTP APIs that let you push policy and data into the engine. With v0.8, we’re excited to provide new management features in OPA which make it easier to distribute policies (and data) as well as monitor the health of your agents.

Bundle API

To simplify distribution of policy and data, you can now configure OPA to download “bundles” from remote HTTP endpoints. Bundles are simply gzipped tarballs containing Rego and JSON files. When you configure the bundle feature, OPA will periodically call out to the remote HTTP endpoint and GET the named bundle.
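A minimal sketch of that setup might look like the following; the file names are illustrative, and the exact configuration keys should be checked against the OPA configuration reference for your version:

# Build a bundle: a gzipped tarball of Rego policies and JSON data
tar czf authz-bundle.tar.gz example.rego data.json

# Host the tarball on any HTTP server, then start OPA with a config file that
# names that service and the bundle to poll
opa run --server --config-file config.yaml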

Read more at Medium

Meet Gloo, the ‘Function Gateway’ That Unifies Legacy APIs, Microservices, and Serverless

Most enterprises still have monolithic applications, but many are exploring the use of microservices. The monoliths are accessible via APIs and monitored by traditional application performance management (APM) tools, with deep dives provided by Splunk and other log investigation tools. With microservices — usually run on platforms such as Kubernetes or Cloud Foundry — monitoring is typically done through tools such as Prometheus (scalable monitoring) and OpenTracing (transactional logging). Typically, the microservices monitoring tools and the traditional ones do not play well together, necessitating two sets of tools that must be maintained for monitoring.

Adding to this architectural complexity is that many organizations are also exploring the realm of serverless, which is mostly cloud-driven at this point through services like AWS Lambda or Google Cloud Functions. These, too, have their own sets of monitoring tools, such as AWS X-Ray.

Solo’s approach is to treat each of these distinct styles of computing as a single kind of entity, the function, so they can then be monitored and managed with a single system. To realize this vision, the company built a function gateway, which is like an API gateway but for functions.

Read more at The New Stack