What was traditionally known simply as Monitoring has clearly been going through a renaissance over the last few years. The industry as a whole is finally moving away from Monitoring and Logging silos – something we’ve been doing and “preaching” for years – and Observability has emerged as the new moniker for everything that encompasses any form of infrastructure and application monitoring. Microservices have been around for over a decade under one name or another. With services now often deployed in separate containers, it became obvious that we need a way to trace transactions through the various microservice layers, from the client all the way down to queues, storage, calls to external services, etc. This created new interest in Transaction Tracing which, although not new itself, has re-emerged as the third pillar of observability….
In a distributed system, a trace encapsulates a transaction’s state as it propagates through the system. During the journey of the transaction, it can create one or more spans. A span represents a single unit of work inside a transaction, for example, an RPC client/server call, sending a query to the database server, or publishing a message to the message bus. In terms of the OpenTracing data model, a trace can also be seen as a collection of spans structured as a directed acyclic graph (DAG).
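As a rough, purely schematic sketch (real tracers use an OpenTracing client library, and every name below is made up), here is how two span records sharing a trace ID encode one edge of that DAG:

```bash
#!/usr/bin/env bash
# Schematic only: illustrates the trace/span relationship, not a real tracer.
trace_id=$(uuidgen)      # one ID for the whole transaction
root_span=$(uuidgen)     # root span: the incoming client request
child_span=$(uuidgen)    # child span: the database query it triggers

# Each record is a node in the trace's DAG; parent_id encodes the edges.
echo "{\"trace_id\": \"$trace_id\", \"span_id\": \"$root_span\", \"parent_id\": null, \"op\": \"http_request\"}"
echo "{\"trace_id\": \"$trace_id\", \"span_id\": \"$child_span\", \"parent_id\": \"$root_span\", \"op\": \"db_query\"}"
```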
Soon, Ubuntu 18.04, aka the Bionic Beaver, Canonical’s next long-term support version of its popular Linux distribution, will be out. That means it’s about time to consider how to upgrade to the latest and greatest Ubuntu Linux.
First, keep in mind that this Ubuntu will not look or feel like the last few versions. That’s because Ubuntu is moving its default desktop back to GNOME from Unity. The difference isn’t that big, but if you’re already comfortable with what you’re running, you may want to wait a while before switching over.
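For those ready to jump, the usual upgrade path from the command line looks something like this (a sketch, assuming you’re on 17.10 or 16.04 LTS; back up first):

```bash
# Bring the current release fully up to date before attempting the jump.
sudo apt update && sudo apt -y dist-upgrade

# Start the release upgrade. From 17.10 this offers 18.04 directly; from
# 16.04 LTS, upgrades are typically offered only after the first point
# release, per the Prompt= setting in /etc/update-manager/release-upgrades.
sudo do-release-upgrade
```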
Brigade gateways trigger new events in the Brigade system. While the included GitHub and container registry hooks are useful, the Brigade system is designed to make it easy for you to build your own. In this post, I show the quickest way to create a Brigade gateway using Node.js. How quick? We should be able to have it running in about five minutes.
Prerequisites
You’ll need both Brigade and Draft installed and configured. Make sure Draft is pointed at the same cluster where Brigade is installed.
If you are planning to build a more complex gateway, you might also want Node.js installed locally.
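A quick sanity check that everything points at the same cluster might look like this (a sketch; your release names will differ):

```bash
# Confirm which cluster kubectl (and therefore Draft) is talking to.
kubectl config current-context

# Confirm Brigade is installed in that cluster (Helm 2 syntax).
helm list | grep -i brigade

# Verify the Draft client can reach its server-side component.
draft version
```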
Getting Started
Draft provides a way of bootstrapping a new application with a starter pack. Starter packs can contain things like Helm charts and Dockerfiles. But they can also include code.
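With a starter pack for a Brigade gateway, scaffolding and deploying comes down to two commands. A minimal sketch, assuming such a pack exists (the pack name below is a placeholder, not a real repository):

```bash
mkdir my-gateway && cd my-gateway

# Scaffold the app from a starter pack (pack name is hypothetical).
draft create --pack=brigade-gateway

# Build the image and deploy it to the cluster Draft points at.
draft up
```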
Two sets of utilities—the GNU Core Utilities and util-linux—comprise many of the Linux system administrator’s most basic and regularly used tools. Their basic functions allow sysadmins to perform many of the tasks required to administer a Linux computer, including management and manipulation of text files, directories, data streams, storage media, process controls, filesystems, and much more….
These tools are indispensable because, without them, it is impossible to accomplish any useful work on a Unix or Linux computer. Given their importance, let’s examine them…
You can learn about all the individual programs that comprise the GNU Utilities by entering the command info coreutils at a terminal command line. The following list of the core utilities is part of that info page. The utilities are grouped by function to make specific ones easier to find; in the terminal, highlight the group you want more information on and press the Enter key.
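For example, once you know the program you want, you can jump straight to its section:

```bash
# Browse the full grouped index of the core utilities.
info coreutils

# Jump directly to one program's documentation, e.g. ls.
info coreutils 'ls invocation'
```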
Bash is known for admin utilities and text manipulation tools, but the venerable command shell included with most Linux systems also has some powerful commands for manipulating binary data.
One of the most versatile scripting environments available on Linux is the Bash shell. The core functionality of Bash includes many mechanisms for tasks such as string processing, mathematical computation, data I/O, and process management. When you couple Bash with the countless command-line utilities available for everything from image processing to virtual machine (VM) management, you have a very powerful scripting platform.
One thing that Bash is not generally known for is its ability to process data at the bit level; however, the Bash shell contains several powerful commands that allow you to manipulate and edit binary data. This article describes some of these binary commands and shows them at work in some practical situations.
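As a small taste of what’s ahead, here are a few of those commands at work on a hypothetical file named data.bin:

```bash
# Dump the first 32 bytes as hex plus printable characters.
od -A x -t x1z data.bin | head -2

# Extract a 4-byte field starting at byte offset 8.
dd if=data.bin bs=1 skip=8 count=4 2>/dev/null | od -A n -t x1

# Overwrite the single byte at offset 5 with 0xFF, in place.
printf '\xff' | dd of=data.bin bs=1 seek=5 conv=notrunc 2>/dev/null
```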
I’m a Software Engineer. Every day, I come into work and write code. That’s what I’m paid to do. As I write my code, I need to be confident that it’s of the highest quality. I can test it locally, but anyone who’s ever heard the words, “…but it works on my machine,” knows that’s not enough. There are huge differences between my local environment and my company’s production systems, both in terms of scale and integration with other components. Back in the day, production systems were complex, and setting them up required a deep knowledge of the underlying systems and infrastructure. To get a production-like environment to test my code, I would have to open a ticket with my IT department and wait for them to get to it and provision a new server (whether physical or virtual). This was a process that took a few days at best. That used to be OK when release cycles were several months apart. Today, it’s completely unacceptable.
Instant Environments Have Arrived
We all know this; it’s almost a cliché. Customers today will not wait months, weeks, or even days for urgent fixes and new features. They expect them almost instantly. Competition is fierce, and if you snooze, you lose. You must release fast or die! This is the reality of the software industry today. Everything is software, and software needs to be continuously tested and updated.
To keep up with ever-faster release cycles and deliver quality software – bug fixes, new features, and security updates – at near-real-time speed, developers need tooling that supports quick and accurate verification of their work. This need is met by virtualization and container technologies that put on-demand development environments at developers’ fingertips. Today, a developer can easily spin up a production-like Linux box on their own computer to run their application and test their work almost effortlessly.
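For instance, assuming Docker is installed and your production servers run CentOS 7, a disposable production-like box is one command away:

```bash
# Start a throwaway CentOS 7 container and drop into a shell;
# --rm removes the container the moment you exit.
docker run --rm -it centos:7 bash
```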
The K8s Solution for O11n
Over the past few years, the evolution of orchestration (o11n) tools has made it incredibly easy to deploy containerized applications to remote production-like environments while seamlessly taking care of concerns that would otherwise be developer overhead, such as security, networking, isolation, scaling, and healing.
Kubernetes is one of the most popular tools and has quickly become the leading orchestration platform for containerized applications. As an open-source tool, it has one of the biggest developer communities in the world. With many companies using Kubernetes in production, it has proven mileage and continues to lead the container orchestration pack.
Much of Kubernetes’ popularity comes from the ease in which you can spin up a cluster, deploy your applications to it and scale it to your needs. It’s really DIY-friendly and you won’t need any system or IT engineers to support your development efforts.
Once your cluster is ready, anyone can deploy an application to it using a simple set of endpoints provided by the Kubernetes API.
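As a sketch (the image name and ports below are placeholders), the path from an empty laptop to a running, exposed application can be this short:

```bash
# Spin up a local single-node cluster (assumes minikube is installed).
minikube start

# Deploy a containerized app and expose it inside the cluster; kubectl
# translates these commands into calls against the Kubernetes API.
kubectl create deployment my-app --image=myrepo/my-app:1.0
kubectl expose deployment my-app --port=80 --target-port=8080
kubectl get pods
```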
In the following sections, I’ll show you how easy it can be to run and test your code on a production-like environment.
An Effective Daily Routine with Kubernetes
The illustration below suggests an effective flow that, as a developer, you could adopt as your daily routine. It assumes that you have a production-like Kubernetes cluster set up as your development or staging environment.
Optimizing Deployment to Kubernetes with a Helm Repository
Several tools have evolved to help you integrate your development with Kubernetes, letting you easily deploy changes to your cluster. One of the most popular is Helm, the Kubernetes package manager. Helm gives you an easy way to manage the settings and configurations needed for your applications in Kubernetes. It also provides a way to specify all the pieces of your application as a single package and distribute it in an easy-to-use format.
But things get really interesting when you use a repository manager that supports Helm. A Kubernetes Helm repository adds capabilities like security and access control over your Helm charts, along with a REST API to automate the use of Helm charts when deploying your application to Kubernetes. The more advanced repository managers even offer features such as high availability and massively scalable storage, making them ready for use in enterprise-grade systems.
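In practice, the package-and-distribute cycle looks roughly like this with Helm 2, current as of this writing (chart and repository names are placeholders):

```bash
# Package your chart directory into a versioned archive.
helm package my-app/

# Point Helm at a chart repository; a Helm-aware repository manager
# exposes this same API with access control layered on top.
helm repo add myrepo https://charts.example.com
helm repo update

# Install the packaged application into your cluster.
helm install myrepo/my-app --name my-app
```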
Other Players in the Field
Helm is not the only tool you can use to deploy an application to Kubernetes. There are other alternatives, some of which even integrate with IDEs and CI/CD tools. To help you decide which tool best meets your needs, you can read this post comparing Draft vs. Gitkube vs. Helm vs. Ksonnet vs. Metaparticle vs. Skaffold. There are many other tools that help you set up and integrate with Kubernetes. You can see a flat list in this Kubernetes tools repository.
Your One Takeaway
Several container orchestration tools are available; however, the ease with which Kubernetes lets you spin up a cluster and deploy your applications to it has fueled its dominance in the market. The combination of Kubernetes and a tool like Helm puts production-like systems in the hands of every developer. With the ability to spin up a Kubernetes cluster on virtually any development machine, developers can easily implement a fully automated CI/CD pipeline and deliver bug fixes, security patches, and new features with the confidence that they will run as expected when deployed to production. If there’s one takeaway you should get from this article, it’s that even if you’re already releasing fast, Kubernetes and Helm can make your development cycles even shorter and more reliable, letting you release better-quality code faster.
Eldad Assis, DevOps Architect, JFrog
Eldad Assis has been working on infrastructure for years, and loving it! DevOps architect and advocate. Automation everywhere!
For similar topics on Kubernetes and Helm, consider attending KubeCon + CloudNativeCon EU, May 2-4, 2018 in Copenhagen, Denmark.
The goal of the National Oceanic and Atmospheric Administration (NOAA) is to put all of its data – data about weather, climate, oceans and coasts, fisheries, and ecosystems – into the hands of the people who need it most. The trick is translating the hard data and making it useful to people who aren’t necessarily subject matter experts, said Edward Kearns, NOAA’s first-ever data officer, speaking at the recent Open Source Leadership Summit (OSLS).
NOAA’s mission is similar to NASA’s in that it is science-based, but “our mission is operations; to get the quality information to the American people that they need to run their businesses, to protect their lives and property, to manage their water resources, to manage their ocean resources,” said Kearns during his talk, titled “Realizing the Full Potential of NOAA’s Open Data.”
Now the NOAA is looking to find a way to make the data available to an even wider group of people and make it more easily understood. Those are their two biggest challenges: how to disseminate data and how to help people understand it, Kearns said.
Heptio added a new load balancer to its stable of open-source projects Monday, targeting Kubernetes users who are managing multiple clusters of the container-orchestration tool alongside older infrastructure.
Gimbal, developed in conjunction with Heptio customer Actapio, was designed to route network traffic within Kubernetes environments set up alongside OpenStack, said Craig McLuckie, co-founder and CEO of Heptio. It can replace expensive hardware load-balancers — which manage the flow of incoming internet traffic across multiple servers — and allow companies with outdated but stable infrastructure to take advantage of the scale that Kubernetes can allow.
“We’re just at the start of figuring out what are the things (that) we can build on top of Kubernetes,” said McLuckie in an interview last week at Heptio’s offices in downtown Seattle. The startup, founded by McLuckie and fellow Kubernetes co-creator Joe Beda, has raised $33.5 million to build products and services designed to make Kubernetes more prevalent and easy to use.
Author Note: This is a post by Thomas Graf, long-time Linux kernel networking developer and creator of the Cilium project.
The Linux kernel community recently announced bpfilter, which will replace the long-standing in-kernel implementation of iptables with high-performance network filtering powered by Linux BPF, all while guaranteeing a non-disruptive transition for Linux users.
From humble roots as the packet filtering capability underlying popular tools like tcpdump and Wireshark, BPF has grown into a rich framework that extends the capabilities of Linux in a highly flexible manner without sacrificing key properties like performance and safety. This powerful combination has led forward-leaning users of Linux kernel technology like Google, Facebook, and Netflix to choose BPF for use cases ranging from network security and load balancing to performance monitoring and troubleshooting. Brendan Gregg of Netflix first called BPF “Superpowers for Linux.” This post will cover how these “superpowers” render long-standing kernel sub-systems like iptables redundant while simultaneously enabling new in-kernel use cases that few would have previously imagined were possible….
Over the years, iptables has been both a blessing and a curse: a blessing for its flexibility and quick fixes, but a curse when you’re debugging a 5,000-rule iptables setup in an environment where multiple system components are fighting over who gets to install which rules.
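Both halves of that story are visible from the command line; the interface name below is illustrative:

```bash
# tcpdump compiles its filter expression to BPF; -d prints the resulting
# bytecode instead of capturing packets.
sudo tcpdump -i eth0 -d 'tcp port 80'

# iptables evaluates rules linearly, so counting the rules on a busy
# host shows why a 5,000-rule setup hurts.
sudo iptables -S | wc -l
```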
Let’s cut to the chase. Android is the most popular of all Linux distributions. Period. End of statement. But that’s not the entire story.
But, setting Android aside, what’s the most popular Linux? It’s impossible to work that out. The website-based analysis tools, such as those used by StatCounter, NetMarketShare, and the Federal government’s Digital Analytics Program (DAP), can’t tell the difference between Fedora, openSUSE, and Ubuntu.
As for what most people think of as “Linux distros,” the best data we have comes from DistroWatch’s Page Hit Ranking. DistroWatch is the most comprehensive source of desktop Linux user data and news.