
How Service Meshes Are a Missing Link for Microservices

It is now widely assumed that making the shift to microservices and cloud native environments will only lead to great things. And yet, as the rush to the cloud builds momentum, many organizations are rudely waking up to more than a few difficulties along the way, such as the fact that the Kubernetes and serverless platforms on offer often remain works in progress.

“We are coming to all those communities and basically pitching them to move, right? We tell them, ‘look, monolithic is very complicated — let’s move to microservices,’ so, they are working very, very hard to move but then they discover that the tooling is not that mature,” Idit Levine, founder and CEO of Solo.io, said. “And actually, there are many gaps in the tooling that they had or used it before and now they’re losing this functionality. For instance, like logging or like debugging microservices is a big problem and so on.” …

One of the first things organizations notice when migrating away from monolithic to microservices environments is how “suddenly you’re losing your observability for all of the applications,” Levine said. “That’s one thing that [service meshes] is really big in solving.”

Read more at The New Stack

Community Demos at ONS to Highlight LFN Project Harmonization and More

A little more than one year since LF Networking (LFN) came together, the project continues to demonstrate strategic growth, with Telstra coming on in November, and the umbrella project representing ~70% of the world’s mobile subscribers. Working side by side with service providers in the technical projects has been critical to ensure that work coming out of LFN is relevant and useful for meeting their requirements. A small sample of these integrations and innovations will be on display once again in the LF Networking Booth at the Open Networking Summit Event, April 3-5 in San Jose, CA. We’re excited to be back in the Bay Area this year and are offering Day Pass and Hall Pass Options. Register today!

Due to demand from the community, we’ve expanded the number of demo stations from 8 to 10, covering many areas in the open networking stack — with projects from within the LF Networking umbrella (FD.io, OpenDaylight, ONAP, OPNFV, and Tungsten Fabric), as well as adjacent projects such as Consul, Docker, Envoy, Istio, Ligato, Kubernetes, OpenStack, and SkyDive. We welcome you to come spend some time talking to and learning from these experts in the technical community.

The LFN promise of project harmonization will be on display from members iConectiv, Huawei, and Vodafone, who will highlight ONAP’s onboarding process for testing Virtual Network Functions (VNFs), integrating with OPNFV’s Dovetail project, supporting the expanding OPNFV Verification Program (OVP), and paving the way for future compliance and certification testing. Another example of project harmonization is the use of common infrastructure for testing across LFN projects, and the folks from UNH-IOL will be on hand to walk users through the Lab-as-a-Service offering that is facilitating development and testing by hosting hardware and providing access to the community.

Other focus areas include network slicing, service mesh, SDN performance, testing, integration, and analysis.

Listed below is the full networking demo lineup, and you can read detailed descriptions of each demo here.

  • Demonstrating Service Provider Specific Test & Verification Requirements (OPNFV, ONAP) – Proposed By Vodafone and Demonstrated by Vodafone/Huawei/iConectiv
  • Integration of ODL and Network Service Mesh (OpenDaylight, Network Service Mesh) – Presented by Lumina Networks
  • SDN Performance Comparison Between Multi Data-paths (Tungsten Fabric, FD.io) – Presented by ATS, and Sofioni Networks
  • ONAP Broadband Service Use Case (ONAP, OPNFV) – Presented by Swisscom, Huawei, and Nokia
  • OpenStack Rolling Upgrade with Zero Downtime to Application (OPNFV, OpenStack) – Presented By Nokia and Deutsche Telekom
  • Skydive & VPP Integration: A Topology Analyzer (FD.io, Ligato, Skydive, Docker) – Presented by PANTHEON.tech
  • VSPERF: Beyond Performance Metrics, Towards Causation Analysis (OPNFV) – Presented by Spirent
  • Network Slice with ONAP based Life Cycle Management (ONAP) – Presented by AT&T, and Ericsson
  • Service Mesh and SDN: At-Scale Cluster Load Balancing (Tungsten Fabric, Linux OS, Istio, Consul, Envoy, Kubernetes, HAProxy) – Presented by CloudOps, Juniper and the Tungsten Fabric Community
  • Lab as a Service 2.0 Demo (OPNFV) – Presented by UNH-IOL

We hope to see you at the show! Register today!

This article originally appeared at Linux Foundation

Kubernetes Setup Using Ansible and Vagrant

This blog post describes the steps required to set up a multi-node Kubernetes cluster for development purposes. This setup provides a production-like cluster that can be set up on your local machine.

Why do we require a multi-node cluster setup?

Multi-node Kubernetes clusters offer a production-like environment, which has various advantages. Even though Minikube provides an excellent platform for getting started, it doesn’t provide the opportunity to work with multi-node clusters, which can help solve problems or bugs that are related to application design and architecture. For instance, Ops can reproduce an issue in a multi-node cluster environment, and testers can deploy multiple versions of an application for executing test cases and verifying changes. These benefits enable teams to resolve issues faster, making them more agile.

Why use Vagrant and Ansible?

Vagrant is a tool that allows us to create a virtual environment easily, eliminating the pitfalls that cause the “works on my machine” phenomenon. It can be used with multiple providers such as Oracle VirtualBox, VMware, Docker, and so on. It allows us to create a disposable environment by making use of configuration files.

Ansible is an infrastructure automation engine that automates software configuration management. It is agentless and allows us to use SSH keys for connecting to remote machines. Ansible playbooks are written in YAML and offer inventory management in simple text files.

Prerequisites

  • Vagrant should be installed on your machine. Installation binaries can be found here.
  • Oracle VirtualBox can be used as a Vagrant provider or make use of similar providers as described in Vagrant’s official documentation.
  • Ansible should be installed on your machine. Refer to the Ansible installation guide for platform-specific installation.
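Before running the tutorial’s Vagrantfile and playbooks, it can be worth confirming that both tools are actually on your PATH. A minimal sketch (the messages and check style are my own, not part of the tutorial):

```shell
# Check that the prerequisite tools are installed and reachable.
# `command -v` is the portable way to test for a binary on PATH.
command -v vagrant >/dev/null 2>&1 && echo "vagrant: OK" || echo "vagrant: not found"
command -v ansible >/dev/null 2>&1 && echo "ansible: OK" || echo "ansible: not found"
```

If either line reports “not found”, install the tool from the links above before proceeding.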

Read more at Kubernetes.io

A Linux Sysadmin’s Guide to Network Management, Troubleshooting and Debugging

A system administrator’s routine tasks include configuring, maintaining, troubleshooting, and managing servers and networks within data centers. There are numerous tools and utilities in Linux designed for administrative purposes.

In this article, we will review some of the most used command-line tools and utilities for network management in Linux, under different categories. We will explain some common usage examples, which will make network management much easier in Linux.

Network Configuration, Troubleshooting and Debugging Tools

1. ifconfig Command

ifconfig is a command-line tool for network interface configuration and is also used to initialize an interface at system boot time. Once a server is up and running, it can be used to assign an IP address to an interface and enable or disable the interface on demand.

It is also used to view the status, IP address, hardware/MAC address, and MTU (Maximum Transmission Unit) size of the currently active interfaces. ifconfig is thus useful for debugging or performing system tuning.

Here is an example to display the status of all active network interfaces.
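A minimal sketch of such a command (note that ifconfig ships in the net-tools package and is absent on some minimal installs, so the fallback to the newer `ip` tool below is my own addition, not from the article):

```shell
# Display the status of all network interfaces (active and inactive).
# Fall back to `ip addr show` from iproute2 where ifconfig is unavailable.
ifconfig -a 2>/dev/null || ip addr show
```

The output lists each interface (such as the loopback device lo) along with its addresses, flags, and MTU.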

Read more at Tecmint

Why You Need DevOps and the Cloud

DevOps and the cloud are not only two of today’s biggest tech trends, but are inextricably linked. Research from DORA’s 2018 Accelerate State of DevOps Report and Redgate’s 2019 State of Database DevOps Report outline a clear correlation between cloud and DevOps adoption, with the two working together to contribute to greater business success.

For example, in the DORA report, those companies that met all essential cloud characteristics were 23 times more likely to be in the elite group when it came to DevOps performance. Similarly, Redgate found that 43 percent of organizations that have adopted DevOps have server estates that are all or mostly cloud-based. This compares to just 12 percent of organizations that have not yet adopted DevOps or have no plans to.

Looking into the link in more detail, research shows that there are four common factors that underpin DevOps and the cloud:

Cultural Openness to Transformation

Adopting DevOps is not just a technical or process change, but one that requires a certain type of culture to be in place. 

Read more at DevOps.com

Linux Desktop News: Solus 4 Released With New Budgie Goodness

After teasing fans for several months with the 3.9999 ISO refresh, the team at Solus has delivered “Fortitude,” a new release of the independent Linux desktop OS. And like elementary OS did with Juno, it seems to earn that major version number.

Perhaps the most notable upgrade is the appearance of Budgie 10.5, even before it lands on the slick desktop environment’s official Ubuntu flavor next month. I first experienced Budgie during my review of the InfinityCube from Tuxedo Computers, and I found a lot to love about it. … Read more at Forbes

Finding Files with mlocate

Learn how to locate files in this tutorial from our archives.

It’s not uncommon for a sysadmin to have to find needles buried deep inside haystacks. On a busy machine, there can be hundreds of thousands of files present on your filesystems. What do you do when you need to make sure one particular configuration file is up to date, but you can’t remember where it is located?

If you’ve used Unix-type machines for a while, then you’ve almost certainly come across the find command before. It is unquestionably sophisticated and highly functional. Here’s an example that just searches for links inside a directory, ignoring files:

# find . -lname "*"

You can do seemingly endless things with the find command; there’s no denying that. The find command is nice and succinct when it wants to be, but it can also get complex very quickly. This is not necessarily because of the find command itself; coupled with xargs, you can pass it all sorts of options to tune your output, and indeed delete the files you have found.
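To make the find-plus-xargs pairing concrete, here is a small self-contained sketch (the directory and file names are invented for illustration):

```shell
# Create a throwaway directory with two regular files and a symlink.
dir=$(mktemp -d)
touch "$dir/app.conf" "$dir/db.conf"
ln -s "$dir/app.conf" "$dir/shortcut"

# -type f restricts the match to regular files (the symlink is skipped);
# -print0 and xargs -0 keep names with spaces or newlines intact.
find "$dir" -type f -print0 | xargs -0 ls -l

# Clean up.
rm -rf "$dir"
```

Swapping `ls -l` for `rm --` is how the “delete those files which you have found” pattern is usually written, though it pays to dry-run with a harmless command first.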

Location, location, frustration

There often comes a time when simplicity is the preferred route, however — especially when a testy boss is leaning over your shoulder, chatting away about how time is of the essence. And, imagine trying to vaguely guess the path of the file you’ve never seen but that your boss is certain lives somewhere on the busy /var partition.

Step forward mlocate. You may be aware of one of its close relatives: slocate, which securely (note the prepended letter s for secure) took note of the pertinent file permissions to prevent unprivileged users from seeing privileged files. Additionally, there is also the older, original locate command whence they came.

The difference between mlocate and other members of its family (according to mlocate at least) is that, when scanning your filesystems, mlocate doesn’t need to continually rescan all your filesystem(s). Instead, it merges its findings (note the prepended m for merge) with any existing file lists, making it much more performant and less heavy on system caches.

In this series of articles, we’ll look more closely at the mlocate tool (and simply refer to it as “locate” due to its popularity) and examine how to quickly and easily tune it to your heart’s content.

Compact and Bijou

If you’re anything like me, then unless you reuse complex commands frequently, you ultimately forget them and need to look them up. The beauty of the locate command is that you can query entire filesystems very quickly, without worrying about top-level root paths, using a simple command.

In the past, you might well have discovered that the find command can be very stubborn and cause you lots of unwelcome head-scratching. You know, a missing semicolon here or a special character not being escaped properly there. Let’s leave the complicated find command alone now, relax, and have a look into the clever little command that is locate.

You will most likely want to check that it’s on your system first by running these commands:

For Red Hat derivatives:

# yum install mlocate

For Debian derivatives:

# apt-get install mlocate

There shouldn’t be any differences between distributions, but there are almost certainly subtle differences between versions; beware.

Next, we’ll introduce a key component to the locate command, namely updatedb. As you can probably guess, this is the command which updates the locate command’s db. Hardly counterintuitive.

The db is the locate command’s file list, which I mentioned earlier. That list is held in a relatively simple and highly efficient database for performance. updatedb runs periodically, usually at quiet times of the day, scheduled via a cron job. In Listing 1, we can see the innards of the file /etc/cron.daily/mlocate.cron (both the file’s path and its contents may be distro- and version-dependent).

#!/bin/sh

nodevs=$(< /proc/filesystems awk '$1 == "nodev" { print $2 }')

renice +19 -p $$ >/dev/null 2>&1

ionice -c2 -n7 -p $$ >/dev/null 2>&1

/usr/bin/updatedb -f "$nodevs"

Listing 1: How the “updatedb” command is triggered every day.

As you can see, the mlocate.cron script makes careful use of the excellent nice and ionice commands in order to have as little impact as possible on system performance. I haven’t explicitly stated that this command runs at a set time every day (although if my addled memory serves, the original locate command was associated with a slow-down-your-computer run scheduled at midnight). This is because, on some cron versions, delays are now introduced into overnight start times.

This is probably because of the so-called Thundering Herd of Hippos problem. Imagine lots of computers (or hungry animals) waking up at the same time to demand food (or resources) from a single or limited source. This can happen when all your hippos set their wristwatches using NTP (okay, this allegory is getting stretched too far, but bear with me). Imagine that exactly every five minutes (just as a “cron job” might) they all demand access to food or something otherwise being served.

If you don’t believe me, have a quick look at the configuration of anacron, a version of cron, in Listing 2, which shows the guts of the file /etc/anacrontab.

# /etc/anacrontab: configuration file for anacron

# See anacron(8) and anacrontab(5) for details.


SHELL=/bin/sh

PATH=/sbin:/bin:/usr/sbin:/usr/bin

MAILTO=root

# the maximal random delay added to the base delay of the jobs

RANDOM_DELAY=45

# the jobs will be started during the following hours only

START_HOURS_RANGE=3-22

#period in days   delay in minutes   job-identifier   command

1       5       cron.daily              nice run-parts /etc/cron.daily

7       25      cron.weekly             nice run-parts /etc/cron.weekly

@monthly 45     cron.monthly            nice run-parts /etc/cron.monthly 

Listing 2: How delays are introduced into when “cron” jobs are run.

From Listing 2, you have hopefully spotted both “RANDOM_DELAY” and the “delay in minutes” column. If this aspect of cron is new to you, then you can find out more here:

# man anacrontab

Failing that, you can introduce a delay yourself if you’d like. An excellent web page (now more than a decade old) discusses this issue in a perfectly sensible way. This website discusses using sleep to introduce a level of randomness, as seen in Listing 3.

#!/bin/sh

# Grab a random value between 0-240.
value=$RANDOM
while [ $value -gt 240 ] ; do
 value=$RANDOM
done

# Sleep for that time.
sleep $value

# Synchronize.
/usr/bin/rsync -aqzC --delete --delete-after masterhost::master /some/dir/

Listing 3: A shell script to introduce random delays before triggering an event, to avoid a Thundering Herd of Hippos.

The aim in mentioning these (potentially surprising) delays was to point you at the file /etc/crontab, or the root user’s own crontab file. If you want to change the time at which the locate command runs, specifically because of disk access slowdowns, then it’s not too tricky. There may be a more graceful way of achieving this result, but you can also just move the file /etc/cron.daily/mlocate.cron somewhere else (I’ll use the /usr/local/etc directory) and, as the root user, add an entry to root’s crontab with this command, then paste the content as below:

# crontab -e

33 3 * * * /usr/local/etc/mlocate.cron

Rather than traipse through /var/log/cron and its older, rotated, versions, you can quickly tell the last time your cron.daily jobs were fired, in the case of anacron at least, with:

# ls -hal /var/spool/anacron

Next time, we’ll look at more ways to use locate, updatedb, and other tools for finding files.

Learn more about essential sysadmin skills: Download the Future Proof Your SysAdmin Career ebook now.

Chris Binnie’s latest book, Linux Server Security: Hack and Defend, shows how hackers launch sophisticated attacks to compromise servers, steal data, and crack complex passwords, so you can learn how to defend against these attacks. In the book, he also talks you through making your servers invisible, performing penetration testing, and mitigating unwelcome attacks. You can find out more about DevSecOps and Linux security via his website (http://www.devsecops.cc).

Searchable List of Certified Open Hardware Projects

In this article, and hopefully more to come, I will share interesting examples of hardware that has been certified by the Open Source Hardware Association (OSHWA).

As an introduction to this series, I’ll start with a little background.

What is open source hardware?

Open source hardware is hardware that is, well, open source. The Open Source Hardware Association maintains a formal definition of open source hardware, but fundamentally, open source hardware is about two types of freedom. The first is freedom of information: Does a user have the information required to understand, replicate, and build upon the hardware? The second is freedom from legal barriers: Will legal barriers (such as intellectual property rights) prevent a user who is trying to understand, replicate, and build upon the hardware from doing so? True open source hardware is open to everyone to do with as they see fit.

Read more at OpenSource.com

The What and the Why of the Cluster API

Throughout the evolution of software tools there exists a tension between generalization and partial specialization. A tool’s broader adoption is a form of natural selection, where its evolution is predicated on filling a given need, or role, better than its competition. This premise is embodied in the central tenets of the Unix philosophy:

  • Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new features.
  • Expect the output of every program to become the input to another, as yet unknown, program.

The domain of configuration management tooling is rife with examples of not heeding this lesson (e.g., Terraform, Puppet, Chef, Ansible, Juju, Saltstack), where an expansion in generality has given way to partial specialization across different tools, causing fragmentation of an ecosystem. This pattern has not gone unnoticed by those in the Kubernetes cluster lifecycle special interest group, or SIG, whose objective is to simplify the creation, configuration, upgrade, downgrade, and teardown of Kubernetes clusters and their components. Therefore, one of the primary design principles for any subproject that the SIG endorses is: Where possible, tools should be composable to solve a higher order set of problems.

In this post, we will outline the history and motivations behind the creation of the Cluster API as a specialized toolset to bring declarative, Kubernetes-style APIs to cluster creation, configuration, and management in the Kubernetes ecosystem. The Cluster API is not meant to supplant existing tools, but to serve as a partial specialization that can be used in a composable fashion with those tools.

Read more at VMware

Understanding GCC Warnings

Most of us appreciate when our compiler lets us know we made a mistake. Finding coding errors early lets us correct them before they embarrass us in a code review or, worse, turn into bugs that impact our customers. Beyond the compulsory errors, many projects enable additional diagnostics by using the -Wall and -Wextra command-line options; some projects even turn warnings into errors via -Werror as a first line of defense. But not every instance of a warning necessarily means the code is buggy. Conversely, the absence of warnings for a piece of code is no guarantee that there are no bugs lurking in it.

In this article, I would like to shed more light on trade-offs involved in the GCC implementation choices. Besides illuminating underlying issues for GCC contributors interested in implementing new warnings or improving existing ones, I hope it will help calibrate expectations for GCC users about what kinds of problems can be expected to be detected and with what efficacy. Having a better understanding of the challenges should also reduce the frustration the limitations of the available solutions can sometimes cause. (See part 2 to learn more about middle-end warnings.)

Read more at Red Hat Developers