
Do You Need a Service Mesh?

Beyond the hype, it’s necessary to understand what a service mesh is and what concrete problems it solves so you can decide whether you might need one.

A brief introduction to the service mesh

The service mesh is a dedicated infrastructure layer for handling service-to-service communication in order to make it visible, manageable, and controlled. The exact details of its architecture vary between implementations, but generally speaking, every service mesh is implemented as a series (or a “mesh”) of interconnected network proxies designed to better manage service traffic.

This type of solution has gained recent popularity with the rise of microservice-based architectures, which introduce a new breed of communication traffic that, unfortunately, is often adopted without much forethought. The shift is sometimes described as the difference between north-south and east-west traffic patterns. Put simply, north-south traffic is server-to-client traffic, whereas east-west is server-to-server traffic. The naming convention comes from diagrams that “map” network traffic, which typically draw vertical lines for server-to-client traffic and horizontal lines for server-to-server traffic. In the world of server-to-server traffic, aside from considerations at the network and transport layers (L3/L4), there is a critical difference at the session layer to account for.

 

Read more at O’Reilly

Persistent Volumes for Docker Containers

Docker guarantees the same environment on all target systems: If the Docker container runs for the author, it also runs for the user and can even be preconfigured accordingly. Although Docker containers seem like a better alternative to the package management of current distributions (i.e., RPM and dpkg), the design assumptions underlying Docker and the containers distributed by Docker differ fundamentally from classic virtualization. One big difference is that a Docker container does not have persistent storage out of the box: If you delete a container, all data contained in it is lost.

Fortunately, Docker offers a solution to this problem: A volume service can provide a container with persistent storage. The volume service is merely an API that uses functions in the loaded Docker plugins. For many types of storage, plugins allow containers to be connected directly to a specific storage technology. In this article, I first explain the basic intent of persistent storage in Docker and why a detour through the volume service is necessary. Then, in two types of environments – OpenStack and VMware – I show how persistent storage can be used in Docker with the appropriate plugins.

Planned Without Storage

The reason persistent storage is not automatically included with the delivery of every Docker container goes back to the time long before Docker itself existed. The cloud is to blame: It made the idea of permanent storage obsolete because storage regularly poses a challenge in classic virtualization setups. If you compare classic virtualization and the cloud, it quickly becomes clear that two worlds collide here. A virtual machine (VM) in a classic environment rightly assumes that it is on persistent storage, so the entire VM can be moved from one host to another. …

When dealing with persistent storage, Docker clearly must solve precisely those problems that have always played an important role in classic virtualization. Without redundancy at the storage level, for example, such a setup cannot operate effectively: The failure of a single container node would mean that many customer setups would no longer function properly. The risk that the failure of individual systems would hit precisely the critical points of customer setups, such as the databases, is clearly too great in this constellation.

The Docker developers have found a smart solution to the problem: The service that takes care of volumes for Docker containers can also commission storage locally and connect it to a container. Here, Docker makes it clear that these volumes are not redundant; that is, Docker did not even tackle the problem of redundant volumes itself. Instead, the project points to external solutions: Various approaches are now on the market that offer persistent storage for clouds and deal with issues such as internal redundancy. One of the best-known representatives is Ceph. To enable the use of such storage services, the Docker volume service is coupled with the existing plugin system, so the corresponding plugin of an external solution can provide redundant volumes for Docker containers.
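As a minimal sketch of how the volume service looks in practice (the volume and container names are illustrative, and an external storage plugin would supply its own value for --driver):

# create a named volume with the built-in local (non-redundant) driver
docker volume create --driver local mydata

# attach the volume to a container; data written to /data survives container removal
docker run --rm -it -v mydata:/data alpine sh

# list and inspect the volumes managed by the volume service
docker volume ls
docker volume inspect mydata

With a storage plugin installed – one for Ceph, for example – the workflow stays the same; only the --driver value and its options change.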

Read more at ADMIN

Linux history Command Tutorial for Beginners (8 Examples)

If your work involves running tools and scripts on the Linux command line, I am sure there are a lot of commands you would be running each day. Those new to the command line should know there exists a tool – dubbed history – that gives you a list of commands you’ve executed earlier.

In this tutorial, we will discuss the basics of the history command using some easy-to-understand examples. But before we do that, it’s worth mentioning that all examples here have been tested on an Ubuntu 16.04 LTS machine.

Linux history command

If you know how to effectively utilize your command-line history, you can save a lot of time on a daily basis. Following are some Q&A-styled examples that should give you a good idea of how you can use the history command to your benefit.
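As a quick preview of the kind of usage the tutorial walks through (a minimal sketch using standard Bash history features; the entry number is illustrative):

$ history             # list the commands you have executed, with numbers
$ history 5           # show only the last 5 entries
$ !105                # re-run the command stored at history entry 105
$ !!                  # repeat the most recent command
$ history -c          # clear the current session's history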

Read more at HowtoForge

An Introduction to Using Git

If you’re a developer, then you know your way around development tools. You’ve spent years studying one or more programming languages and have perfected your skills. You can develop with GUI tools or from the command line. On your own, nothing can stop you. You code as if your mind and your fingers are one to create elegant, perfectly commented, source for an app you know will take the world by storm.

But what happens when you’re tasked with collaborating on a project? Or what about when that app you’ve developed becomes bigger than just you? What’s the next step? If you want to successfully collaborate with other developers, you’ll want to use a distributed version control system. With such a system, collaborating on a project becomes incredibly efficient and reliable. One such system is Git. Along with Git comes a handy hosting service called GitHub, where you can house your projects so that a team can check out and check in code.

I will walk you through the very basics of getting Git up and running and using it with GitHub, so the development on your game-changing app can be taken to the next level. I’ll be demonstrating on Ubuntu 18.04, so if your distribution of choice is different, you’ll only need to modify the Git install commands to suit your distribution’s package manager.

Git and GitHub

The first thing to do is create a free GitHub account. Head over to the GitHub signup page and fill out the necessary information. Once you’ve done that, you’re ready to move on to installing Git (you can actually do these two steps in any order).

Installing Git is simple. Open up a terminal window and issue the command:

sudo apt install git-all

This will include a rather large number of dependencies, but you’ll wind up with everything you need to work with Git and GitHub.

On a side note: I use Git quite a bit to download source for application installation. There are times when a piece of software isn’t available via the built-in package manager. Instead of downloading the source files from a third-party location, I’ll often go to the project’s Git page and clone the package like so:

git clone ADDRESS

Where ADDRESS is the URL given on the software’s Git page.
Doing this almost always ensures I am installing the latest release of a package.

Create a local repository and add a file

The next step is to create a local repository on your system (we’ll call it newproject and house it in ~/). Open up a terminal window and issue the commands:

cd ~/

mkdir newproject

cd newproject

Now we must initialize the repository. In the ~/newproject folder, issue the command git init. When the command completes, you should see that the empty Git repository has been created (Figure 1).

Figure 1: Our new repository has been initialized.

Next we need to add a file to the project. From within the root folder (~/newproject) issue the command:

touch readme.txt

You will now have an empty file in your repository. Issue the command git status to verify that Git is aware of the new file (Figure 2).

Figure 2: Git knows about our readme.txt file.

Even though Git is aware of the file, it hasn’t actually been added to the project. To do that, issue the command:

git add readme.txt

Once you’ve done that, issue the git status command again to see that readme.txt is now considered a new file in the project (Figure 3).

Figure 3: Our file has now been added to the staging environment.

Your first commit

With the new file in the staging environment, you are now ready to create your first commit. What is a commit? Easy: A commit is a record of the files you’ve changed within the project. Creating the commit is actually quite simple. It is important, however, that you include a descriptive message for the commit. By doing this, you are adding notes about what the commit contains (such as what changes you’ve made to the file). Before we do this, however, we have to inform Git who we are. To do this, issue the commands:

git config --global user.email EMAIL

git config --global user.name "FULL NAME"

Where EMAIL is your email address and FULL NAME is your name.

Now we can create the commit by issuing the command:

git commit -m "Descriptive Message"

Where Descriptive Message is your message about the changes within the commit. For example, since this is the first commit for the readme.txt file, the commit could be:

git commit -m "First draft of readme.txt file"

You should see output indicating that 1 file has changed and a new mode was created for readme.txt (Figure 4).

Figure 4: Our commit was successful.

Create a branch and push it to GitHub

Branches are important, as they allow you to move between project states. Let’s say you want to create a new feature for your game-changing app. To do that, create a new branch. Once you’ve completed work on the feature, you can merge the feature branch back into the master branch. To create the new branch, issue the command:

git checkout -b BRANCH

Where BRANCH is the name of the new branch. Once the command completes, issue the command git branch to see that it has been created (Figure 5).

Figure 5: Our new branch, called featureX.
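Merging the finished feature back into master isn’t covered in this walkthrough, but as a minimal sketch (using the featureX branch from Figure 5), it would look something like:

git checkout master

git merge featureX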

Next we need to create a repository on GitHub. Log into your GitHub account and click the New Repository button on your account’s main page. Fill out the necessary information and click Create repository (Figure 6).

Figure 6: Creating the new repository on GitHub.

After creating the repository, you will be presented with a URL to use for pushing your local repository. To do this, go back to the terminal window (still within ~/newproject) and issue the commands:

git remote add origin URL

git push -u origin master

Where URL is the URL for your new GitHub repository.

You will be prompted for your GitHub username and password. Once you successfully authenticate, the project will be pushed to your GitHub repository and you’re ready to go.

Pulling the project

Say your collaborators make changes to the code on the GitHub project and have merged those changes. You will then need to pull the project files to your local machine, so the files you have on your system match those on the remote account. To do this, issue the command (from within ~/newproject):

git pull origin master

The above command will pull down any new or changed files to your local repository.

The very basics

And that is the very basics of using Git from the command line to work with a project stored on GitHub. There is quite a bit more to learn, so I highly recommend you issue the commands man git, man git-push, and man git-pull to get a more in-depth understanding of what the git command can do.

Happy developing!

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Using Linux Containers to Manage Embedded Build Environments

Linux container technology has been proposed by companies like Resin.io as a simpler and more secure way to deploy embedded devices. And, Daynix Computing has developed an open source framework called Rebuild that uses Linux containers in the build management process of embedded IoT development. At the 2017 Open Source Summit, Daynix “virtualization expert” Yan Vugenfirer gave a presentation on Rebuild called “How Linux Containers can Help to Manage Development Environments for IoT and Embedded Systems.”

Vugenfirer started by reminding the audience of the frustrations of embedded development, especially when working with large, complex projects. “You’re dealing with different toolchains, SDKs, and compilers all with different dependencies,” he said. “It gets more complicated if you need to update packages, or change SDKs, or run a codebase over several devices. The code may compile on your machine, but there may be problems in the build server or in the CI (continuous integration) server.”

Rebuild offers an easier way to manage build environments by leveraging the power of Linux containers, said Vugenfirer. “Rebuild allows seamless integration between the containers and your existing build environment and enables easy sharing of environments between team members. We are mapping the local file system into the container in order to preserve all permissions and ownership when working with source code. The code is stored locally on our file system while we work with different environments.”

Built on Ruby (2.0 or later, packaged as a gem) and Docker, Rebuild supports Linux, Windows, and Mac OS environments. It’s available in free and commercial versions.

The software lets you run multiple platforms on the same machine and gives you the assurance that it will run the same on the build or CI servers as it does on your development workstation, said Vugenfirer. The developer can choose whether actions performed on the build environment change the original.

Build management designed for teams

The software is particularly useful for sharing environments between team members. It provides a “unified experience” and offers features like “track versioning,” said Vugenfirer. Other benefits include easier technical support, as well as the ability to “reproduce an environment in the future with different developers.” Rebuild also “eases preparation for certification audits for medical or automotive devices by letting you show what libraries are certified,” he added.

Developers control Rebuild via the Rebuild CLI, which “acts as a gateway for scripts on CI or build servers,” explained Vugenfirer. Below that in the architecture hierarchy sits Docker, and at the base level is the Environments Registry, which works with Docker to manage and share the container images. Rebuild currently supports DockerHub as the default, as well as private Docker registries and Daynix’s own Rebuild Native Registry.

Developers can use their existing build scripts by simply prefacing them with the “rbld run” command. “You don’t need to know about Docker or Docker files,” said Vugenfirer.

Daynix CTO Dmitry Fleytman joined Vugenfirer to give a demo of Rebuild that showed how to search a registry for available environments such as Raspbian, Ubuntu, BeagleBone, and QEMU. He then deployed the Raspbian environment and issued a run command to compile source code. There’s also an interactive mode as an alternative to issuing a single run command. In either case, the process “needs to be done only once for each environment version,” said Vugenfirer.

Although you can run an environment without modifying it, you can also do the opposite by using the “rbld modify” command, which lets you update packages. Other commands include “rbld commit” for a local commit and “rbld publish” to share it. You can also revert changes with “rbld checkout.”

Rebuild lets you create a new environment from scratch using the file system or build on a base image from one of the distros in the environment repository. After using the “rbld create” command, you can then modify, commit, and publish it.
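Pulling the commands mentioned above into one rough sketch of a session: the command names come from the talk, but the argument forms shown here are assumptions, so consult the rbld help output for exact syntax.

rbld search                           # list environments published in the registry
rbld run <environment> -- ./build.sh  # run an existing build script inside an environment
rbld modify <environment>             # open the environment for changes (e.g., package updates)
rbld commit <environment>             # record the changes locally
rbld publish <environment>            # share the updated environment with the team
rbld checkout <environment>           # revert uncommitted local changes
rbld create <new-environment>         # start a new environment from scratch or a base image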

Future plans for Rebuild include adding role management to the Rebuild Native Registry to distinguish users who can create environments from those who can simply run them. Daynix also plans to add more registries, provide IDE extensions, and offer more tracking of environment usage. The team has just begun working on Yocto Project integration.

More information can be found in the conference video below:

Join us at Open Source Summit + Embedded Linux Conference Europe in Edinburgh, UK on October 22-24, 2018, for 100+ sessions on Linux, Cloud, Containers, AI, Community, and more.

Building Containers with HPC Container Maker

Containers package entire workflows, including software, libraries, and even data, into a single file. The container can then be run on any compatible hardware that can run the container type, regardless of the underlying operating system.

Containers are finding increased utility in scientific computing, deep learning, HPC, machine learning, and artificial intelligence because they are reproducible, portable (mobility of compute), user friendly (admins don’t have to install everything), and simple; they also isolate resources, reduce complexity (fewer dependencies to manage), and make it easy to distribute an application along with its dependencies.

Using containers, you have virtually everything you need in a single file, including a base operating system (OS), the application or workflow (possibly multiple applications), and all of the dependencies. Sometimes the data is also included in the container, although that is not strictly necessary, because you can mount filesystems holding the data into the container.

To run or execute the container, you just need an OS that accommodates the specific container technology, a container run time, and a compatible hardware system. It does not have to be the same OS as in the container. For example, you could create a container using something like Docker on a Linux Mint 18.2 system and run it on a CentOS 7.2 system. Besides hardware compatibility, the only requirement is that the Linux Mint system have the ability to create a container correctly and that the CentOS 7.2 system be able to run the container correctly.

The creator of the container on the Linux Mint system includes everything needed in the container,…
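As a minimal sketch of that build-here, run-there workflow (the image name and Dockerfile are hypothetical; pushing to and pulling from a registry would work just as well as docker save/load):

# on the Linux Mint 18.2 system: build the container image and export it
$ docker build -t myworkflow:1.0 .
$ docker save -o myworkflow.tar myworkflow:1.0

# on the CentOS 7.2 system: import the image and run it
$ docker load -i myworkflow.tar
$ docker run --rm myworkflow:1.0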

Read more at ADMIN

How to Give IT Project Estimates—And When Not to Estimate at All

Everyone wants to know how long a project will take. Here’s how to provide managers with a prediction that’s both accurate and imprecise, using cycle time and counting stories, along with advice on when to avoid estimation altogether.

Estimation is work, too. Many teams account for estimation in their regular flow of work. However, an accurate estimate for a quarter’s worth of work often requires more than the hour or two of estimation a team can fit in as it proceeds.

There are at least two problems with estimating a quarter’s worth of work: Too often, the requirements aren’t fully defined and, as with Celeste’s team, the estimation interrupts the team from its urgent project work.

The problem is that software estimates are not like an estimate for a road trip. If you live anywhere that has more than one traffic light, you’ve encountered traffic fluctuations. I live in the Boston area, where a drive to the airport can take me 20 minutes or 90 minutes. Most often, it’s in the range of 30 to 45 minutes. That’s substantial variation for an eight-mile drive.

And there’s no innovation in that drive. 

Read more at HPE

Deep Learning and Artificial Intelligence

Artificial intelligence (AI) is in the midst of an undeniable surge in popularity, and enterprises are becoming particularly interested in a form of AI known as deep learning.

According to Gartner, AI will likely generate $1.2 trillion in business value for enterprises in 2018, 70 percent more than last year. “AI promises to be the most disruptive class of technologies during the next 10 years due to advances in computational power, volume, velocity and variety of data, as well as advances in deep neural networks (DNNs),” said John-David Lovelock, research vice president at Gartner.

Those deep neural networks are used for deep learning, which most enterprises believe will be important for their organizations. A 2018 O’Reilly report titled How Companies Are Putting AI to Work through Deep Learning found that only 28 percent of enterprises surveyed were already using deep learning. However, 92 percent of respondents believed that deep learning would play a role in their future projects, and 54 percent described that role as “large” or “essential.”

What Is Deep Learning?

To understand what deep learning is, you first need to understand that it is part of the much broader field of artificial intelligence. In a nutshell, artificial intelligence involves teaching computers to think the way human beings think. That encompasses a wide variety of applications, such as computer vision, natural language processing, and machine learning.

Read more at Datamation

10 Key Attributes of Cloud-Native Applications

Cloud native is a term used to describe container-based environments. Cloud-native technologies are used to develop applications built with services packaged in containers, deployed as microservices and managed on elastic infrastructure through agile DevOps processes and continuous delivery workflows.

Where operations teams would manage the infrastructure resource allocations to traditional applications manually, cloud-native applications are deployed on infrastructure that abstracts the underlying compute, storage and networking primitives. Developers and operators dealing with this new breed of applications don’t directly interact with application programming interfaces (APIs) exposed by infrastructure providers. Instead, the orchestrator handles resource allocation automatically, according to policies set out by DevOps teams. The controller and scheduler, which are essential components of the orchestration engine, handle resource allocation and the life cycle of applications.

Cloud-native platforms, like Kubernetes, expose a flat network that is overlaid on existing networking topologies and primitives of cloud providers. Similarly, the native storage layer is often abstracted to expose logical volumes that are integrated with containers. 
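As a rough illustration of that declarative model (the deployment name and resource figures are hypothetical), a developer states what the application needs and lets the scheduler and controllers handle placement:

# declare CPU/memory requests and limits; the scheduler picks nodes that can satisfy them
$ kubectl set resources deployment myapp --requests=cpu=250m,memory=256Mi --limits=cpu=500m,memory=512Mi

# change the desired replica count; the controller reconciles the running state automatically
$ kubectl scale deployment myapp --replicas=5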

Read more at The New Stack

Finding Interesting Documents with grep

Learn the basics of grep with this tutorial from our archives.

The grep command is a very powerful way to find documents on your computer. You can use grep to see if a file contains a word or use one of many forms of regular expression to search for a pattern instead. Grep can check the file that you specify or can search an entire tree of your filesystem recursively looking for matching files.

One of the most basic ways to use grep is shown below: looking for the lines of a file that match a pattern. Here I search the sample.txt text file in the current directory, and the -i option makes the search case-insensitive. As you can see, the only match for the string “this” is the capitalized string “This”.

$ cat sample.txt
This is the sample file.
It contains a few lines of text
that we can use to search for things.
Samples of text
and seeking those samples
there can be many matches
but not all of them are fun
so start searching for samples
start looking for text that matches

$ grep -i this sample.txt 
This is the sample file.

The -A, -B, and -C options to grep let you see a little more context than just the single line that matched. These options specify the number of trailing, preceding, or both trailing and preceding lines to print, respectively. Matches are shown separated by a “--” line so you can clearly see the context for each match in the results. Notice that the last example, which uses -C 1 to grab both the preceding and trailing lines, shows four lines in its final group. This is because two matches (the middle two lines) share the same context.

$ grep -A 2 It sample.txt 
It contains a few lines of text
that we can use to search for things.
Samples of text

$ grep -C 1 -i the sample.txt 
This is the sample file.
It contains a few lines of text
--
and seeking those samples
there can be many matches
but not all of them are fun
so start searching for samples

The -n option can be used to show the line number that is being presented. Below I grab one line before and one line after the match and see the line numbers, too.

$ grep -n -C 1 tha sample.txt 
2-It contains a few lines of text
3:that we can use to search for things.
4-Samples of text
--
8-so start searching for samples
9:start looking for text that matches

Digging through a bunch of files

You can get grep to recurse into a directory using the -R option. When you use this, the matching file name is shown on the output as well as the match itself. When you combine -R with -n the file name is first shown, then the line number, and then the matching line.

 $ grep -R sample .
./subdir/sample3.txt:another sample in a sub directory
./sample.txt:This is the sample file.
./sample.txt:and seeking those samples
./sample.txt:so start searching for samples
./sample2.txt:This is the second sample file

$ grep -n -R sample .
./subdir/sample3.txt:1:another sample in a sub directory
...

If you have some subdirectories that you don’t want searched, the --exclude-dir option can tell grep to skip over them. Notice that I have used single quotes around the sub* glob below. The difference can be seen in the last commands, where I use echo to show the command itself rather than execute it. Notice that the shell has expanded sub* into subdir for me in the last command. If you have subdir1 and subdir2 and use the pattern sub*, your shell will likely expand that glob into the two directory names, which will confuse grep, because it expects a single glob. If in doubt, enclose the directory to exclude in single quotes, as shown in the first command below.

$ grep -R --exclude-dir 'sub*' sample .
./sample.txt:This is the sample file.
./sample.txt:and seeking those samples
./sample.txt:so start searching for samples
./sample2.txt:This is the second sample file

$ echo grep -R --exclude-dir 'sub*' sample .
grep -R --exclude-dir sub* sample .

$ echo grep -R --exclude-dir sub* sample .
grep -R --exclude-dir subdir sample .

Although the recursion built into grep is handy, you might like to combine the find and grep commands. It can be useful to run the find command by itself first to see which files grep will be executed on. The find command below uses a glob pattern on the file names to limit the files considered to those with the number 2 or 3 in their name and a txt suffix. The -type f option limits the output to regular files.

$ find . -name '*[23]*txt' -type f
./subdir/sample3.txt
./sample2.txt

You can then use the -exec option to find to execute a command for each file that is found, instead of just printing the file names. It is convenient to use the -H option to grep to print the file name for each match. You may recall that grep prints file names by default when run on many files. Using -H is handy in case find turns up only a single file; if that file matches, it is good to know its name as well as seeing the matches.

$ find . -name '*[23]*txt' -type f -exec grep -H sampl {} +

For dealing with common file types, like source code, it might be convenient to use a bash function such as the one below to “Recursively Grep SRC code”. The search is limited to C/C++ source using file name matching. Many possible extensions are chained together using the -o argument to find, meaning “OR”. The “$1” parameter passed to grep is the first argument given to RGSRC. The last command searches for the string “Ferris” in any C/C++ source code in the current directory or any subdirectory.

$ cat ~/.bashrc
...
RGSRC() {
  find . \( -name "*.hh" -o -name "*.cpp" -o -name "*.hpp" -o -name "*.h" -o -name "*.c" \) \
    -exec grep -H "$1" {} +
}
...

$ RGSRC Ferris
...
./Ferris.cpp:using namespace Ferris::RDFCore;
...

Regular Expressions

While I have been searching for single words with grep so far, you can also define what you want using regular expressions. Grep supports basic, extended, and Perl-compatible regular expressions; basic regular expressions are the default.

Regular expressions let you define a pattern for what you are after. For example, the regular expression ‘[Ss]imple’ will match the strings ‘simple’ and ‘Simple’. This is different from using -i to perform a case-insensitive search, because ‘sImple’ will not be considered a match for the above regular expression. Any one of the characters inside the square brackets can match, so either ‘S’ or ‘s’ is allowed before the remaining string ‘imple’. You can have many characters inside the square brackets and can also define an inversion. For example, [^F]oo will match any character other than ‘F’ followed by two lowercase ‘o’ characters. If you want to find the ‘[’ character, you have to escape its special meaning by preceding it with a backslash.
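Applied to the sample file from earlier, a character-class search should produce something like this:

$ grep '[Ss]ample' sample.txt
This is the sample file.
Samples of text
and seeking those samples
so start searching for samples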

To match any character use the full stop. If you follow a character or square bracketed match with ‘*’ it will match zero or more times. To match one or more use ‘+’ instead. So ‘[B]*ar’ will match ‘ar’, ‘Bar’, ‘BBar’, ‘BBBar’, and so on. You can also use {n} to match n times and {n,m} to match at least n times but no more than m times. To use the ‘+’ and {n,m} modifiers you will have to enable extended regular expressions using the -E option.

These are some of the more fundamental parts of regular expressions; there are more, and you can define some very sophisticated patterns to find exactly what you are after. The first command below will find sek, seek, or seeek in the sample file. The second command will find the strings ‘many’ or ‘matches’ in the file.

$ grep -E  's[e]{1,3}k' sample.txt 
and seeking those samples

$ grep -E  'ma(ny|tches)' sample.txt 
there can be many matches
start looking for text that matches

Looking across lines

The grep command works on a line-by-line basis. This means that if you are looking for two words together, you will have trouble matching one word at the end of one line and the second word at the start of the next line. So searching for the person ‘John Doe’ will work unless ‘Doe’ happens to be the first word of the next line.

Although there are other tools, such as awk and Perl, that will allow you to search over multiple lines, you might like to use pcregrep to get the job done. On Fedora, you will have to install the pcre-tools package.

The command below will find the word ‘text’ followed by ‘that’, with the two separated by any amount of whitespace. In this case, whitespace also includes the newline.

$ pcregrep -M 'text[\s]*that' sample.txt
It contains a few lines of text
that we can use to search for things.
start looking for text that matches

A few other things

Another grep option that might be handy is -m, which limits the number of matching lines sought in a file. The -v option inverts the match, so you see only the lines that do not match the pattern you gave. An example of an inverted match is shown below.

$ grep -vi sampl sample.txt 
It contains a few lines of text
that we can use to search for things.
there can be many matches
but not all of them are fun
start looking for text that matches
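
Returning to the -m option mentioned above, a quick sketch against the same sample file, stopping after the first matching line:

$ grep -m 1 sampl sample.txt
This is the sample file.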

Final words

Using grep, either with -R to directly inspect an area of your filesystem or in combination with a more complicated find command, lets you search through large amounts of text fairly quickly. You will likely find grep already installed on many machines. The pcregrep command allows you to search across multiple lines fairly easily. Next time, I’ll take a look at some other grep-like commands that let you search PDF documents and XML files.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.