
Ubuntu 18.04.2 Refreshes This Long-Term Support Linux Distro

Do you want the best compromise between the latest and greatest open-source software and the stability of an established Linux? If that’s you, and you’re an Ubuntu user, then you want Ubuntu 18.04.2.

This latest version of Ubuntu 18.04, the Long-Term Support (LTS) edition, will be supported until April 2028. If you’re using Ubuntu in business, this is the one you want.

Why? For starters, Ubuntu 18.04.2 upgrades its Linux kernel from 4.15 to 4.18. This kernel comes with Spectre and Meltdown security patches and improved hardware drivers.

Read more at ZDNet

6 Must-Attend Talks at Cloud Foundry Summit on Serverless, Knative, Microservices

Tech conferences often feel the same to me. It’s all about the ratios. Too much technical stuff and you wonder how any of it actually applies to your business problems. Too much business-speak and you might end up looking for a slide deck escape hatch to take you to a code repository. Like Goldilocks eating porridge, or me eating a cupcake, you want to find a mix that’s “just right.” Finding a good balance between deep technical content and compelling business outcomes makes for a better conference. (Now that I think about it, so would serving cupcakes!)

This is one reason why I love Cloud Foundry Summit—the ratios are right! There are always plenty of compelling user stories where you can follow what companies like Allstate, Royal Bank of Canada, and CSAA have been able to accomplish with Cloud Foundry. But Cloud Foundry Summit has a lot to offer for tech enthusiasts too, including plenty of opportunities to learn about related projects and emerging technologies. Last year, Kubernetes, functions, and event-driven architectures were hot topics.

This year’s Cloud Foundry Summit is right around the corner… If you’re as intrigued by all things serverless as I am, here’s my list of must-attend talks:

Read more at The New Stack

Logical & in Bash

One would think you could dispatch & in two articles. Turns out you can’t. While the first article dealt with using & at the end of commands to push them into the background and then diverged into explaining process management, the second article saw & being used as a way to refer to file descriptors, which led us to see how, combined with < and >, you can route inputs and outputs from and to different places.

This means we haven’t even touched on & as an AND operator, so let’s do that now.

& is a Bitwise Operator

If you are at all familiar with binary operations, you will have heard of AND and OR. These are bitwise operations that operate on individual bits of a binary number. In Bash, you use & as the AND operator and | as the OR operator:

AND

0 & 0 = 0

0 & 1 = 0

1 & 0 = 0

1 & 1 = 1

OR

0 | 0 = 0

0 | 1 = 1

1 | 0 = 1

1 | 1 = 1

You can test this by ANDing any two numbers and outputting the result with echo:

$ echo $(( 2 & 3 )) # 00000010 AND 00000011 = 00000010

2

$ echo $(( 120 & 97 )) # 01111000 AND 01100001 = 01100000

96

The same goes for OR (|):


$ echo $(( 2 | 3 )) # 00000010 OR 00000011 = 00000011

3

$ echo $(( 120 | 97 )) # 01111000 OR 01100001 = 01111001

121

Four things about this (a couple of quick examples follow the list):

  1. You use (( ... )) to tell Bash that what goes between the double brackets is some sort of arithmetic or logical operation. (( 2 + 2 )), (( 5 % 2 )) (% being the modulo operator) and (( (5 % 2) + 1 )) (which equals 2) will all work.
  2. As with variables, $ extracts the value so you can use it.
  3. For once, spaces don’t matter: ((2+3)) will work the same as (( 2+3 )) and (( 2 + 3 )).
  4. Bash only operates with integers. Something like (( 5 / 2 )) will give you “2”, and something like (( 2.5 & 7 )) will result in an error. Then again, using anything but integers in a bitwise operation (which is what we are talking about now) is generally something you wouldn’t do anyway.
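
For instance, you can check the last two points directly at the prompt (the numbers here are arbitrary examples):

$ echo $((2+3)) $(( 2 + 3 )) # Spacing makes no difference

5 5

$ echo $(( 5 / 2 )) # Integer arithmetic only: the fractional part is dropped

2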

TIP: If you want to check what your decimal number would look like in binary, you can use bc, the command-line calculator that comes preinstalled with most Linux distros. For example, using:


bc <<< "obase=2; 97"

will convert 97 to binary (the o in obase stands for output), and …


bc <<< "ibase=2; 11001011"

will convert 11001011 to decimal (the i in ibase stands for input).

&& is a Logical Operator

Although it uses the same logic principles as its bitwise cousin, Bash’s && operator can only render two results: 1 (“true”) and 0 (“false”). For Bash, any number that is not 0 is “true” and anything that equals 0 is “false.” Anything that is not a number is also “false”:


$ echo $(( 4 && 5 )) # Both non-zero numbers, both true = true

1

$ echo $(( 0 && 5 )) # One operand is zero, so it is false = false

0

$ echo $(( b && 5 )) # One of them is not a number, so it counts as false = false

0

The OR counterpart for && is || and works exactly as you would expect.
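
If you want to check that for yourself, here are a couple of quick examples (again, arbitrary numbers):

$ echo $(( 0 || 5 )) # One non-zero operand is enough = true

1

$ echo $(( 0 || 0 )) # Both are zero = false

0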

All of this is simple enough… until it comes to a command’s exit status.

&& is a Logical Operator for Command Exit Status

As we have seen in previous articles, a command may print error messages as it runs. But, more importantly for today’s discussion, it also outputs a number when it ends. This number is called an exit code, and if it is 0, it means the command did not encounter any problem during its execution. If it is any other number, it means something, somewhere, went wrong, even if the command completed.

So 0 is good, any other number is bad, and, in the context of exit codes, 0/good means “true” and everything else means “false.” Yes, this is the exact opposite of what you saw in the logical operations above, but what are you gonna do? Different contexts, different rules. The usefulness of this will become apparent soon enough.

Moving on.

Exit codes are stored temporarily in the special variable ? — yes, I know: another confusing choice. Be that as it may, remember from our article about variables that you read the value of a variable using the $ symbol. So, if you want to know whether a command ran without a hitch, you have to read ? as soon as the command finishes and before running anything else.

Try it with:


$ find /etc -iname "*.service"

find: '/etc/audisp/plugins.d': Permission denied 

/etc/systemd/system/dbus-org.freedesktop.nm-dispatcher.service 

/etc/systemd/system/dbus-org.freedesktop.ModemManager1.service

[etcetera]

As you saw in the previous article, running find over /etc as a regular user will normally throw some errors when it tries to read subdirectories for which you do not have access rights.

So, if you execute…


echo $?

… right after find, it will print a 1, indicating that there were some errors.

(Notice that if you were to run echo $? a second time in a row, you’d get a 0. This is because $? would then contain the exit code of the previous echo $?, which, presumably, will have executed correctly. So the first lesson when using $? is: use $? straight away, or store it somewhere safe, like in another variable, or you will lose it.)
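
As a minimal sketch of that second option, you can save the exit code into a variable of your own (the name status here is just for illustration) the moment the command finishes:

$ find /etc -iname "*.service" > /dev/null

find: '/etc/audisp/plugins.d': Permission denied

$ status=$? # Grab the exit code before anything else overwrites it

$ echo $status

1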

One immediate use of ? is to fold it into a chain of commands and bork the whole thing if anything fails as Bash runs through it. For example, you may be familiar with the process of building and compiling the source code of an application. You can run the steps one after another by hand like this:


$ configure

.

.

.

$ make

.

.

.

$ make install

.

.

.

You can also put all three on one line…


$ configure; make; make install

… and hope for the best.

The disadvantage of this is that if, say, configure fails, Bash will still try to run make and make install, even if there is nothing to make or, indeed, install.

The smarter way of doing it is like this:


$ configure && make && make install

This takes the exit code from each command and uses it as an operand in a chained && operation.

But, and here’s the kicker, Bash knows the whole thing is going to fail if configure returns a non-zero result. If that happens, it doesn’t have to run make to check its exit code, since the result is going to be false no matter what. So, it forgoes make and just passes a non-zero result onto the next step of the operation. And, as configure && make delivers false, Bash doesn’t have to run make install either. This means that, in a long chain of commands, you can join them with &&, and, as soon as one fails, you can save time as the rest of the commands get canceled immediately.
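
You can watch the short-circuiting happen with something as trivial as this (the echoed messages are just placeholders):

$ false && echo "We never get here"

$ true && echo "We got here"

We got here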

You can do something similar with ||, the OR logical operator: Bash runs the second command of a pair only if the first one fails, so the chain as a whole succeeds as long as one of the pair completes.
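
A quick sketch, reusing the test_dir directory from the one-liner shown below (run it twice so that the second attempt fails):

$ mkdir test_dir || echo "test_dir could not be created"

$ mkdir test_dir || echo "test_dir could not be created"

mkdir: cannot create directory 'test_dir': File exists

test_dir could not be created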

In view of all this (along with the stuff we covered earlier), you should now have a clearer idea of what the command line we set at the beginning of this article does:

mkdir test_dir 2>/dev/null || touch backup/dir/images.txt && find . -iname "*jpg" > backup/dir/images.txt &

So, assuming you are running the above from a directory for which you have read and write privileges, what does it do and how does it do it? How does it avoid unseemly and potentially execution-breaking errors? Next week, apart from giving you the solution, we’ll be dealing with brackets: curly, curvy and straight. Don’t miss it!

The Future of Artificial Intelligence at Scale

For this week’s episode of The New Stack Analysts podcast, TNS editorial director Libby Clark and TNS London correspondent Jennifer Riggins sat down (via Zoom) with futurist Martin Ford, author of “Architects of Intelligence: The truth about AI from the people building it,” and Ofer Hermoni, chair of the technical advisory council for The Linux Foundation’s Deep Learning Foundation projects, to talk about the current state of AI, how it will scale, and its consequences.

Hermoni believes that the open source community, along with the still very complicated AI landscape, will help democratize both the future of AI technology and the ethical boundaries it needs. He talks about how to leverage open governance and common standards to make this happen.

Read more at The New Stack

The Hard Part in Becoming a Command Line Wizard

I’ve long been impressed by shell one-liners. They seem like magical incantations. Pipe a few terse commands together, et voilà! Out pops the solution to a problem that would seem to require pages of code.

Are these one-liners real or mythology? To some extent, they’re both. Below I’ll give a famous real example. Then I’ll argue that even though such examples do occur, they may create unrealistic expectations.

Bentley’s exercise

In 1986, Jon Bentley posted the following exercise:

Given a text file and an integer k, print the k most common words in the file (and the number of their occurrences) in decreasing frequency.

Donald Knuth wrote an elegant program in response. Knuth’s program runs for 17 pages in his book Literate Programming.
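
The famous response came from Doug McIlroy, who answered Knuth’s pages of code with a six-step shell pipeline. A sketch in that spirit might look like the following, where input.txt and k are placeholders for the text file and the number of words you want:

# Split the text into one word per line, lowercase it, count the words, and keep the k most common
tr -cs 'A-Za-z' '\n' < input.txt | tr 'A-Z' 'a-z' | sort | uniq -c | sort -rn | head -n "$k"

All the heavy lifting is done by standard tools chained together, which is exactly the kind of terse incantation the piece is talking about.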

Read more at John D. Cook blog

4 Management Tools for Git Encryption

See how Git-crypt, BlackBox, SOPS, and Transcrypt stack up for storing secrets in Git.

There are a lot of great open source tools out there for storing secrets in Git. It can be hard to determine the right one for you and your organization—it depends on your use cases and requirements. To help you compare and choose, we’ll look at four of the most popular open source tools for secrets management and see how they stack up against each other.

We won’t review larger solutions like HashiCorp Vault. A production-ready Vault can be a rather large hurdle, especially if your organization is just getting started with secrets management. The tools above are easy to use and set up quickly.

Encryption types

These secrets management tools use GNU Privacy Guard (GPG), symmetric key encryption, and/or cloud key services.

Read more at OpenSource.com

C Programming Tutorial Part 3 – Variables Basics

Up until now, we’ve discussed the basics of what a C program is, how to compile and execute it, and what preprocessors are. If you have gone through those tutorials, it’s time we discuss the next topic: variables.

Variables are one of the core elements of C programming, as they store values for programmers to use as needed. Let’s understand the basics through an example. Following is a basic C program:

#include <stdio.h>

int main (void)
{
 int a = 10;
 char b = 'z';
 float c = 1.5;
 printf("\n a=%d, b=%c, c=%f \n", a, b, c);
 return 0;
}

In previous C programming tutorials, we have already explained things like what ‘stdio.h’ is, what ‘#include’ means, and what a function (especially ‘main’) is. So, we’ll jump directly to the variables part.

Read more at HowToForge

Audiophile Linux Promises Aural Nirvana

Linux isn’t just for developers. I know that might come as a surprise for you, but the types of users that work with the open source platform are as varied as the available distributions. Take yours truly for example. Although I once studied programming, I am not a developer.

The creating I do with Linux is with words, sounds, and visuals. I write books, I record audio, and I create digital images and video. And even though I don’t choose to work with distributions geared toward those specific tasks, they do exist. I also listen to a lot of music. I tend to listen to most of my music on vinyl. But sometimes I want to listen to music that isn’t available in my format of choice. That’s when I turn to digital music.

Having a Linux distribution geared specifically toward playing music might not be on the radar of the average user, but to an audiophile, it could be a real deal maker.

This brings us to Audiophile Linux. Audiophile Linux is an Arch Linux-based distribution geared toward, as the name suggests, audiophiles. What makes Audiophile Linux special? First and foremost, it’s optimized for high-quality audio reproduction. To make this possible, Audiophile Linux features:

  • System and memory optimized for quality audio

  • Custom Real-Time kernel

  • Latency under 5ms

  • Direct Stream Digital support

  • Lightweight window manager (Fluxbox)

  • Preinstalled audio and video programs

  • Lightweight OS, free of unnecessary daemons and services

Although Audiophile Linux claims the distribution is easily installed, it’s very much based on Arch Linux, so the installation is nowhere near as easy as, say, Ubuntu. At this point, you might be thinking, “But there’s already Ubuntu Studio, which is as easy to install as Ubuntu, and contains some of the same features!” That may be true, but there are users out there (even those of a more artistic bent) who prefer a decidedly un-Ubuntu distribution. On top of which, Ubuntu Studio would be serious overkill for anyone just looking for high-quality music reproduction. For that, there’s Audiophile Linux.
Let’s install it and see what’s what.

Installation

As I mentioned, Audiophile Linux is based on Arch Linux. Unlike some Arch-based distributions, however, Audiophile Linux doesn’t include a pretty, user-friendly GUI installer. Instead, you must download the ISO image, burn it to a USB drive or CD/DVD, and boot from that device. Once booted, you’ll find yourself at a command prompt. From there, here are the steps to install.

Create the necessary partition by issuing the command:

fdisk /dev/sdX

where X is the drive letter (discovered with the command fdisk -l).

Type n to create a new partition and then type p to make it a primary partition. When that completes, type w to write the changes. Format the new partition with the command:

mkfs.ext4 /dev/sda1

Mount the new partition with the command:

mount /dev/sda1 /mnt

Copy the live system onto the new partition, chroot into it, and change to the installer’s script directory with the following commands:

time cp -ax / /mnt

arch-chroot /mnt /bin/bash

cd /etc/apl-files

Install the base packages (and create a username and password) with the command:

./runme.sh

Take care of the GRUB boot loader with the following commands:

grub-install --target=i386-pc /dev/sda

grub-mkconfig -o /boot/grub/grub.cfg

Give the root user a password with the following command:

passwd root

Set your timezone like so (substituting your location):

ln -s /usr/share/zoneinfo/America/Kentucky/Louisville /etc/localtime

Set the hardware clock and autologin with the following commands:

hwclock --systohc --utc

./autologin.sh

Reboot the system with the command:

reboot

It Gets a Bit Dicey Now

There’s a problem related to the pacman update process. If you immediately update the system with the pacman -Suy command, you’ll find Xorg broken and seemingly no way to repair it. This problem has been around for some time now and has yet to be fixed. How do you get around it? First, you need to remove the libxfont package with the command:

sudo pacman -Rc libxfont

That’s not all. There’s another package that must be removed, ffmpeg2.8, which will also take Cantata (the Audiophile music player) with it. Issue the command:

sudo pacman -Rc ffmpeg2.8

Now, you can update Audiophile Linux with the command:

sudo pacman -Suy

Once updated, you can finish up the installation with the following commands:

sudo pacman -S terminus-font

sudo pacman -S xorg-server

sudo pacman -S firefox

You can then reinstall Cantata with the command:

sudo pacman -S cantata

When this completes, reboot and log into your desktop.

The Desktop

As mentioned earlier, Audiophile Linux opts for a lightweight desktop environment, Fluxbox. Although I understand why the developers would want to use this desktop (because it’s incredibly lightweight), many users might not enjoy working with such a minimal environment. And most audiophiles are going to be working with hardware that can tolerate a more feature-rich desktop. If you want to go that route, you can install a desktop like GNOME with the command:

sudo pacman -S gnome

However, if you want to be a purist (and get the absolute most out of this hardware/software combination), stick with the default Fluxbox, especially since you’ll only be using Audiophile Linux for one purpose: listening to music.

Fluxbox uses an incredibly basic interface. Right-click anywhere on the desktop and a menu will appear (Figure 1).

Figure 1: The Fluxbox desktop on Audiophile Linux.

From that menu, you won’t find a lot of applications (Figure 2).

Figure 2: The Audiophile Linux Fluxbox menu.

That’s okay, because you only need one—Cantata (listed in the menu as Play music). However, after the installation, Cantata won’t run. Why? Because of a Qt5 problem. To get around this, you need to issue the following commands:

sudo pacman -S binutils

sudo strip --remove-section=.note.ABI-tag /usr/lib64/libQt5Core.so.5

Once you’ve taken care of the above, Cantata will run and you can start playing all of the music you’ve added to the library (Figure 3).

Figure 3: Listening to Devin Townsend Project’s Kingdom.

Worth The Hassle?

I have to confess, at first I was fairly certain Audiophile Linux wouldn’t be worth the trouble of getting it up and running … for the singular purpose of listening to music. However, once those tunes started spilling from my speakers, I was sold.

Although the average listener might not notice the difference with this distribution, audiophiles will. The clarity and playback of digital music on Audiophile Linux far exceeded that on both Elementary OS and Ubuntu Linux. So if that appeals to you, I highly recommend giving Audiophile Linux a spin.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

SQL is No Excuse to Avoid DevOps

A friend recently said to me, “We can’t do DevOps, we use a SQL database.” I nearly fell off my chair. Such a statement is wrong on many levels.

“But you don’t understand our situation!” he rebuffed. “DevOps means we’ll be deploying new releases of our software more frequently! We can barely handle deployments now and we only do it a few times a year!”

I asked him about his current deployment process. …

Let me start by clearing up a number of misconceptions. Then let’s talk about some techniques for making those deployments much, much easier.

First, DevOps is not a technology, it is a methodology. 

DevOps doesn’t require or forbid any particular database technology—or any technology, for that matter. Saying you can or cannot “do DevOps” because you use a particular technology is like saying you can’t apply agile to a project that uses a particular language. SQL may be a common “excuse of the month,” but it is a weak excuse.

I understand how DevOps and the lack of SQL databases could become inextricably linked in some people’s minds. In the 2000s and early 2010s, companies that were inventing and popularizing DevOps were frequently big websites that were, by coincidence, also popularizing NoSQL (key/value store) databases. Linking the two, however, is confusing correlation with causation. Those same companies were also popularizing providing gourmet lunches to employees at no charge. We can all agree that is not a prerequisite for DevOps.

Read more at ACM Queue

Zowe Makes Mainframe Evergreen

Mainframes are, and will continue to be, a bedrock for industries and organizations that run mission-critical applications. In one way or another, all of us are mainframe users. Every time you make an online transaction or make a reservation, for example, you are using a mainframe.

According to IBM, corporations use mainframes for applications that depend on scalability and reliability. They rely on mainframes in order to:

  • Perform large-scale transaction processing (thousands of transactions per second)
  • Support thousands of users and application programs concurrently accessing numerous resources
  • Manage terabytes of information in databases
  • Handle large-bandwidth communication

Often when people hear the word mainframe, though, they think of dinosaurs. It’s true mainframes have aged, and one challenge the mainframe community faces is that it struggles to attract fresh developers who want to use the latest and shiniest technologies.

Zowe milestones

Zowe, a Linux Foundation project under the umbrella of the Open Mainframe Project, is changing all that. Through this project, industry heavyweights including IBM, Rocket Software, and Broadcom came together to modernize mainframes running z/OS.

Read more at The Linux Foundation