Linus Torvalds Announces Linux Kernel 4.7 RC5, Things Are Calming Down

Another Sunday, another Release Candidate build of the upcoming Linux 4.7 kernel is out for testing, as announced by Linus Torvalds himself a few hours ago, June 26, 2016.

According to Linus Torvalds, things appear to have calmed down lately, and Linux kernel 4.7 Release Candidate 5 is a fairly normal milestone that consists of approximately 50% updated drivers, in particular GPU updates, but also various improvements to some hardware architectures, including PowerPC (PPC), x86, and ARM64 (AArch64).

Of course, there are also the usual patches to fix various issues for some of the supported filesystems, as well as for things like the scheduler, mm, a few sound drivers, general-purpose input/output (GPIO), Xen, hwmon, and RDMA (remote direct memory access).

“I think things are calming down, although, with almost two-thirds of the commits coming in since Friday morning, it doesn’t feel that way…

Read more at Softpedia

Modern Hardware’s Role in a Software Driven Data Center

While Hewlett-Packard initially began its technology endeavors making audio equipment, its first instrumentation computer was engineered in 1966. It was sold to the Woods Hole Oceanographic Institute and used on research vessels for over a decade. It was designed to interface with over 20 HP instruments and was essentially the first iteration of plug-and-play integration as we know it. This is all the more impressive given that the HP 2116A had a mere 4K of main memory and a 20MB hard drive.

In this episode of The New Stack Makers embedded below, we’ll learn more about recent HPE hardware, notably the latest Cloudline servers, as well as HPE’s involvement with the Open Compute Project and OpenStack, and things to consider for a more unified DevOps workflow.

Read more at The New Stack

TripleO QuickStart HA Setup && Keeping undercloud persistent between cold reboots

This post follows up http://lxer.com/module/newswire/view/230814/index.html
and may work as a time saver, unless the status of undercloud.qcow2 at
http://artifacts.ci.centos.org/artifacts/rdo/images/mitaka/delorean/stable/
requires a fresh installation to be done from scratch.
The goal is to survive a VIRTHOST cold reboot (downtime) while keeping the previous version of the undercloud VM, so that it can be brought back up without rebuilding via quickstart.sh, and the restart procedure can go straight from logging into the undercloud to running the overcloud deployment.

The complete text may be seen here: http://bderzhavets.blogspot.com/2016/06/tripleo-quickstart-ha-setup-keeping.html

This Week in Open Source News: Sony Settles PS3 Debacle, New Hyperledger Members, & More

1) After six years, Sony has agreed to pay for its 2010 firmware update, which removed support for the Linux operating system in the PlayStation 3.

Sony Agrees to Pay Millions to Gamers to Settle PS3 Linux Debacle – Ars Technica

2) Belink, BitSE, INVeSHARE, MonetaGo, Moscow Exchange, Norbloc, & Onchain join the Hyperledger Project

The Hyperledger Blockchain Project Sees Seven New Members – Crypto Coin News

3) New projects were announced at the OPNFV Summit in Berlin this week.

OPNFV Project Scales up Network Functions Virtualization Ecosystem – ComputerWeekly.com

4) The way software companies are built is changing to match the software itself. 

The Next Wave in Software is Open Adoption Software – TechCrunch

5) In the course of one year, Microsoft has seen the share of Azure virtual machines running Linux grow from roughly one in four to nearly one in three.

Microsoft: Nearly One in Three Azure Virtual Machines Now Are Running Linux – ZDNet

Python Gains Functional Programming Syntax via Coconut

The new language compiles directly to standard Python, so apps don’t need a new interpreter to run.

Coconut, a newly developed open source dialect of Python, provides new syntax for using features found in functional languages like Haskell and Scala. Programs written in Coconut compile directly to vanilla Python, so they can be run on whatever Python interpreter is already in use.

Read more at InfoWorld

Make Peace With Your Processes: Part 3

In parts 1 and 2 of this series, I introduced the ps command and provided tips on how to harness some of its many options to find out which processes are running on your system.

Now, picture a scene in which you want to check which processes share a particular parent, something I’ll look at more closely in a minute. You can achieve this by using this command:

# ps --ppid 21201

This shows us the processes with a parent process of that ID. In other words, we can pinpoint the processes that are children of process “21201” in this case.
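Conversely, if you already know a child’s PID and want to see which process is its parent, you can ask ps to print just the PPID for that process. A minimal, hedged example using the standard output-format option (the trailing “=” simply suppresses the column header):

# ps -o ppid= -p 21201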

Having said earlier that case sensitivity usually shouldn’t cause too many headaches, I’m going to completely contradict myself with a few examples of why that statement isn’t always true.

Try running my favorite ps command again; its abbreviated output is shown below:

# ps -ef

UID        PID  PPID  C STIME TTY          TIME CMD

apache   23026 22856  0 Feb26 ?        00:00:00 /usr/sbin/apache2

Now try running the full fat version by using an uppercase “F”:


# ps -eF

UID        PID  PPID  C    SZ   RSS PSR STIME TTY          TIME CMD

apache   23026 22856  0 44482  3116   0 Feb26 ?        00:00:00 /usr/sbin/apache2

The difference is that the latter includes the SZ, RSS, and PSR fields. The first two are memory related, whereas PSR shows which CPU the process is using. There’s lots more detail in the manual:

# man ps
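If you only care about those extra columns, you can also request them explicitly with the output-format option instead of switching to the full -eF view. A hedged example using the standard procps format specifiers:

# ps -eo pid,sz,rss,psr,comm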

Moving on, we can look at an alternative to the “-Z” option, which we briefly touched on before:


# ps -efM

unconfined_u:system_r:apache2_t:s0 apache  23031 22856  0 Feb26 ?        00:00:00 /usr/sbin/apache2

Next up is a useful BSD throwback. I quite like the look of it; it’s possibly one of the shortest commands known to mankind. Have a look at Listing 1.


# ps l

F   UID   PID  PPID PRI  NI    VSZ   RSS WCHAN  STAT TTY        TIME COMMAND

4     0  1414     1  20   0   4064   584 n_tty_ Ss+  tty1       0:00 /sbin/mingetty /dev/tty1

4     0  1416     1  20   0   4064   588 n_tty_ Ss+  tty2       0:00 /sbin/mingetty /dev/tty2

4     0  1418     1  20   0   4064   584 n_tty_ Ss+  tty3       0:00 /sbin/mingetty /dev/tty3

4     0  1420     1  20   0   4064   580 n_tty_ Ss+  tty4       0:00 /sbin/mingetty /dev/tty4

4     0  1426     1  20   0   4064   584 n_tty_ Ss+  tty5       0:00 /sbin/mingetty /dev/tty5

4     0  1430     1  20   0   4064   588 n_tty_ Ss+  tty6       0:00 /sbin/mingetty /dev/tty6

4     0  9896  9558  20   0 191392  2740 poll_s S    pts/1      0:00 sudo -i

4     0  9899  9896  20   0 110496  1960 wait   S    pts/1      0:00 -bash

4     0 10776  9899  20   0 108104   980 -      R+   pts/1      0:00 ps l

Listing 1: Shows us the “long formatted” output, which can be embellished with other options, harking back to its BSD origins.

Clarity

Sometimes even the mighty ps command struggles to precisely refine its output. Imagine a scenario where Java processes are filling up the process table, and all you want to do is find their parent so that you can stop (or “kill”) the process abruptly. To roll information from short-lived children up into their parent, you can use the non-hyphenated “S” switch:

# ps S

This helps you to find a parent when its child processes only live for a short period of time.
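As a hedged example (procps normally lets you mix BSD-style and standard options on one command line), you could combine S with a command selector and an output format to see the cumulative CPU time each java parent has accrued, including time from children that have already exited:

# ps S -C java -o pid,cputime,comm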

What about when your process table is brimming with processes, and you need to list a number of specific PIDs at once? As you’d expect, there are different ways to achieve this, as shown in Listing 2, when we run the following command:

# ps -p "1 2" -p 3,4

  PID TTY          TIME CMD

   1 ?        00:00:03 init

   2 ?        00:00:01 kthreadd

   3 ?        00:00:01 migration/0

   4 ?        00:00:20 ksoftirqd/0

Listing 2: We can pick and choose the PIDs that we view in a number of ways.
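Equivalently, and as another hedged variation, the GNU long option accepts a comma-separated list of PIDs in one go:

# ps --pid 1,2,3,4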

More to Come

Next time, I’ll look at how the well-considered Unix principle of “everything is a file” extends to the Process Table, and I’ll show how to uncover the wealth of information that can be found in the “procfs” pseudo-filesystem.

Chris Binnie is a Technical Consultant with 20 years of Linux experience and a writer for Linux Magazine and Admin Magazine. His new book Linux Server Security: Hack and Defend teaches you how to launch sophisticated attacks, make your servers invisible and crack complex passwords.

Open Source NFV Part Four: Open Source MANO

Defined in the ETSI ISG NFV architecture, MANO (Management and Network Orchestration) is a layer (a combination of multiple functional entities) that manages and orchestrates the cloud infrastructure, resources, and services. It mainly comprises three entities: the NFV Orchestrator, the VNF Manager, and the Virtual Infrastructure Manager (VIM). The figure below highlights the MANO part of the ETSI NFV architecture.

The NFV Orchestrator is responsible for functions such as network service lifecycle management and overall resource management.

Read more at The New Stack

After the Hype: Where Containers Make Sense for IT Organizations

The big tech companies are going all in. Google, IBM, Microsoft and many others were out in full force at DockerCon, scrambling to demonstrate how they’re investing in and supporting containers. Recent surveys indicate that container adoption is surging, with legions of users reporting they’re ready to take the next step and move from testing to production. Such is the popularity of containers that SiliconANGLE founder and theCUBE host John Furrier was prompted to proclaim that, thanks to containers, “DevOps is now mainstream.” That will change the game for those who invest in containers while causing “a world of hurt” for those who have yet to adapt, Furrier said.

Up until now, most container adoption has primarily been focused on packaging and isolating applications for easier software development and testing, explained Wei Dang, head of product at CoreOS. This is just the first step in a much larger transition to cloud-native architecture, in which applications are delivered as microservices in containers that run across distributed architecture.

Read more at SiliconAngle

Let Attic Deduplicate and Store your Backups

Data loss is one of those things we never want to worry about. To that end we go to great lengths to find new techniques and software packages to ensure those precious bits of data are safely backed up to various local and remote media.

Backups come in many forms, each with its own benefits. One such form is deduplication. If you’re unsure what this is, let me explain. Data deduplication is a specialized data-compression technique that eliminates redundant copies of data. It is used to improve storage utilization and lessen the amount of data transmitted over a network. In a nutshell, deduplication works like this:

  1. Data is analyzed

  2. During the analysis, byte patterns are identified and stored

  3. When a duplicate byte pattern is found, a small reference point is put in place of the redundancy

What this process effectively does is save space. In some instances, where byte patterns can show up frequently, the amount of space a deduplicated backup saves can be considerable.

Naturally, Linux has plenty of backup solutions that will do deduplication. If you’re looking for one of the easiest, look no further than Attic. Attic is open source, written in Python, and can even encrypt your deduplicated backup for security. Attic is also a command-line only backup solution. Fear not, however…it’s incredibly easy to use.

I will walk you through the process of using Attic. Once you have a handle on the basics, you can take this solution and, with a bit of creativity, make it do exactly what you want.

Installation

I’ll be demonstrating the use of Attic on Ubuntu GNOME 16.04. Attic can be found in the standard repositories, so installation can be taken care of with a single command:

sudo apt-get install attic

That’s it. Once Attic is installed, you have only a couple of tasks to take care of before kicking off your first backup.

Initializing a repository

Before you can fire off a backup, you must first have a location to store a repository, and then you must create a repository. That repository will serve as a filesystem directory to house the deduplicated data from the archives. To initialize a repository, you will use the attic command with the init argument, like so:

attic init /PATH/my-repository.attic

where /PATH is the complete path to where the repository will be housed. You can also create a repository on a remote location by using the attic command:

attic init user@host:my-repository.attic

where user@host is your username and the host where the repository will be housed. For example, I have a Linux box set up at IP address 192.168.1.55 with a user jack. For that, the command would look like:

attic init jack@192.168.1.55:my-repository.attic

Attic also allows you to encrypt your repositories at initialization. For this, you use the command:

attic init --encryption=passphrase /PATH/my-repository.attic

For the encryption, you can use none, passphrase, or keyfile. Attic will, by default, go with none. When you use the passphrase option, you will be prompted to enter the passphrase to be used for encryption (Figure 1).

Figure 1: Adding passphrase encryption to your Attic repository.
By adding encryption, you will be prompted for the repository passphrase every time you act on that repository.
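If you’d rather not type a passphrase, key-based encryption works the same way at initialization; note that the key is then stored locally under your home directory and should itself be backed up somewhere safe. A hedged example:

attic init --encryption=keyfile /PATH/my-repository.attic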

You can also use encryption when initializing a remote repository, like so:

attic init --encryption=passphrase jack@192.168.1.55:my-repository.attic

That’s the gist of creating a repository for housing your deduplicated backup.

Creating a backup

Let’s create a simple backup of the ~/Documents directory. This will use the my-repository.attic repository we just created. The command to create this backup is simple:

attic create /PATH/my-repository.attic::my-documents ~/Documents 

where PATH is the direct path to the my-repository.attic repository.

If you’ve encrypted the repository, you will be prompted for the encryption passphrase before the backup will run. That’s pretty nondescript. What if you plan on using Attic to create daily backups of the ~/Documents folder? Easy:

attic create /PATH/my-repository.attic::MONDAY-my-documents ~/Documents

You can then run that same command, daily, replacing MONDAY with TUESDAY, WEDNESDAY, THURSDAY, etc. You could also use a variable to create a specific date and time like so:

attic create /PATH/my-repository.attic::$(date +%Y-%m-%d-%H:%M:%S) ~/Documents

The above command would use the current date and time as the archive name.

Each attic create command will traverse the given directories and back up any child directories within.
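If there are child directories or files you don’t want swept up, the create command also accepts exclude patterns. A hedged example (check attic create --help for the exact pattern syntax your version supports):

attic create --exclude '*.iso' /PATH/my-repository.attic::SUNDAY ~/Documents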

Attic also isn’t limited to backing up only one directory, however. Say, for instance, you want to back up both ~/Documents and ~/Pictures. That command would look like:

attic create /PATH/my-repository.attic::SUNDAY ~/Documents ~/Pictures

If you want Attic to output the statistics of the backup, you can add the --stats option. That command would look like:

attic create --stats /PATH/my-repository.attic::SUNDAY ~/Documents ~/Pictures

The output of the command would show when it was run, how long it took, and information on archive size (Figure 2).

Figure 2: Statistics shown for the attic create command.
As you add more archives to the repository, the statistics will obviously change. One important bit of information you will see is the size of the archive just created vs. the size of all archives combined (Figure 3).

Figure 3: Only 987.6 KB was added to the archive on the last run.
If you want to see a listing of the archives within the repository (Figure 4), you can issue the command:

attic list /PATH/my-repository.attic

where PATH is the direct path to the repository.

Figure 4: Listing the archives in an Attic repository.
If you want to list the contents of the SUNDAY archive, you can issue the command:

attic list /PATH/my-repository.attic::SUNDAY

This command will output all files within the SUNDAY archive.

Extracting data from an archive

There may come a time when you have to extract data from an archive. This task is just as easy as creating the archive. Let’s say you need to extract the contents of the ~/Pictures directory from the SUNDAY archive. To do this, you will employ the extract argument, like so:

attic extract /PATH/my-repository.attic::SUNDAY Pictures

where PATH is the direct path to the repository. Should any files be missing from the Pictures directory, they’ll be restored, thanks to Attic. The one caveat I have seen, in a couple of instances, is that files aren’t extracted back to their original path. For example, after removing all files from the ~/Pictures directory, I ran Attic with the extract argument only to find the files under ~/home/Pictures. The reason is that Attic stores paths with the leading / stripped and extracts them relative to the current working directory, so you do not want to pass a leading / (or ~/) when extracting. Running it with ~/Pictures re-creates the full path underneath wherever you happen to be: instead of extracting to /home/jack/Pictures, extracting with the leading ~/ will extract to /home/jack/home/jack/Pictures.
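If you want files to land back in their original absolute location, one approach (a hedged sketch, assuming the archive was created from ~/Pictures as above, so its contents are stored under home/jack/...) is to change to the root directory first and extract the stored relative path:

cd /
attic extract /PATH/my-repository.attic::SUNDAY home/jack/Pictures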

This and so much more

There are plenty of other tricks to be done with Attic (pruning, checking, and more; a brief sketch follows below). And because Attic is a command-line tool, you can easily work it into shell scripts to create automated, deduplicated backups that can even work with encryption. For even more helpful information, check out the Attic Users Guide.
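For instance, a nightly maintenance routine might prune old archives according to a retention policy and then verify the repository’s consistency. A hedged sketch; adjust the retention flags to your own needs:

attic prune --keep-daily=7 --keep-weekly=4 --keep-monthly=6 /PATH/my-repository.attic
attic check /PATH/my-repository.attic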

The Rise of New Operations

It has been six years since I wrote a blog post titled The Rise of Devops. Many things have changed during this time, and I realized a re-evaluation could be interesting.

Today, in 2016, here is where I think we are.

1. Operations’ main focus is now scalability

In the past, our primary purpose in life was to build and babysit production. Today, operations teams focus on scale. For some, it could be traffic related (number of concurrent sessions, number of users, size of the dataset). For others, it could be the ability to move between states safely and at a high pace (for example, in fintech, where high stakes make consumer-web approaches to operations too risky).

Read more at Somic