Another Sunday, another Release Candidate build of the upcoming Linux 4.7 kernel is out for testing, as announced by Linus Torvalds himself a few hours ago, June 26, 2016.
According to Linus Torvalds, things appear to have calmed down lately, and Linux kernel 4.7 Release Candidate 5 is a fairly normal milestone that consists of approximately 50% updated drivers, in particular GPU updates, but also various improvements to some hardware architectures, including PowerPC (PPC), x86, and ARM64 (AArch64).
Of course, there are also the usual patches to fix various issues for some of the supported filesystems, as well as for the scheduler, mm, a few sound drivers, general-purpose input/output (GPIO), Xen, hwmon, and RDMA (remote direct memory access).
“I think things are calming down, although, with almost two-thirds of the commits coming in since Friday morning, it doesn’t feel that way…
While Hewlett-Packard initially began its technology endeavors making audio equipment, its first instrumentation computer was engineered in 1966. It was sold to the Woods Hole Oceanographic Institute and used on research vessels for over a decade. It was designed to interface with over 20 HP instruments and was essentially the first iteration of plug-and-play integration as we know it. This is all the more impressive given that the HP 2116A had a mere 4K of main memory and a 20MB hard drive.
In this episode of The New Stack Makers embedded below, we’ll learn more about recent HPE hardware, notably the latest Cloudline servers, as well as HPE’s involvement with the Open Compute Project and OpenStack, and things to consider for a more unified DevOps workflow.
This post follows up on http://lxer.com/module/newswire/view/230814/index.html and might work as a time saver, unless the status of undercloud.qcow2 at http://artifacts.ci.centos.org/artifacts/rdo/images/mitaka/delorean/stable/ requires a fresh installation to be done from scratch.
In other words, the intent is to survive a VIRTHOST cold reboot (downtime) while keeping the previous version of the undercloud VM, so that it can be brought back up without rebuilding via quickstart.sh, and the procedure can then be restarted by logging into the undercloud and immediately running the overcloud deployment.
The new language compiles directly to standard Python, so apps don’t need a new interpreter to run.
Coconut, a newly developed open source dialect of Python, provides new syntax for using features found in functional languages like Haskell and Scala. Programs written in Coconut compile directly to vanilla Python, so they can be run on whatever Python interpreter is already in use.
In parts 1 and 2 of this series, I introduced the ps command and provided tips on how to harness some of its many options to find out which processes are running on your system.
Now, picture a scene in which you want to check for the parents of a process, which I’ll look at more closely in a minute. You can achieve this by using this command:
# ps --ppid 21201
This shows us the processes with a parent process of that ID. In other words, we can pinpoint processes that are children of process “21201” in this case.
Having said earlier that case sensitivity usually shouldn’t cause too many headaches, I’m going to completely contradict myself with a few examples of why that statement isn’t always true.
Try running my favorite ps command again; its abbreviated output is shown below:
# ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
apache   23026 22856  0 Feb26 ?        00:00:00 /usr/sbin/apache2
Now try running the full fat version by using an uppercase “F”:
# ps -eF
UID        PID  PPID  C    SZ   RSS PSR STIME TTY          TIME CMD
apache   23026 22856  0 44482  3116   0 Feb26 ?        00:00:00 /usr/sbin/apache2
The differences are that the latter includes SZ, RSS and PSR fields. The first two are memory related, whereas PSR shows which CPU the process is using. For more information, there’s lots more in the manual:
# man ps
Moving on, we can look at another alternative to the “-Z” option, which we briefly touched on before:
Listing 1: Shows us the “long formatted” output, which can be embellished with other options, harking back to ps’s BSD origins.
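To get a feel for that BSD-style long format on its own, you can try:

# ps l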
Clarity
Sometimes even the mighty ps command struggles to precisely refine its output. Imagine a scenario where Java processes are filling up the process table, and all you want to do is find their parent so that you can stop (or “kill”) the process abruptly. To summarize your information, you can use the non-hyphenated “S” switch:
# ps S
This helps you to find a parent when its child processes only live for a short period of time.
What about when your Process Table is brimming with processes, and you need to list a number of process PIDs at once? As you’d expect, there are different ways to achieve this — as shown in Listing 2 — when we run the following command:
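# ps -fp 1234,5678,9012

The PIDs above are placeholders, of course; the -p (or --pid) switch accepts a comma-separated list of process IDs, and -f adds the full-format columns.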
Listing 2: We can pick and choose the PIDs that we view in a number of ways.
More to Come
Next time, I’ll look at how the well-considered Unix principle of “everything is a file” extends to the Process Table, and I’ll show how to uncover the wealth of information that can be found in the “procfs” pseudo-filesystem.
Chris Binnie is a Technical Consultant with 20 years of Linux experience and a writer for Linux Magazine and Admin Magazine. His new book Linux Server Security: Hack and Defend teaches you how to launch sophisticated attacks, make your servers invisible and crack complex passwords.
Defined in the ETSI ISG NFV architecture, MANO (Management and Network Orchestration) is a layer, a combination of multiple functional entities, that manages and orchestrates the cloud infrastructure, resources, and services. It mainly comprises three entities: the NFV Orchestrator, the VNF Manager, and the Virtual Infrastructure Manager (VIM). The figure below highlights the MANO part of the ETSI NFV architecture.
The NFV Orchestrator is responsible for functions such as network service lifecycle management and overall resource management.
The big tech companies are going all in. Google, IBM, Microsoft and many others were out in full force at DockerCon, scrambling to demonstrate how they’re investing in and supporting containers. Recent surveys indicate that container adoption is surging, with legions of users reporting they’re ready to take the next step and move from testing to production. Such is the popularity of containers that SiliconANGLE founder and theCUBE host John Furrier was prompted to proclaim that, thanks to containers, “DevOps is now mainstream.” That will change the game for those who invest in containers while causing “a world of hurt” for those who have yet to adapt, Furrier said.
Up until now, most container adoption has been focused on packaging and isolating applications for easier software development and testing, explained Wei Dang, head of product at CoreOS. This is just the first step in a much larger transition to cloud-native architecture, in which applications are delivered as microservices in containers that run across a distributed architecture.
Data loss is one of those things we never want to worry about. To that end, we go to great lengths to find new techniques and software packages to ensure those precious bits of data are safely backed up to various local and remote media.
Backups come in many forms, each with its benefits. One such form is deduplication. If you’re unsure what this is, let me explain. Data deduplication is a specialized data compression technique that eliminates duplicate copies of data. It is used to improve storage utilization and to lessen the amount of data transmitted over a network. In a nutshell, deduplication works like this:
Data is analyzed
During the analysis, byte patterns are identified and stored
When a duplicate byte pattern is found, a small reference point is put in place of the redundancy
What this process effectively does is save space. In some instances, where byte patterns can show up frequently, the amount of space a deduplicated backup saves can be considerable.
Naturally, Linux has plenty of backup solutions that will do deduplication. If you’re looking for one of the easiest, look no further than Attic. Attic is open source, written in Python, and can even encrypt your deduplicated backup for security. Attic is also a command-line only backup solution. Fear not, however…it’s incredibly easy to use.
I will walk you through the process of using Attic. Once you have a handle on the basics, you can take this solution and, with a bit of creativity, make it do exactly what you want.
Installation
I’ll be demonstrating the use of Attic on Ubuntu GNOME 16.04. Attic can be found in the standard repositories, so installation can be taken care of with a single command:
sudo apt-get install attic
That’s it. Once Attic is installed, you have only a couple of tasks to take care of before kicking off your first backup.
Initializing a repository
Before you can fire off a backup, you must first have a location to store a repository, and then you must create a repository. That repository will serve as a filesystem directory to house the deduplicated data from the archives. To initialize a repository, you will use the attic command with the init argument, like so:
attic init /PATH/my-repository.attic
where /PATH is the complete path to where the repository will be housed. You can also create a repository on a remote location by using the attic command:
attic init user@host:my-repository.attic
where user@host is your username and the host where the repository will be housed. For example, I have a Linux box set up at IP address 192.168.1.55 with a user jack. For that, the command would look like:
attic init jack@192.168.1.55:my-repository.attic
Attic also allows you to encrypt your repositories at initialization. For this, you use the command:
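attic init --encryption=passphrase /PATH/my-repository.attic

where /PATH is, once again, the path to the repository; passphrase is used here as an example value for the --encryption flag.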
For the encryption, you can use none, passphrase, or keyfile. Attic will, by default, go with none. When you use the passphrase option, you will be prompted to enter the passphrase to be used for encryption (Figure 1).
Figure 1: Adding passphrase encryption to your Attic repository.
By adding encryption, you will be prompted for the repository passphrase every time you act on that repository.
You can also use encryption when initializing a remote repository, like so:
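attic init --encryption=passphrase jack@192.168.1.55:my-repository.attic

This reuses the user@host example from above; substitute your own user, host, and repository name.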
That’s the gist of creating a repository for housing your deduplicated backup.
Creating a backup
Let’s create a simple backup of the ~/Documents directory. This will use the my-repository.attic repository we just created. The command to create this backup is simple:
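attic create /PATH/my-repository.attic::Documents ~/Documents

The archive name following the :: (Documents, in this example) is an arbitrary label of your choosing.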
where PATH is the direct path to the my-repository.attic repository.
If you’ve encrypted the repository, you will be prompted for the encryption passphrase before the backup will run. That’s pretty nondescript. What if you plan on using Attic to create daily backups of the ~/Documents folder? Easy:
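attic create /PATH/my-repository.attic::MONDAY ~/Documents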
You can then run that same command, daily, replacing MONDAY with TUESDAY, WEDNESDAY, THURSDAY, etc. You could also use a variable to create a specific date and time like so:
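attic create /PATH/my-repository.attic::$(date +%Y-%m-%d_%H-%M-%S) ~/Documents

The exact date format string here is only an example; adjust it to taste.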
The above command would use the current date and time as the archive name.
Each attic create command will always traverse the given directory and back up any child directories within it.
Attic isn’t limited to backing up only one directory, however. Say, for instance, you want to back up both ~/Documents and ~/Pictures. That command would look like:
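attic create /PATH/my-repository.attic::SUNDAY ~/Documents ~/Pictures

Both directories are passed to a single archive, named SUNDAY here to stay with the day-of-week scheme used above.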
The output of the command would show when it was run, how long it took, and information on archive size (Figure 2).
Figure 2: Statistics shown for the attic create command.
As you add more archives to the repository, the statistics will obviously change. One important bit of information you will see is how much data the newly created archive added vs. the size of the full archive (Figure 3).
Figure 3: Only 987.6 KB was added to the archive on the last run.
If you want to see a listing of the archives within the repository (Figure 4), you can issue the command:
attic list /PATH/my-repository.attic
where PATH is the direct path to the repository.
Figure 4: Listing the archives in an Attic repository.
If you want to list the contents of the SUNDAY archive, you can issue the command:
attic list /PATH/my-repository.attic::SUNDAY
This command will output all files within the SUNDAY archive.
Extracting data from an archive
There may come a time when you have to extract data from an archive. This task is just as easy as creating the archive. Let’s say you need to extract the contents of the ~/Pictures directory from the SUNDAY archive. To do this, you will employ the extract argument, like so:
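attic extract /PATH/my-repository.attic::SUNDAY home/jack/Pictures

(The username jack in the stored path comes from the earlier example; substitute your own.)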
where PATH is the direct path to the repository. Should any of the files be missing from the Pictures directory, Attic will restore them. The one caveat is that I have seen, in a couple of instances, files not being extracted back to their original path. For example, after removing all files from the ~/Pictures directory, I ran attic extract with ~/Pictures as the path, only to see the files land in /home/jack/home/jack/Pictures instead of /home/jack/Pictures. The reason is that Attic stores paths without the leading /, and extract recreates the stored path underneath the directory you run the command from. So, when you specify the path to extract, leave off the leading / (or ~/), run the command from the appropriate location, and move the files into place if necessary.
This and so much more
There are plenty of other tricks to be done with Attic (pruning, checking, and more). And because Attic is a command-line tool, you can easily work it into shell scripts to create automated deduplicated backups that can even work with encryption. For even more helpful information, check out the Attic Users Guide.
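For example, a pruning policy that keeps a limited number of recent archives might look something like this (check attic prune --help for the exact options available in your version):

attic prune /PATH/my-repository.attic --keep-daily=7 --keep-weekly=4 --keep-monthly=6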
It has been six years since I wrote a blog post titled The Rise of Devops. Many things have changed in that time, and I realized a re-evaluation could be interesting.
Today, in 2016, here is where I think we are.
1. Operations’ main focus is now scalability
In the past, our primary purpose in life was to build and babysit production. Today, operations teams focus on scale. For some, it could be traffic related (number of concurrent sessions, number of users, size of the dataset). For others, it could be the ability to move between states safely and at a high pace (for example, fintech, where high stakes make consumer web approaches to operations too risky).