
5 Best Linux Distributions to Recover Data from Dead Computers (Linux Data Recovery)

This Week in Open Source News: The White House Releases Code Policy, Linux Security Threats Pose Wide Risk, & More
1) The White House released a federal source code policy, requiring agencies to release 20% of new code they commission as open source.
Open Source Won. So, Now What? – WIRED
2) A flaw in the Transmission Control Protocol poses a threat to Internet users, whether they use Linux directly or not.
Use the Internet? This Linux Flaw Could Open You Up to Attack – PCWorld
3) A new Trojan is targeting Linux servers, exploiting machines running the Redis NoSQL database to use them for bitcoin mining.
Linux.Lady Trojan Turns Linux Servers into Bitcoin Miners – The Inquirer
4) “Will Microsoft’s acquisition of LinkedIn slow down the social networking company’s cadence of open-sourcing core technology for developers?”
LinkedIn: Open-Sourcing Under the Microsoft Regime – eWeek
5) Today’s CEOs increasingly have impressive technical backgrounds, and open source is more valuable than ever.
2016 is the Last Year Your CEO Has a Business Major – VentureBeat
Scaling Out with SwarmKit
At LinuxCon+ContainerCon North America this month, Jérôme Petazzoni of Docker will present a free, all-day tutorial “Orchestrating Containers in Production at Scale with Docker Swarm.” As a preview to that talk, this article takes a look specifically at SwarmKit, an open source toolkit used to build multi-node systems.
SwarmKit is a reusable library, like libcontainer, libnetwork, and vpnkit. It is also a plumbing part of the Docker ecosystem. The SwarmKit repository comes with two examples:
- swarmctl (a CLI tool to “speak” the SwarmKit API)
- swarmd (an agent that can federate existing Docker Engines into a Swarm)
This organization is similar to the libcontainer codebase, where libcontainer is the reusable library, containerd is a lightweight container engine using it, and ctr is a CLI to control containerd.
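As a rough sketch of how these two pieces fit together (the flags and paths here follow the SwarmKit README at the time of writing and may have changed; the service name and image are just examples), you start swarmd and then drive it with swarmctl:

swarmd -d /tmp/node-1 --listen-control-api /tmp/node-1/swarm.sock --hostname node-1   # start a single-node manager
export SWARM_SOCKET=/tmp/node-1/swarm.sock                                            # tell swarmctl where to connect
swarmctl service create --name redis --image redis:3.0.5                              # declare a service and its desired state
swarmctl service ls                                                                   # list the services known to the cluster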
In this short tutorial, we’ll give an overview of SwarmKit features and its Docker CLI commands, show you how to enable Swarm mode, and then set up your first Swarm cluster. These are the first steps necessary to create and run Swarm services, which can easily be scaled in a pinch.
SwarmKit Features and Concepts
Some of SwarmKit’s features include:
- a highly available, distributed store based on Raft
- services managed with a declarative API (implementing desired state and a reconciliation loop)
- automatic TLS keying, signing, key renewal, and rotation
- dynamic promotion/demotion of nodes, allowing you to change how many nodes (“managers”) are actively part of the Raft consensus
- integration with overlay networks and load balancing
Although a useful cluster will typically have more than one node, SwarmKit can function in single-node scenarios. This is useful for testing, and it allows you to use a consistent API and set of tools from single-node development all the way to cluster deployments.
Nodes can be either managers or workers. Workers merely run containers, while managers also actively take part in the Raft consensus. Managers are controlled through the SwarmKit API. One manager is elected as the leader; other managers merely forward requests to it. The managers expose the SwarmKit API, and using the API, you can indicate that you want to run a service.
A service, in turn, is specified by its desired state: for example, which image, how many instances, and so forth:
- The leader uses different subsystems (orchestrator, scheduler, allocator, dispatcher) to break services down into tasks
- A task corresponds to a specific container, assigned to a specific node
- Nodes know which tasks should be running, and will start or stop containers accordingly (through the Docker Engine API)
You can refer to the nomenclature in the SwarmKit repository for more details.
Swarm Mode
Docker Engine 1.12 features SwarmKit integration, meaning that all the features of SwarmKit can be enabled in Docker 1.12, and you can leverage them using Docker CLI and API. The Docker CLI features three new commands:
- docker swarm (enable Swarm mode; join a Swarm; adjust cluster parameters)
- docker node (view nodes; promote/demote managers; manage nodes)
- docker service (create and manage services)
The Docker API exposes the same concepts, and the SwarmKit API is also exposed (on a separate socket).
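For example, promoting and demoting nodes is a one-liner from the CLI (the node name below is a placeholder for one of your own nodes):

docker node ls             # list the nodes in the cluster
docker node promote node2  # make node2 a manager, so it joins the Raft consensus
docker node demote node2   # turn node2 back into a plain worker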
To follow along with this demo, you’ll need a VM with Docker 1.12 and Compose 1.8. To experiment with scaling, load balancing, and failover, you will ideally need a few VMs connected together. If you are using a Mac, the easiest way to get started on a single node is to install Docker for Mac.
You will also need a Dockerized application. If you need one for demo and testing purposes, you can use DockerCoins: it is built around a microservices architecture and features four very simple services in different languages, as well as a Redis data store.
You need to enable Swarm mode to use the new stuff. By default, everything runs as usual. With Swarm mode enabled, you “unlock” SwarmKit functions (i.e., services, out-of-the-box overlay networks).
Now, try a Swarm-specific command:
$ docker node ls
Error response from daemon: this node is not participating as a Swarm manager
Creating your first Swarm
The cluster is initialized with docker swarm init. This should be executed on a first, seed node. DO NOT execute docker swarm init on multiple nodes! You would end up with multiple disjoint clusters.
To create your cluster from node1, do:
docker swarm init
To check that Swarm mode is enabled, you can run the traditional docker info command:
docker info
The output should include:
Swarm: active
 NodeID: 8jud7o8dax3zxbags3f8yox4b
 Is Manager: true
 ClusterID: 2vcw2oa9rjps3a24m91xhvv0c
 ...
Next, we will run our first Swarm mode command. Let’s try the exact same command as earlier to list the nodes (well, the only node) of our cluster:
docker node ls
The output should look like the following:
ID             NAME             MEMBERSHIP  STATUS  AVAILABILITY  MANAGER STATUS
d1kf...12wt *  ip-172-31-25-65  Accepted    Ready   Active        Leader
Now you have a Swarm cluster!
If you have another node, you can add it to the cluster very easily. In fact, when we did docker swarm init, it showed us which command to use. If you missed it, you can see it again very easily by running:
docker swarm join-token worker
Then, log into the other node (for instance, with SSH) and copy-paste the docker swarm join command that was displayed before. That’s it! The node immediately joins the cluster and can run your workloads.
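For reference, the join command looks roughly like the following (the token and the manager address are placeholders; copy the exact values printed by docker swarm init or docker swarm join-token worker):

docker swarm join --token SWMTKN-1-<token> <manager-ip>:2377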
At this point, if you want to use your cluster to run an arbitrary container, you can do:
docker service create --name helloweb --publish 1234:80 nginx
This will create a container using the official NGINX image, and make it available from the outside world on port 1234 of the cluster. You can now connect to any cluster node on port 1234 and you will see the NGINX “welcome to NGINX” page.
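From there, you can inspect and scale the service; for instance (the replica count is arbitrary):

docker service ls                 # list services and their replica counts
docker service ps helloweb        # show the tasks backing the service and the nodes they run on
docker service scale helloweb=4   # ask Swarm to converge on 4 replicas spread across the cluster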
Here, I provided a simple introduction to enabling Swarm mode and setting up your first Swarm cluster. My in-depth ContainerCon training course will provide details about adding nodes to your Swarm, running and testing Swarm services, and more.
Register for LinuxCon + ContainerCon and sign up now to attend “Orchestrating Containers in Production at Scale with Docker Swarm,” presented by Jérôme Petazzoni.
Container Defense in Depth
The new age of image-based containers exploded onto the scene in early to mid-2013. Since the early days of the Docker container engine, we have heard questions about whether containers are secure enough. Our very own Dan Walsh was heard many times saying, “Docker containers don’t contain,” so the question is, can we safely use them? Especially in production?
Well, containers are really just fancy files and fancy processes, which means that almost all of the current information assurance techniques we have are applicable to containers. In fact, many of the tools we have today can be applied even more effectively to containers. If we can reprogram our architect brains a bit, we can apply a lot of what we already know to containers.
Let’s start by thinking about the control points that we have in a containerized environment. There are three main components to a production container environment, and we can control information flow at each layer.
Read more at The New Stack
How to Send and Receive Encrypted Data with GnuPG
The day has come where the security of your data—be it on a server, a work desktop, or your personal machine—is one of the single most important issues you can take on. Whether you’re hoping to secure company information or private email to clients, friends, and/or loved ones, you need to understand how to ensure your data cannot easily be read by the wrong people.
That’s where GNU Privacy Guard comes into play. GNU Privacy Guard (also called GPG or GnuPG) is an implementation of the OpenPGP standard that allows you to encrypt/decrypt and sign data communications. It is one of the single most important tools you can use for secure communications on the Linux desktop.
Several applications make use of GPG and, once you have a solid understanding of how it works, you can make the most of it. And that’s where this article comes in. I will assume no prior knowledge of the subject and walk you through the process of generating the keys necessary to encrypt your communications. In the end, you will have a public and private key pair that can then be used from the command line, from GUI tools, and even with various email clients to encrypt/decrypt email communications.
NOTE: I will be demonstrating on Elementary OS Freya. The process is the same on all distributions, with the exception of the installation process.
It all begins with a key
Without a key, you cannot unlock all the wonder that is GPG. To begin with, you must ensure that GnuPG is installed on your machine (chances are, it is). To do this, open a terminal window and issue the command which gpg. You should see /usr/bin/gpg reported. If you are returned a blank prompt, GnuPG is not installed. Here are the steps to rectify that issue:
- Open a terminal window
- Issue the command sudo apt-get install -y gnupg
- When prompted, type your sudo password and hit Enter
- Allow the installation to complete
GnuPG is ready to serve. The first thing you must do is generate a keypair. As I already mentioned, the key pair is the heart and soul of GnuPG. This key pair consists of two very important pieces:
- Private key
- Public key
It is with these two keys that you can encrypt/decrypt information. It works in a very specific way. Let me explain. Both keys are associated with your email address. The private key is that which (as the name suggests) you keep to yourself (and do not share with anyone). The public key, on the other hand, is the key that you share with other people that want to send you encrypted messages. They, in turn, will send you their public keys (so that you can send them encrypted information). Once you’ve exchanged keys, it works like this:
Suppose you want to send an encrypted message to Britta. Britta has given you her public key and you have given her yours. You open up Thunderbird (for which you have installed the Enigmail add-on), type up your message to Britta, and click to encrypt the message. Because you have Britta’s public key, the message will encrypt (using that key) and you can send it on. When Britta opens that email, because she has the matching private key, the email will be decrypted.
In other words, only the person holding the matching private key will be able to decrypt the information within that email. That is why it is so important to safeguard your private key. It is also the reason they are called “key pairs.”
Let’s generate your key pair. Open your terminal window and issue the command gpg --gen-key. At this point, you will be asked some questions. The questions are:
- Please select what kind of key you want (default being RSA and RSA)
- What keysize do you want (default being 2048)
- How long the key should be valid (default being key does not expire)
Next, you will be asked to enter information for the keyholder (you). The information required is:
- Real name
- Email address
- Comment
You will then be asked to verify the information you entered. If it is correct, type O (for Okay). Once you’ve okayed the information, you will be prompted to enter (and confirm) a passphrase for the key. Make sure this passphrase is complex (a combination of uppercase and lowercase letters, numbers, and symbols). Upon successful verification of your new passphrase, you will be instructed to perform other actions (web browsing, typing, reading email, etc.) so the gpg random number generator has a better chance of gaining enough entropy to create the key pair. When this process completes, gpg will report the success of the key creation along with the public key ID, the key fingerprint, and more. With that in hand, you are ready to begin.
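To double-check that the pair was created, you can list the keys in your keyring (the email address below is whichever one you entered during generation):

gpg --list-keys                  # show the public keys in your keyring
gpg --list-secret-keys           # show your private (secret) keys
gpg --fingerprint EMAIL_ADDRESS  # display the fingerprint of your new key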
Exporting your public key
This is a very important step. Without doing this, others will not be able to encrypt data for your eyes only. To export your public key, go back to your terminal window and issue the command:
gpg --armor --export EMAIL_ADDRESS > public_key.asc
Where EMAIL_ADDRESS is the actual email address associated with your key.
The file public_key.asc will contain your public key. You can now distribute that key to anyone needing to send you encrypted information. You can also upload that public key to a public key server, so anyone can obtain your public key and encrypt messages for you. To do this, go back to your terminal window and do the following:
- Issue the command gpg --list-keys
- Take note of the 8-character ID (the primary ID) associated with the key to be exported
- Issue the command gpg --keyserver pgp.mit.edu --send-keys PRIMARY_ID (where PRIMARY_ID is the actual ID of your public key)
Using that command, your key will be uploaded to the MIT key server and can then be downloaded and used by anyone.
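Anyone can then pull your key back down from that same server. As a rough sketch (PRIMARY_ID is your key’s ID, and EMAIL_ADDRESS is the address it was created with):

gpg --keyserver pgp.mit.edu --search-keys EMAIL_ADDRESS  # look up a key by email address
gpg --keyserver pgp.mit.edu --recv-keys PRIMARY_ID       # import a key by its ID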
Importing public keys
Suppose Nathan wants to encrypt an email to Olivia. To do so, Nathan must have Olivia’s public key. He has obtained that key by either Olivia sending it to him or by getting it from a public key server. How does he then import that key so GnuPG is aware of it? Simple.
I’ll assume Nathan received the file olivia_public_key.asc from Olivia and has saved it in a folder he created called ~/.PUBKEYS. To import that file, Nathan would open up a terminal window and issue the command:
gpg --import ~/.PUBKEYS/olivia_public_key.asc
That’s it. Nathan can now send encrypted email to Olivia. Because Olivia has the matching private key, she can decrypt that email.
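The same key pair works for files on the command line, not just email. A minimal sketch (the file names and recipient address are only examples):

gpg --encrypt --recipient olivia@example.com secret.txt  # writes secret.txt.gpg, readable only with Olivia's private key
gpg --decrypt secret.txt.gpg > secret.txt                # Olivia decrypts it with her private key and passphrase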
Your system knows
Once you’ve taken care of the above, anytime you use a GPG-aware application (such as the Enigmail plugin for Thunderbird), it will just know about the public and private keys. If you have multiple private keys in use, you might have to specify which one to use for your email account, but you will not have to do anything to associate public keys with email addresses.
You’re ready to go
Now that you understand the basics of GnuPG, you’re ready to start using it. This information should serve as a solid guide to help you work in a much more secure environment and to send and receive encrypted data.
Learn more about security and system management in the Essentials of System Administration course from The Linux Foundation.
Cloud and the ‘Existing X’ Dilemma
Anyone designing an IT strategy that incorporates cloud is faced with the question of what to do with existing assets, including apps, infrastructure, IT systems, processes, skills, etc. Some will tell you that the only way to handle the “existing X” parts of your strategy is to forget about them and allow those investments to languish. They’ll tell you they’re incompatible with a cloud world and operating model. Let that sink in. You have potentially billions of dollars invested in your current IT infrastructure, thousands of applications generating billions of dollars of bottom-line impact, and thousands of employees who’ve never touched anything cloud-related or public.
When you ask proponents of the “all assets left behind” strategy the question “How do I pull this off?”, they’ll hand wave and say “just look at Netflix’s architecture” five times in a single meeting, and then try to sell you a platform and consulting services to migrate your existing apps to their stack for 10 times what you spent on software licensing for that platform.
Read more at Apprenda
Red Hat Announces the Release of Red Hat Enterprise Linux Atomic Host 7.2.6
Red Hat, through Scott McCarty, is happy to announce the general availability of Red Hat Enterprise Linux Atomic Host 7.2.6, a maintenance update that adds many performance improvements to most of the included components.
For those who are behind on their Red Hat Enterprise Linux Atomic Host reading, we’ll take this opportunity to note that Red Hat’s Atomic Host offering is a specially crafted version of the commercial Red Hat Enterprise Linux (RHEL) operating system that has a small footprint and is designed to run containerized workloads.
“For the past two years, Red Hat has been promoting the concept of super-privileged containers (SPCs) to handle the ‘tools and agents’ use case,” says Scott McCarty. “We still believe this is an ideal approach for solving these use cases in a containerized environment. That said, there are plenty of situations where it would be nice to simply add an rpm, or even a handful, to Atomic Host.”
Read more at Softpedia
Trends in Corporate Open Source Engagement
In 1998, I was part of SGI when we started moving to open source and open standards after having long been a proprietary company. Since then, other companies have also moved rapidly to working with open source, and the use and adoption of open source technologies has skyrocketed over the past few years. Today, company involvement in open source technologies is fairly mature and can be seen in the following trends:

Open source is no longer optional
I will start by making a bold statement: all companies are using open source (OK, almost all). Some say that we live in a post-proprietary era, one in which open source is mixed with proprietary software in almost all areas of technology.
Read more at Opensource.com
Automate System Tasks Using cron on CentOS 7
A good portion of everyday tasks that Linux sysadmins do can be automated using cron. In this tutorial we are going to show you how to automate system tasks on a VPS running CentOS 7 as an operating system. You can use the same instructions for desktop versions too, including the Fedora distro and all other equivalents.
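As a quick taste of what the tutorial covers, jobs are added by editing your crontab; the schedule and backup command below are only an example:

crontab -e   # open the current user's crontab in an editor
# then add a line such as this one to archive /etc every day at 3:00 AM:
0 3 * * * tar czf /backup/etc-backup.tar.gz /etc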
Read the full article here
Save the Whale: Docker Rightfully Shuns Standardization
You can be forgiven for thinking Docker cares about standardization.
A little more than a year ago Docker donated “its software container format and its runtime, as well as the associated specifications,” to the Open Container Project to be housed under the Linux Foundation. In the FAQ, the Linux Foundation stressed, “Docker has taken the entire contents of the libcontainer project, including nsinit, and all modifications needed to make it run independently of Docker, and donated it to this effort.”
It was euphoric, this kumbaya moment. Many, including Google’s Kelsey Hightower, thought the container leader was offering full Docker standardization in our time. Nope.
As Docker founder Solomon Hykes declared last week, “We didn’t standardize our tech, just a very narrow piece where it made sense.” Importantly, what Hykes is arguing, a “reasonable argument against weaponized standards,” may be the best kind of “Docker standardization” possible.
Read more at InfoWorld



