
Google Open Sources Its 48V Data Center Rack

Bridging the transition to (and Google’s desire for) 48-volt racks.

Google is sharing Open Rack v2.0, a proposed standard for a data center rack that runs on 48-volt power, with the Open Compute Project (OCP). The company is gathering feedback on the standard before final submission.

Google announced the contribution via a blog post today, noting that it has been collaborating with Facebook on it. If the standard is accepted, it will be Google's first contribution to the OCP community.

Read more at SDxCentral

Intel’s Cloud Project Looks a Lot Like OpenStack

A fledgling open source project at Intel is wiping the slate clean in managing workloads in VMs, in containers, and on bare metal alike. The CIAO Project — “CIAO” is short for “Cloud Integrated Advanced Orchestrator” — has been described in a Register article as what might result if OpenStack were redone from scratch.

CIAO is split into three major components: controller, scheduler, and launcher. The controller provides all the top-level setting and policy enforcement around workloads, while the scheduler places workloads on available nodes.

Read more at InfoWorld

The Core Technologies for Deep Learning

This is the second article in a series taken from the insideHPC Guide to The Industrialization of Deep Learning. Given the compute- and data-intensive nature of deep learning, which overlaps significantly with the needs of the high-performance computing market, the TOP500 list provides a good proxy for current market dynamics and trends.

From the central computation perspective, today's multicore processor architectures dominate the TOP500, with 91% of systems based on Intel processors. Looking forward, however, we can expect further developments that may include core CPU architectures such as OpenPOWER and ARM. In addition, system-on-a-chip approaches that combine general-purpose processors with technologies such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs) can be expected to play an increasing role in deep learning applications.

Read more at insideHPC

State of Cloud Instance Provisioning

If you are deploying instances (a.k.a. virtual machines, or VMs) to a public cloud (e.g., AWS or Azure), then you might be wondering what your instance goes through before you can start using it.

This article is going to be about that. I hope you enjoy it. Please let me know at the end how you liked it!

All the operations that occur from the moment you request a VM to the moment you can log in to it are collectively called provisioning.

Most of the provisioning magic happens in the cloud provider's proprietary, internal software that manages the physical machines in its datacenters. A physical node is picked, the VM image you specified is copied to that machine, and the hypervisor boots up your VM. This is provisioning from the infrastructure side, and we are not going to be talking about it here.

Read more at Ahmet Alp Balkan Blog

How to Use ‘at’ Command to Schedule a Task on Given or Later Time in Linux

As an alternative to the cron job scheduler, the at command allows you to schedule a command to run once at a given time without editing a configuration file.

The only requirement is to install the utility and to start and enable its service.
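On a systemd-based Debian or Ubuntu system, for example, the setup and a one-off job might look something like the sketch below; package and service names can vary by distribution, and the backup command and path are only placeholders:

$ sudo apt-get install at
$ sudo systemctl enable --now atd

Then schedule a command by piping it to at with a time specification, and manage the queue with atq and atrm:

$ echo "tar czf /tmp/home-backup.tar.gz $HOME" | at 22:30
$ atq
$ atrm 1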


Read full article

Black Hat: Windows 10 at Risk From Linux Kernel Hidden Inside

A researcher exposes design and control flaws in Windows 10 versions that have the capability to run Linux.

Embedded within some versions of the latest Windows 10 update is the capability to run Linux. Unfortunately, that capability has flaws, which Alex Ionescu, chief architect at CrowdStrike, detailed in a session at the Black Hat USA security conference, referring to it as the Linux kernel hidden in Windows 10.

In an interview with eWEEK, Ionescu provided additional detail on the issues he found and has already reported to Microsoft. The embedded Linux inside Windows was first announced by Microsoft in March at the Build conference and brings some Ubuntu Linux capabilities to Microsoft's users. Ionescu said he reported issues to Microsoft during the beta period and some have already been fixed. The larger issue, though, is that there is now a new potential attack surface that organizations need to know about and risks that need to be mitigated, he said.

Read more at eWeek

Linux Kernel 4.7 Offers New Support for Virtual Devices, Drivers, and More

So, Linux kernel 4.7 is here. The release happened July 24, just over 10 weeks after the release of 4.6 and two weeks after the final release candidate (4.7-rc7). This release cycle was slightly longer than usual due to Torvalds' travel commitments.

That said, the last sprint was a pretty leisurely one, something Torvalds attributes to it being “summer in the northern hemisphere.” However, there were some “network drivers that got a bit more loving” and several “Intel Kabylake fixes” in the last batch of patches.

Maybe the biggest news, at least for end users, is that 4.7 includes drivers for the Polaris line of AMD GPUs. This is quite big because at least some of the models in the Polaris line are still not available at the time of writing. It also means that Linux is now at a stage where it gets AMD video card drivers before the hardware goes on sale. Nvidia should probably take note.

That said, Nouveau, the project that provides free drivers for Nvidia GPUs, is chugging along nicely and now supports yet another video card, in this case the GM108. The developers have also improved power sensor support for cards across the board. As for the third graphics card manufacturer, Intel, the i915 driver now supports color management.

In other news, the USB/IP subsystem has started supporting virtual devices. Introduced in kernel 3.17, USB/IP is already an interesting little project in itself. It allows you to access USB devices over the network, letting you, for example, peruse images from a webcam or scan documents on a scanner attached to a remote server as if it were locally connected. The only limitation up until kernel 4.6 was that the devices had to be real, physical devices.

The support for virtual devices in 4.7 makes USB/IP even more useful, especially for developers: Now they can access emulated smartphones and other emulated devices on virtual machines, or from elsewhere in the network, and run tests on them as if they were running on their personal machine.
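To give a flavor of how the original physical-device case works, sharing a device with the userspace usbip tools typically looks something like the sketch below; the bus ID and hostname are placeholders, and the package names for the tools vary by distribution.

On the machine that owns the device:

$ sudo modprobe usbip-host
$ sudo usbipd -D
$ usbip list -l
$ sudo usbip bind -b 1-1.2

And on the machine that wants to use it:

$ sudo modprobe vhci-hcd
$ usbip list -r server.example.com
$ sudo usbip attach -r server.example.com -b 1-1.2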

Other changes to the kernel include…

  • Another kernel, another increase in the number of supported ARM chips. In this new batch, we have support for first-generation Kindle Fires; the Exynos 3250 chip, which is used in Samsung’s Galaxy Gear line of smartwatches; and the Orange Pi single board computer, to name but three.

  • Speaking of ARM, 4.7 also comes with hibernate and suspend for ARM64 architectures.

  • If you’re into gaming on Linux, you’ll be thrilled to know that 4.7 comes with full support for the Microsoft Xbox One Elite Controller, and high-end gaming keyboards put out by Corsair. Sure, those toys are pricey, but, man, are they sexy.

  • In the networking department, 4.7 now supports Intel’s 8265 Bluetooth device and has improved support for Broadcom’s BCM2E71 chip.

  • An interesting new security feature included in 4.7 is the LoadPin module. Once activated, this module forces the kernel to load all the files it needs (modules, firmware, and so on) from one single filesystem. Assuming that said filesystem is itself immutable, like a read-only optical disc or a dm-verity-enabled device, this makes it possible to create a secure read-only system without having to individually sign every file.

For more information, read the official announcement of the release, or visit Phoronix, where they have more on the most significant changes that made their way into 4.7.

 

Open Source OVN to Offer Solid Virtual Networking For OpenStack

Open Virtual Networking (OVN) is a new open source project that brings virtual networking to the Open vSwitch user community and aims to develop a single, standard, vendor-neutral protocol for the virtualization of network switching functions. In their upcoming talk at LinuxCon North America in Toronto this month, Kyle Mestery of IBM and Justin Pettit of VMware will cover the current status of the OVN project, including the first software release planned for this fall. Here, Mestery and Pettit discuss the project and its goals and give us a preview of their talk, “OVN: Scalable Virtual Networking for Open vSwitch.”

Linux.com: Tell us briefly about the OVN project. What are its main goals and what are the problems the project aims to address?

Kyle Mestery: OVN is a project to build a virtual networking solution for Open vSwitch (OVS). The project was started in 2015 and is being developed in the OVS repository by a large group of contributors, including developers from VMware, Red Hat, IBM, and eBay.

The project can integrate with platforms such as OpenStack and Kubernetes to provide a complete and scalable virtual networking solution. OVN is built around a northbound (NB) and a southbound (SB) database. The NB DB stores the logical state of the system, while the SB DB stores information about logical flows and all of the chassis in the system.
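As a rough illustration of that split, and assuming the standard ovn-nbctl and ovn-sbctl utilities that ship with OVS (the switch and port names here are made up), creating a logical network writes intent into the NB DB, while the SB DB shows how it is bound to chassis:

$ ovn-nbctl ls-add web
$ ovn-nbctl lsp-add web web-vm1
$ ovn-nbctl lsp-set-addresses web-vm1 "00:00:00:00:00:01 10.0.0.11"
$ ovn-sbctl show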

Linux.com: How did you become involved in this project?

Kyle: Justin is one of the original members of the OVS team. I have been involved with OVS since 2012. We both wanted to provide a solid virtual networking solution for projects such as OpenStack, and we figured the best way to do this was to work on a new virtual networking solution we could develop with the rest of the OVS team.

Linux.com: What can you tell us about the project’s upcoming software release? What are important features and functionality?

Kyle: The first release of OVN will be this fall. It will include a complete solution to provide virtual networking, including supporting logical L3 routers and gateways, NAT, and floating IPs. It will provide an active/passive HA model for both the NB and SB DBs in the system as well. In addition, the integration with OpenStack Neutron will release this fall around the same time as the Newton release of OpenStack.

Linux.com: What interesting or innovative trends are you seeing around NFV?

Kyle: NFV has been a hot topic in recent years. One very interesting trend is around service function chaining, or SFC. SFC attempts to provide a chain of ports for packets to go through, allowing operators to provision different appliances to handle modifying and inspecting packets along the chain. OVN is working to integrate SFC support, and it's likely to land in the second release at this point.

Linux.com: Why is open source important to this industry?

Kyle: Open source provides the ability for disparate groups to work together to solve problems in a targeted manner.  For example, OVN has traditional software development houses and operators building the software and deciding the requirements for the release together. This means we understand how the system is likely to be deployed and get a lot of functional testing before the release is even considered stable.

Kyle Mestery
Kyle Mestery is a Distinguished Engineer and Director of Open Source Networking at IBM where he leads a team of upstream engineers. He is a member of the OpenStack Technical Committee and was the Neutron PTL for Juno, Kilo, and Liberty. He is a regular speaker at open source conferences and the founder of the Minnesota OpenStack Meetup. Kyle lives with his wife and family in Minnesota. You can find him on Twitter as @mestery.

 

 

Justin Pettit
Justin Pettit is a software developer at VMware. Justin joined VMware through the acquisition of Nicira, where he was a founding employee. He was one of the original authors of the OpenFlow Standard, working on both the specification and reference implementation. He is one of the lead developers of Open vSwitch and OVN, and involved in the development of VMware’s networking products. Prior to Nicira, Justin worked primarily on network security issues.

 

 
LinuxCon + ContainerCon Europe 
 
Look forward to three days and 175+ sessions of content covering the latest in containers, Linux, cloud, security, performance, virtualization, DevOps, networking, datacenter management and much more. You don’t want to miss this year’s event, which marks the 25th anniversary of Linux! Register now before tickets sell out. 

10 Skills to Land Your Open Source Dream Job

In the past two years, as we’ve seen open source move even further into the mainstream of practically every organization from the large to the small, I’ve thought a bit about how the landscape for open source job skills has changed, and what, if anything, might be added to the list of proficiencies to find a career in open source.

So, in the spirit of open source, I’ve remixed Jason’s original look at seven open source skills for career readiness and added three more of my own.

“Work on stuff that matters” is a famous call to action from founder and CEO of O’Reilly Media, Tim O’Reilly. But, how about working on stuff that matters while getting paid for it? There are an abundance of open source-related jobs out there if you’ve got the right skills….

Read more at OpenSource.com

Managing Encrypted Backups in Linux: Part 1

Encrypted backups are great, but what if something goes wrong and you can’t read your encrypted files? In this two-part series, I’ll show how to use rsync and duplicity as your belt-and-suspenders protection against data loss. Part 1 shows how to create and automate simple backups. In part 2, I’ll go into more details on file selection and backing up encryption keys.

My personal backup plan uses both encrypted and unencrypted backups, because my paranoia extends to worrying about broken encryption. If something goes wrong, like a corrupted file system, good luck recovering encrypted data.

I want to always have access to my files. My risk assessment is pretty simple: The most likely cause for losing access is file system corruption or hardware failure. Other possible, but less likely, hazards are fire or theft. I don’t need to encrypt files on my main PC (though in a future installment, I’ll look at backup options for encrypted volumes). My most important files are encrypted and uploaded to remote servers. I’m stuck with capped mobile broadband, so I can’t just encrypt and stuff everything into a remote server.

These backups are all automated except for one step. I use rsync, duplicity, and GnuPG. It works like this:

  • Nightly unencrypted dump of everything to a portable hard drive.
  • Nightly selective encrypted dump to a remote server.
  • Continuous encrypted upload to SpiderOak of my most important files.
  • Weekly rotation of unencrypted drives to my bank safety deposit box.

I rotate two unencrypted portable hard drives to my safe deposit box, so if my house ever burns down I’ll lose, at most, a week’s worth of files. Sometimes I dream of losing the whole lot; what do I need all that junk for? But, I save it anyway.

Simple Unencrypted Backups

Simple unencrypted backups are easy with good old rsync. Hard drives are huge and cheap, so it is feasible to back up everything on my PC every night. I use an rsync exclude file to avoid copying crud I know I’ll never need, such as some dotfiles and certain directories. This is a brief example of an exclude file:

.adobe/
.dbus/
.macromedia/
.Xauthority
.xsession-errors
downloads/
Videos/

This command performs the backup; remember to mind your trailing slashes. A trailing slash on the source directory copies only the contents of the directory, not the directory itself. No trailing slash copies the directory and its contents. The file paths in your exclude file are relative to the directory you are copying:

$ rsync -av -e "ssh -i /home/carla/.ssh/backup_rsa" --exclude-from=exclude.txt \
   /home/carla/ carla@backup:/home/carla/

I use passphrase-less SSH key authentication to log in to my remote server. Key authentication is not vulnerable to brute-force password attacks, and leaving the key without a passphrase lets the nightly script run unattended.
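If you haven't created such a key yet, generating and installing one looks roughly like this; the key path matches the one used in the rsync command above, and -N "" is what makes the key passphrase-less:

$ ssh-keygen -t rsa -f /home/carla/.ssh/backup_rsa -N ""
$ ssh-copy-id -i /home/carla/.ssh/backup_rsa.pub carla@backup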

The rsync command goes into a script, ~/bin/nightly-plain:

#!/bin/bash
rsync -av -e "ssh -i /home/carla/.ssh/backup_rsa" --exclude-from=exclude.txt \
  /home/carla/ carla@backup:/home/carla/

Remember to make it executable and limit read-write permissions to you only:

$ chmod 0700 nightly-plain

I added ~/bin/ permanently to my PATH by adding these lines to my ~/.profile:

# set PATH so it includes user's private bin if it exists
if [ -d "$HOME/bin" ] ; then
    PATH="$HOME/bin:$PATH"
fi

Putting a directory in your path means you can call scripts in that directory without having to spell out the full path. Create your personal cron job like this example, which runs every night at 11:05 PM:

$ crontab -e
05 23 * * * nightly-plain

Encrypted Backups with duplicity

duplicity goes to work 30 minutes later. Of course, you can adjust this interval to fit your own setup.

Ubuntu users should install duplicity from the duplicity PPA, because the version in Main is old and buggy. You also need python-paramiko so you can use SCP to copy your files.
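On Ubuntu, the installation might look something like this; the PPA address below is the duplicity team's usual one on Launchpad, so verify it before relying on it:

$ sudo add-apt-repository ppa:duplicity-team/ppa
$ sudo apt-get update
$ sudo apt-get install duplicity python-paramiko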

duplicity uses GPG keys, so you must create a GPG key:

$ gpg --gen-key

It is OK to accept the defaults. When you are prompted to enter your name, email address, and comment, use the comment field to give your key a useful label, such as “nightly encrypted backups.” Write down your passphrase, because you will need it to restore and decrypt your files. The worst part of creating a GPG key is generating enough entropy while it is building your key. The usual way is to wiggle your mouse for a couple of minutes. An alternative is to install rng-tools to create your entropy. After installing rng-tools, open a terminal and run this command to create entropy without having to sit and wiggle your mouse:

$ sudo rngd -f -r /dev/random

Now create your GPG key in a second terminal window. When it is finished, go back to the rngd window and stop it with Ctrl+C. Return to your GPG window and view your keys with gpg --list-keys:

$ gpg --list-keys
pub   2048R/30BFE75D 2016-07-12
uid                  Carla Schroder (nightly encrypted backups) <carla@example.com>
sub   2048R/6DFAE9E8 2016-07-12

Now you can make a trial duplicity run. This example encrypts and copies a single directory to a server on my LAN. Note the SCP syntax for the target directory; the example remote somefiles directory is /home/carla/somefiles. SCP and SSH paths are relative to the user's home directory, so you don't need to spell out the full path; if you do, you will create a new directory. Use the second part of the pub key ID to specify which GPG key to use:

$ duplicity --encrypt-key 30BFE75D /home/carla/somefiles \
    scp://carla@backupserver/somefiles

A successful run shows a bunch of backup statistics. You can view a file list of your remote files:

$ duplicity list-current-files \
   scp://carla@backupserver/somefiles

Test your ability to restore and decrypt your files by reversing the source and target directories. You will need your passphrase. This example decrypts and downloads the backups to the current directory:

$ PASSPHRASE="password" duplicity \
    scp://carla@backupserver/somefiles .

Or, restore a single file, which in this example is logfile. The file’s path is relative to the target URL, and the directory that you restore it to does not have to exist:

$ PASSPHRASE="password" duplicity --file-to-restore logfile \
  scp://carla@backupserver/somefiles logfiledir/

If you’re encrypting and backing up a single directory like the above example, you can put your duplicity command in a script, and put the script in a cron job. In part 2, I’ll show you how to fine-tune your file selection.

SpiderOak for Continual Encrypted Backups

I use SpiderOak to encrypt and upload my most important files as I work on them. This has saved my day many times from power outages and fat-fingered delete escapades. SpiderOak provides zero-knowledge offsite encrypted file storage, which means that if you lose your encryption key, you lose access to your files, and SpiderOak cannot help you. Any vendor that can recover your files can also snoop in them, get hacked, or hand them to law enforcement, so zero-knowledge is your strongest protection.

Come back for part 2 to learn more about file selection and backing up your keys.

Learn more skills for sysadmins in the Essentials of System Administration course from The Linux Foundation.