
This Week in Open Source News: One Million Students Have Taken Linux Foundation edX Courses, OPNFV Announces Verified Program & More

This week in Linux and open source news, the partnership between The Linux Foundation and edX has helped one million students learn open source for free and on their own schedules. Read the rest of this week’s top open source headlines below!

1) “The Linux Foundation has been able to reach so many students because of its partnership with edX (the non-profit online learning platform from Harvard University and Massachusetts Institute of Technology.)”

One Million Linux and Open Source Software Classes Taken – ZDNet

2) The Linux Foundation’s NFV project announces a verified program to help operators establish entry criteria for their proofs of concept (PoCs) and requests for proposals (RFPs).

OPNFV Verified Program Aims to Ease NFV Adoption – RCRWireless News

3) Sylabs, the company behind the open source Singularity container engine, announced its first commercial product.

Sylabs Launches Singularity Pro, a Container Platform for HPC – TechCrunch

4) The Linux Foundation releases speaker list for ELC+OpenIoT North America.

Embedded Linux Conference Sessions to Cover Real-Time Linux, RISC-V, Zephyr, and More

5) “SiFive has opened orders for the Hi-Five Unleashed, a single-board computer using the royalty-free RISC-V ISA.”

Hi-Five Unleashed: The First Linux-Capable RISC-V Single Board Computer is Here – TechRepublic

Containers Will Not Fix Your Broken Culture (and Other Hard Truths)

We focus so often on technical anti-patterns, neglecting similar problems inside our social structures. Spoiler alert: the solutions to many difficulties that seem technical can be found by examining our interactions with others. Let’s talk about five things you’ll want to know when working with those pesky creatures known as humans.

1. Tech is Not a Panacea

According to noted thought leader Jane Austen, it is a truth universally acknowledged that a techie in possession of any production code whatsoever must be in want of a container platform.

Or is it? Let’s deconstruct the unspoken assumptions. Don’t get me wrong—containers are delightful! But let’s be real: we’re unlikely to solve the vast majority of problems in a given organization via the judicious application of kernel features. If you have contention between your ops team and your dev team(s)—and maybe they’re all facing off with some ill-considered DevOps silo inexplicably stuck between them—then cgroups and namespaces won’t have a prayer of solving that.

Read more at ACM Queue

Open Source Project Trends for 2018

Last year, GitHub brought 24 million people from almost 200 countries together to code better and build bigger. From frameworks to data visualizations across more than 25 million repositories, you were busy in 2017—and the activity is picking up even more this year. With 2018 well underway, we’re using contributor, visitor, and star activity to identify some trends in open source projects for the year ahead.

Some of the projects that experienced the largest growth in activity were focused on cross-platform or web development.

Read more at GitHub

Proxmox Virtualization Manager

Without a doubt, if you only want to manage a few VMs, you are significantly better off with a typical virtualization manager than with a tool designed to support the operation of a public cloud platform. Although classic VM managers are wallflowers compared with the popular cloud solutions, they still exist and are very successful. Red Hat Enterprise Virtualization (RHEV) enjoys a popularity similar to SUSE Linux Enterprise Server (SLES) 12, to which you can add extensions for high availability (HA) and which supports alternative storage solutions.

Another solution has been around for years: Proxmox Virtual Environment (VE) by Vienna-based Proxmox Server Solutions GmbH. Recently, Proxmox VE reached version 5.0. In this article, I look at what Proxmox can do, what applications it serves, and what you might pay for support.

KVM and LXC

Proxmox VE sees itself as a genuine virtualization manager and not as a cloud in disguise. At the heart of the product, Proxmox combines two virtualization technologies from which you can choose: KVM, which is now the virtualization standard for Linux, and LXC, for the operation of lightweight containers. Proxmox thus gives you the choice of virtualizing a whole machine or relying on containers in which to run individual applications (Figure 1).

Read more at ADMIN

The Linux Ranger: What Is It and How Do You Use It?

For those of us who cut our technical teeth on the Unix/Linux command line, the relatively new ranger makes examining files a very different experience. A file manager that works inside a terminal window, ranger provides useful information and makes it very easy to move into directories, view file content or jump into an editor to make changes.

Unlike most file managers that work on the desktop but leave you to the whims of ls, cat and more to get a solid handle on files and contents, ranger provides a very nice mix of file listing and contents displays with an easy way to start editing. In fact, among some Linux users, ranger has become very popular.

Read more at Network World

Singularity HPC Container Start-Up – Sylabs – Emerges from Stealth

The driving force behind Singularity, the popular HPC container technology, is bringing the open source platform to the enterprise with the launch of a new venture, Sylabs Inc., which emerged this week from stealth mode.

Sylabs CEO Gregory Kurtzer, who founded the Singularity project along with other open source efforts, said his startup would bring the horsepower of Singularity containers to a broader set of users. Kurtzer said the launch of Sylabs coincides with greater enterprise reliance on high-end computing. “There’s a shift happening,” he said.

As the enterprise container ecosystem continues to expand, most of that infrastructure is designed to deliver micro-services. The startup’s goal is to deliver “enterprise performance computing,” or EPC, moving beyond services to handle more demanding artificial intelligence, machine and deep learning as well as advanced analytics workloads.

Read more at HPCWire

Advanced Dnsmasq Tips and Tricks

Many people know and love Dnsmasq and rely on it for their local name services. Today we look at advanced configuration file management, how to test your configurations, some basic security, DNS wildcards, speedy DNS configuration, and some other tips and tricks. Next week, we’ll continue with a detailed look at how to configure DNS and DHCP.

Testing Configurations

When you’re testing new configurations, you should run Dnsmasq from the command line, rather than as a daemon. This example starts it without launching the daemon, prints command output, and logs all activity:

# dnsmasq --no-daemon --log-queries
dnsmasq: started, version 2.75 cachesize 150
dnsmasq: compile time options: IPv6 GNU-getopt 
 DBus i18n IDN DHCP DHCPv6 no-Lua TFTP conntrack 
 ipset auth DNSSEC loop-detect inotify
dnsmasq: reading /etc/resolv.conf
dnsmasq: using nameserver 192.168.0.1#53
dnsmasq: read /etc/hosts - 9 addresses

You can see tons of useful information in this small example, including the version, compile-time options, system name service files, and the upstream nameserver in use. Ctrl+C stops it. By default, Dnsmasq does not have its own log file, so entries are dumped into multiple locations in /var/log. You can use good old grep to find Dnsmasq log entries. This example searches /var/log recursively, prints the line numbers after the filenames, and excludes /var/log/dist-upgrade:

# grep -irn --exclude-dir=dist-upgrade dnsmasq /var/log/

Note the fun grep gotcha with --exclude-dir=: Don’t specify the full path, but just the directory name.

You can give Dnsmasq its own logfile with this command-line option, using whatever file you want:

# dnsmasq --no-daemon --log-queries --log-facility=/var/log/dnsmasq.log

Or enter it in your Dnsmasq configuration file as log-facility=/var/log/dnsmasq.log.

Configuration Files

Dnsmasq is configured in /etc/dnsmasq.conf. Your Linux distribution may also use /etc/default/dnsmasq, /etc/dnsmasq.d/, and /etc/dnsmasq.d-available/. (No, there cannot be a universal method, as that is against the will of the Linux Cat Herd Ruling Cabal.) You have a fair bit of flexibility to organize your Dnsmasq configuration in a way that pleases you.

/etc/dnsmasq.conf is the grandmother as well as the boss. Dnsmasq reads it first at startup. /etc/dnsmasq.conf can call other configuration files with the conf-file= option, for example conf-file=/etc/dnsmasqextrastuff.conf, and directories with the conf-dir= option, e.g. conf-dir=/etc/dnsmasq.d.
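Putting those pieces together, a minimal layout might look like this (the file and directory names here are illustrative, not defaults):

```
# /etc/dnsmasq.conf -- hypothetical example layout
# Pull in one extra configuration file...
conf-file=/etc/dnsmasq-extra.conf
# ...and then every file in a drop-in directory
conf-dir=/etc/dnsmasq.d
```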

Whenever you make a change in a configuration file, you must restart Dnsmasq.

You may include or exclude configuration files by extension. The asterisk means include, and the absence of the asterisk means exclude:

conf-dir=/etc/dnsmasq.d/,*.conf,*.foo
conf-dir=/etc/dnsmasq.d,.old,.bak,.tmp

You may store your host configurations in multiple files with the --addn-hosts= option.

Dnsmasq includes a syntax checker:

$ dnsmasq --test
dnsmasq: syntax check OK.

Useful Configurations

Always include these lines:

domain-needed
bogus-priv

domain-needed stops Dnsmasq from forwarding plain hostnames (names without dots) to upstream servers, and bogus-priv stops reverse lookups for private IP ranges from leaving your network.

This limits your name services exclusively to Dnsmasq, and it will not use /etc/resolv.conf or any other system name service files:

no-resolv

Reference other name servers. The first example is for a local private domain. The second and third examples are OpenDNS public servers:

server=/fooxample.com/192.168.0.1
server=208.67.222.222
server=208.67.220.220

Or restrict just local domains while allowing external lookups for other domains. These are answered only from /etc/hosts or DHCP:

local=/mehxample.com/
local=/fooxample.com/

Restrict which network interfaces Dnsmasq listens on:

interface=eth0
interface=wlan1

Dnsmasq, by default, reads and uses /etc/hosts. This is a fabulously fast way to configure a lot of hosts, and the /etc/hosts file only has to exist on the same computer as Dnsmasq. You can make the process even faster by entering only the hostnames in /etc/hosts and letting Dnsmasq add the domain. /etc/hosts looks like this:

127.0.0.1       localhost
192.168.0.1     host2
192.168.0.2     host3
192.168.0.3     host4

Then add these lines to dnsmasq.conf, using your own domain, of course:

expand-hosts
domain=mehxample.com

Dnsmasq will automatically expand the hostnames to fully qualified domain names, for example, host2 to host2.mehxample.com.
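The expansion itself is easy to model. Here is a toy Python sketch (not Dnsmasq code) of what expand-hosts does with the hosts file and domain from this example:

```python
# Toy model of Dnsmasq's expand-hosts: bare hostnames from
# /etc/hosts get the configured domain appended, and queries
# for either the bare or the expanded name are answered.
HOSTS = {
    "localhost": "127.0.0.1",
    "host2": "192.168.0.1",
    "host3": "192.168.0.2",
    "host4": "192.168.0.3",
}
DOMAIN = "mehxample.com"

def expand(hostname, domain=DOMAIN):
    """Return the fully qualified name for a bare hostname."""
    return hostname if "." in hostname else f"{hostname}.{domain}"

def resolve(name):
    """Answer queries for both the bare and the expanded name."""
    for host, addr in HOSTS.items():
        if name in (host, expand(host)):
            return addr
    return None
```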

DNS Wildcards

In general, DNS wildcards are not a good practice because they invite abuse. But there are times when they are useful, such as inside the nice protected confines of your LAN. For example, Kubernetes clusters are considerably easier to manage with wildcard DNS, unless you enjoy making DNS entries for your hundreds or thousands of applications. Suppose your Kubernetes domain is mehxample.com; in Dnsmasq a wildcard that resolves all requests to mehxample.com looks like this:

address=/mehxample.com/192.168.0.5

The address to use in this case is the public IP address for your cluster. This answers requests for hosts and subdomains in mehxample.com, except for any that are already configured in DHCP or /etc/hosts.
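That precedence is worth making explicit: configured entries win, and the wildcard catches everything else under the domain. A toy Python sketch (not Dnsmasq code; the addresses are the ones from this example):

```python
# Toy model of Dnsmasq lookup order with a wildcard address= line:
# explicit /etc/hosts or DHCP entries are answered first, then
# address=/mehxample.com/192.168.0.5 catches the rest of the domain.
EXPLICIT = {"host2.mehxample.com": "192.168.0.1"}
WILDCARD = ("mehxample.com", "192.168.0.5")

def lookup(name):
    if name in EXPLICIT:            # configured host wins
        return EXPLICIT[name]
    domain, addr = WILDCARD
    if name == domain or name.endswith("." + domain):
        return addr                 # wildcard answer
    return None                     # not ours; forwarded upstream
```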

Next week, we’ll go into more detail on managing DNS and DHCP, including different options for different subnets, and providing authoritative name services.

Additional Resources

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

The Complete Schedule for Open Networking Summit North America Is Now Live

Early Registration Ends in 3 Days; Save $805 & Register Now!

The Open Networking Summit North America (ONS) schedule is now live and features 75+ sessions across 6 tracks:

  • Networking Business and Architecture
  • Service Provider & Cloud Networking (Business & Architecture)
  • Service Provider & Cloud Networking (Technical)
  • Enterprise IT (Business & Architecture)
  • Enterprise IT DevOps (Technical)
  • Networking Futures

Read more at The Linux Foundation

DevOps Metrics

Collecting measurements that can provide insights across the software delivery pipeline is difficult. Data must be complete, comprehensive, and correct so that teams can correlate data to drive business decisions. For many organizations, adoption of the latest best-of-breed agile and DevOps tools has made the task even more difficult because of the proliferation of multiple systems of record-keeping within the organization.

One of the leading sources of cross-organization software delivery data is the annual State of DevOps Report (found at https://devops-research.com/research.html). This industry-wide survey provides evidence that software delivery plays an important role in high-performing technology-driven organizations. The report outlines key capabilities in technology, process, and cultural areas that contribute to software-delivery performance and how this, in turn, contributes to key outcomes such as employee well-being, product quality, and organizational performance.

Read more at ACM Queue

Integrating Continuous Testing for Improved Open Source Security

Preventing new security flaws is conceptually simple and aligns closely with your (hopefully) existing quality control. Because vulnerabilities are just security bugs, a good way to prevent them is to test for them as part of your automated test suite.

The key to successful prevention is inserting the vulnerability test into the right steps in the process, and deciding how strict to make it. Being overly restrictive early on may hinder productivity and create antagonism among your developers. On the flip side, testing too late can make fixing issues more costly, and being too lenient can eventually let vulnerabilities make it to production. It’s all about finding the right balance for your team and process.

Here are a few considerations on how to strike the right balance.
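As one concrete (and entirely hypothetical) shape this can take, a dependency-audit check can live in the ordinary test suite, with a severity threshold controlling how strict the gate is. The advisory data and package names below are made up for illustration:

```python
# Hypothetical sketch: fail the build when a dependency matches a
# known advisory at or above a chosen severity threshold. A low
# threshold is a strict gate; "critical" is the most lenient.
ADVISORIES = {  # made-up advisory data for illustration
    ("leftpad", "1.0.0"): "high",
    ("oldssl", "0.9.8"): "critical",
}
SEVERITIES = ["low", "medium", "high", "critical"]

def audit(dependencies, threshold="high"):
    """Return (name, version) pairs that breach the threshold."""
    limit = SEVERITIES.index(threshold)
    return [
        dep for dep in dependencies
        if dep in ADVISORIES and SEVERITIES.index(ADVISORIES[dep]) >= limit
    ]

def test_no_vulnerable_dependencies():
    deps = [("leftpad", "1.0.0"), ("requests", "2.31.0")]
    # A lenient gate lets the "high" advisory through...
    assert audit(deps, threshold="critical") == []
    # ...while a stricter gate catches it.
    assert audit(deps, threshold="high") == [("leftpad", "1.0.0")]
```

Tuning the threshold per pipeline stage (lenient on developer branches, strict before release) is one way to strike the balance the article describes.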

Read more at O’Reilly