You’ve probably never heard of the late Jim Weirich or his software. But you’ve almost certainly used apps built on his work.
Weirich helped create several key tools for Ruby, the popular programming language used to write the code for sites like Hulu, Kickstarter, Twitter, and countless others. His code was open source, meaning that anyone could use it and modify it. “He was a seminal member of the western world’s Ruby community,” says Justin Searls, a Ruby developer and co-founder of the software company Test Double.
When Weirich died in 2014, Searls noticed that no one was maintaining one of Weirich’s software-testing tools. That meant there would be no one to approve changes if other developers submitted bug fixes, security patches, or other improvements. Any tests that relied on the tool would eventually fail, as the code became outdated and incompatible with newer tech.
Selecting technologies means committing to solutions that will support an active, growing business over the long term, so it requires careful consideration and foresight. When an enterprise bets on the wrong horse, the result is often significantly higher development costs and reduced flexibility, both of which can stick around for the long haul.
In the past decade, adoption of open source software at the enterprise level has flourished, as more businesses discover the considerable advantages open source solutions hold over their proprietary counterparts, and as the enterprise mentality around open source continues to shift.
Enterprises looking to make smart use of open source software will find plenty of great reasons to do so. Here are just some of them.
For the longest time, naysayers were fairly intent on shutting down anyone who believed the Linux desktop would eventually make serious headway in the market. Although Linux has yet to crack 5 percent of that market, it continues to claw its way up. And with the help of modern, efficient, user-friendly distributions like PinguyOS, it could make even more headway.
If you’ve never heard of PinguyOS, you’re in for a treat — especially if you’re new to Linux. PinguyOS is a Linux distribution, created by Antoni Norman, that is based on Ubuntu. The intention of PinguyOS is to look good, work well, and — most importantly — be easy to use. For the most part, the developers have succeeded with aplomb. It’s not perfect, but the PinguyOS desktop is certainly one that could make migrating to Linux a fairly easy feat for new users.
In this article, I’ll take a look at what PinguyOS has to offer.
What makes PinguyOS tick?
As I’ve already mentioned, at the heart of PinguyOS is Ubuntu. The current build is a bit behind, at Ubuntu 14.04. This means users will not only enjoy some of the best hardware recognition on the Linux market, but will also have the apt package manager ready to serve. Of course, new users really don’t care about what package manager is employed to install and update applications. What will draw them in is a shiny GUI that makes everything a veritable point-and-click party. That’s where GNOME comes in. I’ve already gone on record saying that GNOME is one of the slickest and most stable desktops on the market. But PinguyOS doesn’t settle for a vanilla take on GNOME. Instead, PinguyOS adds a few extra options to make migration from other desktops a breeze.
To the standard GNOME desktop, PinguyOS adds a quick launch Docky bar to the bottom of the screen and an autohide Docky Places bar on the left edge of the screen (Figure 1).
Figure 1: The default PinguyOS desktop with the Places Dock in action.
As you can see on the default desktop, there is one piece that tends to appeal to Linux users: Conky. I’ve used Conky on a number of desktops, for various purposes. In some instances, it’s actually quite handy. For many a Linux user, it seems a must to have detailed reports on such things as CPU, memory, and network usage; uptime; running processes; and more. Don’t get me wrong, Conky is a great tool. However, for new users, I’d say it’s far less interesting or useful. The thing is, new users won’t even know what that window on the desktop is. Experienced Linux users will see it, think “That’s Conky,” and know how to easily get rid of it (should they not want it on their desktop) or configure it. New users? Not so much.
But that is a rather minor issue for a desktop that has so much to offer. Set aside Conky and you’ll see a Linux distribution that tosses just about everything it can at the desktop, in order to create something very useful. The developers have gone out of their way to add the necessary tools to make GNOME a desktop that welcomes just about every type of user. One way the PinguyOS developers have managed this is via GNOME extensions. Open up the Tweaks tool, click on Extensions, and you’ll see a healthy list of additions to GNOME (Figure 2).
Figure 2: The PinguyOS GNOME extension list.
All told, there are 23 extensions added to GNOME — some of which are enabled by default, some of which are not.
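If you prefer the command line, you can also pull the list of enabled extensions straight out of dconf. Here is a quick check (the exact set returned will, of course, vary from machine to machine):

gsettings get org.gnome.shell enabled-extensions    # prints the UUIDs of the enabled extensions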
Installed applications
Beyond Conky and GNOME extensions, what else can you expect to find installed, by default, on PinguyOS? Click on the Menu in the top left of the desktop, and you’ll see a fairly complete list of applications, such as:
GNOME Do (do things as quickly as possible)
Shutter (capture and share screenshots)
PlayOnLinux (install games via Wine)
Steam (manage Steam games)
Pinta (image creation/editing)
Empathy (instant messaging client)
Firefox (web browser)
Remmina (remote desktop client)
Skype (VoIP client)
TeamViewer 10 (tool for remote support)
Thunderbird (email client)
LibreOffice (full-featured office suite)
wxBanker (finance manager)
Plex Home Theatre/Media Manager (media center)
Clementine (audio player)
OpenShot (video editor)
VLC (media player)
That’s a healthy list of tools — one that comes at a slight price. The minimum installation size of PinguyOS is 15.2 GB. That’s nearly four times the size of a minimum Ubuntu installation. However, you do get a considerable amount of software for your trouble — something that will greatly appeal to new users. Instead of having to bother installing a number of software titles (after OS installation), you should have nearly everything you need to get your work done, in a very user-friendly environment. And with GNOME in control, you can be certain PinguyOS will deliver a very stable and slick desktop.
Tiny faults
If I had to find something wrong with PinguyOS, it would be three particular somethings. The first two, I’ve already mentioned: being based on an outdated version of Ubuntu and the addition of Conky by default. The PinguyOS developers should consider working with Ubuntu 16.04 (also an LTS release). Also, Conky should be an optional addition, one that includes a user-friendly setup wizard upon first boot. The third isn’t quite as minor a nit. Instead of including GNOME Software as the default front end for the package manager, PinguyOS opts to include both Synaptic and the now-defunct Ubuntu Software Center. First off, Ubuntu Software Center shouldn’t be included on any distribution these days. The tool is broken, slow, and buggy. But adding Synaptic as a second option (or, rather, a first option, as it is the tool included in the Dock) is a mistake. This is not to say Synaptic isn’t a solid tool; it is. But considering how much better (and, again, more user-friendly) GNOME Software is, it would only make sense to include it as the default.
As I said, Synaptic is a good tool, just not one I’d recommend to new users. Since PinguyOS’s focus is simplicity, maybe migrating to GNOME Software would be the right move.
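Should you want to try that swap yourself, GNOME Software is a quick install on an Ubuntu 16.04 base. This assumes the package is available in your configured repositories; it isn’t in the stock 14.04 archive:

sudo apt-get install gnome-software    # adds the GNOME Software front end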
Minor nits, major hits
Set aside the minor nits found in PinguyOS and you’ll see there is quite a lot to love about this distribution. It’s polished, stable, and user-friendly, and it offers all the software you need to get your work done. What more can you ask for from a desktop operating system?
The Cloud Foundry Container Runtime is the new name for Kubo, which is Kubernetes running on BOSH. In today’s episode of The New Stack Makers, TNS founder Alex Williams caught up with Cloud Foundry CTO Chip Childers to learn more about Cloud Foundry’s plans for this new runtime, with Childers highlighting how BOSH is serving the needs of today’s developers.
Childers went on to note that the Cloud Foundry Container Runtime and application runtime will sit next to one another, allowing for shared identity between Kubernetes and Cloud Foundry application runtimes.
“I think that what’s most important right now is thinking about the developers. What is it that the developer in the enterprise needs? …”
Kubernetes isn’t even easy to pronounce, much less explain. That’s why we recently set out to demystify Kubernetes in plain English, so that a wide audience can understand it. (We also noted that the pronunciation may vary a bit, and that’s OK.)
Of course, helping your organization understand Kubernetes isn’t the same thing as helping everyone understand why Kubernetes – and orchestration tools in general – are necessary in the first place.
If you need to make the case for microservices, for example, you’re pitching an architectural approach to building and operating software, not a particular platform for doing so. With Kubernetes, you need to do both: Pitch orchestration as a means of effectively managing containers (and, increasingly, containerized microservices) and Kubernetes as the right platform for doing so.
No advance in information technology in the past six decades has offered a greater range of quantifiable benefits than has virtualization. Many IT professionals think of virtualization in terms of virtual machines (VMs) and their associated hypervisors and operating-system implementations, but that only skims the surface. An increasingly broad set of virtualization technologies, capabilities, strategies, and possibilities is redefining major elements of IT in organizations everywhere.
Virtualization definition
Examined in a broader context, virtualization is the art and science of using software to simulate or emulate the function of an object or resource so that it behaves identically to the corresponding physically realized object. In other words, we use an abstraction to make software look and behave like hardware, with corresponding benefits in flexibility, cost, scalability, reliability, and often overall capability and performance, across a broad range of applications. Virtualization, then, makes “real” that which is not, applying the flexibility and convenience of software-based capabilities and services as a transparent substitute for the same realized in hardware.
Google engineers figured out a way to improve latency within the company’s software-defined networking (SDN) platform — Andromeda. Google released the latest version of the platform, Andromeda 2.1, today and says it reduces network latency between Google Compute Engine virtual machines by 40 percent compared to Andromeda 2.0.
Google fellow Amin Vahdat said most people think about bandwidth when they think about the performance of a network. “Our infrastructure does quite well on measures of bandwidth,” said Vahdat. “But most distributed applications care more about latency than bandwidth. We’re constantly getting new hardware to increase bandwidth. But with latency, it does involve these entrenched software layers. We’ve really focused on the latency of our network.”
The Ubuntu desktop has evolved a lot over the years. Ubuntu started off with GNOME 2, then moved on to Unity. From there, it came home to its roots with the GNOME 3 desktop. In this article, we’ll look at the Ubuntu desktops and compare them.
Ubuntu official – now with GNOME 3
Ubuntu moving on from Unity was perhaps the best thing for the project. Not only did it free up resources to refocus Ubuntu’s efforts on other elements of the distro, but it also brought the distro back to its roots by returning to the GNOME desktop.
One of the first things you’ll notice about the current Ubuntu 17.10 release is that even though the switch was made to GNOME 3, the installation feels about the same. Obviously there are subtle differences, but overall it’s a very close match to the Unity desktop.
In this article series, we have been discussing the Understanding OPNFV book (see links to previous articles below). Here, we will look at OPNFV continuous testing, as well as writing and onboarding virtual network functions (VNFs).
As we have discussed previously, OPNFV integrates a number of network function virtualization (NFV) related upstream projects and continuously tests specific combinations. The test projects in OPNFV are meant to validate the numerous scenarios using multiple test cases. The following figure shows a high-level view of the OPNFV testing framework.
OPNFV Testing Projects (Danube Release).
OPNFV testing projects may be viewed through four distinct lenses: coverage, scope, tier, and purpose. Coverage determines whether a test covers the entire stack, a specific subsystem (e.g., OpenStack), or just one component (e.g., OVS) of an OPNFV scenario, which is a specific integration of upstream projects — a reference architecture. Scope is classified into functional, performance, stress, and compliance testing. A tier, on the other hand, describes the complexity (end-to-end network service vs. smoke) and frequency (daily vs. weekly) of a particular test. Finally, the purpose defines why a test is included (e.g., to gate a commit or to simply be informational).
The following three broad OPNFV testing projects and five sub-projects are covered in the book:
Functest: Provides functional testing and validation of various OPNFV scenarios. Functest includes subsystem and end-to-end tests (e.g., Clearwater vIMS).
Yardstick: Executes performance testing on OPNFV scenarios based on ETSI reference test suites. The five sub-projects are as follows:
QTIP: Compute benchmarking (Note: The Euphrates release includes storage benchmarking as well)
Bottlenecks: Stress testing
Dovetail: Includes compliance tests. Dovetail will form the foundation of the future OPNFV Compliance Verification Program (CVP) for NFV infrastructure, VIM, and SDN controller commercial products.
Note that since publication of the book, two more testing-related projects have been included in the Euphrates release: Sample VNF and NFVBench.
The QTIP project is described in the book as follows:
Remember benchmarks such as MIPS or TPC-C, which attempted to provide a measure of infrastructure performance through one single number? QTIP attempts to do the same for NFVI compute performance (storage and networking are part of the roadmap). QTIP is a Yardstick plugin that collects metrics from a number of tests selected from five different categories: integer, floating point, memory, deep packet inspection, and cipher speeds. These numbers are crunched to produce a single QTIP benchmark. The baseline is 2,500, and bigger is better! In that sense, one of the goals of QTIP is to make Yardstick results very easy to consume.
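To make the single-number idea concrete, here is a rough sketch of one way to collapse five category scores into one figure. To be clear, this is not QTIP’s actual formula, and the five scores are invented: each category is normalized against the 2,500 baseline, then combined with a geometric mean and rescaled.

echo "2310 2650 2480 2390 2720" | awk '{
    p = 1
    for (i = 1; i <= NF; i++) p *= $i / 2500    # normalize each category score to the baseline
    print 2500 * p ^ (1 / NF)                   # geometric mean, rescaled to baseline units
}'

A geometric mean keeps a single outlier category from dominating the composite, which is one common design choice for this kind of index.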
Another attractive feature of OPNFV is being able to view test results in real time through a unified dashboard. Behind the scenes, the OPNFV community has made a significant investment in test APIs, a test case database, a results database, and results visualization efforts. Scenario reporting results are available for Functest, Yardstick, StorPerf, and VSPERF.
Results of Functest.
The entire OPNFV stack, ultimately, serves one purpose: to run virtual network functions that in turn constitute network services. Chapter 9, which is meant for VNF vendors, looks at two major considerations: how to write VNFs and how to onboard them. After a brief discussion of modeling languages and general principles around VNF packaging, the book gives a concrete example where the Clearwater virtual IP multimedia system (vIMS) VNF is onboarded and tested on OPNFV along with the Cloudify management and orchestration software.
The following figure shows how Clearwater vIMS — a complex cloud native application with a number of interconnected virtual instances — is initially deployed on OPNFV. Once onboarded, end-to-end tests in Functest fully validate the functionality of the VNF along with the underlying VIM, NFVI, and SDN controller functionality.
Also, check out the next OPNFV Plugfest, where members and non-members come together to test OPNFV along with other open source and commercial products.
Nothing beats hands-on playing with IPv6 addresses to get the hang of how they work, and setting up a little test lab in KVM is as easy as falling over — and more fun. In this two-part series, we will learn about IPv6 private addressing and configuring test networks in KVM.
QEMU/KVM/Virtual Machine Manager
Let’s start with understanding what KVM is. Here I use KVM as a convenient shorthand for the combination of QEMU, KVM, and the Virtual Machine Manager that is typically bundled together in Linux distributions. The simplified explanation is that QEMU emulates hardware, and KVM is a kernel module that creates the guest state on your CPU and manages access to memory and the CPU. Virtual Machine Manager is a lovely graphical overlay to all of this virtualization and hypervisor goodness.
But you’re not stuck with pointy-clicky, no, for there are also fab command-line tools to use — such as virsh and virt-install.
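As a taste of that non-pointy-clicky route, here is a minimal virt-install run that creates a test VM. The name, sizes, and ISO path are placeholders, and supported flags can vary a little between virt-install versions:

virt-install \
  --name ipv6-lab1 \
  --memory 1024 \
  --vcpus 1 \
  --disk size=10 \
  --cdrom /var/lib/libvirt/images/install.iso \
  --network network=default

Once the guest is running, virsh list shows it alongside any other guests.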
Configuring IPv6 networking in KVM is just like configuring IPv4 networks. The main difference is those weird long addresses. Last time, we talked about the different types of IPv6 addresses. There is one more IPv6 unicast address class, and that is unique local addresses, fc00::/7 (see RFC 4193). This is analogous to the private address classes in IPv4, 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16.
This diagram illustrates the structure of the unique local address space. 48 bits define the prefix and global ID, 16 bits are for subnets, and the remaining 64 bits are the interface ID:
| 7 bits |1| 40 bits | 16 bits | 64 bits |
+--------+-+------------+-----------+----------------------------+
| Prefix |L| Global ID | Subnet ID | Interface ID |
+--------+-+------------+-----------+----------------------------+
Here is another way to look at it, which might be more helpful for understanding how to manipulate these addresses:
| Prefix | Global ID | Subnet ID | Interface ID |
+--------+--------------+-------------+----------------------+
| fd | 00:0000:0000 | 0000 | 0000:0000:0000:0000 |
+--------+--------------+-------------+----------------------+
fc00::/7 is divided into two /8 blocks, fc00::/8 and fd00::/8. fc00::/8 is reserved for future use. So, unique local addresses always start with fd, and the rest is up to you. The L bit, which is the eighth bit, is always set to 1, which makes fd00::/8. Setting it to zero makes fc00::/8. You can see this with subnetcalc:
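(Exact output varies between subnetcalc versions, so it is omitted here; the part to look for is the binary expansion of the first address group.)

subnetcalc fd00::/8    # first octet prints as 11111101; the trailing 1 is the L bit
subnetcalc fc00::/8    # reserved block: first octet is 11111100, with the L bit zero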
RFC 4193 requires that addresses be randomly generated. You can invent addresses any way you choose, as long as they start with fd, because the IPv6 cops aren’t going to invade your home and give you a hard time. Still, it is a best practice to follow what RFCs say. The addresses must not be assigned sequentially or with well-known numbers. RFC 4193 includes an algorithm for building a pseudo-random address generator, or you can find any number of generators online.
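If you just want something quick and dirty for a test lab, a one-liner will do. This is emphatically not the RFC 4193 algorithm, just a stand-in that pulls 40 random bits from /dev/urandom and formats them into a /48 prefix:

printf 'fd%s:%s%s:%s%s::/48\n' $(od -An -N5 -tx1 /dev/urandom)

Running that prints something like fd4c:90b2:1ed5::/48, leaving the 16-bit subnet field and the 64-bit interface IDs for you to dole out.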
Unique local addresses are not centrally managed like global unicast addresses (assigned to you by your Internet service provider), but even so the probability of address collisions is very low. This is a nice benefit when you need to merge some local networks or want to route between discrete private networks.
You can mix unique local addresses and global unicast addresses on the same subnets. Unique local addresses are routable and require no extra router tweaks. However, you should configure your border routers and firewalls to not allow them to leave your network except between private networks at different locations.
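Here is a sketch of what that can look like with ip6tables, where eth0 stands in for your WAN-facing interface (adapt the rules to whatever firewall tooling you actually use):

ip6tables -A FORWARD -s fc00::/7 -o eth0 -j DROP    # don't forward packets from ULA sources out the WAN
ip6tables -A FORWARD -d fc00::/7 -o eth0 -j DROP    # don't forward packets to ULA destinations out the WAN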
RFC 4193 advises against mingling AAAA and PTR records with your global unicast address records, because there is no guarantee that they will be unique, even though the odds of duplicates are low. Just like we do with IPv4 addresses, keep your private local name services and public name services separate. The tried-and-true combination of Dnsmasq for local name services and BIND for public name services works just as well for IPv6 as it does for IPv4.
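On the Dnsmasq side, keeping your ULA records private can be as simple as two lines of configuration. The domain and address here are hypothetical:

# /etc/dnsmasq.conf
local=/lan.example/                                   # answer for this zone locally; never forward it upstream
host-record=server1.lan.example,fd4c:90b2:1ed5::10    # publishes an AAAA record on a ULA address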
Pseudo-Random Address Generator
One example of an online address generator is Local IPv6 Address Generator. You can find many cool online tools like this. You can use it to create a new address, or feed it your existing global ID and play with creating subnets.
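For example, if your generated prefix were fd4c:90b2:1ed5::/48 (the same hypothetical prefix used above), the 16-bit subnet field would give you 65,536 /64 networks to assign:

fd4c:90b2:1ed5:0000::/64    # first subnet
fd4c:90b2:1ed5:0001::/64    # second subnet
fd4c:90b2:1ed5:ffff::/64    # last of the 65,536 possible subnets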
Come back next week to learn how to plug all of this IPv6 goodness into KVM and do live testing.
Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.