OpenStack, the massive open source project that provides large businesses with the software tools to run their data center infrastructure, is now almost eight years old. While it has had its ups and downs, hundreds of enterprises now use it to run their private clouds, and over two dozen public clouds use the project’s tools. Users now include the likes of AT&T, Walmart, eBay, China Railway, GE Healthcare, SAP, Tencent and the Insurance Australia Group, to name just a few.
“One of the things that’s been happening is that we’re seven years in and the need for turning every type of infrastructure into programmable infrastructure has been proven out. It’s no longer a debate,” OpenStack COO Mark Collier told me ahead of the project’s semi-annual developer conference this week. OpenStack’s own surveys show that the project’s early adopters, who previously only tested it for their clouds, continue to move their production workloads to the platform, too. “We passed the hype phase,” Collier noted.
You’ve probably never heard of the late Jim Weirich or his software. But you’ve almost certainly used apps built on his work.
Weirich helped create several key tools for Ruby, the popular programming language used to write the code for sites like Hulu, Kickstarter, Twitter, and countless others. His code was open source, meaning that anyone could use it and modify it. “He was a seminal member of the western world’s Ruby community,” says Justin Searls, a Ruby developer and co-founder of the software company Test Double.
When Weirich died in 2014, Searls noticed that no one was maintaining one of Weirich’s software-testing tools. That meant there would be no one to approve changes if other developers submitted bug fixes, security patches, or other improvements. Any tests that relied on the tool would eventually fail, as the code became outdated and incompatible with newer tech.
Selecting technologies means committing to solutions that will support an active, growing business over the long term, so it requires careful consideration and foresight. When an enterprise bets on the wrong horse, the result is often significantly higher development costs and reduced flexibility, both of which can stick around for the long haul.
In the past decade, adoption of open source software at the enterprise level has flourished, as more businesses discover the considerable advantages open source solutions hold over their proprietary counterparts, and as the enterprise mentality around open source continues to shift.
Enterprises looking to make smart use of open source software will find plenty of great reasons to do so. Here are just some of them.
For the longest time, naysayers were fairly intent on shutting down anyone who believed the Linux desktop would eventually make serious headway in the market. Although Linux has yet to breach 5 percent of that market, it continues to claw its way up. And with the help of very modern, highly efficient, user-friendly environments, like PinguyOS, it could make even more headway.
If you’ve never heard of PinguyOS, you’re in for a treat — especially if you’re new to Linux. PinguyOS is a Linux distribution, created by Antoni Norman, that is based on Ubuntu. The intention of PinguyOS is to look good, work well, and — most importantly — be easy to use. For the most part, the developers have succeeded with aplomb. It’s not perfect, but the PinguyOS desktop is certainly one that could make migrating to Linux a fairly easy feat for new users.
In this article, I’ll take a look at what PinguyOS has to offer.
What makes PinguyOS tick?
As I’ve already mentioned, at the heart of PinguyOS is Ubuntu. The current build is a bit behind at Ubuntu 14.04. This means users will not only enjoy some of the best hardware recognition on the Linux market, but the apt package manager is ready to serve. Of course, new users really don’t care about what package manager is employed to install and update applications. What will draw them in is a shiny GUI that makes everything a veritable point-and-click party. That’s where GNOME comes in. I’ve already been on the record saying that GNOME is one of the slickest and most stable desktops on the market. But PinguyOS doesn’t settle for a vanilla take on GNOME. Instead, PinguyOS adds a few extra options to make migration from other desktops a breeze.
To the standard GNOME desktop, PinguyOS adds a quick launch Docky bar to the bottom of the screen and an autohide Docky Places bar on the left edge of the screen (Figure 1).
Figure 1: The default PinguyOS desktop with the Places Dock in action.
As you can see (on the default desktop), there is one piece that tends to appeal to Linux users. That piece is Conky. I’ve used Conky on a number of desktops, for various purposes. In some instances, it’s actually quite handy. For many a Linux user, it seems a must to have detailed reports on such things as CPU, memory, and network usage; uptime; running processes; and more. Don’t get me wrong, Conky is a great tool. However, for new users, I’d say it’s far less interesting or useful. The thing is, new users won’t even know what that window on the desktop is. Experienced Linux users will see it, think “That’s Conky,” and know how to easily get rid of it (should they not want it on their desktop) or configure it. New users? Not so much.
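For the curious, Conky is driven by a plain-text configuration file. A minimal sketch gives a feel for how those desktop readouts are defined (the exact syntax depends on the Conky version a distribution ships, and the network interface name here is a placeholder):

```
# Minimal Conky sketch (classic pre-1.10 syntax); eth0 is a placeholder
alignment top_right
update_interval 2.0

TEXT
Uptime: $uptime
CPU: ${cpu}%
RAM: $mem / $memmax
Net: ${downspeed eth0} down / ${upspeed eth0} up
```

Everything below the TEXT line is rendered on the desktop, with the variables refreshed at each update interval.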
But that is a rather minor issue for a desktop that has so much to offer. Set aside Conky and you’ll see a Linux distribution that tosses just about everything it can at the desktop, in order to create something very useful. The developers have gone out of their way to add the necessary tools to make GNOME a desktop that welcomes just about every type of user. One way the PinguyOS developers have managed this is via GNOME extensions. Open up the Tweaks tool, click on Extensions, and you’ll see a healthy list of additions to GNOME (Figure 2).
Figure 2: The PinguyOS GNOME extension list.
All told, there are 23 extensions added to GNOME — some of which are enabled by default, some of which are not.
Installed applications
Beyond Conky and GNOME extensions, what else can you expect to find installed, by default, on PinguyOS? Click on the Menu in the top left of the desktop, and you’ll see a fairly complete list of applications, such as:
GNOME Do (do things as quickly as possible)
Shutter (capture and share screenshots)
Play On Linux (install games via Wine)
Steam (manage Steam games)
Pinta (image creation/edit)
Empathy (instant message client)
Firefox (web browser)
Remmina (remote desktop client)
Skype (VOIP client)
TeamViewer 10 (tool for remote support)
Thunderbird (email client)
LibreOffice (full-featured office suite)
wxBanker (finance manager)
Plex Home Theatre/Media Manager
Clementine (audio player)
OpenShot (video editor)
VLC (media player)
That’s a healthy list of tools — one that comes at a slight price. The minimum installation size of PinguyOS is 15.2 GB. That’s nearly four times the size of a minimum Ubuntu installation. However, you do get a considerable amount of software for your trouble — something that will greatly appeal to new users. Instead of having to install a number of software titles after OS installation, you should have nearly everything you need to get your work done, in a very user-friendly environment. And with GNOME in control of the desktop, you can be certain PinguyOS will deliver a very stable and slick experience.
Tiny Faults
If I had to find something wrong with PinguyOS, it would be three particular somethings. The first two, I’ve already mentioned — being based on an outdated version of Ubuntu and the addition of Conky by default. The PinguyOS developers should consider working with Ubuntu 16.04 (also an LTS release). Also, Conky should be an optional addition, one that includes a user-friendly setup wizard upon first boot. The third isn’t quite as minor a nit. Instead of including GNOME Software as the default front end for the package manager, PinguyOS opts to include both Synaptic and the now-defunct Ubuntu Software Center. First off, Ubuntu Software Center shouldn’t be included on any distribution these days. The tool is broken, slow, and buggy. But adding Synaptic as a second option (or, rather, a first option, as it is the tool included in the Dock) is a mistake. This is not to say Synaptic isn’t a solid tool; it is. But considering how much better (and, again, more user-friendly) GNOME Software is, it would only make sense to include it as the default.
As I said, Synaptic is a good tool, just not one I’d recommend to new users. Since PinguyOS’s focus is simplicity, maybe migrating to GNOME Software would be the right move.
Minor nits, major hits
Set aside the minor nits found in PinguyOS and you’ll see there is quite a lot to love about this distribution. It’s well polished, stable, and user-friendly, and it offers all the software you need to get your work done. What more can you ask for from a desktop operating system?
Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.
The Cloud Foundry Container Runtime is the new name for Kubo, which is Kubernetes running on BOSH. In today’s episode of The New Stack Makers, TNS founder Alex Williams caught up with Cloud Foundry CTO Chip Childers to learn more about Cloud Foundry’s plans for this new runtime, with Childers highlighting how BOSH is serving the needs of today’s developers.
Childers went on to note that the Cloud Foundry Container Runtime and application runtime will sit next to one another, allowing for shared identity between Kubernetes and Cloud Foundry application runtimes.
“I think that what’s most important right now is thinking about the developers. What is it that the developer in the enterprise needs? …”
Kubernetes isn’t even easy to pronounce, much less explain. So we recently illuminated how to demystify Kubernetes in plain English, so that a wide audience can understand it. (We also noted that the pronunciation may vary a bit, and that’s OK.)
Of course, helping your organization understand Kubernetes isn’t the same thing as helping everyone understand why Kubernetes – and orchestration tools in general – are necessary in the first place.
If you need to make the case for microservices, for example, you’re pitching an architectural approach to building and operating software, not a particular platform for doing so. With Kubernetes, you need to do both: Pitch orchestration as a means of effectively managing containers (and, increasingly, containerized microservices) and Kubernetes as the right platform for doing so.
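When pitching Kubernetes, a concrete artifact often helps. A minimal Deployment manifest shows what “declaratively managing containers” actually looks like in practice (the application name and image below are placeholders, not from any particular project):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                       # hypothetical app name
spec:
  replicas: 3                     # Kubernetes keeps three copies running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example/web:1.0    # placeholder container image
        ports:
        - containerPort: 8080
```

Applied with `kubectl apply -f`, this tells the cluster to keep three replicas of the container running and to replace any that fail — exactly the orchestration work that would otherwise be done by hand.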
No advance in information technology in the past six decades has offered a greater range of quantifiable benefits than has virtualization. Many IT professionals think of virtualization in terms of virtual machines (VM) and their associated hypervisors and operating-system implementations, but that only skims the surface. An increasingly broad set of virtualization technologies, capabilities, strategies and possibilities are redefining major elements of IT in organizations everywhere.
Virtualization definition
Examining the definition of virtualization in a broader context, we define virtualization as the art and science of making the function of an object or resource, simulated or emulated in software, identical to that of the corresponding physically realized object. In other words, we use an abstraction to make software look and behave like hardware, with corresponding benefits in flexibility, cost, scalability, reliability, and often overall capability and performance, across a broad range of applications. Virtualization, then, makes “real” that which is not, applying the flexibility and convenience of software-based capabilities and services as a transparent substitute for the same realized in hardware.
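That “transparent substitute” idea can be sketched in a few lines of code. This is a toy illustration only — the class names and interface are invented here, not any hypervisor’s actual API — but it shows the core trick: software stands in for hardware as long as it honors the same interface.

```python
# Toy illustration: a virtual device presents the same interface as the
# "physical" one, so callers cannot tell the difference.

class PhysicalDisk:
    """Stands in for a real block device with pre-allocated storage."""
    def __init__(self, size):
        self.blocks = [0] * size

    def read(self, i):
        return self.blocks[i]

    def write(self, i, value):
        self.blocks[i] = value


class VirtualDisk:
    """Emulates the same interface in software, backed by a dict.
    Only blocks actually written consume memory (thin provisioning)."""
    def __init__(self, size):
        self.size = size
        self._blocks = {}

    def read(self, i):
        return self._blocks.get(i, 0)

    def write(self, i, value):
        self._blocks[i] = value


def checksum(disk, size):
    # Works identically on either implementation: the caller is
    # indifferent to whether the disk is "real" or virtual.
    return sum(disk.read(i) for i in range(size))


for disk in (PhysicalDisk(8), VirtualDisk(8)):
    disk.write(3, 42)
    print(checksum(disk, 8))  # 42 for both
```

The virtual implementation even gains a benefit for free — thin provisioning — which mirrors the flexibility and cost advantages the definition above describes.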
Google engineers figured out a way to improve latency within the company’s software-defined networking (SDN) platform — Andromeda. Google released the latest version of the platform, Andromeda 2.1, today and says it reduces network latency between Google Compute Engine virtual machines by 40 percent compared to Andromeda 2.0.
Google fellow Amin Vahdat said most people think about bandwidth when they think about the performance of a network. “Our infrastructure does quite well on measures of bandwidth,” said Vahdat. “But most distributed applications care more about latency than bandwidth. We’re constantly getting new hardware to increase bandwidth. But with latency, it does involve these entrenched software layers. We’ve really focused on the latency of our network.”
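The distinction Vahdat draws is easy to see in numbers: a fatter pipe raises bandwidth, but what a distributed application feels is the distribution of round-trip times, especially the tail. A small sketch (the samples are synthetic, not Google’s measurements) of summarizing latency by percentile:

```python
# Illustrative only: summarize round-trip latencies by percentile.
# Tail latency (p99) often dominates a distributed app's experience,
# even when median latency and bandwidth both look healthy.

def percentile(samples, p):
    """Approximate nearest-rank percentile of a list of samples."""
    ordered = sorted(samples)
    k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[k]

# Synthetic RTTs in microseconds: mostly fast, with one straggler.
rtts = [50, 52, 48, 51, 49, 53, 47, 50, 400, 52]

print("p50:", percentile(rtts, 50), "us")
print("p99:", percentile(rtts, 99), "us")
```

Here the median looks fine while the 99th percentile is eight times worse — the kind of gap that software-layer work like Andromeda’s targets.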
The Ubuntu desktop has evolved a lot over the years. Ubuntu started off with GNOME 2, then moved on to Unity. From there, it came home to its roots with the GNOME 3 desktop. In this article, we’ll look at the Ubuntu desktops and compare them.
Ubuntu official – now with GNOME 3
Ubuntu moving on from Unity was perhaps the best thing for the project. Not only did it free up resources to refocus Ubuntu’s efforts on other elements of the distro, it also brought the distro back to its roots by returning to the GNOME desktop.
One of the first things you’ll notice about the current Ubuntu 17.10 release is that even though the switch was made to GNOME 3, the installation feels about the same. Obviously there are subtle differences, but overall it’s a very close match to the Unity desktop.
In this article series, we have been discussing the Understanding OPNFV book (see links to previous articles below). Here, we will look at OPNFV continuous testing and at writing and onboarding virtual network functions (VNFs).
As we have discussed previously, OPNFV integrates a number of network function virtualization (NFV) related upstream projects and continuously tests specific combinations. The test projects in OPNFV are meant to validate the numerous scenarios using multiple test cases. The following figure shows a high-level view of the OPNFV testing framework.
OPNFV Testing Projects (Danube Release).
OPNFV testing projects may be viewed through four distinct lenses: coverage, scope, tier, and purpose. Coverage determines whether a test covers the entire stack, a specific subsystem (e.g., OpenStack) or just one component (e.g., OVS) of an OPNFV scenario, which is a specific integration of upstream projects — a reference architecture. Scope is classified into functional, performance, stress and compliance testing. A tier, on the other hand, describes the complexity (end-to-end network service vs. smoke) and frequency (daily vs. weekly) of a particular test. Finally, the purpose defines why a test is included (e.g., to gate a commit or simply to be informational).
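Those four lenses compose naturally into a simple classification scheme. As a sketch (the test names and tag values below are illustrative, not OPNFV’s actual metadata), tagging each test case along the four axes makes filtering trivial:

```python
# Illustrative sketch: tag test cases along the four lenses described
# above (coverage, scope, tier, purpose), then filter the catalog.

from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    coverage: str  # "full-stack", "subsystem", or "component"
    scope: str     # "functional", "performance", "stress", "compliance"
    tier: str      # e.g. "smoke" or "end-to-end"
    purpose: str   # "gate" or "informational"

catalog = [
    TestCase("vping", "full-stack", "functional", "smoke", "gate"),
    TestCase("vims-e2e", "full-stack", "functional", "end-to-end",
             "informational"),
    TestCase("ovs-throughput", "component", "performance", "smoke",
             "informational"),
]

# Which tests would gate a commit?
gating = [t.name for t in catalog if t.purpose == "gate"]
print(gating)
```

The same one-line filter answers any question along any lens — all component-level tests, all stress tests, and so on.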
The following three broad OPNFV testing projects and five sub-projects are covered in the book:
Functest: Provides functional testing and validation of various OPNFV scenarios. Functest includes subsystem and end-to-end tests (e.g., Clearwater vIMS).
Yardstick: Executes performance testing on OPNFV scenarios based on ETSI reference test suites. The five sub-projects are as follows:
QTIP: Compute benchmarking (Note: The Euphrates release includes storage benchmarking as well)
Bottlenecks: Stress testing
Dovetail: Includes compliance tests. Dovetail will form the foundation of the future OPNFV Compliance Verification Program (CVP) for NFV infrastructure, VIM, and SDN controller commercial products.
Note that since publication of the book, two more testing-related projects have been included in the Euphrates release: Sample VNF and NFVBench.
The QTIP project is described in the book as follows:
Remember benchmarks such as MIPS or TPC-C, which attempted to provide a measure of infrastructure performance through one single number? QTIP attempts to do the same for NFVI compute performance (storage and networking are part of the roadmap). QTIP is a Yardstick plugin that collects metrics from a number of tests selected from five different categories: integer, floating point, memory, deep packet inspection and cipher speeds. These numbers are crunched to produce a single QTIP benchmark. The baseline is 2,500, and bigger is better! In that sense, one of the goals of QTIP is to make Yardstick results very easy to consume.
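The passage doesn’t spell out QTIP’s exact math, but a single-number benchmark of this kind is typically built by normalizing each category against a baseline machine and aggregating, for instance with a geometric mean. The sketch below is a hedged illustration of that general pattern — the category values and the aggregation formula are invented here, not QTIP’s actual implementation:

```python
import math

# Illustrative aggregation (NOT QTIP's real formula): normalize each
# category against a baseline machine, take the geometric mean of the
# ratios, and scale so the baseline machine scores exactly 2,500.

BASELINE_SCORE = 2500

def composite(results, baseline):
    """results/baseline map category -> raw measurement (higher is better)."""
    ratios = [results[c] / baseline[c] for c in baseline]
    geo_mean = math.prod(ratios) ** (1 / len(ratios))
    return round(BASELINE_SCORE * geo_mean)

# Hypothetical raw scores for the five categories the book lists.
baseline = {"integer": 100, "float": 100, "memory": 100,
            "dpi": 100, "cipher": 100}
machine  = {"integer": 120, "float": 110, "memory": 100,
            "dpi": 130, "cipher": 105}

print(composite(baseline, baseline))  # baseline machine scores 2500
print(composite(machine, baseline))   # a faster machine scores above 2500
```

A geometric mean is the usual choice for composites like this because it prevents one outlier category from dominating the overall score.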
Another attractive feature of OPNFV is being able to view test results in real time through a unified dashboard. Behind the scenes, the OPNFV community has made a significant investment in test APIs, a test case database, a results database, and results visualization efforts. Scenario reporting results are available for Functest, Yardstick, Storperf, and VSPERF.
Results of Functest.
The entire OPNFV stack, ultimately, serves one purpose: to run virtual network functions that in turn constitute network services. Chapter 9, which is meant for VNF vendors, looks at two major considerations: how to write VNFs and how to onboard them. After a brief discussion of modeling languages and general principles around VNF packaging, the book gives a concrete example where the Clearwater virtual IP multimedia system (vIMS) VNF is onboarded and tested on OPNFV along with the Cloudify management and orchestration software.
The following figure shows how Clearwater vIMS — a complex cloud native application with a number of interconnected virtual instances — is initially deployed on OPNFV. Once onboarded, end-to-end tests in Functest fully validate the functionality of the VNF along with the underlying VIM, NFVI, and SDN controller functionality.
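To make “onboarding” more concrete: VNF packages are commonly described in a modeling language such as TOSCA, which orchestrators like Cloudify consume. The fragment below is a heavily simplified, hypothetical descriptor — the node name, type, and resource values are invented for illustration and are not Clearwater’s actual blueprint:

```yaml
tosca_definitions_version: tosca_simple_yaml_1_0

topology_template:
  node_templates:
    vims_node:                    # hypothetical VNF component
      type: tosca.nodes.Compute
      capabilities:
        host:
          properties:
            num_cpus: 2
            mem_size: 4 GB
```

A real blueprint would add networks, relationships between components, and lifecycle scripts, but the shape is the same: a declarative description the orchestrator turns into running virtual instances.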
Also, check out the next OPNFV Plugfest, where members and non-members come together to test OPNFV along with other open source and commercial products.