
Ubuntu Desktops Compared

The Ubuntu desktop has evolved a lot over the years. Ubuntu started off with GNOME 2, then moved on to Unity. From there, it came home to its roots with the GNOME 3 desktop. In this article, we’ll look at the Ubuntu desktops and compare them.

Ubuntu official – now with GNOME 3

Ubuntu moving on from Unity was perhaps the best thing for the project. Not only did it free up resources to refocus Ubuntu’s efforts on other elements of the distro, it also brought the distro back to its roots by returning to the GNOME desktop.

One of the first things you’ll notice about the current Ubuntu 17.10 release is that even though the switch was made to GNOME 3, the installation feels about the same. Obviously there are subtle differences, but overall it’s a very close match to the Unity desktop.

Read more at Datamation

Using OPNFV to Comprehensively Test Your NFV Cloud

In this article series, we have been discussing the Understanding OPNFV book (see links to previous articles below). Here, we will look at OPNFV continuous testing as well as writing and onboarding virtual network functions (VNFs).

As we have discussed previously, OPNFV integrates a number of network function virtualization (NFV) related upstream projects and continuously tests specific combinations. The test projects in OPNFV are meant to validate the numerous scenarios using multiple test cases. The following figure shows a high-level view of the OPNFV testing framework.             

OPNFV Testing Projects (Danube Release).

OPNFV testing projects may be viewed through four distinct lenses: coverage, scope, tier, and purpose. Coverage determines whether a test covers the entire stack, a specific subsystem (e.g., OpenStack), or just one component (e.g., OVS) of an OPNFV scenario, which is a specific integration of upstream projects into a reference architecture. Scope is classified into functional, performance, stress, and compliance testing. A tier, on the other hand, describes the complexity (end-to-end network service vs. smoke) and frequency (daily vs. weekly) of a particular test. Finally, the purpose defines why a test is included (e.g., to gate a commit or simply to be informational).

The following three broad OPNFV testing projects and five sub-projects are covered in the book:

  • Functest: Provides functional testing and validation of various OPNFV scenarios. Functest includes subsystem and end-to-end tests (e.g., Clearwater vIMS).

  • Yardstick: Executes performance testing on OPNFV scenarios based on ETSI reference test suites. The five sub-projects are as follows:

    • VSPERF: Virtual switch (OVS, FD.io) performance testing

    • Cperf: SDN controller control-plane and dataplane performance testing

    • Storperf: External block storage performance testing

    • QTIP: Compute benchmarking (Note: The Euphrates release includes storage benchmarking as well)

    • Bottlenecks: Stress testing

  • Dovetail: Includes compliance tests. Dovetail will form the foundation of the future OPNFV Compliance Verification Program (CVP) for NFV infrastructure, VIM, and SDN controller commercial products.

Note that since publication of the book, two more testing-related projects have been included in the Euphrates release: Sample VNF and NFVBench.

The QTIP project is described in the book as follows:

Remember benchmarks such as MIPS or TPC-C, which attempted to provide a measure of infrastructure performance through one single number? QTIP attempts to do the same for NFVI compute performance (storage and networking are part of the roadmap). QTIP is a Yardstick plugin that collects metrics from a number of tests selected from five different categories: integer, floating point, memory, deep packet inspection, and cipher speeds. These numbers are crunched to produce a single QTIP benchmark. The baseline is 2,500, and bigger is better! In that sense, one of the goals of QTIP is to make Yardstick results very easy to consume.
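
The book does not spell out the exact aggregation formula, but the idea is easy to picture. The following is a toy illustration only, with made-up category scores, of how per-category results could be normalized and averaged into one composite number around the 2,500 baseline; it is not QTIP’s actual calculation:

$ awk 'BEGIN {
    # hypothetical category scores, each already normalized to the 2,500 baseline
    s["integer"]=2600; s["floating point"]=2450; s["memory"]=2550;
    s["deep packet inspection"]=2500; s["cipher"]=2400;
    for (c in s) { sum += s[c]; n++ }
    printf "composite QTIP-style score: %.0f\n", sum/n
}'
composite QTIP-style score: 2500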

Another attractive feature of OPNFV is being able to view test results in real time through a unified dashboard. Behind the scenes, the OPNFV community has made a significant investment in test APIs, a test case database, a results database, and results visualization efforts. Scenario reporting results are available for Functest, Yardstick, Storperf, and VSPERF.

Results of Functest.

The entire OPNFV stack, ultimately, serves one purpose: to run virtual network functions that in turn constitute network services. Chapter 9, which is meant for VNF vendors, looks at two major considerations: how to write VNFs and how to onboard them. After a brief discussion of modeling languages and general principles around VNF packaging, the book gives a concrete example where the Clearwater virtual IP multimedia system (vIMS) VNF is onboarded and tested on OPNFV along with the Cloudify management and orchestration software.

The following figure shows how Clearwater vIMS, a complex cloud-native application with a number of interconnected virtual instances, is initially deployed on OPNFV. Once onboarded, end-to-end tests in Functest fully validate the functionality of the VNF along with the underlying VIM, NFVI, and SDN controller functionality.

Initial Deployment of Clearwater vIMS.

Want to learn more? You can download the Understanding OPNFV ebook in PDF (in English or Chinese), or order a printed version on Amazon. Or you can check out the previous blogs in this series.

Also, check out the next OPNFV Plugfest, where members and non-members come together to test OPNFV along with other open source and commercial products.

Testing IPv6 Networking in KVM: Part 1

Nothing beats hands-on playing with IPv6 addresses to get the hang of how they work, and setting up a little test lab in KVM is as easy as falling over, and more fun. In this two-part series, we will learn about IPv6 private addressing and configuring test networks in KVM.

QEMU/KVM/Virtual Machine Manager

Let’s start with understanding what KVM is. Here I use KVM as a convenient shorthand for the combination of QEMU, KVM, and the Virtual Machine Manager that is typically bundled together in Linux distributions. The simplified explanation is that QEMU emulates hardware, and KVM is a kernel module that creates the guest state on your CPU and manages access to memory and the CPU. Virtual Machine Manager is a lovely graphical overlay to all of this virtualization and hypervisor goodness.
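
If you are not sure your machine is ready for KVM, a quick sanity check looks something like this (the counts and module names in the output will vary; on AMD hardware you will see kvm_amd instead of kvm_intel):

$ egrep -c '(vmx|svm)' /proc/cpuinfo
4
$ lsmod | grep kvm
kvm_intel             204800  0
kvm                   593920  1 kvm_intel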

But you’re not stuck with pointy-clicky, no, for there are also fab command-line tools to use, such as virsh and virt-install.
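
For example, a rough sketch of spinning up a test guest from the command line might look like the following; the guest name, ISO path, sizes, and OS variant are placeholders to adapt to your own system:

$ virsh list --all
$ sudo virt-install \
      --name ipv6-lab-1 \
      --memory 1024 \
      --vcpus 1 \
      --disk size=8 \
      --cdrom /path/to/install.iso \
      --os-variant generic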

If you’re not experienced with KVM, you might want to start with Creating Virtual Machines in KVM: Part 1 and Creating Virtual Machines in KVM: Part 2 – Networking.

IPv6 Unique Local Addresses

Configuring IPv6 networking in KVM is just like configuring IPv4 networks. The main difference is those weird long addresses. Last time, we talked about the different types of IPv6 addresses. There is one more IPv6 unicast address class, and that is unique local addresses, fc00::/7 (see RFC 4193). This is analogous to the private address classes in IPv4, 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16.

This diagram illustrates the structure of the unique local address space. 48 bits define the prefix and global ID, 16 bits are for subnets, and the remaining 64 bits are the interface ID:

| 7 bits |1|  40 bits   |  16 bits  |          64 bits           |
+--------+-+------------+-----------+----------------------------+
| Prefix |L| Global ID  | Subnet ID |        Interface ID        |
+--------+-+------------+-----------+----------------------------+

Here is another way to look at it, which might be more helpful for understanding how to manipulate these addresses:

| Prefix |  Global ID   |  Subnet ID  |   Interface ID       |
+--------+--------------+-------------+----------------------+
|   fd   | 00:0000:0000 |    0000     | 0000:0000:0000:0000  |
+--------+--------------+-------------+----------------------+

fc00::/7 is divided into two /8 blocks, fc00::/8 and fd00::/8. fc00::/8 is reserved for future use. So, unique local addresses always start with fd, and the rest is up to you. The L bit, which is the eighth bit, is always set to 1, which makes fd00::/8. Setting it to zero makes fc00::/8. You can see this with subnetcalc:

$ subnetcalc fd00::/8 -n
Address  = fd00::
            fd00 = 11111101 00000000

$ subnetcalc fc00::/8 -n
Address  = fc00::
            fc00 = 11111100 00000000

RFC 4193 requires that addresses be randomly generated. You can invent addresses any way you choose, as long as they start with fd, because the IPv6 cops aren’t going to invade your home and give you a hard time. Still, it is a best practice to follow what RFCs say. The addresses must not be assigned sequentially or with well-known numbers. RFC 4193 includes an algorithm for building a pseudo-random address generator, or you can find any number of generators online.
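
If you just want something quick and dirty on the command line, a sketch like this grabs 40 random bits from openssl and formats them as a /48 prefix (this is not the RFC 4193 algorithm itself, which hashes a timestamp together with an EUI-64 identifier, and of course the output differs on every run):

$ openssl rand -hex 5 | sed -E 's|^(..)(....)(....)$|fd\1:\2:\3::/48|'
fd8a:3b1c:9f02::/48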

Unique local addresses are not centrally managed like global unicast addresses (assigned to you by your Internet service provider), but even so the probability of address collisions is very low. This is a nice benefit when you need to merge some local networks or want to route between discrete private networks.

You can mix unique local addresses and global unicast addresses on the same subnets. Unique local addresses are routable and require no extra router tweaks. However, you should configure your border routers and firewalls to not allow them to leave your network except between private networks at different locations.
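
On a Linux host and border router, that boils down to something like the following sketch; the prefix, host address, and interface names (eth0, wan0) are made up, so substitute your own, and add exceptions for any site-to-site links you trust:

# give a lab host an address from your ULA prefix
$ sudo ip -6 addr add fd8a:3b1c:9f02:1::10/64 dev eth0
# on the border router, keep ULA traffic from leaving via the upstream interface
$ sudo ip6tables -A FORWARD -o wan0 -s fd00::/8 -j DROP
$ sudo ip6tables -A FORWARD -o wan0 -d fd00::/8 -j DROP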

RFC 4193 advises against mingling the AAAA and PTR records for your unique local addresses with your global unicast address records, because there is no guarantee that they will be unique, even though the odds of duplicates are low. Just like we do with IPv4 addresses, keep your private local name services and public name services separate. The tried-and-true combination of Dnsmasq for local name services and BIND for public name services works just as well for IPv6 as it does for IPv4.
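
For example, Dnsmasq happily serves AAAA records for IPv6 entries it finds in /etc/hosts, so publishing a unique local address on your LAN only can be as simple as this sketch (the hostname and address are hypothetical):

$ echo 'fd8a:3b1c:9f02::10  server1.lan' | sudo tee -a /etc/hosts
$ sudo systemctl restart dnsmasq
$ dig AAAA server1.lan @localhost +short
fd8a:3b1c:9f02::10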

Pseudo-Random Address Generator

One example of an online address generator is Local IPv6 Address Generator; you can find many cool online tools like it. It can create a new address for you, or you can plug in your existing global ID and play with creating subnets.

Come back next week to learn how to plug all of this IPv6 goodness into KVM and do live testing.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

What Are the Most Disliked Programming Languages?

On Stack Overflow Jobs, you can create your own Developer Story to showcase your achievements and advance your career. One option you have when creating a Developer Story is to add tags you would like to work with or would not like to work with.

This offers us an opportunity to examine the opinions of hundreds of thousands of developers. There are many ways to measure the popularity of a language; for example, we’ve often used Stack Overflow visits or question views to measure such trends. But this dataset is a rare way to find out what technologies people tend to dislike, when given the opportunity to say so on their CV.

Read more at StackOverflow

GPU Computing: The Key to Unleashing the Mysteries of All That Data

To effectively handle all the data being created in the world today, we need GPU computing.

To harness this goldmine of data, artificial intelligence and machine learning have emerged, using algorithms trained on data to find patterns. The problem is that traditional CPUs (central processing units) just can’t adequately handle the bulk processing required for this complex boatload of data, and this is where GPU (graphics processing unit) computing comes in.

GPU-based servers are fast becoming as necessary to effectively process data for artificial intelligence (especially for deep learning algorithms) as CPUs have been for virtually everything else. While CPUs have the capability to process tons of unstructured data—eventually—in a matter of days or hours, GPUs can do so in a matter of minutes.

Read more at InfoWorld

The Origin Story of ROS, the Linux of Robotics

Ten years ago, while struggling to bring the vision of the “Linux of Robotics” to reality, I was inspired by the origin stories of other transformative endeavors. In this post I want to share some untold parts of the early story of the Robot Operating System, or ROS, to hopefully inspire those of you currently pursuing your “crazy” ideas.

A glaring need

This origin story starts when Eric Berger, my partner of seven years on this project, and I were starting our PhDs at Stanford.

The impetus for ROS came accidentally when we were searching around for a compelling robotics project to tackle for our PhDs. We talked to countless folks and found the same pattern repeated over and over by those setting out to innovate on robotics software: they spent 90 percent of their time rewriting code others had written before and building a prototype test bed. At most, the last 10 percent of their effort went into innovation.

Read more at IEEE Spectrum

Beyond Bitcoin: Oracle, IBM Prepare Blockchains for Industrial Use

There’s been a lot of talk recently about blockchain beyond its original use supporting Bitcoin. Earlier this year, we covered a session in London where the takeaway from the panel was that there were still too many problems to be solved. But that was in February, and a lot has changed since then.

Judging from some of the blockchain sessions at the recent Oracle OpenWorld conference, the emerging potential uses for blockchain are kind of staggering.

Blockchain uses a technology also described as a “distributed ledger.” That’s an obvious fit for finance, which is all about ledgers, but it turns out the distributed ledger concept can be applied to — well, to almost everything.

According to BlackBook Research, 70 percent of health insurance payers are planning to have blockchain integrated into their systems by the beginning of 2019. That’s 15 months from now. Among the payers with over 500,000 members, that number climbs to 98 percent, with 14 percent currently testing blockchain systems.

Read more at The New Stack

Zorin OS 12 Passes One Million Downloads Mark, 60% Are Windows and Mac Users

Zorin OS is an Ubuntu-based distribution targeted at those who want to migrate from Microsoft’s Windows and Apple’s Mac OS to an open source alternative that offers a more secure, stable, and reliable computing environment. Zorin OS 12 is the latest stable version of the Linux OS, and it got its second point release in September 2017.

Both the Zorin OS 12.1 and 12.2 maintenance updates helped the Zorin OS 12 series pass the one million downloads mark since the distro’s initial release on November 18, 2016, and the best part is that over 60 percent of these downloads came from users running either Windows or Mac OS, which means that Zorin OS’s mission has been successfully achieved.

Read more at Softpedia.

Learn more about Zorin OS in this review from Jack Wallen.

Demand for Certified SysAdmins and Developers Is On the Rise

Even with a shortage of IT workers, some employers are still discerning in their hiring requirements and are either seeking certified candidates or offering to pay for their employees to become certified.

The Linux Foundation’s 2017 Open Source Jobs Report finds that half of hiring managers are more likely to hire a certified professional, while 47 percent of companies are willing to help pay for employees’ certifications. Meanwhile, 89 percent of hiring managers find it difficult to find open source talent.

The demand for skills relating to cloud administration, DevOps, and continuous integration/continuous delivery is fueling interest in training and certifications related to open source projects and tools that power the cloud, according to the report. Workers find certification important, too. In fact, 76 percent of open source pros say certifications are useful to their careers.

Existing cloud certifications, such as the Certified Kubernetes Administrator exam, are expected to help address the growing demand for these skills, the report states.

Why certifications are important

Demand for certifications is on the rise. Half of hiring managers prioritize finding certified pros; 50 percent also say they are more likely to hire a certified candidate than one without a certification, up from 44 percent in 2016, according to the Open Source Jobs report. And there’s been a big jump in companies willing to pay for employees to become certified. Nearly half say they’re willing to pay, up from one-third a year ago.

Here’s why. Some employers believe training alone is not enough and perceive that overall, certified IT pros make great employees, according to a CompTIA report. In fact, the study finds 91 percent of employers view IT certifications as a differentiator and say they play a key role in the hiring process.

Then there is perception. “Certifications make a good first impression,” the CompTIA report observes, and there is the belief that certified IT employees are more confident, knowledgeable and reliable, and perform at a higher level. While not specific to a particular technology, a whopping 95 percent of employers in the CompTIA study agree IT certifications provide a baseline set of knowledge for certain IT positions.

Only 21 percent say they definitely would not pay for certifications, down from 30 percent last year, the Open Source Jobs report finds.

The good news for IT professionals is that, with certification, pay premiums have consistently grown over the past year. Areas in which IT pros receive higher pay include information security, application development/programming languages, databases, networking and communications, and systems administration/engineering – skill sets that are among the hardest to fill.

Dice’s annual salary survey finds salaries for Linux professionals are in line with last year, at over $100,000 annually – higher than the average $92,000 for tech pros nationally.

Formal training and/or certifications are a priority for hiring managers looking for developers (55 percent, compared to 47 percent who said so in 2016) and for Systems Administrators (53 percent vs. 47 percent last year).

Trending skills and certifications in high demand

In its 2018 Salary Guide, Robert Half Technology lists the most highly sought tech skills and certifications in North America. Among them are .NET, Agile, and Scrum certifications. For application development work, businesses are seeking certifications and skills in areas including PHP and LAMP (Linux, Apache, MySQL, and Perl/Python). In networking and telecommunications, Linux/Unix administration is in high demand, as it is in technical services, help desk, and technical support.

Meanwhile, 76 percent of professionals find certifications are useful to their careers, mainly to demonstrate technical knowledge to potential employers (reported by 47 percent of respondents), and 31 percent say that certifications generally make them more employable.

Although salary, not surprisingly, is the biggest incentive for switching jobs (82 percent), certification opportunities are an incentive for 65 percent of respondents in the Open Source Jobs report.

Download the full 2017 Open Source Jobs Report now.

Watch Keynote Videos from OS Summit and ELC Europe 2017 Including a Conversation with Linus Torvalds

If you weren’t able to attend Open Source Summit and Embedded Linux Conference (ELC) Europe last week, don’t worry! We’ve recorded keynote presentations from both events and all the technical sessions from ELC Europe to share with you here.

Check out the on-stage conversation with Linus Torvalds and VMware’s Dirk Hohndel, opening remarks from The Linux Foundation’s Executive Director Jim Zemlin, a special presentation from 11-year-old CyberShaolin founder Reuben Paul, and more.

Read more at The Linux Foundation