When you start comparing computers, you probably pit Windows against macOS—but Linux rarely gets a mention. Still, this lesser-known operating system has a strong and loyal following. That’s because it offers a number of advantages over its competitors.
Whether you’re completely new to Linux or have dabbled with it once or twice already, we want you to consider running it on your next laptop or desktop—or alongside your existing operating system. Read on to decide if it’s time to make the switch.
What is Linux?
If you’re already familiar with Linux, you can skip this section. For everyone else, Linux is a free, open source operating system, which means its code is available for anyone to explore. Technically speaking, the term “Linux” refers to just the kernel, or the core, of the code. However, people often use the name to talk about the whole operating system, including the interface and bundled apps.
Microservices have been a focus across the open source world for several years now. Although open source technologies such as Docker, Kubernetes, Prometheus, and Swarm make it easier than ever for organizations to adopt microservice architectures, getting your team on the same page about microservices remains a difficult challenge.
For a profession that stresses the importance of naming things well, we’ve done ourselves a disservice with microservices. The problem is that there is nothing inherently “micro” about microservices. Some can be small, but size is relative and there’s no standard unit of measurement across organizations. A “small” service at one company might be 1 million lines of code, but far fewer at another.
Some argue that microservices aren’t a new thing at all, …
As mentioned previously, OPNFV integrates a number of upstream projects along with code contributions from the OPNFV community. To integrate and test these projects and contributions in an automated manner, the OPNFV project uses a variety of DevOps tools, hardware labs and a sophisticated CI pipeline. In fact, there is no better way for a telecom operator to absorb the principles of DevOps than by joining OPNFV.
Chapter 6 starts by discussing each of the various software and cloud-based tools used by OPNFV for DevOps:
Collaboration: JIRA/Confluence
Source code management and code review: Git, Gerrit, and GitHub
CI/software automation: Jenkins
Artifact repository: Google Cloud Storage and Docker Hub
Let’s take a closer look at Gerrit, for example.
Code Reviews – Gerrit
Committing to master requires an approval process, and this process is managed through a tool called Gerrit. Gerrit is an open source, web-based code review tool developed by Google. All changes pushed by contributors using a git push or git review command are reviewed in Gerrit by a set of reviewers, who inspect the patch. Reviewers also see the results of a continuous integration (CI) build and an automated verify test run. Reviewers provide scores of +2, +1, -1 or -2. A +2 is a definite accept, while a -2 is a definite reject. A +1 or -1 may result in the change being accepted, rejected or sent back for changes.
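As a sketch of the contributor side of this workflow (the branch and remote names are illustrative), pushing a patch to Gerrit for review typically looks like this:

$ git commit -m "Describe the change for reviewers"
$ git push origin HEAD:refs/for/master

Or, if the git-review helper is installed:

$ git review

The refs/for/master convention tells Gerrit to open a review targeting the master branch rather than committing to it directly.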
OPNFV Gerrit
The chapter then describes the hardware labs used for automated integration and testing jobs. OPNFV has defined a standardized set of hardware, called a Pharos lab, consisting of six nodes and associated switches, to automatically deploy OPNFV software using the CI pipeline. The Pharos lab concept has been very successful, with 16 labs distributed around the world working seamlessly.
Centre of Excellence in Next Generation Networks Pharos Lab
Chapter 6 continues by describing the CI pipeline in detail: changes in upstream projects or community code contributions trigger integration jobs, while scheduled intervals (such as daily or weekly) trigger testing jobs. The CI pipeline diagram from the book is shown below:
OPNFV CI Pipeline
Chapter 7 starts by exploring the concept of OPNFV scenarios. Since OPNFV allows multiple choices for different software layers, numerous permutations are possible. In addition to the different upstream projects described in the previous blog post, OPNFV also allows for diversity in installers. The list of scenarios represents a subset of all possible permutations; effectively, each scenario is a tested reference architecture. Examples of scenarios are:
OpenStack + ODL + L3 FD.io + High Availability (HA) using the Apex installer, or
OpenStack + OpenContrail + HA using the JOID installer
The OPNFV Danube release had 55 scenarios. However, if we ignore non-HA scenarios and the specific installer used, we are down to 21 distinct usable scenarios.
The chapter continues by providing an overview of the 4 major installers used in the Danube release: Apex, Compass, Fuel and JOID, and ends with a discussion of additional deployment-related projects such as Daisy (a new installer), IPv6, Parser, ARMBand (to run OPNFV on ARM), and FastDataStacks (FD.io with OPNFV).
Want to learn more? You can check out the previous blog post that discussed the broader NFV transformation complexities and how OPNFV solves an important piece of the puzzle, download the Understanding OPNFV ebook in PDF (in English or Chinese), or order a printed version on Amazon.
Samsung’s DeX is a clever way to turn your phone into a desktop computer. However, there’s one overriding problem: you probably don’t have a good reason to use it instead of a PC. And Samsung is trying to fix that. It’s unveiling Linux on Galaxy, an app-based offering that (surprise) lets you run Linux distributions on your phone. Ostensibly, it’s aimed at developers who want to bring their work environment with them wherever they go. You could dock at a remote office knowing that your setup will be the same as usual.
You can learn an amazing amount about your network connections with these three glorious Linux networking commands. iftop displays bandwidth usage per connection on an interface, Nethogs quickly reveals which processes are hogging your bandwidth, and vnstat runs as a nice lightweight daemon to record your usage over time.
iftop
The excellent iftop listens to the network interface that you specify, and displays connections in a top-style interface.
This is a great little tool for quickly identifying hogs, measuring speed, and keeping a running total of your network traffic. It is rather surprising to see how much bandwidth we use, especially for us old people who remember the days of telephone land lines, modems, screaming kilobits of speed, and real live bauds. We abandoned bauds long ago in favor of bit rates; baud measures signal changes, which were sometimes the same as bit rates, but mostly not.
If you have just one network interface, run iftop with no options. iftop requires root permissions:
$ sudo iftop
When you have more than one, specify the interface you want to monitor:
$ sudo iftop -i wlan0
Just as in top, you can change the display options while it is running:
h toggles the help screen.
n toggles name resolution.
s toggles source host display, and d toggles the destination hosts.
S toggles port numbers.
N toggles port resolution; to see all port numbers toggle resolution off.
t toggles the text interface. The default display requires ncurses. I think the text display is more readable and better-organized (Figure 1).
P pauses the display.
q quits the program.
Figure 1: The text display is readable and organized.
When you toggle the display options, iftop continues to measure all traffic. You can also select a single host to monitor. You need the host’s IP address and netmask. I was curious how much of a load Pandora put on my sad little meager bandwidth cap, so first I used dig to find their IP address:
$ dig A pandora.com
[...]
;; ANSWER SECTION:
pandora.com. 267 IN A 208.85.40.20
pandora.com. 267 IN A 208.85.40.50
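With the address in hand, you can point iftop at just that network using its -F option (a sketch; the /32 mask narrows the filter to a single host, and wlan0 is the interface from the earlier examples):

$ sudo iftop -F 208.85.40.20/32 -i wlan0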
Is that not seriously groovy? I was surprised to learn that Pandora is easy on my precious bits, using around 500Kb per hour. And, like most streaming services, Pandora’s traffic comes in spurts and relies on caching to smooth out the lumps and bumps.
You can do the same with IPv6 addresses, using the -G option. Consult the fine man page to learn the rest of iftop’s features, including customizing your default options with a personal configuration file, and applying custom filters (see PCAP-FILTER for a filter reference).
Nethogs
When you want to quickly learn who is sucking up your bandwidth, Nethogs is fast and easy. Run it as root and specify the interface to listen on. It displays the hoggy application and the process number, so that you may kill it if you so desire:
$ sudo nethogs wlan0
NetHogs version 0.8.1
PID USER PROGRAM DEV SENT RECEIVED
7690 carla /usr/lib/firefox wlan0 12.494 556.580 KB/sec
5648 carla .../chromium-browser wlan0 0.052 0.038 KB/sec
TOTAL 12.546 556.618 KB/sec
Nethogs has few options: cycling between kb/s, kb, b, and mb, sorting by received or sent packets, and adjusting the delay between refreshes. See man nethogs, or run nethogs -h.
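For example, to listen on wlan0 and slow the refresh to every 5 seconds (the interval here is just an illustration):

$ sudo nethogs -d 5 wlan0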
vnstat
vnstat is the easiest network data collector to use. It is lightweight and does not need root permissions. It runs as a daemon and records your network statistics over time. The vnstat command displays the accumulated data:
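For example, running it with no arguments prints the accumulated totals (output omitted here; yours will reflect your own interfaces and usage):

$ vnstat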
By default it displays all network interfaces. Use the -i option to select a single interface. Merge the data of multiple interfaces this way:
$ vnstat -i wlan0+eth0+eth1
You can filter the display in several ways:
-h displays statistics by hours.
-d displays statistics by days.
-w and -m display statistics by weeks and months.
Watch live updates with the -l option.
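For example, to see daily statistics or live updates for a single interface:

$ vnstat -d -i wlan0
$ vnstat -l -i wlan0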
This command deletes the database for wlan1 and stops watching it:
$ vnstat -i wlan1 --delete
This command creates an alias for a network interface. This example uses one of the weird interface names from Ubuntu 16.04:
$ vnstat -u -i enp0s25 --nick eth0
By default vnstat monitors eth0. You can change this in /etc/vnstat.conf, or create your own personal configuration file in your home directory. See man vnstat for a complete reference.
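For example, the relevant line in /etc/vnstat.conf looks something like this (exact syntax may vary between versions, and the interface name is an illustration):

# default interface for vnstat to report on
Interface "wlan0"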
You can also install vnstati to create simple, colored graphs (Figure 2):
$ vnstati -s -i wlx7cdd90a0a1c2 -o vnstat.png
Figure 2: You can create simple colored graphs with vnstati.
See man vnstati for complete options.
Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.
In August, four US Senators introduced a bill designed to improve Internet of Things (IoT) security. The IoT Cybersecurity Improvement Act of 2017 is a modest piece of legislation. It doesn’t regulate the IoT market. It doesn’t single out any industries for particular attention, or force any companies to do anything. It doesn’t even modify the liability laws for embedded software. Companies can continue to sell IoT devices with whatever lousy security they want.
What the bill does do is leverage the government’s buying power to nudge the market: any IoT product that the government buys must meet minimum security standards. It requires vendors to ensure that devices can not only be patched, but are patched in an authenticated and timely manner; don’t have unchangeable default passwords; and are free from known vulnerabilities. It’s about as low a security bar as you can set,…
We’re excited that support for getting and managing TLS certificates via the ACME protocol is coming to the Apache HTTP Server Project (httpd). ACME is the protocol used by Let’s Encrypt, and hopefully other Certificate Authorities in the future. We anticipate this feature will significantly aid the adoption of HTTPS for new and existing websites.
We created Let’s Encrypt in order to make getting and managing TLS certificates as simple as possible. For Let’s Encrypt subscribers, this usually means obtaining an ACME client and executing some simple commands. Ultimately though, we’d like for most Let’s Encrypt subscribers to have ACME clients built in to their server software so that obtaining an additional piece of software is not necessary. The less work people have to do to deploy HTTPS the better!
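For example, today those “simple commands” with one popular ACME client, Certbot, look something like this (the domain is a placeholder):

$ sudo certbot --apache -d example.com

With ACME support built into httpd itself, even obtaining a separate client like this would no longer be necessary.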
Cloud-native applications and infrastructure require a radically different approach to security. Keep these best practices in mind.
Today organizations large and small are exploring the adoption of cloud-native software technologies. “Cloud-native” refers to an approach that packages software within standardized units called containers, arranges those units into microservices that interface with each other to form applications, and ensures that running applications are fully automated for greater speed, agility, and scalability.
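To make those terms concrete, the container half of that workflow in its simplest shell form looks like this (the image name, service, and port are placeholders):

$ docker build -t myorg/example-service .
$ docker run -d -p 8080:8080 myorg/example-service

Each such container becomes one microservice instance that an orchestrator can then schedule, scale, and restart automatically.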
Because this approach fundamentally changes how software is built, deployed, and run, it also fundamentally changes how software needs to be protected. Cloud-native applications and infrastructure create several new challenges for security professionals, who will need to establish new security programs that support their organization’s use of cloud-native technologies.
There have been several blog posts going around about why one would use Docker with R. In this post I’ll try to add a DevOps point of view and explain how containerizing R is used in the context of the OpenCPU system for building and deploying R servers.
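As a minimal sketch of that idea (the image name and port follow the OpenCPU documentation, but treat them as assumptions for your own setup):

$ docker run -d -p 8004:8004 opencpu/base
$ curl http://localhost:8004/ocpu/library/stats/R/rnorm/json -d "n=5"

The first command starts an OpenCPU server in a container; the second calls an R function over HTTP, which is the deployment model the post describes.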
Has anyone in the #rstats world written really well about the *why* of their use of Docker, as opposed to the *how*?
Last week at our annual user conference, Node.js Interactive, we announced several new members to the Node.js Foundation. One of the members that joined is Chef. Chef works with more than a thousand companies around the world to deliver their vision of digital transformation.
We sat down with the team at Chef to talk about how Node.js fits within the DevOps movement, why they joined the Node.js Foundation, and also about a new offering from the group called Habitat Builder.