
Linux Kernel Developer: Thomas Gleixner

Since the beginning of the Git era (that is, the 2.6.11 release in 2005), a total of 15,637 developers have contributed to the Linux kernel, according to the recent Linux Kernel Development Report, written by Jonathan Corbet and Greg Kroah-Hartman.

One of the top 30 developers is Thomas Gleixner, CTO at Linutronix GmbH, who serves in various kernel maintainer roles. In this article, Gleixner answers a few questions about his contributions to the Linux kernel.

Read more at The Linux Foundation

This Week in Open Source News: Bell Launches Open Source ONAP, Bug Bounty via Euro Commission & More

This week in open source news, Bell becomes the first operator to launch ONAP in production, the European Commission announces its first bug bounty program, and more.

1) Telecom company Bell announced it has become the first to launch an open source version of ONAP.

Bell Becomes First Operator to Launch ONAP in Production – RCRWireless News

2) “The European Commission has announced its first-ever bug bounty program.”

European Commission Kicks Off Open-Source Bug Bounty – Info Security Magazine

3) “The vision for Canonical is to provide the platform that you see everywhere other than the personal domain,” says founder Mark Shuttleworth.

Spaceman Shuttleworth Finds Earthly Riches With Ubuntu Software – Bloomberg

4) Belarus-born Maps.Me helps bridge a major database hole in territories where there’s no 3G for Palestinian providers.

When Waze Won’t Help, Palestinians Make Their Own Maps – WIRED

5) The mysterious Ataribox will run Linux and “provide a ‘full PC experience for the TV,’ complete with AMD graphics hardware.”

You Can Pre-Order Ataribox Very Soon, But The Thing Is Still Sort Of A Mystery – Forbes

Stop Calling Everything “Open Source”: What “Open Source” Really Means

What does open source mean? That’s an increasingly tough question to answer because the term is now being applied everywhere and to everything — which is not good. To understand why open source is losing its meaning, you have to start by tracing the origins of the phrase.

Open source was a term originally used in the intelligence community. It had nothing to do with software.

Then, in 1998, a group of people who advocated the free sharing of software source code coined the term open source software. They did so primarily because they sought an alternative to free software, the term that was initially used to describe software whose source code was freely available.

For political reasons not worth discussing here, some people today continue to prefer the term free software. By and large, however, open source software has become the de facto way to describe software with freely redistributable source code.

Read more at Channel Futures (previously The VAR Guy)

Kubernetes 1.9 Release Brings Greater Stability and Storage Features

The Kubernetes developer community is capping off a successful year with the release of Kubernetes 1.9, adding important new features that should help to further encourage enterprise adoption.

Kubernetes is the most popular container orchestration software. It is used to simplify the deployment and management of software containers, a popular tool among developers that lets them run their applications across multiple computing environments without making any changes to the underlying code.

Read more at Silicon Angle

Why Linux HDCP Isn’t the End of the World

Recently, Sean Paul from Google’s ChromeOS team submitted a patch series to enable HDCP support for the Intel display driver. HDCP – or High-bandwidth Digital Content Protection to its parents – is used to encrypt content over HDMI and DisplayPort links so that it can only be decoded by trusted devices.

By Daniel Stone, Graphics Lead at Collabora.

HDCP is typically used to protect high-quality content. A source device will try to negotiate an HDCP link with its downstream receiver, such as your TV or a frame-capture device. If an HDCP link can be negotiated, the pixel content is encrypted over the wire and decrypted by the trusted downstream device. If an HDCP link cannot be successfully negotiated and pixel data remains unencrypted, the typical behaviour is to fall back to a lower resolution or quality that is in some way less desirable to capture.

This is a form of copy protection usually lumped in with Digital Rights Management, something the open source community is often jumpy about. Most of the sound and fury typically comes from people mixing up the acronym with the kernel’s display management framework called the Direct Rendering Manager; this is thus the first known upstream submission of DRM for DRM.

Regardless, there is no reason for the open-source community to worry at all.

HDCP support is implemented almost entirely in the hardware. Rather than adding a mandatory encryption layer for content, the HDCP kernel support is dormant unless userspace explicitly requests an encrypted link. It then attempts to enable encryption in the hardware and informs userspace of the result. So there’s the first out: if you don’t want to use HDCP, then don’t enable it! The kernel doesn’t force anything on an unwilling userspace. Sinks (such as TVs) cannot demand an upstream link provide HDCP, either.

HDCP support is also only over the wire, not on your device. A common misconception is that DRM means that the pixel frames coming from your video decoder are encrypted. Not so: all content is completely unencrypted locally, with encryption only occurring at the very last step before the stream of pixels becomes a stream of physical electrons on a wire.

Continue reading on Collabora’s blog.

IPv6 Auto-Configuration in Linux

In Testing IPv6 Networking in KVM: Part 1, we learned about unique local addresses (ULAs). In this article, we will learn how to set up automatic IP address configuration for ULAs.

When to Use Unique Local Addresses

Unique local addresses use the fd00::/8 address block, and are similar to our old friends the IPv4 private address classes: 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16. But they are not intended as a direct replacement. IPv4 private address classes and network address translation (NAT) were created to alleviate the shortage of IPv4 addresses, a clever hack that prolonged the life of IPv4 for years after it should have been replaced. IPv6 supports NAT, but I can’t think of a good reason to use it. IPv6 isn’t just bigger IPv4; it is different and needs different thinking.

So what’s the point of ULAs, especially when we have link-local addresses (fe80::/10) and don’t even need to configure them? There are two important differences. One, link-local addresses are not routable, so you can’t cross subnets. Two, you control ULAs; choose your own addresses, make subnets, and they are routable.

Another benefit of ULAs is you don’t need an allocation of global unicast IPv6 addresses just for mucking around on your LAN. If you have an allocation from a service provider then you don’t need ULAs. You can mix global unicast addresses and ULAs on the same network, but I can’t think of a good reason to have both, and for darned sure you don’t want to use network address translation (NAT) to make ULAs publicly accessible. That, in my peerless opinion, is daft.

ULAs are for private networks only; they should be blocked from leaving your network and not allowed to roam the Internet. That should be simple: just block the whole fd00::/8 range on your border devices.
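
On a Linux border router, a couple of ip6tables rules are enough. This is only a rough sketch; wan0 is a stand-in for whatever your upstream interface is actually called:

# Drop ULA traffic crossing the upstream link in either direction
$ sudo ip6tables -A FORWARD -o wan0 -s fd00::/8 -j DROP
$ sudo ip6tables -A FORWARD -i wan0 -d fd00::/8 -j DROP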

Address Auto-Configuration

ULAs are not automatic like link-local addresses, but setting up auto-configuration is easy as pie with radvd, the router advertisement daemon. Before you change anything, run ifconfig or ip addr show to see your existing IP addresses.

You should install radvd on a dedicated router for production use, but for testing you can install it on any Linux PC on your network. In my little KVM test lab, I installed it on Ubuntu with apt-get install radvd. It should not start after installation, because there is no configuration file:

$ sudo systemctl status radvd
● radvd.service - LSB: Router Advertising Daemon
   Loaded: loaded (/etc/init.d/radvd; bad; vendor preset: enabled)
   Active: active (exited) since Mon 2017-12-11 20:08:25 PST; 4min 59s ago
     Docs: man:systemd-sysv-generator(8)

Dec 11 20:08:25 ubunut1 systemd[1]: Starting LSB: Router Advertising Daemon...
Dec 11 20:08:25 ubunut1 radvd[3541]: Starting radvd:
Dec 11 20:08:25 ubunut1 radvd[3541]: * /etc/radvd.conf does not exist or is empty.
Dec 11 20:08:25 ubunut1 radvd[3541]: * See /usr/share/doc/radvd/README.Debian
Dec 11 20:08:25 ubunut1 radvd[3541]: * radvd will *not* be started.
Dec 11 20:08:25 ubunut1 systemd[1]: Started LSB: Router Advertising Daemon.

It’s a little confusing with all the start and not started messages, but radvd is not running, which you can verify with good old ps|grep radvd. So we need to create /etc/radvd.conf. Copy this example, replacing the network interface name on the first line with your interface name:

interface ens7 {
  AdvSendAdvert on;
  MinRtrAdvInterval 3;
  MaxRtrAdvInterval 10;
  prefix fd7d:844d:3e17:f3ae::/64 {
    AdvOnLink on;
    AdvAutonomous on;
  };
};

The prefix defines your network address, which is the first 64 bits of the address. The first two characters must be fd, the next 40 bits are your randomly generated global ID, and the last 16 bits of the prefix are the subnet ID. Leave the final 64 bits empty, as radvd will assign them as the host address. Your subnet size must always be /64. RFC 4193 requires that the global ID be randomly generated; see Testing IPv6 Networking in KVM: Part 1 for more information on creating and managing ULAs.
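
If you want to generate the random 40 bits from the command line, here is one quick-and-dirty sketch using standard tools (any good source of random bytes will do). It prints a random /48 such as fd7d:844d:3e17::/48; append a 16-bit subnet ID (f3ae in this article's example) to form your /64 prefix:

# 5 bytes from /dev/urandom = the 40 random bits RFC 4193 calls for
$ r=$(head -c5 /dev/urandom | od -An -tx1 | tr -d ' \n')
$ echo "fd${r:0:2}:${r:2:4}:${r:6:4}::/48"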

IPv6 Forwarding

IPv6 forwarding must be enabled. This command enables it until restart:

$ sudo sysctl -w net.ipv6.conf.all.forwarding=1

Uncomment or add this line to /etc/sysctl.conf to make it permanent:

net.ipv6.conf.all.forwarding = 1
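
You can apply /etc/sysctl.conf without rebooting and confirm the setting took; sysctl -p reloads the file, and querying the key shows the current value:

$ sudo sysctl -p
$ sysctl net.ipv6.conf.all.forwarding
net.ipv6.conf.all.forwarding = 1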

Start the radvd daemon:

$ sudo systemctl stop radvd
$ sudo systemctl start radvd

This example reflects a quirk I ran into on my Ubuntu test system; I always have to stop radvd, no matter what state it is in, and then start it to apply any changes.

You won’t see any output on a successful start, and often not on a failure either, so run sudo systemctl status radvd. If there are errors, systemctl will tell you. The most common errors are syntax errors in /etc/radvd.conf.

A cool thing I learned after complaining on Twitter: when you run journalctl -xe --no-pager to debug systemctl errors, your output lines will wrap, and then you can actually read your error messages.
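
The radvd package also ships a small radvdump utility; running it on any host on the segment prints router advertisements as they arrive, which is a handy way to confirm that the prefix and flags you configured are really going out on the wire (press Ctrl+c to stop):

$ sudo radvdump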

Now check your hosts to see their new auto-assigned addresses:

$ ifconfig
ens7      Link encap:Ethernet  HWaddr 52:54:00:57:71:50  
          [...]
          inet6 addr: fd7d:844d:3e17:f3ae:9808:98d5:bea9:14d9/64 Scope:Global
          [...]

And there it is! Come back next week to learn how to manage DNS for ULAs, so you can use proper hostnames instead of those giant IPv6 addresses.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

What’s New in Kubernetes Containers

Kubernetes 1.9, the latest version of the open source container orchestration framework, brings both full-blown and beta-test versions of significant new features:

  • The general availability of the Workloads API.
  • Beta support for Windows Server.
  • An alpha version of a new container storage API.

What’s new in Kubernetes 1.9

Kubernetes 1.9 was released in December 2017.

Production version of the Workloads API

Promoted to beta in Kubernetes 1.8 and now in production release in Kubernetes 1.9, the Apps Workloads API provides ways to define workloads based on their behaviors, such as long-running apps that need persistent state.
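
As a quick sanity check on your own cluster (assuming you already have kubectl pointed at a 1.9 cluster), you can list the API group/versions the server exposes and look for apps/v1, the newly stable group:

# List the served API group/versions; apps/v1 should be among them
$ kubectl api-versions | grep '^apps/'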

Read more at InfoWorld

Simplicity Before Generality, Use Before Reuse

A common problem in component frameworks, class libraries, foundation services, and other infrastructure code is that many are designed to be general purpose without reference to concrete applications. This leads to a dizzying array of options and possibilities that are often unused or misused — or just not useful.

Generally, developers work on specific systems; specifically, the quest for unbounded generality rarely serves them well (if at all). The best route to generality is through understanding known, specific examples, focusing on their essence to find an essential common solution. Simplicity through experience rather than generality through guesswork.

Favouring simplicity before generality acts as a tiebreaker between otherwise equally viable design alternatives. When there are two possible solutions, favour the one that is simpler and based on concrete need rather than the more intricate one that boasts of generality. 


Read more at Medium

How to Squeeze the Most out of Linux File Compression

If you have any doubt about the many commands and options available on Linux systems for file compression, you might want to take a look at the output of the apropos compress command. Chances are you’ll be surprised by the many commands that you can use for compressing and decompressing files, as well as for comparing compressed files, examining and searching through the content of compressed files, and even changing a compressed file from one format to another (e.g., .z format to .gz format).

You’re likely to see all of these entries just for the suite of bzip2 compression commands. Add in zip, gzip, and xz, and you’ve got a lot of interesting options.
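
To see how the major tools compare on your own data, one quick sketch is to compress copies of the same file with each and compare the results; bigfile here is just a placeholder for any reasonably large file you want to test with:

# Compress a copy of the same file with each tool, then compare sizes
$ for tool in gzip bzip2 xz; do cp bigfile bigfile.$tool; $tool bigfile.$tool; done
$ ls -l bigfile bigfile.*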

Read more at NetworkWorld

CoreOS’s Open Cloud Services Could Bring Cloud Portability to Container-Native Apps

With the release of Tectonic 1.8, CoreOS provides a way to easily deploy container-native applications as services, even across multiple service providers and in-house resources.

“We take open source APIs, make them super easy to consume, and create a catalog of these things to run on top of Kubernetes so they are portable no matter where you go,” said Brandon Philips, CoreOS chief technology officer.

The company launched this latest iteration of Tectonic, its commercial distribution of the Kubernetes open source container orchestration engine, at the Cloud Native Computing Foundation’s KubeCon 2017 event, held last week in Austin.

Read more at The New Stack