This past week in open source, the 2018 Linux Foundation Events list is live, how Linux wound up dominating the TOP500 list, and more! Read on to stay in the know.
1) Adrian Bridgwater breaks down what’s coming up in 2018 for Linux Foundation events.
2) This fall, the kernel team extended Long Term Support (LTS) for the next version of Linux from two to six years, but that doesn’t necessarily mean the same for future versions.
Make this festive season one to remember with a project you can build for around $25 over a weekend and share with your friends and family.
Grab a mince pie and a cup of coffee as we build your very own Festive Lights decoration, powered by a Raspberry Pi Zero W and Docker containers. It’ll synchronise its colour across the world in real time and is controllable through Twitter using the Cheerlights platform.
We’ll customise a small festive decoration by adding in a Raspberry Pi Zero W and colourful, low-power lights from Pimoroni, then use Docker to build, ship and run the code without any guess-work.
Increasingly, as open source programs become more pervasive at organizations of all sizes, tech and DevOps workers are choosing to or being asked to launch their own open source projects. From Google to Netflix to Facebook, companies are also releasing their open source creations to the community. It’s become common for open source projects to start from scratch internally, after which they benefit from collaboration involving external developers.
Launching a project and then rallying community support can be more complicated than you think, however. A little up-front work can help things go smoothly, and that’s exactly where the new guide to Starting an Open Source Project comes in.
There are many different types of Kubernetes distributions in the container orchestration realm. They range from fully community produced to fully commercial and vary according to the tools and features they offer, as well as the levels of abstraction and control they provide. So which Kubernetes distribution is right for your organization?
Your needs as a user — including the working environment, the availability of expertise, and the specific use case you’re dealing with — determine whether Containers as a Service (CaaS) or an abstracted platform is the right choice. No single, straightforward framework exists to guarantee a perfect decision. Still, the two charts we present below may be a start.
NXP Semiconductors, a world leader in secure connectivity solutions, just announced a Linux distribution that is intended to support factory automation. It’s called Open Industrial Linux (OpenIL), and it’s promising true industrial-grade security based on trusted computing, hardened software, cryptographic operations and end-to-end security.
The fact that factory managers and industrial equipment manufacturers are turning to Linux is not surprising, considering its operational stability, professional approach to system security, and low cost of ownership. The importance of manufacturing security and reliability to the well-being of any industrial nation is clear from the focus that DHS places on this sector.
Don’t be a watt-waster. If your computers don’t need to be on, shut them down. For convenience and nerd creds, you can configure your Linux computers to wake up and shut down automatically.
Precious Uptimes
Some computers need to be on all the time, which is fine as long as it’s not about satisfying an uptime compulsion. Some people are very proud of their lengthy uptimes, and now that we have kernel hot-patching, only hardware failures require shutdowns. I think it’s better to be practical. Save electricity as well as wear on your moving parts, and shut them down when they’re not needed. For example, you can wake up a backup server at a scheduled time, run your backups, and then shut it down until it’s time for the next backup. Or, you can configure your Internet gateway to be on only at certain times. Anything that doesn’t need to be on all the time can be configured to turn on, do a job, and then shut down.
Sleepies
For computers that don’t need to be on all the time, good old cron will shut them down reliably. Use either root’s crontab or /etc/crontab. This example creates a root cron job to shut down every night at 11:15 p.m.
# crontab -e -u root
# m h dom mon dow command
15 23 * * * /sbin/shutdown -h now
This example runs only on weekdays.
15 23 * * 1-5 /sbin/shutdown -h now
You can create multiple cron jobs for different days and times. See man 5 crontab to learn about all the time and date fields.
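For example, you could pair the weekday job above with a later shutdown on weekends; the times here are only placeholders, so adjust them to taste:
# weekdays at 11:15 p.m.
15 23 * * 1-5 /sbin/shutdown -h now
# weekends at 1:30 a.m.
30 01 * * 6,0 /sbin/shutdown -h now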
You may also use /etc/crontab, which is fast and easy, and everything is in one file. You have to specify the user:
15 23 * * 1-5 root shutdown -h now
Wakies
Auto-wakeups are very cool; most of my SUSE colleagues are in Nuremberg, so I am crawling out of bed at 5 a.m. to have a few hours of overlap with their schedules. My work computer turns itself on at 5:30 a.m., and then all I have to do is drag my coffee and myself to my desk to start work. It might not seem like pressing a power button is a big deal, but at that time of day every little thing looms large.
Waking up your Linux PC can be less reliable than shutting it down, so you may want to try different methods. You can use wakeonlan, RTC wakeups, or your PC’s BIOS to set scheduled wakeups. These all work because, when you power off your computer, it’s not really all the way off; it is in an extremely low-power state and can receive and respond to signals. You need to use the power supply switch to turn it off completely.
BIOS Wakeup
A BIOS wakeup is the most reliable. My system BIOS has an easy-to-use wakeup scheduler (Figure 1). Chances are yours does, too. Easy peasy.
Figure 1: My system BIOS has an easy-to-use wakeup scheduler.
wakeonlan
wakeonlan is the next most reliable method. This requires sending a signal from a second computer to the computer you want to power on. You could use an Arduino or Raspberry Pi to send the wakeup signal, a Linux-based router, or any Linux PC. First, look in your system BIOS to see if wakeonlan is supported — which it should be — and then enable it, as it should be disabled by default.
Then you’ll need an Ethernet network adapter that supports wakeonlan; wireless adapters won’t work. Verify that your Ethernet card supports it.
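A quick check with ethtool looks something like this (eth0 and the sample output here are only placeholders for your own interface and its capabilities):
# ethtool eth0 | grep -i wake-on
        Supports Wake-on: pumbg
        Wake-on: g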
The Supports Wake-on line tells you what features are supported:
d — all wake ups disabled
p — wake up on physical activity
u — wake up on unicast messages
m — wake up on multicast messages
b — wake up on broadcast messages
a — wake up on ARP messages
g — wake up on magic packet
s — set the Secure On password for the magic packet
man ethtool is not clear on what the p switch does; it suggests that any signal will cause a wakeup. In my testing, however, it doesn’t do that. The one that must be enabled is g (wake up on magic packet), and the Wake-on line shows that it is already enabled. If it is not enabled, you can use ethtool to enable it, using your own device name, of course:
# ethtool -s eth0 wol g
This may or may not survive a restart, so to make it a sure thing, you can create a root cron job to run the enable command after every restart:
@reboot /usr/bin/ethtool -s eth0 wol g
Figure 2: Enable Wake on LAN.
Another option: recent Network Manager versions have a nice little checkbox to enable wakeonlan (Figure 2).
There is a field for setting a password, but if your network interface doesn’t support the Secure On password, it won’t work.
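If you would rather skip the GUI, recent NetworkManager releases can also set this from the command line with nmcli; this is only a sketch, and the connection name is a placeholder for your own:
# nmcli connection modify "Wired connection 1" 802-3-ethernet.wake-on-lan magic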
Now you need to configure a second PC to send the wakeup signal. You don’t need root privileges, so create a cron job for your user. You need the MAC address of the network interface on the machine you’re waking up:
30 08 * * * /usr/bin/wakeonlan D0:50:99:82:E7:2B
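If you don’t already have the MAC address handy, you can read it from the target machine itself; eth0 is a placeholder for its interface name:
$ ip link show eth0 | grep link/ether
    link/ether d0:50:99:82:e7:2b brd ff:ff:ff:ff:ff:ff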
RTC Alarm Clock
Using the real-time clock for wakeups is the least reliable method. Check out Wake Up Linux With an RTC Alarm Clock; this is a bit outdated as most distros use systemd now. Come back next week to learn more about updated ways to use RTC wakeups.
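If you want to experiment in the meantime, util-linux includes an rtcwake command that can program the alarm. A minimal sketch, with a made-up wakeup time, is to set the alarm without suspending and then power off as usual:
# rtcwake -m no -t $(date -d 'tomorrow 05:30' +%s)
# shutdown -h now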
Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.
Although you hear a lot about containers and Kubernetes these days, there’s a lot of mystery around them. In her Lightning Talk at All Things Open 2017, “From 0 to Kubernetes,” Amy Chen clears up the confusion.
Amy, a software engineer at Rancher Labs, describes containers as baby computers living inside another computer that are suffering an “existential crisis” as they try to figure out their place in the world. Kubernetes is the way all those baby computers are organized.
Over the past several weeks, we have been discussing the Understanding OPNFV book (see links to previous articles below). In this last article in the series, we will look at why you should care about the project and how you can get involved.
OPNFV provides both tangible and intangible benefits to end users. Tangible benefits include those that directly impact business metrics, whereas the intangibles include benefits that speed up the overall NFV transformation journey but are harder to measure. Because OPNFV focuses primarily on integrating and testing upstream projects and adding carrier-grade features to them, these benefits can be difficult to understand.
To understand this more clearly, let’s go back to the era before OPNFV.
At this year’s Cloud Foundry Summit Europe, the story was about developers as the heroes. They’re the ones who make the platforms. They are akin to the engineers who played such a pivotal role in designing the railroads, or in modern times made the smartphone possible. This means a more important role for developer advocates who, at organizations such as Google, are spending a lot more time with customers. These are the subject matter experts helping developers build out their platforms. They are gathering data to develop feedback loops that flow back into open source communities for ongoing development.
In her interview, Bannerman noted that while many companies have already completed a migration over to the cloud, some have not yet done so, and platforms such as Cloud Foundry are helping them to bridge that gap.
Docker is a tool that simplifies the installation process for software engineers. Coming from a statistics background, I used to care very little about how to install software and would occasionally spend a few days trying to resolve system configuration issues. Enter the godsend Docker almighty.
Think of Docker as a lightweight virtual machine (I apologise to the Docker gurus for using that term). Generally, someone writes a *Dockerfile* that builds a *Docker image* containing most of the tools and libraries you need for a project. You can use this as a base and add any other dependencies your project requires. Its underlying philosophy is that if it works on my machine, it will work on yours.
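As a rough sketch of that workflow (the image name my-project is made up, and this assumes a Dockerfile sits in the current directory):
$ docker build -t my-project .      # bake the tools and libraries into an image
$ docker run --rm -it my-project    # run the same environment on any machine with Docker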