
Open Source Akraino Edge Computing Project Leaps Into Action

The ubiquitous topic of edge computing has so far primarily focused on IoT and machine learning. A new Linux Foundation project called Akraino Edge Stack intends to standardize similar concepts for use on edge telecom and networking systems in addition to IoT gateways. The goal is to build an “open source software stack that supports high-availability cloud services optimized for edge computing systems and applications,” says the project.

“The Akraino Edge Stack project is focused on anything related to the edge, including both telco and enterprise use cases,” said Akraino evangelist Kandan Kathirvel, Director of Cloud Strategy & Architecture at AT&T, in an interview with Linux.com.

The project announced it has “moved from formation into execution,” and revealed a slate of new members including Arm, Dell, Juniper, and Qualcomm. New member Ericsson is joining AT&T Labs to host the first developer conference on Aug. 23-24.

Akraino Edge Stack was announced in February based on code contributions from AT&T for carrier-scale edge computing. In March, Intel announced it was joining the project and open sourcing parts of its Wind River Titanium Cloud and Network Edge Virtualization SDK for the emerging Akraino stack. Intel was joined by a dozen mostly China-based members, including China Mobile, China Telecom, China Unicom, Docker, Huawei, Tencent, and ZTE.

The Akraino Edge Stack project has now announced broader-based support with new members Arm, Dell EMC, Ericsson, inwinSTACK, Juniper Networks, Nokia, Qualcomm, Radisys, Red Hat, and Wind River. The project says it has begun to develop “blueprints that will consist of validated hardware and software configurations against defined use case and performance specifications.” The initial blueprints and seed code will be opened to the public at the end of the week following the Akraino Edge Stack Developer Summit at AT&T Labs in Middletown, New Jersey.

The project announced a lightweight governance framework with a Technical Steering Committee (TSC), composed of “active committers within the community.” There is “no prerequisite of financial contribution,” says the project.

Edge computing meets edge networking

Like most edge computing projects and products, such as AWS Greengrass, the Linux Foundation’s EdgeX Foundry, and Google’s upcoming Cloud IoT Edge, Akraino aims to bring cloud technologies and analytics to smaller-scale computers that sit closer to the edge of the network. The goal is to reduce the latency of cloud/device interactions while also reducing costly bandwidth consumption and improving reliability via a distributed network.

Akraino will offer blueprints for IoT, but it is focused more on bringing edge services to telecom and networking systems such as cellular base stations, smaller networking servers, customer premises equipment, and virtualized central offices (VCOs). The project will supply standardized blueprints for implementing virtual network functions (VNFs) in these systems for applications ranging from threat detection to augmented reality to specialized services required to interconnect cars and drones. Virtualization avoids the cost and complexity of integrating specialized hardware with edge networking systems.

“One key difference from other communities is that we offer blueprints,” said AT&T’s Kathirvel. “Blueprints are declarative configurations of everything including the hardware, software, and operational and security tools — everything you need to run production at large scale.”

When asked to clarify the distinction between Akraino’s stack and the EdgeX Foundry’s middleware for industrial IoT, Kathirvel said that EdgeX is more focused on the intricacies of IIoT gateway/sensor communications, whereas Akraino has a broader focus and is more concerned with cloud connections.

“Akraino Edge Stack is not limited to IoT — we’re bringing everything together in respect to the edge,” said Kathirvel. “It’s complementary with EdgeX Foundry in that you could take EdgeX code and create a blueprint and maintain that within the Akraino Edge Stack as an end to end stack. In addition, the community is working on additional use cases to support different classes of edge hardware.”

Meeting new demands for sub-20ms latency

Initially, Akraino Edge Stack use cases will be “focused on provider deployment,” said Kathirvel, referring to telecom applications. These will include emerging 5G-enabled applications such as “AR/VR and connected cars” in which sub-20 millisecond latency is required. In addition, edge computing can reduce the extent to which network bandwidth must be boosted to accommodate demanding multimedia-rich and cloud-intensive end-user applications.

Akraino Edge Stack borrows virtualization and container technologies from open source networking projects such as OpenStack. The goal is to create a common API stack for deploying applications using VNFs running within containers. A VNF is a software-based implementation of a network function that would traditionally run on dedicated hardware, a concept defined by the closely related NFV (network functions virtualization) initiatives.

In a May 23 presentation (YouTube video) at the OpenStack Summit Vancouver, Kathirvel and fellow Akraino contributor Melissa Evers-Hood of Intel listed several other projects and technologies that the stack will accommodate with blueprints, including Ceph (distributed cloud storage), Kata Containers, Kubernetes, and the Intel/Wind River-backed StarlingX for open cloud infrastructure. Aside from EdgeX and OpenStack, other Linux Foundation-hosted projects on the list include DANOS (Disaggregated Network Operating System) and the LF’s new Acumos AI project for developing a federated platform to manage and share models for AI and machine learning.

Akraino aligns closely with OpenStack edge computing initiatives, as well as the Linux Foundation’s ONAP (Open Network Automation Platform). ONAP, which was founded in Feb. 2017 from the merger of the earlier ECOMP and OPEN-O projects, is developing a framework for real-time, policy-driven software automation of VNFs.

Join us at Open Source Summit + Embedded Linux Conference Europe in Edinburgh, UK on October 22-24, 2018, for 100+ sessions on Linux, Cloud, Containers, AI, Community, and more.

How To Install Prometheus on Ubuntu 18.04 LTS

Prometheus is a free and open source software ecosystem that allows us to collect metrics from our applications and store them in a time-series database. It is a very powerful monitoring system suitable for dynamic environments. Prometheus is written in Go and uses its own query language, PromQL, for data processing. Prometheus can provide metrics for CPU, memory, disk usage, I/O, and network statistics, as well as for services such as MySQL and Nginx.

In this tutorial, we will explain how to install Prometheus on Ubuntu 18.04 server.

Requirements

  • A server running Ubuntu 18.04 LTS.
  • A non-root user with sudo privileges.

Install Prometheus

Prometheus is not available in the default Ubuntu 18.04 LTS (Bionic Beaver) repository, so you will need to add a repository that provides it.
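Once installed, Prometheus is driven by a YAML configuration file (typically /etc/prometheus/prometheus.yml) that defines which endpoints it scrapes and how often. The snippet below is a minimal illustrative sketch rather than part of the HowToForge tutorial; the scrape interval, job names, and targets are assumptions:

prometheus.yml

global:
  scrape_interval: 15s              # how often to pull metrics (illustrative value)

scrape_configs:
  - job_name: 'prometheus'          # Prometheus scraping its own metrics endpoint
    static_configs:
      - targets: ['localhost:9090'] # default Prometheus listen address
  - job_name: 'node'                # hypothetical job for a node_exporter instance
    static_configs:
      - targets: ['localhost:9100']

After editing this file, restart (or reload) the Prometheus service so the new scrape targets take effect.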

Read more at HowToForge

Building in the Open: ONS Europe Demos Highlight Networking Industry Collaboration

LF Networking (LFN), launched on January 1st of this year, has already made a significant impact in the open source networking ecosystem, gaining over 100 members in just the first 100 days. Critically, LFN also continues to attract support and participation from many of the world’s top network operators, including six new members: KT, KDDI, SK Telecom, Sprint, and Swisscom, announced in May, and Deutsche Telekom, announced just last month. In fact, member companies of LFN now represent more than 60% of the world’s mobile subscribers. Open source is becoming the de facto way to develop software, and it’s the technical collaboration at the project level that makes it so powerful.

Similar to the demos in the LFN Booth at ONS North America, the LFN Booth at ONS Europe will once again showcase the top community-led technical demos from the LFN family of projects. We have increased the number of demo stations from 8 to 10 and, for the first time, are showcasing demos from the big data analytics project PNDA, as well as demos that include the newly added LFN project Tungsten Fabric (formerly OpenContrail). Technology from founding LFN projects FD.io, ONAP, OPNFV, and OpenDaylight will also be represented, along with adjacent projects like Acumos, Kubernetes, OpenCI, the Open Compute Project, and OpenStack.

Read more at The Linux Foundation

Encrypting NFSv4 with Stunnel TLS

NFS clients and servers push file traffic over clear-text connections in the default configuration, which is incompatible with sensitive data. TLS can wrap this traffic, finally bringing protocol security. Before you use your cloud provider’s NFS tools, review all of your NFS usage and secure it where necessary.

The Network File System (NFS) is the most popular file-sharing protocol in UNIX. The protocol is decades old and predates Linux, and the most modern v4 releases are easily firewalled and offer nearly everything required for seamless manipulation of remote files as if they were local.

The most obvious feature missing from NFSv4 is native, standalone encryption. Absent Kerberos, the protocol operates only in clear text, and this presents an unacceptable security risk in modern settings. NFS is hardly alone in this shortcoming, as I have already covered clear-text SMB in a previous article. Compared to SMB, NFS over stunnel offers better encryption (likely AES-GCM if used with a modern OpenSSL) on a wider array of OS versions, with no pressure in the protocol to purchase paid updates or newer OS releases.

Read more at Linux Journal

What is Machine Learning?

This is the first of a series of articles intended to make Machine Learning more approachable to those who do not have technical training. I hope it is helpful.

Advancements in computer technology over the past decades have meant that the collection of electronic data has become more commonplace in most fields of human endeavor. Many organizations now find themselves holding large amounts of data spanning many prior years. This data can relate to people, financial transactions, biological information, and much, much more.

Simultaneously, data scientists have been developing iterative computer programs called algorithms that can look at this large amount of data, analyze it, and identify patterns and relationships that cannot be identified by humans. Analyzing past phenomena can provide extremely valuable information about what to expect in the future from the same, or closely related, phenomena. In this sense, these algorithms can learn from the past and use this learning to make valuable predictions about the future.

While learning from data is not in itself a new concept, Machine Learning differentiates itself from other methods of learning by a capacity to deal with a much greater quantity of data, and a capacity to handle data that has limited structure. This allows Machine Learning to be successfully utilized on a wide array of topics that had previously been considered too complex for other learning methods.

Read more at Towards Data Science

Building a Cloud Native Future

Cloud and open source are changing the world and can play an integral role in how companies transform themselves. That was the message from Abby Kearns, executive director of open source platform as a service provider Cloud Foundry Foundation, who delivered a keynote address earlier this summer at LinuxCon + ContainerCon + CloudOpen China, known as LC3.

“Cloud native technologies and cloud native applications are growing,” Kearns said. Over the next 18 months, there will be a 100 percent increase in the number of cloud native applications organizations are writing and using, she added. “This means you can no longer just invest in IT,” but need to invest in cloud and cloud technologies as well. …

To give the audience an idea of what the future will look like and where investments are being made in cloud and open source, Kearns cited a few examples. The automotive industry is changing rapidly, she said, and a Volkswagen automobile, for example, is no longer just a car; it has become a connected mobile device filled with sensors and data.

“Volkswagen realized they need to build out developer teams and applications that could take advantage of many clouds across 12 different brands,” she said. The car company has invested in Cloud Foundry and cloud native technologies to help them do that, she added.

“At the end of the day it’s about the applications that extend that car through mobile apps, supply chain management — all of that pulled together to bring a single concise experience for the automotive industry.”

Watch the complete keynote at The Linux Foundation.

 

AryaLinux: A Distribution and a Platform

Most Linux distributions are simply that: a distribution of Linux that offers a variation on an open source theme. You can download any of those distributions, install it, and use it. Simple. There’s very little mystery to using Linux these days, as the desktop is incredibly easy to use and server distributions are a requirement in business.

But not every Linux distribution ends with that idea; some go one step further and create both a distribution and a platform. Such is the case with AryaLinux. What does that mean? Easy. AryaLinux doesn’t only offer an installable, open source operating system; it also offers a platform with which users can build a complete GNU/Linux operating system. The provided scripts were created based on the instructions from Linux From Scratch and Beyond Linux From Scratch.

If you’ve ever attempted to build your own Linux distribution, you probably know how challenging it can be. AryaLinux has made that process quite a bit less stressful. In fact, although the build can take quite a lot of time (up to 48 hours), the process of building the AryaLinux platform is quite easy.

But don’t think that’s the only way you can have this distribution. You can download a live version of AryaLinux and install as easily as if you were working with Ubuntu, Linux Mint, or Elementary OS.

Let’s get AryaLinux up and running from the live distribution and then walk through the process of building the platform, using the special builder image.

The Live distribution

From the AryaLinux download page, you can get a version of the operating system that includes either GNOME or Xfce. I chose the GNOME route and found it configured to include the Dash to Dock and Applications Menu extensions. Both of these will please most average GNOME users. Once you’ve downloaded the ISO image, burn it to a DVD/CD or a USB flash drive and boot up the live instance. Do note that you need at least 25GB of space on a drive to install AryaLinux. If you’re planning on testing this out as a virtual machine, create a 30-40GB virtual drive; otherwise, the installer will fail every time.

Once booted, you will be presented with a login screen, with the default user selected. Simply click the user and log in (no password is required).

To locate the installer, click the Applications menu, click Activities Overview, type “installer” and click on the resulting entry. This will launch the AryaLinux installer … one that will look very familiar to anyone who has used a modern Linux installer (Figure 1).

Figure 1: The AryaLinux installer is quite easy to navigate.

In the next window (Figure 2), you are required to define a root partition. To do this, type “/” (no quotes) in the Choose the root partition section.

Figure 2: Defining your root partition for the AryaLinux installation.

If you don’t define a home partition, it will be created for you. If you don’t define a swap partition, none will be created. If you have a need to create a home partition outside of the standard /home, do it here. The next installation windows have you do the following:

  • Create a standard user.

  • Create an administrative password.

  • Choose locale and keyboard.

  • Choose your timezone.

That’s all there is to the installation. Once it completes, reboot, remove the media (or delete the .iso from your Virtual Machine storage listing), and boot into your newly-installed AryaLinux operating system.

What’s there?

Out of the box, you should find everything necessary to use AryaLinux as a fully functioning desktop distribution. Included are:

  • LibreOffice

  • Rhythmbox

  • Files

  • GNOME Maps

  • GIMP

  • Simple Scan

  • Chromium

  • Transmission

  • Avahi SSH/VNC Server Browser

  • Qt5 Assistant/Designer/Linguist/QDbusViewer

  • Brasero

  • Cheese

  • Echomixer

  • VLC

  • Network Tools

  • GParted

  • dconf Editor

  • Disks

  • Disk Usage Analyzer

  • Document Viewer

  • And more

The caveats

It should be noted that this is the first official release of AryaLinux, so there will be issues. Right off the bat, I realized that no matter what I tried, I could not get the terminal to open. Unfortunately, the terminal is a necessary tool for this distribution, as there is no GUI for updating or installing packages. In order to get to a bash prompt, I had to use a virtual screen. That’s when the next caveat came into play. The package manager for AryaLinux is alps, but its primary purpose is working in conjunction with the build scripts to install the platform. Unfortunately, there is no included man page for alps on AryaLinux, and the documentation is very scarce. Fortunately, the developers did think to roll in Flatpak support, so if you’re a fan of Flatpak, you can install anything you need (so long as it’s available as a Flatpak package) using that system.

Building the platform

Let’s talk about building the AryaLinux platform. This isn’t much harder than installing the standard distribution, only it’s done via the command line. Here’s what you do:

  1. Download the AryaLinux Builder Disk.

  2. Burn the ISO to either DVD/CD or USB flash drive.

  3. Boot the live image.

  4. Once you reach the desktop, open a terminal window from the menu.

  5. Change to the root user with the command sudo su.

  6. Change directories with the command cd aryalinux/base-system

  7. Run the build script with the command ./build-arya

You will first be asked if you want to start a fresh build or resume a build (Figure 3). Remember, the AryaLinux build takes a LOT of time, so there might be an instance where you’ve started a build and need to resume.

Figure 3: Running the AryaLinux build script.

To start a new build, type “1” and then hit Enter on your keyboard. You will now be asked to define a number of options (in order to fulfill the build script requirements). Those options are:

  • Bootloader Device

  • Root Partition

  • Home Partition

  • Locale

  • OS Name

  • OS Version

  • OS Codename

  • Domain Name

  • Keyboard Layout

  • Printer Paper Size

  • Enter Full Name

  • Username

  • Computer Name

  • Use multiple cores for build (y/n)

  • Create backups (y/n)

  • Install X Server (y/n)

  • Install Desktop Environment (y/n)

  • Choose Desktop Environment (XFCE, Mate, KDE, GNOME)

  • Do you want to configure advanced options (y/n)

  • Create admin password

  • Create password for standard user

  • Install bootloader (y/n)

  • Create Live ISO (y/n)

  • Select a timezone

After you’ve completed the above, the build will start. Don’t bother watching it, as it will take a very long time to complete (depending upon your system and network connection). In fact, the build can take anywhere from 8-48 hours. After the build completes, reboot and log into your newly built AryaLinux platform.

Who is AryaLinux for?

I’ll be honest, if you’re just a standard desktop user, AryaLinux is not for you. Although you can certainly get right to work on the desktop, if you need anything outside of the default applications, you might find it a bit too much trouble to bother with. If, on the other hand, you’re a developer, AryaLinux might be a great platform for you. Or, if you just want to see what it’s like to build a Linux distribution from scratch, AryaLinux is a pretty easy route.

Even with its quirks, AryaLinux holds a lot of promise as both a Linux distribution and platform. If the developers can see to it to build a GUI front-end for the alps package manager, AryaLinux could make some serious noise.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

 

Zephyr Project Embraces RISC-V with New Members and Expanded Board Support

The Linux Foundation’s Zephyr Project, which is developing the open source Zephyr real-time operating system (RTOS) for microcontrollers, announced six new members, including RISC-V members Antmicro and SiFive. The project also announced expanded support for developer boards. Zephyr is now certified to run on 100 boards spanning ARM, x86, ARC, NIOS II, XTENSA, and RISCV32 architectures.

Antmicro, SiFive, and DeviceTone, which makes IoT-savvy smart clients, have signed up as Silver members, joining Oticon, runtime.io, Synopsys, and Texas Instruments. The other three new members — Beijing University of Posts and Telecommunications, The Institute of Communication and Computer Systems (ICCS), and Northeastern University – have joined the Vancouver Hack Space as Associate members.

The Platinum member leadership of Intel, Linaro, Nordic Semiconductor, and NXP remains the same. NXP, which has returned to an independent course after Qualcomm dropped its $44 billion bid, supplied one of the first Zephyr dev boards – its Kinetis-based FRDM-K64F (Freedom-K64F) – joining two Arduino boards and Intel’s Galileo Gen 2. Like Nordic, NXP is a leading microcontroller unit (MCU) chipmaker in addition to producing Linux-friendly Cortex-A SoCs like the i.MX8.

RTOSes go open source

Zephyr is still a toddler compared to more established open source RTOS projects like industry leader FreeRTOS, and the newer Arm Mbed, which has the advantage of being sponsored by the IP giant behind Cortex-M MCUs. Yet, the growing migration from proprietary to open source RTOSes signals good times for everyone.

“There is a major shift going on in the RTOS space, with so many things driving the increase in preference for open source choices,” said Thea Aldrich, the Zephyr Project’s new Evangelist and Developer Advocate, in an interview with Linux.com. “In a lot of ways, we’re seeing the same factors and motivations at play as happened with Linux many years ago. I am the most excited to see the movement on the low end.”

RISC-V alignment

The decision to align Zephyr with similarly future-looking open source projects like RISC-V appears to be a sound strategic move. “Antmicro and SiFive bring a lot of excitement and energy and great perspective to Zephyr,” said Aldrich.

With SiFive, the Zephyr Project now has the premier RISC-V hardware player on board. SiFive created the first MCU-class RISC-V SoC with its open source Freedom E300, which powers its Arduino-compatible HiFive1 and Arduino Cinque boards. The company also produced the first Linux-friendly RISC-V SoC with its Freedom U540, the SoC that powers its HiFive Unleashed SBC. (SiFive will soon have RISC-V-on-Linux competition from an India-based project called Shakti.)

Antmicro is the official maintainer of RISC-V in the Zephyr Project and is active in the RISC-V community. Its open source Renode IoT development framework is integrated in the Mi-V platform of Microsemi, the leading RISC-V soft-core vendor. Antmicro has also developed a variety of custom software-based implementations of RISC-V for commercial customers.

Antmicro and SiFive announced a partnership in which SiFive will provide Renode to its customers as part of “a comprehensive solution covering build, debug and test in multi-node systems.” The announcement touts Renode’s ability to simulate an entire SoC for RISC-V developers, not just the CPU.

Zephyr now supports RISC-V on QEMU, as well as the SiFive HiFive1, Microsemi’s FPGA-based, soft-core M2GL025 Mi-V board, and the Zedboard Pulpino. The latter is an implementation of PULP’s open source PULPino RISC-V soft core that runs on the venerable Xilinx Zynq based ZedBoard.

Other development boards on the Zephyr dev board list include boards based on MCUs from Microchip, Nordic, NXP, ST, and others, as well as the BBC Microbit and 96Boards Carbon. Supported SBCs that primarily run Linux, but can also run Zephyr on their MCU companion chips, include the MinnowBoard Max, Udoo Neo, and UP Squared.

Zephyr 1.13 on track

The Zephyr Project is now prepping a 1.13 build due in September, following the usual three-month release cycle. The release adds support for Precision Time Protocol (PTP) and SPDX license tracking, among other features. Zephyr 1.13 continues to expand upon Zephyr’s “safety and security certifications and features,” says Aldrich, a former Eclipse Foundation Developer Advocate.

Aldrich first encountered Zephyr when she found it to be an ideal platform for tracking her cattle with sensors on a small ranch in Texas. “Zephyr fits in really nicely as the operating system for sensors and other devices way out on the edge,” she says.

Zephyr has other advantages such as its foundation on the latest open source components and its support for the latest wireless and sensor devices. Aldrich was particularly attracted to the Zephyr Project’s independence and transparent open source governance.

“There are a lot of choices for open source RTOSes and each has its own strengths and weaknesses,” continued Aldrich. “We have a lot of really strong aspects of our project, but the community and how we operate is what comes to mind first. It’s a truly collaborative effort. For us, open source is more than a license. We’ve made it transparent how technical decisions are made and community input is incorporated.”

Join us at Open Source Summit + Embedded Linux Conference Europe in Edinburgh, UK on October 22-24, 2018, for 100+ sessions on Linux, Cloud, Containers, AI, Community, and more.

Kubernetes Design and Development Explained

Kubernetes is quickly becoming the de facto way to deploy workloads on distributed systems. In this post, I will help you develop a deeper understanding of Kubernetes by revealing some of the principles underpinning its design.

Declarative Over Imperative

As soon as you learn to deploy your first workload (a pod) on the Kubernetes open source orchestration engine, you encounter the first principle of Kubernetes: the Kubernetes API is declarative rather than imperative.

In an imperative API, you directly issue the commands that the server will carry out, e.g. “run container,” “stop container,” and so on. In a declarative API, you declare what you want the system to do, and the system will constantly drive towards that state.

Think of it like manually driving a car versus setting an autopilot system.

So in Kubernetes, you create an API object (using the CLI or REST API) to represent what you want the system to do. And all the components in the system work to drive towards that state, until the object is deleted.

For example, when you want to schedule a containerized workload, instead of issuing a “run container” command you create an API object (a pod) that describes your desired state:

simple-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: internal.mycorp.com:5000/mycontainer:1.7.9
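
The same declarative idea scales up to higher-level objects. A Deployment, for example, declares how many replicas of a pod should exist, and Kubernetes controllers keep reconciling the cluster toward that number, recreating pods whenever the actual state drifts from the declared one. The following is a minimal illustrative sketch; the object name, labels, and image tag are assumptions rather than values from the article:

nginx-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment          # illustrative object name
spec:
  replicas: 3                     # desired state: three pod replicas at all times
  selector:
    matchLabels:
      app: nginx                  # must match the pod template labels below
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9        # illustrative image tag

You never tell the cluster to “start three containers”; you declare that three replicas should exist, and the system drives toward that state until the object is deleted.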

Read more at The New Stack

James Bottomley on Linux, Containers, and the Leading Edge

It’s no secret that Linux is basically the operating system of containers, and containers are the future of the cloud, says James Bottomley, Distinguished Engineer at IBM Research and Linux kernel developer. Bottomley, who can often be seen at open source events in his signature bow tie, is focused these days on security systems like the Trusted Platform Module and the fundamentals of container technology.

With Open Source Summit happening this month in conjunction with Linux Security Summit — and Open Source Summit Europe coming up fast — we talked with Bottomley about these and other topics. …

The Linux Foundation: Who should attend Open Source Summit and why?

Bottomley: I think it’s no secret that Linux is basically the OS of containers and containers are the future of the cloud, so anyone who is interested in keeping up to date with what’s going on in the cloud should attend, because this would be the only place they can keep up with the leading edge of Linux.

Read more at The Linux Foundation