
What is Machine Learning?

This is the first of a series of articles intended to make Machine Learning more approachable to those who do not have technical training. I hope it is helpful.

Advancements in computer technology over the past decades have meant that the collection of electronic data has become more commonplace in most fields of human endeavor. Many organizations now find themselves holding large amounts of data spanning many prior years. This data can relate to people, financial transactions, biological information, and much, much more.

Simultaneously, data scientists have been developing iterative computer programs called algorithms that can look at this large amount of data, analyze it, and identify patterns and relationships that humans cannot identify on their own. Analyzing past phenomena can provide extremely valuable information about what to expect in the future from the same, or closely related, phenomena. In this sense, these algorithms can learn from the past and use this learning to make valuable predictions about the future.

While learning from data is not in itself a new concept, Machine Learning differentiates itself from other methods of learning by a capacity to deal with a much greater quantity of data, and a capacity to handle data that has limited structure. This allows Machine Learning to be successfully utilized on a wide array of topics that had previously been considered too complex for other learning methods.
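To make this concrete, here is a deliberately tiny sketch in Python, using the popular scikit-learn library with made-up numbers, of the pattern just described: an algorithm is fitted to past observations and then asked to predict an unseen case.

from sklearn.linear_model import LinearRegression

# Past observations: each row of X_past is a known case (say, a house size
# in square meters), and y_past holds the outcome recorded for it (its price).
X_past = [[50], [80], [120], [200]]
y_past = [150_000, 240_000, 360_000, 600_000]

# "Learning": the algorithm finds the pattern relating inputs to outcomes.
model = LinearRegression()
model.fit(X_past, y_past)

# "Prediction": apply the learned pattern to a case it has never seen.
print(model.predict([[100]]))  # about 300,000, given the made-up pattern above

Real Machine Learning systems work with vastly more data and far more complex patterns, but the shape of the process is the same: learn from the past, predict the future.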

Read more at Towards Data Science

Building a Cloud Native Future

Cloud and open source are changing the world and can play an integral role in how companies transform themselves. That was the message from Abby Kearns, executive director of open source platform as a service provider Cloud Foundry Foundation, who delivered a keynote address earlier this summer at LinuxCon + ContainerCon + CloudOpen China, known as LC3.

“Cloud native technologies and cloud native applications are growing,” Kearns said. Over the next 18 months, there will be a 100 percent increase in the number of cloud native applications organizations are writing and using, she added. “This means you can no longer just invest in IT,” but need to invest in cloud and cloud technologies as well. …

To give the audience an idea of what the future will look like and where investments are being made in cloud and open source, Kearns cited a few examples. The automotive industry is changing rapidly, she said, and a Volkswagen automobile, for example, is no longer just a car; it has become a connected mobile device filled with sensors and data.

“Volkswagen realized they need to build out developer teams and applications that could take advantage of many clouds across 12 different brands,” she said. The car company has invested in Cloud Foundry and cloud native technologies to help them do that, she added.

“At the end of the day it’s about the applications that extend that car through mobile apps, supply chain management — all of that pulled together to bring a single concise experience for the automotive industry.”

Watch the complete keynote at The Linux Foundation.

 

AryaLinux: A Distribution and a Platform

Most Linux distributions are simply that: a distribution of Linux that offers a variation on an open source theme. You can download any of those distributions, install it, and use it. Simple. There’s very little mystery to using Linux these days, as the desktop is incredibly easy to use and server distributions are a mainstay of business.

But not every Linux distribution ends with that idea; some go one step further and create both a distribution and a platform. Such is the case with AryaLinux. What does that mean? Easy. AryaLinux doesn’t only offer an installable, open source operating system; it also offers a platform with which users can build a complete GNU/Linux operating system. The provided scripts were created based on the instructions from Linux From Scratch and Beyond Linux From Scratch.

If you’ve ever attempted to build your own Linux distribution, you probably know how challenging it can be. AryaLinux has made that process quite a bit less stressful. In fact, although the build can take quite a lot of time (up to 48 hours), the process of building the AryaLinux platform is quite easy.

But don’t think that’s the only way you can have this distribution. You can download a live version of AryaLinux and install it as easily as if you were working with Ubuntu, Linux Mint, or Elementary OS.

Let’s get AryaLinux up and running from the live distribution and then walk through the process of building the platform, using the special builder image.

The live distribution

From the AryaLinux download page, you can get a version of the operating system that includes either GNOME or Xfce. I chose the GNOME route and found it configured to include the Dash to Dock and Applications Menu extensions, both of which will please most average GNOME users. Once you’ve downloaded the ISO image, burn it to either a DVD/CD or a USB flash drive and boot up the live instance. Do note that you need at least 25GB of space on a drive to install AryaLinux. If you’re planning on testing this out as a virtual machine, create a 30-40GB virtual drive; otherwise, the installer will fail every time.

Once booted, you will be presented with a login screen with the default user selected. Simply click the user and log in (no password is required).

To locate the installer, click the Applications menu, click Activities Overview, type “installer,” and click on the resulting entry. This will launch the AryaLinux installer, which looks much like many other Linux installers (Figure 1).

Figure 1: The AryaLinux installer is quite easy to navigate.

In the next window (Figure 2), you are required to define a root partition. To do this, type “/” (no quotes) in the Choose the root partition section.

Figure 2: Defining your root partition for the AryaLinux installation.

If you don’t define a home partition, one will be created for you. If you don’t define a swap partition, none will be created. If you need to create a home partition outside the standard /home, do it here. The next installation windows have you do the following:

  • Create a standard user.

  • Create an administrative password.

  • Choose locale and keyboard.

  • Choose your timezone.

That’s all there is to the installation. Once it completes, reboot, remove the media (or delete the .iso from your virtual machine’s storage listing), and boot into your newly installed AryaLinux operating system.

What’s there?

Out of the box, you should find everything necessary to use AryaLinux as a fully functioning desktop distribution. Included are:

  • LibreOffice

  • Rhythmbox

  • Files

  • GNOME Maps

  • GIMP

  • Simple Scan

  • Chromium

  • Transmission

  • Avahi SSH/VNC Server Browser

  • Qt5 Assistant/Designer/Linguist/QDbusViewer

  • Brasero

  • Cheese

  • Echomixer

  • VLC

  • Network Tools

  • GParted

  • dconf Editor

  • Disks

  • Disk Usage Analyzer

  • Document Viewer

  • And more

The caveats

It should be noted that this is the first official release of AryaLinux, so there will be issues. Right off the bat, I realized that no matter what I tried, I could not get the terminal to open. Unfortunately, the terminal is a necessary tool for this distribution, as there is no GUI for updating or installing packages. To get to a bash prompt, I had to use a virtual console. That’s when the next caveat came into play. The package manager for AryaLinux is alps, but its primary purpose is working in conjunction with the build scripts to install the platform. Unfortunately, no man page for alps is included with AryaLinux, and the documentation is scarce. Fortunately, the developers did think to roll in Flatpak support, so if you’re a fan of Flatpak, you can install anything you need (so long as it’s available as a Flatpak package) using that system.

Building the platform

Let’s talk about building the AryaLinux platform. This isn’t much harder than installing the standard distribution, only it’s done via the command line. Here’s what you do:

  1. Download the AryaLinux Builder Disk.

  2. Burn the ISO to either DVD/CD or USB flash drive.

  3. Boot the live image.

  4. Once you reach the desktop, open a terminal window from the menu.

  5. Change to the root user with the command sudo su.

  6. Change directories with the command cd aryalinux/base-system

  7. Run the build script with the command ./build-arya

You will first be asked if you want to start a fresh build or resume a build (Figure 3). Remember, the AryaLinux build takes a LOT of time, so there might be an instance where you’ve started a build and need to resume.

Figure 3: Running the AryaLinux build script.

To start a new build, type “1” and then hit Enter on your keyboard. You will now be asked to define a number of options (in order to fulfill the build script requirements). Those options are:

  • Bootloader Device

  • Root Partition

  • Home Partition

  • Locale

  • OS Name

  • OS Version

  • OS Codename

  • Domain Name

  • Keyboard Layout

  • Printer Paper Size

  • Enter Full Name

  • Username

  • Computer Name

  • Use multiple cores for build (y/n)

  • Create backups (y/n)

  • Install X Server (y/n)

  • Install Desktop Environment (y/n)

  • Choose Desktop Environment (XFCE, Mate, KDE, GNOME)

  • Do you want to configure advanced options (y/n)

  • Create admin password

  • Create password for standard user

  • Install bootloader (y/n)

  • Create Live ISO (y/n)

  • Select a timezone

After you’ve completed the above, the build will start. Don’t bother watching it, as it will take a very long time to complete (depending upon your system and network connection). In fact, the build can take anywhere from 8 to 48 hours. After the build completes, reboot and log into your newly built AryaLinux platform.

Who is AryaLinux for?

I’ll be honest: if you’re just a standard desktop user, AryaLinux is not for you. Although you can certainly get right to work on the desktop, if you need anything outside of the default applications, you might find it a bit too much trouble to bother with. If, on the other hand, you’re a developer, AryaLinux might be a great platform for you. Or, if you just want to see what it’s like to build a Linux distribution from scratch, AryaLinux is a pretty easy route.

Even with its quirks, AryaLinux holds a lot of promise as both a Linux distribution and a platform. If the developers see fit to build a GUI front end for the alps package manager, AryaLinux could make some serious noise.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

 

Zephyr Project Embraces RISC-V with New Members and Expanded Board Support

The Linux Foundation’s Zephyr Project, which is developing the open source Zephyr real-time operating system (RTOS) for microcontrollers, announced six new members, including RISC-V members Antmicro and SiFive. The project also announced expanded support for developer boards: Zephyr is now certified to run on 100 boards spanning ARM, x86, ARC, NIOS II, XTENSA, and RISCV32 architectures.

Antmicro, SiFive, and DeviceTone, which makes IoT-savvy smart clients, have signed up as Silver members, joining Oticon, runtime.io, Synopsys, and Texas Instruments. The other three new members – Beijing University of Posts and Telecommunications, The Institute of Communication and Computer Systems (ICCS), and Northeastern University – have joined the Vancouver Hack Space as Associate members.

The Platinum member leadership of Intel, Linaro, Nordic Semiconductor, and NXP remains the same. NXP, which has returned to an independent course after Qualcomm dropped its $44 billion bid, supplied one of the first Zephyr dev boards – its Kinetis-based FRDM-K64F (Freedom-K64F) – joining two Arduino boards and Intel’s Galileo Gen 2. Like Nordic, NXP is a leading microcontroller unit (MCU) chipmaker in addition to producing Linux-friendly Cortex-A SoCs like the i.MX8.

RTOSes go open source

Zephyr is still a toddler compared to more established open source RTOS projects like industry leader FreeRTOS, and the newer Arm Mbed, which has the advantage of being sponsored by the IP giant behind Cortex-M MCUs. Yet, the growing migration from proprietary to open source RTOSes signals good times for everyone.

“There is a major shift going on in the RTOS space, with so many things driving the increase in preference for open source choices,” said Thea Aldrich, the Zephyr Project’s new Evangelist and Developer Advocate, in an interview with Linux.com. “In a lot of ways, we’re seeing the same factors and motivations at play as happened with Linux many years ago. I am most excited to see the movement on the low end.”

RISC-V alignment

The decision to align Zephyr with similarly future-looking open source projects like RISC-V appears to be a sound strategic move. “Antmicro and SiFive bring a lot of excitement and energy and great perspective to Zephyr,” said Aldrich.

With SiFive, the Zephyr Project now has the premier RISC-V hardware player on board. SiFive created the first MCU-class RISC-V SoC with its open source Freedom E300, which powers its Arduino-compatible HiFive1 and Arduino Cinque boards. The company also produced the first Linux-friendly RISC-V SoC with its Freedom U540, the SoC that powers its HiFive Unleashed SBC. (SiFive will soon have RISC-V-on-Linux competition from an India-based project called Shakti.)

Antmicro is the official maintainer of RISC-V in the Zephyr Project and is active in the RISC-V community. Its open source Renode IoT development framework is integrated into the Mi-V platform of Microsemi, the leading RISC-V soft-core vendor. Antmicro has also developed a variety of custom software-based implementations of RISC-V for commercial customers.

Antmicro and SiFive announced a partnership in which SiFive will provide Renode to its customers as part of “a comprehensive solution covering build, debug and test in multi-node systems.” The announcement touts Renode’s ability to simulate an entire SoC for RISC-V developers, not just the CPU.

Zephyr now supports RISC-V on QEMU, as well as the SiFive HiFive1, Microsemi’s FPGA-based, soft-core M2GL025 Mi-V board, and the Zedboard Pulpino. The latter is an implementation of PULP’s open source PULPino RISC-V soft core that runs on the venerable Xilinx Zynq-based ZedBoard.

Other development boards on the Zephyr dev board list include boards based on MCUs from Microchip, Nordic, NXP, ST, and others, as well as the BBC Microbit and 96Boards Carbon. Supported SBCs that primarily run Linux, but can also run Zephyr on their MCU companion chips, include the MinnowBoard Max, Udoo Neo, and UP Squared.

Zephyr 1.13 on track

The Zephyr Project is now prepping a 1.13 build due in September, following the usual three-month release cycle. The release adds support for Precision Time Protocol (PTP) and SPDX license tracking, among other features. Zephyr 1.13 continues to expand upon Zephyr’s “safety and security certifications and features,” says Aldrich, a former Eclipse Foundation Developer Advocate.

Aldrich first encountered Zephyr when she found it to be an ideal platform for tracking her cattle with sensors on a small ranch in Texas. “Zephyr fits in really nicely as the operating system for sensors and other devices way out on the edge,” she says.

Zephyr has other advantages such as its foundation on the latest open source components and its support for the latest wireless and sensor devices. Aldrich was particularly attracted to the Zephyr Project’s independence and transparent open source governance.

“There are a lot of choices for open source RTOSes, and each has its own strengths and weaknesses,” continued Aldrich. “We have a lot of really strong aspects of our project, but the community and how we operate is what comes to mind first. It’s a truly collaborative effort. For us, open source is more than a license. We’ve made it transparent how technical decisions are made and how community input is incorporated.”

Join us at Open Source Summit + Embedded Linux Conference Europe in Edinburgh, UK on October 22-24, 2018, for 100+ sessions on Linux, Cloud, Containers, AI, Community, and more.

Kubernetes Design and Development Explained

Kubernetes is quickly becoming the de facto way to deploy workloads on distributed systems. In this post, I will help you develop a deeper understanding of Kubernetes by revealing some of the principles underpinning its design.

Declarative Over Imperative

As soon as you learn to deploy your first workload (a pod) on the Kubernetes open source orchestration engine, you encounter the first principle of Kubernetes: the Kubernetes API is declarative rather than imperative.

In an imperative API, you directly issue the commands that the server will carry out, e.g. “run container,” “stop container,” and so on. In a declarative API, you declare what you want the system to do, and the system will constantly drive towards that state.

Think of it as the difference between driving a car manually and setting an autopilot.

So in Kubernetes, you create an API object (using the CLI or REST API) to represent what you want the system to do. And all the components in the system work to drive towards that state, until the object is deleted.

For example, when you want to schedule a containerized workload, instead of issuing a “run container” command you create an API object (a pod) that describes your desired state:

simple-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: internal.mycorp.com:5000/mycontainer:1.7.9
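
Once that manifest is submitted, every component in the system works to make reality match it. As a minimal sketch of the same declarative idea, here is roughly how the pod above could be created with the official Kubernetes Python client (assuming a reachable cluster and a local kubeconfig; the registry and image name are the placeholders from the manifest):

from kubernetes import client, config

# Load cluster credentials from the local kubeconfig (e.g., ~/.kube/config).
config.load_kube_config()

# Declare the desired state: a pod named "nginx" running the given image.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="nginx"),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="nginx",
                image="internal.mycorp.com:5000/mycontainer:1.7.9",
            )
        ]
    ),
)

# Submit the object. Note that this call does not "run a container";
# it records desired state, and the scheduler and kubelet converge on it.
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)

Deleting the object, rather than issuing a “stop container” command, is likewise how you tell the system to stop driving toward that state.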

Read more at The New Stack

James Bottomley on Linux, Containers, and the Leading Edge

It’s no secret that Linux is basically the operating system of containers, and containers are the future of the cloud, says James Bottomley, Distinguished Engineer at IBM Research and Linux kernel developer. Bottomley, who can often be seen at open source events in his signature bow tie, is focused these days on security systems like the Trusted Platform Module and the fundamentals of container technology.

With Open Source Summit happening this month in conjunction with Linux Security Summit — and Open Source Summit Europe coming up fast — we talked with Bottomley about these and other topics. …

The Linux Foundation: Who should attend Open Source Summit and why?

Bottomley: I think it’s no secret that Linux is basically the OS of containers and containers are the future of the cloud, so anyone who is interested in keeping up to date with what’s going on in the cloud should attend, because this would be the only place they can keep up with the leading edge of Linux.

Read more at The Linux Foundation

How Blockchain and the Auto Industry Will Fit Together

“At this point, most of the specific potential uses for blockchain in various industries are quite speculative and a number of years out,” says Gordon Haff, technology evangelist at Red Hat. “What we can do, though, is think about the type of uses that play to blockchain strengths.”

It is plenty productive to explore sectors where, as Haff says, blockchain’s strong suits might be a good fit. The automotive industry quickly stands out. Some of its fundamental characteristics and concerns – think about the massive global supply chain, the complex web of licensing, taxation, and other regulations, and the important safety and trust issues – make it a fascinating candidate for blockchain-enabled innovation…

However, one catalyst for change is that the auto industry is deeply connected with some other sectors where blockchain technology shows promise. Marta Piekarska, director of ecosystem at Hyperledger, points out several major ones: supply chain, insurance, and payments. And that’s not necessarily a comprehensive list. The vehicles we drive and ride in cross many more avenues than we may realize.

“The automotive industry might be unique in the way that it combines many other platforms: entertainment, manufacturing, tracking of CO2 emissions, payments, and many others,” she explains.

Read more at The Enterprisers Project

What is CI/CD?

Continuous integration (CI) and continuous delivery (CD) are extremely common terms used when talking about producing software. But what do they really mean? In this article, I’ll explain the meaning and significance behind these and related terms, such as continuous testing and continuous deployment.

Quick summary

An assembly line in a factory produces consumer goods from raw materials in a fast, automated, reproducible manner. Similarly, a software delivery pipeline produces releases from source code in a fast, automated, and reproducible manner. The overall design for how this is done is called “continuous delivery.” The process that kicks off the assembly line is referred to as “continuous integration.” The process that ensures quality is called “continuous testing,” and the process that makes the end product available to users is called “continuous deployment.” And the overall efficiency experts who make everything run smoothly and simply for everyone are known as “DevOps” practitioners.
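
To make the mapping between these terms and the assembly line explicit, here is a toy sketch in Python. It is purely conceptual: the three stage functions are stand-ins for real build, test, and deployment tooling, not any particular CI/CD product.

def build(commit):
    # Continuous integration: every new commit kicks off the line.
    print(f"building {commit} ...")
    return f"artifact-{commit}"

def run_tests(artifact):
    # Continuous testing: automated quality gates check each artifact.
    print(f"testing {artifact} ...")
    return True

def deploy(artifact):
    # Continuous deployment: a passing artifact is released to users.
    print(f"deploying {artifact} ...")

def pipeline(commit):
    # Continuous delivery: the overall design that chains the stages
    # so a release is always ready to go out.
    artifact = build(commit)
    if run_tests(artifact):
        deploy(artifact)

pipeline("abc123")

In a real pipeline, each stage runs on dedicated infrastructure and a failure stops the line, but the flow (commit in, release out) is the same.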

What does “continuous” mean?

“Continuous” is used to describe many different processes that follow the practices I describe here. It doesn’t mean “always running”; it does mean “always ready to run.” In the context of creating software, it also includes several core concepts and best practices.

Read more at OpenSource.com

10 Reasons to Attend ONS Europe in September | Registration Deadline Approaching – Register & Save $605

Here’s a sneak peek at why you need to be at Open Networking Summit Europe in Amsterdam next month! But hurry – spots are going quickly. Secure your spot and register by September 1 to save $605.

Open Networking Summit, the premier open networking event in North America, now in its 7th year, comes to Europe for the first time next month. This event is like no other: content presented by your peers in the networking community, sessions carefully selected by networking specialists on the program committee, and plenty of networking and collaboration opportunities. This is an event you won’t want to miss.

Highlights include:

  1. Learn About the Future & Lessons Learned in Open Networking: Hear innovative ideas on the disruption and change of the networking and networking-enabled markets landscape in the next 3-5 years across AI, ML, and deep learning applied to networking, SD-WAN, IIoT, data insights, business intelligence, blockchain & telecom, and more. Get an in-depth scoop on the lessons learned from today’s global deployments.
  2. 100+ Sessions Covering Telecom, Enterprise, and Cloud Networking: With a blend of deep technical/developer sessions and business/architecture sessions, there is a plethora of learning opportunities for everyone. Plan your schedule now and choose from sessions, labs, tutorials, and lightning talks presented by Airbnb, Deutsche Telekom AG, Thomson Reuters, Huawei, General Motors, Türk Telekom, China Mobile, and many more.

Read more at The Linux Foundation

A Git Origin Story

A look at Linux kernel developers’ various revision control solutions through the years, Linus Torvalds’ decision to use BitKeeper and the controversy that followed, and how Git came to be created.

Originally, Linus Torvalds used no revision control at all. Kernel contributors would post their patches to the Usenet group, and later to the mailing list, and Linus would apply them to his own source tree. Eventually, Linus would put out a new release of the whole tree, with no division between any of the patches. The only way to examine the history of his process was as a giant diff between two full releases. 

This was not because there were no open-source revision control systems available. CVS had been around since the 1980s, and it was still the most popular system in use. At its core, it would allow contributors to submit patches to a central repository and examine the history of patches going into that repository….

One of Linus’ primary concerns, in fact, was speed. This was something he had never fully articulated before, or at least not in a way that existing projects could grasp. With thousands of kernel developers across the world submitting patches full-tilt, he needed something that could operate at speeds never before imagined. 

Read more at Linux Journal