
Zephyr Project Embraces RISC-V with New Members and Expanded Board Support

The Linux Foundation’s Zephyr Project, which is developing the open source Zephyr real-time operating system (RTOS) for microcontrollers, announced six new members, including RISC-V members Antmicro and SiFive. The project also announced expanded support for developer boards. Zephyr is now certified to run on 100 boards spanning ARM, x86, ARC, NIOS II, XTENSA, and RISCV32 architectures.

Antmicro, SiFive, and DeviceTone, which makes IoT-savvy smart clients, have signed up as Silver members, joining Oticon, runtime.io, Synopsys, and Texas Instruments. The other three new members — Beijing University of Posts and Telecommunications, The Institute of Communication and Computer Systems (ICCS), and Northeastern University — have joined the Vancouver Hack Space as Associate members.

The Platinum member leadership of Intel, Linaro, Nordic Semiconductor, and NXP remains the same. NXP, which has returned to an independent course after Qualcomm dropped its $44 billion bid, supplied one of the first Zephyr dev boards – its Kinetis-based FRDM-K64F (Freedom-K64F) – joining two Arduino boards and Intel’s Galileo Gen 2. Like Nordic, NXP is a leading microcontroller unit (MCU) chipmaker in addition to producing Linux-friendly Cortex-A SoCs like the i.MX8.

RTOSes go open source

Zephyr is still a toddler compared to more established open source RTOS projects like industry leader FreeRTOS, and the newer Arm Mbed, which has the advantage of being sponsored by the IP giant behind Cortex-M MCUs. Yet, the growing migration from proprietary to open source RTOSes signals good times for everyone.

“There is a major shift going on in the RTOS space with so many things driving the increase in preference for open source choices,” said Thea Aldrich, the Zephyr Project’s new Evangelist and Developer Advocate, in an interview with Linux.com. “In a lot of ways, we’re seeing the same factors and motivations at play as happened with Linux many years ago. I am the most excited to see the movement on the low end.”

RISC-V alignment

The decision to align Zephyr with similarly future-looking open source projects like RISC-V appears to be a sound strategic move. “Antmicro and SiFive bring a lot of excitement and energy and great perspective to Zephyr,” said Aldrich.

With SiFive, the Zephyr Project now has the premier RISC-V hardware player on board. SiFive created the first MCU-class RISC-V SoC with its open source Freedom E300, which powers its Arduino-compatible HiFive1 and Arduino Cinque boards. The company also produced the first Linux-friendly RISC-V SoC with its Freedom U540, the SoC that powers its HiFive Unleashed SBC. (SiFive will soon have RISC-V-on-Linux competition from an India-based project called Shakti.)

Antmicro is the official maintainer of RISC-V in the Zephyr Project and is active in the RISC-V community. Its open source Renode IoT development framework is integrated in the Mi-V platform of Microsemi, the leading RISC-V soft-core vendor. Antmicro has also developed a variety of custom software-based implementations of RISC-V for commercial customers.

Antmicro and SiFive announced a partnership in which SiFive will provide Renode to its customers as part of “a comprehensive solution covering build, debug and test in multi-node systems.” The announcement touts Renode’s ability to simulate an entire SoC for RISC-V developers, not just the CPU.

Zephyr now supports RISC-V on QEMU, as well as the SiFive HiFive1, Microsemi’s FPGA-based, soft-core M2GL025 Mi-V board, and the Zedboard Pulpino. The latter is an implementation of PULP’s open source PULPino RISC-V soft core that runs on the venerable Xilinx Zynq-based ZedBoard.

Other development boards on the Zephyr dev board list include boards based on MCUs from Microchip, Nordic, NXP, ST, and others, as well as the BBC Microbit and 96Boards Carbon. Supported SBCs that primarily run Linux, but can also run Zephyr on their MCU companion chips, include the MinnowBoard Max, Udoo Neo, and UP Squared.

Zephyr 1.13 on track

The Zephyr Project is now prepping a 1.13 build due in September, following the usual three-month release cycle. The release adds support for Precision Time Protocol (PTP) and SPDX license tracking, among other features. Zephyr 1.13 continues to expand upon Zephyr’s “safety and security certifications and features,” says Aldrich, a former Eclipse Foundation Developer Advocate.

Aldrich first encountered Zephyr when she found it to be an ideal platform for tracking her cattle with sensors on a small ranch in Texas. “Zephyr fits in really nicely as the operating system for sensors and other devices way out on the edge,” she says.

Zephyr has other advantages such as its foundation on the latest open source components and its support for the latest wireless and sensor devices. Aldrich was particularly attracted to the Zephyr Project’s independence and transparent open source governance.

“There are a lot of choices for open source RTOSes and each has its own strengths and weaknesses,” continued Aldrich. “We have a lot of really strong aspects of our project but the community and how we operate is what comes to mind first. It’s a truly collaborative effort. For us, open source is more than a license. We’ve made it transparent how technical decisions are made and community input is incorporated.”

Join us at Open Source Summit + Embedded Linux Conference Europe in Edinburgh, UK on October 22-24, 2018, for 100+ sessions on Linux, Cloud, Containers, AI, Community, and more.

Kubernetes Design and Development Explained

Kubernetes is quickly becoming the de facto way to deploy workloads on distributed systems. In this post, I will help you develop a deeper understanding of Kubernetes by revealing some of the principles underpinning its design.

Declarative Over Imperative

As soon as you learn to deploy your first workload (a pod) on the Kubernetes open source orchestration engine, you encounter the first principle of Kubernetes: the Kubernetes API is declarative rather than imperative.

In an imperative API, you directly issue the commands that the server will carry out, e.g. “run container,” “stop container,” and so on. In a declarative API, you declare what you want the system to do, and the system will constantly drive towards that state.

Think of it as the difference between manually driving a car and setting an autopilot system.

So in Kubernetes, you create an API object (using the CLI or REST API) to represent what you want the system to do. And all the components in the system work to drive towards that state, until the object is deleted.
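This desired-state model can be illustrated with a toy control loop. This is not Kubernetes source code, just a minimal Python sketch of the principle: the user declares a desired state, and a reconciliation loop repeatedly nudges the observed state toward it instead of executing one-off imperative commands.

```python
# Toy illustration of a declarative system (not real Kubernetes code):
# a control loop drives "actual" state toward "desired" state.

desired = {"nginx": "running"}   # what the user declared
actual = {}                      # what the system currently observes

def reconcile(desired, actual):
    """One pass of the control loop: create missing objects, remove extras."""
    for name, state in desired.items():
        if actual.get(name) != state:
            actual[name] = state        # e.g. "start the container"
    for name in list(actual):
        if name not in desired:
            del actual[name]            # object was deleted; clean up
    return actual

# The loop keeps running until the observed state converges on the declaration.
while actual != desired:
    reconcile(desired, actual)

print(actual)  # {'nginx': 'running'}
```

Deleting the object (removing it from `desired`) would make the same loop tear the workload down, which is exactly why Kubernetes components keep working "until the object is deleted."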

For example, when you want to schedule a containerized workload instead of issuing a “run container” command, you create an API object, a pod, that describes your desired state:

simple-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: internal.mycorp.com:5000/mycontainer:1.7.9

Read more at The New Stack

James Bottomley on Linux, Containers, and the Leading Edge

It’s no secret that Linux is basically the operating system of containers, and containers are the future of the cloud, says James Bottomley, Distinguished Engineer at IBM Research and Linux kernel developer. Bottomley, who can often be seen at open source events in his signature bow tie, is focused these days on security systems like the Trusted Platform Module and the fundamentals of container technology.

With Open Source Summit happening this month in conjunction with Linux Security Summit — and Open Source Summit Europe coming up fast — we talked with Bottomley about these and other topics. …

The Linux Foundation: Who should attend Open Source Summit and why?

Bottomley: I think it’s no secret that Linux is basically the OS of containers and containers are the future of the cloud, so anyone who is interested in keeping up to date with what’s going on in the cloud because this would be the only place they can keep up with the leading edge of Linux.

Read more at The Linux Foundation

How Blockchain and the Auto Industry Will Fit Together

“At this point, most of the specific potential uses for blockchain in various industries are quite speculative and a number of years out,” says Gordon Haff, technology evangelist at Red Hat. “What we can do, though, is think about the type of uses that play to blockchain strengths.”

It is plenty productive to explore sectors where, as Haff says, blockchain’s strong suits might be a good fit. The automotive industry quickly stands out. Some of its fundamental characteristics and concerns – think about the massive global supply chain, the complex web of licensing, taxation, and other regulations, and the important safety and trust issues – make it a fascinating candidate for blockchain-enabled innovation…

However, one catalyst for change is that the auto industry is deeply connected with some other sectors where blockchain technology shows promise. Marta Piekarska, director of ecosystem at Hyperledger, points out several major ones: supply chain, insurance, and payments. And that’s not necessarily a comprehensive list. The vehicles we drive and ride in cross many more avenues than we may realize.

“The automotive industry might be unique in the way that it combines many other platforms: entertainment, manufacturing, tracking of CO2 emissions, payments, and many others,” she explains.

Read more at Enterprisers 

What is CI/CD?

Continuous integration (CI) and continuous delivery (CD) are extremely common terms used when talking about producing software. But what do they really mean? In this article, I’ll explain the meaning and significance behind these and related terms, such as continuous testing and continuous deployment.

Quick summary

An assembly line in a factory produces consumer goods from raw materials in a fast, automated, reproducible manner. Similarly, a software delivery pipeline produces releases from source code in a fast, automated, and reproducible manner. The overall design for how this is done is called “continuous delivery.” The process that kicks off the assembly line is referred to as “continuous integration.” The process that ensures quality is called “continuous testing” and the process that makes the end product available to users is called “continuous deployment.” And the overall efficiency experts that make everything run smoothly and simply for everyone are known as “DevOps” practitioners.
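The assembly-line analogy can be sketched in a few lines of Python. The stage names below are illustrative, not from any particular CI/CD tool: each source-code change flows through the same automated sequence of integration, testing, and deployment stages.

```python
# Toy sketch of a software delivery pipeline (hypothetical stage names):
# each stage is automated and reproducible, like a factory assembly line.

def integrate(change):
    # Continuous integration: merge the change and build an artifact.
    return f"built({change})"

def run_tests(artifact):
    # Continuous testing: gate the artifact on quality checks.
    assert artifact.startswith("built(")
    return artifact

def deploy(artifact):
    # Continuous deployment: make the end product available to users.
    return f"released({artifact})"

def pipeline(change):
    # The overall design of this end-to-end flow is "continuous delivery."
    return deploy(run_tests(integrate(change)))

print(pipeline("commit-abc123"))  # released(built(commit-abc123))
```

Every commit takes the identical path, which is what makes the process fast, automated, and reproducible.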

What does “continuous” mean?

Continuous is used to describe many different processes that follow the practices I describe here. It doesn’t mean “always running.” It does mean “always ready to run.” In the context of creating software, it also includes several core concepts/best practices. 

Read more at OpenSource.com

10 Reasons to Attend ONS Europe in September | Registration Deadline Approaching – Register & Save $605

Here’s a sneak peek at why you need to be at Open Networking Summit Europe in Amsterdam next month! But hurry – spots are going quickly. Secure your spot and register by September 1 to save $605.

Open Networking Summit, the premier open networking event in North America now in its 7th year, comes to Europe for the first time next month. This event is like no other: with content presented by your peers in the networking community, sessions carefully selected by networking specialists on the program committee, and plenty of networking and collaboration opportunities, this is an event you won’t want to miss.

Highlights include:

  1. Learn About the Future & Lessons Learned in Open Networking: Hear about innovative ideas on the disruption and change of the landscape of networking and networking-enabled markets in the next 3-5 years across AI, ML, and deep learning applied to networking, SD-WAN, IIoT, data insights, business intelligence, blockchain & telecom, and more. Get an in-depth scoop on the lessons learned from today’s global deployments.
  2. 100+ Sessions Covering Telecom, Enterprise, and Cloud Networking: With a blend of deep technical/developer sessions and business/architecture sessions, there are a plethora of learning opportunities for everyone. Plan your schedule now and choose from sessions, labs, tutorials, and lightning talks presented by Airbnb, Deutsche Telekom AG, Thomson Reuters, Huawei, General Motors, Türk Telekom, China Mobile, and many more.

Read more at The Linux Foundation

A Git Origin Story

A look at Linux kernel developers’ various revision control solutions through the years, Linus Torvalds’ decision to use BitKeeper and the controversy that followed, and how Git came to be created.

Originally, Linus Torvalds used no revision control at all. Kernel contributors would post their patches to the Usenet group, and later to the mailing list, and Linus would apply them to his own source tree. Eventually, Linus would put out a new release of the whole tree, with no division between any of the patches. The only way to examine the history of his process was as a giant diff between two full releases. 

This was not because there were no open-source revision control systems available. CVS had been around since the 1980s, and it was still the most popular system around. At its core, it would allow contributors to submit patches to a central repository and examine the history of patches going into that repository….

One of Linus’ primary concerns, in fact, was speed. This was something he had never fully articulated before, or at least not in a way that existing projects could grasp. With thousands of kernel developers across the world submitting patches full-tilt, he needed something that could operate at speeds never before imagined. 

Read more at Linux Journal

Why Locking Down the Kernel Won’t Stall Linux Improvements

The Linux Kernel Hardening Project is making significant strides in reducing vulnerabilities and increasing the effort required to exploit vulnerabilities that remain. Much of what has been implemented is obviously valuable, but sometimes the benefit is more subtle. In some cases, changes with clear merit face opposition because of performance issues. In other instances, the amount of code change required can be prohibitive. Sometimes the cost of additional security development overwhelms the value expected from it.

The Linux Kernel Hardening Project is not about adding new access controls or scouring the system for backdoors. It’s about making the kernel harder to abuse and less likely for any abuse to result in actual harm. The former is important because the kernel is the ultimate protector of system resources. The latter is important because with 5,000 developers working on 25 million lines of code, there are going to be mistakes in both how code is written and in judgment about how vulnerable a mechanism might be. Also, the raw amount of ingenuity being applied to the process of getting the kernel to do things it oughtn’t continues to grow in lockstep with the financial possibilities of doing so.

Read more at The New Stack

Top Linux Developers’ Recommended Programming Books

Without question, Linux was created by brilliant programmers who employed good computer science knowledge. Let the Linux programmers whose names you know share the books that got them started and the technology references they recommend for today’s developers. How many of them have you read?

Linux is, arguably, the operating system of the 21st century. While Linus Torvalds made a lot of good business and community decisions in building the open source community, the primary reason networking professionals and developers adopted Linux is the quality of its code and its usefulness. While Torvalds is a programming genius, he has been assisted by many other brilliant developers.

I asked Torvalds and other top Linux developers which books helped them on their road to programming excellence. This is what they told me.

By shining C

Linux was developed in the 1990s, as were other fundamental open source applications. As a result, the tools and languages the developers used reflected the times, which meant a lot of C programming language. 

Read more at HPE

Diversity Empowerment Summit Highlights Importance of Allies

Diversity and inclusion are hot topics as projects compete to attract more talent to power development efforts now as well as build their ranks to carry the projects into the future. The Diversity Empowerment Summit co-located with Open Source Summit coming up in Vancouver August 29-31, will offer key insights to help your project succeed in these endeavors.

Although adoption of diversity and inclusion policies is generally seen as simply the right thing to do, finding good paths to building and implementing such policies within existing community cultures continues to be challenging. The Diversity Empowerment Summit, however, provides hard insights, new ideas, and proven examples to help open source professionals navigate this journey.

Nithya Ruff, Senior Director, Open Source Practice at Comcast, and member of the Board of Directors for The Linux Foundation, says “the mission of open source communities to attract and retain diverse contributors with unique talent and perspectives has gathered momentum, but we cannot tackle these issues without the support of allies and advocates.” Ruff will be moderating a panel discussion at the conference examining the role of allies in diversity and inclusion and exploring solid strategies for success.

Read more at The Linux Foundation