
How the Zephyr Project Is Working to Make IoT Secure

Fragmentation has been a big problem for IoT since the beginning. Companies were doing their own workarounds, there was no standardization, and there was no collaborative platform that everyone could work on together. Various open source projects are working to solve this problem, but many factors contribute to the woes of IoT devices. Anas Nashif, Technical Steering Committee (TSC) Chair of the Zephyr Project, believes that software licensing can help.

Nashif admits that there are already many open source projects trying to address the domain of embedded devices and microcontrollers. “But none of these projects offered a complete solution in terms of being truly open source or being compatible in terms of having an attractive license that would encourage you actually to use it in your product. Some of these projects are controlled by a single vendor and, as such, don’t have an acceptable governance model that breeds confidence within users,” said Nashif.

The ideal situation is a project with a democratic governance model, released under a permissive license, without a single entity in control; it should be driven by a community. That’s exactly what Zephyr is. It’s an open source project to create a real-time operating system (RTOS) optimized for resource-constrained devices across multiple architectures. Zephyr is a Linux Foundation project that was launched by Intel about two years ago.

“Zephyr is basically an attempt to drive community and developers towards one single IoT and embedded OS in open source that addresses many issues that many of the industrial members have been dealing with over the last few years,” said Nashif, an open source veteran who has been working at Intel for more than 13 years.

It’s not Linux

Zephyr doesn’t use the Linux kernel. Its kernel comes from Wind River’s Microkernel Profile for VxWorks. The first version of Zephyr, which was launched some two years ago, came out with a kernel, an IP stack, an L2 stack, and a few services. Then Intel decided to open source it. They pared the code down and cleaned it up, then started talking to industry leaders, especially The Linux Foundation. The project was launched with Intel, NXP, and Synopsys as founding members.

The 1.0 release didn’t focus on a complete solution from day one; the idea was to cover the areas that most people at that time were interested in, especially IoT. The initial release of Zephyr came out with a couple of boards on which it could run, so people could try it out. “The idea was actually to get attention from those facing the same problem in the ecosystem and get them involved in the project,” said Nashif.

At the same time, the Zephyr team wanted to get the attention of the community of hobbyists and makers. “The maker community has started using microcontrollers to automate a lot of things,” said Nashif. This community now does some exciting things with these projects and has become very active.

What about licensing?

Previously, we mentioned that software licensing played a role in the fragmentation of the IoT space. “Zephyr was launched under the Apache license. This is very permissive, which means you can take it and do whatever you want,” said Nashif. Doing whatever you want includes keeping pieces of your stack proprietary, something which is not doable with Linux, which is released under the GNU GPL v2.

Nashif has been involved with open source work for decades; he worked on Linux for almost 15 years, so he is well aware of the nuances. But he admitted that he has come across many companies who can’t use Linux on their embedded devices or microcontrollers.

Basically, if you are developing something that you can’t disclose, then you can’t use Linux, and you need to go and do your own thing, according to Nashif. “That’s basically what causes fragmentation and people reinvent the wheel over and over again. We are trying to address this with Zephyr,” he said.

Nashif said many people in this space are still skeptical of open source; they don’t want to use open source fearing they will have to release their own code, too, but Zephyr is helping to change that mindset.

When the Mesh Networking Specifications were finalized, Intel was able to offer an implementation for Zephyr OS. Many users working in the IoT space were excited to see the implementation, as they could easily use Zephyr instead of building their own custom solutions. So now they have started to look at open source more seriously.

“There are a few companies who have never done open source before, but after trying Zephyr they have started to contribute back,” said Nashif. “They have learned that not contributing back is like shooting yourself in the foot.”

Who is using Zephyr?

Because Zephyr is a fully open source project, it’s difficult to track exactly who is using it and in what use cases. However, Nashif said that he was aware of it being used in smart lights, connected home devices, and many other use cases involving mesh networks.

“We were in Germany attending an event and we came across a vendor who was using Zephyr for an inventory management system. People are using it in wearables, smart glasses, and even watches,” said Nashif.

Intel is also using Zephyr in its products. The company recently announced two new open source projects under The Linux Foundation umbrella: ACRN and the Sound Open Firmware project. Nashif said that both of these projects can use Zephyr. The good thing about Zephyr is that it’s not limited to IoT or microcontrollers; it can be used anywhere — even laptops or servers.

Security measures

One of the biggest challenges that Nashif sees for Zephyr is the demand for functional safety, security, and privacy. There’s been a lot of reporting around vulnerabilities and exploits in the IoT space. “Security shouldn’t be an afterthought. Security must be part of how you develop and run your process,” said Nashif. The Zephyr Project is working on mechanisms to meet these safety and security requirements.

Security is always a mix of hardening the hardware, the OS, and the applications that run on it. Zephyr can’t control hardware-level hardening; that comes from the hardware vendors. “What we do, however, is provide the basics and run the process in a way which does not allow, for example, exploits and bugs to go unnoticed,” said Nashif.

The project has been busy introducing memory protection features to Zephyr, which are already available in commercial RTOSes and environments. “We support thread isolation and memory protection on three major architectures supported by Zephyr,” he said.

No system can be 100 percent secure. Continuous updates to patch holes and fix bugs are needed. Zephyr allows for updates over the air, machine to machine, and over Bluetooth. Regardless of the environment it’s used in, there are easy ways to keep Zephyr-powered devices updated.

Conclusion

Zephyr is trying to solve some of the most critical problems facing the IoT and maker communities. And the fully open source project exists under the umbrella of The Linux Foundation, so anyone can start using and contributing to it. This may be the answer the IoT community was looking for.

Join us at Open Source Summit + Embedded Linux Conference Europe in Edinburgh, UK on October 22-24, 2018, for 100+ sessions on Linux, cloud, containers, AI, community, and more.

Building Leadership in Open Source: A Free Guide

Contributing code is just one aspect of creating a successful open source project. The open source culture is fundamentally collaborative, and active involvement in shaping a project’s direction is equally important. The path toward leadership is not always straightforward, however, so the latest Open Source Guide for the Enterprise from The TODO Group provides practical advice for building leadership in open source projects and communities.  

Being a good leader and earning trust within a community takes time and effort, and this free guide discusses various aspects of leadership within a project, including matters of governance, compliance, and culture. Building Leadership in an Open Source Community, featuring contributions from Gil Yehuda of Oath and Guy Martin of Autodesk, looks at how decisions are made, how to attract talent, when to join vs. when to create an open source project, and it offers specific approaches to becoming a good leader in open source communities.

Read more at The Linux Foundation

Open Source SDN Project Could Let Network Admins Duplicate Production Environments

Software Defined Networking (SDN) is an increasingly attractive option for organizations looking to automate more of their data center operations. However, SDN deployments typically come with vendor lock-in, as hardware manufacturers such as Cisco provide proprietary software solutions to go with bundles of network hardware. Similarly, turn-key software defined data center (SDDC) solutions often rely on top-down vendor integration, or have similar limitations requiring products from qualified vendors.

One team is working to change that. Japanese software firm axsh is developing an open-source software stack—code-named LiquidMetal—that combines their existing OpenVNet SDN software with OpenVDC VM orchestration software.

With the two, the developers have made it possible to take an off-the-shelf dedicated switch and configure it for any desired network topology, in effect making it possible to create complete, identical copies of a given production network…

Read more at Tech Republic

Spanning the Tree: Dr. Radia Perlman & Untangling Networks

As computer networks get bigger, it becomes increasingly hard to keep track of the flow of data over this network. How do you route data, making sure that the data is spread to all parts of the network? You use an algorithm called the spanning tree protocol — just one of the contributions to computer science of a remarkable engineer, Dr. Radia Perlman. But before she created this fundamental Internet protocol, she also worked on LOGO, the first programming language for children, creating a dialect for toddlers.

Born in 1952, Perlman was a prodigy who excelled in math and science, and in her own words, “Every time there was a new subject or a quiz I would be very excited at the opportunity to solve all sorts of puzzles”. She graduated from MIT in 1973 and got her master’s degree in 1976.

While working on her master’s degree, she worked with Seymour Papert at the MIT Artificial Intelligence Lab, which was developing LOGO, the first programming language for children. In the simplest version of this language, kids could learn the fundamentals of programming by writing programs that controlled the motion of an on-screen or motorized turtle. …

After getting her master’s degree and leaving MIT, Perlman joined BBN, a defense contractor, then moved to Digital Equipment Corporation (DEC) in 1980. At DEC, she was tasked with looking into ways to deal with the increasing complexity of the local area networks (LANs) that the company was creating. Specifically, how do you stop data from getting trapped in a loop?

Read more at Hackaday

Kubernetes Recipes: Maintenance and Troubleshooting

This is a full chapter from “Kubernetes Cookbook”—read the full book on O’Reilly’s learning platform.

In this chapter, you will find recipes that deal with both app-level and cluster-level maintenance. We cover various aspects of troubleshooting, from debugging pods and containers, to testing service connectivity, interpreting a resource’s status, and node maintenance. Last but not least, we look at how to deal with etcd, the Kubernetes control plane storage component. This chapter is relevant for both cluster admins and app developers.

Enabling Autocomplete for kubectl

Problem

It is cumbersome to type full commands and arguments for the kubectl command, so you want an autocomplete function for it.

Solution

Enable autocompletion for kubectl.

For Linux and the bash shell, you can enable kubectl autocompletion in your current shell using the following command:

$ source <(kubectl completion bash)

For other operating systems and shells, please check the documentation.
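For instance, if you want completion to persist across sessions, you can add the same line to your shell startup file, and zsh has an equivalent completion script. The commands below mirror the upstream kubectl documentation, but shell setup varies by system, so treat them as a sketch (the ~/.bashrc path is an assumption about your environment):

$ echo 'source <(kubectl completion bash)' >> ~/.bashrc   # persist bash completion across sessions
$ source <(kubectl completion zsh)                        # enable completion in the current zsh shell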

See Also

Read more at O’Reilly

Flatcar Linux: The CoreOS Operating System Lives on Beyond Red Hat

During the last KubeCon + CloudNativeCon in Copenhagen, attendees were re-introduced to Kinvolk, a Berlin-based group of open source contributors, including Chris Kühl, who were early contributors to the rkt container runtime devised at CoreOS and since donated to the CNCF. Now, Kühl and his colleagues have committed to producing and maintaining a fork of CoreOS Container Linux. Called Flatcar Linux, its immediate goal is to maintain the container-agnostic architecture, and perhaps later resume its own development path.

“With Container Linux, CoreOS created an OS that is pretty close to ideal for cloud-native infrastructure,” stated Kühl in a note to The New Stack. “When the acquisition was announced, there was a lot of confusion about what would happen to it. Thus with Flatcar Linux, we first wanted to ease those concerns by offering a drop-in replacement.

“But Flatcar Linux was likely to happen even before the acquisition was announced,” he continued. “We had been getting requests to support Container Linux and, as we mention in the FAQ, we didn’t see any means of providing commercial support for an OS without controlling the full build pipeline and maintaining it.”

Rebooting the Bootstrap

On April 30, the Kinvolk group officially released Flatcar Linux as a public project, with its own repository. A check of the first draft of its documentation reveals the group is obviously continuing CoreOS’ work in making container configuration files easier for human beings to produce.

Read more at The New Stack

How to Get Involved with Hyperledger Projects

Few technology trends have as much momentum as blockchain — which is now impacting industries from banking to healthcare. The Linux Foundation’s Hyperledger Project is helping drive this momentum as well as providing leadership around this complex technology, and many people are interested in getting involved. In fact, Hyperledger nearly doubled its membership in 2017 and recently added Deutsche Bank as a new member.  

A recent webinar, Get Involved: How to Get Started with Hyperledger Projects, focuses particularly on making Hyperledger projects more approachable. The free webinar is now available online and is hosted by David Boswell, Director of Ecosystem at Hyperledger, and Tracy Kuhrt, Community Architect.

Hyperledger Fabric, Sawtooth, and Iroha

Hyperledger currently consists of 10 open source projects, seven that are in incubation and three that have graduated to active status.  “The three active projects are Hyperledger Fabric, Hyperledger Sawtooth, and Hyperledger Iroha,” said Boswell.

Fabric is a platform for distributed ledger solutions, underpinned by a modular architecture. “One of the major features that Hyperledger Fabric has is a concept called channels. Channels are a private sub-network of communication between two or more specific network members for the purpose of conducting private and confidential transactions.”
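As a rough, hedged illustration of how channels are used in practice, the Fabric peer CLI can create a channel and join a peer to it. The orderer address, channel name, and artifact paths below are placeholder values in the style of the Fabric 1.x tutorials, not details taken from the webinar:

$ peer channel create -o orderer.example.com:7050 -c mychannel -f ./channel-artifacts/channel.tx   # create the channel from a configuration transaction
$ peer channel join -b mychannel.block                                                             # join this peer to the newly created channel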

According to the website, Hyperledger Iroha is designed to be easy to incorporate into infrastructural projects requiring distributed ledger technology. It features simple construction, with emphasis on mobile application development.

Hyperledger Sawtooth is a modular platform for building, deploying, and running distributed ledgers, and you can find out more about it in this post.  One of the main attractions Sawtooth offers is “dynamic consensus.”

“This allows you to change the consensus mechanism that’s being used on the fly via a transaction, and this transaction, like other transactions, gets stored on the blockchain,” said Boswell. “With Hyperledger Sawtooth, there are ways to explicitly let the network know that you are making changes to the same piece of information across multiple transactions. By being able to provide this explicit knowledge, users are able to update the same piece of information within the same block.”
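To sketch what changing consensus “via a transaction” can look like, Sawtooth stores consensus configuration as on-chain settings that are updated with the sawset tool. The setting names and signing-key path below are illustrative assumptions and vary between Sawtooth releases:

$ sawset proposal create \
    --key /etc/sawtooth/keys/validator.priv \
    sawtooth.consensus.algorithm.name=pbft \
    sawtooth.consensus.algorithm.version=1.0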

Sawtooth can also facilitate smart contracts. “You can write your smart contract in a number of different languages, including C++, JavaScript, Go, Java, and Python,” said Boswell. Demonstrations and resources for Sawtooth are also available online.

How to contribute to Hyperledger projects

In the webinar, Kuhrt and Boswell explain how you can contribute to Hyperledger projects. “All of our working groups are open to anyone that wants to participate, including the training and education working group,” said Kuhrt. “This particular working group meets on a biweekly basis and is currently working to determine where it can have the greatest impact. I think this is really a great place to get in at the start of something happening.”

What are the first steps if you want to make actual project contributions? “The first step is to explore the contributing guide for a project,” said Kuhrt. “All open source projects have a document at the root of their source directory called contributing, and these guides are really to help you find information about how you’d file a bug, what kind of coding standards are followed by the project, where to find the code, where to look for issues that you might start working with, and requirements for pull requests.”

Now is a great time to learn about Hyperledger and blockchain technology, and you can find out more in the next webinar coming up May 31:

Blockchain and the enterprise. But what about security?

Date: Thursday, May 31, 2018

Time: 10:00 AM Pacific Daylight Time

This talk will leave you with an understanding of how blockchain does, and does not, change the security requirements for your enterprise. Sign up now!

Submit to Speak at Hyperledger Global Forum

Hyperledger Global Forum will offer the unique opportunity for more than 1,200 users and contributors of Hyperledger projects from across the globe to meet, align, plan, and hack together in-person. Share your expertise and speak at Hyperledger Global Forum! We are accepting proposals through Sunday, July 1, 2018. Submit Now >>

This article originally appeared on The Linux Foundation

New Keynotes & Executive Leadership Track Announced for LinuxCon + ContainerCon + CloudOpen China

We’re pleased to announce numerous hosted workshops at LinuxCon + ContainerCon + CloudOpen China (LC3), taking place in Beijing, June 25–27, which provide attendees with the opportunity to learn and experience even more.

Hosted Events at LC3:

Read more at The Linux Foundation

Reports Of The Impending Demise Of Operations Are Greatly Exaggerated

Our industry has spent the past 7-8 years proclaiming the need for better integration of Dev and Ops to improve flow and quality. Despite this work — or perhaps because of it — there is a new rift forming between Dev and Ops.

Once upon a time, Developers had to be convinced that they should even care about operational concerns. But now, here we are in the middle of 2018, and there is a growing segment of Devs who proclaim that Ops is a thing of the past, won’t exist in the future, and good riddance. “Ops is dead.” “Containers and Serverless make Ops unnecessary.” “Just give us a login and get out of our way.”

Of course — like everything else in our industry — the tooling, the tasks, the organizational boundaries, and even the name of Operations are changing. But these assertions about the demise of Operations as a distinct craft and professional role are unrealistic and somewhat naive….

Developers have historically held a reductionist view that deployment equals operations. In this view, deployment is the finish line, and if there is a problem, you just deploy again with a different version. To be fair, in the smallest of organizations (i.e., a handful of devs working in cloud infrastructure) or the largest of organizations (a siloed development team building a single component of a much larger system), this is the daily view of the developer.

However, spend some time in larger enterprises, and there is a broad range of necessary day-to-day operations activities that aren’t code deployments. It is a huge list that includes responding to alerts, investigating performance, capacity planning, responding to ad-hoc business requests, managing caches, managing CDNs, configuring DNS services, managing SSL certs, managing proxies, managing firewalls/networks, running message systems, and more.

Read more at Rundeck

Software-Defined Storage or Hyperconverged Infrastructure?

It’s easy to get software-defined storage (SDS) confused with hyperconverged infrastructure (HCI). Both solutions “software-define” the infrastructure and abstract storage from the underlying hardware. They both run on commodity servers and pair well with virtualization. Reporters, analysts, vendors and even seasoned IT professionals talk about them in the same breath.

But there are important distinctions between HCI and SDS. It comes down to how you want to manage your storage. SDS requires deep storage expertise; HCI does not. While there are some differences in capital costs, the bigger difference is in operational costs. Moreover, each solves different problems and fits best in different use cases.

To start, let’s take a deeper look at what makes HCI and SDS different….

Software-defined storage (SDS) abstracts the management of physical storage, typically by creating a shared storage pool using industry-standard servers. It frees you from legacy storage arrays, or masks them underneath a software layer. That storage is managed separately from the compute and hypervisor layer.

Read more at The New Stack