
5 Open Source Software Defined Networking Projects to Know

Throughout 2016, Software Defined Networking (SDN) continued to rapidly evolve and gain maturity. We are now beyond the conceptual phase of open source networking, and the companies that were assessing the potential of these projects two years ago have begun enterprise deployments. As has been predicted for several years, SDN is beginning to redefine corporate networking.

Market researchers are essentially unanimous on the topic. IDC published a study of the SDN market earlier this year and predicted a 53.9 percent CAGR from 2014 through 2020, at which point the market will be valued at $12.5 billion. In addition, the Technology Trends 2016 report ranked SDN as the best technology investment for 2016.

“Cloud computing and the 3rd Platform have driven the need for SDN, which will represent a market worth more than $12.5 billion in 2020. Not surprisingly, the value of SDN will accrue increasingly to network-virtualization software and to SDN applications, including virtualized network and security services. Large enterprises are now realizing the value of SDN in the datacenter, but ultimately, they will also recognize its applicability across the WAN to branch offices and to the campus network,” said Rohit Mehra, Vice President of Network Infrastructure at IDC.

The Linux Foundation recently announced the release of its 2016 report “Guide to the Open Cloud: Current Trends and Open Source Projects.” This third annual report provides a comprehensive look at the state of open cloud computing, including a section on software-defined networking. The report, which you can download now, aggregates and analyzes industry research, illustrating how trends in containers, unikernels, and more are reshaping cloud computing, and it provides descriptions and links to categorized projects central to today’s open cloud environment.

In this series, we are looking at various categories and providing extra insight on how the areas are evolving. Below, you’ll find several important SDN projects and the impact that they are having, along with links to their GitHub repositories, all gathered from the Guide to the Open Cloud:

Software-Defined Networking

ONOS

Open Network Operating System (ONOS), a Linux Foundation project, is a software-defined networking OS for service providers that has scalability, high availability, high performance and abstractions to create apps and services. ONOS on GitHub

OpenContrail

OpenContrail is Juniper Networks’ open source network virtualization platform for the cloud. It provides all the necessary components for network virtualization: SDN controller, virtual router, analytics engine, and published northbound APIs. Its REST API configures and gathers operational and analytics data from the system. OpenContrail on GitHub

OpenDaylight

OpenDaylight, an OpenDaylight Foundation project at The Linux Foundation, is a programmable, software-defined networking platform for service providers and enterprises. Based on a microservices architecture, it enables network services across a spectrum of hardware in multivendor environments. OpenDaylight on GitHub

Open vSwitch

Open vSwitch, a Linux Foundation project, is a production-quality, multilayer virtual switch. It’s designed for massive network automation through programmatic extension, while still supporting standard management interfaces and protocols including NetFlow, sFlow, IPFIX, RSPAN, CLI, LACP, and 802.1ag. It supports distribution across multiple physical servers similar to VMware’s vNetwork distributed vswitch or Cisco’s Nexus 1000V. OVS on GitHub

OPNFV

Open Platform for Network Functions Virtualization (OPNFV), a Linux Foundation project, is a reference NFV platform for enterprise and service provider networks. It brings together upstream components across compute, storage, and network virtualization to create an end-to-end platform for NFV applications. OPNFV on Bitergia

Learn more about trends in open source cloud computing and see the full list of the top open source cloud computing projects. Download The Linux Foundation’s Guide to the Open Cloud report today!


How to Manage the Security Vulnerabilities of Your Open Source Product

The security vulnerabilities that you need to consider when developing open source software can be overwhelming. Common Vulnerabilities and Exposures (CVE) IDs, zero-days, and other vulnerabilities are announced seemingly every day. With this flood of information, how can you stay up to date?

“If you shipped a product that was built on top of Linux kernel 4.4.1, between the release of that kernel and now, there have been nine CVEs against that kernel version,” says Ryan Ware, Security Architect at Intel, in the Q&A below. “These all would affect your product despite the fact they were not known at the time you shipped.”

In his upcoming presentation at ELC + OpenIoT Summit, Ryan Ware, Security Architect at Intel, will present strategies for how you can navigate these waters and successfully manage the security of your product. In this preview of his talk, Ware discusses the most common developer mistakes, strategies to stay on top of vulnerabilities, and more.

Linux.com: Let’s start from the beginning. Can you tell readers briefly about the Common Vulnerabilities and Exposures (CVE), 0-day, and other vulnerabilities? What are they, and why are they important?

Ryan Ware: Excellent questions. Common Vulnerabilities and Exposures (CVE) is a database maintained by the MITRE Corporation (a not-for-profit organization) at the behest of the United States government; it is currently funded under the US Department of Homeland Security. It was created in 1999 to house information about all publicly known security vulnerabilities. Each vulnerability has its own identifier (a CVE-ID) and can be referenced by it. This is how the term CVE, which properly applies to the whole database, has come to mean an individual security vulnerability: a CVE.

Many of the vulnerabilities that end up in the CVE database started out life as 0-day vulnerabilities. These are vulnerabilities that, for whatever reason, haven’t followed a more ordered disclosure process such as responsible disclosure. The key is that they’ve become public and exploitable without the software vendor being able to respond with a remediation of some type, usually a software patch. These and other unpatched software vulnerabilities are critically important because, until they are patched, the vulnerability is exploitable. In many ways, the release of a CVE or a 0-day is like a starting gun going off. Until you reach the end of the race, your customers are vulnerable.

Linux.com: How many are there? How do you determine which are pertinent to your product?

Ryan: Before going into how many, everyone shipping software of any kind needs to keep something in mind. Even if you take all possible efforts to ensure that the software you ship doesn’t have known vulnerabilities in it, your software *does* have vulnerabilities. They are just not known. For example, if you shipped a product that was built on top of Linux kernel 4.4.1, between the release of that kernel and now, there have been nine CVEs against that kernel version. These all would affect your product despite the fact they were not known at the time you shipped.

At this point in time, the CVE database contains 80,957 entries (as of January 30, 2017) and includes entries going all the way back to 1999, when there were 894 documented issues. The largest annual total to date came in 2014, when 7,946 issues were documented. That said, I believe that the decrease in numbers over the last two years isn’t due to there being fewer security vulnerabilities. This is something I’ll touch on in my talk.

Linux.com: What are some strategies that developers can use to stay on top of all this information?

Ryan: There are various ways a developer can float on top of the flood of vulnerability information. One of my favorite tools for doing so is CVE Details. They present the information from MITRE in a very digestible way. The best feature they have is the ability to create custom RSS feeds so you can follow vulnerabilities for the components you care about. Those with more complex tracking problems may want to start by downloading the MITRE CVE database (freely available) and pulling regular updates. Other excellent tools, such as cvechecker, allow you to check for known vulnerabilities in your software.
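As a sketch of the tracking approach Ware describes, the snippet below filters a local CVE list for the components a product ships. The file name, CSV layout, and component names are illustrative assumptions; substitute the actual export you pull from MITRE or CVE Details.

```shell
# Minimal sketch: filter a local copy of the CVE list for entries that
# mention a component you ship. The feed format and filename here are
# assumptions -- use the CSV/JSON export you actually download.
cat > cve-sample.csv <<'EOF'
CVE-2016-0728,linux_kernel,Use-after-free in keyring facility
CVE-2016-0800,openssl,DROWN cross-protocol attack on TLS
CVE-2016-5195,linux_kernel,Dirty COW race condition in copy-on-write
EOF

# Components this hypothetical product ships:
COMPONENTS="linux_kernel openssl"

for c in $COMPONENTS; do
    echo "== CVEs mentioning $c =="
    grep ",$c," cve-sample.csv
done
```

In practice you would regenerate the input file from a nightly pull of the MITRE data or a custom CVE Details RSS feed rather than maintaining it by hand.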

For key portions of your software stack, I also recommend one amazingly effective tool: Get involved with the upstream community. These are the people who understand the software you are shipping best. There are no better experts in the world. Work with them.

Linux.com: How can you know whether your product has all the vulnerabilities covered? Are there tools that you recommend?

Ryan: Unfortunately, as I said above, you will never have all vulnerabilities removed from your product. Some of the tools I mentioned above are key. However, there is one piece of software I haven’t mentioned yet that is absolutely critical to any product you ship: a software update mechanism. If you do not have the ability to update the product software out in the field, you have no ability to address security concerns when your customers are affected.  You must be able to update, and the easier the update process, the better your customers will be protected.
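A minimal sketch of the client side of such an update mechanism, assuming the device can fetch a latest-version string from an update service (stubbed here as a variable; the version numbers are made up):

```shell
# Hypothetical update check for a fielded device.
installed="2.1.3"
latest="2.2.0"   # in a real product: fetched over TLS from your update service

# sort -V orders version strings numerically; if the highest of the two
# is not the installed one, an update is available.
newest=$(printf '%s\n%s\n' "$installed" "$latest" | sort -V | tail -n1)

if [ "$newest" != "$installed" ]; then
    echo "update available: $installed -> $latest"
else
    echo "up to date"
fi
```

The real work, of course, is in delivering and verifying the update payload securely; the point here is only that the check-and-update path has to exist before the first CVE lands.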

Linux.com: What else do developers need to know to successfully manage security vulnerabilities?

Ryan: There is one mistake that I see over and over. Developers always need to keep in mind the idea of minimizing their attackable surface area. What does this mean? In practice, it means including only the things in your product that your product actually needs! This means not only ensuring that you do not incorporate extraneous software packages into your product, but also compiling projects with configurations that turn off features you don’t need.

How does this help? Imagine it’s 2014. You’ve just gone into work to see that the tech news is all Heartbleed all the time. You know you include OpenSSL in your product because you need to do some basic cryptographic functionality but you don’t use TLS heartbeat, the feature with the vulnerability in question. Would you rather:

a. Spend the rest of your week working with customers and partners, handholding them through a critical software update that fixes a highly visible security issue?

b. Be able to simply tell customers and partners that you compiled your product’s OpenSSL with the “-DOPENSSL_NO_HEARTBEATS” flag and that they aren’t vulnerable, freeing you to focus on new features and other productive activities?

The easiest vulnerability to address is the one you don’t include.

Embedded Linux Conference + OpenIoT Summit North America will be held on February 21-23, 2017 in Portland, Oregon. Check out over 130 sessions on the Linux kernel, embedded development & systems, and the latest on the open Internet of Things.

Linux.com readers can register now with the discount code, LINUXRD5, for 5% off the attendee registration price. Register now>>

Running Linux on Tiny Peripherals

It seems like every day new IoT devices with a very limited amount of RAM and storage space are appearing in homes, gardens, businesses, labs, and elsewhere. This includes things like heart rate monitors, thermometers, home automation components, and other devices that only need to perform small, limited tasks. About a year ago, Marcel Holtmann from Intel began looking at the challenge of shrinking Linux to run on these types of small, IoT devices as a hobby project, and at LinuxCon Europe, he presented what he’s learned about how to run Linux on tiny peripherals.

When you begin working with these tiny devices, you ultimately need to decide whether to use a real-time operating system (RTOS) or stick with Linux, Holtmann says. He mentioned that at the October 2015 Kernel Summit in Seoul, Jon Corbet talked about the fear that if Linux couldn’t be shrunk, it would lose out on the vast number of IoT opportunities and become less relevant. In some cases, Holtmann points out, Linux will be a better choice than an RTOS. For example, if you need an IPv6 stack or Wi-Fi connectivity to support a particular feature, most RTOSes won’t have them, but Linux does. Where do you spend your time: shrinking Linux, or building a new feature into your RTOS?

The costs of these devices are coming down drastically, to the point where people are starting to use devices like the Raspberry Pi for speaker gifts and other giveaways. Holtmann talked about three categories of devices: First, simple sensors, like thermometers and Bluetooth heart rate monitors that are very inexpensive; second, IP nodes, like IPv6-enabled devices used for home automation and with a variety of mesh technologies; and, third, larger gateway devices like routers and Wi-Fi access points. 

Holtmann said that for the tiny devices he wanted to focus on, you can’t just install your favorite distribution and hope for the best. He picked several very specific components to build his Linux-based devices: gummiboot (UEFI boot loader), Linux (OS kernel), musl (C library), ELL (utility library), and BlueZ (Bluetooth library). From the idea to implementation, he worked mostly in QEMU to avoid tangling with the hardware until he figured out what would actually work. When he was ready to test it on real hardware, he booted it from a MicroSD card on a Minnowboard Max with a USB-based Bluetooth dongle. 

For all of the details about exactly how he optimized Linux to run on a tiny device, watch the video of Holtmann’s entire presentation below.

Interested in speaking at Open Source Summit North America on September 11-13? Submit your proposal by May 6, 2017. Submit now>>

Not interested in speaking but want to attend? Linux.com readers can register now with the discount code, LINUXRD5, for 5% off the all-access attendee registration price. Register now to save over $300!

Security Hygiene for Software Professionals

As software makers, we face a unique threat model. The computers or accounts we use to develop and deliver software are of more value to an attacker than what ordinary computer users have—cloud service keys can be stolen and used for profit, and the software we ship can be loaded with malware without our knowledge. And that’s before we consider that the code we write has a tremendous value of its own and should be protected.

Taking responsibility for our security hygiene is, thankfully, not very difficult. Today, most tools we need are either already present in our operating systems or can be added without much effort. In this post, I’ll take you down a list of things you should consider.

Read more at Atomic Object

Linux: Kodi 17 Media Player Released

The Kodi media player developers have been busy working on a new release, and now it’s finally here. This release of Kodi includes some significant interface changes that should make Kodi users quite happy.

Kodi, of course, is the popular media application that some have linked with piracy of movies and TV shows. The app has generated controversy in the media, and the release of Kodi 17 will probably draw even more scrutiny as additional users begin installing it on their computers.

Read more at InfoWorld

The Internet of Things: 10 Types of Enterprise Deployments

As the Internet of Things (IoT) market continues its explosive growth and development, more and more businesses are looking at the ways they can generate business value from connected devices and the data they generate. By 2020, research firm Gartner predicts that more than half of major new business processes and systems will incorporate IoT elements.

However, just where businesses are deploying these devices, and how they are using them depends on the vertical market. Here are 10 ways that enterprises are deploying IoT for real business value.

Read more at ZDNet

AI Isn’t Just for the Good Guys Anymore

Last summer at the Black Hat cybersecurity conference, the DARPA Cyber Grand Challenge pitted automated systems against one another, trying to find weaknesses in the others’ code and exploit them.

“This is a great example of how easily machines can find and exploit new vulnerabilities, something we’ll likely see increase and become more sophisticated over time,” said David Gibson, vice president of strategy and market development at Varonis Systems.

His company hasn’t seen any examples of hackers leveraging artificial intelligence technology or machine learning, but nobody adopts new technologies faster than the sin and hacking industries, he said.

“So it’s safe to assume that hackers are already using AI for their evil purposes,” he said.

Read more at CSO Online

How To Write and Use Custom Shell Functions and Libraries

In Linux, shell scripts help us in many different ways, including performing or even automating certain system administration tasks, creating simple command-line tools, and much more.

In this guide, we will show new Linux users where to reliably store custom shell scripts, and explain how to write custom shell functions and libraries and how to use functions from those libraries in other scripts.
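The pattern the guide describes can be sketched as follows; the library location and function names are illustrative:

```shell
# Keep reusable functions in a library file, then source it from scripts.
libdir=$(mktemp -d)   # in practice, something like $HOME/bin or /usr/local/lib

cat > "$libdir/mylib.sh" <<'EOF'
# mylib.sh - small library of reusable shell functions

# print a timestamped log line
log_msg() {
    printf '%s %s\n' "$(date +%H:%M:%S)" "$*"
}

# return 0 if the given command exists on PATH
have_cmd() {
    command -v "$1" >/dev/null 2>&1
}
EOF

# Any script can now pull the functions in with the dot (source) command:
. "$libdir/mylib.sh"
log_msg "library loaded"
have_cmd ls && echo "ls is available"
```

Because the library is sourced rather than executed, its functions run in the calling script's shell and can share variables with it.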

Read the complete article at Tecmint

RethinkDB’s Realtime Cloud Database Lands at The Linux Foundation

The Cloud Native Computing Foundation today announced it has purchased the source code to RethinkDB, relicensed it under the Apache License, and contributed it to The Linux Foundation.

RethinkDB is an open source, NoSQL, distributed document-oriented database that was previously licensed under the GNU Affero General Public License, Version 3 (AGPLv3).

The software is already in production use today by hundreds of technology startups, consulting firms, and Fortune 500 companies, including NASA, GM, Jive, Platzi, the U.S. Department of Defense, Distractify, and Matters Media. But the AGPLv3 license was limiting the willingness of some companies to use and contribute to the software.

Its new Apache license enables anyone to use the software for any purpose without complicated requirements.

To learn more about the future of the RethinkDB project, we spoke with Mike Glukhovsky, who helps run developer relations at Stripe and cofounded RethinkDB in 2009. Here, Mike tells us more about RethinkDB and discusses the community’s goals going forward. See the CNCF blog for more information.

Linux.com: Now that you’ve found a home with The Linux Foundation, what does the RethinkDB community plan to focus on?

The RethinkDB community’s first goal is to ship RethinkDB 2.4, which represents a shift from a federated development process to a distributed, community-based approach. The release will add new features on top of seven years of development effort, extending a robust database used by more than 200,000 developers today.

We plan to open source a number of internal tools, artwork, and unreleased features as we build a community process to drive future development forward. Future releases are also planned for Horizon, another project by the RethinkDB team that provides a realtime backend for JavaScript apps.

Linux.com: RethinkDB is praised for its ease-of-use, rich data model and ability to support extremely flexible querying capabilities. Please elaborate on why it is easy to use and how it supports “extremely flexible querying capabilities.”

RethinkDB dramatically reduces friction while rapidly prototyping and building applications. You can get started with a powerful built-in web UI and data explorer that allows you to start modeling and exploring your data without writing any application code.

RethinkDB’s query language, ReQL, is a powerful and expressive functional query language that embeds natively in your programming language of choice. ReQL includes powerful features not usually seen in document stores, like distributed joins, Hadoop-style map-reduce, built-in HTTP support, and realtime updates on distributed queries.

We designed RethinkDB to scale linearly out of the box: you can spin up a cluster with multiple replicas and shards across multi-datacenter environments within seconds. If database nodes go down, RethinkDB will automatically fail over and maintain operations in production environments.

Linux.com: Please be more specific about why RethinkDB is appealing now, given that as a company RethinkDB was unsuccessful in creating a sustainable business despite heavy investment and the business shutting down. What brought it back to life? And when (what year) did it catch its second wind?

Ultimately, RethinkDB succeeded in creating a broad community that embraced the open-source project, but that didn’t translate into a scalable business. Companies building open-source developer tools face a unique set of challenges; doubly so when building databases.

The company behind RethinkDB shut down in 2016. A number of dedicated core team and community members have been working diligently to establish the technical and community leadership we need to keep the project going forward. Our new home with the Linux Foundation offers the support and infrastructure we need to build a long-term community effort.

Linux.com: What role is the cloud playing in driving popularity of RethinkDB?

Most modern, cloud-based infrastructures rely on clusters of nodes running application servers, microservices, databases, caches, and queues. While these systems offer flexibility and power via programmable environments, they come with the extra burden of operating these clusters. Small and medium-sized teams lack the expertise to manage the added operational burden, and large teams face challenges when deploying across multiple data centers, ensuring availability at scale, and handling complex failure scenarios.

This environment has encouraged RethinkDB’s adoption because it balances the needs of developers and operations teams equally. Developers are rapidly adopting RethinkDB because of its powerful query language, clear semantics, friendly web interface and excellent documentation. Operations teams pick RethinkDB because it linearly scales across nodes with a minimum of effort, handles failover quickly and reliably, and provides complete control over cluster administration.

Looking forward, RethinkDB’s realtime streams on queries allow modern architectures to manage the complexity of data that is constantly being updated across services and to provide solutions for IoT, realtime marketplaces, collaborative web and mobile apps, and streaming analytics. The cloud has transformed how we build software services, and it has also amplified the volume of data and changed how we interact with it. RethinkDB is designed to help solve those problems.

Linux.com: While not a part of CNCF today, would you like to see the project join CNCF in the future?

We’ve worked with members of the CNCF throughout RethinkDB’s history, and have long respected the work they do with projects like Kubernetes, Fluentd, and Prometheus. The CNCF is helping advise us on how to establish RethinkDB as an independent open-source project, and we plan to engage an open conversation with our community on where the project should live long-term. This might very well be the CNCF, but our community deserves to discuss it first.

Linux.com: What is the best way to volunteer and get involved in RethinkDB’s open source future?

We’ve been working with a community of more than 900 users and contributors in our public Slack group (#open-rethinkdb) to plan and secure a long-term open-source future for RethinkDB. Volunteers can learn how to contribute to the open-source project here: https://rethinkdb.com/contribute

We also always accept a good pull request. 🙂

The RethinkDB software is available to download at https://rethinkdb.com/. Development occurs at https://github.com/rethinkdb/rethinkdb and work has been underway on the 2.4 release, which will be available shortly. Follow the RethinkDB community discussion at http://slack.rethinkdb.com/.