Microsoft today released its PowerShell scripting language and command-line shell as open source. The project joins .NET and the Chakra JavaScript engine as an MIT-licensed open source project hosted on GitHub.
Prebuilt alpha packages of the open source version are available for CentOS, Ubuntu, and OS X, in addition, of course, to Windows. Additional platforms are promised in the future.
The OPNFV project will turn two this October and, says OPNFV Director Heather Kirksey, like any two-year-old, it’s eagerly figuring out the world and how to interact with it — “sometimes through the occasional skinned knee but also with the excitement of learning and accomplishing new things.”
OPNFV is an integrated open platform for facilitating NFV deployments, and in her upcoming talk at LinuxCon + ContainerCon — called “Learning to Swim Upstream: OPNFV’s Approach to Upstream Integration” — Kirksey will explain why the project chose this integrated approach and examine key lessons learned. In this interview, Kirksey provides a preview of her talk and describes some of the project’s current goals, challenges, and successes.
Heather Kirksey, OPNFV Director
Linux.com: Can you briefly tell us the state of the OPNFV project?
Heather Kirksey: We recently had our second OPNFV Summit in Berlin, and it was a great experience with over 600 people across a number of different industry segments and open source communities sharing ideas and learning from each other. We’ve put out two releases to date, and expect our third one, Colorado, this fall. The process of doing these releases has taught us a great deal about the challenges of network transformation, the areas where a great deal of development is still needed, and those areas where we can have significant short-term impact.
Things that are probably top of mind for us right now are shoring up our infrastructure needs across our Pharos community testing labs, our CI/CD and automated testing capabilities, and our ability to bring new features in at a faster rate. We are focused on continuing to enable capabilities that are important to the network service environment, including security, IPv6 support, software-defined VPN, better platform performance, service function chaining, fault detection and resolution, and platform resilience.
We are also beginning to engage more on application support on top of our baseline infrastructure: things like management and orchestration, application on-boarding, service modeling, and enhanced policy capabilities across infrastructure and applications.
The end users and vendors involved in the network transformation that is NFV (not to mention software-defined networking or even cloud) are on a journey that is nothing less than the rearchitecture of their core asset — their networks, those networks that enable the connected world we inhabit today. It’s a complex journey that encompasses the reality of their legacy equipment and a bold vision of a much more dynamic and responsive world, and is intended to provide for future services no one has even envisioned yet.
Linux.com: You’ve said the OPNFV project differs from traditional projects in that its work is focused upstream and leverages existing code bases. Can you talk briefly about that and explain why you adopted this approach?
Heather: First of all, OPNFV developers definitely do write code, and if we find a gap that no one else is working on, there’s nothing to stop us from taking on something of that sort ourselves!
But certainly right now, we do focus on integration, testing, and providing feedback upstream to groups such as OpenStack, OpenDaylight, ONOS, OVS, FD.io, KVM, the Linux kernel, etc. The main reason we do that is that there’s no reason to reinvent the wheel. There’s no way it makes sense for us to write our own SDN controller, a new hypervisor, a cloud management platform, and certainly not an operating system! One of the starting assumptions for us as a project is that many of the pieces necessary to assemble an end-to-end NFV platform were already out there, but no one had really tried to put them together, see how they worked together, or identify the gaps between existing capabilities and the needs of NFV.
One of the really cool things about this systems integration perspective is that we get so much experience across so many different projects and code bases rather than living in a silo. It also helps us see things in action in more real-world situations. For example, in the two months leading up to our Brahmaputra release, we deployed an OpenStack data center over 3,000 times across multiple hardware vendors in different environments, on bare metal and in nested virtualized scenarios. There are very few people on the planet who can claim that level of facility and experience with OpenStack.
Additionally, we are able to test features and capabilities at a system level. For example, we started getting early build drops of OpenDaylight Beryllium in order to test service function chaining. It’s difficult to really test this sort of feature without a full platform of OpenStack + KVM + ODL + OVS + Tacker + virtual network functions to make sure that a service function chain across VNFs would actually work. No one else was in a position to do that sort of testing, and we were able to give a ton of feedback to the ODL developers, who used it to improve the stability of ODL before Beryllium was released.
We really want to get better at this last part — being able to test project code in more integrated, real-world use cases and work with the developers on those projects to make their code even better and better tested.
Linux.com: What are some of the difficulties you’ve faced because of this integrated approach and how are you dealing with those?
Heather: The challenges are really those of any system integrator: we are building an end-to-end platform and testing code that is “owned” by other organizations, so we need to work through them to get bugs fixed and new capabilities implemented. The challenges break down into a few areas:
Mindshare in the upstream group — all these organizations have their own priorities, deliverables, and culture, and introducing new use cases and new needs from a new community can be challenging.
Integration and testing challenges — many of these projects, although modularly designed, weren’t necessarily originally coded for the purpose of working with each other. We are definitely finding the pain points by testing an integrated platform at system level, and we certainly find bugs that are not uncovered by the individual testing each organization does on its own. We are also working on how we can give more relevant and timely feedback to upstream developers through third-party CI/CD integration.
Cross-project coordination — implementing a particular capability, for example enhanced fault detection and resolution, requires coordinated changes in multiple code bases and multiple APIs. Our Doctor project ended up having to touch OpenStack Neutron, OpenStack Nova, OpenStack Ceilometer, Aodh, DPDK, and more to create the capabilities they were looking for. And that’s just one project within OPNFV! Telling a coherent story and managing the development work across all these groups requires both strong motivation and a great deal of skill.
Linux.com: What are the immediate problems the project is attempting to solve? What are some hurdles you’ll need to overcome to meet these short-term goals?
Heather: Helping our end users deploy new network applications, that is, new NFV products and services, is the fundamental need most of our work is designed to support. Whether that’s features like enhanced network management, IPv6 support, application orchestration, or service function chaining, that’s what we’re ultimately trying to enable.
In terms of specific short-term goals, continuing to refine our release process, shoring up our CI/CD needs, and community on-boarding are certainly top of mind.
In terms of larger and longer-term focus areas, we recently published a survey with Heavy Reading in conjunction with OPNFV Summit, which gave us great insight into the challenges our end users see as the biggest pain points. Increased focus on security was number one, and we are currently undergoing the Linux Foundation Core Infrastructure Initiative’s security badging process. We are also hoping that some of our security projects are able to deliver into Colorado.
Additionally, Management and Orchestration of network functions came in at number two in the survey, and we have a large number of projects, as well as a recently formed coordinating working group, addressing this hot-button topic.
Finally, working better and more productively with the myriad upstream projects will always be a core goal for us. The faster and better we can get them feedback, the better we can articulate our use cases, and the more system-level testing we subject all the code to, the better off we will all be.
Linux.com: What is the project’s greatest success so far?
Heather: Hands-down, our greatest success is our vibrant community. The telecom industry is used to working in standards bodies where competitors join forces toward a common aim, but the level of collaboration required to develop implementations in an open source project is even higher. People who want to see different solutions are supporting the broader community and helping all to move forward, realizing that we all benefit from the work. I’ve been completely delighted by the spirit of community that this group has fostered in its time so far. For an industry that is very new to the world of open source, we have definitely hit the ground running.
Look forward to three days and 175+ sessions of content covering the latest in containers, Linux, cloud, security, performance, virtualization, DevOps, networking, datacenter management and much more. You don’t want to miss this year’s event, which marks the 25th anniversary of Linux! Register now before tickets sell out.
With more than 25 billion Internet-connected things predicted to hit the market by 2020, the “Internet of Things” is evolving from a promise to an everyday reality. Whether it’s how we control our energy usage or secure our homes, smart devices are changing the world we live in and how we live.
IoT, like any disruptive technology shift, brings opportunities as well as challenges. Open source presents an opportunity for IoT to overcome interoperability barriers and innovate at an unprecedented rate. It provides a neutral forum for collaboration at scale and allows developers to contribute and advance software so that IoT products can get to market faster.
One key challenge is choice, and developers have a lot of it. For IoT to deliver on the promise of seamless connectivity, devices need a highly modular platform that can easily integrate with embedded devices. While Linux has proven itself time and again as the de facto operating system choice for embedded development, some IoT devices require a real-time operating system (RTOS) that addresses the very smallest of memory footprints.
To provide an open source solution that complements real-time Linux but keeps critical concerns like security and modularity top of mind, we created the Zephyr Project. Zephyr is a small, scalable RTOS designed specifically for small-footprint IoT devices. It also comes with development tools and has a modular design, so developers can customize its capabilities and create IoT solutions that meet the needs of any device, regardless of architecture. This enables easier connectivity to the cloud as well as to other IoT devices.
Recently the Zephyr Project announced Linaro as its newest member, joining the likes of Intel, NXP Semiconductors and Synopsys. As a global leader in open source development for the ARM ecosystem, Linaro will help drive Zephyr specifications and initiatives, and help the project realize its vision of becoming the premier multi-architecture open source RTOS for IoT.
The Zephyr Project comes at a critical time for the IoT small device development community. As an open source project, Zephyr unites the community to help make small, embedded devices “smarter,” while ensuring ubiquitous connectivity and security in small device infrastructure. It’s an exciting time for IoT, and we encourage anyone interested to join the effort.
The ever-popular ownCloud open source file-sharing and storage platform for building private clouds went through a shakeup not long ago. CTO and founder of ownCloud Frank Karlitschek resigned from the company and penned an open letter, which pointed to possible friction created as ownCloud moved forward as a commercial entity as opposed to a solely community-focused, open source project. A few months after that decision, though, Karlitschek revealed a very promising new cloud platform: Nextcloud.
Nextcloud is a fork of ownCloud, and there are strong signs that we can expect good things from this open platform. Although ownCloud is open core, all of Nextcloud’s features are open source. The first release is based on ownCloud 9, which arrived in March 2016. The bottom line is that the testing and hardening that made ownCloud a solid platform carry over to Nextcloud. It’s already a proven private infrastructure-as-a-service (IaaS) cloud platform. Nextcloud introduces many new features, too, including file drop capabilities and enterprise-class logging. The logging feature enables administrators to generate compliance reports or auditing information, and they can feed the logs into enterprise tools such as Splunk.
Nextcloud is making moves that strongly differentiate it from ownCloud, and they are moves that could attract the DevOps community and enterprise IT departments. In fact, the Nextcloud site notes the following, regarding the instant carryover community that Nextcloud will benefit from:
“Started by the well known open source file sync and share developer Frank Karlitschek and joined by the most active contributors to his previous project, building on its mature code base, we offer a more reliable and sustainable solution for users and customers. We have developed a drop-in replacement for that legacy code base, providing the bug fixes and security hardening all users need and the Enterprise Subscription capabilities enterprise customers require, all fully open source.”
With Nextcloud, the company is providing enterprise support subscriptions and good bridges to the cloud via mobile devices. Nextcloud recently announced an iOS app that gives users instant access to Nextcloud-stored content, and the company has also announced Nextcloud Android Client version 1.1.0 on the Google Play Store.
Focus on Security
Nextcloud is also focusing on security, which can be a sticky issue for open cloud platforms. It is adding two-factor authentication and methods for blocking brute-force attacks. Nextcloud will also support the use of Google Authenticator and authentication via SMS. “We made a number of improvements to the security of the code base, hardening it against potential attacks, and fixed a number of bugs, making sure an upgrade doesn’t leave the installation in a broken state,” the developers report.
What about applications that tie in with the Nextcloud platform? Nextcloud has partnered with Collabora Productivity to bring Collabora Online Development Edition (CODE) to Nextcloud users. This is a version of the LibreOffice productivity suite that caters to enterprise users. When it comes to offering productivity applications that can incorporate cloud storage and services, this puts Nextcloud on a level playing field with Microsoft’s Office 365 suite and Google Docs.
Version 10 of Nextcloud’s platform is in beta testing now, and you can download the beta and access forum-based support here. You can also learn how to install Nextcloud on Ubuntu in this tutorial. Open source cloud platforms have been all the rage for the past several years, and even though Nextcloud is only a couple of months old, it comes from a proven cloud player in Frank Karlitschek, and it’s a story to watch.
Meanwhile, ownCloud is far from forgotten. “There is tremendous potential in ownCloud and it is an open source product protected by the AGPL license,” Karlitschek wrote in his open resignation letter. In fact, ownCloud 9.0 Enterprise Edition has just arrived. It incorporates full federation, letting users on different servers share directories and files. If you’re interested in exploring ownCloud, you can take a guided video tour of the platform here.
When 18F started, deploying government services into a public cloud was still fairly uncommon. However, over the last two years nearly everything 18F has built for our agency partners has been deployed into Amazon Web Services (AWS), including our platform-as-a-service cloud.gov. Meanwhile, other federal agencies have also started using commercial public clouds, some at a large scale.
Over that time, as a result of the success of implementing the federal cloud-first strategy, 18F’s AWS account has grown in size and complexity. We need a new approach to ensure it remains manageable. In this post, we’ll describe our plan for evolving our existing cloud deployment based on modern DevOps principles and practices. Future blog posts will discuss how we are executing each part of our strategy.
At DockerCon 2016, held in Seattle, USA, Aaron Grattafiori presented “The Golden Ticket: Docker and High Security Microservices”. His core recommendations for running secure container-based microservices included enabling user namespaces, configuring application-specific AppArmor or SELinux profiles, using an application-specific seccomp whitelist, hardening the host system (including running a minimal OS), restricting host access, and considering network security.
Grattafiori, Technical Director at NCC Group and author of “Understanding and Hardening Linux Containers” (PDF), began the talk by introducing the principle of defense in depth, which consists of presenting a layered defense, shrinking attack surfaces, and hardening those that remain. Although microservices may add overall complexity to a system architecture (particularly when operated at scale), the fact that they can be implemented so as not to present a single point of security failure gives them an advantage over a typical monolithic application.
The principle of least privilege, e.g. not running an application process as root, is vitally important to securing a system.
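As a rough illustration of how several of these recommendations combine in practice (a sketch of my own, not code from the talk; the image name, user ID, AppArmor profile name, and seccomp file path are hypothetical), a container launched through the Docker SDK for Python can run as an unprivileged user with all capabilities dropped and application-specific profiles applied:

```python
# Least-privilege container launch, sketched with the Docker SDK for Python
# (pip install docker). Image, user ID, profile name, and path are examples.
import docker

client = docker.from_env()

# The Engine API takes the seccomp profile JSON inline, so read the file here.
with open("seccomp-whitelist.json") as f:
    seccomp_profile = f.read()

container = client.containers.run(
    "example.org/microservice:1.0",    # hypothetical application image
    user="10001:10001",                # never run the app process as root
    cap_drop=["ALL"],                  # drop every Linux capability by default
    security_opt=[
        "no-new-privileges",           # block escalation via setuid binaries
        "apparmor=svc-profile",        # application-specific AppArmor profile
        "seccomp=" + seccomp_profile,  # application-specific syscall whitelist
    ],
    read_only=True,                    # immutable root filesystem
    pids_limit=64,                     # bound runaway process creation
    detach=True,
)
```

Each option closes a distinct avenue of attack: a compromised process cannot escalate privileges, modify its root filesystem, or issue syscalls outside the whitelist.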
The Electronic Frontier Foundation has praised new federal guidelines aimed at improving the sharing of federally developed software code but complained that the government’s 20 percent release goal does not go far enough.
The policy, announced by U.S. CIO Tony Scott on Aug. 8, seeks to make federal source code more accessible while increasing sharing across government and reducing duplicative software purchases.
The policy calls for agencies to open 20 percent of their custom code for the duration of a three-year pilot project, including making that code available to the public.
OpenFlow and other software-defined networking controllers can discover and combat DDoS attacks, even from within your own network.
Attacks based on the distributed denial of service (DDoS) model are, unfortunately, common practice, often used to extort protection money or sweep unwanted services off the web. Currently, such attacks can reach bandwidths of 300 Gbps or more. Admins usually defend themselves by securing the external borders of their own networks and listening for unusual traffic signatures on the gateways, but sometimes they fight attacks even farther outside the network – on the Internet provider’s site – by diverting or blocking the attack before it overloads the line and paralyzes the victim’s services.
In the case of cloud solutions and traditional hosting providers, the attackers and their victims often reside on the same network. Thanks to virtualization, they could even share cores on the same physical machine. In this article, I show you how to identify such scenarios and fight them off with software-defined networking (SDN) technologies.
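To make the pattern concrete, the following is a minimal sketch using the Ryu OpenFlow controller, one of several SDN controllers that could implement it. The threshold and timeout values are arbitrary placeholders, and a production version would track rates over time windows rather than raw counts:

```python
# Sketch of in-network DDoS mitigation as a Ryu OpenFlow 1.3 application
# (pip install ryu; run with: ryu-manager ddos_guard.py). Assumes a
# table-miss rule already sends unmatched traffic to the controller.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3
from ryu.lib.packet import packet, ipv4

PACKET_IN_THRESHOLD = 500  # placeholder: packet-ins per source before blocking


class DDoSGuard(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    def __init__(self, *args, **kwargs):
        super(DDoSGuard, self).__init__(*args, **kwargs)
        self.counts = {}     # source IP -> packet-in count
        self.blocked = set()

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        msg = ev.msg
        ip = packet.Packet(msg.data).get_protocol(ipv4.ipv4)
        if ip is None or ip.src in self.blocked:
            return
        self.counts[ip.src] = self.counts.get(ip.src, 0) + 1
        if self.counts[ip.src] > PACKET_IN_THRESHOLD:
            self._block(msg.datapath, ip.src)

    def _block(self, dp, src):
        # A high-priority flow entry with no instructions: in OpenFlow 1.3,
        # matching traffic is silently dropped at the switch itself.
        parser = dp.ofproto_parser
        match = parser.OFPMatch(eth_type=0x0800, ipv4_src=src)
        mod = parser.OFPFlowMod(datapath=dp, priority=1000,
                                hard_timeout=300,  # let the block auto-expire
                                match=match, instructions=[])
        dp.send_msg(mod)
        self.blocked.add(src)
        self.logger.info("blocked suspected DDoS source %s", src)
```

Because the drop rule lives in the switch’s flow table, the attack traffic never crosses the shared uplink, which is exactly the advantage of fighting the attack inside the provider’s own network.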
‘Please move to 4.7.1 now,’ the kernel’s stable-release maintainer says. If you’re using a version of Linux based on the 4.6 series of the kernel, the maintainer of the kernel’s stable releases has a message for you: It’s time to upgrade.
Greg Kroah-Hartman on Tuesday announced the arrival of Linux 4.6.7 and made it clear that it will be the last in the kernel’s 4.6 series. Version 4.7.1 made its debut on Tuesday as well, and that’s where the future lies, Kroah-Hartman said.
Big data and analytics are transforming Network Virtualization (NV) by taking advantage of new sources of data and providing analytics tools that can link to automation in software-defined networks (SDNs).
One of the things that IT organizations often fail to appreciate about network virtualization is the amount of visibility it can provide into the overall IT environment. Network overlays typically expose a set of northbound application programming interfaces (APIs) that give analytics applications access to more data than was previously available when operating only at the hardware level.
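To illustrate, in purely hypothetical form (every controller defines its own northbound schema, so the endpoint, field names, and credentials below are invented), an analytics script might poll such an API and flag overlay segments whose traffic spikes, handing the result to an automation hook in the SDN:

```python
# Hypothetical northbound-API poll: the endpoint, fields, and credentials are
# invented for illustration; real controllers each define their own schema.
import requests

CONTROLLER = "https://sdn-controller.example.com:8443"

resp = requests.get(
    CONTROLLER + "/api/v1/overlay/flow-stats",  # hypothetical endpoint
    auth=("analytics", "example-password"),
    timeout=10,
)
resp.raise_for_status()

# Flag tenant segments pushing more than ~1 Gb/s; an automation hook could
# then rate-limit or rebalance those overlays without touching hardware.
for flow in resp.json().get("flows", []):
    if flow.get("bits_per_sec", 0) > 1e9:
        print("hot overlay segment:", flow.get("tenant"), flow.get("segment"))
```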