
OCI Announces Two New Tools Projects

With ContainerCon Europe currently underway in Berlin, we want to share some of the great progress the Open Container Initiative (OCI) has made.

The OCI was launched with the express purpose of developing standards for the container format and runtime that will give everyone the ability to fully commit to container technologies today without worrying that their current choice of infrastructure, cloud provider or tooling will lock them in. 

Last month, the OCI formed two new tools projects: runtime tools and image tools. These projects are associated with the OCI runtime spec and image format spec and serve as repositories for testing tools:

  • Runtime Tools: Tools for testing container runtimes implementing the OCI runtime spec, including code that tests a runtime’s conformance to the OCI runtime spec
  • Image Tools: Tools for testing container images implementing the OCI image spec, including code that validates a file’s conformance to the OCI image format spec (see the sketch below)
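
To give a rough sense of what this kind of validation involves, here is a minimal Python sketch. It is not the official OCI image tooling; it simply assumes an on-disk image layout with an oci-layout file, an index.json, and a digest-addressed blobs/ tree, and checks that those pieces are present. The function and script names are illustrative only.

```python
import json
import os
import sys


def check_image_layout(path):
    """Check an unpacked OCI image layout for the pieces the spec requires."""
    problems = []

    # The layout should contain an 'oci-layout' file declaring the layout version.
    layout_file = os.path.join(path, "oci-layout")
    if not os.path.isfile(layout_file):
        problems.append("missing oci-layout file")
    else:
        with open(layout_file) as f:
            data = json.load(f)
        if "imageLayoutVersion" not in data:
            problems.append("oci-layout has no imageLayoutVersion field")

    # A top-level index.json points at the manifests contained in the layout.
    if not os.path.isfile(os.path.join(path, "index.json")):
        problems.append("missing index.json")

    # Content is stored as blobs addressed by digest, e.g. blobs/sha256/<hex>.
    if not os.path.isdir(os.path.join(path, "blobs")):
        problems.append("missing blobs/ directory")

    return problems


if __name__ == "__main__":
    if len(sys.argv) != 2:
        sys.exit("usage: check_layout.py <path-to-image-layout>")
    issues = check_image_layout(sys.argv[1])
    for issue in issues:
        print("FAIL:", issue)
    print("layout looks OK" if not issues else f"{len(issues)} problem(s) found")
```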

We’ve also made significant progress on the runtime spec and image format spec, which are essential to furthering the proliferation and adoption of containers as they give companies confidence in the ability to move their containers between clouds.

In order to encourage ongoing, consistent communication and consensus-building during the development process, the OCI release process requires at least three release candidates (rc) before declaring v1.0. We are currently on the second release candidate for the runtime spec (v1.0.0-rc2) and the first release candidate for the image spec (v1.0.0-rc1).

As an open source project, any developer or end user can make contributions to the OCI, and we welcome feedback from the community on these release candidates and the new tools projects as we get closer to the official v1.0 release. 

Projects associated with the Open Container Initiative can be found at https://github.com/opencontainers, and you can learn more about joining the OCI community at https://www.opencontainers.org/community.

This article originally appeared on the OCI website.

Transitioning from OpenStack Hobbyist to Professional

The hardest part of pivoting your career is proving that you are qualified in your new focus area. To land your first OpenStack job, you’ll want to prove you have a functional understanding of OpenStack basics, can navigate the resources to solve problems, and have recognized competency in your focus area.

“A functional understanding of OpenStack” means you know how to work in OpenStack, not just name the projects in alphabetical order or recite its history. While you’ll want to read up on its origins and future roadmap, you’ll also want to jump in by using tools like DevStack or TryStack to explore.
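
Once you have a DevStack or TryStack environment to poke at, even a few lines of code against its APIs help cement that working knowledge. The following is a minimal sketch, not an official exercise: it uses the openstacksdk Python library and assumes a clouds.yaml entry named “devstack” with valid credentials, simply listing images, flavors, and servers as a first sanity check that you can authenticate and talk to the cloud.

```python
# A quick look around an OpenStack cloud with the openstacksdk library.
# Assumes a clouds.yaml entry named "devstack" with valid credentials.
import openstack

conn = openstack.connect(cloud="devstack")

print("Images:")
for image in conn.image.images():
    print(" -", image.name)

print("Flavors:")
for flavor in conn.compute.flavors():
    print(" -", flavor.name, f"({flavor.vcpus} vCPU, {flavor.ram} MB RAM)")

print("Servers:")
for server in conn.compute.servers():
    print(" -", server.name, server.status)
```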

Many people find courses and training helpful for gaining that “functional understanding.” Some take training because they want to get up to speed more quickly than they would on their own; some because they want to make sure they learn the nuances of OpenStack correctly; some because they simply learn best in a classroom environment. Whatever the reason, there’s no shortage of OpenStack courses. You can visit the OpenStack Training Marketplace to scroll through courses and pick from online, in-person, self-paced, or scheduled options, with levels ranging from beginner to advanced. Stacker Tip: Trainings and workshops are included in full access passes at the OpenStack Summit. You can grab a remaining spot at OpenStack Summit Barcelona, or keep an eye out for early bird pricing for OpenStack Summit Boston in the upcoming months for the ultimate savings.

The additional areas you’ll want to show competency in will depend on your focus area. If you’re a developer, be ready to demonstrate your Python skills and show your knowledge of orchestration tools like Ansible or Puppet. Participating in OpenStack code reviews is a great way to get involved in the community and display your abilities.

For administrators, the Certified OpenStack Administrator (COA) exam should be your first step. Developers shouldn’t write off admin certification, though; in both cases, passing the COA proves you have mastery of the two standard client sets (Horizon and the command-line client) and an understanding of the system you’re working on. Whether you’re self-taught or took a course, you’ll want to have your skills evaluated against a standardized set of criteria that’s recognizable to employers. Certification signals your skill level to employers and shows that you’re ready to work on OpenStack professionally.

The COA exam is the only certification offered by the OpenStack Foundation, and it was created through the collaboration of a variety of ecosystem companies. This means someone who has passed the COA can work on OpenStack at any organization, regardless of the distribution it has chosen; vendor-provided certifications, by contrast, test skills specific to that vendor’s OpenStack offering.

The COA is a functional, task-based test rather than a multiple-choice exam: candidates are asked to perform tasks frequently seen in a professional administrator role. It mirrors a real-world work environment in which you are tasked with solving a problem, using documentation and other resources to find answers, and creating an effective solution.

You can review the areas covered on the COA to determine if you’re ready. If you’re unsure about your preparation, there are training partners who specialize in preparing people for the COA. Look for the COA logo in the lower righthand corner of listings in the OpenStack Training Marketplace.

The OpenStack community is continuing to grow, evolve, and help users create solutions powered by open source. There’s plenty of space for everyone at the OpenStack table, and no shortage of professional options. We encourage you to visit openstack.org/join to find out more ways to get involved, and follow us on Twitter at @OpenStack.

Want to learn the basics of OpenStack? Take the new, free online course from The Linux Foundation and EdX. Register Now!

The OpenStack Summit is the most important gathering of IT leaders, telco operators, cloud administrators, app developers and OpenStack contributors building the future of cloud computing. Hear business cases and operational experience directly from users, learn about new products in the ecosystem and build your skills at OpenStack Summit, Oct. 25-28, 2016, in Barcelona, Spain. Register Now!

Making Sense of Cloud Native Applications, Platforms, Microservices, and More

As more and more of our infrastructure moves into the cloud, the proliferation of buzzwords, new terms, and new ways of doing things can be daunting. Fabio Chiodini, Principal System Engineer at EMC, spent some time helping us make sense of these concepts during his LinuxCon Europe talk, “Cloud Native Applications, Containers, Microservices, Platforms, CI-CD…Oh My!!”

Fabio started by talking about the companies that are transforming their industries. We naturally think about the “unicorn” companies from Silicon Valley:

  • Netflix in the $53 billion entertainment industry
  • Square in the $6 billion financial services industry
  • AirBnB in the $26 billion hotel industry
  • Tesla in the $34 billion automotive industry
  • Uber in the $50 billion transportation industry
  • Nest in the $3.2 billion industrial products industry

But the transformation is not restricted to new and emerging Silicon Valley companies; traditional enterprise customers are delivering software in similar ways:

  • Lockheed Martin using Spring Framework with Pivotal Cloud Foundry as a cloud native Platform
  • Kroger adopting DevOps practices with a Pivotal Cloud Foundry automated build pipeline
  • Mercedes-Benz working on connected cars and smart apps
  • Bosch creating an IoT suite

Fabio defines cloud native applications as “applications that do not require resilient infrastructure,” which is a more concise version of Duncan C. E. Winn’s definition from his Cloud Foundry book, “Cloud native is a term describing software designed to run and scale reliably and predictably on top of potentially unreliable cloud-based infrastructure. Cloud native applications are purposefully designed to be infrastructure unaware, meaning they are decoupled from infrastructure and free to move as required.” However, Fabio also admits that there are many different definitions depending on what aspect of cloud someone is focused on.

Looking at the application lifecycle, Fabio explained how some of these concepts fit together. In the design phase, microservices move teams from monolithic, tightly coupled applications to loosely coupled components, each of which can be deployed in an automated fashion without waiting on other components. In the deploy phase, Continuous Integration (CI) and Continuous Deployment (CD) shift teams from infrequent releases to releasing early and often, resulting in higher quality code because bugs are fixed and deployed to production more quickly rather than sitting unfixed between releases. In the manage phase, a DevOps mindset helps people move from disconnected tools and opaque processes to shared responsibility with common incentives, tools, processes, and culture.

He discussed how this results in “new requirements for IT to deploy and deliver applications reliably at scale,” putting the focus on flexibility over reliability. He broke this down into four cloud native platform requirements:

  • Programmability: “Infrastructure as Code” (see the sketch after this list)
  • Elasticity: make it easy to scale
  • Economics: affordability with standard servers and software
  • Strong Instrumentation And Telemetry: metrics and monitoring of the infrastructure layer
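
The first of these requirements, “Infrastructure as Code,” is easiest to see in a toy example. The sketch below is purely illustrative and not tied to any particular platform: the desired state of a small service tier is declared as data, and a reconcile step compares it with observed state and decides what to start, stop, or remove, which is the basic loop that cloud native platforms automate for you.

```python
# Illustrative only: desired state declared as data, plus a reconcile step
# that computes the actions needed to converge actual state toward it.
DESIRED = {
    "web": {"image": "myapp:1.4", "replicas": 3},
    "worker": {"image": "myworker:2.0", "replicas": 2},
}


def reconcile(desired, actual):
    """Return a list of (action, service) steps that converge actual toward desired."""
    actions = []
    for name, spec in desired.items():
        running = actual.get(name, 0)
        if running < spec["replicas"]:
            # Not enough instances: start the difference.
            actions.extend(("start", name) for _ in range(spec["replicas"] - running))
        elif running > spec["replicas"]:
            # Too many instances: stop the surplus.
            actions.extend(("stop", name) for _ in range(running - spec["replicas"]))
    for name in actual:
        if name not in desired:
            # Anything running that is no longer declared gets removed.
            actions.append(("remove", name))
    return actions


# Pretend we observed one web instance and a stray legacy service.
print(reconcile(DESIRED, {"web": 1, "legacy": 1}))
# [('start', 'web'), ('start', 'web'), ('start', 'worker'), ('start', 'worker'), ('remove', 'legacy')]
```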

Fabio also included a few demos to illustrate these concepts, which can be found in his ProjectSpawnSwarmtc and ReceiverCF GitHub repositories.

There are clear and solid business needs for cloud native applications with many options and technologies available. A structured approach with a simplified, flexible infrastructure offers many advantages for companies moving to cloud native applications.

The Evolution of Open Source Networking at AT&T

For many years AT&T has been at the forefront of virtualizing a Tier 1 carrier network. They’ve done so in a very open fashion and are actively participating in, and driving, many open source initiatives. These include Domain 2.0, ECOMP, and CORD, all of which are driving innovation in the global service provider market. Chris Rice, Sr. VP of Domain 2.0 Architecture and Design at AT&T, provided an overview of how AT&T got where they are today during his keynote address at the ODL Summit.

Providing a bit of history of this journey, Rice noted that today’s implementations and visions started years ago. One of the first steps was the creation of what he called a router farm, initiated when a router reached end of life and there wasn’t a new router that could simply take its place. The goal was to remove the static relationship between the edge router and the customer. Once this was done, AT&T could provide better resiliency to their customers, detect failures, do planned maintenance, and schedule backups. They could also move configurations from one router to another vendor’s router. The result was faster and cheaper; however, “it just wasn’t as reusable as they wanted.” They learned the importance of separating services from the network and from the devices.

About three and a half years ago, the greater community and ecosystem started to address many of the company’s key concerns. For example, Rice noted that Intel was continuing to improve the packet processing performance of its general-purpose CPUs, resulting in a 100x improvement over a 10-year period. What they concluded was that “next-generation carrier networks must be: cloud-based, model-driven, and software defined.” He acknowledged that today this might not sound exciting, but three and a half years ago it was insightful.

ECOMP and VNF

During this time, ECOMP (Enhanced Control, Orchestration, Management and Policy) was born. Rice noted that ECOMP is “not a science project.” It was fully vetted and has been used in production networks for the past two years. Today, ECOMP comprises more than 8.5 million lines of code and is growing every day. Rice also noted that currently 5.7 percent of AT&T’s network is virtualized, and the goal is for 30 percent to be virtualized by the end of 2016.

The Layer 2/3 SDN controller within ECOMP is based on ODL and includes a Layer 4-7 VNF management and application controller. Rice noted that a key takeaway was that ODL can be used at all layers of the OSI stack. The Layer 4-7 application controller is used to initialize and configure virtual network functions (VNFs), automate the lifecycle of VNFs, and correct and monitor faults and failures of application components. He pointed out that all of this is being achieved in a vendor- and VNF-agnostic manner.
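
For a concrete feel of what driving ODL programmatically looks like, the controller exposes its data stores over RESTCONF. The sketch below is illustrative only: it queries the operational network topology from a local controller using Python and the requests library, assuming the default port 8181 and the stock admin/admin credentials; exact endpoint paths can vary between ODL releases and deployments.

```python
# Query the operational network topology from an OpenDaylight controller
# over RESTCONF. Assumes a local controller on port 8181 with the default
# admin/admin credentials; adjust the URL and auth for your deployment.
import requests

ODL_URL = "http://127.0.0.1:8181/restconf/operational/network-topology:network-topology"

resp = requests.get(
    ODL_URL,
    auth=("admin", "admin"),
    headers={"Accept": "application/json"},
    timeout=10,
)
resp.raise_for_status()

# Print each topology known to the controller and how many nodes it contains.
for topology in resp.json().get("network-topology", {}).get("topology", []):
    nodes = topology.get("node", [])
    print(f"Topology {topology.get('topology-id')}: {len(nodes)} node(s)")
```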

Legos Not Snowflakes

Perhaps the biggest takeaway from Rice’s keynote was his comment that today’s VNFs are “snowflakes and we want Lego blocks not snowflakes.” Lego blocks come in different shapes and colors, yet they all interoperate. If each VNF is unique, it requires a costly and time-consuming “one off” integration effort. With the Lego model, they can build a framework or foundation once and add or replace interoperable VNFs as required.

There are functions that all VNFs must support regardless of their specific functionality. Rice noted that AT&T must be able to configure, test, scale, start, stop, restart, and rebuild every VNF no matter what its specific function. The industry as a whole, he continued, must work to normalize VNFs to support the common operational framework, the Lego model. To illustrate their success with this model, Rice discussed their work with the optical transport part of their network. Using their methodology, AT&T has achieved end-to-end multi-vendor interoperability and can configure and reconfigure multiple vendors’ ROADMs.

Rice finished his talk by highlighting “why ODL.” First, ODL controllers are platforms for innovation, both in the way networks are built and in the way services are designed. Second, programmable controllers such as ODL must support a rich set of northbound/southbound interfaces as well as east/west capabilities; Rice noted here that support for what he referred to as “brownfield” protocols is a mandatory requirement. Third, Rice called on the ODL community to focus on reliability and scale, and specifically called out the need for geographical redundancy.

New Linux Kernel 4.8 — Plus a Kernel-Killing Bug

After almost exactly two months of development, Linus Torvalds released kernel 4.8 into the wild on Sunday, October 2nd. Torvalds dubbed 4.8 Psychotic Stoned Sheep, probably inspired by the news that a flock of woolly ruminants ate some abandoned cannabis and, high as kites, ran amok in rural Wales, striking terror into the hearts of the locals.

This has been one of the larger releases, with many patches sent in before the first release candidate was published. However, Torvalds attributes many of the changes to the switch to a new documentation format: instead of DocBook, documentation must now be submitted in the Sphinx format.

But, apart from the shift in documentation formats, there are, of course, a slew of changes to look forward to in this release. And, although Torvalds repeatedly insisted that there was “nothing very scary” in most of his release candidate updates, some of the changes were pretty big.

Take, for example, the new improvements to open source video card drivers: AMD GPUs can now be overclocked using the free AMDGPU driver, and the kernel now supports mode-setting for the new Nvidia Pascal cards via the free Nouveau driver.

Another big change in kernel 4.8 is support for the Raspberry Pi 3’s BCM2837 SoC. Up until now, Linux kernels had to be patched to work on the latest version of the Raspberry Pi; the downstream Raspbian distribution carried those patches. Now, with support integrated natively into the Linux kernel, any distribution can be made to run on the Pi relatively easily, heavy graphical interfaces notwithstanding.

A less noticeable change, but one with real-world implications for end users, is the integration of FQ-CoDel with the mac80211 internal software queues. This may sound esoteric, but it is the first step toward solving a serious WiFi shortcoming that has dogged Linux wireless networks for years.

Known as the ath9k crypto fq bug, the issue is a side effect of the design of the 802.11 protocol and affects most Atheros-based network cards. What happens is this: if a transmitting “station” on the wireless network is slow, say running at 1 Mbps while the rest transmit at 10 Mbps, the speed of the whole network gradually degrades toward the speed (or lack thereof) of the slowest station. The integration of FQ-CoDel with the mac80211 internal software queues is the way forward to finally resolving this problem.

Kernel Killing Bug

… At least, that is what should have happened. But on October 4, Torvalds sent a message to the kernel mailing list warning of a “kernel killing” bug in code submitted by Andrew Morton. Morton was trying to squash a minor bug, but his solution was apparently worse than the issue it was trying to solve. What made the problem catastrophic is that Morton’s patch used the dreaded BUG_ON() debugging macro. BUG_ON() is triggered when something has gone badly wrong in kernel code, and it shuts down the offending process. Unfortunately, in this case, it also kills the kernel, forcing the machine to reboot.

Torvalds was annoyed with Morton, telling him to “please stop taking those kinds of patches!” However, he was also annoyed with the debugging mechanism itself, wondering whether it was worth removing “the idiotic BUG_ON() concept once and for all.”

Distros that decide to run with the 4.8 kernel should do so only once the defective code is patched. At the time of writing, Johannes Weiner was already working on a solution.

For everything else that’s new in 4.8, Michael Larabel at Phoronix has an extensive breakdown of the release.

A Guide to Building Trust in Teams and Organizations

My travels globally have given me a feeling for how best to work in many different contexts—like Latin America, West Africa, North Africa, and Southeast Asia, to name a few. And I’ve found that I can more easily adapt my work style in these countries if I focus on something that plays a role in all of them: trust….

I’ve found a way to measure trust, studied trust building, and developed a strategy for cultivating trust that’s worked for me over the years. I think it could work well in open organizations, where building trust is critical. Let me explain.

Read more at OpenSource.com

Introducing InfraKit, An Open Source Toolkit For Creating And Managing Declarative, Self-Healing Infrastructure

Docker’s mission is to build tools of mass innovation, starting with a programmable layer for the Internet that enables developers and IT operations teams to build and run distributed applications. As part of this mission, we have always endeavored to contribute software plumbing toolkits back to the community, following the UNIX philosophy of building small loosely coupled tools that are created to simply do one thing well. As Docker adoption has grown from 0 to 6 billion pulls, we have worked to address the needs of a growing and diverse set of distributed systems users. This work has led to the creation of many infrastructure plumbing components that have been contributed back to the community.

Read more at Docker blog

HPE, Dell & Cisco Lead Cloud Infrastructure Sales

In the second quarter of this year, Hewlett Packard Enterprise (HPE) topped all vendors in cloud IT infrastructure, followed by Dell, Cisco, and EMC, according to an IDC study.

Fifth place was a five-way tie between Lenovo, NetApp, IBM, Huawei, and Inspur. IDC declares a statistical tie when there is less than a one percent difference in revenue among two or more vendors.

Read more at SDx Central

Cockpit – A Powerful Tool to Monitor and Administer Multiple Linux Servers Using a Web Browser

Cockpit is an easy-to-use, lightweight, and simple yet powerful remote manager for GNU/Linux servers. It’s an interactive server administration user interface that offers a live Linux session via a web browser.

It can run on several Linux distributions, including Debian, Ubuntu, Fedora, CentOS, RHEL, and Arch Linux, among others.


Read complete article

Google Open-Sources Cartographer 3D Mapping Library

Google today said that it has open-sourced Cartographer, a library for mapping movement in space in both 2D and 3D. The technology works with the open source Robot Operating System (ROS), which makes the software easier to deploy in software systems for robots, self-driving cars, and drones.

Cartographer is an implementation of simultaneous localization and mapping, better known by its acronym SLAM.

Read more at Venture Beat