
How to Get Started with Kubernetes

Kubernetes, the product of work done internally at Google to solve the problem of running containers at scale, provides a single framework for managing how containers are run across a whole cluster. The services it provides are generally lumped together under the catch-all term “orchestration,” but that covers a lot of territory: scheduling containers, service discovery between containers, load balancing across systems, rolling updates/rollbacks, high availability, and more.

In this guide we’ll walk through the basics of setting up Kubernetes and populating it with container-based applications. This isn’t intended to be an introduction to Kubernetes’s concepts, but rather a way to show how those concepts come together in simple examples of running Kubernetes.
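To make that concrete before you dive in, here is a minimal sketch of interacting with a running cluster from code, using the official Kubernetes Python client. This is our own illustration, not part of the InfoWorld guide, and it assumes a working kubeconfig (from Minikube, kubeadm, or similar):

```python
# A minimal sketch: list every pod a cluster is running, using the official
# Kubernetes Python client (pip install kubernetes). Assumes a valid
# kubeconfig is already in place (e.g., created by minikube or kubeadm).
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config by default
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces().items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)
```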

Read more at InfoWorld

OpenStack: Driving the Future of the Open Cloud

As cloud computing continues to evolve, it’s clear that the OpenStack platform provides a strong open source foundation for the cloud ecosystem. At the recent OpenStack Days conference in Melbourne, OpenStack Foundation Executive Director Jonathan Bryce noted that although the early stages of cloud technology emphasized public platforms such as AWS, Azure, and Google, the latest stage is much more focused on private clouds.

According to the OpenStack Foundation User Survey, organizations everywhere have moved beyond just kicking the tires and evaluating OpenStack to deploying the platform. In fact, the survey found that OpenStack deployments have grown 44 percent year over year. More than 50 percent of Fortune 100 companies are running the platform, and OpenStack is a global phenomenon. According to survey findings, five million cores of compute power, distributed across 80 countries, are powered by OpenStack.

The typical size of an OpenStack cloud increased over the past year as well. Thirty-seven percent of clouds have 1,000 or more cores, compared to 29 percent a year ago, and 3 percent of clouds have more than 100,000 cores. You can see the survey findings, which are based on responses from 2,561 users, in this video overview.

The fact that OpenStack is built on open source is not lost on organizations deploying it. The OpenStack Foundation User Survey shows that avoiding vendor lock-in and accelerating the ability to innovate are the top reasons cited for OpenStack deployment. According to the survey, the highest number of OpenStack deployments fall within the Information Technology industry (56 percent), followed by telecommunications, academic/research, finance, retail/e-commerce, manufacturing/industrial, and government/defense.

The survey also found that most OpenStack deployments consist of on-premises private clouds (70 percent), with public cloud deployments at 12 percent. Interestingly, containers remain the top emerging technology of interest to OpenStack users. And, 65 percent of organizations running OpenStack services inside containers use the Docker runtime, while nearly 50 percent of those using containers to orchestrate apps on OpenStack use Kubernetes.

Organizations are building infrastructure around OpenStack, too. Survey results show that the median user runs 61–80 percent of their overall cloud infrastructure on OpenStack, while the typical large user (deployment with 1,000+ cores) reports running 81–100 percent of their total infrastructure on OpenStack.
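For a taste of what running infrastructure on OpenStack looks like programmatically, here is a minimal sketch using the official openstacksdk library. This example is ours, not the survey’s, and the cloud name is a placeholder for an entry in your own clouds.yaml:

```python
# A minimal sketch of connecting to an OpenStack cloud and listing compute
# instances with openstacksdk (pip install openstacksdk). The cloud name
# "mycloud" is a placeholder for an entry in your clouds.yaml.
import openstack

conn = openstack.connect(cloud="mycloud")

for server in conn.compute.servers():
    print(server.name, server.status)
```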

OpenStack skills are clearly in high demand in the job market, and if you are seeking training and certification, opportunities abound. The OpenStack Foundation offers a Certified OpenStack Administrator (COA) exam. Developed in partnership with The Linux Foundation, the exam is performance-based and available anytime, anywhere. It allows professionals to demonstrate their OpenStack skills and helps employers gain confidence that new hires are ready to work.

The Linux Foundation also offers an OpenStack Administration Fundamentals course, which serves as preparation for the certification, along with comprehensive Linux training and other classes. Red Hat and Mirantis offer very popular OpenStack training options as well.

For a comprehensive look at trends in the open cloud, The Linux Foundation’s Guide to the Open Cloud report is a good place to start. The report covers not only OpenStack but also well-known projects like Docker and Xen Project, and up-and-comers such as Apache Mesos, CoreOS, and Kubernetes.

Now updated for OpenStack Newton! Our Essentials of OpenStack Administration course teaches you everything you need to know to create and manage private and public clouds with OpenStack. Download a sample chapter today!

Fabric 1.0: Hyperledger Releases First Production-Ready Blockchain Software

Open-source software isn’t so much built as grown. And today, the open-source blockchain consortium Hyperledger announced that its first production-ready solution for building applications, Fabric, has finished that process.

But even before the formal release of Fabric 1.0 today, hundreds of proofs-of-concept had been built. With contributions to this platform for building shared, distributed ledgers coming from 159 different engineers in 28 organizations across a number of industries, no single company owns the platform, which is hosted by The Linux Foundation.

For those going forward with that work, the group’s executive director Brian Behlendorf indicated that production-grade functionality is just a download and a few tweaks away. Behlendorf told CoinDesk:

“It’s not as easy as drop in and upgrade. But the intent is that anyplace where there were changes, that those changes will be justified.”

Read more at CoinDesk

How Linux Containers Have Evolved

In the past few years, containers have become a hot topic not just among developers but also among enterprises. This growing interest has created an increased need for security improvements and hardening, and for preparation around scalability and interoperability. All of that has required a lot of engineering, and here’s the story of how much of that engineering has happened at an enterprise level at Red Hat.

When I first met up with representatives from Docker Inc. (Docker.io) in the fall of 2013, we were looking at how to make Red Hat Enterprise Linux (RHEL) use Docker containers. (Part of the Docker project has since been rebranded as Moby.) We had several problems getting this technology into RHEL. The first big hurdle was getting a supported Copy On Write (COW) file system to handle container image layering. Red Hat ended up contributing a few COW implementations, including Device Mapper, btrfs, and the first version of OverlayFS. For RHEL, we defaulted to Device Mapper, although we are getting a lot closer on OverlayFS support.

The next major hurdle was the tooling to launch the container. At that time, upstream docker was using the LXC tools for launching containers, and we did not want to support the LXC tool set in RHEL. Prior to working with upstream docker, I had been working with the libvirt team on a tool called virt-sandbox, which used libvirt-lxc for launching containers.
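For comparison with those early hurdles, launching a container through today’s Docker Engine is a one-liner. Here is a minimal sketch with the Docker SDK for Python; it is our illustration, assuming a local Docker daemon:

```python
# A minimal sketch of launching a container via the Docker Engine API using
# the Docker SDK for Python (pip install docker). Assumes a local daemon.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Run a throwaway container, capture its output, and clean it up afterward.
output = client.containers.run("alpine", ["echo", "hello from a container"],
                               remove=True)
print(output.decode())
```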

Read more at OpenSource.com

Cloud Infrastructure Spending to Reach $40B in 2017, IDC Says

IT infrastructure spending on products (servers, enterprise storage, and Ethernet switches) for cloud deployments will increase 12.4 percent year over year in 2017, to $40.1 billion, according to an IDC report.

Public cloud data centers will account for the bulk of this infrastructure spending, or 60.7 percent. These will also grow at the fastest rate year over year: 13.8 percent.

Off-premises private cloud environments will represent 14.9 percent of overall spending and will grow 11.9 percent year over year. On-premises private clouds will account for 62.2 percent of spending on private cloud IT infrastructure and will grow 9.6 percent year over year.
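Note that those percentages use two different bases: the public and off-premises shares apply to the overall $40.1 billion total, while the 62.2 percent on-premises figure is a share of private cloud spending only. A quick sketch of the arithmetic:

```python
# Back-of-the-envelope arithmetic for the IDC figures above. The public and
# off-premises shares apply to the $40.1B total; the 62.2% on-premises
# figure applies only to private cloud spending, a different base.
total = 40.1  # billions of USD, projected 2017 cloud IT infrastructure spend

public = 0.607 * total            # ~ $24.3B on public cloud data centers
off_prem_private = 0.149 * total  # ~ $6.0B on off-premises private clouds

print(f"Public cloud:           ${public:.1f}B")
print(f"Off-prem private cloud: ${off_prem_private:.1f}B")
```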

Read more at SDxCentral

Unikernels Are Secure. Here Is Why.

Various arguments have been put forth for why unikernels are the better choice security-wise, along with some contrary opinions on why they are a disaster. I believe that, from a security perspective, unikernels can offer a level of security that is unprecedented in mainstream computing.

A smaller codebase

Classic operating systems are nothing if not generic. They support everything and the kitchen sink. Since they ship in compiled form, and since users cannot be expected to compile functionality as it is needed, everything needs to come prebuilt and activated. Case in point: your Windows laptop might come with various services activated (Bluetooth, file sharing, name resolution, and similar services). You might not use them, but they are there. Go to some random security conference and these services will likely be the attack vector used to break into your laptop, even though you’ve never used them.

Unikernels use sophisticated build systems that analyze the code you’re using and link in only the code that is actually used. The unused code doesn’t make it into the resulting image and doesn’t pose a security risk. Typically, unikernel images are in the 500 KB to 32 MB range. Our own load balancer appliances weigh in at around 2 MB.

Read more at Unikernel

What’s the Difference Between SDN and NFV?

SDN, NFV & VNF are among the alphabet soup of terms in the networking industry that have emerged in recent years.

Software defined networking (SDN), network function virtualization (NFV) and the related virtual network functions (VNF) are important trends. But Forrester analyst Andre Kindness says vague terminology from vendors has created a complicated marketplace for end users evaluating next-generation networking technology. “Few I&O pros understand (these new acronyms), and this confusion has resulted in many making poor networking investments,” he says.

So what’s the difference between SDN, NFV and VNF?

Read more at Network World

Making the Most of an SRE Service Takeover – CRE Life Lessons

In Part 1 of this series, we looked at why an SRE team would or wouldn’t choose to onboard a new application. In Part 2, we explained what an SRE team would want to learn about a service angling for SRE support, and what kind of improvements it wants to see in the service before considering it for takeover. Now, let’s look at what happens once the SREs agree to take on the pager.

Onboarding preparation

If a service entrance review determines that the service is suitable for SRE support, developers and the SRE team move into the “onboarding” phase, where they prepare for SREs to support the service.

While developers address the action items, the SRE team starts to familiarize itself with the service, building up service knowledge and familiarity with the existing monitoring tools, alerts and crisis procedures. This can be accomplished through several methods:

Read more at Google Cloud Platform Blog

New Kubernetes Online Course Now Open: Sign Up for Free

Want to learn more about Kubernetes? A new massive open online course (MOOC) — Introduction to Kubernetes (LFS158x) — is now available from The Linux Foundation and edX.

Get an in-depth primer on this powerful system for managing containerized applications in this free, self-paced course, which covers the architecture of the system, the problems it solves, and the model that it uses to handle containerized deployments and scaling. The course also includes technical instructions on how to deploy a standalone and multi-tier application.

Upon completion, you’ll have a solid understanding of Kubernetes and will be able to start testing cloud native patterns as you begin your cloud native journey.

In this course, you will learn:

  • The origin, architecture, primary components, and building blocks of Kubernetes

  • How to set up and access a Kubernetes cluster using Minikube

  • Ways to run applications on the deployed Kubernetes environment and access the deployed applications (see the sketch after this list)

  • The usefulness of Kubernetes communities and how to participate in them
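As a preview of the deployment material, here is a minimal sketch of running an application on a cluster with the Kubernetes Python client. The names (hello-nginx, the nginx image tag) are our own placeholders; the course’s exercises themselves use kubectl and Minikube:

```python
# A minimal sketch of deploying an application with the Kubernetes Python
# client (pip install kubernetes). All names here are illustrative
# placeholders; the course's own exercises use kubectl and Minikube.
from kubernetes import client, config

config.load_kube_config()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="hello-nginx"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "hello-nginx"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello-nginx"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="nginx",
                                               image="nginx:1.21")]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default",
                                                body=deployment)
```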

LFS158x is taught by Neependra Khare (@neependra), the Founder and Principal Consultant at CloudYuga Technology, which offers training and consulting services around container technologies such as Docker and Kubernetes.

Sign up for the free course now!

DevOps Fundamentals: High-Performing Organizations

This new series offers a preview of the DevOps Fundamentals: Implementing Continuous Delivery (LFS261) course from The Linux Foundation. The online, self-paced course, presented through short videos, provides basic knowledge of the process, patterns and tools used in building and managing a Continuous Integration/Continuous Delivery (CI/CD) pipeline. The included lab exercises provide the basic steps and configuration information for setting up a multiple language pipeline.

In this first article in the series, we’ll give a brief introduction to DevOps and talk about the habits of high-performance organizations. Later, we will get into the DevOps trinity: Continuous Integration, Continuous Delivery, and Continuous Deployment. You can watch the introductory video below:

High-performance organizations make work visible. They manage work in process (WIP). And, they manage flow, of course, which is the Continuous Delivery part. For successful DevOps flow, you have to foster collaborative environments. And the way you do that is through high-trust work environments, and then by learning how to embrace failure and making failure part of your habits and your culture.

The DevOps Survey, which is run by Puppet Labs and IT Revolution (where I work), has worked out the real science of this. The results of the survey found that high-performing organizations were both faster and more resilient, and we saw this in four variables.

The first is that high-performing organizations tend to deploy 30 times more frequently than low-performing organizations. Second, they had 200 times shorter lead times. Third, they also had 60 times fewer failures, such as change failures. And the fourth variable is that their mean time to recover (MTTR) was 166 times faster.
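As a toy illustration of how two of those variables are measured, deployment frequency and MTTR fall straight out of deployment and incident records. A sketch with made-up numbers:

```python
# A toy sketch of two of the four measures above, computed from hypothetical
# records: deployment frequency and mean time to recover (MTTR).
from datetime import timedelta

deploys_last_30_days = 90                 # hypothetical deployment count
recovery_times = [timedelta(minutes=18),  # hypothetical time-to-recover
                  timedelta(minutes=42),  # for each incident
                  timedelta(minutes=12)]

deploy_frequency = deploys_last_30_days / 30  # deploys per day
mttr = sum(recovery_times, timedelta()) / len(recovery_times)

print(f"Deploy frequency: {deploy_frequency:.1f}/day")
print(f"MTTR: {mttr}")
```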

So, we see this kind of Continuous Delivery where you are fast and reliable, and you have deployment automation, and you version control everything. And, all this leads to low levels of deployment pain, higher levels of IT performance, higher throughput and stability, lower change failure rates, and higher levels of performance and productivity.

In fact, there is also some data showing that this approach reduces burnout, so it is really good stuff. In the next article, we’ll talk about the value stream and lay the groundwork for Continuous Integration.

Want to learn more? Access all the free sample chapter videos now! 

This course is written and presented by John Willis, Director of Ecosystem Development at Docker. John has worked in the IT management industry for more than 35 years.