
The Changing Face of the Hybrid Cloud

Depending upon the event you use to start the clock, cloud computing is only a little more than 10 years old. Some terms and concepts around cloud computing that we take for granted today are newer still. The National Institute of Standards and Technology (NIST) document that defined now-familiar cloud terminology—such as Infrastructure-as-a-Service (IaaS)—was only published in 2011, although it had circulated widely in draft form for some time before that.

Among other definitions in that document was one for hybrid cloud. Looking at how that term has shifted during the intervening years is instructive. Doing so shows how cloud-based infrastructures have moved beyond a relatively simplistic taxonomy, and it highlights how priorities familiar to adopters of open source software—such as flexibility, portability, and choice—have made their way to the hybrid cloud.

Read more at OpenSource.com

Dangerous Logic – De Morgan & Programming

Programmers are master logicians – well, they sometimes are. Most of the time they are as useless at it as the average Joe. The difference is that the average Joe can avoid logic and hence the mistakes. How good are you at logical expressions, and why exactly is Augustus De Morgan your best friend, logically speaking?

It is commonly held that programming is a logical subject.

Programmers are great at working out the logic of it all and expressing it clearly and succinctly, but logic is tough to get right.

IFs and Intervals

A logical expression is just something that works out to be true or false.

Generally you first meet logical expressions as part of learning about if statements. Most languages have a construct something like…
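The article's point can be sketched in a few lines of Python. The following is an illustrative example, not code from the article: it checks both of De Morgan's laws over every truth combination, then applies one of them to the kind of interval test the article alludes to (the `outside` function and its bounds are made up for illustration).

```python
from itertools import product

# De Morgan's laws: negation distributes over AND/OR by flipping the operator.
#   not (a and b)  is equivalent to  (not a) or  (not b)
#   not (a or  b)  is equivalent to  (not a) and (not b)
for a, b in product([True, False], repeat=2):
    assert (not (a and b)) == ((not a) or (not b))
    assert (not (a or b)) == ((not a) and (not b))

# A practical use: rewriting a "not inside the interval" test.
def outside(x, low, high):
    # not (low <= x <= high)  rewrites, via De Morgan, to  x < low or x > high
    return x < low or x > high

print(outside(5, 1, 10))   # False: 5 lies inside [1, 10]
print(outside(42, 1, 10))  # True: 42 lies outside [1, 10]
```

The rewrite matters in practice because the negated form (`not (low <= x <= high)`) and the De Morgan form (`x < low or x > high`) read very differently, and mixing the two styles carelessly is exactly where the logic errors the article warns about creep in.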

Read more at I Programmer

How to Get Started with Kubernetes

Kubernetes, the product of work done internally at Google to solve the problem of managing containers at scale, provides a single framework for managing how containers are run across a whole cluster. The services it provides are generally lumped together under the catch-all term “orchestration,” but that covers a lot of territory: scheduling containers, service discovery between containers, load balancing across systems, rolling updates/rollbacks, high availability, and more.

In this guide we’ll walk through the basics of setting up Kubernetes and populating it with container-based applications. This isn’t intended to be an introduction to Kubernetes’s concepts, but rather a way to show how those concepts come together in simple examples of running Kubernetes.

Read more at InfoWorld

OpenStack: Driving the Future of the Open Cloud

As cloud computing continues to evolve, it’s clear that the OpenStack platform is securing a strong open source foundation for the cloud ecosystem. At the recent OpenStack Days conference in Melbourne, OpenStack Foundation Executive Director Jonathan Bryce noted that although the early stages of cloud technology emphasized public platforms such as AWS, Azure and Google, the latest stage is much more focused on private clouds.

According to the OpenStack Foundation User Survey, organizations everywhere have moved beyond just kicking the tires and evaluating OpenStack to deploying the platform. In fact, the survey found that OpenStack deployments have grown 44 percent year-over-year. More than 50 percent of Fortune 100 companies are running the platform, and OpenStack is a global phenomenon. According to survey findings, five million cores of compute power, distributed across 80 countries, are powered by OpenStack.

The typical size of an OpenStack cloud increased over the past year as well. Thirty-seven percent of clouds have 1,000 or more cores, compared to 29 percent a year ago, and 3 percent of clouds have more than 100,000 cores. You can see the survey findings, which are based on responses from 2,561 users, in this video overview.

The fact that OpenStack is built on open source is not lost on organizations deploying it. The OpenStack Foundation User Survey shows that avoiding vendor lock-in and accelerating the ability to innovate are the top reasons cited for OpenStack deployment. According to the survey, the highest number of OpenStack deployments fall within the Information Technology industry (56 percent), followed by telecommunications, academic/research, finance, retail/e-commerce, manufacturing/industrial, and government/defense.

The survey also found that most OpenStack deployments consist of on-premises private clouds (70 percent), with public cloud deployments at 12 percent. Interestingly, containers remain the top emerging technology of interest to OpenStack users. And 65 percent of organizations running OpenStack services inside containers use the Docker runtime, while nearly 50 percent of those using containers to orchestrate apps on OpenStack use Kubernetes.

Organizations are building infrastructure around OpenStack, too. Survey results show that the median user runs 61–80 percent of their overall cloud infrastructure on OpenStack, while the typical large user (deployment with 1,000+ cores) reports running 81–100 percent of their total infrastructure on OpenStack.

OpenStack skills are clearly in high demand in the job market, and if you are seeking training and certification, opportunities abound. The OpenStack Foundation offers a Certified OpenStack Administrator (COA) exam. Developed in partnership with The Linux Foundation, the exam is performance-based and available anytime, anywhere. It allows professionals to demonstrate their OpenStack skills and helps employers gain confidence that new hires are ready to work.

The Linux Foundation also offers an OpenStack Administration Fundamentals course, which serves as preparation for the certification. The Foundation also offers comprehensive Linux training and other classes. You can explore options here. Red Hat and Mirantis offer very popular OpenStack training options as well.

For a comprehensive look at trends in the open cloud, The Linux Foundation’s Guide to the Open Cloud report is a good place to start. The report covers not only OpenStack, but also well-known projects like Docker and Xen Project, and up-and-comers such as Apache Mesos, CoreOS and Kubernetes.

Now updated for OpenStack Newton! Our Essentials of OpenStack Administration course teaches you everything you need to know to create and manage private and public clouds with OpenStack. Download a sample chapter today!

Fabric 1.0: Hyperledger Releases First Production-Ready Blockchain Software

Open-source software isn’t so much built as it is grown. And today, the open-source blockchain consortium Hyperledger has announced that its first production-ready solution for building applications, Fabric, has finished that process.

But even before the formal release of Fabric 1.0 today, hundreds of proofs-of-concept had been built. With contributions from 159 different engineers at 28 organizations across a number of industries, no single company owns the platform for building shared, distributed ledgers, which is hosted by the Linux Foundation.

For those going forward with that work, the group’s executive director Brian Behlendorf indicated that production-grade functionality is just a download and a few tweaks away. Behlendorf told CoinDesk:

“It’s not as easy as drop in and upgrade. But the intent is that anyplace where there were changes, that those changes will be justified.”

Read more at CoinDesk

How Linux Containers Have Evolved

In the past few years, containers have become a hot topic among not just developers, but also enterprises. This growing interest has driven an increased need for security improvements and hardening, along with preparation for scalability and interoperability. All of this has necessitated a lot of engineering, and here’s the story of how much of that engineering has happened at an enterprise level at Red Hat.

When I first met up with representatives from Docker Inc. (Docker.io) in the fall of 2013, we were looking at how to make Red Hat Enterprise Linux (RHEL) use Docker containers. (Part of the Docker project has since been rebranded as Moby.) We had several problems getting this technology into RHEL. The first big hurdle was getting a supported Copy On Write (COW) file system to handle container image layering. Red Hat ended up contributing a few COW implementations, including Device Mapper, btrfs, and the first version of OverlayFS. For RHEL, we defaulted to Device Mapper, although we are getting a lot closer on OverlayFS support.

The next major hurdle was the tooling to launch the container. At that time, upstream docker was using LXC tools for launching containers, and we did not want to support the LXC tools set in RHEL. Prior to working with upstream docker, I had been working with the libvirt team on a tool called virt-sandbox, which used libvirt-lxc for launching containers.

Read more at OpenSource.com

Cloud Infrastructure Spending to Reach $40B in 2017, IDC Says

IT infrastructure spending on products — servers, enterprise storage, and Ethernet switches — for cloud deployments will increase 12.4 percent year over year in 2017, reaching $40.1 billion, according to an IDC report.

Public cloud data centers will account for the bulk of this infrastructure spending, or 60.7 percent. These will also grow at the fastest rate year over year: 13.8 percent.

Off-premises private cloud environments will represent 14.9 percent of overall spending and will grow 11.9 percent year over year. On-premises private clouds will account for 62.2 percent of spending on private cloud IT infrastructure and will grow 9.6 percent year over year.

Read more at SDxCentral

Unikernels Are Secure. Here Is Why.

Various arguments have been put forth for why unikernels are the better choice security-wise, along with some contradictory opinions on why they are a disaster. I believe that, from a security perspective, unikernels can offer a level of security that is unprecedented in mainstream computing.

A smaller codebase

Classic operating systems are nothing if not generic. They support everything and the kitchen sink. Since they ship in compiled form, and since users cannot be expected to compile functionality as it is needed, everything must come prebuilt and activated. Case in point: your Windows laptop might come with various services activated (Bluetooth, file sharing, name resolution, and similar services). You might not use them, but they are there. Go to some random security conference and these services will likely be the attack vector used to break into your laptop — even though you’ve never used them.

Unikernels use sophisticated build systems that analyze the code you’re using and only link in the code that is actually used. The unused code doesn’t make it into the image created and doesn’t pose a security risk. Typically, unikernel images are in the 500KB-32MB range. Our own load balancer appliances weigh in at around 2MB.

Read more at Unikernel

What’s the Difference Between SDN and NFV?

SDN, NFV, and VNF are among the alphabet soup of terms that have emerged in the networking industry in recent years.

Software defined networking (SDN), network function virtualization (NFV) and the related virtual network functions (VNF) are important trends. But Forrester analyst Andre Kindness says vague terminology from vendors has created a complicated marketplace for end users evaluating next-generation networking technology. “Few I&O pros understand (these new acronyms), and this confusion has resulted in many making poor networking investments,” he says.

So what’s the difference between SDN, NFV and VNF?

Read more at Network World

Making the Most of an SRE Service Takeover – CRE Life Lessons

In Part 2 of this blog post we explained what an SRE team would want to learn about a service angling for SRE support, and what kind of improvements they want to see in the service before considering it for takeover. And in Part 1, we looked at why an SRE team would or wouldn’t choose to onboard a new application. Now, let’s look at what happens once the SREs agree to take on the pager.

Onboarding preparation

If a service entrance review determines that the service is suitable for SRE support, developers and the SRE team move into the “onboarding” phase, where they prepare for SREs to support the service.

While developers address the action items, the SRE team starts to familiarize itself with the service, building up service knowledge and familiarity with the existing monitoring tools, alerts and crisis procedures. This can be accomplished through several methods:

Read more at Google Cloud Platform Blog