
What’s the Difference Between SDN and NFV?

SDN, NFV, and VNF are among the alphabet soup of networking-industry terms that have emerged in recent years.

Software defined networking (SDN), network function virtualization (NFV) and the related virtual network functions (VNF) are important trends. But Forrester analyst Andre Kindness says vague terminology from vendors has created a complicated marketplace for end users evaluating next-generation networking technology. “Few I&O pros understand (these new acronyms), and this confusion has resulted in many making poor networking investments,” he says.

So what’s the difference between SDN, NFV and VNF?

Read more at Network World

Making the Most of an SRE Service Takeover – CRE Life Lessons

In Part 2 of this blog post, we explained what an SRE team would want to learn about a service angling for SRE support, and what kinds of improvements it would want to see in the service before considering it for takeover. And in Part 1, we looked at why an SRE team would or wouldn’t choose to onboard a new application. Now, let’s look at what happens once the SREs agree to take on the pager.

Onboarding preparation

If a service entrance review determines that the service is suitable for SRE support, developers and the SRE team move into the “onboarding” phase, where they prepare for SREs to support the service.

While developers address the action items, the SRE team starts to familiarize itself with the service, building up service knowledge and familiarity with the existing monitoring tools, alerts and crisis procedures. This can be accomplished through several methods:

Read more at Google Cloud Platform Blog

New Kubernetes Online Course Now Open: Sign Up for Free

Want to learn more about Kubernetes? A new massive open online course (MOOC) — Introduction to Kubernetes (LFS158x) — is now available from The Linux Foundation and edX.

Get an in-depth primer on this powerful system for managing containerized applications in this free, self-paced course, which covers the architecture of the system, the problems it solves, and the model that it uses to handle containerized deployments and scaling. The course also includes technical instructions on how to deploy standalone and multi-tier applications.

Upon completion, you’ll have a solid understanding of Kubernetes and will be able to start testing the cloud native pattern as you begin your own cloud native journey.

In this course, you will learn:

  • The origin, architecture, primary components, and building blocks of Kubernetes

  • How to set up and access a Kubernetes cluster using Minikube (a short client sketch follows this list)

  • Ways to run applications on the deployed Kubernetes environment and access the deployed applications

  • The usefulness of Kubernetes communities and how to participate
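As a taste of what accessing a cluster looks like in practice, here is a minimal Python sketch using the official Kubernetes client; it is illustrative only, not material from the course, and it assumes the client is installed (pip install kubernetes), that minikube start has already been run, and that a valid ~/.kube/config exists:

    # Minimal sketch: list every pod in a running Minikube cluster.
    # Assumes `pip install kubernetes` and `minikube start` are done.
    from kubernetes import client, config

    # Load the cluster address and credentials from ~/.kube/config,
    # the same file kubectl and minikube write to.
    config.load_kube_config()

    v1 = client.CoreV1Api()

    # List every pod across all namespaces and print its phase.
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")

Run against a fresh Minikube cluster, this should print the kube-system pods that Kubernetes itself runs.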

LFS158x is taught by Neependra Khare (@neependra), the Founder and Principal Consultant at CloudYuga Technology, which offers training and consulting services around container technologies such as Docker and Kubernetes.

Sign up for the free course now!

DevOps Fundamentals: High-Performing Organizations

This new series offers a preview of the DevOps Fundamentals: Implementing Continuous Delivery (LFS261) course from The Linux Foundation. The online, self-paced course, presented through short videos, provides basic knowledge of the process, patterns, and tools used in building and managing a Continuous Integration/Continuous Delivery (CI/CD) pipeline. The included lab exercises provide the basic steps and configuration information for setting up a multiple-language pipeline.

In this first article in the series, we’ll give a brief introduction to DevOps and talk about the habits of high-performance organizations. Later, we will get into the DevOps trinity: Continuous Integration, Continuous Delivery, and Continuous Deployment. You can watch the introductory video below:

High-performance organizations make work visible. They manage work in process (WIP). And, they manage flow, of course, which is the Continuous Delivery part. For successful DevOps flow, you have to foster collaborative environments. And the way you do that is through high-trust work environments, and then by learning how to embrace failure and making failure part of your habits and your culture.

The DevOps Survey, run by Puppet Labs and IT Revolution, which I work with, has worked out the real science of this. The survey found that high-performing organizations were both faster and more resilient, and we saw this in four variables.

The first is that high-performing organizations tend to deploy 30 times more frequently than low-performing organizations. Second, they had 200 times shorter lead times. Third, they also had 60 times fewer failures (change failures, for example). And, the fourth variable is that their mean time to recover (MTTR) was 166 times faster.

So, we see this kind of Continuous Delivery where you are fast and reliable, and you have deployment automation, and you version control everything. And, all this leads to low levels of deployment pain, higher levels of IT performance, higher throughput and stability, lower change failure rates, and higher levels of performance and productivity.

In fact, there is also some data showing that this approach reduces burnout, so it is really good stuff. In the next article, we’ll talk about the value stream and lay the groundwork for Continuous Integration.

Want to learn more? Access all the free sample chapter videos now! 

This course is written and presented by John Willis, Director of Ecosystem Development at Docker. John has worked in the IT management industry for more than 35 years.

Put Your IDE in a Container with Guacamole

Apache Guacamole is an incubating Apache project that enables X window applications to be exposed via HTML5 and accessed via a browser. This article shows how Guacamole can be run inside containers in an OpenShift Container Platform (OCP) cluster to enable Red Hat JBoss Developer Studio, the Eclipse-based IDE for the JBoss middleware portfolio, to be accessed via a web browser. You’re probably thinking “Wait a minute… X window applications in a container?” Yes, this is entirely possible, and this post will show you how.

Bear in mind that tools from organizations like Codenvy can provide a truly cloud-ready IDE. In this post, you’ll see how organizations with an existing, well-established IDE can rapidly provision developer environments where each developer needs only a browser. JBoss Developer Studio includes a rich set of integration tools, and I’ll show how those can be added to a base installation to support middleware products like JBoss Fuse and JBoss Data Virtualization.

How does Apache Guacamole work?

Apache Guacamole consists of two main components: the Guacamole web application (known as guacamole-client) and the Guacamole daemon (guacd). An X window application runs in an Xvnc environment with an in-memory-only display. The guacd daemon acts as an Xvnc client, consuming the Xvnc events and sending them to the Tomcat-hosted guacamole-client web application, where they are then rendered to the client browser as HTML5. Guacamole also supports user authentication, multiple sessions, and other features that this article only touches on. The Apache Guacamole website has more information.
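To make that two-component layout concrete, here is a hedged sketch using the Docker SDK for Python. The guacamole/guacd and guacamole/guacamole images are the project’s published containers, but the wiring below is illustrative only; a real deployment also needs an authentication backend (such as PostgreSQL) configured through additional environment variables:

    # Illustrative only: wire up guacd and the guacamole-client web app
    # using the Docker SDK for Python (pip install docker).
    import docker

    d = docker.from_env()

    # guacd: the proxy daemon that consumes the Xvnc session events.
    d.containers.run("guacamole/guacd", name="guacd", detach=True)

    # guacamole-client: the Tomcat web application that renders those
    # events to the browser as HTML5. GUACD_HOSTNAME/GUACD_PORT tell it
    # where to find guacd (4822 is guacd's default port).
    d.containers.run(
        "guacamole/guacamole",
        name="guacamole",
        detach=True,
        links={"guacd": "guacd"},
        environment={"GUACD_HOSTNAME": "guacd", "GUACD_PORT": "4822"},
        ports={"8080/tcp": 8080},  # web UI at http://localhost:8080/guacamole
    )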

Read more at OpenShift

How Google Turned Open Source Into A Key Differentiator For Its Cloud Platform

Open source software has come of age. Today it’s impossible to think of a platform company that doesn’t have an open source strategy. Even Microsoft – a company that once compared open source to cancer – has embraced it fully. Of course, we have companies like CloudBees, Red Hat, and Docker that built highly successful business models with OSS. But when it comes to cloud platform vendors, the story is slightly different.

Though cloud is built on the foundation of OSS, the top 3 vendors – AWS, Microsoft and Google – have a very different approach to it. AWS and Azure are the largest consumers of OSS. Amazon EC2, one of the most successful IaaS platforms, is built on top of Xen, the popular open source hypervisor. Amazon has turned almost every successful open source project into a commercially available managed service.

Read more at Forbes

LinuxKit and Docker Security

LinuxKit, which Docker announced back in April, is one of the newest tools to enter the Docker universe. Here’s what you need to know about what LinuxKit does and what it means for security.

LinuxKit: What and Why

Let’s start with the what and why of LinuxKit.

As you might expect, the LinuxKit story starts with Docker itself. Docker, of course, was originally designed to sit on top of the Linux kernel and to make heavy use of Linux resources. From the start, it was essentially a system for virtualizing and abstracting those underlying resources.

Docker got its start not just as a container system, but specifically as a Linux container system. Since then, Docker has developed versions of its container management systems for other platforms, including widely used cloud service providers, as well as Windows and macOS. Many of these platforms, however, either vary considerably in the Linux features they make available or do not natively supply a full set of Linux resources.

Read more at Twistlock

Hotspot Brings GUI to Linux Perf Data

KDAB, a German consulting firm that develops graphics and visualization tools, has released Hotspot 1.0, a GUI tool for visualizing performance data generated by the Linux perf tool.

Perf analyzes system and application behaviors in Linux and generates a detailed report showing which calls, programs, disk I/O operations, or network events (just to name a few possibilities) are eating up most of the system’s time. Because Perf is a command-line tool, most of its output is static, and it can be a multi-step process to produce an interactive, explorable report from data provided by Perf.
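To illustrate the multi-step, text-only workflow that Hotspot streamlines, here is a minimal sketch; the perf record and perf report commands are standard, while the Python wrapper and the sleep 5 workload are just for illustration:

    # Minimal sketch of the two-step perf workflow: record raw samples,
    # then render the static text report that Hotspot replaces with a GUI.
    # Assumes Linux, the perf tool installed, and permission to profile.
    import subprocess

    # Step 1: sample call stacks (-g) while the workload runs,
    # writing the raw data to ./perf.data.
    subprocess.run(["perf", "record", "-g", "--", "sleep", "5"], check=True)

    # Step 2: turn perf.data into a static, text-only report on stdout.
    result = subprocess.run(
        ["perf", "report", "--stdio"],
        check=True, capture_output=True, text=True,
    )
    print(result.stdout)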

Read more at InfoWorld

Linux Laptop Survey Reveals The Most Popular Linux Laptop Brands, Distros, & Other Details

Short Bytes: What are your expectations from your Linux-powered machines? A recently conducted Linux laptop survey sheds light on the various factors, such as price, compatibility issues, GPU, and laptop brand, that people take into consideration while buying a Linux laptop. It shows some people are willing to pay more if they get proper support.

The Linux Laptop Survey was conducted by Phoronix, which invited people to answer a set of questions about what they prioritize when buying a laptop and whether it matters that their machine comes pre-loaded with a Linux distribution. The survey received more than 30,000 responses in a span of two weeks, a considerable figure from which to draw conclusions about people’s general preferences.

Read more at FOSSBytes

It’s the End of Network Automation as We Know It (and I Feel Fine)

Network automation does not an automated network make. Today’s network engineers are frequently guilty of two indulgences. First, random acts of automation hacking. Second, pursuing aspirational visions of networking grandeur — complete with their literary adornments like “self-driving” and “intent-driven” — without a plan or a healthy automation practice to take them there.

Can a Middle Way be found, enabling engineers to set achievable goals, while attaining the broader vision of automated networks as code? Taking some inspiration from our software engineering brethren doing DevOps, I believe so.

Read more at The New Stack