Want to learn more about Kubernetes? A new massive open online course (MOOC) — Introduction to Kubernetes (LFS158x) — is now available from The Linux Foundation and edX.
Get an in-depth primer on this powerful system for managing containerized applications in this free, self-paced course, which covers the architecture of the system, the problems it solves, and the model that it uses to handle containerized deployments and scaling. The course also includes technical instructions on how to deploy both standalone and multi-tier applications.
Upon completion, you’ll have a solid understanding of Kubernetes and will be ready to start experimenting with cloud native patterns as the first step of your cloud native journey.
In this course, you will learn:
The origin, architecture, primary components, and building blocks of Kubernetes
How to set up and access a Kubernetes cluster using Minikube (a quick sketch follows this list)
Ways to run applications on the deployed Kubernetes environment and access the deployed applications
The value of the Kubernetes community and how to participate
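As a taste of the Minikube material, here is a minimal sketch of the setup-and-deploy workflow. It assumes minikube and kubectl are already installed; the deployment name hello is just a placeholder, and with the kubectl of this era, kubectl run creates a Deployment:

```bash
# Create a local, single-node Kubernetes cluster
minikube start

# Verify the cluster is reachable
kubectl get nodes

# Deploy a test application and expose it outside the cluster
kubectl run hello --image=nginx
kubectl expose deployment hello --type=NodePort --port=80

# Print the URL where the application can be reached
minikube service hello --url
```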
LFS158x is taught by Neependra Khare (@neependra), the Founder and Principal Consultant at CloudYuga Technology, which offers training and consulting services around container technologies such as Docker and Kubernetes.
This new series offers a preview of the DevOps Fundamentals: Implementing Continuous Delivery (LFS261) course from The Linux Foundation. The online, self-paced course, presented through short videos, provides basic knowledge of the process, patterns and tools used in building and managing a Continuous Integration/Continuous Delivery (CI/CD) pipeline. The included lab exercises provide the basic steps and configuration information for setting up a multiple language pipeline.
In this first article in the series, we’ll give a brief introduction to DevOps and talk about the habits of high-performance organizations. Later, we will get into the DevOps trinity: Continuous Integration, Continuous Delivery, and Continuous Deployment.
High-performance organizations make work visible. They manage work in process (WIP). And, they manage flow, of course, which is the Continuous Delivery part. For successful DevOps flow, you have to foster collaborative environments. And the way you do that is through high-trust work environments, and then by learning how to embrace failure and making failure part of your habits and your culture.
The DevOps Survey, run by Puppet Labs and IT Revolution (the organization I work with), has put real science behind this. The survey found that high-performing organizations were both faster and more resilient, and we saw this across four variables.
The first is that high-performing organizations tend to deploy 30 times more frequently than low-performing organizations. Second, they had 200 times shorter lead times. Third, they had 60 times fewer change failures. And, the fourth variable is that their mean time to recover (MTTR) was 166 times faster.
So, we see this kind of Continuous Delivery where you are fast and reliable, and you have deployment automation, and you version control everything. And, all this leads to low levels of deployment pain, higher levels of IT performance, higher throughput and stability, lower change failure rates, and higher levels of performance and productivity.
In fact, there is also some data showing that this approach reduces burnout, so it is really good stuff. In the next article, we’ll talk about the value stream and lay the groundwork for Continuous Integration.
This course is written and presented by John Willis, Director of Ecosystem Development at Docker. John has worked in the IT management industry for more than 35 years.
Apache Guacamole is an incubating Apache project that enables X Window System applications to be exposed via HTML5 and accessed in a browser. This article shows how Guacamole can be run inside containers in an OpenShift Container Platform (OCP) cluster to enable Red Hat JBoss Developer Studio, the Eclipse-based IDE for the JBoss middleware portfolio, to be accessed via a web browser. You’re probably thinking “Wait a minute… X Window System applications in a container?” Yes, this is entirely possible, and this post will show you how.

Bear in mind that tools from organizations like Codenvy can provide a truly cloud-ready IDE. In this post, you’ll see how organizations with an existing, well-established IDE can rapidly provision developer environments where each developer needs only a browser. JBoss Developer Studio includes a rich set of integration tools, and I’ll show how those can be added to a base installation to support middleware products like JBoss Fuse and JBoss Data Virtualization.
How does Apache Guacamole work?
Apache Guacamole consists of two main components: the Guacamole web application (known as guacamole-client) and the Guacamole daemon (guacd). An X Window System application runs in an Xvnc environment with an in-memory-only display. The guacd daemon acts as a VNC client, consuming the Xvnc events and sending them to the Tomcat-hosted guacamole-client web application, where they are then rendered to the client browser as HTML5. Guacamole also supports user authentication, multiple sessions, and other features that this article only touches on. The Apache Guacamole website has more information.
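To make the two components concrete, here is a minimal sketch of running them as plain Docker containers using the public guacamole/guacd and guacamole/guacamole images. An OpenShift deployment wraps the same pieces in cluster objects, and a full install also needs an authentication backend (e.g., the web-app image's MYSQL_* or POSTGRES_* settings), which is omitted here for brevity:

```bash
# Start the guacd proxy daemon, which acts as the VNC client
docker run --name guacd -d guacamole/guacd

# Start the Tomcat web application and point it at guacd; it renders
# the remote display to the user's browser as HTML5
docker run --name guacamole -d \
  --link guacd:guacd \
  -e GUACD_HOSTNAME=guacd \
  -p 8080:8080 \
  guacamole/guacamole
```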
Open source software has come of age. Today it’s impossible to think of a platform company that doesn’t have an open source strategy. Even Microsoft – a company that once compared open source to cancer – has embraced it fully. Of course, we have companies like CloudBees, Red Hat, and Docker that built highly successful business models around OSS. But when it comes to cloud platform vendors, the story is slightly different.
Though the cloud is built on a foundation of OSS, the top three vendors – AWS, Microsoft and Google – have very different approaches to it. AWS and Azure are the largest consumers of OSS. Amazon EC2, one of the most successful IaaS platforms, is built on top of Xen, the popular open source hypervisor. Amazon has turned almost every successful open source project into a commercially available managed service.
LinuxKit, which Docker announced back in April, is one of the newest tools to enter the Docker universe. Here’s what you need to know about what LinuxKit does and what it means for security.
LinuxKit: What and Why
Let’s start with the what and why of LinuxKit.
As you might expect, the LinuxKit story starts with Docker itself. Docker, of course, was originally designed to sit on top of the Linux kernel and to make heavy use of Linux resources. From the start, it was essentially a system for virtualizing and abstracting those underlying resources.
Docker got its start not just as a container system, but specifically as a Linux container system. Since then, Docker has developed versions of its container management systems for other platforms, including widely used cloud service providers, as well as Windows and macOS. Many of these platforms, however, either vary considerably in which Linux features they make available or do not natively supply a full set of Linux resources.
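That gap is what LinuxKit targets: it is a toolkit for building minimal, immutable Linux systems from a declarative spec, so Docker can bring its own Linux wherever it runs. As a rough sketch of the workflow, assuming the linuxkit CLI from the project’s GitHub repository and a spec file named minimal.yml (both placeholders for your own setup):

```bash
# Build a bootable image from a declarative YAML spec that lists
# the kernel, init, and service containers to include
linuxkit build minimal.yml

# Boot the resulting image in a local VM to try it out
linuxkit run minimal
```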
KDAB, a German consulting firm that develops graphics and visualization tools, has released Hotspot 1.0, a GUI for visualizing performance data generated by the Linux perf tool.
The perf tool analyzes system and application behavior in Linux and generates a detailed report showing which calls, programs, disk I/O operations, or network events (to name just a few possibilities) are eating up most of the system’s time. Because perf is a command-line tool, most of its output is static, and it can be a multi-step process to produce an interactive, explorable report from the data it provides.
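For context, a typical command-line round trip looks something like the sketch below (my_app is a placeholder workload); Hotspot’s appeal is that it can open the resulting perf.data file directly in an interactive GUI instead:

```bash
# Record samples with call-graph information for a workload;
# dwarf-based unwinding gives rich stacks at some overhead
perf record --call-graph dwarf ./my_app    # writes perf.data

# Explore the recorded profile in the terminal
perf report
```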
Short Bytes: What do you expect from your Linux-powered machine? A recently conducted Linux laptop survey sheds light on the factors people take into consideration while buying a Linux laptop, such as price, compatibility issues, GPU, and laptop brand. It shows that some people are willing to pay more if they get proper Linux support.
The Linux laptop survey, conducted by Phoronix, invited people to answer a set of questions about what they prioritize when buying a laptop and whether it matters if their machine comes pre-loaded with a Linux distribution. The survey received more than 30,000 responses in a span of two weeks, a considerable sample from which to draw conclusions about people’s general preferences.
Network automation does not an automated network make. Today’s network engineers are frequently guilty of two indulgences. First, random acts of automation hacking. Second, pursuing aspirational visions of networking grandeur — complete with literary adornments like “self-driving” and “intent-driven” — without a plan or a healthy automation practice to take them there.
Can a Middle Way be found, enabling engineers to set achievable goals, while attaining the broader vision of automated networks as code? Taking some inspiration from our software engineering brethren doing DevOps, I believe so.
Sometimes, while working on the command line, you reach a point where there’s too much text on the terminal screen and none of it is relevant to you. To avoid distraction, you’ll want to clear the terminal screen, and those new to the Linux command line may not know that a dedicated command-line utility exists to do this work for you.
In this tutorial, we will discuss the basics of clear (the tool in question) as well as how to use it. But before we do that, it’s worth sharing that all examples and instructions mentioned in this tutorial have been tested on Ubuntu 16.04 LTS.
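In its simplest form, the tool takes no arguments; here is a minimal example (pressing Ctrl+L in most shells has the same effect):

```bash
# Wipe the visible contents of the terminal screen; clear looks up
# the clear-screen capability for the terminal named in $TERM
clear
```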
In an earlier post, we explained CPUTool, which limits and controls the CPU utilization of any process in Linux. It allows a system administrator to interrupt the execution of a process (or process group) if the CPU/system load goes beyond a defined threshold. Here, we will learn how to use a similar tool called cpulimit.
Cpulimit is used to restrict the CPU usage of a process in the same way as CPUTool; however, it offers more usage options than its counterpart. One important difference is that, unlike CPUTool, cpulimit doesn’t manage system load.
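As a quick illustration, here is a minimal sketch of both usage styles; PID 1234 and the tar command are placeholders, and the direct-command form requires a reasonably recent cpulimit:

```bash
# Cap an already-running process (here, PID 1234) at roughly 30%
# of a single CPU core
cpulimit --pid 1234 --limit 30

# Or launch a command directly under a limit
cpulimit --limit 25 tar -czf /tmp/backup.tar.gz /home
```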