
3 Open Source Projects That Make Kubernetes Easier

From cluster state management to snapshots and DR, companion tools from Heptio, Kubed, and Kubicorn aim to fill the gaps in Kubernetes.

Clearly, Kubernetes is an elegant solution to an important problem. Kubernetes allows us to run containerized applications at scale without drowning in the details of balancing loads, networking containers, ensuring high availability for apps, or managing updates or rollbacks. So much complexity is hidden safely away. 

But using Kubernetes is not without its challenges. Getting up and running with Kubernetes takes some work, and many of the management and maintenance tasks around Kubernetes are downright thorny. 

Read more at InfoWorld

How to Build Clusters for Scientific Applications-as-a-Service

How can we make a workload easier on cloud? In a previous article we presented the lay of the land for HPC workload management in an OpenStack environment. A substantial part of the work done to date focuses on automating the creation of a software-defined workload management environment – SLURM-as-a-Service.

SLURM is only one narrow (but widely used) use case in a broad ecosystem of multi-node scientific application clusters, so let’s not over-specialize. This raises the question: what is needed to make a generally useful, flexible system for Cluster-as-a-Service?

What do users really want?

A user of the system will not care how elegantly the infrastructure is orchestrated:

  • Users will want support for the science tools they need, and when new tools are needed, the users will want support for those too.
  • Users will want to get started with minimal effort. The learning curve they must climb to deploy tools needs to be shallow.
  • Users will want easy access to the datasets upon which their research is based.

Read more at OpenStack SuperUser

The Cloud Native Architect

One of the biggest learning curves I ever endured in that time was working in an operations team building what I will call a virtualisation platform. They called it infrastructure as code; I called it automating previously manual processes using development techniques. It opened my mind again to a completely new way of looking at development teams outside of the DevOps culture. Development techniques were relatively new in that team, but the real value was driven through collaborative knowledge sharing. System administrators were developing code and developers were gaining knowledge of automating and building up the infrastructure and networking of a cloud platform. This was the first time I had seen this degree of two-way communication outside of teams already building on a PaaS. It truly opened my eyes to the challenges these operations teams face when the code is thrown over the fence and they are expected to agree to run it in production with confidence.

Historically, the responsibilities of a production system were formed from an aggregated view of collective specialisms. For example: infrastructure architects, network architects, security architects, QA, developers; technical and solution architects. What I am getting at here is that each role had a narrower focus and its own set of key responsibilities.

Read more at Medium

Linux Load Averages: Solving the Mystery

Load averages are an industry-critical metric – my company spends millions auto-scaling cloud instances based on them and other metrics – but on Linux there’s some mystery around them. Linux load averages track not just runnable tasks, but also tasks in the uninterruptible sleep state. Why? I’ve never seen an explanation. In this post I’ll solve this mystery, and summarize load averages as a reference for everyone trying to interpret them.

Linux load averages are “system load averages” that show the running thread (task) demand on the system as an average number of running plus waiting threads. This measures demand, which can be greater than what the system is currently processing. Most tools show three averages, for 1, 5, and 15 minutes:

$ uptime
 16:48:24 up  4:11,  1 user,  load average: 25.25, 23.40, 23.46

top - 16:48:42 up  4:12,  1 user,  load average: 25.25, 23.14, 23.37

$ cat /proc/loadavg 
25.72 23.19 23.35 42/3411 43603

Some interpretations:

  • If the averages are 0.0, then your system is idle.
  • If the 1 minute average is higher than the 5 or 15 minute averages, then load is increasing.
  • If the 1 minute average is lower than the 5 or 15 minute averages, then load is decreasing.
  • If they are higher than your CPU count, then you might have a performance problem (it depends).
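As a quick sketch, the last rule above can be checked from the shell. The sample line below is taken from the /proc/loadavg output shown earlier, and the CPU count of 8 is an assumption for illustration:

```shell
# Sample /proc/loadavg line (from the output above); on a live system use:
#   sample=$(cat /proc/loadavg)
sample="25.72 23.19 23.35 42/3411 43603"

# The first field is the 1-minute load average
load1=$(echo "$sample" | cut -d ' ' -f 1)

# Assumed CPU count for this example; on a live system use: cpus=$(nproc)
cpus=8

# awk performs the floating-point comparison that plain shell cannot
overloaded=$(awk -v l="$load1" -v c="$cpus" 'BEGIN { print ((l > c) ? "yes" : "no") }')
echo "1-minute load: $load1, CPUs: $cpus, possibly overloaded: $overloaded"
```

Per the caveat above, a “yes” here only flags a possible problem; whether it actually is one depends on what those tasks are doing, including how many are in uninterruptible sleep rather than runnable.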

Read more at Brendan Gregg’s Blog

How to Set Up and Configure Hadoop CDH5 on Ubuntu 14.04

This document describes how to install and configure a Hadoop cluster on a single node running Ubuntu. A single-machine Hadoop cluster is also called Hadoop pseudo-distributed mode. The steps and procedure given in this document are simple and to the point, so you can install Hadoop easily and within minutes. Once the installation is done, you can experiment with Hadoop and its components, such as MapReduce for data processing and HDFS for data storage.

Install Hadoop Cluster on a Single Node on Ubuntu OS

1. Recommended Platform

I. Platform Requirements

Operating system: Ubuntu 14.04 or later (other Linux flavors such as CentOS and Red Hat also work)
Hadoop: Cloudera Distribution for Apache Hadoop CDH5.x (you can also use Apache Hadoop 2.x)

II. Configure & Setup Platform

If you are using Windows or macOS, you can create a virtual machine and install Ubuntu using VMware Player or, alternatively, Oracle VirtualBox.

2. Prerequisites

I. Install Java 8

a. Install Python Software Properties

To add the Java repository we need the python-software-properties package. To download and install it, run the command below in a terminal:

$ sudo apt-get install python-software-properties

NOTE: After you press Enter, you will be asked for your password, since we are using the “sudo” command to run the installation with root privileges. Installation and configuration tasks like these always need root privileges.

b. Add Repository

Now we will manually add the repository from which Ubuntu will install Java. To add the repository, type the command below in a terminal:

$ sudo add-apt-repository ppa:webupd8team/java

It will then prompt you to press [Enter] to continue. Press Enter.

c. Update the source list

The source list tells Ubuntu where it can download and install software from, and it should be refreshed periodically, and always before you update or install a new package. To update the source list, type the command below in a terminal:

$ sudo apt-get update

When you run the above command, Ubuntu updates its source list.

d. Install Java

Now we will download and install Java itself. Type the command below in a terminal:

$ sudo apt-get install oracle-java8-installer

When you press Enter, the download and installation of Java will begin.

To confirm that the Java installation completed successfully and to check the installed version, type the command below in a terminal:

$ java -version

II. Configure SSH

SSH stands for Secure Shell, a protocol used for remote login: with SSH we can log in to a remote machine. Next we need to configure password-less SSH, which lets us log in to a remote machine without entering a password. Password-less SSH is required for remote script invocation, so that the master can automatically start the daemons on the slaves.
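As a sketch of what that configuration typically looks like (assuming OpenSSH is installed, and shown here for password-less login to the local machine):

```shell
# Generate an RSA key pair with an empty passphrase
# (skip this step if ~/.ssh/id_rsa already exists)
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa

# Authorize the public key for login and restrict the file's permissions
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys

# Verify: this should now log in and exit without asking for a password
ssh localhost exit
```

For a real cluster, the master’s public key is appended to authorized_keys on each slave instead of on localhost.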

Read more at Data Flair

 

A Realistic Approach to Mixing Open Source Licenses

At the upcoming Open Source Summit in Los Angeles, Lars Kurth, director of Open Source Solutions at Citrix and chair of the Advisory Board of the Xen Project at The Linux Foundation, will be delivering a wealth of practical advice in two conference talks.

The first talk is “Mixed License FOSS Projects: Unintended Consequences, Worked Examples, Best Practices” and the second talk is “Live Patching, Virtual Machine Introspection and Vulnerability Management: A Primer and Practical Guide.”

Here, Kurth explains more about what he will be covering in these presentations.

There are thousands of open source licenses, and some are incompatible with each other. What are the issues about mixing licenses that you will be discussing in your talk?

Lars Kurth: License compatibility in general is fairly well understood. One of the areas I am focusing on in this talk, is that lots of open source projects start off as single license projects and over time additional licenses start to be incorporated into the codebase. This is true for many open source projects such as the Linux Kernel, QEMU, the Xen Project, the BSDs, and many others.

What most projects fail to do as new licenses are added is to evolve best practices around mixing them. This can create barriers for license- and patent-conscious adopters of, or contributors to, a specific project.

For example, in the Xen Project, we came across the case where an organization’s IP lawyer wanted to know answers to questions like why certain licenses existed in the codebase, before they would approve code contributions by their employees. The whole process took more than half a year and was extremely painful, but it prompted us to introduce a set of Best Practices, so we would never have to go through such an exercise again.  

Are there any examples of mixing licenses that can lead to disaster?

Kurth: Thanks to FOSS license compliance tools, such as FOSSology, we rarely see FOSS stacks that contain incompatible licenses. What does happen, occasionally though, is that licensing constraints can limit the ambition of open source projects. A high-profile example was Hyperledger’s attempt to include Ethereum’s C++ client, which ultimately failed because it would have required re-licensing some of Ethereum’s runtime from GPL-3 to Apache 2.0.

In projects that contain components with multiple licenses, choosing a component’s license without appropriate foresight can lead to situations that require re-licensing a component to implement a new feature. The risk of this happening is high when code refactoring is required. The only way to avoid this is to treat the license of a component like an architectural system property. Because re-licensing a component in projects with multiple licenses can’t be excluded, I will walk through a worked example of how to do this in my talk.

As Open Source is taking over the world, what risks do you see of mixing licenses? What else can people look forward to in your talk?

Kurth: This talk will highlight some risks and solutions to the problem of adding additional licenses as projects grow. One thing that I also think will be really interesting is the risk of mixing GPLv2 code with GPLv2-or-later (GPLv2+) code. Linux, for example, contains 14 percent of code licensed under GPLv2+. Many projects unintentionally or even unknowingly do this without understanding the potential consequences. To find out more, come and see the talk!

Let’s focus a bit on your other presentation. Why is live patching important?

Kurth: The importance of live patching of the Xen Project Hypervisor came to light when several cloud providers including AWS, Rackspace, and IBM SoftLayer had to reboot large numbers of servers in their fleets in late 2014 and again early in 2015 due to two Xen Project vulnerabilities. Applying security patches is not an easy task when thousands of servers running critical business applications require a reboot after a patch has been applied.

Hypervisor reboots inconvenience cloud customers, which is why the Xen Project developed Hypervisor Live Patching first released in June 2016. This enables Xen users to apply security patches in real-time, without reboots, power cycles, or workload migrations.

What will you be talking about in regard to live patching?

Kurth: The process of patching a live hypervisor or kernel is not an easy task. What happens is a little bit like open heart surgery: The patient is the hypervisor and/or kernel, and precision and care are needed to get things right. To do this safely requires expertise and appropriate build and test capabilities. In this talk, I will cover how Hypervisor Live Patching works, and how Live Patches are built and tested. I will also show how XenServer combines Linux and Xen Project Hypervisor Live Patching in a combined and easy-to-use solution.

Besides Live Patching, we will also give a brief overview of the Xen Project Vulnerability Management process and how it was impacted by the introduction of Live Patching. In addition, we will briefly introduce Virtual Machine Introspection, which has been shown to detect and protect users from 0-day vulnerabilities such as EternalBlue.

Check out the full schedule for Open Source Summit here. Linux.com readers save on registration with discount code LINUXRD5. Register now!

Running LinuxKit on AWS Platform Made Easy

Soon after DockerCon 2017, I wrote a blog post on how to get started with LinuxKit for Google Cloud Platform. Since then I have been keeping a close eye on the latest features, enhancements, and releases of LinuxKit. In this blog post, I present a simplified approach to getting a LinuxKit OS instance running on top of the Amazon Web Services (AWS) platform.

Here we go..

Steps:

  1. Install the AWS CLI on macOS (using Homebrew)
  2. Install LinuxKit & the Moby tool (using Homebrew)
  3. Configure an AWS S3 bucket
  4. Build a RAW image with the Moby tool
  5. Configure the VM Import service role
  6. Upload the aws.raw image to the remote AWS S3 bucket using LinuxKit
  7. Run the LinuxKit OS as an EC2 instance

Read more at Collabnix

MQTT for IoT Communication

MQTT stands for Message Queue Telemetry Transport. As its name suggests, it’s a protocol for transporting messages between two points. Sure, we’ve got Messenger and Skype for that; but what makes MQTT so special is its super lightweight architecture, which is ideal for scenarios where bandwidth is not optimal.

The MQTT high-level architecture is primarily divided into two parts – a broker and a client.
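As an illustration of that broker/client split, the mosquitto command-line clients can play both client roles against a broker. The broker host and topic below are assumptions for the example; this requires the mosquitto clients to be installed and a reachable broker:

```shell
# Client 1: subscribe to a topic on the broker and print incoming messages
mosquitto_sub -h test.mosquitto.org -t "demo/sensor/temperature" &

# Client 2: publish a message to the same topic; the broker relays it
# to every subscriber, so the two clients never talk to each other directly
mosquitto_pub -h test.mosquitto.org -t "demo/sensor/temperature" -m "22.5"
```

The broker in the middle provides exactly this decoupling: publishers and subscribers only need to know the broker and the topic, not each other.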

Read more at DZone

Agile2017: What the Agile Development Model Needs To Do Next

It’s more than 16 years old now, but Agile still struggles to achieve broad enterprise adoption. Here’s what Agile2017 speakers and attendees are suggesting for the future. More than 16 years after the Agile Manifesto was written, “Agile is still hard,” admitted Tricia Broderick, the chair of Agile2017 in Orlando, Fla.

Just-released data from a survey of more than 150 managers by CA Technologies underscores that fact — only 12% say their entire organization is on a path to achieving an Agile development model, even while 70% say they know it’s the process that can help them be organized and respond faster.

Read more at TechTarget

Container Networking Challenges the Focus of Tigera Calico Update

Tigera is adding new features to its Calico container networking product in an attempt to ease Kubernetes-based management and hit enterprise-grade needs.

The boldly named Essentials for Kubernetes product is the firm’s first commercial packaged platform. The product is specifically targeted at management of the container networking space, which includes a set of interfaces for adding and removing containers from a network.

Tigera is targeting a handful of connectivity platforms, including the Container Networking Interface (CNI), its own Calico offering, Flannel, and Istio. CNI was initially proposed by CoreOS to define a common interface between network plugins and container execution. It has limited responsibility over network connectivity of containers, and it removes allocated resources when the container is deleted.

Read more at SDxCentral