
IBM’s Mike Day: KVM More Visible Through Collaboration

About a year ago IBM doubled down on its commitment to the open source cloud, announcing that all of its cloud offerings would be built on OpenStack and renewing its investments in KVM, the Linux-based kernel virtual machine. Since then, both projects have undergone major changes, including the move last fall of KVM and the Open Virtualization Alliance (OVA) to become a Linux Foundation Collaborative Project.

“The industry is much more aware of KVM,” says Mike Day, distinguished engineer and chief virtualization architect at IBM. “The Linux Foundation and the OVA have made a real difference.”

Here, Day discusses what’s new with the open cloud and KVM and gives us a sneak preview of his upcoming keynote at the Linux Foundation Collaboration Summit. Day will partner with HP’s Distinguished Technologist Monty Taylor to present “KVM, OpenStack and the Open Cloud,” and will teach an on-site training course, “Introduction to Linux KVM Virtualization.” Those who enroll for the course will automatically be registered for the invitation-only summit to be held on March 26-28 in Napa, Calif.

Linux.com: What is the state of the open cloud today? What’s going well for IBM and the industry in general, and what needs more work?

Mike Day: OpenStack has a lot of momentum, obviously. Together with CloudStack it is providing an open infrastructure that matches up well with Amazon and the others. IBM is in the midst of re-engineering its products and services around OpenStack; initial results are positive, but we have a huge amount of work yet to do. OpenStack itself is still gaining new features at a rapid pace. That pace, along with the immaturity or absence of consumer-oriented validation suites, is slowing OpenStack deployment. Right now sophisticated IT departments are deploying or piloting it. The fact that vendors and Linux distributors are providing stable, certified releases of OpenStack will prove helpful.

How about the state of the KVM hypervisor?

Day: KVM is gaining deployments at a steady pace. Its market share is still small, but its influence is outsized. I attribute KVM’s influence on other hypervisors and systems software to its performance and to the fact that KVM is consistently out early with new or advanced features. It is now a multi-platform hypervisor, with upstream support for ARM, s390, and PowerPC.

How has the project changed since the Open Virtualization Alliance joined the Linux Foundation as a collaborative project last October?

Day: The industry is much more aware of KVM. The days are mostly gone when people assumed you could only run a Linux guest over KVM. There are more management, backup, and security products that support KVM. KVM has more high-profile reference customers as well. The Linux Foundation and the OVA have made a real difference.

When we talked around this time last year you said that one of the challenges facing KVM was building better configuration defaults for ease of setup and maintenance. How is that changing?

Day: The default configuration parameters are steadily improving. They are much better than when we last spoke. Right now in upstream there is a reworking of hot-plug and PCI configuration, for example. In addition, there are more management tools that make configuration easier. Kimchi (https://github.com/kimchi-project/kimchi) and oVirt (http://oVirt.org) are a couple of good examples.

You’re also teaching a class at Collaboration Summit that will cover how to couple KVM with such tools as oVirt, libvirt and OpenStack to create an entire open source virtual IT infrastructure. Who should attend this class and why?

Day: I will cover some theory and background, but mostly this will be a hands-on class focused on using KVM to run guests. Attendees will bring a Linux laptop and use it to run lab exercises in class. We will explore different ways to use KVM, from command-line utilities to graphical management tools. It will be fast-paced and, I think, useful both to those who are interested in KVM and to those who are already using KVM and want to learn more about it. If you attend this class you should learn some new methods for using KVM and also understand how they work and why they are useful.

Distribution Release: Salix 14.1 “Xfce”

George Vlahavas has announced the release of Salix 14.1 “Xfce” edition, a Slackware-based distribution featuring the Xfce 4.10 desktop environment: “After a long development period Salix Xfce 14.1 is ready. There have been many and important changes since our last release. One of them is that now the….

Read more at DistroWatch

Are There Enough Users for Linux Mint Debian Edition to Survive?

The Linux Mint blog is reporting that Linux Mint Debian Edition 201403 has been released. LMDE is a semi-rolling distro that is based on Debian Testing. It is a good alternative for those who want the features of Linux Mint without having to use Ubuntu as its base…

Linux Mint Debian Edition is one of my favorite distributions. I highly recommend checking it out if you want a blend of Linux Mint’s features with the goodness of Debian. Don’t get me wrong, the Ubuntu versions are also still quite good but there are folks out there who prefer not to have anything to do with Ubuntu, so LMDE is a great alternative for them.

Read more at IT World.

Intel Snaps Up Smartwatch Maker for $100m: Report

Intel has reportedly purchased smartwatch maker Basis Science to become part of the firm’s arsenal in the wearable device industry.

Penguins, Robots, $387,000: The Story of Finding the Secret Recipe to Get Kids to Love Coding

Hello Ruby, a Kickstarter-funded book to help teach kids programming, has raised hundreds of thousands of dollars – showing that getting young people into coding is a matter of feeding their imagination.

Microsoft TechDays 2014 Showcases Work with Open Source Technologies

Posted by Fred Aatz
Director of Interoperability, Microsoft France

Microsoft TechDays 2014, the largest annual tech event in France dedicated to developers, IT professionals and business leaders, recently brought together 19,000 attendees with 60,000 online participants for three packed days of sessions.

Microsoft Tech Days 2014

This year open source technologies were at the heart of the event, with several sessions dedicated to how Microsoft and open source solutions are working well together. Presentations ranged from an overview of Microsoft’s support for open source software on Windows Azure in the cloud to working sessions focused on Node.js.

…(read more)

Can Open Source and The Linux Foundation Jump Start The Internet of Things?

When you consider what kinds of technology revolutions might arise over the next seven to 10 years, does The Internet of Things come to mind? Probably not, but you’ve no doubt heard of the effort to give just about everything an IP address, a cloud connection, or some kind of bridge to the Internet, allowing you to control everything from when your plants get watered to heating and plumbing in your house. In December of last year, The Linux Foundation announced its AllSeen Alliance initiative, billed as “the broadest cross-industry consortium to date to advance adoption and innovation in the ‘Internet of Everything’ in homes and industry.”

Since then, a lot of interesting thoughts have appeared about the impact of The Internet of Things, and how open source technology may be the key glue to bring it to fruition.


Read more at Ostatic

Video Acceleration Takes The Backseat On Chrome For Linux

Google developers working on Chrome/Chromium aren’t looking to enable hardware video acceleration by default anytime soon. The problem ultimately comes down to the notoriously poor state of Linux graphics drivers…

Read more at Phoronix

Red Hat’s Dynamic Kernel Patching Project

It seems that Red Hat, too, has a project working on patching running kernels. “kpatch allows you to patch a Linux kernel without rebooting or restarting any processes. This enables sysadmins to apply critical security patches to the kernel immediately, without having to wait for long-running tasks to complete, users to log off, or scheduled reboot windows. It gives more control over uptime without sacrificing security or stability.” It looks closer to ksplice than to SUSE’s kGraft in that it patches out entire functions at a time.

Read more at LWN

SUSE Labs Director Talks Live Kernel Patching with kGraft

SUSE Labs last month announced details of its kGraft research project to enable live patching of the Linux kernel. The solution has its benefits, including reduced need for downtime and easier downtime scheduling. It also has some drawbacks, outlined below by Vojtech Pavlik, Director of SUSE Labs.

The code, set to be released in March, doesn’t patch kernel code in-place but rather uses an ftrace-like approach to replace whole functions in the Linux kernel with fixed variants, said Pavlik. SUSE then plans to submit it to the Linux kernel community for upstream integration.

“SUSE wants to make sure the code we show is one that passes the quality required for Linux kernel patch submissions,” Pavlik said, “and that it works well and its merits can be judged and discussed.”

In this Q&A, Pavlik goes into more detail on SUSE’s live kernel patching project; how the kGraft patch integrates with the Linux kernel; how it compares with other live-patching solutions; how developers will be able to use the upcoming release; and the project’s interaction with the kernel community for upstream acceptance.

Pavlik will also lead a technical session on kGraft at the upcoming Linux Foundation Collaboration Summit, March 26-28 in Napa, Calif. (Request an invitation.)

Linux.com: First, what is live kernel patching and why is it necessary?

Vojtech Pavlik: Downtime is expensive. Even scheduled downtime is expensive. The common solution for that is redundant systems, but making them redundant enough to allow for scheduled downtime without losing the redundancy for unplanned failures is expensive. Live kernel patching reduces the need for scheduled downtime and allows for easier planning of scheduled downtime by allowing critical fixes to be applied ahead of the downtime window, hence reducing costs.

How does kGraft work? Where does it integrate with the kernel and how does it execute?

Pavlik: kGraft works by replacing whole functions in the Linux kernel with fixed variants; it is not about patching code in-place. kGraft itself is a modification to the Linux kernel that uses parts of several existing Linux technologies, combining them to achieve its purpose.

First, a patch module that contains all the new functions and some initialization code that registers with the kGraft code in kernel is loaded. Since it contains the new functions as regular code, the kernel module loader links them to any functions they may be calling inside the kernel.

Then, kGraft uses an ftrace-like approach to replace existing functions with their fixed instances by inserting a long jump at the beginning of each function that needs to be replaced. Ftrace uses a clever method: it first inserts a breakpoint opcode (INT3) into the patched code, only then replaces the rest of the bytes with the jump address, and finally swaps the breakpoint for the long-jump opcode. Inter-processor non-maskable interrupts are used throughout the process to flush the speculative decoding queues of the other CPUs in the system. This allows switching to the new function without ever stopping the kernel, not even for a very short moment. The interruptions by IPI NMIs can be measured in microseconds.

However, these steps alone would not be good enough: since the functions would be replaced non-atomically, a new fixed function in one part of the kernel could still be calling an old function elsewhere or vice versa. If the semantics of the function interfaces changed in the patch, chaos would ensue.

Thus, until all functions are replaced, kGraft uses an approach based on trampolines, similar to RCU (read-copy-update), to ensure a consistent view of the world for each userspace thread, kernel thread, or kernel interrupt. This way, an old function always calls another old function and a new function always calls a new one. Once the patching is done, the trampolines are removed and the code can operate at full speed, with no performance impact other than an extra long jump for each patched function.

How is kGraft different from other live patching solutions?

Pavlik: Unlike other Linux kernel live patching technologies, the kGraft patch module is compiled from a regular source code file, which eases review by a human developer. That source code is automatically generated from the source patch, the running kernel’s source code, and the running kernel’s debuginfo. And since with kGraft the resulting new code is part of a regular kernel module, kGraft doesn’t require a complex instruction decoder or custom linking code and can rely on the in-kernel linker to link the module with existing kernel functions.

kGraft doesn’t require stopping the whole system while it is doing the patching. Other technologies may need to call stop_machine(), potentially several times, to be able to patch code; or they may require a checkpoint-kexec-restore on the whole system, stopping it for many seconds. Stopping the kernel, however, may be a deal breaker for low-latency applications.

kGraft does have some limitations, including that a kernel needs to include kGraft before it can be patched. In other words, kGraft cannot patch an unknown third-party kernel. kGraft also doesn’t specifically handle situations where one compiler is used to compile the old kernel and a different compiler is used to compile the patch. I believe these limitations aren’t restrictive for enthusiast users or for Linux distributors who would want to use it – all that is required is a stable and predictable build environment.

What will potential users be able to accomplish with the first release, slated for March?

Pavlik: Users will be able to take a source patch, almost entirely automatically convert it into a patch module source code, compile it into a patch kernel module and load it, applying the fix to the running kernel. At that point, kGraft will be able to patch regular kernel code, as well as code used in kernel thread and interrupt contexts. What is missing currently is the full automation of the patch-to-module conversion. The complexity of patches that kGraft will be able to handle at this point is limited. In the future, we will expand on both ease of use and the scope of patches that can be converted and applied fully automatically.

Why is it necessary to integrate real-time patching into the Linux kernel?

Pavlik: Live patching is a technology that inevitably depends on low-level kernel internals to do its work. As such, integration into the upstream Linux kernel ensures its long-term viability and allows competing contributors to collaborate on its enhancement. SUSE strives to get all of its enhancements to open source software integrated into upstream projects as contributions to open source community development. kGraft is no exception.

Have you already been working with the kernel community on this? What has been their response?

Pavlik: “Release early, release often” is the open source motto. And while SUSE stands behind this idea, kGraft until recently has purely been a research project. Another motto frequently used on the Linux Kernel Mailing List is “Show me the code.” SUSE wants to make sure the code we show is one that passes the quality required for Linux kernel patch submissions, and that it works well and its merits can be judged and discussed. Hence the scheduled March release; that will be the starting point.

Will the live patch work equally well in a cloud environment as on bare metal?

Pavlik: Yes, virtualization that’s used in cloud environments is no impediment to using kGraft. The value of reduced downtime of individual services isn’t diminished by the fact that they are running inside a cloud. While kGraft will initially run on the x86-64 architecture only, its simple design makes extending it to work on other architectures – including ARM, IBM POWER or IBM System z – easy. That would allow it to work on a vast array of hardware, from cell phones to mainframes.

Vojtech Pavlik is a Director of SUSE Labs, a global team within R&D at SUSE focusing on furthering core Linux technologies. In his kernel developer past, he has worked on USB support in Linux and created the Linux Input subsystem. He enjoys solving interesting problems that Linux faces, recently proposing the MOK concept as a solution for UEFI Secure Boot on Linux.