
Xen Project Hypervisor: Virtualization and Power Management are Coalescing into an Energy-Aware Hypervisor

Power management in the Xen Project Hypervisor has historically targeted server applications, reducing power consumption and heat in data centers and thereby cutting electricity and cooling costs. In the embedded space, the Xen Project Hypervisor faces very different applications, architectures, and power-related requirements, which focus on battery life, heat, and size.

Although the same fundamental principles of power management apply, the power management infrastructure in the Xen Project Hypervisor requires new interfaces, methods, and policies tailored to embedded architectures and applications. This post recaps Xen Project power management, how the requirements change in the embedded space, and how this change may unite the hypervisor and power manager functions.   

Evolution of Xen Project Power Management on x86

Time-sharing of computer resources among different virtual machines (VMs) was the precursor to modern scheduling and virtualization. Sharing time based on workload estimates served as a reasonable and simple proxy for sharing energy. As in all major OSes, energy and power management in the Xen Project Hypervisor came as an afterthought.

Intel and AMD developed the first forms of power management for the Xen Project with the x86_64 architecture. Initially, the Xen Project used the `hlt` instruction for CPU idling and didn’t have any support for deeper sleep states. Then, support for suspend-to-RAM, also known as ACPI S3, was introduced. It was entirely driven by Dom0 and meant to support manual machine suspensions by the user, for instance when the lid is closed on a laptop. It was not intended to reduce power utilization under normal circumstances. As a result, power saving was minimal and limited to the effects of `hlt`.

Finally, Intel introduced support for cpu-freq in the Xen Project in 2007. This was the first non-trivial form of power management for the Xen Project. Cpu-freq decreases the CPU frequency at runtime to reduce power consumption when the CPU is only lightly utilized. Again, cpu-freq was entirely driven by Dom0: the hypervisor allowed Dom0 to control the frequency of the underlying physical CPUs.

Not only was this a backward approach from the Xen architecture point of view, but it was also severely limiting. Dom0 didn’t have a full view of the system to make the right decisions. In addition, it required one virtual CPU in Dom0 for each physical CPU, with each Dom0 virtual CPU pinned to a different physical CPU. It was not a viable option in the long run.

To address these issues, cpu-freq was re-architected and the cpu-freq driver moved into the hypervisor. The Xen Project hypervisor thus became able to change CPU frequency and make power-saving decisions by itself.
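With the cpufreq driver living in the hypervisor, Dom0 administrators interact with it through the xenpm utility. The sketch below is hedged: it assumes a Xen Dom0 with xenpm installed, and falls back to a message elsewhere.

```shell
# Hedged sketch, assuming a Xen Dom0 with the xenpm utility installed.
# xenpm talks to the in-hypervisor cpufreq driver described above.
# Inspect the cpufreq parameters of physical CPU 0:
out=$(xenpm get-cpufreq-para 0 2>/dev/null || echo "xenpm not available on this host")
echo "$out"

# To let the hypervisor scale frequency down under light load, a
# governor such as ondemand can be selected for all physical CPUs:
# xenpm set-scaling-governor all ondemand
```

Because the driver is in the hypervisor, these settings apply to physical CPUs directly, regardless of how guest virtual CPUs are scheduled on them.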

Intel and AMD introduced support for deep sleep states around the same time as the cpu-freq redesign. The Xen Project Hypervisor added the ability to idle physical CPUs beyond the simple `hlt` instruction. Deeper sleep states, also known as ACPI C-states, have better power-saving properties, but come with a higher latency cost: the deeper the sleep state, the more power is saved, but the longer it takes to resume normal operation. The decision to enter a sleep state is based on two variables: time and energy. However, scheduling and idling remain largely separate activities. As an example, the scheduler has very limited influence on the choice of the particular sleep state.
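The C-state machinery is also visible through xenpm. The following is a hedged sketch, again assuming a Xen Dom0 with xenpm installed:

```shell
# Hedged sketch, assuming a Xen Dom0 with the xenpm utility installed.
# Show the C-states of physical CPU 0 and their residency statistics:
states=$(xenpm get-cpuidle-states 0 2>/dev/null || echo "xenpm not available on this host")
echo "$states"

# Deeper C-states save more power but cost more wakeup latency; the
# maximum depth the hypervisor may enter can be capped, for example:
# xenpm set-max-cstate 1
```

Capping the maximum C-state trades some power savings for lower, more predictable wakeup latency, which matters for latency-sensitive guests.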

Xen Project Power Management on Arm

The first Xen release with Arm support was Xen 4.3 in 2013, but power management for Xen on Arm was not actively addressed until very recently. One reason may be the dominance of proprietary and in-house hypervisors for Arm in the embedded space and the overwhelming prevalence of x86 for servers. Due to the Xen Project’s maturity, its open source model, and its wide deployment, it is frequently used today in a variety of Arm-based applications. Power management support for the Xen Project hypervisor on Arm is becoming essential, in particular in the embedded world.

In our next blog post, we will cover architectural choices for Xen on Arm in the embedded world and use cases on how to make this work.

Xen Power Management for Embedded Applications

Embedded applications require the same OS isolation and security capabilities that motivated the development of server virtualization, but come with a wider variety of multicore architectures, guest OSes, and virtual-to-physical hardware mappings. Moreover, most embedded designs are highly sensitive to the deteriorations in performance, memory size, power efficiency, and wakeup latency that often come with hypervisors. As embedded devices become cooler, quieter, smaller, and battery powered, efficient power management has emerged as a critical requirement for the successful adoption of hypervisors in the embedded community.

Standard non-virtualized embedded devices manage power at two levels: the platform level and the OS level. At the platform level, the platform manager typically executes on dedicated on-chip or on-board processors and microcontrollers. It monitors and controls the energy consumption of the CPUs, the peripherals, the CPU clusters, and all board-level components by changing the frequencies, voltages, and functional states of the hardware. However, it has no intrinsic knowledge about the running applications, which is necessary for making the right decisions to save power.

This knowledge is provided by the OS, or, in some cases, directly by the application software itself. The Power State Coordination Interface (PSCI) and the Extensible Energy Management Interface (EEMI) are used to coordinate the power events between the platform manager, the OSes, and the processing clusters. Whereas PSCI coordinates the power events among the CPUs of a single processor cluster, EEMI is responsible for the peripherals and the power interaction between multiple clusters.

Contrary to the ACPI-based power management typical for x86 desktops and servers, PSCI and EEMI allow for much more direct control and enable precise power management of virtual clusters. In embedded systems, every microjoule counts, so precision in the timing and scope of power management actions is essential.

When a virtualization layer is inserted between the OSes and the platform manager, it effectively enables additional virtual clusters, which come with virtual CPUs, virtual peripherals, and even physical peripherals with device passthrough. The EEMI power coordination of the virtual clusters can execute in the platform manager, the hypervisor, or both. If the platform manager is selected, power management can be made very precise, but at the expense of firmware memory bloat, as the manager needs to handle not only the fixed physical clusters but also the dynamically created virtual clusters.

Additionally, the platform manager requires stronger processing capabilities to optimally manage power, especially if it takes the cluster and system loads into consideration. As platform managers typically reside in low-power domains, both memory space and processing power are in short supply.

The hypervisor usually executes on powerful CPU clusters, so it has enough memory and processing power at its disposal. It is also well informed about the partitioning and load of the virtual clusters, making it the ideal place to manage power. However, for proper power management, the hypervisor also requires an accurate energy model of the underlying physical clusters. Similar to the energy-aware scheduler in Linux, the hypervisor must coalesce the sharing of time and energy to manage power properly. In this case, OS-based power management is effectively transformed into hypervisor-based power management.

The Hypervisor and Power Manager Coalesce

Most embedded designs consist of multiple physical clusters or subsystems that are frequently put into inactive low-power states to save energy, such as sleep, suspend, hibernate or power-off suspend. Typical examples are the application, real-time video, or accelerator clusters that own multiple CPUs and share the system memory, peripherals, board level components, and the energy source. If all the clusters enter low-power states, their respective hypervisors are inactive, and the always-on platform manager has to take over the sole responsibility for system power management. Once the clusters become active again, the power management is passed back to the respective hypervisors. In order to secure optimum power management, the hypervisors and the power manager have to act as one, ultimately coalescing into a distributed system software covering both performance and power management.

A good example of a design in action indicative of such evolution is the power management support for the Xilinx Zynq UltraScale+ MPSoC. The Xen hypervisor running in the Application Processing Unit (APU) and the power manager in the Power Management Unit (PMU) have already evolved into a tight bundle around EEMI based power management and shall further evolve with the upcoming EEMI clock support.

The next blog in this series will cover the suspend-to-RAM feature for the Xen Project Hypervisor targeting the Xilinx Zynq UltraScale+ MPSoC, which lays the foundation for full-scale power management on Arm architectures.

Authors:

Vojin Zivojnovic, CEO and Co-Founder at AGGIOS

Stefano Stabellini, Principal Engineer at Xilinx and Xen Project Maintainer

Docker Guide: Dockerizing Python Django Application

Docker is an open-source project that provides an open platform for developers and sysadmins to build, package, and run applications anywhere as a lightweight container. Docker automates the deployment of applications inside software containers.

Django is a web application framework written in Python that follows the MVC (Model-View-Controller) architecture (Django’s own documentation calls its variant MTV, for Model-Template-View). It is available for free and released under an open source license. It is fast and designed to help developers get their applications online as quickly as possible.

In this tutorial, I will show you step-by-step how to create a Docker image for an existing Django application project on Ubuntu 16.04. We will learn about dockerizing a Python Django application, and then deploy the application as a container to the Docker environment using a docker-compose script.

In order to deploy our Python Django application, we need additional Docker images: an nginx image for the web server and a PostgreSQL image for the database.
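The three-service stack described above can be sketched in a docker-compose file. This is a hedged, minimal illustration, not the file from the HowToForge guide; the service names, image tags, project name (`myproject`), and password are all placeholders:

```yaml
# Hypothetical docker-compose.yml sketch: Django app, nginx in front,
# PostgreSQL behind. All names and tags are illustrative.
version: '3'
services:
  db:
    image: postgres:10
    environment:
      POSTGRES_PASSWORD: change-me
  web:
    build: .                      # Dockerfile for the Django project
    command: gunicorn myproject.wsgi:application --bind 0.0.0.0:8000
    depends_on:
      - db
  nginx:
    image: nginx:latest
    ports:
      - "80:80"
    depends_on:
      - web
```

With a file like this in place, `docker-compose up -d` starts the whole stack; nginx would still need a server block proxying to the web service.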

Read more at HowToForge

Last Chance to Speak at Hyperledger Global Forum | Deadline is This Friday

Hyperledger Global Forum is the premier event showcasing the real uses of distributed ledger technologies for businesses and how these innovative technologies run live in production networks today. Hyperledger Global Forum unites the industry’s most respected thought leaders, domain experts, and key maintainers behind popular frameworks and tools like Hyperledger Fabric, Sawtooth, Indy, Iroha, Composer, Explorer, and more.

The Hyperledger Global Forum agenda will include both technical and enterprise tracks on everything from Distributed Ledger Technologies to Smart Contracts 101; roadmaps for Hyperledger projects; cross-industry keynotes and panels on use-cases in development, and much more. Hyperledger Global Forum will also facilitate social networking for the community to bond.

Learn more about submitting a proposal, review suggested technical and business topics, and see sample submissions. The deadline to submit proposals is Friday, July 13, so apply today!

Submit Now >>

Not submitting a session, but plan to attend? Register now and save before ticket prices increase on September 30.

This article originally appeared at Hyperledger

How to Easily Purge Unwanted Repositories in Linux

After a year or so of working with Ubuntu Linux (or a derivative such as Elementary OS), I almost always find myself with a number of repositories from software I may have installed and removed or never really needed in the first place. That means /etc/apt/sources.list.d can get pretty crowded and the apt update process becomes a bit sluggish. Or worse, repositories can become broken, bringing apt update to a halt. Because of this, I try hard to keep those repositories to a minimum. One way to do this is to simply open a terminal window and comb through that directory (deleting any unnecessary .list files).

Sure, you can install the third-party ppa-purge tool, but with that you must know the official name of the repository. I don’t know about you, but after installing a PPA, the official name escapes me moments later. Fortunately, there’s an easier way—one that’s already built into the distribution. Those who would rather deal with the command line as little as possible will find this tool incredibly easy to use.
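For readers who do want the manual route, the mechanics are simple: apt keeps one .list file per added repository under /etc/apt/sources.list.d/, and deleting a file removes that repository. The sketch below simulates this in a temporary directory so nothing on the real system is touched (the PPA name is made up):

```shell
# Hedged sketch of manually purging a repository. Simulated in a temp
# directory; the PPA name is hypothetical.
repo_dir=$(mktemp -d)
echo 'deb http://ppa.launchpad.net/example/ppa/ubuntu bionic main' \
    > "$repo_dir/example-ppa.list"
ls "$repo_dir"                    # one .list file per repository

rm "$repo_dir/example-ppa.list"   # repository gone
# On a real system, the equivalent is:
#   sudo rm /etc/apt/sources.list.d/<name>.list && sudo apt update
```

After removing a .list file on a real system, run `sudo apt update` so apt forgets the repository's package index.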

Let me show you how to remove repositories from your Linux distribution, with the help of a user-friendly GUI.

Read more at Tech Republic

Cloud Computing in HPC Surges

According to the two leading analyst firms covering the high performance computing market, the use of the cloud for HPC workloads is looking a lot more attractive to users these days.

Intersect360 offered the most upbeat assessment in this regard, noting that cloud spending by HPC customers grew by a whopping 44 percent from 2016 to 2017, calling it a “breakout year” for this product category. According to the company’s market data, that put cloud-based spending at around $1.1 billion for 2017. And even though that represents only about three percent of total HPC revenue for the year, it’s a high-water mark for cloud computing in this space.

The big jump in cloud spending was driven by a number of different factors, according to the Intersect360 folks, including “increasing facilities costs for hosting HPC, maturation of application licensing models, increased availability of high-performance cloud resources, and a spike in requirements for machine learning applications.”

Read more at Top500

Open Collaboration in Practice at Open Source Summit

A key goal in my career is growing the understanding and best practice of how communities, and open source communities in particular, can work well together. There is a lot of nuance to this work, and the best way to build a corpus of best practice is to bring people together to share ideas and experience.

In service of this, last year I reached out to The Linux Foundation about putting together an event focused on these “people” elements of Open Source such as community management, collaborative workflow, governance, managing conflict, and more. It was called the Open Community Conference, which took place at the Open Source Summit events in Los Angeles and Prague, and everything went swimmingly.

This train, though, has to keep moving, and we realized that the scope of the event needed broadening. What about legal, compliance, standards, and other similar topics? They needed a home, and this event seemed like a logical place to house them. So, in a roaring display of rebranding, we renamed the event to the Open Collaboration Conference. It happens again at the Open Source Summit, this year in Vancouver from August 29-31 and then in Edinburgh from October 22-24, 2018.

The upcoming event in Vancouver is looking fantastic. Just like last year, we had a raft of submissions, so thanks everyone for making my job (rightly) difficult for choosing the final set of talks.

Featured Talks

Unsurprisingly, we have some really remarkable speakers from a raft of different organizations, backgrounds, and disciplines.

Oh, and I will be speaking, too, delivering a new presentation called “Building Effective Community Leaders: A Guide.” It will cover key principles of leadership and how to bake them into your community, company, or other organization.

In addition to this, don’t forget the fantastic networking, evening events, and other goodness that will be jammed into an exciting few days. As usual, this all takes place at the Open Source Summit, and you can view the whole schedule and learn more about how to join us at https://events.linuxfoundation.org/events/open-source-summit-north-america-2018/.

Finally, I will be there for the full event. If you want to have a meeting, drop me an email to jono@jonobacon.com.


How to Use dd in Linux Without Destroying your Disk

Whether you’re trying to rescue data from a dying storage drive, backing up archives to remote storage, or making a perfect copy of an active partition somewhere else, you’ll need to know how to safely and reliably copy drives and filesystems. Fortunately, dd is a simple and powerful image-copying tool that’s been around, well, pretty much forever. And in all that time, nothing’s come along that does the job better.

Using dd, you can make perfect byte-for-byte images of, well, just about anything digital. But before you start flinging partitions from one end of the earth to the other, I should mention that there’s some truth to that old Unix admin joke: “dd stands for disk destroyer.” If you type even one wrong character in a dd command, you can instantly and permanently wipe out an entire drive of valuable data. And yes, spelling counts.

Remember: Before pressing that Enter key to invoke dd, pause and think very carefully!
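A careful workflow helps: confirm the device name first, and rehearse against a plain file before touching a real disk. The device name below is a placeholder, not a command to run as-is:

```shell
# Hedged sketch of a careful dd workflow. Always confirm device names
# with lsblk before running dd against a real disk.
lsblk 2>/dev/null || true

# Safe practice run against a plain file rather than a device:
# 4 blocks of 1 MiB each, flushed to storage before dd exits.
dd if=/dev/zero of=practice.img bs=1M count=4 status=progress conv=fsync

# A real disk-to-image backup would look like this (do NOT run blindly;
# /dev/sdX is a placeholder for the device lsblk showed you):
#   sudo dd if=/dev/sdX of=backup.img bs=4M status=progress conv=fsync
```

`status=progress` (GNU coreutils) shows bytes copied as dd runs, and `conv=fsync` forces data to disk before dd reports success.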

Read more at OpenSource.com

Early Uses of Blockchain Will Barely Be Visible, Says Hyperledger’s Brian Behlendorf

The blockchain revolution is coming, but you might not see it. That’s the view of Brian Behlendorf, executive director of the Linux Foundation’s Hyperledger Project.

Speaking at the TC Sessions: Blockchain event in Zug, Switzerland, Behlendorf explained that much of the innovation that blockchains introduce is primed to happen behind the scenes, unbeknownst to most.

“For a lot of consumers, you’re not going to realize when the bank or a web form at a government website or when you go to LinkedIn and start seeing green check marks against people’s claims that they attended this university — which are all behind-the-scenes that will likely involve blockchain,” Behlendorf told interviewer John Biggs.

“This is a revolution in storage and networking and consumers.”

Read more at TechCrunch

How Open Source Can Transform the Way a Company’s Developers Work Together

Open source has been a tech mainstay for decades in large part, as Tilde co-founder and JavaScript veteran Yehuda Katz has argued, because it “gives engineers the power to collaborate across …companies without involving [business development].”

“The benefits of this workaround are extraordinary and underappreciated,” Katz continued. But open source offers something just as extraordinary and even more underappreciated, something that edX community lead John Mark Walker recently pointed out on Twitter.

Namely, what open source does to collaboration among engineers inside the same company.

According to Walker, “one of the little known secrets is that [open source] allows eng[ineering] teams in the same company to collab[orate] without management getting in the way.” 

Read more at TechRepublic

Anatomy of a Linux DNS Lookup – Part I

Since I work a lot with clustered VMs, I’ve ended up spending a lot of time trying to figure out how DNS lookups work. I applied ‘fixes’ to my problems from StackOverflow without really understanding why they work (or don’t work) for some time.

Eventually I got fed up with this and decided to figure out how it all hangs together. I couldn’t find a complete guide for this anywhere online, and when I talked to colleagues, they didn’t know of any (or really know what happens in detail).

So I’m writing the guide myself.

The first thing to grasp is that there is no single method of getting a DNS lookup done on Linux. It’s not a core system call with a clean interface.

There is, however, a standard C library call which many programs use: getaddrinfo. But not all applications use this!
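A quick way to see getaddrinfo in action without writing C is getent: its "ahosts" database resolves names through getaddrinfo itself, following the same NSS configuration (/etc/nsswitch.conf) that getaddrinfo-using programs follow:

```shell
# getent's ahosts lookup goes through getaddrinfo, so its output shows
# what a getaddrinfo-based program would see for this name:
addrs=$(getent ahosts localhost)
echo "$addrs"
```

Comparing this output against what a non-glibc resolver returns for the same name is a handy way to spot applications that bypass getaddrinfo entirely.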

Let’s just take two simple standard programs: ping and host:

Read more at ZwischenZugs