
What Are ‘Mature’ Stateful Applications?

BlueK8s is a new open source Kubernetes initiative from ‘big data workloads’ company BlueData, and the project’s direction tells us a little about how containerised, cloud-centric applications are evolving.

The first open project in the BlueK8s initiative is Kubernetes Director (aka KubeDirector), for deploying and managing distributed ‘stateful applications’ with Kubernetes.

Apps can be stateful or stateless…

A stateful app is a program that saves client data from the activities of one session for use in the next session — the data that is saved is called the application’s state.

Typically, stateless applications are microservices or containerised applications that have no need for long-running [data] persistence and aren’t required to store data.

Read more at TechTarget

What Serverless Architecture Actually Means, and Where Servers Enter the Picture

Serverless architecture is not, despite its name, the elimination of servers from distributed applications. Serverless architecture refers to a kind of illusion, originally made for the sake of developers whose software will be hosted in the public cloud, but which extends to the way people eventually use that software. Its main objective is to make it easier for a software developer to compose code, intended to run on a cloud platform, that performs a clearly-defined job.

If all the jobs on the cloud were, in a sense, “aware” of one another and could leverage each other’s help when they needed it, then the whole business of whose servers are hosting them could become trivial, perhaps irrelevant. And not having to know those details might make these jobs easier for developers to program. Conceivably, much of the work involved in attaining a desired result might already have been done.

“What does serverless mean for us at [Amazon] AWS?” asked Chris Munns, senior developer advocate for serverless at AWS, during a session at the re:Invent 2017 conference. “There’s no servers to manage or provision at all. This includes nothing that would be bare metal, nothing that’s virtual, nothing that’s a container — anything that involves you managing a host, patching a host, or dealing with anything on an operating system level, is not something you should have to do in the serverless world.”
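To make the illusion concrete, here is a hedged sketch of running a function from the command line without touching any host; it assumes the AWS CLI is installed and configured, and the function name hello-world is hypothetical:

aws lambda invoke --function-name hello-world response.json   # run the function in the cloud
cat response.json                                             # inspect its output locally

At no point in that exchange is a machine provisioned, patched, or even visible.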

Read more at ZDNet

Tips for Success with Open Source Certification

In today’s technology arena, open source is pervasive. The 2018 Open Source Jobs Report found that hiring open source talent is a priority for 83 percent of hiring managers, and half are looking for candidates holding certifications. And yet, 87 percent of hiring managers also cite difficulty in finding the right open source skills and expertise. This article is the second in a weekly series on the growing importance of open source certification.

In the first article, we focused on why certification matters now more than ever. Here, we’ll focus on the kinds of certifications that are making a difference, and what is involved in completing necessary training and passing the performance-based exams that lead to certification, with tips from Clyde Seepersad, General Manager of Training and Certification at The Linux Foundation.

Performance-based exams

So, what are the details on getting certified and what are the differences between major types of certification? Most types of open source credentials and certification that you can obtain are performance-based. In many cases, trainees are required to demonstrate their skills directly from the command line.

“You’re going to be asked to do something live on the system, and then at the end, we’re going to evaluate that system to see if you were successful in accomplishing the task,” said Seepersad. This approach differs from multiple-choice exams and other tests where the possible answers are put in front of you. Certification programs often involve online self-paced courses, so you can learn at your own speed, but the exams can be tough and require a demonstration of expertise. That’s part of why the certifications they lead to are valuable.

Certification options

Many people are familiar with the certifications offered by The Linux Foundation, including the Linux Foundation Certified System Administrator (LFCS) and Linux Foundation Certified Engineer (LFCE) certifications. The Linux Foundation intentionally maintains separation between its training and certification programs and uses an independent proctoring solution to monitor candidates. It also requires that all certifications be renewed every two years, which gives potential employers confidence that skills are current and have been recently demonstrated.

“Note that there are no prerequisites,” Seepersad said. “What that means is that if you’re an experienced Linux engineer, and you think the LFCE, the certified engineer credential, is the right one for you…, you’re allowed to do what we call ‘challenge the exams.’ If you think you’re ready for the LFCE, you can sign up for the LFCE without having to have gone through and taken and passed the LFCS.”

Seepersad noted that the LFCS credential is great for people starting their careers, while the LFCE credential is valuable for people who already have Linux experience, including volunteer experience, and now want to demonstrate the breadth and depth of their skills to employers. He also said that the LFCS and LFCE coursework prepares trainees to work with various Linux distributions. Other certification options, such as the Kubernetes Fundamentals and Essentials of OpenStack Administration courses and exams, have also made a difference for many people as cloud adoption has increased around the world.

Seepersad added that certification can make a difference if you are seeking a promotion. “Being able to show that you’re over the bar in terms of certification at the engineer level can be a great way to get yourself into the consideration set for that next promotion,” he said.

Tips for Success

In terms of practical advice for taking an exam, Seepersad offered a number of tips:

  • Set the date, and don’t procrastinate.

  • Look through the online exam descriptions and get any training needed to be able to show fluency with the required skill sets.

  • Practice on a live Linux system. This can involve downloading a free terminal emulator or other software and actually performing tasks that you will be tested on; a sample drill appears below.
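As a purely illustrative practice drill (the user student, the group reports, and the directory /srv/reports are made-up names, not exam content), an exam-style task sequence might look like this:

sudo useradd -m student              # create a user with a home directory
sudo groupadd reports                # create a new group
sudo usermod -a -G reports student   # append the user to the group
sudo mkdir /srv/reports              # create a shared directory
sudo chgrp reports /srv/reports      # hand group ownership to reports
sudo chmod 770 /srv/reports          # give owner and group full access, others none

Working through small sequences like this from memory, against the clock, is good preparation for the pace of a live exam.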

Seepersad also noted some common mistakes that people make when taking their exams. These include spending too long on a small set of questions, wasting too much time looking through documentation and reference tools, and applying changes without testing them in the work environment.

With open source certification playing an increasingly important role in securing a rewarding career, stay tuned for more certification details in this article series, including how to prepare for certification.

Learn more about Linux training and certification.

Xen Project Hypervisor Power Management: Suspend-to-RAM on Arm Architectures

About a year ago, we started a project to lay the foundation for full-scale power management for applications involving the Xen Project Hypervisor on Arm architectures. We intend to make Xen on Arm’s power management the open source reference design for other Arm hypervisors in need of power management capabilities.

Looking at Previous Examples for Initial Approach

We looked at the older ACPI-based power management for Xen on x86, which features CPU idling (cpu-idle), CPU frequency scaling (cpu-freq), and suspend-to-RAM. We also looked at the PSCI platform management and pass-through capabilities of Xen on Arm, which already existed but did not have any power management support. We decided to take a different path from x86 because we could not rely on ACPI, which is not widespread in the Arm embedded community. Xen on Arm already used PSCI for booting secondary CPUs, system shutdown, restart, and other miscellaneous platform functions; thus, we decided to follow that precedent and base our implementation on PSCI.

Among the typical power management features, such as cpu-idle, cpu-freq, suspend-to-RAM, hibernate and others, we concluded that suspend-to-RAM would be the one best suited for our initial targets, systems-on-chips (SoCs). Most SoCs allow the CPU voltage domain to be completely powered off while the processor subsystem is suspended, and the state preserved in the RAM self-refresh mode, thereby significantly cutting the power consumption, often down to just tens of milliwatts.
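For a sense of what suspend-to-RAM looks like from inside a running Linux system, the standard sysfs power interface offers a minimal sketch, assuming the kernel and platform support the mem state:

cat /sys/power/state                  # list supported sleep states, e.g. "freeze mem"
echo mem | sudo tee /sys/power/state  # enter suspend-to-RAM

In the design described below, a suspend request of this kind from a guest ultimately reaches the hypervisor as a PSCI call.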

Our Design Approach

Our solution provides a framework that is well suited for embedded applications. In our suspend-to-RAM approach, each unprivileged guest is given a chance to suspend on its own and to configure its own wake-up devices. At the same time, the privileged guest (Dom0) is considered to be a decision maker for the whole system: it can trigger the suspend of Xen, regardless of the states of the unprivileged guests.

These two features allow for different Xen embedded configurations and use-cases. They make it possible to freeze an unprivileged guest due to an ongoing suspend procedure, or to inform it about the suspend intent, giving it a chance to cooperate and suspend itself. These features are the foundation for higher level coordination mechanisms and use-case specific policies.
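As an illustration of a guest arming a wake-up source and suspending itself, a Linux guest could use rtcwake, which programs the RTC alarm before entering suspend; the 60-second interval here is arbitrary:

sudo rtcwake -m mem -s 60   # suspend to RAM, wake via RTC alarm after 60 seconds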

Solution Support

Our solution relies on the PSCI interface to allow guests to suspend themselves, and to enable the hypervisor to suspend the physical system. It further makes use of EEMI to enable guest notifications when the suspend-to-RAM procedure is initiated. EEMI stands for Embedded Energy Management Interface, and it is used to communicate with the power management controller on Xilinx devices. On the Xilinx Zynq UltraScale+ MPSoC we were able to suspend the whole application subsystem with Linux and Xen and put the MPSoC into its deep-sleep state, where it consumes only 35 mW. Resuming from this state is triggered by a wake-up interrupt that can be owned by either Dom0 or an unprivileged guest.

After the successful implementation of suspend-to-RAM, the logical next step is to introduce CPU frequency scaling and CPU idling based on the aggregate load and performance requirements of all VMs.

While an individual VM may be aware of its own performance needs, its utilization level, and the resulting CPU load, this information only applies to the virtual CPUs assigned to that guest. Since a VM is aware neither of the virtual-to-physical CPU mappings nor of the other VMs and their performance needs, it is not in a position to make suitable decisions regarding the power and performance states of the SoC.

The hypervisor, on the other hand, is scheduling the virtual CPUs and needs to be aware of their utilization of the physical CPUs. Having this visibility, the hypervisor is well suited to make power management decisions concerning the frequency and idle states of the physical CPUs. In our vision, the hypervisor scheduler will become energy aware and allocate energy consumption slots to guests, rather than time slots.

Currently, our work is focused on testing the new Xen suspend-to-RAM feature on the Xilinx Zynq UltraScale+ MPSoC. We invite Xen Project developers to join the Xen power management activity and to implement and test the new feature on other Arm architectures, which will accelerate the upstreaming effort and the accompanying cleanup.

Authors

Mirela Grujic, Principal Engineer at AGGIOS

Davorin Mista, VP Engineering and Co-Founder at AGGIOS

Stefano Stabellini, Principal Engineer at Xilinx and Xen Project Maintainer

Vojin Zivojnovic, CEO and Co-Founder at AGGIOS


Tickets Make Operations Unnecessarily Miserable

IT Operations has always been difficult. There is always too much work to do, not enough time to do it, and frequent interrupts. Moreover, there is the relentless pressure from executives who hold the view that everything takes too long, breaks too often, and costs too much.

In search of improvement, we have repeatedly bet on new tools to improve our work. We’ve cycled through new platforms (e.g., Virtualization, Cloud, Docker, Kubernetes) and new automation (e.g., Puppet, Chef, Ansible). While each comes with its own merits, has the stress and overload on operations fundamentally changed?

Enterprises have also spent the past two decades liberally applying management frameworks like ITIL and COBIT. Would an average operations engineer say things have gotten better or worse?

In the midst of all of this, there is conventional wisdom that rarely gets questioned.

The first of these is the idea that grouping people by functional role should be the primary driver for org structure. I discussed the problem with this idea extensively in a previous post on silos.

Read more at Rundeck

Converting and Manipulating Image Files on the Linux Command Line

Most of us probably know how wonderful a tool Gimp is for editing images, but have you ever thought about manipulating image files on the command line? If not, let me introduce you to the convert command. It easily converts files from one image format to another and allows you to perform many other image manipulation tasks as well — and in a lot less time than it would take to make these changes using desktop tools.

Let’s look at some simple examples of how you can make it work for you.

Converting files by image type

Converting an image from one format to another is extremely easy with the convert command. Just use a convert command like the one in this example:

$ convert arrow.jpg arrow.png

The arrow.png image should look the same as the original arrow.jpg file, but it will have the specified file extension and a different file size.
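The same command handles many other common manipulations. For example, resizing and rotating (the output file names below are just placeholders):

$ convert arrow.jpg -resize 50% arrow-half.png    # scale the image to half size
$ convert arrow.jpg -rotate 90 arrow-rotated.jpg  # rotate 90 degrees clockwise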

Read more at Network World

Debian 9.5 Released: “Rock Solid” GNU/Linux Distro Arrives With Spectre v2 Fix

Following the fourth point release of Debian 9 “stretch” in March, the developers of the popular GNU/Linux distro have shipped the latest update to the stable distribution. For those who don’t know, Debian 9 is an LTS version that’ll remain supported for 5 years.

As one would expect, this point release doesn’t bring any new features; it focuses on improving an already stable experience by delivering security patches and bug fixes. In case you’re looking for an option that brings new features, you can check out the recently released Linux Mint 19.

Coming back to Debian 9.5, all the security patches shipping with the release have already been published in the form of security advisories, and their references can be found in the official release post.

To be precise, Debian 9.5 was released with 100 security updates and 91 bug fixes spread across different packages.
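For anyone already running stretch, moving to the point release simply means pulling in the updated packages; a minimal sketch with apt:

sudo apt-get update    # refresh the package lists
sudo apt-get upgrade   # install the updated packages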

Read more at FOSSBytes

How Open Source Became The Default Business Model For Software

Since its inception in 1998, open source has become the de facto standard for software development and proven itself as a viable business model. While making source code freely available for redistribution and modification may seem counterintuitive, the success of companies like Red Hat and Canonical is proof that an open source model can turn a profit.

Investment from multinational, enterprise companies like Google, Facebook, and Adobe, points to the growing value of open source and its longevity. It should come as no surprise: at the heart of open source is fast-paced innovation in the form of collaboration and knowledge sharing. When everyone is encouraged to work together, the rate of progress is greatly increased.

Read more at Forbes

Users, Groups and Other Linux Beasts: Part 2

In this ongoing tour of Linux, we’ve looked at how to manipulate folders/directories, and now we’re continuing our discussion of permissions, users and groups, which are necessary to establish who can manipulate which files and directories. Last time, we showed how to create new users, and now we’re going to dive right back in:

You can create new groups with the groupadd command and then add users to them at will. For example, using:

sudo groupadd photos

will create the photos group.

You’ll need to create a directory hanging off the root directory:

sudo mkdir /photos

If you run ls -l /, one of the lines will be:

drwxr-xr-x 1 root root 0 jun 26 21:14 photos

The first root in the output is the user owner and the second root is the group owner.

To transfer the ownership of the /photos directory to the photos group, use

sudo chgrp photos /photos

The chgrp command typically takes two parameters: the first is the group that will take ownership of the file or directory, and the second is the file or directory you want to hand over to the group.

Next, run ls -l / and you’ll see the line has changed to:

drwxr-xr-x  1 root photos  0 jun 26 21:14 photos

You have successfully transferred the ownership of your new directory over to the photos group.

Then, add your own user and the guest user to the photos group:

sudo usermod -a -G photos <your username here>
sudo usermod -a -G photos guest

You may have to log out and log back in to see the changes, but, when you do, running groups will show photos as one of the groups you belong to.
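For a hypothetical user named alice, running groups would then print something like:

alice sudo photos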

A couple of things to point out about the usermod command shown above. First: Be careful not to use the -g option instead of -G. The -g option changes your primary group and could lock you out of your stuff if you use it by accident. -G, on the other hand, adds you to the groups listed and doesn’t mess with the primary group. If you want to add your user to more than one group, list them one after another, separated by commas, no spaces, after -G:

sudo usermod -a -G photos,pizza,spaceforce <your username>

Second: Be careful not to forget the -a parameter. The -a parameter stands for append and attaches the list of groups you pass to -G to the ones you already belong to. This means that, if you don’t include -a, the list of groups you already belong to will be overwritten, again locking you out of stuff you need.

Neither of these is a catastrophic problem, but it will mean you have to add your user back manually to all the groups you belonged to, which can be a pain, especially if you have lost access to the sudo and wheel groups.

Permits, Please!

There is still one more thing to do before you can copy images to the /photos directory. Notice how, when you did ls -l / above, permissions for that folder came back as drwxr-xr-x.

If you read the article I recommended at the beginning of this post, you’ll know that the first d indicates that the entry in the file system is a directory, and then you have three sets of three characters (rwx, r-x, r-x) that indicate the permissions for the user owner (rwx) of the directory, then the group owner (r-x), and finally the rest of the users (r-x). This means that the only person who has write permissions so far, that is, the only person who can copy or create files in the /photos directory, is the root user.

But that article I mentioned also tells you how to change the permissions for a directory or file:

sudo chmod g+w /photos

Running ls -l / after that will show the /photos permissions as drwxrwxr-x, which is what you want: group members can now write into the directory.
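The same permissions can also be expressed in octal notation, where rwxrwxr-x corresponds to 775:

sudo chmod 775 /photos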

Now you can try and copy an image or, indeed, any other file to the directory and it should go through without a problem:

cp image.jpg /photos

The guest user will also be able to read from and write to the directory, and even move or delete files created by other users within the shared directory.

Conclusion

The permissions and privileges system in Linux has been honed over decades, inherited as it is from the old Unix systems of yore. As such, it works very well and is well thought out. Becoming familiar with it is essential for any Linux sysadmin; in fact, you can’t do much admining at all unless you understand it. But it’s not that hard.

Next time, we’ll dive into files and look at the different ways of creating, manipulating, and destroying them. Always fun, that last one.

See you then!

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

How Developers Can Get Involved with Open Source Networking

Linux Foundation open networking leader describes the challenges and advantages of working across communities.

There have always been integration challenges with open source software, whether in pulling together Linux distributions or in mating program subsystems developed by geographically distributed communities. However, today we’re seeing those challenges writ large with the rise of large ecosystems of projects in areas such as networking and cloud-native computing.

Integration was one topic of my conversation with Heather Kirksey, the VP of Community and Ecosystem Development at the Linux Foundation, recorded for the Cloudy Chat podcast. We also talked about modularity and how developers can get involved with open source networking. For the past three years, Kirksey has directed the Linux Foundation’s Open Platform for Network Functions Virtualization (OPNFV), which is now part of the LF Networking Fund that’s working to improve collaboration and efficiency across open source networking projects.

“One of the challenges we have right now is that we have brought together a bunch of formerly discrete networking communities,” says Kirksey.

Read more at OpenSource.com