
Linux commands: How to manipulate process priority 

Make your Linux processes play nice with each other.
Read More at Enable Sysadmin
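As a hedged aside (not from the linked article), the same priority controls that nice and renice expose are also available programmatically; a minimal Python sketch:

```python
import os

# Raise this process's nice value by 10 (a higher nice value means
# a lower scheduling priority); os.nice() returns the new value.
print("new niceness:", os.nice(10))

# Inspect and adjust a process by PID. Lowering a nice value (raising
# priority) normally requires root; raising it further does not.
pid = os.getpid()  # illustrative target; substitute any PID you own
os.setpriority(os.PRIO_PROCESS, pid, 15)
print("current niceness:", os.getpriority(os.PRIO_PROCESS, pid))
```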

Using Podman and Docker Compose

Podman 3.0 now supports Docker Compose to orchestrate containers.
Read More at Enable Sysadmin
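As a minimal sketch of how this fits together (assuming Podman 3.0+ with its Docker-compatible API socket enabled, e.g. via systemctl --user start podman.socket, and docker-compose on the PATH; the socket path below is the usual rootless default and may differ on your system):

```python
import os
import subprocess

# Assumed rootless socket path; verify with `podman info`.
socket = f"unix:///run/user/{os.getuid()}/podman/podman.sock"

# docker-compose speaks the Docker API; DOCKER_HOST redirects it to Podman.
env = {**os.environ, "DOCKER_HOST": socket}
subprocess.run(["docker-compose", "up", "-d"], env=env, check=True)
```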

Cut your Cloud Computing Costs by Half with Unikraft

A novel modular unikernel allows for extreme tailoring of your operating system to your application’s needs. A proof of concept built on Unikraft, a Xen Project subproject, shows efficiency improvements of up to 50% over standard Linux on AWS EC2.

Cloud computing has revolutionized the way we think about IT infrastructure: Another web server? More database capacity? Resources for your artificial intelligence use case? Just spin up another instance, and you are good to go. Virtualization and containers have allowed us to deploy services without worrying about physical hardware constraints. As a result, most companies rely heavily on microservices: individual servers highly specialized to perform a specific task.

The problem is that general-purpose operating systems such as Linux struggle to keep pace with this growing trend toward specialization. The status quo is that most microservices are built on top of a complete Linux kernel and distribution. It is as if you wanted to enable individual air travel with only one passenger seat per aircraft but kept the powerful engines of a jumbo jet. The result of this proliferation of general-purpose OSes in the cloud is bloated instances that feast on memory and processing power while uselessly burning electrical energy as well as your infrastructure budget.

Figure 1. Linux kernel components have strong inter-dependencies making it difficult to remove or replace them.

Still, putting Linux and other monolithic OSes on a diet is far from trivial. Removing unneeded components from the Linux kernel is a tedious endeavor due to the interdependencies among kernel subsystems: Figure 1 above illustrates a large number of such inter-dependencies, where a line denotes a dependency and a blue number the count of dependencies between two components.

An alternative is to build so-called unikernels: images tailored to specific applications and often built on much smaller kernels. Unikernels have shown great promise and performance numbers (e.g., boot times of a few milliseconds, memory consumption of only a few MBs when running off-the-shelf applications such as nginx, and high throughput). However, their Achilles’ heel has been that they often require substantial expert work to create, and at least part of that work has to be redone for each additional application. These issues, coupled with the fact that most unikernel projects lack a rich set of tools and a surrounding ecosystem (e.g., Kubernetes integration, debugging and monitoring tools, etc.), have resulted in 1 GB Linux instances for jobs as simple as delivering static web pages.

Unikraft: A Revolutionary Way Forward

Unikraft is on a mission to change that. In stark contrast to other unikernel projects, Unikraft, a Xen Project subproject, has developed a truly modular unikernel common code base from which building tailor-made (uni)kernels is orders of magnitude faster than in the past.

“Without Unikraft, you have to choose between unikernel projects that only work for a specific language or application, or projects that aim to support POSIX but do so while sacrificing performance and thus defeating the purpose of using unikernels in the first place,” says Felipe Huici, one of the Unikraft team’s core contributors.

“Unikraft aims to run a large set of off-the-shelf applications and languages (C/C++, Python, Go, Ruby, Lua, and WASM are supported, with Rust and Java on the way) while still allowing easy customization and even removal of unneeded kernel parts. It also provides a set of rich, performance-oriented APIs that allow further customization by plugging the application in at different levels of the stack for even higher performance.”

A sample of such APIs is shown in Figure 2 below.

Figure 2. Rhea architecture (APIs in black boxes) enables specialization by allowing apps to plug into APIs at different levels and to choose from multiple API implementations.

In terms of POSIX compatibility, Unikraft already supports more than 130 syscalls, and the number is continuously increasing. While this is certainly short of the 300+ that Linux supports, it turns out that only a subset of these is needed to run most major server applications. This, along with ongoing efforts to support standard frameworks such as Kubernetes and Prometheus, makes Unikraft an enticing proposition and marks the coming of age of unikernels into the mainstream.
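One hedged way to check this claim yourself (assuming strace is installed) is to count the distinct syscalls a workload actually makes; even non-trivial binaries typically touch only a few dozen:

```python
import subprocess

# `strace -c` runs a command and prints a per-syscall usage summary
# (on stderr) instead of a full trace.
result = subprocess.run(["strace", "-c", "ls", "/"],
                        capture_output=True, text=True)
print(result.stderr)
```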

Unikraft Goes to the Cloud

But what’s really in it for end users? To demonstrate the power and efficiency of Unikraft, the team created an experimental Unikraft AWS EC2 image running nginx, currently the world’s most popular web server. “We built a Unikraft nginx image and compared it against nginx running on an off-the-shelf Debian image, measuring the performance of the two when serving static web pages. We’ve been more than pleased with the results,” says Huici. “On Unikraft, nginx could handle twice the number of requests per second compared to the Debian instance. Or you could take a less performant AWS EC2 instance at half the price and get the same job done. Further, Unikraft needed about a sixth of the memory to run.” The throughput results can be seen in Figure 3 below.
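The team’s exact benchmark setup isn’t described here; as a crude, hedged stand-in (a real comparison would use a proper load generator such as wrk or ab), a sequential probe like this can at least put two endpoints side by side:

```python
import time
import urllib.request

def rough_rps(url: str, duration: float = 10.0) -> float:
    """Count sequential completed requests over a fixed window."""
    count = 0
    deadline = time.monotonic() + duration
    while time.monotonic() < deadline:
        with urllib.request.urlopen(url) as resp:
            resp.read()  # drain the body so the request fully completes
        count += 1
    return count / duration

# Hypothetical endpoints; substitute your own instances:
# print(rough_rps("http://unikraft-host/"), rough_rps("http://debian-host/"))
```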

So far, this is only a proof of concept, but Huici and the Unikraft team are moving quickly. “We are currently working on a system to make the process of creating a Unikraft image as easy as online shopping.” This includes analyzing the applications that are meant to run on top of it and providing a ready-to-use operating system that has everything the specific use case needs, and nothing more. “Why should we waste money and resources, and pollute the environment, running software in the background that isn’t needed for a specific service?”

About the author: Simon Kuenzer is the Project Lead and Maintainer of Unikraft, which is part of the Xen Project at the Linux Foundation.

Predictions 2021: Open Networking & Edge

As we wrap up 2020, I wanted to take a moment to look at where the industry is headed and what we’ve learned this year. 

Telecom & Cloud ‘Plumbing’ based on 5G Open Source will drive accelerated investments from top markets (Government, Manufacturing, and Enterprises) 

This broad acceptance of open networking stacks shows the true power of what is possible when fat, fast, and functional features are at your fingertips. See information on ONAP’s Guilin release, EdgeX Foundry’s Hanoi release, and this recent post from FierceTelecom.

The last piece of the “open” puzzle will fall into place: the Radio Access Network (RAN)

The final closed architecture in the 148-year-old Telecom industry — the RAN — is finally open! 2021 will bring the first build-outs of open RAN technology in close collaboration with Edge and Core. Visit the O-RAN Software Community for more information.

“Remote Work” will continue to be the greatest positive distraction, especially within the open source community

LFN and LFE saw roughly 25-40% growth in developers and contributions during 2020, and we expect the pace to pick up to almost 50% as more vertical industries embrace open source technologies. See Software Defined Vertical Industries: Transformation Through Open Source, a Linux Foundation white paper.

“Futures” (aka bells-and-whistles features and future-looking capabilities) will give way to “functioning blueprints”

Open source interoperability, compliance, and verification for rapid deployment become the highest priority in 2021, going beyond software alone. See the latest Blueprints from LF Edge’s Akraino project, as well as information on OPNFV + CNTT’s latest integrations.

AI/ML technologies become mainstream 

Closed loop control in an Intelligent Network paves the way for Intent-based Networking, and Predictive Maintenance emerges as a top use case in Edge using AI/ML.  What do you expect 2021 will bring to the open networking and edge table?

What did I miss? I would love to have your comments on LinkedIn.

About the Author: Arpit Joshipura is General Manager, Networking, Edge & IoT at the Linux Foundation.

How to record your Linux terminal using asciinema


Asciinema might be the application you’ve been looking for to demonstrate a skill or process that you want your colleagues or students to learn on-demand.
tcarriga
Fri, 12/11/2020 at 10:06pm

Image by Rudy and Peter Skitterians from Pixabay

In my line of work, as well as in many hands-on technical positions, there are times when recording your work is necessary. Sometimes, it’s an advanced form of note-taking; other times, it’s a quick and easy way to send someone junior a how-to. You could even record your terminal for “insurance” if you are the paranoid type. Either way, there is no denying that terminal recording software is a neat and practical tool to have in your arsenal.
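As a hedged sketch (assuming the asciinema CLI is installed and on the PATH), a recording session can even be scripted; `rec` drops you into a shell whose session is captured until you exit:

```python
import subprocess

# Record a terminal session into demo.cast; exit the spawned shell
# (or press Ctrl-D) to stop recording.
subprocess.run(["asciinema", "rec", "demo.cast"], check=True)

# Replay the capture right in the terminal.
subprocess.run(["asciinema", "play", "demo.cast"], check=True)
```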

Topics:  
Linux  
Command line utilities  
Read More at Enable Sysadmin

SELinux troubleshooting and pitfalls


SELinux can be challenging to troubleshoot, but by understanding the components of the service, you can handle whatever challenges it throws your way.
Alex Callejas
Thu, 12/10/2020 at 6:46pm

Image by MustangJoe from Pixabay

You can’t let your failures define you. You have to let your failures teach you ― Barack Obama

One of the great battles, especially with third-party solution providers, is maintaining the security of our servers. In many cases, the challenge is the request to disable SELinux so that an application can run smoothly. Fortunately, that is occurring less and less.

In most of these cases, a bit of analysis is enough to find the right fix or workaround.
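As a hedged illustration of that analysis step (assuming the audit tools and audit2allow are installed and you have privileges to read the audit log), recent AVC denials can be pulled and translated into a candidate policy to review; never load a generated module without inspecting it first:

```python
import subprocess

# Collect recent AVC (access vector) denials from the audit log.
denials = subprocess.run(
    ["ausearch", "-m", "AVC", "-ts", "recent"],
    capture_output=True, text=True,
).stdout

# Ask audit2allow what a matching policy module would look like.
suggestion = subprocess.run(
    ["audit2allow"], input=denials, capture_output=True, text=True,
).stdout
print(suggestion)
```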

Topics:  
Linux  
Linux Administration  
Security  
Read More at Enable Sysadmin

5 reasons why you should develop a Linux container strategy

If you’ve shunned containers in the past, these five advantages will make you rethink containerization.
Read More at Enable Sysadmin

The 7 most used Linux namespaces

Check out this brief overview of what the seven most used Linux namespaces are.
Read More at Enable Sysadmin
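As a quick, hedged illustration of the concept, every Linux process exposes its namespace memberships under /proc/self/ns, covering the types the article discusses (mnt, pid, net, ipc, uts, user, cgroup):

```python
import os

# Each symlink target encodes a namespace type and inode, e.g.
# 'net:[4026531992]'; two processes with the same target share
# that namespace.
for name in sorted(os.listdir("/proc/self/ns")):
    print(name, "->", os.readlink(f"/proc/self/ns/{name}"))
```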

What actions do you take when patching goes wrong?

Find out how to handle situations when patching your Linux systems doesn’t go as planned.
Read More at Enable Sysadmin

Continuous Delivery in the Age of Microservices and COVID-19

The goal of continuous delivery (CD) is to produce high-quality software rapidly. While the emergence of microservices and cloud-native technology has brought huge benefits in scalability, it has added a layer of complexity to this approach. Security is another big challenge. In this discussion with Tracy Miranda, Executive Director of the Continuous Delivery Foundation, we talked about some of the pain points organizations face when bolstering their CD practices and how the Foundation is helping to address them.

Swapnil Bhartiya: How would you define continuous delivery? Also, what about the CI part of it because when we talk about it, we always say CI/CD?

Tracy Miranda: We define continuous delivery as a software engineering approach in which teams work in short cycles and ensure that the code can be released at any point in time. Now, traditionally, people tend to speak a lot about continuous integration and continuous delivery (CI/CD). Continuous integration is when developers regularly commit, at least once a day, to a mainline and keep that mainline up to date. But I see continuous delivery as the umbrella of all the practices you need to keep that software ready to be released at any time. That includes continuous integration, security features, testing, and so on. It’s a general set of practices.

Swapnil Bhartiya: CI/CD is a solved problem and there are many open-source projects around it. What role is the Foundation playing in this space?

Tracy Miranda: We know a lot about continuous delivery today and we appreciate that it is really important because it makes such a difference to every business today — not just software companies, but also banks and the healthcare industry. However, the adoption of continuous delivery practices is super low. Many people think they’re doing it, but maybe they’re doing some continuous integration and haven’t quite figured out how to get the rest of the automation in place.

To top it off, what makes things even more complicated is that we’ve seen the rise of microservices and cloud-native technology. While these give us huge benefits in terms of scalability and the ability to work on separate parts of the application independently, they have also increased challenges, like a proliferation of environments and teams having to contend with all the different parts that make up an application.

The Continuous Delivery Foundation is there to help support teams and organizations in the adoption of these practices both from the sense of taking advantage of open source projects in the space and democratizing the best practices. We have a very recent working group that’s spun up to help anyone in this space get better at delivering software.

Swapnil Bhartiya: Security is becoming a serious concern and no longer an after-thought. In most cases, we see that companies were compromised not because of some zero-day, but because they didn’t apply the patch to a known vulnerability. When you have billions of deployments of your applications, it becomes challenging. Talk about the role CD plays in improving security.

Tracy Miranda: Security is a top concern. I think there are lots of different elements to this. On one hand, we talk a lot about the shift-left of security. We need to make sure the security professionals and the folks focused on security are tightly involved with the rest of the team, so there are no silos and people don’t regard security as someone else’s problem. Security starts with the developers.

As an industry, I think it’s really important that we work together to solve industry-level problems such as applying patches that are already available. It’s more or less an outreach problem. We need to be better at telling people to keep their systems updated. We need to cut through the noise of all the different messaging they’re hearing. I think that’s another example where something like the Continuous Delivery Foundation can make a difference in addressing these broad industry problems.

Swapnil Bhartiya: You also mentioned microservices as a challenge for companies. What is being done around solving the problem of continuous delivery for microservices?

Tracy Miranda: That’s a great question. We definitely have a big split between folks who are used to delivering a monolith, with their existing setup all geared towards supporting that, and an increasing number of folks who are trying to take advantage of microservices and all their implications. One of the hot topics that’s emerged for us is configuration management. The way we think about this: earlier, the scope of your application was very well defined. With microservices, the definition of an application changes — it’s a set of microservices. How do we talk about which version of each microservice goes into a specific app? If we are continuously pushing code and integrating it, how are those different versions changing relative to each other? How are we testing it all together? So, we definitely think configuration management is a really hot topic, and people are looking at tooling in the space. I think we have a couple of interesting projects that might be coming into the CDF pipeline that will specifically help to drive visibility into this space and give people better tooling to manage all the dependencies around microservices.
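As a purely hypothetical sketch of the problem Miranda describes (not any CDF tool’s format), pinning an application as a versioned set of microservices makes “what changed between releases?” a mechanical question:

```python
# Hypothetical manifests: the "application" is the pinned set of services.
release_a = {"frontend": "1.14.1", "cart": "0.9.7", "payments": "2.3.0"}
release_b = {"frontend": "1.14.2", "cart": "0.9.7", "payments": "2.4.0"}

def changed(old: dict, new: dict) -> dict:
    """Map each service whose pin moved to its (old, new) versions."""
    return {svc: (old.get(svc), ver)
            for svc, ver in new.items()
            if old.get(svc) != ver}

print(changed(release_a, release_b))
# {'frontend': ('1.14.1', '1.14.2'), 'payments': ('2.3.0', '2.4.0')}
```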

Swapnil Bhartiya: There are so many projects and open-source tools for CD, which may also lead to a problem of interoperability. How big a concern is this for the Foundation, and what are you doing to increase interoperability among these tools?

Tracy Miranda: Interoperability is one of those problems where, if you’re just working within your own organization, it’s not really a problem until it’s time to adopt a new tool or add something to your workflow. If we step back and look across the industry as a whole, the landscape at the moment is hugely fragmented. There are a lot of tools doing similar things. It’s very difficult for people to move between different CI tools or pipeline orchestration tools without going through a lot of pain to figure out how. Providers have to implement plugins for different systems. It’s a waste of time, and it slows down innovation when we could be moving up the stack.

I think where we are today, there’s a greater appreciation from end users who are saying, “We want to simplify this. We want to find better ways for tools to interoperate.” At CDF, one of the very first special interest groups we had was an interoperability working group. This is a set of like-minded folks who got together and said, “As an industry, we should be better and we can be better. We need to figure that out.” It’s a really good group of folks who build projects like Jenkins X, Tekton, and Spinnaker. We’ve also got a lot of end-user members represented, like Ericsson and eBay, to make sure that as the problems are being solved, they apply to real-world use cases.

It’s an open group, and people are welcome to join these conversations. At the moment, there is a discussion on standardizing interfaces and metadata. Why can’t we have a standardized way to express all the metadata around a release, or all the metadata around a set of testing results? I am really excited about what this group is doing and look forward to seeing whether they can achieve this very difficult goal and bring some consolidation to the tooling.

Swapnil Bhartiya: One last question before we wrap this up: how is COVID-19 affecting continuous delivery?

Tracy Miranda: Adoption has definitely increased. We have seen surveys showing that the adoption of continuous delivery is growing. The pandemic has emphasized the need to be more resilient and to adapt quickly. Most organizations are going to evolve to be very distributed. Continuous delivery practices enable all of those things. Companies already following these practices have a significant advantage in times like these. I think one of the benefits we have as a Foundation is that open source has always been about collaboration at scale and in a distributed way. So, we’re hoping we can take all those lessons, marry open-source practices to continuous delivery practices, and make it easier for everybody to adopt them. It shouldn’t be something elite that only a few companies can do. It should be something that’s possible and achievable for every company and every organization out there.