
Inside Real-Time Linux

Real-time Linux (RTL), a form of mainline Linux enabled with PREEMPT_RT, has come a long way in the past decade. Some 80 percent of the deterministic PREEMPT_RT patch is now available in the mainline kernel itself. Yet, backers of the strongest alternative to the single-kernel RTL on Linux — the dual-kernel Xenomai — continue to claim a vast superiority in reduced latency. In an Embedded Linux Conference Europe presentation in October, Jan Altenberg rebutted these claims while offering an overview of the real-time topic.

Altenberg, of German embedded development firm Linutronix, does not deny that dual-kernel approaches such as Xenomai and RTAI offer lower latency. However, he reveals new Linutronix benchmarks that purport to show that the differences are not as great as claimed, especially in real-world scenarios. Less controversially, he argues that RTL is much easier to develop for and maintain.

Before we delve into the eternal Xenomai vs. RTL debate, note that in October 2015, the Open Source Automation Development Lab (OSADL) handed control of the RTL project over to The Linux Foundation, which hosts Linux.com. In addition, Linutronix is a key contributor to the RTL project and hosts its x86 maintainer.

The advance of RTL is one of several reasons Linux has stolen market share from real-time operating systems (RTOSes) over the past decade. RTOSes appear more frequently on microcontrollers than applications processors, and it’s easier to do real-time on single-purpose devices that lack advanced userland OSes such as Linux.

Altenberg began his presentation by clearing up some common misconceptions about real-time (or realtime) deterministic kernel schemes. “Real-time is not about fast execution or performance,” Altenberg told his ELCE audience. “It’s basically about determinism and timing guarantees. Real-time gives you a guarantee that something will execute within a given time frame. You don’t want to be as fast as possible, but as fast as specified.”

Developers tend to use real-time when a missed deadline leads to a serious error condition, especially when it could lead to people getting hurt. That’s why real-time is still largely driven by the factory automation industry and is increasingly showing up in cars, trains, and planes. It’s not always a life-and-death situation, however — financial services companies use RTL for high-frequency trading.

Requirements for real-time include deterministic timing behavior, preemption, priority inheritance, and priority ceiling, said Altenberg. “The most important requirement is that a high-priority task always needs to be able to preempt a low-priority task.”

Altenberg strongly recommended against using the term “soft real-time” to describe lightweight real-time solutions. “You can be deterministic or not, but there’s nothing in between.”

Dual-kernel Real-time

Dual-kernel schemes like Xenomai and RTAI deploy a microkernel running in parallel with a separate Linux kernel, while single kernel schemes like RTL make Linux itself capable of real-time. “With dual-kernel, Linux can get some runtime when priority real-time applications aren’t running on the microkernel,” said Altenberg. “The problem is that someone needs to maintain the microkernel and support it on new hardware. This is a huge effort, and the development communities are not very big. Also, because Linux is not running directly on the hardware, you need a hardware abstraction layer (HAL). With two things to maintain, you’re usually a step behind mainline Linux development.”

The challenge with RTL, and the reason it has taken so long to emerge, is that “to make Linux real-time you have to basically touch every file in the kernel,” said Altenberg. Yet, most of that work is already done and baked into mainline, and developers don’t need to maintain a microkernel or HAL.

Altenberg went on to explain the differences between RTAI and Xenomai. “With RTAI, you write a kernel module that is scheduled by a microkernel. It’s like kernel development — really hard to get into it and hard to debug.”

RTAI development can be further complicated because industrial customers often want to include closed source code along with GPL kernel code. “You have to decide which parts you can put into userland and which you put into the kernel with real-time approaches,” said Altenberg.

RTAI also supports fewer hardware platforms than RTL, especially beyond x86. The dual-kernel Xenomai, which has eclipsed RTAI as the dominant dual-kernel approach, has wider OS support than RTAI. More importantly, it offers “a proper solution for doing real-time in userspace,” said Altenberg. “To do this, they implemented the concept of skins — an emulation layer for the APIs of different RTOSes, such as POSIX. This lets you reuse a subset of existing code from some RTOSes.”

With Xenomai, however, you still need to maintain a separate microkernel and HAL. Limited development tools are another problem. “As with RTAI, you can’t use the standard C library,” said Altenberg. “You need special tools and libraries. Even for POSIX, you must link to the POSIX skin, which is much more complicated.” With either platform, he added, it’s hard to scale the microkernels beyond 8 to 16 CPUs to the big server clusters used in financial services.

Sleeping Spinlocks

The dominant single-kernel solution is RTL, based on PREEMPT_RT, which was primarily developed by Thomas Gleixner and Ingo Molnár more than a decade ago. PREEMPT_RT reworks the kernel’s “spinlock” locking primitives to maximize the preemptible sections inside the Linux kernel. (PREEMPT_RT was originally called the Sleeping Spinlocks Patch.)

Instead of running interrupt handlers in hard interrupt context, PREEMPT_RT runs them in kernel threads. “When an interrupt arrives, you don’t run the interrupt handler code,” said Altenberg. “You just wake up the corresponding kernel thread, which runs the handler. This has two advantages: The kernel thread becomes interruptible, and it shows up in the process list with a PID. So you can put a low priority on non-important interrupts and a higher priority on important userland tasks.”
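Because these handler threads appear in the process list, their priorities can be inspected and changed with ordinary tools. A minimal sketch (the PID below is hypothetical, and `chrt` needs root):

```shell
# On a kernel with threaded interrupt handlers, each handler runs as a kernel
# thread named "irq/<number>-<device>" and carries its own real-time priority
# (the rtprio column):
ps -eLo pid,rtprio,comm | grep 'irq/' || echo "no threaded IRQ handlers visible"

# Lower the priority of an unimportant interrupt thread with chrt
# (requires root; PID 1234 is a hypothetical example):
#   chrt --fifo -p 50 1234
```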

Because about 80 percent of PREEMPT_RT is already in mainline, any Linux developer can take advantage of PREEMPT_RT-originated kernel components such as timers, interrupt handlers, tracing infrastructure, and priority inheritance. “When they made Linux real-time, everything became preemptible, so we found a lot of race conditions and locking problems,” said Altenberg. “We fixed these and pushed them back into mainline to improve the stability of Linux in general.”

Because RTL is primarily mainline Linux, “PREEMPT_RT is widely accepted and has a huge community,” said Altenberg. “If you write a real-time application, you don’t need to know much about PREEMPT_RT. You don’t need any special libraries or APIs: just a standard C library, a Linux driver, and a POSIX app.”

You still need to apply a patch to use PREEMPT_RT; the patch is updated for every other Linux kernel release. However, within two years, the remaining 20 percent of PREEMPT_RT should make it into mainline, so you “won’t need a patch.”
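Applying the patch follows the usual kernel workflow; a sketch, with version numbers that are only placeholders:

```shell
# Fetch a PREEMPT_RT patch matching your kernel version from kernel.org
# (the 4.9-rt1 version here is only a placeholder):
#   wget https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/patch-4.9-rt1.patch.xz
# Apply it on top of the matching mainline source tree:
#   cd linux-4.9 && xzcat ../patch-4.9-rt1.patch.xz | patch -p1
# Then enable the fully preemptible kernel model (CONFIG_PREEMPT_RT_FULL at
# the time of this article) in menuconfig and rebuild.
echo "patches are listed at kernel.org/pub/linux/kernel/projects/rt/"
```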

Finally, Altenberg revealed the results of his Xenomai vs. RTL latency tests. “There are a lot of papers that claim that Xenomai and RTAI are way faster on latency than PREEMPT_RT,” said Altenberg. “But I figured out that most of the time PREEMPT_RT was poorly configured. So we brought in both a Xenomai expert and a PREEMPT_RT expert, and let them configure their own platforms.”

While Xenomai performed better on most tests, and offered far less jitter, the differences were not as great as the 300 to 400 percent latency superiority claimed by some Xenomai boosters, said Altenberg. When tests were performed on userspace tasks — which Altenberg says is the most real-world, and therefore the most important, test — the worst-case reaction was about 90 to 95 microseconds for both Xenomai and RTL/PREEMPT_RT, he claimed.
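Latency figures like these are typically gathered with cyclictest from the rt-tests package; a minimal sketch of such a measurement (the priority and loop count are illustrative, not Altenberg’s exact settings):

```shell
# cyclictest starts a high-priority measurement thread that sleeps for a fixed
# interval and records how late each wakeup is, reporting min/avg/max latency
# in microseconds.  -p = SCHED_FIFO priority, -m = lock memory (mlockall),
# -n = use clock_nanosleep, -l = number of measurement loops.
if command -v cyclictest >/dev/null 2>&1; then
    sudo cyclictest -p 99 -m -n -l 100000
else
    echo "cyclictest not installed (package: rt-tests)"
fi
```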

When they isolated a single CPU in the dual Cortex-A9 system for handling the interrupt in question, which Altenberg says is fairly common, PREEMPT_RT performed slightly better, coming in around 80 microseconds. (For more details, check out the video about 33 minutes in.)
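CPU isolation of this kind is usually done with the `isolcpus` kernel parameter plus explicit IRQ and task affinity; a sketch, with a hypothetical IRQ number and CPU:

```shell
# Check whether the running kernel was booted with isolated CPUs:
grep -o 'isolcpus=[^ ]*' /proc/cmdline || echo "no isolcpus= on the kernel command line"

# Steer a specific interrupt (hypothetical IRQ 18) to isolated CPU 1.
# The affinity value is a CPU bitmask, so 2 = CPU 1 (requires root):
#   echo 2 > /proc/irq/18/smp_affinity
# Pin the real-time task to the same CPU with a SCHED_FIFO priority:
#   taskset -c 1 chrt --fifo 80 ./rt_app
```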

Altenberg acknowledges that his 12-hour test is the bare minimum, compared to OSADL’s two- to three-year tests, and that it is “not a mathematical proof.” In any case, he suggests that RTL deserves a handicap considering its easier development process. “In my opinion, it’s not fair to compare a full-featured Linux system with a microkernel,” he concluded.  

For more details, watch the complete presentation below:

Embedded Linux Conference + OpenIoT Summit North America will be held on February 21 – 23, 2017 in Portland, Oregon. Check out over 130 sessions on the Linux kernel, embedded development & systems, and the latest on the open Internet of Things.



Remix OS: Is This the Droid You Were Looking For?

Ever wanted to try Android on your PC, but found no really usable projects? Now you can. Remix OS is an Android-based operating system designed to offer a full-fledged, desktop-PC-like experience. The developers have done a lot of work to implement many desktop-centric features, such as multi-window multitasking. It offers a very familiar interface inspired by Windows, so the learning curve is not that steep. If you have used Android before, you will find yourself at home.

Remix OS is being developed by Jide Technologies, a company founded by three ex-Googlers, “with a mission to unlock the potential of Android in order to accelerate a new age of computing,” reads the “about us” page.

How to install and use Remix OS

I have good and bad news for you. The good news is that if you happen to have a Windows PC, or you dual boot your Linux system with Windows, you can easily install Remix OS on your PC alongside Windows and dual boot between Windows and Remix OS. The bad news is that the official installation tool only supports Windows, so Linux users can’t install it on their hard drive, as far as I know, and will have to settle for Remix OS’s live mode.

Install Remix OS on hard drive

There are two ways of installing Remix OS on your system: on a hard drive or on a USB drive. For some strange reason, hard drive installation can only be done on a machine with Windows on it. It reminds me of Ubuntu’s Wubi, which let you install Ubuntu inside Windows. Download the official Remix OS for PC package and unzip it. There are only two files of interest: the Remix OS ISO image and the .exe installation tool. Run the installation tool and select Remix OS to be installed on your C drive.

Don’t worry; it will not format the drive, it will simply install Remix OS alongside Windows. Once the installation is finished, reboot your system and choose Remix OS or Windows from the boot menu. If your system uses secure boot, disable it in the BIOS settings.

Install Remix OS on USB Flash Drive

If you want to install Remix OS on a system that doesn’t have Windows installed, there is, strangely, no way to install it on the hard drive. Your only option is to install it on a USB drive and run it from there. Remix OS offers two modes when run from a USB drive: Resident mode and Live OS mode.

Resident mode basically installs it on the USB drive, and all of your installed apps, files, data, and configurations are preserved on the drive. Live mode wipes everything clean after the session; nothing is saved on the drive. Once again, while you can create a bootable USB drive of Remix OS from a Linux machine, Resident mode doesn’t work from it; it gets stuck at the splash screen. However, live mode works just fine.

If you want to use Resident mode, you will have to use the Windows tool. Plug in your USB 3.0 drive (it must have 8GB or more of capacity, and it must be USB 3.0; the Remix OS site says slower USB drives won’t boot in Resident mode). Then, open the Windows installation tool, choose ‘USB’ from the target-device drop-down menu instead of HDD, browse to the ISO image, and click OK.

Once the image is written to the drive, plug it into the PC where you want to use it. Make sure to turn off ‘secure boot’ and enable ‘legacy mode’ in the BIOS settings. The boot screen will show three options for Remix OS: Resident mode, Live mode, and verbose. Resident mode will use the USB drive as persistent storage and save installed apps, data, and settings to it. Live mode, as the name implies, will not save files to the drive. I recommend Resident mode. The first boot will take some time as it prepares the USB drive for Remix OS.

Create Remix OS USB drive from Linux and macOS

If you are running Linux (which should be the case if you are reading this story), then you can create a bootable USB drive for Remix OS using the ‘dd’ command. Download the Remix OS zip file from the link above and unzip the downloaded file:

unzip path_of_downloaded_zip

Now plug in the USB drive with more than 8GB capacity and find the block device name of the drive:

lsblk

Note down the name of the USB device. (To find the name, unplug the device, run the ‘lsblk’ command, then plug the device back in and run it again; the new entry is the USB drive.)

Now write the image to the device:

sudo dd if=/path-of-remixos-iso of=/dev/sdX bs=1M

Example:

sudo dd if=/home/swapnil/Downloads/release_Remix_OS_for_PC_Android_M_64bit_B2016112101/Remix_OS_for_PC_Android_M_64bit_B2016112101.iso of=/dev/disk3 bs=1M

Once written, plug the USB drive into the target PC and boot the system. Choose the ‘Live’ session from the boot menu. I have not been able to run Resident mode from a drive written using the ‘dd’ command; it worked only on the USB drive flashed with the official Windows tool.

The flip side of using live mode is that no changes, installed applications, or configurations are saved between sessions. You start from scratch every time you boot. So, if you do want to use Remix OS on your PC, the Windows tool is the only option.

Getting started with Remix OS

Once you boot into Remix OS, there are a few steps before you can use the OS: choose your language, accept the user agreement, configure wireless, and then select whether you want to activate Google services, which I recommend if you plan to install applications from the Google Play Store.

Once you are booted into the brand new Remix OS, you may want to click on the Play Activator icon on the desktop to make sure that Google Play services are activated. Then open the Google Play Store app, log into your Gmail account, and start installing the apps you need. You may see some third-party app stores there; I strongly discourage you from logging into them or installing any apps from outside the Google Play Store.

I tested it on my brand new Dell XPS 13 Kaby Lake, and WiFi, Bluetooth, the touch screen, and audio all worked out of the box. The only problem was that everything looked tiny on the HiDPI display. To fix that, I went to Settings > Experimental and set the zoom level to 2. It restarted the session, and everything looked great.

Who would want Remix OS?

If you love Android (and who doesn’t love Android) and want to use it as a full-fledged OS on your desktop, then Remix OS is for you. You will get access to millions of Android apps, along with Microsoft Office, Adobe Photoshop, Lightroom, and many such applications.

What I wish were better

Despite being an Android-based operating system, Remix OS is not a fully open source project. It’s essentially a proprietary product built on Android open source code that is already available on GitHub. I wish the developers would follow the trend and open source the project. Another gripe I have with Remix OS is that its installation tool does not support Linux. I don’t know about Windows users, but I am quite certain there are a lot of Linux users like me who would want to dual boot with Remix OS.

In the end, it’s a great project that holds great potential. If you have not tried it yet, please do!

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

MapR Platform Offers Persistent Data Access for Containerized Applications

This is the container age. The growing use of services like Docker is transforming the way that software is being handled within enterprises. However, this rise in container utilization does present problems for enterprise CIOs when it comes to rolling out applications in production.

One of the biggest issues is scaling applications to meet business demands. While, in theory, containers are able to handle enterprise applications, in practice they are often hampered by day-to-day disruptions: network failures, server breakdowns, or even scheduled maintenance. Consequently, organizations have tended to play it safe and use containers for stateless web applications rather than try to overcome these storage issues; indeed, some analysts have warned companies to be wary of using stateful applications in containers.

Read more at The New Stack

The Most Popular JavaScript Front-End Tools

Choosing a development tool based on its popularity isn’t a bad idea. Popular tools are usually more stable, and they often have more resources and community support than less popular tools. Developer satisfaction is another key indicator of a good tool, and for the JavaScript ecosystem, I’m going to show you some significant research on both of these criteria.

The list that follows contains all of the main tooling categories for a modern JavaScript developer. It includes the most popular tools for each category according to developer popularity and user satisfaction.

Read more at TechBeacon

Singularity – Containers for Science, Reproducibility, and HPC

In this video from the 2017 HPC Advisory Council Stanford Conference, Greg Kurtzer from LBNL presents: Singularity: Containers for Science, reproducibility, and HPC.

“Explore how Singularity liberates non-privileged users and host resources (such as interconnects, resource managers, file systems, accelerators …), allowing users to take full control to set up and run in their native environments. This talk explores how Singularity combines software packaging models with minimalistic containers to create very lightweight application bundles.”

Read more at insideHPC

GlusterFS Storage Pools

Software-defined storage, which until recently was the preserve of large storage solution vendors, can be implemented today with open source and free software. As a bonus, you can look forward to additional features that are missing in hardware-based solutions. GlusterFS lets you create a scalable, virtualized storage pool made up of regular storage systems grouped into a network RAID, with different volume types that define how the data is distributed across the individual storage systems.

Regardless of which volume type you choose, GlusterFS creates a common storage array from the individual storage resources and provides it to clients in a single namespace…
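As a concrete sketch, a two-way replicated volume can be created and mounted roughly like this with the gluster CLI (the hostnames and brick paths are hypothetical):

```shell
# On one of the storage nodes, group bricks from two servers into a
# replicated volume (effectively a network RAID-1 of the two bricks):
#   gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1
#   gluster volume start myvol
# On a client, mount the whole pool as a single namespace:
#   mount -t glusterfs server1:/myvol /mnt/gluster
command -v gluster >/dev/null 2>&1 || echo "gluster CLI not installed"
```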

Read more at ADMIN Magazine

TDD and Code Quality

Every time I see an article claiming that TDD improves code quality, a part of me cries. It’s not that I don’t think it can be true. It’s because it’s not necessarily true, and those articles rarely bother to provide a satisfying explanation. Here’s my try.

TDD and Better Design

There’s a lot of talk about TDD making your codebase more modular and less coupled, and making the designs you produce better in general. I think this might be true, mainly because of two factors: the act of design and the characteristics of testable code.

Read more at DZone

Blocking of International Spam Botnets with a Postfix Plugin

This article contains an analysis of, and a solution for, blocking international SPAM botnets on Postfix mail servers using a postfwd plugin that analyzes SASL connects by country.

One of the most important and hardest tasks for every company that provides mail services is staying out of the mail blacklists. If a mail domain appears in one of the mail domain blacklists, other mail servers will stop accepting and relaying its e-mails. This practically bans the domain from the majority of mail providers and prevents the provider’s customers from sending e-mails. There is only one thing that a mail provider can do afterwards: ask the blacklist providers for removal from the list, or change the IP addresses and domain names of its mail servers.

Read more at HowtoForge

The 7 Elements of an Open Source Management Program: Teams and Tools

The following is adapted from Open Source Compliance in the Enterprise by Ibrahim Haddad, PhD.

A successful open source management program has seven essential elements that provide a structure around all aspects of open source software. In the previous article, we gave an overview of the strategy and process behind open source management. This time we’ll discuss two more essential elements: staffing on the open source compliance team and the tools they use to automate and audit open source code.

Compliance Teams

The open source compliance team is a cross-disciplinary group consisting of various individuals tasked with the mission of ensuring open source compliance. There are actually a pair of teams involved in achieving compliance: the core team and the extended team.

  • The core team, often called the Open Source Review Board (OSRB), consists of representatives from engineering and product teams, one or more legal counsel, and the Compliance Officer.

  • The extended team consists of various individuals across multiple departments that contribute on an ongoing basis to the compliance efforts: Documentation, Supply Chain, Corporate Development, IT, Localization and the Open Source Executive Committee (OSEC). However, unlike the core team, members of the extended team are only working on compliance on a part-time basis, based on tasks they receive from the OSRB.

Various individuals and teams within an organization help ensure open source compliance.

Tools

Open source compliance teams use several tools to automate and facilitate the auditing of source code and the discovery of open source code and its licenses. Such tools include:

  • A compliance project management tool to manage the compliance project and track tasks and resources.

  • A software inventory tool to keep track of every single software component, its version, the products that use it, and other related information.

  • A source code and license identification tool to help identify the origin and license of the source code included in the build system.

  • A linkage analysis tool to identify the interactions of any given C/C++ software component with other software components used in the product. This tool will allow you to discover linkages between source code packages that do not conform to company policy. The goal is to determine whether any open source obligations extend to proprietary or third-party software components. If a linkage issue is found, a bug ticket is assigned to Engineering with a description of the issue and a proposal on how to solve it.

  • A source code peer review tool to review the changes introduced to the original source code before disclosure as part of meeting license obligations.

  • A bill of materials (BOM) difference tool to identify the changes introduced to the BOM of any given product between two different builds. This tool is very helpful in guiding incremental compliance efforts.

Next time we’ll cover another key element of any open source management program: education. Employees must possess a good understanding of policies governing the use of open source software. Open source compliance training — formal or informal — raises awareness of open source policies and strategies and builds a common understanding within the organization.


Read the previous article in this series:

The 7 Elements of an Open Source Management Program: Strategy and Process

Read the next articles in this series:

How and Why to do Open Source Compliance Training at Your Company

Basic Rules to Streamline Open Source Compliance For Software Development

Keynote: OpenTracing and Containers: Depth, Breadth, and the Future of Tracing – Ben Sigelman

Ben Sigelman shows how OpenTracing can deliver zero-touch, black-box instrumentation of distributed applications via orchestration systems like Kubernetes, and why that could change the way we all reason about distributed computation.