Developers: Prepare Your Drivers for Real-Time Linux


Although Real-Time Linux (RT Linux) has been a staple at Embedded Linux Conferences for years (here’s a story on the RT presentations in 2007), many developers have viewed the technology as peripheral to their own embedded projects. Yet as RT, enabled via the PREEMPT_RT patch, prepares to be fully integrated into the mainline kernel, a wider circle of developers should pay attention. In particular, Linux device driver authors will need to ensure that their drivers play nicely with RT-enabled kernels.

Julia Cartwright speaking at Embedded Linux Conference.

At the recent Embedded Linux Conference in Portland, National Instruments software engineer Julia Cartwright, an acting maintainer of a stable release of the RT patch, gave a well-attended presentation called “What Every Driver Developer Should Know about RT.” Cartwright started with an overview of RT, which helps provide guarantees on user task execution for embedded applications that require a high degree of determinism. She then described the classes of driver-related problems that can have a detrimental impact on RT, as well as potential resolutions.

One of the challenges of any real-time operating system is that most target applications mix two types of tasks: latency-sensitive tasks with real-time requirements, and non-time-critical tasks such as disk monitoring, throughput-oriented work, or I/O. “The two classes of tasks need to run together and maybe communicate with one another with mixed criticality,” explained Cartwright. “You must resolve two different degrees of time sensitivity.”

One solution is to split the tasks across two different hardware platforms. “You could have an Arm Cortex-R, FPGA, or PLD-based board for the super time-critical stuff, and then a Cortex-A series board running Linux,” said Cartwright. “This offers the best isolation, but it raises the per-unit costs, and it’s hard to communicate between the domains.”

Another approach is the virtualization route taken by Xenomai, the other major Linux-based real-time solution aside from PREEMPT_RT. “Xenomai follows a co-kernel approach, using a hypervisor or AMP solution to separate an RTOS from Linux,” said Cartwright. “However, there’s still a tradeoff in limited communications between the two systems.”

With PREEMPT_RT, meanwhile, real-time and non-real-time tasks share a kernel, scheduler, device stack, and subsystems, and standard Linux IPC mechanisms can be used to communicate between the two. “There’s not as much isolation, but much greater communication and usability,” said Cartwright.

One challenge with RT is that “because the drivers are shared between the real-time and non-real-time systems, they can misbehave,” said Cartwright. “A lot of the bugs we’re finding in RT come from device drivers.”

In RT, the time between when an event occurs (such as a timer firing an interrupt or an I/O device requesting service) and when the dependent real-time task executes is called the delta. “RT systems try to characterize and bound this in some meaningful way,” explained Cartwright. “A cyclic test takes a time stamp, then sleeps for a set time such as 10ms, and then takes a time stamp when the thread wakes up. The difference between the two time stamps, minus the requested sleep time, is the delta.”
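
As a rough illustration, here is a minimal userspace sketch of that measurement loop, in the spirit of the well-known cyclictest tool; the names and the 10 ms interval are illustrative, not Cartwright’s code:

```c
/* Minimal cyclictest-style latency sketch.
 * Build: gcc -O2 -o cyclic cyclic.c  (run as root for SCHED_FIFO) */
#include <stdio.h>
#include <time.h>
#include <sched.h>

#define INTERVAL_NS 10000000L /* sleep 10 ms per iteration */

static long ns_diff(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) * 1000000000L + (b.tv_nsec - a.tv_nsec);
}

int main(void)
{
    struct sched_param sp = { .sched_priority = 80 };
    struct timespec next, now;
    long delta, max_delta = 0;

    /* Run as a high-priority real-time task so we measure the
     * kernel's wakeup latency, not competition from other tasks. */
    if (sched_setscheduler(0, SCHED_FIFO, &sp))
        perror("sched_setscheduler (need root?)");

    clock_gettime(CLOCK_MONOTONIC, &next);
    for (int i = 0; i < 1000; i++) {
        next.tv_nsec += INTERVAL_NS;
        while (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec++;
        }
        /* Sleep until an absolute deadline, then see how late we woke. */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        clock_gettime(CLOCK_MONOTONIC, &now);
        delta = ns_diff(next, now); /* the delta, in nanoseconds */
        if (delta > max_delta)
            max_delta = delta;
    }
    printf("max wakeup latency: %ld ns\n", max_delta);
    return 0;
}
```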

This delta can be broken down into two phases. The first is irq_dispatch latency: the time from the hardware interrupt firing until the scheduler is told that the waiting thread needs to run. The second is scheduling latency: the time from when the scheduler learns that a high-priority task needs to run until the CPU actually begins executing that task.

irq_dispatch latency

When using mainline Linux without RT extensions, irq_dispatch latency can be considerable. “Say you have one thread executing in user mode, and an external interrupt such as a network event fires that you don’t care about in your real-time app,” said Cartwright. “The CPU is going to vector off into hard interrupt context and start executing the handler associated with that network device. If, during that interrupt handler’s duration, a high-priority event fires, it can’t be scheduled on the CPU until the low-priority interrupt is done executing.”

That delay between the high-priority event firing and its handler being dispatched “is a direct contributor to irq_dispatch latency,” said Cartwright. “Without the RT patch it would be a mess to define bounds on this, because the bound would be that of the longest-running interrupt handler in the system.”

RT avoids this latency by forcing interrupt handlers into threads. “There’s very little code that we execute in hard interrupt context, just little shims that wake up the threads that are going to execute your handler,” said Cartwright. “You may have a low-priority task running, and perhaps also a medium-priority task whose irq fires, but only a small portion of time is spent waking up the handler threads.”

RT also provides other guarantees. For example, because interrupt handlers now run in threads, they can be preempted. “If a high-priority, real-time critical interrupt fires, that thread can be scheduled immediately, which reduces the irq_dispatch latency,” said Cartwright.

Cartwright said that most drivers require no modification to participate in forced irq threading. In fact, “threaded irqs actually exist in mainline now. You can boot a kernel, pass the threadirqs parameter, and it will thread all your interrupts. RT adds a forced enablement.”
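
As a sketch of what forced threading formalizes, a driver can also opt into threaded handling explicitly through the mainline request_threaded_irq()/devm_request_threaded_irq() API: a tiny hard-irq shim plus a schedulable thread handler. The foo_* names and helpers below are invented for illustration:

```c
#include <linux/interrupt.h>

static irqreturn_t foo_hardirq(int irq, void *dev_id)
{
    /* The hard-irq "shim": quick check/ack only, then defer
     * the real work to the handler thread. */
    if (!foo_irq_is_ours(dev_id))   /* hypothetical helper */
        return IRQ_NONE;
    return IRQ_WAKE_THREAD;         /* wake foo_irq_thread() */
}

static irqreturn_t foo_irq_thread(int irq, void *dev_id)
{
    /* Runs in a kernel thread: may sleep, is preemptible, and
     * can be prioritized like any other real-time task. */
    foo_process_events(dev_id);     /* hypothetical helper */
    return IRQ_HANDLED;
}

static int foo_probe_irq(struct device *dev, int irq, void *priv)
{
    return devm_request_threaded_irq(dev, irq,
                                     foo_hardirq, foo_irq_thread,
                                     IRQF_ONESHOT, "foo", priv);
}
```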

Yet there are a few cases where this causes problems. “If your driver is invoked in the process of delivering an interrupt dispatch, it can’t be threaded,” said Cartwright. “This can happen with irqchip implementations, which should not be threaded. Another issue may arise if the driver is invoked by the scheduler.”
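
For such cases, mainline provides the IRQF_NO_THREAD flag, which keeps a handler in hard-irq context even under forced threading. A minimal sketch, with hypothetical bar_* names:

```c
#include <linux/interrupt.h>

/* A handler on the interrupt-dispatch path itself (e.g. a chained
 * irqchip demux) must stay in hard-irq context even when threading
 * is forced; IRQF_NO_THREAD opts it out. */
static irqreturn_t bar_demux_handler(int irq, void *dev_id)
{
    /* Keep this minimal and bounded: decode and chain the irq. */
    return IRQ_HANDLED;
}

static int bar_setup_irq(int irq, void *priv)
{
    return request_irq(irq, bar_demux_handler,
                       IRQF_NO_THREAD, "bar-demux", priv);
}
```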

Other glitches can emerge when “explicitly disabling interrupts using local_irq_disable or local_irq_save.” Cartwright recommended against using such calls in drivers. As an alternative to local_irq_disable, she suggested using spinlocks or local locks, a new mechanism that will soon be proposed for mainline. In a separate ELC 2018 presentation, “Maintaining a Real Time Stable Kernel,” Linux kernel developer Steven Rostedt went into greater depth on local locks.
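
A minimal sketch of the substitution Cartwright suggests, using an ordinary spinlock in place of local_irq_save(); the baz_* names are hypothetical:

```c
#include <linux/spinlock.h>

struct baz_state {            /* hypothetical per-device state */
    unsigned long count;
};

static DEFINE_SPINLOCK(baz_lock);

/* Instead of local_irq_save()/local_irq_restore() around the
 * critical section, take a lock that names what it protects. On
 * mainline, spin_lock_irqsave() disables interrupts as before; on
 * RT, the lock becomes preemptible and priority-inheriting. */
static void baz_update(struct baz_state *st)
{
    unsigned long flags;

    spin_lock_irqsave(&baz_lock, flags);
    st->count++;
    spin_unlock_irqrestore(&baz_lock, flags);
}
```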

Cartwright finished her discussion of irq_dispatch latency by covering some rare, hardware-related MMIO issues. In one case, accidentally pulling an Ethernet cable during testing caused buffering in the interconnect, which scrambled the interrupts and was a pain to debug. “To solve it we had to follow each write with a readback, which prevents write stacking,” she said. The ultimate solution? “Take the drugs away from the hardware people.”
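
A sketch of that readback workaround, with hypothetical qux_* register names: following each posted MMIO write with a read from the same device flushes the write out of the interconnect’s buffers.

```c
#include <linux/io.h>

#define QUX_IRQ_REG  0x10   /* hypothetical register offset */
#define QUX_IRQ_ACK  0x1    /* hypothetical ack bit */

/* Reading back after a posted write keeps writes from stacking
 * up in the interconnect's buffers. */
static void qux_ack_irq(void __iomem *regs)
{
    writel(QUX_IRQ_ACK, regs + QUX_IRQ_REG);
    (void)readl(regs + QUX_IRQ_REG);   /* read back to flush */
}
```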

Scheduling latency

There appear to be fewer driver problems related to the second latency phase, scheduling: the time between the scheduler learning that a real-time thread must run and that thread actually executing. One example stems from the use of preempt_disable, which prevents a higher-priority thread from being scheduled upon return from interrupt because preemption has been disabled.

“The only reason for a device driver to use preempt_disable is if you need to synchronize with the act of scheduling itself, which can happen with cpufreq and cpuidle,” said Cartwright. “Use local locks instead.”
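
Local locks did later land in mainline (as local_lock_t, in v5.8). A minimal sketch of the pattern for protecting per-CPU state without a bare preempt_disable(); the quux_* names are hypothetical:

```c
#include <linux/local_lock.h>
#include <linux/percpu.h>

struct quux_pcpu {                 /* hypothetical per-CPU state */
    local_lock_t lock;
    unsigned long events;
};

static DEFINE_PER_CPU(struct quux_pcpu, quux_pcpu) = {
    .lock = INIT_LOCAL_LOCK(lock),
};

/* On mainline a local lock compiles down to disabling preemption;
 * on RT it becomes a per-CPU lock the scheduler can reason about,
 * so the section stays preemptible by higher-priority tasks. */
static void quux_count_event(void)
{
    local_lock(&quux_pcpu.lock);
    this_cpu_inc(quux_pcpu.events);
    local_unlock(&quux_pcpu.lock);
}
```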

On mainline Linux, spinlock-protected critical sections are implicitly executed with preemption disabled, which can similarly lead to latency problems. “With RT we solve this by making spinlock critical sections preemptible,” said Cartwright. “We turn the locks into priority-inheritance-aware mutexes and disable migration. When a spinlock is held by a thread, that thread can be preempted by a higher-priority thread, which brings in the outer bound.”

Most drivers require no changes for their spinlock critical sections to become preemptible, said Cartwright. However, if a driver is involved in interrupt dispatch or scheduling, it must use raw_spin_lock(), and all of its critical sections must be kept minimal and bounded.
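
A minimal sketch of the raw-spinlock variant, with hypothetical corge_* names; raw_spin_lock sections stay non-preemptible even on RT, so the work inside must be short and bounded:

```c
#include <linux/spinlock.h>

struct corge {                 /* hypothetical device state */
    unsigned int pending;
    unsigned int hw_status;
};

static DEFINE_RAW_SPINLOCK(corge_lock);

/* Unlike spinlock_t, raw_spinlock_t is never converted to a
 * preemptible mutex on RT, so it is safe on the dispatch and
 * scheduling paths, at the cost of adding directly to latency. */
static void corge_dispatch_step(struct corge *c)
{
    unsigned long flags;

    raw_spin_lock_irqsave(&corge_lock, flags);
    c->pending |= c->hw_status;    /* short, bounded work only */
    raw_spin_unlock_irqrestore(&corge_lock, flags);
}
```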


Join us at Open Source Summit + Embedded Linux Conference Europe in Edinburgh, UK on October 22-24, 2018, for 100+ sessions on Linux, Cloud, Containers, AI, Community, and more.