LFN Community publishes white paper highlighting cybersecurity efforts
Telecom, Cloud, and Enterprise align with the 5G Super Blueprint across ONAP, Anuket, EMCO, Magma, O-RAN SC, and more projects as enterprise eBPF project L3AF is inducted into LF Networking
ATOS, GenXComm, Keysight Technologies, and Telaverge Communications join LFN as Silver members
SAN FRANCISCO, April 12, 2022 – LF Networking, which facilitates collaboration and operational excellence across open source networking projects, today announced continued momentum focused on re-aggregation, with updates to security, 5G blueprints, and the addition of four new Silver members: ATOS, GenXComm, Keysight Technologies, and Telaverge Communications.
“As the LF Networking community rolls into its fourth year as an umbrella project organization, we are pleased to see robust efforts focused on securing 5G across multiple project & foundations as we welcome even more industry-leading organizations to the project,” said Arpit Joshipura, general manager, Networking, Edge and IoT, the Linux Foundation. “It’s the robust and diverse set of member companies that enable LFN’s collaborative innovation into the future of 5G and networking.”
5G Super Blueprint Ecosystem Expands
The community is making progress with the 5G Super Blueprint, a community-driven integration and illustration of multiple open source initiatives, projects, and vendors coming together to show use cases that demonstrate implementation architectures for end users. The 5G Super Blueprint is now integrated across even more projects, including Magma (1.6), EMCO, and Anuket, building open source components applicable to a variety of industry use cases. Preliminary scoping for future integrations with the O-RAN Software Community has begun, setting the stage for end-to-end open source interoperability from the core through the RAN, as well as future compliance activities.
Meanwhile, the L3AF project has been inducted into the LF Networking umbrella, as membership expands further across the ecosystem with new Silver members.
L3AF is an open source project, developed by Walmart, that houses cutting-edge solutions in the realm of eBPF (a technology that allows sandboxed programs to run inside an operating system kernel). It provides complete life-cycle management of eBPF programs via an advanced control plane written in Golang. The control plane orchestrates and composes independent eBPF programs across the network infrastructure to solve crucial business problems. L3AF’s eBPF programs include load balancing, rate limiting, traffic mirroring, flow export, packet manipulation, performance tuning, and many more. L3AF joined the Linux Foundation in fall of 2021 and has now been inducted into the LF Networking project umbrella.
New LFN Silver members include:
ATOS is a multi-vendor, end-to-end system integrator in both the IT and telecom network space, specialized in multi-cloud solutions, edge and MEC, 5G-enabled applications with an AI/ML focus, cybersecurity, and decarbonization.
GenXComm Inc.’s mission is to deliver limitless computing power, fast connectivity, and on-demand intelligence to every location on Earth.
Keysight Technologies, Inc. is a leading technology company that delivers advanced design and validation solutions to help accelerate innovation to connect and secure the world.
Telaverge Communications is a leader in complete network test automation, orchestration, and digital transformation products (Regal for Containers and Cloud) designed for enterprises, operators, and OEMs. Telaverge’s open source based private LTE and 5G cores are pre-integrated with Regal for zero-touch testing and deployment.
Highlighting its security efforts to help secure open source networking against cybersecurity attacks, the community published a white paper titled “Securing Open Source 5G from End to End” that is now available for download.
“A unique advantage of developing software in the open is more eyes on the code; when it comes to security, that translates to large groups of experts who can propose improvements and enhancements in a faster, more scalable fashion, and that is true for LFN,” said Amy Zwarico, vice chair of the ONAP Security subcommittee. “Community collaboration via security working groups and sub-committees to address secure software development practices, SBOMs, DDoS mitigation, and other threats is just one of the steps LFN is taking to create code that can be trusted to run our networks.”
At a time when the United States White House has issued multiple Executive Orders to address cybersecurity and supply chain attacks, the LFN community continues to take steps to ensure open source networking is secure. The white paper outlines the group’s security strategies, including the formation of security-focused committees and subcommittees; development and adoption of Software Bills of Materials (SBOMs); OpenSSF badging; use of the LFX Platform’s Security Dashboard to enable developers to identify and resolve vulnerabilities quickly and easily; and more. Download the white paper for more information.
Upcoming Events
The LF Networking developer community will host the LFN Developer & Testing Forum this Spring, taking place June 13-16, in Porto, Portugal. Registration for that event is open, with more details to come.
Open Networking & Edge (ONE) Summit North America will take place November 15-16 in Seattle, Wash. The event will be followed by a two-day LFN Developer & Testing Forum (Nov 17-18) in the same venue. The Open Networking & Edge Summit is the industry’s premier open networking and edge computing event, focused on end-to-end solutions powered by open source in the Telco, Cloud, and Enterprise verticals. Attendees will learn how to leverage open source ecosystems and gain new insights for digital transformation. More information will be available soon.
Support from new members
“The mission of Atos is to support our customers throughout a multitude of industry sectors on their edge-to-cloud journey. We help telecom customers leverage cloud synergies between their IT and their network, and introduce new edge computing and 5G MEC services. We are excited about ONAP and other programs of the LFN, as they facilitate exactly these synergies in a growing market.”
“Keysight is pleased to join LF Networking as a silver member and contribute to an ecosystem with the common goal of advancing technology and innovation built on open source software and standards,” said Kalyan Sundhar, vice president of Edge-to-Core Networks at Keysight Technologies. “Keysight leverages open source standards for end-to-end network harmonization produced by the LF Networking community to enable this ecosystem to cost-effectively accelerate protocol and performance design validation.”
About the Linux Foundation
Founded in 2000, the Linux Foundation is supported by more than 2,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. The Linux Foundation’s projects are critical to the world’s infrastructure, including Linux, Kubernetes, Node.js, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users, and solution providers to create sustainable models for open collaboration. For more information, please visit linuxfoundation.org.
The Linux Foundation Events are where the world’s leading technologists meet, collaborate, learn and network in order to advance innovations that support the world’s largest shared technologies.
The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.
Many years ago, when I first began with Linux, installing applications and keeping a system up to date was not an easy feat. In fact, if you wanted to tackle either task you were bound for the command line. For some new users this left their machines outdated or without the applications they needed. Of course, at the time, almost everyone trying their hand at Linux knew they were getting into something that would require some work. That was simply the way it was. Fortunately, times and Linux have changed. Linux is now so much more user friendly, to the point where so much is automatic and point-and-click, that today’s Linux hardly resembles yesterday’s.
But even though Linux has evolved into the user-friendly operating system it is, there are still some systems that are fundamentally different from their Windows counterparts. So it is always best to understand those systems in order to be able to use them properly. Within the confines of this article you will learn how to keep your Linux system up to date. In the process you might also learn how to install an application or two.
There is one thing to understand about updating Linux: not every distribution handles this process in the same fashion. In fact, some distributions are distinctly different, down to the file types they use for package management.
Ubuntu and Debian use .deb
Fedora, SuSE, and Mandriva use .rpm
Slackware uses .tgz archives which contain pre-built binaries
And of course there is also installing from source or pre-compiled .bin or .package files.
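Should you ever need to install one of these package files by hand, each format has its own low-level tool; for illustration (the package file names below are placeholders):
# Ubuntu and Debian: install a local .deb file
sudo dpkg -i some-app_1.0_i386.deb
# Fedora, SuSE, and Mandriva: install a local .rpm file
sudo rpm -ivh some-app-1.0.i386.rpm
# Slackware: install a local .tgz package (as root)
installpkg some-app-1.0.tgz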
As you can see from the list above, there are a number of possible systems (and that list is not even close to being all-inclusive). So to make the task of covering this topic less epic, I will cover the Ubuntu and Fedora systems. I will touch on both the GUI as well as the command line tools for handling system updates.
Ubuntu Linux
Ubuntu Linux has become one of the most popular of all the Linux distributions, and through the process of updating a system, you should be able to tell exactly why this is the case. Ubuntu is very user friendly. Ubuntu uses two different tools for system updates:
apt-get: Command line tool.
Update Manager: GUI tool.
The Update Manager is a nearly 100% automatic tool. With this tool you will not have to routinely check to see if there are updates available. Instead, you will know updates are available because the Update Manager will open on your desktop (see Figure 1) as soon as updates are ready, depending upon their type.
If you want to manually check for updates, you can do this by clicking the Administration sub-menu of the System menu and then selecting the Update Manager entry. When the Update Manager opens click the Check button to see if there are updates available.
Figure 1 shows a listing of updates for an Ubuntu 9.10 installation. As you can see there are both Important Security Updates as well as Recommended Updates. If you want to get information about a particular update you can select the update and then click on the Description of update dropdown.
In order to update the packages follow these steps:
Check the updates you want to install. By default all updates are selected.
Click the Install Updates button.
Enter your user (sudo) password.
Click OK.
The updates will proceed and you can continue on with your work. Some updates may require you either to log out of your desktop and log back in, or to reboot the machine. There is a new tool in development (Ksplice) that allows even a kernel update without requiring a reboot.
Once all of the updates are complete the Update Manager main window will return, reporting that “Your system is up to date.”
Now let’s take a look at the command line tools for updating your system. The Ubuntu package management system is called apt. Apt is a very powerful tool that can completely manage your system’s packages via the command line. Using the command line tool has one drawback: in order to check to see if you have updates, you have to run it manually. Let’s take a look at how to update your system with the help of apt. Follow these steps:
Open up a terminal window.
Issue the command sudo apt-get update to refresh the package lists, followed by sudo apt-get upgrade.
Enter your user’s password.
Look over the list of available updates (see Figure 2) and decide if you want to go through with the entire upgrade.
To accept all updates press the ‘y’ key (no quotes) and hit Enter.
Watch as the update happens.
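Put together, the whole command-line session is just two commands:
# refresh the package lists so apt knows which updates exist
sudo apt-get update
# list the available upgrades and, upon confirmation, apply them
sudo apt-get upgrade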
That’s it. Your system is now up to date. Let’s take a look at how the same process happens on Fedora (Fedora 12 to be exact).
Fedora Linux
Fedora is a direct descendant of Red Hat Linux, so it is the beneficiary of the Red Hat Package Management system (rpm). Like Ubuntu, Fedora can be upgraded by:
yum: Command line tool.
GNOME (or KDE) PackageKit: GUI tool.
Depending upon your desktop, you will either use the GNOME or the KDE front-end for PackageKit. In order to open this tool you simply go to the Administration sub-menu of the System menu and select the Software Update entry. When the tool opens (see Figure 3) you will see the list of updates. To get information about a particular update all you need to do is select a specific package and the information will be displayed in the bottom pane.
To go ahead with the update click the Install Updates button. As the process happens, a progress bar will indicate where GNOME (or KDE) PackageKit is in the process.
When the process is complete, GNOME (or KDE) PackageKit will report that your system is up to date. Click the OK button when prompted.
Now let’s take a look at upgrading Fedora via the command line. As stated earlier, this is done with the help of the yum command. In order to take care of this, follow these steps:
Open up a terminal window (do this by going to the System Tools sub-menu of the Applications menu and selecting Terminal).
Enter the su command to change to the super user.
Type your super user password and hit Enter.
Issue the command yum update and yum will check to see what packages are available for update.
Look through the listing of updates (see Figure 4).
If you want to go through with the update enter ‘y’ (no quotes) and hit Enter.
Sit back and watch the updates happen.
Exit out of the root user command prompt by typing “exit” (no quotes) and hitting Enter.
Close the terminal when complete.
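For reference, the complete Fedora terminal session boils down to:
# become the super user
su
# check for available updates and, upon confirmation, apply them
yum update
# drop super user privileges when finished
exit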
Your Fedora system is now up to date.
Final Thoughts
Granted, only two distributions were touched on here, but this should illustrate how easily a Linux installation is updated. Although the tools might not be universal, the concepts are. Whether you are using Ubuntu, OpenSuSE, Slackware, Fedora, Mandriva, or anything in between, the above illustrations should help you update just about any Linux distribution. And hopefully this tutorial helps to show you just how user-friendly the Linux operating system has become.
The telecommunications industry is the backbone of today’s increasingly digital economies, but it faces a difficult new challenge in evolving to meet modern infrastructure practices. How did telecommunications get itself into this situation? Because the risks of incidents or downtime are so severe, the industry has focused almost exclusively on system designs that minimize risk and maximize reliability. That’s fantastic for mission-critical services, whether public air traffic control or private high-speed banking, but it emphasizes stability over productivity and over the adoption of new technologies that might make their operations more resilient and performant.
Telecommunications is playing catch-up on cloud native technology, and the downstream effects are starting to show. These organizations are now behind the times on the de facto choices for enterprise and IT, which means they’re less likely to recruit the top-tier engineering talent they need. In increasingly competitive landscapes, they need to escalate productivity and deploy new telephony platforms to market faster, not get bogged down in old custom solutions built in-house.
To make that leap from internally-trusted to industry-trusted tooling, telecommunications organizations need confidence that they’re on track to properly evolve their virtual network function (VNF) infrastructure to enable cloud native functions using Kubernetes. That’s where CNCF aims to help.
Enter the CNF Test Suite for telecommunications
A cloud native network function (CNF) is an application that implements or facilitates network functionality in a cloud native way, developed using standardized principles and consisting of at least one microservice.
And the CNF Test Suite (cncf/cnf-testsuite) is an open source test suite for telcos to know exactly how cloud native their CNFs are. It’s designed for telecommunications developers and network operators, building with Kubernetes and other cloud native technology, to validate how well they’re following cloud native principles and best practices, like immutable infrastructure, declarative APIs, and a “repeatable deployment process.”
The CNCF is bringing together the Telecom User Group (TUG) and the Cloud Native Network Function Working Group (CNF WG) to implement the CNF Test Suite, which helps telco developers and ops teams build faster feedback loops thanks to the suite’s flexible testing and optimized execution time. Because it can be integrated into any CI/CD pipeline, whether in development or pre-production checks, or run as a standalone test for a single CNF, telecommunications development teams get an at-a-glance understanding of how their new deployments align with the cloud native ecosystem, including CNCF-hosted projects, technologies, and concepts.
It’s a powerful answer to a difficult question: How cloud native are we?
The CNF Test Suite leverages 10 CNCF-hosted projects and several open source tools. A modified version of CoreDNS is used as an example CNF for end users to get familiar with the test suite in five steps, and Prometheus is utilized in an observability test to check the best practice for CNFs to actively expose metrics. And it packages other upstream tools, like OPA Gatekeeper, the Helm linter, and Promtool, to make installation, configuration, and versioning repeatable. The CNF Test Suite team is also grateful for contributions from Kyverno on security tests, LitmusChaos for resilience tests, and Kubescape for security policies.
The minimal install for the CNF Test Suite requires only a running Kubernetes cluster, kubectl, curl, and helm, and it even supports running CNF tests on air-gapped machines or for teams that need to self-host the image repositories. Once installed, you can use an example CNF or bring your own; all you need to do is supply the .yml file and run `cnf-testsuite all` to run all the available tests. There’s even a quick five-step process for deploying the suite and getting recommendations in less than 15 minutes.
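To give a feel for the workflow, a minimal session might look something like the following sketch (the setup and cnf_setup subcommands reflect the project’s documented flow at the time of writing; check the GitHub README for the exact invocations in your version):
# install the suite's prerequisites into the cluster
cnf-testsuite setup
# point the suite at your CNF's configuration file
cnf-testsuite cnf_setup cnf-config=./cnf-testsuite.yml
# run every available workload test
cnf-testsuite all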
What the CNF Test Suite covers and why
At the start of 2022, the CNF Test Suite can run approximately 60 workload tests, which are segmented into 7 different categories.
Compatibility, Installability & Upgradability: CNFs should work with any Certified Kubernetes product and any CNI-compatible network that meets their functionality requirements while using standard, in-band deployment tools such as Helm (version 3) charts. The CNF Test Suite checks whether the CNF can be horizontally and vertically scaled using `kubectl` to ensure it can leverage Kubernetes’ built-in functionality (see the `kubectl` sketch after this list).
Microservice: The CNF should be developed and delivered as a microservice for improved agility, or the development time required between deployments. Agile organizations can deploy new features more frequently or allow multiple teams to safely deploy patches based on their functional area, like fixing security vulnerabilities, without having to sync with other teams first.
State: A cloud native infrastructure should be immutable, environmentally agnostic, and resilient to node failure, which means properly managing configuration, persistent data, and state. A CNF’s configuration should be stateless, stored in a custom resource definition or a separate database over local storage, with any persistent data managed by StatefulSets. Separating stateful and stateless information makes for infrastructure that’s easily reproduced, consistent, disposable, and always deployed in a repeatable way.
Reliability, Resilience & Availability: Reliability in telco infrastructure is the same as standard IT—it needs to be highly secure and reliable and support ultra-low latencies. Cloud native best practices try to reduce mean time between failure (MTBF) by relying on redundant subcomponents with higher serviceability (mean time to recover (MTTR)), and then testing those assumptions through chaos engineering and self-healing configurations. The Test Suite uses a type of chaos testing to ensure CNFs are resilient to the inevitable failures of public cloud environments or issues on an orchestrator level, such as what happens when pods are unexpectedly deleted or run out of computing resources. These tests ensure CNFs meet the telco industry’s standards for reliability on non-carrier-grade shared cloud hardware/software platforms.
Observability & Diagnostics: Each piece of production cloud native infrastructure must make its internal states observable through metrics, tracing, and logging. The CNF Test Suite looks for compatibility with Fluentd, Jaeger, Promtool, Prometheus, and OpenMetrics, which help DevOps or SRE teams maintain, debug, and gather insights about the health of their production environments, which must be versioned, maintained in source control, and altered only through deployment pipelines.
Security: Cloud native security requires attention from experts at the operating system, container runtime, orchestration, application, and cloud platform levels. While many of these fall outside the scope of the CNF Test Suite, it still validates whether containers are isolated from one another and the host, do not allow privilege escalation, have defined resource limits, and are verified against common CVEs.
Configuration: Teams should manage a CNF’s configuration in a declarative manner—using ConfigMaps, Operators, or other declarative interfaces—to design the desired outcome, not how to achieve said outcome. Declarative configuration doesn’t have to be executed to be understood, making it far less prone to error than imperative configuration or even the most well-maintained sequences of `kubectl` commands.
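To make the compatibility checks above concrete, the scaling probes amount to roughly what you would do by hand with `kubectl`; a sketch, with a hypothetical deployment name:
# horizontal scaling: change the replica count of the CNF's deployment
kubectl scale deployment my-cnf --replicas=3
# vertical scaling: adjust the CNF's resource limits
kubectl set resources deployment my-cnf --limits=cpu=500m,memory=256Mi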
After deploying numerous tests in each category, the CNF Test Suite outputs flexible scoring and suggestions for remediation for each category (or a single category if you choose one in the CLI), giving you practical next steps on improving your CNF to better follow cloud native best practices. It’s a powerful, and still growing, solution for the telecommunications industry to embrace cloud native in a way that’s controllable, observable, and validated by all the expertise under the CNCF umbrella.
What’s next for the CNF Test Suite?
The Test Suite initiative will continue to work closely with the Telecom User Group (TUG) and the Cloud Native Network Function Working Group (CNF WG), collecting feedback based on real-world use cases and evolving the project. As the CNF WG publishes more recommended practices for cloud native telcos, the CNF Test Suite team will add more tests to validate each.
In fact, v0.26.0, released on February 25, 2022, includes six new workload tests, bug fixes, and improved documentation around platform tests. If you’d like to get involved and shape the future of the CNF Test Suite, there are already several ways to provide feedback or contribute code, documentation, or example CNFs:
Visit the CNF Test Suite on GitHub
Continue the conversation on Slack (#cnf-testsuite-dev)
Attend CNF Test Suite Contributor calls on Thursdays at 15:15 UTC
Join the CNF Working Group meetings on Mondays at 16:00 UTC
Looking ahead: The CNF Certification Program
The CNF Test Suite is just the first exciting step in the upcoming Cloud Native Network Function (CNF) Certification Program. We’re looking forward to making the CNF Test Suite the de facto tool for network equipment providers and CNF development teams to prove—and then certify—that they’re adopting cloud native best practices in new products and services.
The wins for the telecommunications industry are clear:
Providers get verification that their cloud native applications and architectures adhere to cloud native best practices.
Their customers get verification that the cloud native services or networks they’re procuring are actually cloud native.
And they both get even better reliability, reduced risk, and lowered capital/operating costs.
We’re planning on supporting any product that runs in a certified Kubernetes environment to make sure organizations build CNFs that are compatible with any major public cloud providers or on-premises environments. We haven’t yet published the certification requirements, but they will be similar to the k8s-conformance process, where you can submit results via pull request and receive updates on your certification process over email.
As the CNF Certification Program develops, both the TUG and CNF-WG will engage with organizations that use the Test Suite heavily to make improvements and stay up-to-date on the latest cloud native best practices. We’re excited to see how the telecommunications industry evolves by adopting more cloud native principles, like loosely-coupled systems and immutability, and gathering proof of their hard work via the CNF Test Suite. That’s how we ensure a complex and essential industry takes the right next steps toward the best technology infrastructure has to offer, without sacrificing an inch on reliability.
To take the next steps with the CNF Test Suite and prepare your organization for the upcoming CNF Certification Program, schedule a personalized CNF Test Suite demo or attend Cloud Native Telco Day, a co-located event at KubeCon + CloudNativeCon Europe 2022, on May 16, 2022.
For this three-part series, we implemented a ‘pedal to the metal’, GPIO-driven flashing of an LED, in the context of a Linux kernel module for the NVIDIA Jetson Nano development board (kernel v4.9.294, arm64), in my favorite programming language … Ada!
Part 3. Practical Ada binding to the C kernel APIs.
You can find the whole project published at https://github.com/ohenley/adacore_jetson. It is known to build and run properly. All instructions to be up and running in 5 minutes are included in the accompanying front-facing README.md. Do not hesitate to file a GitHub issue if you find any problem.
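Assuming git and the prerequisites listed in the README are installed, getting started amounts to:
# fetch the project and follow the front-facing README.md from there
git clone https://github.com/ohenley/adacore_jetson
cd adacore_jetson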
Disclaimer: This text is meant to appeal to both Ada and non-Ada coders. Therefore I try to strike a balance between code story simplicity, didactic tractability, and feature density. As I said to a colleague, this is the text I would have liked to come across before starting this experiment.
Binding 101
The binding thickness
Our code boundary to the Linux kernel C functions lies in kernel.ads. kernel.adb provides an optional “adaptation” layer before breaking into the concrete C binding. Take printk (the printf equivalent in kernel space) for example. In C, you would call printk("hello\n"). Ada strings are not null-terminated; they are an array of characters. To make sure the passed Ada string stays valid on the C side, you expose specification signatures (.ads) that make sense from an Ada point of view and “adapt” in the body implementation (.adb) before calling directly into the binding. Strictly speaking, our exposed Ada Printk qualifies as a “thick” binding even though the adaptation layer is minimal. This is in opposition to a “thin” binding, which is a one-to-one mapping on the C signature, as implemented by Printk_C.
-- kernel.ads
procedure Printk (S : String); -- only this is visible for clients of kernel
-- kernel.adb
procedure Printk_C (S : String) with -- considered a thin binding
Import => true,
Convention => C,
External_Name => "printk";
procedure Printk (S : String) is -- considered a thick binding
begin
Printk_C (S & Ascii.Lf & Ascii.Nul); -- because we 'mangle' for Ada comfort
end;
The wrapper function
Binding to a wrapped C macro or static inline is often convenient: it potentially lets you inherit fixes and upgrades happening inside the macro implementation and is, depending on the context, potentially more portable. create_singlethread_workqueue, used in printk_wq.c as found in Part 1, makes a perfect example. Our driver has a C home in main.c, where you create a C wrapping function that calls the macro.
Sometimes a macro called on the C side creates stuff, in place, which you end up needing on the Ada side. You can probably always bind to this resource but I find it often impedes code story. Take DECLARE_DELAYED_WORK(dw, delayed_work_cb) for example. From an outside point of view, it implicitly creates struct delayed_work dw in place.
Using this macro, the only way I found to get hold of dw from Ada without crashing (returning dw from a wrapper never worked) was to globally call DECLARE_DELAYED_WORK(dw, delayed_work_cb) in main.c and then bind only to dw. Having to maintain this from C, making it magically appear in Ada, felt like “breadboard wiring” to me. In the code repository, you will find that we fully reconstructed this macro under the procedure of the same name, Declare_Delayed_Work.
The pointer shortcut
Most published Ada to C bindings implement full definition parity. This is ideal in most cases, but it also comes with complexity: it may generate many third-party files, sometimes buried deep, with out-of-sync definitions, etc. What can you do when complete bindings are missing or you just want to move lean and fast? If you are making a prototype, you want minimal dependencies and the binding part is peripheral, e.g. you may only need a quick native window API. You get the point.
Depending on the context, you do not always need the full type definitions to get going. Anytime you are strictly dealing with a handle pointer (not owning the memory), you can take a shortcut. Let’s bind to gpio_get_value to illustrate. Again, I follow and lay out all C signatures found in the kernel sources leading to concrete stuff, where we can bind.
Inspecting the C definitions, we find that gpiod_get_raw_value and gpio_to_desc are our available functions for binding. We note gpio_to_desc uses a transient pointer of type gpio_desc *. Because we do not touch or own a full gpio_desc instance, we can happily skip defining it in full (and any dependent types, e.g. gpio_device).
By declaring type Gpio_Desc_Acc is new System.Address; we create an equivalent to gpio_desc *. After all, a C pointer is a named system address. We now have everything we need to build our Ada version of gpio_get_value.
-- kernel.ads
package Ic renames Interfaces.C;
function Gpio_Get_Value (Gpio : Ic.Unsigned) return Ic.Int; -- only this is visible for clients of kernel
-- kernel.adb
type Gpio_Desc_Acc is new System.Address; -- shortcut
function Gpio_To_Desc_C (Gpio : Ic.Unsigned) return Gpio_Desc_Acc with
Import => True,
Convention => C,
External_Name => "gpio_to_desc";
function Gpiod_Get_Raw_Value_C (Desc : Gpio_Desc_Acc) return Ic.Int with
Import => True,
Convention => C,
External_Name => "gpiod_get_raw_value";
function Gpio_Get_Value (Gpio : Ic.Unsigned) return Ic.Int is
Desc : Gpio_Desc_Acc := Gpio_To_Desc_C (Gpio);
begin
return Gpiod_Get_Raw_Value_C (Desc);
end;
The Raw bindings, “100% Ada”
In most production contexts we cannot recommend reconstructing unbindable kernel API calls in Ada. Wrapping the C macro or static inline is definitely easier, safer, more portable, and maintainable. The following goes full-blown Ada for the sake of illustrating some interesting nuts and bolts and to show that it is always possible.
Flags, first take
Given the willpower, you can always reconstruct the targeted macro or static inline in Ada. Let’s come back to create_singlethread_workqueue. If you take the time to expand its macro using GCC, this is what you get.
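The expanded form is not reproduced here, but you can regenerate it by running GCC's preprocessor over a file that calls the macro; a sketch, with placeholder include paths:
# -E stops after preprocessing (macro expansion); -P drops linemarkers for readability
gcc -E -P -I/path/to/kernel/headers main.c | less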
WQ_MAX_ACTIVE, WQ_MAX_UNBOUND_PER_CPU, WQ_DFL_ACTIVE are constants, not flags, so we keep them out.
The enum is anonymous, let’s give it a proper named type.
The __WQ pattern is probably a convention, but at the same time usage is mixed, e.g. WQ_UNBOUND | __WQ_ORDERED, so let’s flatten all this.
Because we do not use these flags elsewhere in our code base, the occasion is perfect to show that in Ada we can keep all this modeling local to our unique function using it.
-- kernel.ads
package Ic renames Interfaces.C;
type Wq_Struct_Access is new System.Address; -- shortcut
type Lock_Class_Key_Access is new System.Address; -- shortcut
Null_Lock : Lock_Class_Key_Access :=
Lock_Class_Key_Access (System.Null_Address); -- typed ((void *)0) equiv.
-- kernel.adb
type Bool is (NO, YES) with Size => 1; -- enum holding on 1 bit
for Bool use (NO => 0, YES => 1); -- "represented" by 0, 1 too
function Alloc_Workqueue_Key_C ...
External_Name => "__alloc_workqueue_key"; -- thin binding
function Create_Singlethread_Wq (Name : String) return Wq_Struct_Access is
type Workqueue_Flags is record
...
WQ_POWER_EFFICIENT : Bool;
WQ_DRAINING : Bool;
...
end record with Size => Ic.Unsigned'Size;
for Workqueue_Flags use record
...
WQ_POWER_EFFICIENT at 0 range 7 .. 7;
WQ_DRAINING at 0 range 16 .. 16;
...
end record;
Flags : Workqueue_Flags := (WQ_UNBOUND => YES,
WQ_ORDERED => YES,
WQ_ORDERED_EXPLICIT => YES,
WQ_LEGACY => YES,
WQ_MEM_RECLAIM => YES,
others => NO);
Wq_Flags : Ic.Unsigned with Address => Flags'Address;
begin
return Alloc_Workqueue_Key_C ("%s", Wq_Flags, 1, Null_Lock, "", Name);
end;
In C, each flag is implicitly encoded as an integer literal, bit-shifted by some amount. Because the __alloc_workqueue_key signature expects flags encoded as an unsigned int, it should be reasonable to use Ic.Unsigned'Size to hold a Workqueue_Flags.
We build the representation of the Workqueue_Flags type similarly to what we learned in Part 2 to model registers. Compared to the C version, we now have NO => 0, YES => 1 semantics and no need for bitwise operations.
Remember, in Ada we roll with strong user-defined types for the greater good. Therefore something like Workqueue_Flags does not match the expected Flags : Ic.Unsigned parameter of our __alloc_workqueue_key thin binding. What should we do? Create a variable Wq_Flags : Ic.Unsigned and overlay it at the address of Flags : Workqueue_Flags, which you can then pass to __alloc_workqueue_key.
Wq_Flags : Ic.Unsigned with Address => Flags'Address; -- voila!
Ioremap and iowrite32
The core work of the raw_io version happens in Set_Gpio. Using Ioremap, we retrieve the kernel mapped IO memory location for the GPIO_OUT register physical address. We then write the content of our Gpio_Control to this IO memory location through Io_Write_32.
-- kernel.ads
type Iomem_Access is new System.Address;
-- led.adb
package K renames Kernel;
package C renames Controllers;
procedure Set_Gpio (Pin : C.Pin; S : Led.State) is
function Bit (S : Led.State) return C.Bit renames Led.State'Enum_Rep;
Base_Addr : K.Iomem_Access;
Control : C.Gpio_Control := (Bits => (others => 0),
Locks => (others => 0));
Control_C : K.U32 with Address => Control'Address;
begin
...
Control.Bits (Pin.Reg_Bit) := Bit (S); -- set the GPIO flags
...
Base_Addr := Ioremap (C.Get_Register_Phys_Address (Pin.Port, C.GPIO_OUT),
Control_C'Size); -- get kernel mapped register addr.
K.Io_Write_32 (Control_C, Base_Addr); -- write our GPIO flags to this addr.
...
end;
Let’s take the hard path of full reconstruction to illustrate interesting stuff. We first implement ioremap. On the C side we find
Here we are both lucky and unlucky. __ioremap is low-hanging fruit, while __pgprot(PROT_DEVICE_nGnRE) turns out to be a rabbit hole. I skip the intermediate expansion by reporting the final result
The macro pattern _AT(pteval_t, x) can be cleared right away. IIUC, it serves to handle calls from both assembly and C. When you are concerned with the C case, as we are, it boils down to x, e.g. ((pteval_t)(1)) << 10 becomes 1 << 10.
arm64_kernel_unmapped_at_el0 is in part kernel-configuration dependent, defaulting to ‘yes’, so let’s simplify our job and bring in PTE_NG, the chosen value (((pteval_t)(1)) << 11), for all cases.
(((pteval_t)((1))) << 2) turns out to be PTE_ATTRINDX(t) with MT_DEVICE_nGnRE as input. Inspecting the kernel sources, there are four other values intended as input to PTE_ATTRINDX(t). PTE_ATTRINDX behaves like a function, so let’s implement it as such.
type Pgprot_T is mod 2**64; -- type will hold on 64 bits
type Memory_T is range 0 .. 5;
MT_DEVICE_NGnRnE : constant Memory_T := 0;
MT_DEVICE_NGnRE : constant Memory_T := 1;
...
MT_NORMAL_WT : constant Memory_T := 5;
function PTE_ATTRINDX (Mt : Memory_T) return Pgprot_T is
(Pgprot_T(Mt * 2#1#e+2)); -- base # based_integer # exponent
Here I want to show another way to replicate C behavior, this time using bitwise operations. Something like the PTE_TYPE_MASK value, ((pteval_t)(3)) << 0, cannot be approached like we did before: 3 takes two bits and is somewhat of a magic number. What we can do is improve on the representation. We are doing bit masks, so why not express them using binary numbers directly? It even makes sense graphically.
-- kernel.ads
type Phys_Addr_T is new System.Address;
type Iomem_Access is new System.Address;
-- kernel.adb
function Ioremap (Phys_Addr : Phys_Addr_T;
Size : Ic.Size_T) return Iomem_Access is
...
Pgprot : Pgprot_T := (PTE_TYPE_MASK or
PTE_AF or
PTE_SHARED or
PTE_NG or
PTE_PXN or
PTE_UXN or
PTE_DIRTY or
PTE_DBM or
PTE_ATTRINDX (MT_DEVICE_NGnRE));
begin
return Ioremap_C (Phys_Addr, Size, Pgprot);
end;
So what is interesting here?
Ada is flexible. The original Pgprot_T values arrangement did not allow record mapping like we previously did for the Workqueue_Flags type. We adapted by replicating the C implementation, OR'ing all values to create a final mask.
Everything has been tidied up by strong typing. We are now stuck with disciplined stuff.
Representation is explicit, expressed in the intended base.
Once again this typing machinery lives at the most restrictive scope, inside the Ioremap function. Because Ada scoping has few special rules, refactoring up/out of scopes usually boils down to a simple block-swapping game.
Emitting assembly
Now we take a look at ioread32 and iowrite32. It turns out those are, again, a cascade of static inlines and macros that end up directly emitting GCC assembly directives (we detail only iowrite32).
This Io_Write_32 implementation is not portable, as we rebuilt the macro following the expansion tailored for arm64. A C wrapper would be less trouble while ensuring portability. Nevertheless, we felt this experiment was a good opportunity to show assembly directives in Ada.
That’s it!
I hope you appreciated this moderately dense overview of Ada in the context of Linux kernel module development. I think we can agree that Ada is a really disciplined and powerful contender when it comes to pedal-to-the-metal system programming. Thank you for your time and attention. Do not hesitate to reach out, and happy Ada coding!
I want to thank Quentin Ochem, Nicolas Setton, Fabien Chouteau, Jerome Lambourg, Michael Frank, Derek Schacht, Arnaud Charlet, Pat Bernardi, Leo Germond, and Artium Nihamkin for their different insights and feedback to nail this experiment.
Olivier Henley
The author, Olivier Henley, is a UX Engineer at AdaCore. His role is exploring new markets through technical stories. Prior to joining AdaCore, Olivier was a consultant software engineer for Autodesk. Prior to that, Olivier worked on AAA game titles such as For Honor and Rainbow Six Siege in addition to many R&D gaming endeavors at Ubisoft Montreal. Olivier graduated from the Electrical Engineering program in Polytechnique Montreal. He is a co-author of patent US8884949B1, describing the invention of a novel temporal filter implicating NI technology. An Ada advocate, Olivier actively curates GitHub’s Awesome-Ada list.
For this three-part series, we implemented a ‘pedal to the metal’, GPIO-driven flashing of an LED, in the context of a Linux kernel module for the NVIDIA Jetson Nano development board (kernel v4.9.294, arm64), in my favorite programming language … Ada!
You can find the whole project published at https://github.com/ohenley/adacore_jetson. It is known to build and run properly. All instructions to be up and running in 5 minutes are included in the accompanying front-facing README.md. Do not hesitate to file a GitHub issue if you find any problem.
Disclaimer: This text is meant to appeal to both Ada and non-Ada coders. Therefore I try to strike a balance between code story simplicity, didactic tractability, and feature density. As I said to a colleague, this is the text I would have liked to come across before starting this experiment.
Pascal on steroids, you said?
led.ads (a specification file, the Ada equivalent of a C .h header file) is where we model a simple interface for our LED.
with Controllers;
package Led is -- this bit of Ada code provides an interface to our LED
package C renames Controllers;
type State is (Off, On);
type Led_Type (Size : Natural) is tagged private;
subtype Tag is String;
procedure Init (L : out Led_Type; P : C.Pin; T : Tag; S : State);
procedure Flip_State (L : in out Led_Type);
procedure Final (L : Led_Type);
private
for State use (Off => 0, On => 1);
function "not" (S : State) return State is
(if S = On then Off else On);
type Led_Type (Size : Natural) is tagged record
P : C.Pin;
T : Tag (1 .. Size);
S : State;
end record;
end Led;
For those new to Ada, many interesting things happen for a language operating at the metal.
First, types are user-defined and strong. Therefore the compile-time analysis is super rich and the checking extremely strict. Many bugs do not survive compilation. If you want to push the envelope, move to the SPARK Ada subset. You can then start to prove your code for the absence of runtime errors. It’s that serious.
We with the Controllers package. Ada’s with is a stateless semantic inclusion at the language level, not just a preprocessor text inclusion like #include. E.g. no more redefinition contexts, accompanying guard boilerplate, and whatnot.
Led is packaged. Nothing inside Led can clash outside. It can then be with’ed and use’d at any scope. Ada scoping, namespacing, signatures, etc. are powerful and sound all across the board. Explaining everything does not fit here.
Here renames is used as an idiom to preserve absolute namespacing while keeping the code story succinct. In huge codebases, tractability remains clear, which is very welcome.
Ada enum State has full-image and range representation. We use a numeric representation clause, which will serve later.
A tagged record lets you inherit a type (like in OOP) and use the “dot” notation.
We subtype a Tag as a String for semantic clarity.
out means the procedure must initialize the object before returning “out”; in out means the passed “in” object, already initialized, may be modified before returning “out”.
We discriminate the record (loosely a C struct equivalent) by specifying our Tag Size.
We override the “not” operator for the State type as a function expression.
We have public/private information visibility that lets us structure our code and communicate it to others. A neat example: because a package exists at the language level, remove every type from the public part, add data in the body file, and you end up with a singleton. That easy.
The driver translation
The top-level code story resides in flash_led.adb. Immediately when the module is loaded by the kernel, Ada_Init_Module executes, called from our main.c entry point. It first imports the elaboration procedure flash_ledinit generated by GNATbind, runs it, Inits our LED object, and then sets up/registers the delayed work queue.
with Kernel;
with Controllers;
with Interfaces.C; use Interfaces.C;
...
package K renames Kernel;
package C renames Controllers;
Wq : K.Workqueue_Struct_Access := K.Null_Wq;
Delayed_Work : aliased K.Delayed_Work; -- subject to alias by some pointer on it
Pin : C.Pin := C.Jetson_Nano_Header_Pins (18);
Led_Tag : Led.Tag := "my_led";
My_Led : Led_Type (Led_Tag'Length);
Half_Period_Ms : Unsigned := 500;
...
procedure Ada_Init_Module is
procedure Ada_Linux_Init with
Import => True,
Convention => Ada,
External_Name => "flash_ledinit";
begin
Ada_Linux_Init;
My_Led.Init (P => Pin, T => Led_Tag, S => Off);
...
if Wq = K.Null_Wq then -- Ada equal
Wq := K.Create_Singlethread_Wq ("flash_led_wq");
end if;
if Wq /= K.Null_Wq then -- Ada not equal
K.Queue_Delayed_Work(Wq,
Delayed_Work'Access, -- an Ada pointer
K.Msecs_To_Jiffies (Half_Period_Ms));
end if;
end;
In the callback, instead of printing to the kernel message buffer, we call the Flip_State implementation of our LED object and re-register to the delayed work queue. It now flashes.
procedure Work_Callback (Work : K.Work_Struct_Access) is
begin
My_Led.Flip_State;
K.Queue_Delayed_Work (Wq,
Delayed_Work'Access, -- An Ada pointer
K.Msecs_To_Jiffies (Half_Period_Ms));
end;
Housekeeping
If you search the web for images of “NVIDIA Jetson Development board GPIO header pinout” you will find such a diagram.
Right away, you figure there are about five data fields describing a single pinout:
Board physical pin number (#).
Default function (name).
Alternate function (name).
Linux GPIO (#).
Tegra SoC GPIO (name.#).
Looking at this diagram, we find hints of the different mappings happening at the Tegra SoC, Linux, and physical pinout levels. Each “interface” has its own addressing scheme. The Tegra SoC has logical naming and offers default and alternate functions for a given GPIO line. Linux maintains its own GPIO numbering of the lines, as does the physical layout of the board.
From where I stand, I want to connect an LED circuit to a board pin and control it without fuss, using any addressing scheme available. For this we created an array of variant record instantiations, modeling the pin characteristics for the whole header pinout. Nothing cryptic or ambiguous, just precise and clear structured data.
Because everything in this Jetson_Nano_Header_Pins data assembly is unique and unrelated, it cannot be generalized further; it has to live somewhere, plainly. Let’s check how we model a single pin as Pin_Data.
type Function_Type is (GPIO, VDC3_3, VDC5_0, GND, NIL, ..., I2S_DOUT);
type Gpio_Linux_Nbr is range 0 .. 255; -- # cat /sys/kernel/debug/gpio
type Gpio_Tegra_Port is (PA, PB, ..., PEE, NIL);
type Gpio_Tegra_Register_Bit is range 0 .. 7;
type Pin_Data (Default : Function_Type := NIL) is record
Alternate: Function_Type := NIL;
case Default is
when VDC3_3 .. GND =>
Null; -- nothing to add
when others =>
Linux_Nbr : Gpio_Linux_Nbr;
Port : Gpio_Tegra_Port;
Reg_Bit : Gpio_Tegra_Register_Bit;
Pinmux_Offset : Storage_Offset;
end case;
end record;
The Pin_Data type is a variant record, meaning that, based on a Function_Type, it will contain “variable” data. Notice how we range over the Function_Type values to describe the switch cases. This gives us the capability to model every pin configuration.
When you consult the Technical Reference Manual (TRM) of the Nano board, you find that the GPIO register controls are laid out following an arithmetic pattern. Using some hardware entry point constants and the specifics of a pin’s data held in Jetson_Nano_Header_Pins, one can resolve any register needed.
Gpio_Banks : constant Banks_Array :=
(To_Address (16#6000_D000#),
...
To_Address (16#6000_D700#));
type Register is (GPIO_CNF, GPIO_OE, GPIO_OUT, ..., GPIO_INT_CLR);
type Registers_Offsets_Array is array (Register) of Storage_Offset;
Registers_Offsets : constant Registers_Offsets_Array :=
(GPIO_CNF => 16#00#,
... ,
GPIO_INT_CLR => 16#70#);
function Get_Bank_Phys_Addr (Port : Gpio_Tegra_Port) return System.Address is
(Gpio_Banks (Gpio_Tegra_Port'Pos (Port) / 4 + 1));
function Get_Register_Phys_Addr (Port : Gpio_Tegra_Port; Reg : Register) return System.Address is
(Get_Bank_Phys_Addr (Port) +
Registers_Offsets (Reg) +
(Gpio_Tegra_Port'Pos (Port) mod 4) * 4);
In this experiment, it is mainly used to request the kernel memory mapping of such a GPIO register.
Now, let’s model a common Pinmux register found in the TRM.
package K renames Kernel;
...
type Bit is mod 2**1; -- will hold in 1 bit
type Two_Bits is mod 2**2; -- will hold in 2 bits
type Pinmux_Control is record
Pm : Two_Bits;
Pupd : Two_Bits;
Tristate : Bit;
Park : Bit;
E_Input : Bit;
Lock : Bit;
E_Hsm : Bit;
E_Schmt : Bit;
Drive_Type : Two_Bits;
end record with Size => K.U32'Size;
for Pinmux_Control use record
Pm at 0 range 0 .. 1; -- At byte 0 range bit 0 to bit 1
Pupd at 0 range 2 .. 3;
Tristate at 0 range 4 .. 4;
Park at 0 range 5 .. 5;
E_Input at 0 range 6 .. 6;
Lock at 0 range 7 .. 7;
E_Hsm at 0 range 9 .. 9;
E_Schmt at 0 range 12 .. 12;
Drive_Type at 0 range 13 .. 14;
end record;
I think the code speaks for itself.
We specify types Bit and Two_Bits to cover exactly the binary width conveyed by their names.
We compose the different bitfields over a record size of 32 bits.
We explicitly layout the bitfields using byte addressing and bit range.
You can now directly address bitfields by name and not worry about any bitwise arithmetic mishap. OK, so now what about logically addressing bitfields? You pack them inside arrays. We have an example in the modeling of the GPIO register.
type Gpio_Tegra_Register_Bit is range 0 .. 7;
...
type Bit is mod 2**1; -- will hold in 1 bit
...
type Gpio_Bit_Array is array (Gpio_Tegra_Register_Bit) of Bit with Pack;
type Gpio_Control is record
Bits : Gpio_Bit_Array;
Locks : Gpio_Bit_Array;
end record with Size => K.U32'Size;
for Gpio_Control use record
Bits at 0 range 0 .. 7;
Locks at 1 range 0 .. 7; -- At byte 1 range bit 0 to bit 7
end record;
Now we can write:
procedure Set_Gpio (Pin : C.Pin; S : Led.State) is
function Bit (S: Led.State) return C.Bit renames Led.State'Enum_Rep;
-- remember we gave the Led.State Enum a numeric Representation clause.
Control : C.Gpio_Control := (Bits => (others => 0), -- init all to 0
Locks => (others => 0));
...
begin
...
Control.Bits (Pin.Reg_Bit) := Bit (S); -- Kewl!
...
end;
Verbosity
I wanted to give you a feel for what there is to gain by modeling in Ada. To me, it is about semantic clarity, modeling affinity, and structural integrity. Ada offers flexibility through a structured approach to low-level details. Once you set foot in Ada, domain modeling becomes easy because, as you saw, you are given provisions to incisively specify things using strong user-defined types. The stringent compiler constrains your architecture to fall into place on every iteration. From experience, it is truly amazing how the GNAT toolchain helps you iterate quickly while keeping technical debt in check.
Ada is not too complex, nor too verbose; those are mundane concerns.
Ada demands that you demonstrate your modeling makes sense across thousands of lines of code; it is code production under continuous streamlining.
What’s next?
In the last entry, we will finally meet the kernel. If I have kept your interest and you want to close the loop, move here. Cheers!
The author, Olivier Henley, is a UX Engineer at AdaCore. His role is exploring new markets through technical stories. Prior to joining AdaCore, Olivier was a consultant software engineer for Autodesk. Prior to that, Olivier worked on AAA game titles such as For Honor and Rainbow Six Siege in addition to many R&D gaming endeavors at Ubisoft Montreal. Olivier graduated from the Electrical Engineering program in Polytechnique Montreal. He is a co-author of patent US8884949B1, describing the invention of a novel temporal filter implicating NI technology. An Ada advocate, Olivier actively curates GitHub’s Awesome-Ada list.