
The one-millionth commit: The search for the lucky Linux kernel contributor

This week has been “a week of millions” for the Linux Foundation, with our announcement that over 1 million people have taken our free Introduction to Linux course. As part of the research for our recently published 2020 Linux Kernel History Report, the Kernel Project itself determined that it had surpassed one million code commits. Here is how we established the identity of this lucky Kernel Project contributor. 

Methodology:

The historical BitKeeper repo (converted to Git) has 63,428 commits, so the one-millionth kernel commit overall is the 936,572nd commit of the Git era (1,000,000 - 63,428 = 936,572). We then found the first merge at which Linus Torvalds’ repo had at least 936,572 commits.
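
As a sanity check, git rev-list --count HEAD reports the same total as the git log --oneline | wc pipeline used below; this is an equivalent alternative, not the command from the original analysis:

> git rev-list --count HEAD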

At commit 92c59e126b21fd212195358a0d296e787e444087 the repo had 936,456 commits, 116 shy of the 936,572 needed:

> git checkout 92c59e126b21fd212195358a0d296e787e444087

> git log --oneline | wc

 936456 7483489 62991540


The next merge, 2f3fbfdaf77f3ac417d0511fac221f76af79f6fc, passed that number with 937,105 commits:

> git checkout 2f3fbfdaf77f3ac417d0511fac221f76af79f6fc

> git log --oneline | wc

 937105 7489456 63037625

So on merge 2f3fbfdaf77f3ac417d0511fac221f76af79f6fc Linus’ repo passed the 1M mark (to be precise, 1,000,533 including BitKeeper commits):

commit 2f3fbfdaf77f3ac417d0511fac221f76af79f6fc 92c59e126b21fd212195358a0d296e787e444087 f510ca05271b6f71bd532fe743b39f628110223f (HEAD)

Merge: 92c59e126b21 f510ca05271b

Author: Linus Torvalds <torvalds@linux-foundation.org>

Date:   Mon Aug 3 19:19:34 2020 -0700


Merge tag 'arm-dt-5.9' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc

At this point, we can simply list the 936,572nd commit, counting from the oldest, in the log:

> git log --oneline | tail -936572 | head -1

85b23fbc7d88 x86/cpufeatures: Add enumeration for SERIALIZE instruction
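
Equivalently (an alternative sketch, not the command used in the original analysis), you can reverse the log so the oldest commit comes first and print line 936,572 directly:

> git log --oneline --reverse | sed -n '936572p'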

And the committer is…

> git log -1 85b23fbc7d88

commit 85b23fbc7d88f8c6e3951721802d7845bc39663d

Author: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>

Date:   Sun Jul 26 21:31:29 2020 -0700

    x86/cpufeatures: Add enumeration for SERIALIZE instruction

Ricardo’s momentous commit to the Kernel was to add enumeration support for the SERIALIZE instruction, supported in Intel’s forthcoming Sapphire Rapids and Alder Lake microarchitectures for their 10-nanometer server and workstation chips. Ricardo is a software engineer who has been working on Linux feature support for Intel’s microprocessors for 12 years as part of the company’s CPU enabling team.
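
On a machine whose CPU and kernel both support it, the result of this enumeration is visible from userspace as a "serialize" flag in /proc/cpuinfo. An illustrative check (not part of the original analysis):

> grep -m1 -o serialize /proc/cpuinfo

An empty result simply means the CPU does not advertise the instruction.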

For more about Intel Corporation’s Ricardo Neri, the one-millionth Linux Kernel code committer, please read and watch our interview, conducted by Swapnil Bhartiya on Linux.com.


Meet the contributor of the one-millionth commit: Ricardo Neri

August was a historic month for Linux. The largest open source project on the planet enjoyed its one-millionth code commit. The honor goes to Ricardo Neri, a Linux kernel engineer at Intel. Swapnil Bhartiya, founder and host of TFiR, sat down with Neri on behalf of the Linux Foundation to discuss Neri’s journey and involvement with the Linux kernel community.

A lightly edited transcript of the interview:

Swapnil Bhartiya: Hi, this is Swapnil Bhartiya, on behalf of the Linux Foundation, and today we have with us Ricardo Neri, Linux Software Engineer at Intel, whose code contribution has become the one-millionth contribution to the Linux kernel.

Ricardo Neri: Hi, thank you. Thank you very much.

Swapnil Bhartiya: Ricardo, tell us a little bit about yourself, your journey. When was the first time you came in contact with open source or Linux in general?

Ricardo Neri: That was, I think, in 2008, around the time the iPhone came out. At the time I worked on Symbian, but because the iPhone came out, Symbian died. So I was transferred to a new team, which was working on audio drivers for Linux and the VS. So, maybe by chance I landed on that team, and that’s how I started 12 years ago.

Swapnil Bhartiya: You started contributing to the kernel as part of your organization, but you had personal interactions with the kernel community. How was that interaction?

Ricardo Neri: It was very daunting, because I had heard that it was really hard to convince maintainers to take your code. And also, I don’t know, maybe intimidating, because the people in the community were very smart and had strong opinions about various things. So yeah, I’d say it was intimidating but exciting at the same time.

Swapnil Bhartiya: How have you seen the community itself evolve over time?

Ricardo Neri: Just building on my previous comment, I saw at the time that maintainers care deeply about the quality of the code, and maybe that drove them to make harsh comments on people’s code. That may have been a barrier for new people to start contributing. But I have seen a change in recent years, like a new code of conduct and agreed-upon rules, so that people who are hesitant or not so sure about the quality of their code can just put it out there, and they will not get as harsh a reply as in the early years when I joined the open source community. I think that is one change I have observed. Another change is that more companies are now embracing open source. In the early days, the industry was still dominated by closed source software, but now I see companies building more and more business models around open source software, where the value of the product is not the software but the things that you do with it.

Swapnil Bhartiya: What is interesting is that contributions to the kernel come from all around the globe. You don’t have to be in a specific place to become part of the project. So, what role do you think Linux has played in democratizing software development, where you don’t have to prove yourself before you get involved? You send a patch. If the patch is good, they will take it. If it’s not, they will not. They don’t have to look at your resume or CV to see whether you have done any work before. So how big a role has Linux played in democratizing software development itself?

Ricardo Neri: Yeah, I think it has played a big role because, as you said, you don’t have to have a college degree or a computer science degree to start contributing, because the currency, as you say, is the quality of the code. I myself am not a computer scientist or a software engineer; my background is electrical engineering, so I can probably be a good example of that. I didn’t need to go to college for five years and study computer science to start contributing. Anyone with the interest to learn and to do something can start contributing. And I am not the only example; there are people with a biology degree who have become key contributors to Linux.

You can just go to the Linux kernel mailing list, read the patches, and maybe contribute your own reviews. Then maybe you start sending your own patches. All you need is essentially a workstation with the compiler and the source code. You can find a bug or an improvement, and you can just do it. You don’t need anything more than that.

Swapnil Bhartiya: Yeah, I fully agree with you. Have you attended Linux Plumbers or any other conferences and events?

Ricardo Neri: Yes, actually I was just attending the Linux Plumbers Conference a few hours ago. I was in the power management micro-conference. And in previous years I also attended Open Source Summit, which used to be LinuxCon.

Swapnil Bhartiya: When you interact with the kernel community over email, it is a bit daunting, and you can feel intimidated because you don’t know how they will respond to a patch. But when you meet these developers in person, when you sit down for breakfast or for a beer in the evening, you suddenly find that they are as human as we are. So, when you meet them in person, how do the chemistry, the trust, and the relationship change?

Ricardo Neri: Yeah, that is very true. Because, as you said, if you interact with these people only through the mailing list, you can only see words without any context, and this is prone to misinterpretation on both sides. But equally, as you said, when you meet with them, maybe in a virtual event or in person, you see that they are actually friendly. They do care about the quality of the code, but they are approachable and friendly in my experience. And that is also the experience I have heard from my coworkers who are new to this community; they have similar feedback.

Swapnil Bhartiya: Do you have any interesting anecdotes to share from any of these events? Something like, “Hey, I met that person,” or, “We had been debating a patch for months, then we sat down together and suddenly we saw the solution.” Any interesting story you would like to share?

Ricardo Neri: I noticed, just in this Plumbers Conference this year, that discussing things over the mailing list can take time, because you need to put your comments in written form, then wait for the answer, and so forth, with several iterations of that process. But if you sit down in a room, or in a virtual room, the conversation is more fluid and faster. You can arrive at conclusions, designs, or agreements that would otherwise take weeks or a month on the mailing list. So, yeah, that is something I have noticed.

Swapnil Bhartiya: Let’s talk about your contribution. What was the code contribution that historically became the one-millionth contribution?

Ricardo Neri: That is related to the work that I do at Intel, where I am part of the CPU enabling team. Whenever Intel comes up with a new feature in the processor, our team is responsible for taking that new feature and making it consumable by the Linux kernel. In this particular case, it is for a new instruction called SERIALIZE, which essentially serializes the execution of code. It puts a landmark at which all the execution before that instruction must complete before the code after that instruction starts to execute. That solves problems we had in the past, because you can achieve the same goal using an instruction called CPUID, or a return from interrupt, but those instructions have certain side effects and can also carry a performance penalty. The SERIALIZE instruction lets you divide the execution of code without those side effects that you would otherwise need to fix up in software. So it helps to make the software simpler, and you get a performance benefit as well.

Swapnil Bhartiya: Do you contribute code in your capacity as an Intel engineer, or do you also contribute some code in your free time?

Ricardo Neri: Right now, I am only contributing code in my capacity as an Intel engineer.

Swapnil Bhartiya: The reason I ask is that in the early days of open source, most contributions came from people working in their spare time, but today a majority of contributors are paid by companies to do that work. Working on open source is no longer a part-time hobby. How have you seen this change, where you get paid to work on open source?

Ricardo Neri: That’s very true. As I mentioned earlier, companies have now found ways to build business models around open source software. A good example is Red Hat, where the software is free, but they build their business around the software and do not regard the software itself as the product; it’s a vehicle to deliver value to their customers. The same is true for semiconductor companies such as Intel, which are in the business of selling computer chips. But today, you cannot just sell the chip. You also need to provide a full solution to the customer, and that, of course, includes the software. And that is also true for other companies that have been able to build business models around open source software.

In my early days, when I was new to Linux, I had many colleagues who were in Linux because they believed in it. They believed in the value of open source software, and then they happened to stumble on a job where they were paid to do the things they believed in. I remember them giving talks at my university about how to build a Linux scanner and how to configure it for your own needs, and they did it for free. During my university days, I remember installfests at which you could just bring your laptop and people would help you install Linux. People had a true belief in open source software and were willing to help you for free.

Swapnil Bhartiya: As we discussed earlier, you don’t have to prove yourself or be in a specific region to get involved. So, talk about the role open source has played in creating a level playing field, in giving underrepresented minorities not only tools but also a voice.

Ricardo Neri: I think it’s probably similar to what I was saying at the beginning. In the traditional model, you have to go to college, spend four or five years there without working, and get good grades. You need to have certain opportunities in life to be able to do that, to have the luxury of attending college and gaining a degree. But in software, for instance, you don’t need that. All you need is willingness: the willingness to learn and to contribute. And statistically, underrepresented minority groups have a lesser chance of attending college and getting a degree.

I have also seen companies realize that you don’t actually need to be a computer scientist to start writing software. That has opened doors for people of different and very diverse backgrounds: you don’t have to follow a certain career or school path to land a job in this industry. You can just start wherever you want.

There are many efforts in the community. The GNOME Foundation, for example, has scholarships to help recruit people from underrepresented groups to start contributing, and they get mentoring. That is an important point: the software is free and anyone can contribute to it, but if you have a mentor, someone who can help you navigate an open source software community, that goes a long way toward getting you established in that community. You can start by contributing very simple patches, but with that guidance over time, you can optimize your time and effort toward the things that will have an impact, and maybe someday become a key contributor to the community.

Swapnil Bhartiya: Thank you.

Ricardo Neri: Thank you very much.

Xen on Raspberry Pi 4 adventures

Written by Stefano Stabellini and Roman Shaposhnik

Raspberry Pi (RPi) has been a key enabling device for the Arm community for years, given its low price and widespread adoption. According to the RPi Foundation, over 35 million have been sold, with 44% of these sold into industry. We have always been eager to get the Xen hypervisor running on it, but technical differences between RPi and other Arm platforms made it impractical for the longest time: specifically, a non-standard interrupt controller without virtualization support.

Then the Raspberry Pi 4 came along, together with a regular GIC-400 interrupt controller that Xen supports out of the box. Finally, we could run Xen on an RPi device. Soon Roman Shaposhnik of Project EVE and a few other community members started asking about it on the xen-devel mailing list. “It should be easy,” we answered. “It might even work out of the box,” we wrote in our reply. We were utterly oblivious that we were about to embark on an adventure deep in the belly of the Xen memory allocator and Linux address translation layers.

The first hurdle was the availability of low memory addresses. RPi4 has devices that can only access the first 1GB of RAM, and the amount of memory below 1GB allocated to Dom0 was not enough. Julien Grall solved this problem with a simple one-line fix to increase the memory allocation below 1GB for Dom0 on RPi4. The patch is now present in Xen 4.14.

“This lower-than-1GB limitation is uncommon, but now that it is fixed, it is just going to work.” We were wrong again. The Xen subsystem in Linux uses virt_to_phys to convert virtual addresses to physical addresses, which works for most virtual addresses but not all. It turns out that the RPi4 Linux kernel would sometimes pass virtual addresses that cannot be translated to physical addresses using virt_to_phys, and doing so would result in serious errors. The fix was to use a different address translation function when appropriate. The patch is now present in Linux’s master branch.

We felt confident that we had finally reached the end of the line. “Memory allocations: check. Memory translations: check. We are good to go!” No, not yet. It turns out that the most significant issue was yet to be discovered. The Linux kernel has always had the concept of physical addresses and DMA addresses, where DMA addresses are used to program devices and could be different from physical addresses. In practice, none of the x86, ARM, and ARM64 platforms where Xen could run had DMA addresses different from physical addresses. The Xen subsystem in Linux exploits the DMA/physical address duality for its own address translations: it uses it to convert physical addresses, as seen by the guest, to physical addresses, as seen by Xen.

To our surprise and astonishment, the Raspberry Pi 4 was the very first platform to have physical addresses different from DMA addresses, causing the Xen subsystem in Linux to break. It wasn’t easy to narrow down the issue. Once we understood the problem, a dozen patches later, we had full support for handling DMA/physical address conversions in Linux. The Linux patches are in master and will be available in Linux 5.9.

Solving the address translation issue was the end of our fun hacking adventure. With the Xen and Linux patches applied, Xen and Dom0 work flawlessly. Once Linux 5.9 is out, we will have Xen working on RPi4 out of the box.

Below, we will show you how to run Xen on RPi4 both the real Xen hacker way and as part of a downstream distribution for a much easier end-user experience.

Hacking Xen on Raspberry Pi 4

If you intend to hack on Xen on ARM and would like to use the RPi4 to do it, here is what you need to do to get Xen up and running using UBoot and TFTP. I like to use TFTP because it makes it extremely fast to update any binary during development.  See this tutorial on how to set up and configure a TFTP server. You also need a UART connection to get early output from Xen and Linux; please refer to this article.
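
If you do not already have a TFTP server, here is a minimal sketch using dnsmasq, assuming /srv/tftp as the served directory (adjust paths to your setup):

    # TFTP-only dnsmasq: port=0 disables its DNS function
    sudo mkdir -p /srv/tftp
    sudo dnsmasq --port=0 --enable-tftp --tftp-root=/srv/tftp --no-daemon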

Use the rpi-imager to format an SD card with the regular default Raspberry Pi OS. Mount the first SD card partition and edit config.txt. Make sure to add the following:

    kernel=u-boot.bin

    enable_uart=1

    arm_64bit=1

Download a suitable UBoot binary for RPi4 (u-boot.bin) from any distro, for instance OpenSUSE. Download the JeOS image, then open it and save u-boot.bin:

    xz -d openSUSE-Tumbleweed-ARM-JeOS-raspberrypi4.aarch64.raw.xz

    kpartx -a ./openSUSE-Tumbleweed-ARM-JeOS-raspberrypi4.aarch64.raw

    mount /dev/mapper/loop0p1 /mnt

    cp /mnt/u-boot.bin /tmp

Place u-boot.bin in the first SD card partition together with config.txt. Next time the system boots, you will get a UBoot prompt that allows you to load Xen, the Linux kernel for Dom0, the Dom0 rootfs, and the device tree from a TFTP server over the network. I automated the loading steps by placing a UBoot boot.scr script on the SD card:

    setenv serverip 192.168.0.1

    setenv ipaddr 192.168.0.2

    tftpb 0xC00000 boot2.scr

    source 0xC00000

Where:

- serverip is the IP of your TFTP server

- ipaddr is the IP of the RPi4

Use mkimage to generate boot.scr and place it next to config.txt and u-boot.bin:

    mkimage -T script -A arm64 -C none -a 0x2400000 -e 0x2400000 -d boot.source boot.scr

Where:

- boot.source is the input

- boot.scr is the output

UBoot will automatically execute the provided boot.scr, which sets up the network and fetches a second script (boot2.scr) from the TFTP server. boot2.scr should come with all the instructions to load Xen and the other required binaries. You can generate boot2.scr using ImageBuilder.
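
For illustration, an ImageBuilder run looks roughly like the following. The config variable names follow ImageBuilder’s documentation, but the specific file names and values here are placeholder assumptions to adapt to your setup:

    # config consumed by ImageBuilder's uboot-script-gen (illustrative values)
    MEMORY_START="0x0"
    MEMORY_END="0x80000000"
    DEVICE_TREE="bcm2711-rpi-4-b.dtb"
    XEN="xen"
    XEN_CMD="console=dtuart dtuart=serial1 sync_console"
    DOM0_KERNEL="Image"
    DOM0_RAMDISK="dom0-rootfs.cpio.gz"
    NUM_DOMUS=0

    # generate a boot script to serve over TFTP (rename to boot2.scr for the flow above)
    bash ./scripts/uboot-script-gen -c config -d . -t tftp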

Make sure to use Xen 4.14 or later. The Linux kernel should be master (or 5.9 when it is out; 5.9-rc4 works). The Linux ARM64 defconfig works fine as the kernel config. Any 64-bit rootfs should work for Dom0. Use the device tree that comes with upstream Linux for RPi4 (arch/arm64/boot/dts/broadcom/bcm2711-rpi-4-b.dtb). RPi4 has two UARTs; the default is bcm2835-aux-uart at address 0x7e215040. It is specified as “serial1” in the device tree instead of serial0. You can tell Xen to use serial1 by specifying it on the Xen command line:

    console=dtuart dtuart=serial1 sync_console

The Xen command line is provided by the boot2.scr script generated by ImageBuilder as “xen,xen-bootargs”. After editing boot2.source, you can regenerate boot2.scr with mkimage:

    mkimage -A arm64 -T script -C none -a 0xC00000 -e 0xC00000 -d boot2.source boot2.scr

Xen on Raspberry Pi 4: an easy button

Getting your hands dirty by building and booting Xen on Raspberry Pi 4 from scratch can be not only deeply satisfying but can also give you a lot of insight into how everything fits together on ARM. Sometimes, however, you just want to get a quick taste of what it would feel like to have Xen on this board. This is typically not a problem for Xen, since pretty much every Linux distribution provides Xen packages, and having a fully functional Xen running on your system is a mere “apt” or “zypper” invocation away. However, given that Raspberry Pi 4 support is only a few months old, the integration work hasn’t been done yet. The only operating system with fully integrated and tested support for Xen on Raspberry Pi 4 is LF Edge’s Project EVE.

Project EVE is a secure-by-design operating system that supports running Edge Containers on compute devices deployed in the field. These devices can be IoT gateways, Industrial PCs, or general-purpose ruggedized computers. All applications running on EVE are represented as Edge Containers and are subject to container orchestration policies driven by k3s. Edge containers themselves can encapsulate Virtual Machines, Containers, or Unikernels. 

You can find more about EVE on the project’s website at http://projecteve.dev and in its GitHub repo. The latest instructions for creating bootable media for Raspberry Pi 4 are available at:

https://github.com/lf-edge/eve/blob/master/docs/README.md

Because EVE publishes fully baked downloadable binaries, using it to give Xen on Raspberry Pi 4 a try is as simple as:

$ docker pull lfedge/eve:5.9.0-rpi-xen-arm64 # you can pick a different 5.x.y release if you like

$ docker run lfedge/eve:5.9.0-rpi-xen-arm64 live > live.raw

This is followed by flashing the resulting live.raw binary onto an SD card using your favorite tool. 
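
For example, with plain dd (a sketch: /dev/sdX is a placeholder for your SD card device, so double-check it before writing):

$ sudo dd if=live.raw of=/dev/sdX bs=4M conv=fsync status=progress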

Once those steps are done, you can insert the card into your Raspberry Pi 4, connect the keyboard and the monitor and enjoy a minimalistic Linux distribution (based on Alpine Linux and Linuxkit) that is Project EVE running as Dom0 under Xen.

As far as Linux distributions go, EVE presents a somewhat novel design for an operating system, but at the same time, it is heavily inspired by ideas from Qubes OS, ChromeOS, CoreOS, and SmartOS. If you want to take it beyond simple console tasks and explore how to run user domains on it, we recommend heading over to EVE’s sister project Eden: https://github.com/lf-edge/eden#raspberry-pi-4-support and following a short tutorial over there.

If anything goes wrong, you can always find an active community of EVE and Eden users on LF Edge’s Slack channels starting with #eve over at http://lfedge.slack.com/ — we’d love to hear your feedback.

In the meantime – happy hacking!

By the Time You Finish Reading This, Your Tech Job Post May Be Outdated

As the rate of technological change continues to accelerate, new tools are being developed and released so swiftly that no individual tech professional can stay on top of them all. This leads to talent gaps that can delay digital transformation. For example, a recent study found that “only 23% of organizations believe they have the talent required to successfully complete their cloud native journey.”

But how do you outline skill and experience requirements for technology that is evolving so rapidly?

How open-source software transformed the business world (ZDNet)

Steven J. Vaughan-Nichols writes at ZDNet:

Eric S. Raymond, one of open-source’s founders, said in his seminal work, The Cathedral and the Bazaar,  “Every good work of [open-source] software starts by scratching a developer’s personal itch.” There’s a lot of truth to that. Vital programs such as the Apache web server, MySQL, and Linux began that way and numerous smaller programs did too. But it’s not likely many people had a personal itch to create giant vertical programs such as telecommunications’ OpenDaylight and OPNFV or Automotive Grade Linux (AGL)’s Unified Code Base. Today, vertical companies focused on narrow interests also embrace open-source methods and software with open arms.

Read more at ZDNet

Software-defined vertical industries: transformation through open source

“When I say that innovation is being democratized, I mean that users of products and services (both firms and individual consumers) are increasingly able to innovate for themselves. User-centered innovation processes offer great advantages over the manufacturer-centric innovation development systems that have been the mainstay of commerce for hundreds of years. Users that innovate can develop exactly what they want, rather than relying on manufacturers to act as their (often very imperfect) agents.”  — Eric von Hippel, Democratizing Innovation

Overview

What do some of the world’s largest, most regulated, complex, centuries-old industries such as banking, telecommunications, and energy have in common with rapidly developing, bleeding-edge creative industries such as motion pictures?

They’re all dependent on open source software. 

That answer is correct, but it doesn’t tell the whole story. These industries not only depend on open source; they’re building open source into the fabric of their R&D and development models, and they all depend on the speed of innovation that collaborating in open source enables.

As a recent McKinsey & Co. report described, the “biggest differentiator” for top-quartile companies in an industry vertical was “open source adoption,” where they shifted from users to contributors. The report’s data shows that top-quartile companies’ adoption of open source has three times the impact on innovation compared with companies in other quartiles.

Over the last 20 years, the Linux Foundation has expanded from a single project, the Linux kernel, to hundreds of distinct project communities. The “foundation-as-a-service” model developed by the Linux Foundation supports communities collaborating on open source across key horizontal technology domains, such as cloud, security, blockchain, and the web.

However, many of these project communities align across vertical industry groupings, such as automotive, motion pictures, finance, telecommunications, energy, and public health initiatives. They may have started as individual efforts looking for a neutral home at the Linux Foundation. Still, over time these communities found it useful to collaborate as the organizations supporting the projects expanded their collaboration to other areas.

This paper will delve into the major vertical industry initiatives served by the Linux Foundation. We will highlight the most notable open source projects and why we believe these key industry verticals, some over 100 years old, have transformed themselves using open source software.


Free Intro to Linux Course Surpasses One Million Enrollments

The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced that its Introduction to Linux training course on the edX platform, currently in its sixth edition, has surpassed one million enrollments. The course helps students develop a good working knowledge of Linux, using both the graphical interface and the command line, across the major Linux distribution families. No prior knowledge or experience is required, making the course a popular first step for individuals interested in pursuing a career in IT.

Challenges and Trends of Cloud Infrastructure: A Q&A with Ying Xiong, Cloud Lab, Futurewei Technologies, Inc.

Ahead of Open Networking & Edge Summit 2020 (being held virtually next week, September 28-30), Linux.com hosted a Q&A with Ying Xiong of Futurewei, a Diamond Sponsor of ONES 2020, in which he discussed the challenges and trends of cloud infrastructure in the enterprise digital transformation journey and for new types of workloads such as AI, 5G, and IoT apps.

We hope you enjoy the interview! If you are interested in attending Open Networking & Edge Summit 2020, where you can learn more about the future of Networking, Edge and Cloud, click here to register for just US$50: https://bit.ly/32F8LXX. View the full schedule here: https://bit.ly/33Ct4Vh

Linux.com: Tell us a bit about your open source journey in Networking, Edge, and Cloud, and specifically help people understand how Futurewei operates independently from Huawei.

Ying Xiong: At the Futurewei cloud lab, we are actively involved in open source communities and contribute to many open source projects, including Kubernetes and KubeEdge, Akraino Edge Stack, Cloud Foundry, and OpenStack. We have attended CNCF conferences, Open Source Summit, Embedded Linux conferences, and Cloud Foundry Summit almost every year since 2015, and have delivered keynotes and session talks at many of these conferences and summits. Individually, some of us have served as board members of LF, CNCF, and LF Edge, as well as the OpenStack Foundation. Currently, Futurewei is an independent member of LF, CNCF, and LF Edge.

Linux.com: Digital Transformation and Cloud Infrastructure are two important topics being discussed in the community. Please tell us some key challenges you see in these areas.

Ying Xiong: In today’s digital transformation journey, cloud infrastructure and services have been established as the core of enterprise IT. More and more enterprises are leveraging cloud computing technologies to accelerate their business innovations, whether by migrating their applications and data to a public cloud, building their own private cloud, or using a hybrid cloud model. The rise of the emerging 5G, AI, Edge Computing, and IoT application landscape offers cloud computing further exciting opportunities, as well as challenges, in meeting today’s and tomorrow’s enterprise digitization needs. The following is a list of challenges and trends we’ve observed facing enterprises and the cloud technologies themselves:

  • As more and more applications move to the cloud, there is an increasing demand for cloud infrastructure to manage an ever-growing pool of compute nodes at scale, and to provision and deploy ever-increasing workloads with consistent speed.

This challenge has been driving the development and/or optimization of distributed cluster management platforms, new cloud networking solutions, and lightweight virtualization technologies such as containers and serverless. Current and future compute cluster management platforms will be continuously challenged to manage 100K+ compute nodes in a cluster and to provision and start up hundreds or even thousands of application instances within a minute. There is very limited support for extremely scalable networking in the virtualized cloud environment, primarily because contemporary cloud network virtualization solutions are still cobbled together on top of age-old static networking designs. Such solutions are incapable of provisioning and managing 10M+ dynamic network endpoints in the cloud.

  • Both cloud providers and enterprises have been asking for a “unified” resource management and orchestration capability, a single pane of glass, in order to support managing heterogeneous resource types (bare metal, VMs, containers, serverless, unikernels, etc.) seamlessly.

Modern cloud-native applications are mostly designed for scale-out architectures that are best suited to containerized environments. But a typical enterprise cloud environment isn’t just about containers, as containers may not be appropriate for all enterprise workloads and use cases. Most enterprises still run a large number of legacy apps on bare metal and traditional VM environments. As a result, the future cloud infrastructure needs to be a “unified” platform in order to meet this challenge and, at the same time, reduce management costs for both cloud providers and enterprise customers.

  • With the convergence of traditional cloud computing and edge computing, and the emergence of new types of workloads such as 5G, AI, and IoT applications, customers and cloud infrastructure platforms are being challenged to manage not only data center resources but also edge compute nodes, to support the new types of distributed applications that span data centers and edge sites.

The current open source cloud platforms mostly treat Edge and AI as an afterthought. A new open source cloud platform needs to be architected with Edge as part of the overall architecture from day one. For example, AI modeling can be done in the cloud, while AI inferencing can be done on the Edge, connecting to billions of IoT devices and sensors on 5G-speed networks. Cloud-edge computing, combined with the optimized latency of 5G core processing, can reduce round-trip time by up to two orders of magnitude in situations where there is tight control over all parts of the communication chain. This has enabled a brand-new class of intelligent cloud applications in areas such as industrial robot/drone automation, V2X, and AR/VR infotainment, along with associated innovative business models.

  • Hybrid cloud and multi-cloud trends have become the cornerstone of enterprise cloud strategy, and application portability across clouds has become a requirement for many companies. Open APIs and compatibility with the industry cloud ecosystem challenge the development of the new generation of cloud infrastructure technology.

Linux.com: What are the key technology building blocks you envision to help accelerate the journey of telecom and cloud service providers?

Ying Xiong: Given the challenges and trends I mentioned above, we believe that, as an industry and an open source community, there is a need to build the next-generation open source, hyper-scale, unified cloud infrastructure: one that works with existing cloud technologies and APIs, and can help enterprises, as well as cloud providers, meet continuously growing technology challenges. We believe the following technology building blocks will help accelerate cloud service providers’ journeys, including the Telecom cloud.

  • Unified infrastructure — Provision and manage cloud resources such as VMs, containers, bare metal, and serverless compute units. A single infrastructure platform allows cloud providers to simplify cloud compute and network management and significantly reduce management costs. It also accelerates the development and management of new cloud services.
  • True multi-tenant and strongly isolated cloud – Provide trusted computing to both customers and service providers. This building block, including hardware isolation technologies such as SGX, is especially important for the future of cloud computing.
  • Hyper-scale cloud networking – Provide fast, large-scale provisioning and management of virtual networks, such as VPCs and subnets, and network endpoints for cloud applications and services. The cloud network is currently the bottleneck to a highly scalable, high-performance cloud for many providers. It is one of the basic and critical building blocks for service providers that need millions of virtual networks provisioned within a region.
  • Distributed cloud-edge infrastructure – Extend traditional cloud computing to the edge, providing capabilities to provision and manage compute and network resources and workloads at edge nodes that are closer to customers and customer data. Sometimes we call this the distributed cloud; it supports new types of distributed applications such as AI, 5G, and IoT apps.
  • Intelligent cloud infrastructure – We believe future cloud technologies will increasingly build intelligence into the infrastructure to better serve and manage new types of applications while increasing resource utilization for operators. For example, intelligent scheduling and/or placement of workloads between cloud and edge, to achieve a better user experience with extremely low latency, is increasingly important in building new cloud infrastructure.

Linux.com: Can you highlight a few open source projects that help resolve some of the challenges you have outlined?

Ying Xiong: The open source cloud, built with open source technologies such as OpenStack and Kubernetes, has led the way in cloud computing innovation, and we have seen more and more companies leveraging these technologies to accelerate their business innovations. At the same time, as we discussed previously, new types of applications and/or workloads pose new challenges to cloud platforms.

One of our most recent key initiatives is the Centaurus open source project, which aims to address some of the challenges I mentioned earlier. The project is a cloud infrastructure platform that can be used to build public or private clouds. It unifies the orchestration, network provisioning, and management of cloud compute and network resources at a regional scale. It offers the same API experience to provision and manage virtual machines, containers, serverless, and other types of cloud resources. Centaurus combines the traditional IaaS and PaaS layers into one infrastructure platform that can simplify cloud management and reduce cloud providers’ management costs.

The Centaurus project currently includes the following two open source projects:

  • Arktos is a compute cluster management system designed for large-scale clouds. It evolved from Kubernetes and addresses key challenges such as scalability, hard multi-tenancy, and a unified runtime, taking cloud-native infrastructure to the next level.
  • Mizar is an open source, high-performance cloud network powered by eXpress Data Path (XDP) and the Geneve protocol for a highly scalable cloud. It is a simple and efficient solution that lets you create a multi-tenant overlay network of many endpoints with extensible network functions.

Linux.com: What is Project Centaurus trying to solve? What is the status and where can people find more information?

Ying Xiong: The vision of the Centaurus open source project is to build a unified and large-scale distributed cloud infrastructure platform meeting the challenges discussed in the previous sections. With innovations in high-performance cloud network solutions, unified runtime environment, and hyper-scale cluster management, Centaurus is designed to meet the infrastructure requirements for the new types of cloud workloads such as 5G, AI, Edge, and IoT applications.  Specifically, the Centaurus project is trying to achieve:

  • Unified infrastructure for managing various cloud resources (such as VMs, containers, serverless, bare-metal machines, and others) natively.
  • High-performance cloud network data plane for extremely low latency network traffic forwarding and routing in the cloud.
  • Hyper-scale compute cluster management, supporting 50K+ compute nodes in a single cluster and 10M+ network endpoints provisioned in a region.
  • Native support for the edge cloud: a cloud extension to manage compute and network resources at edge sites from the cloud.

We would like to invite the open source community to join us in realizing the vision of the Centaurus project and building an ecosystem for the benefit of open source communities. You can find more information, including project documentation and relevant collateral (white paper, blogs, etc.), on the Centaurus website at https://www.centauruscloud.io/. There are currently two sub-projects under the Centaurus project, Arktos and Mizar, which are already open source with a few releases.

Linux.com: How is this project complementary to projects under the CNCF, LF Edge, or LF Networking umbrellas?

Ying Xiong: We are aiming to launch Centaurus as an independent project under The Linux Foundation, since it is trying to solve a different set of challenges than other cloud computing projects in the LF. That being said, we are still looking at potential options and trying to find the best place to donate and host the Centaurus project, one that can deliver maximum benefit to the open source communities and the industry.

Technically, as you may see, Centaurus has compute, network, and edge components and focuses on a complete IaaS+ platform. In contrast, CNCF focuses on container orchestration, LF Edge on edge infrastructure, and LF Networking on network architecture and solutions. However, Centaurus is designed with a cloud-native architecture, and its components are independent projects that can be used on their own with other cloud technologies. Vice versa, we welcome and expect that components from projects in CNCF, LF Edge, LF Networking, and other open source foundations can be plugged into Centaurus as well.

Linux.com: Anything else you want to add to help grow participation and support? 

Ying Xiong: As a quick recap, Centaurus is an open source distributed cloud-native infrastructure umbrella project for the 5G, AI, and Edge era. Centaurus currently includes two core open source projects: a compute project (Arktos) and a networking project (Mizar).

With the open source community’s participation and support, the Centaurus platform can offer enterprises the hyper-scale and unified management capabilities that will dramatically change the economics of enterprise IT.

We hope the information we have provided here helps pique community interest. We invite all open source community members to join us in making Centaurus a viable open cloud infrastructure platform for the future of the enterprise IT digitization journey. It is still early days for Centaurus, and we hope the community will join us and make it a reality. A neutral home for the Centaurus project under the umbrella of the Linux Foundation, the most popular open source foundation, will garner tremendous interest from the open source community. We look forward to making all this a great success for the community as a whole.

Extracting kernel stack function arguments from Linux x86-64 kernel crash dumps

This blog post covers in detail how to extract stack function arguments from kernel crash dumps.
Read more at Oracle Linux Kernel Development