
Bringing COBOL to the Modern World

COBOL powers most of the critical infrastructure that involves any kind of monetary transaction. In this special interview conducted during the recent Open Mainframe Summit, we talked about the relevance of COBOL today and the role of the new COBOL working group that was announced at the summit. Joining us were Cameron Seay, Adjunct Professor at East Carolina University, and Derek Britton of the Application Modernization group at Micro Focus. Micro Focus recently joined the Open Mainframe Project and is now also involved with the working group.

Here is an edited version of the discussion:

Swapnil Bhartiya: First of all, Cam and Derek, welcome to the show. If you look at COBOL, it’s very old technology. Who is still using COBOL today? Cam, I would like to hear your insight first.

Cameron Seay: Every large commercial bank I know of uses COBOL. Every large insurance company, every large federal agency, every large retailer uses COBOL to some degree, and it processes a large percentage of the world’s financial transactions. For example, if you go to Walmart and you make a sale, that transaction is probably recorded using a COBOL program. So it’s used a lot; a large percentage of global business is still done in COBOL.

Swapnil Bhartiya: Micro Focus is, I think, one of the few companies that offer support around COBOL. Derek, please tell people about the importance of COBOL in today’s modern world.

Derek Britton: Well, if we go back in time, there weren’t that many choices on the market. If you wanted robust technology to build your business systems, COBOL was one of the very few choices, and so it’s surprising that, when there are so many choices around today, many of the world’s largest industries and organizations still rely on COBOL. Think of all the systems that people trust and rely on — whether you’re moving money around, running someone’s payroll, getting an insurance quotation, shipping a parcel, or booking a holiday. All of these things are happening with COBOL at the back end, and the value you’re getting is not just that it has carried on, but that it runs with the same results again and again and again, without fail.

The importance of COBOL is not just its pervasiveness, which I think is significant and perhaps not that well understood, but also its reliability. Because it’s welded very closely to mainframe environments, to CICS, and to some other core elements of the mainframe and other platforms as well, it uses and trusts a lot of technology that is unrivaled in terms of reliability, scalability, and performance. That’s why it remains so important to the global economy and to so many industries. It does what it needs to do, which is business processing, so fantastically well.

Swapnil Bhartiya: Excellent, thanks for talking about that. Now, you guys recently joined the project and the foundation as well, so talk about why you joined the Open Mainframe Project and what are the projects that you will be involved with, of course. I know you’re involved with the working group, but talk about your involvement with the project.

Derek Britton: Well, our initial interest with the Open Mainframe Project goes back a couple of years. We’re longtime proponents of the mainframe platform, of course, here at Micro Focus. We’ve had a range of technologies that run on z/OS. But our interest in the wider mainframe community—and that would be the Open Mainframe Project—probably comes as a result of the time we’ve spent with the SHARE community and other IBM-sponsored communities, where the discussion was about the best way to embrace this trusted technology in the digital era. This is becoming a very topical conversation and that’s also true for COBOL, which I’m sure we’ll come back to.

Our interest in the OMP has been building for the last couple of years, and we were finally able to reach an agreement between both organizations to join this year, driven by a number of initiatives that we have going on at Micro Focus and that a number of our customers have talked to us about, particularly in the area of mainframe DevOps. As vital as the mainframe platform is, there’s a growing desire to use it to deliver greater and greater value to the business, which typically means trying to accelerate delivery cycles and get more done.

Of course, the mainframe is now so inextricably connected with other parts of the IT ecosystem that those points of connection and the number of moving parts have to be handled, integrated with, and managed as part of a delivery process. It’s an important part of our customers’ roadmap and, therefore, our roadmap to ensure that they get the very best of technology in the mainframe world, whether it’s tried-and-trusted technology, new emerging vendor technology, or, in many cases, open source technology. We wanted to play our part in those kinds of projects and the initiatives around them.

Swapnil Bhartiya: Is there an increase in interest in COBOL now that there is a dedicated working group? And can you also talk a bit about what the role of this group will be?

Cameron Seay: If your question is whether there is an increased interest in COBOL because of the working group, the working group actually came about as a result of a renewed interest in, and rediscovery of, COBOL. The governor of New Jersey made a comment that their unemployment claims could not be processed because of COBOL’s obsolescence, or inefficiency, or inadequacy to some degree. And that sparked quite a furor in the mainframe world because it wasn’t COBOL at all. COBOL had nothing to do with New Jersey’s inability to deliver the unemployment checks. Further, we’re aware that New Jersey is typical of every state. Every state that I know of—there may be some exceptions I’m not aware of, but I know it’s certainly true for California and New York—is dependent upon COBOL to process its day-to-day business applications.

So, then Derek and some other people inside the OMP got together and started having some conversations, myself included, and said “We maybe need to form a COBOL working group to renew this interest in COBOL and establish the facts around COBOL.” So that’s kind of what the working group is trying to do, and we’re trying to increase that familiarity, visibility and interest in COBOL.

Swapnil Bhartiya: Derek, I want to bring the same question to you also. Is there any particular reason that we are seeing an increase in interest in COBOL and what is that reason?

Derek Britton: Yeah, that’s a great question, and I think there are a few reasons. First of all, I think a really important milestone for COBOL was actually last year, when it turned 60 years old. One of your earlier questions related to COBOL’s age being 60. Of course, COBOL isn’t a 60-year-old language, but the idea is 60 years old, for sure. If you drive a 2020 motor car, you’re driving a 2020 motor car; you’re not driving a hundred-year-old idea. No one thinks a modern telephone is an old idea, either. It’s not old technology.
The idea might’ve been from a long time ago, but the technology has advanced, and the same is true of COBOL. When we celebrated COBOL’s 60th anniversary last year—a few of the vendors did, and a number of organizations did, too—there was an outpouring of interest in the technology. A lot of the time, COBOL just quietly goes about its business of running the world’s economy without any fuss. Like I said, it’s very, very reliable and it never really breaks, so it was never anything to talk about. People were pleasantly surprised, I think, to learn of its age, to learn of the age of the idea. Now, of course, Micro Focus and IBM and some of the other vendors continue to update and adapt COBOL so that it continues to evolve and be relevant today.

It’s actually a 2020 technology rather than a 1960 one, but that was the first reason. Secondly, the pandemic forced a lot of businesses to change how they process core systems and how they interact with their customers. That put extra strain on certain organizations and government agencies and, in a couple of cases, COBOL was incorrectly made the scapegoat for some of the challenges those organizations faced, whether it was a skills issue or a technology issue. Under the covers, COBOL was working just fine. So the interest around the anniversary has been positive, but I think the reports have been inaccurate and perhaps a little unkind about COBOL. Those two reasons came together.

I remember when I first spoke to Cam and some of the other people on the working group, we agreed it was a very good idea, once and for all, to tell the truth about COBOL, so that the industry finally understood how viable and valuable it is, based on the facts behind COBOL’s usage. So one of the things we’re going to do is try to quantify and qualify, as best we can, how widely COBOL is used, what it is used for, and who is using it, and then present a more factual story about the technology so people can make a more informed decision about technical strategy. That’s better than basing it on hearsay or some reputation about it being a bit rusty and out-of-date, which is probably a reputation espoused by someone who would have you replace it with something else, and whose motivation might be for different reasons. There’s nothing wrong with COBOL; it’s very, very viable, and our job, I think, really is to tell that truth and make sure people understand it.

Swapnil Bhartiya: What other projects, efforts, or initiatives are going on there at the Linux Foundation or Open Mainframe Project around COBOL? Can you talk about that?

Cameron Seay: Well, certainly. Folks in the community are currently developing an online course in COBOL. It covers the rudiments and is aimed at novices, but it’s great for a continuing education program. So, that’s one of the things going on around COBOL. Another thing is there’s a lot going on in mainframe development in the OMP now. There’s an application framework that has been developed called Zowe that will allow you to develop applications for z/OS. It’s interesting that the focus of the Open Mainframe Project when it first began was Linux on the mainframe, but actually the first real project that came out of it was a z/OS-based project, Zowe, and so we’re interested in that, too. Those are just a couple of peripheral projects that the COBOL working group is going to work with.

There are other things we want to do from a curriculum standpoint down the road, but fundamentally, we just want to be a fact-finding, fact-gathering operation first. Derek Britton has been taking the leadership in putting together a substantial reference list so that we can get the facts about COBOL. Then we’re going to do other things, but we want to get that right first.

Swapnil Bhartiya: So, as you mentioned, there are a couple of projects. Is there any overlap between these projects, or how different are they? Do they all serve a different purpose? When you explain the goal and role of the working group, it sounds like it’s also a training or education group with the same kind of activities. Let me rephrase it properly: what are some of the pressing needs you see for the COBOL community, how are these efforts and groups trying to help them, and how are they not overlapping or stepping on each other’s toes?

Cameron Seay: That’s an ongoing thing. Susharshna and I really work hard to make sure that we’re not working at cross purposes or duplicating effort. We’re kind of clear about our roles. For the world at large, for the public at large, the working group—and Derek may have a different view on this because we all don’t think alike, we all don’t see this thing exactly the same—but I see it as information first. We want people to get accurate, current information about COBOL.

Then, we want to provide some vehicle for COBOL to be reintroduced into the general academic curriculum, because it used to be there. I studied COBOL at a four-year university. Most people who took programming in the ’80s and the ’90s took COBOL, but that’s not true anymore. Our COBOL course at East Carolina this semester is the only COBOL course in the entire UNC system. That’s got to change. So: accurate information, exposure, and some kind of return to the general curriculum. Those are the three things that we can provide to the community at large.

Swapnil Bhartiya: If you look at Micro Focus, you are working in the industry, you are actually solving the problem for your customers. What role do these groups or other efforts that are going on there play for the whole ecosystem?

Derek Britton: Well, if we go back to Cam’s answer, I think he’s absolutely right. If you project forward another generation, 25 years from now, who is going to be managing these core business systems that still run the world’s largest organizations? I know we’re in a digital era and I know that things are changing at an unprecedented pace, but most of the world’s largest and most successful organizations still want to be in those positions in generations to come. So who is it? Who are the practitioners coming through the education system right now who are going to be leaders in those organizations’ IT departments in the future?

And there is a concern not just for COBOL but for many IT skills across the board. Is there going to be enough talent to actually run the organizations of the future? That is a real question mark for COBOL, too. Micro Focus has its own academic initiative and its own training program, as do IBM and many of the other vendors, and we all applaud the work of community groups. The OMP is obviously a fabulous example because it is genuinely an open group; it’s a meritocracy of people with good ideas coming together to try to do the right thing. We applaud the efforts to ensure that there continues to be a sufficient supply of talented IT professionals in the future to meet the ever-growing demand. IT is not going away. It’s going to become strategically more and more important to these organizations.

Our part to play at Micro Focus is really to work shoulder-to-shoulder with organizations like the OMP because, between us, we will create enough groundswell of training and opportunity for that next generation. Many people will tell you there just isn’t enough of that training going on and there aren’t enough of those opportunities available, even though one survey that Micro Focus ran last year, on the back of COBOL’s 60th anniversary, found that around 92% of application owners of COBOL systems confirmed that those applications remain strategic to their organization. So, if the applications are not going anywhere, who’s going to be looking after them in the next generation? That’s the real challenge that I think the industry faces as a whole, which is why Micro Focus is so committed to get behind the wheel of making sure that we can make a difference.

Swapnil Bhartiya: We discussed that the interest in COBOL is increasing as COBOL plays a very critical role in the modern economy. What kind of future do you see for COBOL and where do you see it going? I mean, it’s been around for 60 years, so it knows how to survive through the times. Still, where do you see it going? Cam, I would love to start with you.

Cameron Seay: Yeah, absolutely. We are trying to estimate how much COBOL is actually in use, and that estimate runs into hundreds of billions of lines of code. I know that, for example, Bank of America admits to at least 50 million lines of COBOL code. That’s a lot of COBOL, and you’re not going to replace it over time; there’s no reason to. So the solution to this problem, and this is what we’re going to do, is to figure out a way to teach people COBOL. It’s not a complex language to learn. Any organization that sees a lack of COBOL skills as an impediment and a justification to move to another platform is [pursuing] a solution that is not feasible. If they try to do that, they’re going to fail because there’s too much risk and, most of all, too much expense.

So, we’re going to figure out a way to begin to teach people COBOL again. I do it; I teach a COBOL class at East Carolina. That is a solution to this problem because the code’s not going anywhere, nor is there a reason for it to go anywhere. It works! It’s a simple language, it’s as fast as it needs to be, it’s as secure as it needs to be, and no one that I’ve talked to, computer scientists all over the world, can give me an application where any other language is going to work better than COBOL. There may be some that work as well or nearly as well, but you’re going to have to migrate, and there’s no improvement you can make on these applications from a performance standpoint or a security standpoint. The applications are going to stay where they are, and we’re just going to have to teach people COBOL. That’s the solution; that’s what’s going to happen. How and when, I don’t know, but that’s what’s going to happen.

Swapnil Bhartiya: If you look at the crisis we are going through, almost every business is moving online to the cloud. All those transactions that people used to do in person are moving online, so it has become critical. From your perspective, what kind of future do you see?

Derek Britton: Well, that’s a great question, because the world is a very, very different place from when these architectures were designed, however long ago that was. Companies today are not using that architecture, so there is some question mark there about COBOL’s future. I agree with Cam. Anyone that has COBOL is not necessarily going to be able to throw it away anytime soon. Frankly, it might be difficult, it might be easy, but that’s not really the question, is it? Is it a good business decision? The answer is it’s a terrible business decision to throw it away.

In addition to that, I would contend that there are a number of modern-day digital use cases where actually the usage of COBOL is going to increase rather than decrease. We see this all the time with our larger organizations who are using it for pretty much the whole of the backend of their core business. So, whether it’s a banking organization or an insurer or a logistics company, what they’re trying to do obviously is find new and exciting business opportunities.

But they will be basing those on the core business systems that already run most of the business today, and then trying to use that to adapt, to enhance, to innovate. There are insurers who are selling their insurance quotation system to other, smaller insurers as a service. Now, of course, the version they offer as a service probably isn’t quite as quick as the one that runs on their mainframe, but they’re making it available to other organizations. Banking organizations are doing much the same thing with a range of banking services, maybe payment systems. These are all services that can be provided to other organizations.

The same is true in the ISV market, where really robust COBOL-based financial services packages and ERP systems have been made available as cloud-based, as-a-service offerings or on other platforms to meet new market needs. The thing about COBOL that few people understand is that not only is it easy to learn, it’s easy to move somewhere else. So, if your client is now running Linux and says, “Well, now I want to run these core COBOL business systems there, too,” or maybe they’ve moved to AIX on a Power system, the same COBOL system can be reused and replicated as necessary, which is a little-known secret about the language.

This goes back to the original design, of course. Back in 1960, there was no such thing as a “standard platform.” There wasn’t a single platform that you could reasonably rely on to give you a decent answer, not very quickly anyway. So, in order to know that COBOL worked, you had to get the same results when the same program was compiled and run on different machines, the same result at the same speed, and that’s when the portability of the language came to life. That’s what they set out to do; it was built that way by design.

Swapnil Bhartiya: Cam, Derek, thank you so much for taking the time out today to talk about COBOL and how important it is in today’s world. I’m pretty sure that over the course of a day, some of the activities we do online touch COBOL or are powered by COBOL.

The one-millionth commit: The search for the lucky Linux kernel contributor

This week has been “a week of millions” for the Linux Foundation, with our announcement that over 1 million people have taken our free Introduction to Linux course. As part of the research for our recently published 2020 Linux Kernel History Report, the Kernel Project itself determined that it had surpassed one million code commits. Here is how we established the identity of this lucky Kernel Project contributor. 

Methodology:

The historical BitKeeper repo (converted to Git) has 63,428 commits. Since 1,000,000 - 63,428 = 936,572, we then found the merge at which Linus Torvalds’ repo reached at least 936,572 commits.

At commit 92c59e126b21fd212195358a0d296e787e444087 the repo had 936,456 commits (999,884 with the BitKeeper history included, 116 shy of one million):

> git checkout 92c59e126b21fd212195358a0d296e787e444087

> git log --oneline | wc

 936456 7483489 62991540


The next merge, 2f3fbfdaf77f3ac417d0511fac221f76af79f6fc, passed that number with 937,105 commits:

> git checkout 2f3fbfdaf77f3ac417d0511fac221f76af79f6fc

> git log --oneline | wc

 937105 7489456 63037625

So on merge 2f3fbfdaf77f3ac417d0511fac221f76af79f6fc Linus’ repo passed the 1M mark (to be precise, 1,000,533 including BitKeeper commits):

commit 2f3fbfdaf77f3ac417d0511fac221f76af79f6fc 92c59e126b21fd212195358a0d296e787e444087 f510ca05271b6f71bd532fe743b39f628110223f (HEAD)

Merge: 92c59e126b21 f510ca05271b

Author: Linus Torvalds <torvalds@linux-foundation.org>

Date:   Mon Aug 3 19:19:34 2020 -0700


Merge tag 'arm-dt-5.9' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc

At this point, we can simply list the 936,572nd commit in the log:

> git log --oneline | tail -936572 | head -1

85b23fbc7d88 x86/cpufeatures: Add enumeration for SERIALIZE instruction

And the committer is…

> git log -1 85b23fbc7d88

commit 85b23fbc7d88f8c6e3951721802d7845bc39663d

Author: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>

Date:   Sun Jul 26 21:31:29 2020 -0700

    x86/cpufeatures: Add enumeration for SERIALIZE instruction
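
For reference, the whole lookup can be condensed into a short shell sketch (assuming a local clone of Linus’ tree and the BitKeeper commit count quoted above):

# Subtract the BitKeeper-era commits from 1,000,000 to get the target index
# in Linus' tree, then print that commit, counted from the start of history.
BK_COMMITS=63428
TARGET=$((1000000 - BK_COMMITS))   # 936572
git log --oneline | tail -n "$TARGET" | head -n 1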

Ricardo’s momentous commit to the Kernel was to add enumeration support for the SERIALIZE instruction, supported in Intel’s forthcoming Sapphire Rapids and Alder Lake microarchitectures for their 10-nanometer server and workstation chips. Ricardo is a software engineer who has been working on Linux feature support for Intel’s microprocessors for 12 years as part of the company’s CPU enabling team.

For more about Intel Corporation’s Ricardo Neri, the one-millionth Linux Kernel code committer, please read and watch our interview, conducted by Swapnil Bhartiya on Linux.com.

Meet the contributor of the 1-millionth commit: Ricardo Neri

August was a historic month for Linux. The largest open source project on the planet enjoyed its one-millionth code commit. The honor goes to Ricardo Neri, a Linux kernel engineer at Intel. Swapnil Bhartiya, founder and host at TFiR, sat down with Neri on behalf of the Linux Foundation to discuss Neri’s journey and involvement with the Linux kernel community.

A lightly edited transcript of the interview:

Swapnil Bhartiya: Hi, this is Swapnil Bhartiya, on behalf of the Linux Foundation, and today we have with us Ricardo Neri, Linux Software Engineer at Intel, whose code contribution became the one-millionth contribution to the Linux kernel.

Ricardo Neri: Hi, thank you. Thank you very much.

Swapnil Bhartiya: Ricardo, tell us a little bit about yourself, your journey. When was the first time you came in contact with open source or Linux in general?

Ricardo Neri: That was, I think, in 2008. It was around the time the iPhone came out. At the time I used to work on Symbian, but because the iPhone came out, Symbian died. So I was transferred to a new team, which was working on audio drivers for Linux and the VS. So, maybe by chance I landed on that team, and that’s how I started 12 years ago.

Swapnil Bhartiya: You started contributing to the kernel as part of your organization, but you had personal interactions with the kernel community. How was that interaction?

Ricardo Neri: It was very daunting because I had heard that it was really hard to convince maintainers to take your code. And also, I don’t know, maybe intimidating, because the people in the community were very smart and they had strong opinions about various things. So yeah, maybe I’d say it was intimidating but exciting at the same time.

Swapnil Bhartiya: How have you seen the community itself evolve over time?

Ricardo Neri: Just building on my previous comment, I saw at the time that maintainers care deeply about the quality of the code, and that may have driven them to make harsh comments on people’s code. And maybe that was a barrier for new people to start contributing. But I have seen a change in the last years: with a new code of conduct and agreed-upon rules, people who are hesitant or not so sure about the quality of their code can just put it out there and not get as harsh a reply as they might have in the early years when I joined the open source community. I think that is a change that I have observed. Another change that I have observed is that more companies are now embracing open source. In the early days, the industry was still dominated by closed source software, but now I have seen companies building more and more business models around open source software, where the value of the product is not the software itself, but the things that you do with it.

Swapnil Bhartiya: What is interesting is that the contributions to the kernel are coming from all around the globe. You don’t have to be in a specific place to become part of the project. So, what role do you think Linux has played in democratizing software development, where you don’t have to prove yourself before you get involved? You send a patch. If the patch is good, they will take it. If it’s not good, they will not take it. They don’t have to look at your resume or CV to see whether you have done any work before. So how much of a role has Linux played in democratizing software development itself?

Ricardo Neri: Yeah, I think it has played a big role because, as you said, you don’t have to have a college degree or a computer science degree to start contributing, because the currency, as you say, is the quality of the code. I myself am not a computer scientist or a software engineer; my background is electrical engineering. So probably I can be a good example of that. I didn’t need to go to college for five years and study computer science to start contributing. Anyone with the interest to learn and to do something can start contributing. I am not the only example. There are other people who have a biology degree and have now become key contributors to Linux.

You can just go to the Linux kernel mailing list, read all the patches, and maybe contribute your own reviews. And maybe you start sending your own patches. All you need is essentially a workstation with the compiler and the source code. You can find a bug or an improvement, and you can just do it. You don’t need anything more than that.

Swapnil Bhartiya: Yeah. I fully agree with you. Have you attended any of these Linux Plumbing or any other conferences and events?

Ricardo Neri: Yes, actually I was just attending the Linux Plumbers Conference a few hours ago. I was in the power management micro-conference. Yes, and in previous years I have also been attending Open Source Summit, which used to be LinuxCon.

Swapnil Bhartiya: When you interact with the kernel community over email, it is a bit daunting and you feel intimidated because you don’t know how they will respond to the patch. But when you go and meet these developers in person, when you sit down for breakfast or for a beer in the evening, you suddenly find that they are as human as we are. So, when you meet them in person, how do the chemistry, the trust, and the relationship change?

Ricardo Neri: Yeah, that is very true. Because, as you said, if you interact with these people only through the mailing list, you can only see words without any context of it. And as you said, this is prone to misinterpretation on both sides. But as equally as you said, when you meet with them, maybe in a virtual event or in person, you see that they are actually friendly. They do care about the quality of the code, but they are approachable and friendly in my experience. And that is also the experience that I have heard from all of my coworkers, who are also new to this community. They have similar feedback as I do.

Swapnil Bhartiya: Do you have any interesting anecdotes to share from any of these events, like, “Hey, I met that person,” or “We were debating for months over a patch; we sat down and suddenly we saw the solution”? Any interesting story you’d like to share?

Ricardo Neri: I noticed just in this Plumbers Conference this year that discussing things over the mailing list can take time, because you need to put your comments in written form, then wait for the answer, and so forth, and go through several iterations of that process. But if you sit down in a room or in a virtual room, the conversation is more fluid and faster. You can arrive at conclusions or designs or agreements that would otherwise take maybe weeks or a month to reach on the mailing list. So, yeah, I think I have noticed that.

Swapnil Bhartiya: Let’s talk about your contribution. What was the code contribution that became the historic one-millionth contribution?

Ricardo Neri: That is related to the work that I do at Intel, where I am part of the CPU enabling team. Whenever Intel comes up with a new feature in the processor, our team is responsible for taking that new feature and making it consumable by the Linux kernel. In this particular case, it is for a new instruction called SERIALIZE, which essentially serializes the execution of code. It puts a marker at which all execution before that instruction must complete before the code after it starts executing. And that solves problems we had in the past. For instance, you can achieve the same goal using an instruction called CPUID or a return from interrupt, but those instructions have certain side effects and can also carry a performance penalty. So the SERIALIZE instruction allows you to divide the execution of code without those side effects that you would otherwise need to fix up in software. It helps make the software simpler, and there is a performance benefit as well.

Swapnil Bhartiya: Do you contribute code in your capacity as an Intel engineer, or do you also contribute some code in your free time as well?

Ricardo Neri: Right now, I am only contributing code in my capacity as an Intel engineer.

Swapnil Bhartiya: The reason I ask is that in the early days of open source, most contributions came from people working in their spare time, but today a majority of contributors are paid by companies to do that work. Working on open source is no longer a part-time hobby. How have you seen this change, where you get paid to work on open source?

Ricardo Neri: That’s very true. As I was mentioning earlier, companies have now found ways to build business models around open source software. A good example is Red Hat, where the software is free, but they build their business around the software without regarding the software itself as the product; it’s a vehicle to deliver value to their customers. And the same is true for semiconductor companies such as Intel, which are in the business of selling computer chips. But today, you cannot just sell the chip. You also need to provide a full solution to the customer, and that, of course, includes the software. And that is also true for other companies that have been able to build business models around open source software.

In my early days when I was new to Linux, I had many, many colleagues who were in Linux because they believed in it. They believed in the value of open source software. Then they happened to stumble on a job where they were paid to do the things they believed in. I remember them giving talks at my university about how to build a Linux scanner and how to configure it for your own needs. And they did it for free. During my university days, I remember having installfests in which you could just take your laptop and people would help you install Linux. These were people who had a true belief in open source software and were willing to help you for free.

Swapnil Bhartiya: As we were discussing earlier, you don’t have to prove yourself or be in a specific region to get involved. So, talk about the role open source has played in creating a level playing field, giving access to underrepresented minorities, and giving them not only tools but also a voice.

Ricardo Neri: I think it’s similar to what I was saying at the beginning. In the traditional model, you have to go to college, spend four or five years there without working, and have good grades. You need to have certain opportunities in life to be able to do that, to have the luxury of attending college for five years and gaining a degree. But in software, for instance, you don’t need that. All you need is willingness, just the willingness to learn and to contribute. So I think that underrepresented minority groups, statistically, have a lesser chance of attending college and getting a degree.

I have also seen companies realizing that you don’t actually need to be a computer scientist to start writing software. That has opened doors for people from very diverse backgrounds, where you don’t have to follow a certain career or school path to land a job in this industry. You can just start wherever you want.

There are many efforts in the community. The GNOME Foundation has scholarships to help recruit people from underrepresented groups to start contributing and they get mentoring. Because that is an important point. The software is free and anyone can contribute to it. But if you have a mentor, if you have someone that can help you navigate an open source software community that will help you a lot and it will go a long way to get you established in that community. You can start contributing very simple patches. But over time you have that guidance, you can optimize your time and your effort to make the things that will have an impact, and will maybe someday make you a key contributor to the community.

Swapnil Bhartiya: Thank you.

Ricardo Neri: Thank you very much.

Xen on Raspberry Pi 4 adventures

Written by Stefano Stabellini and Roman Shaposhnik

Raspberry Pi (RPi) has been a key enabling device for the Arm community for years, given the low price and widespread adoption. According to the RPi Foundation, over 35 million have been sold, with 44% of these sold into industry. We have always been eager to get the Xen hypervisor running on it, but technical differences between RPi and other Arm platforms made it impractical for the longest time. Specifically, a non-standard interrupt controller without virtualization support.

Then the Raspberry Pi 4 came along, together with a regular GIC-400 interrupt controller that Xen supports out of the box. Finally, we could run Xen on an RPi device. Soon Roman Shaposhnik of Project EVE and a few other community members started asking about it on the xen-devel mailing list. “It should be easy,” we answered. “It might even work out of the box,” we wrote in our reply. We were utterly oblivious that we were about to embark on an adventure deep in the belly of the Xen memory allocator and Linux address translation layers.

The first hurdle was the availability of low memory addresses. RPi4 has devices that can only access the first 1GB of RAM. The amount of memory below 1GB in Dom0 was not enough. Julien Grall solved this problem with a simple one-line fix to increase the memory allocation below 1GB for Dom0 on RPi4. The patch is now present in Xen 4.14.

“This lower-than-1GB limitation is uncommon, but now that it is fixed, it is just going to work.” We were wrong again. The Xen subsystem in Linux uses virt_to_phys to convert virtual addresses to physical addresses, which works for most virtual addresses but not all. It turns out that the RPi4 Linux kernel would sometimes pass virtual addresses that cannot be translated to physical addresses using virt_to_phys, and doing so would result in serious errors. The fix was to use a different address translation function when appropriate. The patch is now present in Linux’s master branch.

We felt confident that we finally reached the end of the line. “Memory allocations – check. Memory translations — check. We are good to go!” No, not yet. It turns out that the most significant issue was yet to be discovered. The Linux kernel has always had the concept of physical addresses and DMA addresses, where DMA addresses are used to program devices and could be different from physical addresses. In practice, none of the x86, ARM, and ARM64 platforms where Xen could run had DMA addresses different from physical addresses. The Xen subsystem in Linux is exploiting the DMA/physical address duality for its own address translations. It uses it to convert physical addresses, as seen by the guest, to physical addresses, as seen by Xen.

To our surprise and astonishment, the Raspberry Pi 4 was the very first platform to have physical addresses different from DMA addresses, causing the Xen subsystem in Linux to break. It wasn’t easy to narrow down the issue. Once we understood the problem, a dozen patches later, we had full support for handling DMA/physical address conversions in Linux. The Linux patches are in master and will be available in Linux 5.9.

Solving the address translation issue was the end of our fun hacking adventure. With the Xen and Linux patches applied, Xen and Dom0 work flawlessly. Once Linux 5.9 is out, we will have Xen working on RPi4 out of the box.

We will show you how to run Xen on RPi4, the real Xen hacker way, and as part of a downstream distribution for a much easier end-user experience.

Hacking Xen on Raspberry Pi 4

If you intend to hack on Xen on ARM and would like to use the RPi4 to do it, here is what you need to do to get Xen up and running using UBoot and TFTP. I like to use TFTP because it makes it extremely fast to update any binary during development.  See this tutorial on how to set up and configure a TFTP server. You also need a UART connection to get early output from Xen and Linux; please refer to this article.
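
If you do not already have a TFTP server, one common way to stand one up on a Debian or Ubuntu machine is tftpd-hpa. This is only a hedged sketch; package names, paths, and defaults may differ on your distribution:

    # Install the TFTP server and give it a directory for the boot binaries.
    sudo apt install tftpd-hpa
    sudo mkdir -p /srv/tftp
    # If needed, point TFTP_DIRECTORY in /etc/default/tftpd-hpa at /srv/tftp,
    # then restart the service so it picks up the change.
    sudo systemctl restart tftpd-hpa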

Use the rpi-imager to format an SD card with the regular default Raspberry Pi OS. Mount the first SD card partition and edit config.txt. Make sure to add the following:

    kernel=u-boot.bin

    enable_uart=1

    arm_64bit=1

Download a suitable UBoot binary for RPi4 (u-boot.bin) from any distro, for instance OpenSUSE. Download the JeOS image, then open it and save u-boot.bin:

    xz -d openSUSE-Tumbleweed-ARM-JeOS-raspberrypi4.aarch64.raw.xz

    kpartx -a ./openSUSE-Tumbleweed-ARM-JeOS-raspberrypi4.aarch64.raw

    mount /dev/mapper/loop0p1 /mnt

    cp /mnt/u-boot.bin /tmp

Place u-boot.bin in the first SD card partition together with config.txt. Next time the system boots, you will get a UBoot prompt that allows you to load Xen, the Linux kernel for Dom0, the Dom0 rootfs, and the device tree from a TFTP server over the network. I automated the loading steps by placing a UBoot boot.scr script on the SD card:

    setenv serverip 192.168.0.1

    setenv ipaddr 192.168.0.2

    tftpb 0xC00000 boot2.scr

    source 0xC00000

Where:

- serverip is the IP of your TFTP server

- ipaddr is the IP of the RPi4

Use mkimage to generate boot.scr and place it next to config.txt and u-boot.bin:

    mkimage -T script -A arm64 -C none -a 0x2400000 -e 0x2400000 -d boot.source boot.scr

Where:

- boot.source is the input

- boot.scr is the output

UBoot will automatically execute the provided boot.scr, which sets up the network and fetches a second script (boot2.scr) from the TFTP server. boot2.scr should come with all the instructions to load Xen and the other required binaries. You can generate boot2.scr using ImageBuilder.

Make sure to use Xen 4.14 or later. The Linux kernel should be master (or 5.9 when it is out, 5.4-rc4 works.) The Linux ARM64 default config works fine as kernel config. Any 64-bit rootfs should work for Dom0. Use the device tree that comes with upstream Linux for RPi4 (arch/arm64/boot/dts/broadcom/bcm2711-rpi-4-b.dtb). RPi4 has two UARTs; the default is bcm2835-aux-uart at address 0x7e215040. It is specified as “serial1” in the device tree instead of serial0. You can tell Xen to use serial1 by specifying on the Xen command line:

    console=dtuart dtuart=serial1 sync_console

 The Xen command line is provided by the boot2.scr script generated by ImageBuilder as “xen,xen-bootargs“. After editing boot2.source you can regenerate boot2.scr with mkimage:

    mkimage -A arm64 -T script -C none -a 0xC00000 -e 0xC00000 -d boot2.source boot2.scr
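
For orientation, here is a rough, hypothetical sketch of the kind of boot2.source you might end up with; the load addresses, file names, and sizes are placeholders rather than values produced by ImageBuilder, and the Dom0 module node follows the /chosen layout described in Xen’s device tree booting documentation:

    # Fetch Xen, the Dom0 kernel, and the device tree from the TFTP server.
    tftpb 0x00600000 xen
    tftpb 0x01000000 Image
    tftpb 0x03000000 bcm2711-rpi-4-b.dtb

    # Describe Dom0 to Xen by editing the device tree in memory.
    fdt addr 0x03000000
    fdt resize 1024
    fdt set /chosen xen,xen-bootargs "console=dtuart dtuart=serial1 sync_console"
    fdt mknode /chosen dom0
    fdt set /chosen/dom0 compatible "multiboot,kernel" "multiboot,module"
    # reg is <load-address size>; the cell layout must match the device tree's
    # #address-cells and #size-cells.
    fdt set /chosen/dom0 reg <0x01000000 0x02000000>
    fdt set /chosen xen,dom0-bootargs "console=hvc0 root=/dev/ram0"

    # Boot Xen with the modified device tree.
    booti 0x00600000 - 0x03000000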

Xen on Raspberry Pi 4: an easy button

Getting your hands dirty by building and booting Xen on Raspberry Pi 4 from scratch can be not only deeply satisfying but can also give you a lot of insight into how everything fits together on ARM. Sometimes, however, you just want to get a quick taste for what it would feel to have Xen on this board. This is typically not a problem for Xen, since pretty much every Linux distribution provides Xen packages and having a fully functional Xen running on your system is a mere “apt” or “zypper” invocation away. However, given that Raspberry Pi 4 support is only a few months old, the integration work hasn’t been done yet. The only operating system with fully integrated and tested support for Xen on Raspberry Pi 4 is LF Edge’s Project EVE.

Project EVE is a secure-by-design operating system that supports running Edge Containers on compute devices deployed in the field. These devices can be IoT gateways, Industrial PCs, or general-purpose ruggedized computers. All applications running on EVE are represented as Edge Containers and are subject to container orchestration policies driven by k3s. Edge containers themselves can encapsulate Virtual Machines, Containers, or Unikernels. 

You can find more about EVE on the project’s website at http://projecteve.dev and its GitHub repo https://github.com/lf-edge/eve/blob/master/docs/README.md. The latest instructions for creating a bootable media for Raspberry Pi 4 are also available at: 

https://github.com/lf-edge/eve/blob/master/docs/README.md

Because EVE publishes fully baked downloadable binaries, using it to give Xen on Raspberry Pi 4 a try is as simple as:

$ docker pull lfedge/eve:5.9.0-rpi-xen-arm64 # you can pick a different 5.x.y release if you like

$ docker run lfedge/eve:5.9.0-rpi-xen-arm64 live > live.raw

This is followed by flashing the resulting live.raw binary onto an SD card using your favorite tool. 
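
If you do not have a favorite tool handy, plain dd works. In this sketch, /dev/sdX is a placeholder for your SD card device; double-check the device name first, since dd will overwrite whatever it points at:

$ sudo dd if=live.raw of=/dev/sdX bs=4M conv=fsync status=progress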

Once those steps are done, you can insert the card into your Raspberry Pi 4, connect the keyboard and the monitor and enjoy a minimalistic Linux distribution (based on Alpine Linux and Linuxkit) that is Project EVE running as Dom0 under Xen.

As far as Linux distributions go, EVE presents a somewhat novel design for an operating system, but at the same time, it is heavily inspired by ideas from Qubes OS, ChromeOS, Core OS, and Smart OS. If you want to take it beyond simple console tasks and explore how to run user domains on it, we recommend heading over to EVE’s sister project Eden: https://github.com/lf-edge/eden#raspberry-pi-4-support and following a short tutorial over there.

If anything goes wrong, you can always find an active community of EVE and Eden users on LF Edge’s Slack channels starting with #eve over at http://lfedge.slack.com/ — we’d love to hear your feedback.

In the meantime – happy hacking!

By the Time You Finish Reading This, Your Tech Job Post May Be Outdated

As the rate of technological advancement and change continues to accelerate, new tools are being developed and released at such a swift pace that no individual tech professional can stay on top of them all. This leads to talent gaps that can delay digital transformation. For example, a recent study found that “only 23% of organizations believe they have the talent required to successfully complete their cloud native journey.”

But how do you outline skill and experience requirements for technology that is evolving so rapidly?

How open-source software transformed the business world (ZDNet)

Steven J. Vaughan-Nichols writes at ZDNet:

Eric S. Raymond, one of open-source’s founders, said in his seminal work, The Cathedral and the Bazaar,  “Every good work of [open-source] software starts by scratching a developer’s personal itch.” There’s a lot of truth to that. Vital programs such as the Apache web server, MySQL, and Linux began that way and numerous smaller programs did too. But it’s not likely many people had a personal itch to create giant vertical programs such as telecommunications’ OpenDaylight and OPNFV or Automotive Grade Linux (AGL)’s Unified Code Base. Today, vertical companies focused on narrow interests also embrace open-source methods and software with open arms.

Read more at ZDNet

Software-defined vertical industries: transformation through open source

“When I say that innovation is being democratized, I mean that users of products and services-both firms and individual consumers-are increasingly able to innovate for themselves. User-centered innovation processes offer great advantages over the manufacturer-centric innovation development systems that have been the mainstay of commerce for hundreds of years. Users that innovate can develop exactly what they want, rather than relying on manufacturers to act as their (often very imperfect) agents.”  — Eric von Hippel, Democratizing Innovation

Overview

What do some of the world’s largest, most regulated, complex, centuries-old industries, such as banking, telecommunications, and energy, have in common with fast-moving, bleeding-edge creative industries such as motion pictures?

They’re all dependent on open source software. 

That would be a great answer and correct, but it doesn’t tell the whole story. A complete answer is these industries not only depend on open source, but they’re building open source into the fabric of their R&D and development models. They are all dependent on the speed of innovation that collaborating in open source enables. 

As a recent McKinsey & Co. report described, the “biggest differentiator” for top-quartile companies in an industry vertical was “open source adoption,” where they shifted from users to contributors. The report’s data shows that open source adoption by top-quartile companies has three times the impact on innovation compared with companies in other quartiles.

Over the last 20 years, the Linux Foundation has expanded from a single project, the Linux kernel, to hundreds of distinct project communities. The “foundation-as-a-service” model developed by the Linux Foundation supports communities collaborating on open source across key horizontal technology domains, such as cloud, security, blockchain, and the web.

However, many of these project communities align across vertical industry groupings, such as automotive, motion pictures, finance, telecommunications, energy, and public health initiatives. They may have started as individual efforts looking for a neutral home at the Linux Foundation. Still, over time these communities found it useful to collaborate as the organizations supporting the projects expanded their collaboration to other areas.

This paper will delve into the major vertical industry initiatives served by the Linux Foundation. We will highlight the most notable open source projects and why we believe these key industry verticals, some over 100 years old, have transformed themselves using open source software.


Free Intro to Linux Course Surpasses One Million Enrollments

The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced its Introduction to Linux training course on the edX platform, currently in its sixth edition, has surpassed one million enrollments. The course helps students develop a good working knowledge of Linux using both the graphical interface and command-line across the major Linux distribution families. No prior knowledge or experience is required, making the course a popular first step for individuals interested in pursuing a career in IT.

Challenges and Trends of Cloud Infrastructure: A Q&A with Ying Xiong, Cloud Lab, Futurewei Technologies, Inc.

Ahead of Open Networking & Edge Summit 2020 (being held virtually next week, September 28-30), Linux.com hosted a Q&A with Ying Xiong of Futurewei, a Diamond Sponsor of ONES 2020, who discussed the challenges and trends of cloud infrastructure in the enterprise digital transformation journey and for new types of workloads such as AI, 5G, and IoT apps.

We hope you enjoy the interview! If you are interested in attending Open Networking & Edge Summit 2020, where you can learn more about the future of Networking, Edge and Cloud, click here to register for just US$50: https://bit.ly/32F8LXX. View the full schedule here: https://bit.ly/33Ct4Vh

Linux.com: Tell us a bit about your open source journey in Networking, Edge, and Cloud, and specifically help people understand how Futurewei operates independently from Huawei.

Ying Xiong: At Futurewei cloud lab, we are actively involved in open source communities and contribute to many open source projects including Kubernetes + KubeEdge, Akraino Edge Stack, Cloud Foundry, and OpenStack. We have attended CNCF conferences, Open Source Summit, Embedded Linux conferences, and Cloud Foundry Summit almost every year since 2015 and delivered keynotes and session talks at many of these conferences or summits. Individually, some of us served as board members in LF, CNCF, and LF Edge as well as OpenStack foundations. Currently, Futurewei is an independent member of LF, CNCF, and LF Edge.

Linux.com: Digital Transformation and Cloud Infrastructure are two important topics being discussed in the community. Please tell us some key challenges you see in these.

Ying Xiong: In today’s digital transformation journey, cloud infrastructure and services have been established as a core part of enterprise IT. More and more enterprises are leveraging cloud computing technologies to accelerate their business innovation by migrating their applications and data to a public cloud, building their own private cloud, or using a hybrid cloud model. The rise of emerging 5G, AI, edge computing, and IoT applications offers cloud computing further exciting opportunities, as well as challenges, in meeting today’s and tomorrow’s enterprise digitization needs. The following is a list of challenges and trends we’ve observed facing enterprises and cloud technologies themselves:

  • As more and more applications move to the cloud, there is an increasing demand for cloud infrastructure to manage an ever-increasing pool of compute nodes at scale and to provision and deploy ever-increasing workloads with consistent speed.

This challenge has been driving the development and optimization of distributed cluster management platforms, new cloud networking solutions, and lightweight virtualization technologies such as containers and serverless. Current and future compute cluster management platforms will be continuously challenged to manage 100K+ compute nodes in a cluster and to provision and start up hundreds or even thousands of application instances within a minute. There is very limited support for extremely scalable networking in the virtualized cloud environment, primarily because contemporary cloud networking virtualization solutions are still cobbled together on top of age-old static networking designs. Such solutions are incapable of provisioning and managing 10M+ dynamic network endpoints in the cloud.

  • Both cloud providers and enterprises have been asking for a “unified” resource management and orchestration capability, a single pane of glass that supports managing heterogeneous resource types (bare metal, VMs, containers, serverless, unikernels, etc.) seamlessly.

Modern cloud-native applications are mostly designed for scale-out architectures that are better suited to containerized environments. A typical enterprise cloud environment is not only about containers, however, as containers may not be appropriate for every enterprise workload and use case. Most enterprises still run a large number of legacy applications on bare metal and in traditional VM environments. As a result, future cloud infrastructure needs to be a “unified” platform in order to meet this challenge and, at the same time, reduce management cost for both cloud providers and enterprise customers.

  • With the convergence of traditional cloud computing and edge computing, and the emergence of new types of workloads such as 5G, AI, and IoT applications, customers and cloud infrastructure platforms are being challenged to manage not only data center resources but also edge compute nodes, in order to support new types of distributed applications that span data centers and edge sites.

Current open source cloud platforms mostly treat edge and AI as an afterthought. A new open source cloud platform needs to be architected with edge as part of the overall design from day one. For example, AI model training can be done in the cloud, while AI inferencing can be done at the edge, close to the billions of IoT devices and sensors running on 5G networks. Cloud-edge computing, combined with the optimized latency of 5G core processing, can reduce round-trip time by up to two orders of magnitude in situations where there is tight control over all parts of the communication chain. This enables a brand-new class of intelligent cloud applications in areas such as industrial robot and drone automation, V2X, and AR/VR infotainment, along with the associated innovative business models. A small placement sketch following this list of challenges illustrates this kind of latency-driven cloud-versus-edge decision.

  • Hybrid cloud and multi-cloud have become cornerstones of enterprise cloud strategy, and cross-cloud application portability has become a requirement for many companies. Open APIs and compatibility with the industry cloud ecosystem are challenges for the next generation of cloud infrastructure technology.
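To make the latency reasoning above concrete, here is a minimal, hypothetical sketch of a scheduler choosing between edge sites and a central cloud region based on a per-workload round-trip-time budget. It is not taken from any Futurewei or Centaurus code; all names and numbers are illustrative assumptions.

```go
package main

import "fmt"

// Site is a hypothetical candidate location for running a workload.
type Site struct {
	Name     string
	RTTms    float64 // measured round-trip time to the end device, in milliseconds
	Capacity int     // free compute slots at the site
}

// pickSite returns the first candidate whose round-trip time fits the
// workload's latency budget and which still has spare capacity.
// Candidates are assumed to be ordered from nearest (edge) to farthest (cloud).
func pickSite(candidates []Site, budgetMs float64) (Site, bool) {
	for _, s := range candidates {
		if s.RTTms <= budgetMs && s.Capacity > 0 {
			return s, true
		}
	}
	return Site{}, false
}

func main() {
	// Illustrative numbers only: an on-premises edge site, a metro edge, and a
	// central cloud region. A ~2 ms edge path versus a ~95 ms wide-area path is
	// roughly the "two orders of magnitude" difference mentioned above.
	candidates := []Site{
		{Name: "factory-edge", RTTms: 2, Capacity: 4},
		{Name: "metro-edge", RTTms: 12, Capacity: 32},
		{Name: "cloud-region", RTTms: 95, Capacity: 10000},
	}

	// A robotics control loop with a 10 ms budget lands on the edge;
	// a batch analytics job with a 500 ms budget can go to the cloud region.
	for _, budget := range []float64{10, 500} {
		if site, ok := pickSite(candidates, budget); ok {
			fmt.Printf("budget %.0f ms -> place on %s (RTT %.0f ms)\n", budget, site.Name, site.RTTms)
		}
	}
}
```

A real placement engine would also weigh cost, data locality, and regulatory constraints, but the latency budget alone already separates control-loop workloads from batch workloads.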

Linux.com: What are the key technology building blocks you envision to help accelerate the journey of telecom and cloud service providers?

Ying Xiong: Given the challenges and trends I mentioned above, we believe that, as an industry and an open source community, we need to build the next generation of open source, hyper-scale, unified cloud infrastructure: one that works with existing cloud technologies and APIs and that helps enterprises as well as cloud providers meet these continuously growing technology challenges. We believe the following technology building blocks will help accelerate cloud service providers’ journeys, including telecom clouds.

  • Unified infrastructure – Provision and manage cloud resources such as VMs, containers, and bare-metal machines as well as serverless compute units. A single infrastructure platform allows cloud providers to simplify cloud compute and network management and to significantly reduce management cost. It also accelerates the development and management of new cloud services. (A minimal sketch of what such a unified resource descriptor could look like follows this list.)
  • True multi-tenancy and strong isolation – Provide trusted computing to both customers and service providers. This building block, which includes hardware isolation technologies such as Intel SGX, is especially important for the future of cloud computing.
  • Hyper-scale cloud networking – Provide fast, large-scale provisioning and management of virtual networks (such as VPCs and subnets) and network endpoints for cloud applications and services. The cloud network is currently the scalability and performance bottleneck for many cloud providers, and it is one of the basic and critical building blocks for service providers that need to provision millions of virtual networks within a region.
  • Distributed cloud-edge infrastructure – Extend traditional cloud computing to the edge, providing the capability to provision and manage compute resources, network resources, and workloads at edge nodes that are closer to customers and their data. We sometimes call this the distributed cloud; it supports new types of distributed applications such as AI, 5G, and IoT apps.
  • Intelligent cloud infrastructure – We believe future cloud technologies will increasingly build intelligence into the infrastructure to better serve and manage new types of applications while increasing resource utilization for operators. For example, intelligent scheduling and placement of workloads between cloud and edge, to deliver a better user experience with extremely low latency, is increasingly important in building new cloud infrastructure.
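As a rough illustration of the “unified infrastructure” building block above, here is a minimal Go sketch of a single workload descriptor and provisioning interface spanning VMs, containers, serverless functions, and bare metal. None of the type or field names come from Centaurus or any other real project; they are assumptions made purely for illustration.

```go
package main

import "fmt"

// ResourceKind enumerates the heterogeneous compute types a unified
// control plane would manage behind a single API.
type ResourceKind string

const (
	KindVM         ResourceKind = "VirtualMachine"
	KindContainer  ResourceKind = "Container"
	KindServerless ResourceKind = "Function"
	KindBareMetal  ResourceKind = "BareMetal"
)

// WorkloadSpec is a hypothetical unified descriptor: the tenant, the kind of
// resource, the artifact to run, and the requested capacity use the same
// fields regardless of the underlying runtime.
type WorkloadSpec struct {
	Tenant   string
	Name     string
	Kind     ResourceKind
	Artifact string // VM image, container image, function package, or OS image
	CPUs     int
	MemoryMB int
}

// Provisioner is the single-pane-of-glass interface; each runtime backend
// (hypervisor, container runtime, FaaS engine, bare-metal manager) would
// implement it.
type Provisioner interface {
	Provision(spec WorkloadSpec) error
}

// logProvisioner is a stand-in backend that just prints what it would do.
type logProvisioner struct{}

func (logProvisioner) Provision(spec WorkloadSpec) error {
	fmt.Printf("tenant=%s kind=%s name=%s artifact=%s cpu=%d mem=%dMB\n",
		spec.Tenant, spec.Kind, spec.Name, spec.Artifact, spec.CPUs, spec.MemoryMB)
	return nil
}

func main() {
	var p Provisioner = logProvisioner{}
	// One API call shape for very different runtimes.
	_ = p.Provision(WorkloadSpec{Tenant: "acme", Name: "db-vm", Kind: KindVM,
		Artifact: "ubuntu-22.04.qcow2", CPUs: 8, MemoryMB: 32768})
	_ = p.Provision(WorkloadSpec{Tenant: "acme", Name: "web", Kind: KindContainer,
		Artifact: "nginx:1.25", CPUs: 2, MemoryMB: 2048})
	_ = p.Provision(WorkloadSpec{Tenant: "acme", Name: "thumbnailer", Kind: KindServerless,
		Artifact: "thumbs.zip", CPUs: 1, MemoryMB: 256})
}
```

The point of the sketch is that the API shape stays constant while the backend varies, which is what lets one control plane act as a single pane of glass.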

Linux.com: Can you highlight a few open source projects that help resolve some of the challenges you have outlined?

Ying Xiong: The open source cloud, that is, the cloud built with open source technologies such as OpenStack and Kubernetes, has led the way in cloud computing innovation, and we have seen more and more companies leverage these technologies to accelerate their business innovation. At the same time, as we discussed previously, new types of applications and workloads pose new challenges to cloud platforms.

One of our most recent key initiatives is the Centaurus open source project, which aims to address some of the challenges I mentioned earlier. The project is a cloud infrastructure platform that can be used to build public or private clouds. It unifies the orchestration, network provisioning, and management of cloud compute and network resources at a regional scale, and it offers the same API experience for provisioning and managing virtual machines, containers, serverless functions, and other types of cloud resources. Centaurus combines the traditional IaaS and PaaS layers into one infrastructure platform that can simplify cloud management and reduce cloud providers’ management costs.

The Centaurus project currently includes the following two open source projects:

  • Arktos is a compute cluster management system designed for large-scale clouds. It evolved from Kubernetes and addresses key challenges such as scalability, hard multi-tenancy, and a unified runtime, taking cloud-native infrastructure to the next level.
  • Mizar is an open source, high-performance cloud networking solution powered by eXpress Data Path (XDP) and the Geneve protocol for highly scalable clouds. It is a simple and efficient way to create a multi-tenant overlay network of many endpoints with extensible network functions (see the sketch below).
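To give a feel for the kind of overlay encapsulation Mizar builds on, here is a small Go sketch that packs the fixed Geneve header (RFC 8926) carrying a tenant’s virtual network identifier. This is a generic illustration of the protocol format only, not code from Mizar, and the XDP data plane that actually forwards such packets is omitted.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// Geneve (RFC 8926) constants. Geneve runs over UDP destination port 6081 and,
// when carrying an inner Ethernet frame, uses protocol type 0x6558
// (Transparent Ethernet Bridging).
const (
	geneveUDPPort     = 6081
	protoEthernet     = 0x6558
	geneveBaseHdrSize = 8
)

// geneveBaseHeader packs the fixed 8-byte Geneve header for a given 24-bit
// virtual network identifier (VNI), with version 0 and no options.
func geneveBaseHeader(vni uint32) []byte {
	h := make([]byte, geneveBaseHdrSize)
	h[0] = 0                                          // Ver=0, Opt Len=0 (no options)
	h[1] = 0                                          // O=0 (not a control packet), C=0, reserved bits
	binary.BigEndian.PutUint16(h[2:4], protoEthernet) // inner payload is an Ethernet frame
	h[4] = byte(vni >> 16)                            // VNI occupies bytes 4-6
	h[5] = byte(vni >> 8)
	h[6] = byte(vni)
	h[7] = 0 // reserved
	return h
}

func main() {
	// Each tenant's overlay network gets its own VNI; the data plane prepends
	// this header (inside a UDP/IP packet to port 6081) to the inner frame.
	hdr := geneveBaseHeader(0x00ABCD)
	fmt.Printf("geneve header: % x (udp dst port %d)\n", hdr, geneveUDPPort)
}
```

Because the VNI travels in every encapsulated packet, the data plane can keep each tenant’s overlay network isolated while forwarding all tenants’ traffic over the same physical underlay.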

Linux.com: What is Project Centaurus trying to solve? What is the status and where can people find more information?

Ying Xiong: The vision of the Centaurus open source project is to build a unified, large-scale, distributed cloud infrastructure platform that meets the challenges discussed above. With innovations in high-performance cloud networking, a unified runtime environment, and hyper-scale cluster management, Centaurus is designed to meet the infrastructure requirements of new types of cloud workloads such as 5G, AI, edge, and IoT applications. Specifically, the Centaurus project aims to achieve:

  • Unified infrastructure for managing various cloud resources (such as VMs, containers, serverless, bare-metal machines, and others) natively.
  • High-performance cloud network data plane for extremely low latency network traffic forwarding and routing in the cloud.
  • Hyper-scale compute cluster management supporting 50K+ compute nodes in a single cluster and provisioning of 10M+ network endpoints in a region.
  • Native support for the edge cloud: an extension of the cloud that manages compute and network resources at edge sites from the cloud.

We would like to invite the open source community to join us in realizing the vision of the Centaurus project and in building its ecosystem for the benefit of open source communities. You can find project documentation and related collateral (white paper, blogs, etc.) on the Centaurus website at https://www.centauruscloud.io/. The two sub-projects currently under Centaurus, Arktos and Mizar, are already open source and have shipped a few releases.

Linux.com: How is this project complementary to projects under the CNCF, LF Edge, or LF Networking umbrellas?

Ying Xiong: We are aiming to launch Centaurus as an independent project under The Linux Foundation, since it is trying to solve a different set of challenges than other cloud computing projects in the LF. That said, we are still looking at potential options and trying to find the best place to donate and host the Centaurus project, one that can deliver the maximum benefit for the open source communities and the industry.

Technically, as you can see, Centaurus has compute, network, and edge components and focuses on being a complete IaaS+ platform. In contrast, CNCF focuses on container orchestration, LF Edge on edge infrastructure, and LF Networking on network architecture and solutions. However, Centaurus is designed with a cloud-native architecture, and its components are independent projects that can be used on their own alongside other cloud technologies. Conversely, we welcome and expect that components from projects in CNCF, LF Edge, LF Networking, and other open source foundations can be plugged into Centaurus as well.

Linux.com: Anything else you want to add to help grow participation and support? 

Ying Xiong: As a quick recap, Centaurus is an open source “Distributed Cloud Native Infrastructure+” umbrella project for the 5G, AI, and edge era. It currently includes two core open source projects: a compute project (Arktos) and a networking project (Mizar).

With the open source community’s participation and support, the Centaurus platform can offer enterprises the hyper-scale and unified management capabilities that will dramatically change the economics of enterprise IT.

We hope the information we have provided here helps pique community interest. We invite all open source community members to join us in making Centaurus a viable open cloud infrastructure platform for the future of the enterprise IT digitization journey. Centaurus is still at an early stage, and we hope the community will join us to make it a reality. Hosting the Centaurus project under the umbrella of the Linux Foundation, the most popular open source foundation and a neutral home, should garner tremendous interest from the open source community. We look forward to making all of this a great success for the community as a whole.